Creating HDRI environment maps
CASINC
Posts: 76
I'm attempting to create my own HDRI environment maps for Daz using a Theta Z1 360 camera. Long story short, they just aren't coming out at normal HDRI quality like all the other HDRIs I use.
Workflow:
- Take 8 different exposure levels in RAW format.
- Combine them in Lightroom Classic as an HDR.
- Stitch them with the Theta Stitcher.
- Open in Photoshop, edit out the stand and so on.
- As 32-bit, save as Radiance (.hdr).
At this point I apply it to my scene in Daz and it looks like an 8-bit JPG file with no real lighting info.
Any suggestions or help would be great. Thanks!
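For reference, the merge step in the workflow above should produce a linear, 32-bit float image whose values span many stops (often well above 1.0); if the saved Radiance file behaves like an 8-bit JPG, the dynamic range was lost somewhere along the way. A minimal NumPy sketch of that kind of exposure merge (a toy illustration, not the actual Lightroom pipeline; the hat-shaped weighting is one common choice):

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Weighted average of linear images, scaled by exposure time, into radiance."""
    acc = np.zeros_like(images[0], dtype=np.float64)
    wsum = np.zeros_like(acc)
    for img, t in zip(images, exposure_times):
        # trust mid-tones, distrust pixels near the clipped ends of the range
        w = 1.0 - np.abs(img - 0.5) * 2.0
        acc += w * img / t
        wsum += w
    return acc / np.maximum(wsum, 1e-6)

# toy example: the same scene radiance captured at two shutter speeds
radiance = np.array([0.2, 0.5, 0.8])
imgs = [np.clip(radiance * t, 0.0, 1.0) for t in (1.0, 0.5)]
hdr = merge_exposures(imgs, [1.0, 0.5])  # recovers the linear radiance
```

The key property: the output stays linear float radiance, which is what a Radiance (.hdr) file is meant to store.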
Comments
I meant: what is the exposure difference, expressed in f-stops, between each of the 8 bracketed shots you took?
For some more information you can take a look at the FAQ from HDRI Haven
How many stops/EVs do you capture for each HDRI?
How do you measure the dynamic range (EVs)?
How do you reach 24+ EVs?
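As a quick sanity check on those FAQ questions: at a fixed aperture and ISO, the EV span of a shutter-speed bracket is just the log2 ratio of the longest to the shortest exposure. A tiny sketch (the shutter speeds here are hypothetical, not Theta Z1 specs):

```python
import math

def ev_span(t_short, t_long):
    """Stops of dynamic range covered between two shutter speeds
    at a fixed aperture and ISO."""
    return math.log2(t_long / t_short)

# hypothetical bracket from 1/8000 s up to 1 s
span = ev_span(1 / 8000, 1.0)  # roughly 13 EV
```

That is why reaching the 24+ EVs HDRI Haven talks about takes far more than a narrow exposure-compensation range.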
That is shutter speed, not exposure. Looking at the specs, it does exposure compensation from -2.0 to +2.0 EV in 1/3 EV steps. And: "A smartphone is required to change modes or configure manual settings."
The aperture is the exposure setting, and that is what you need to change. You should be able to set it manually in the app, from -2.0 to +2.0 in increments of 1/3. The f/2.1 setting will probably only allow you to go up; then set it to the next setting, which is f/3.5, and go down a 1/3 at a time to -2.0, then up from f/3.5 to +2.0, and then do the same at the next setting, f/4.6. By my calculations that should be 33 images (probably wrong but you get the idea) :)
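The counting scheme described above can be checked with a quick script (taking the post at its word: the first aperture only goes from 0 up to +2.0, and the other two cover the full -2.0 to +2.0 range in 1/3-EV steps):

```python
def third_stop_settings(lo_ev, hi_ev):
    """Number of 1/3-EV compensation settings from lo_ev to hi_ev, inclusive."""
    return round((hi_ev - lo_ev) * 3) + 1

# first aperture: 0 to +2.0 only (7 settings);
# the other two apertures: -2.0 to +2.0 each (13 settings apiece)
total = third_stop_settings(0.0, 2.0) + 2 * third_stop_settings(-2.0, 2.0)
print(total)  # 33
```

So the back-of-the-envelope figure of 33 images does check out under those assumptions.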
Let's step back to the beginning then.
Combining them in Lightroom into an HDR: is that an HDR photo, or an HDRI lighting file? The two are different.
https://vrender.com/what-is-hdri/
https://blog.hdrihaven.com/how-to-create-high-quality-hdri/
Did you read the HDRI Haven article? In it he says that Lightroom changes the images as it combines them, so he stopped using it. I have used the .dng converter once on one of my RAW files and it completely ruined it, so I wouldn't recommend using .dng files :)
Greg Zaal uses a Canon 600D, which is 18 MP; your Theta Z1 is 23 MP, so the images from your camera are actually bigger than his :)
I don't know if this will explain his method, but he goes through the camera and setup.
https://blog.hdrihaven.com/camera-gear-for-hdris/
I think the problem may be what to expect from the resolutions of the two cameras.
From the Theta homepage: Records natural 360° images using approx. 23 MP (6720 x 3360, 7K) still image shooting and highly accurate image stitching.
The sensor is a 1.0-inch back-illuminated CMOS image sensor.
So the final stitched equirectangular image is roughly 23 MP, and that is what the 1-inch sensor can deliver.
HDRI Haven, on the other hand, uses a DSLR with an APS-C sensor and a rectilinear lens (maybe a 10.5 mm) that has a vertical field of view of 74°. This means that to shoot a full sphere, a lot of images must be shot and then stitched. (There are calculators available online for this: http://www.hdrlabs.com/tools/panocalc.html)
From the blog:
This means the final equirectangular image has a maximum resolution of 143MP (16950 x 8475, 16K)
This is a much larger resolution!
Also on HDRI Haven in the FAQ you find:
Why don't you use a 360 camera?
There are two main reasons: Resolution, and highlight clipping...
360 cameras generally have a very low resolution, simply due to the nature of their size and inherent quality of lenses that size. The highest resolution dual camera 360 cam at the time of writing this (October 2019) can produce a 7k panorama, which is a quarter of the total pixels needed at a bare minimum for HDRIs in my opinion (which is 14k). There is one 360 cam I know of that uses a few dozen wide angle lenses and can produce 16k HDRIs, but it's sold as a service charging you per panorama and doesn't seem to give you much control over the output, and still has the clipping issue...
Preventing highlight clipping is absolutely paramount when making high quality HDRIs that accurately capture light from the real world. 360 cameras use multiple fisheye lenses that have a bulbous front element, which makes it impossible to fit an ND filter on the lenses, which in turn makes it impossible to avoid clipping.
So, 360 cams might be a useful tool for your own personal use, or for TDs on a film set that need to be fast and are willing to sacrifice on quality, but as a tool for producing the highest quality HDRIs for public consumption, they're not a good option at all.