HDR cameras hiding in plain sight, and how to get EXRs from them.

Let’s create an HDR image and then do some HDR tonemapping. First we’ll take an overexposed shot:

And then we’ll do an underexposed shot. This shot looks about 3.8 stops lower.

And then we’ll merge them into an EXR file and do some light tonemapping.

That, my friends, is an HDR image. It’s not a good image. It’s your typical “I just got a new camera so I walked outside and took a shot of the first thing I could see” image. And it doesn’t help that I did absolutely no other adjustments (like white balance). But, it works.

The typical way to get an HDR image is a huge pain. You have to get your tripod/remote shutter out and take a bracketed set of exposures (usually -2, -1, 0, 1, 2). And you have to make sure that the camera doesn’t shake (hence the tripod and remote shutter). And you have to make sure that nothing is moving, so you can forget about action shots. All of you HDR photographers out there know the pain of which I speak.

For the last several years I’ve been hoping that some day we would see a camera that would let you take a bright and dark image on the same sensor, and let you merge them into an EXR. And I’m psyched because I finally got one just a few days ago. Of course I felt a bit silly because this camera is the Fuji S5 and is now discontinued. Still, it’s a great camera. It shares the same body as the Nikon D200 and is compatible with the same lenses (with a few minor exceptions). You should get one too.

The Fuji S5 came out in 2007 with a CCD sensor that has large and small photosites. The small photosites record the scene a few stops lower than the large ones. If you can merge those two together, you get an HDR image. Dpreview.com has a better explanation: http://www.dpreview.com/reviews/fujifilms5pro/.
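
The per-pixel idea is simple enough to sketch in a few lines of Python. This is a toy model of my own, not Fuji's actual calibration: the 3.8-stop gap matches what I measure later in this post, and the linear response and 0.95 saturation threshold are simplifying assumptions.

```python
# Toy model of the S5's dual-photosite merge. The 3.8-stop gap, the
# linear response, and the 0.95 saturation threshold are illustrative
# assumptions, not Fuji's actual sensor calibration.

GAP_STOPS = 3.8
GAIN = 2.0 ** GAP_STOPS     # linear sensitivity ratio between photosites
SATURATION = 0.95           # assumed clipping point of the large photosite

def merge_photosites(large, small):
    """Combine one pixel's two linear readings (each 0..1) into one HDR value."""
    if large < SATURATION:
        return large          # large photosite still in range: use it directly
    return small * GAIN       # large photosite clipped: rescale the small one

print(merge_photosites(0.50, 0.036))  # in range: passes through unchanged
print(merge_photosites(1.00, 0.30))   # clipped: recovered value above 1.0
```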

So that’s pretty cool. The software that comes with the S5 (as well as other processors like Photoshop’s RAW plugin) lets you push the range quite a bit. And the camera’s processor can do some creative things to fit more highlight data into your jpeg images. Of course, despite all the marketing speak, all you’re really doing is taking the two images and blending them. That’s it.

You would think that HDR photographers would be all over this. It’s relatively straightforward to take an underexposed and overexposed image and blend them into an HDR image. So I would expect to see info on doing that with a single Fuji RAW file but I’ve looked everywhere on the internet and I can’t find anyone who has done that. Of course, if you find a link to someone who did I’m happy to give them credit for doing it first. The closest thing that I found (and was the inspiration for this approach) was this forum post: http://www.luminous-landscape.com/forum/index.php?topic=19630.

Instead, most people are content with using the HDR tonemapping software that comes with the camera. You can tweak the exposure and you can add a shoulder to your tone curve to recover a little bit of your highlights. Yawn. The S5 with default software does give you more range though. It has developed a bit of a niche with wedding photographers. I can see that range being useful when you are shooting a woman in a white dress standing next to a man in a black tux in direct sunlight.

Still, I want to get a real HDR file. So here’s how you can do this in Photoshop CS5. Feel free to test it with my raw file: test-raw.zip. And if you would rather just look at the finished file you can have that too: DSCF0009_final.exr.

Btw, you can use this same technique for more cameras than just the S5. The S3 should work. And the newer cameras by Fuji such as the X100, HS10, HS20, and upcoming X10 should work too (among others). Those cameras have other issues which I’ll discuss at the end. But the same process should work for them.

Process: Get an EXR file from Fuji RAW

  1. The first thing you have to do is take an HDR image. With your Fuji S5, go to the menu setting for D-Range and set it to 400%. Also, go to the Quality setting and set that to RAW. For the other cameras the menus vary, but generally you need to be in EXR mode with the Dynamic Range sub-mode selected.
  2. Take a picture!
  3. Get a recent copy of dcraw. For those of you who don’t know, dcraw stands for Dave Coffin RAW. Dave Coffin has done all of us an incredible favor by reverse-engineering all the major RAW formats and making the source code available to everybody. If it wasn’t for him then most programs would not be able to read RAW files. One catch is that you have to either compile it yourself or find a copy online. I got mine from Alex Rietschin’s blog. Any source should work though.
  4. Time to take the RAW file and split it into the light image and the dark image. My image is named DSCF0009.RAF. So my command line will look like:

    dcraw64.exe -W -q 3 -s all -6 -g 2.4 12.92 DSCF0009.RAF

    Here’s what those flags mean:

    • -W: Don’t automatically brighten the image, so that we can see the full range of what the camera is capturing.
    • -q 3: Use the high-quality interpolation (AHD).
    • -s all: This one is important. It means that we should extract both the bright and dark images stored in the RAW file.
    • -6: Store as a 16-bit file instead of 8.
    • -g 2.4 12.92: Use the curve for sRGB. If you know what this means, feel free to change it to something else.
  5. Time to open the results in Photoshop. dcraw will have created two files. In my case I have a DSCF0009_0.ppm and a DSCF0009_1.ppm. The _0 file is the main image and the _1 file is the underexposed image.
  6. The basic strategy from here on is that we will put the overexposed shot on top of the underexposed one. But instead of doing some kind of artistic frame merging we will try to recreate the “true” HDR image.
  7. First, open up the _0.ppm image. Here’s mine again:

    We’re going to create the blend mask here. Looking at this image, if a pixel is overexposed or close to overexposed then we want the pixel from the other frame (transparent). But if a pixel is in range then we should keep it (opaque). We can do this in three steps.

    • Desaturate the image. Image->Adjustments->Desaturate.
    • Levels. Image->Adjustments->Levels. I’ve found that setting the left level to 180 and the right one to 200 works pretty well. What we’re saying here is that any pixel less than 180 we will keep, pixels greater than 200 we will use from the other image, and anything in between will be semitransparent. Also, don’t go too far above 200. If even one of the three channels is clamping at 255 then it should be transparent. Ideally, in the previous step we should use the max of the three channels but a desaturation is fine for now.
    • Invert. Image->Adjustments->Invert. If you paid attention, you would notice that our levels operation was backwards. The invert fixes it.

    After that little diversion, we should have a layer mask that looks like this.
    Save it as _trans.ppm.

  8. At this point we have three files. For this next step we will merge them together.
    • XXXX_0.ppm: Our overexposed image.
    • XXXX_1.ppm: Our underexposed image.
    • XXXX_trans.ppm: Our mask.

    Now that we have all of these it’s time to merge them.

    • Open all three. For each image, assign the sRGB profile to it. Go to Edit->Assign Profile, and select sRGB IEC61966-2.1 for each one.
    • Now switch to the _1.ppm image. This image will eventually become our final HDR image. Make it 32-bit. Image->Mode->32 Bits/Channel.
    • Paste the _0.ppm file on top of the _1.ppm file. You should get a warning box that looks like this:

      Just hit OK and move on.

    • Our overexposed shot should be called Layer 1. Make a layer mask for it.
    • Time to copy the _trans.ppm file into our layer mask for Layer 1. Go to the Channels panel, select Layer 1 Mask, make it visible (click the box on the left), and make RGB invisible.
    • Paste _trans.ppm onto Layer 1 Mask.
    • Reselect the RGB channels to make them visible, and turn off visibility of the mask.
    • Quick status check. At this point, you should have a horrible-looking image like this one.

      Also, your layers should look like this.

    • We will want to bias the exposure of the overexposed image to make the resulting image linear. To make it easier to see the luminance difference between them we should desaturate them. Select the background layer, alt-click the half-moon icon, and create a Hue/Saturation layer. You will see this little window pop up.

      Make sure you select Use Previous Layer to Create Clipping Mask. Once you create the Hue/Saturation layer, set the Saturation to 0. Don’t worry…we’ll put it back later.

    • Repeat that same step for the Layer 1 mask. Create the Hue/Saturation layer, set the clipping mask, and set Saturation to 0.
    • Time to create the exposure layer. Select the Hue/Saturation layer, alt-click the half-moon icon, select Exposure, and create the clipping mask. Don’t move any sliders yet.
    • Your image should look like this. It’s the previous image, just desaturated.

      Also, here’s what your layers should look like. Note the little arrows.

    • Tweak that exposure slider until you see a clean blend between the underexposed and overexposed layers. For me, that happens at -3.8.
    • Disable the two Hue/Saturation layers.
    • Done! Just flatten the image and save as .EXR, .HDR, or whatever format makes you the happiest. Then tonemap it however you want. Here is a quick example using the Photoshop’s Local Adaptation algorithm.
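
For what it’s worth, the whole Photoshop recipe boils down to a little bit of math. Here’s a sketch of the same mask-and-blend in Python with NumPy. The tiny two-pixel synthetic image is my stand-in for real data, and the 2.2 gamma is a rough stand-in for the sRGB curve we asked dcraw for; the 180/200 levels and the -3.8 stop offset are the same numbers used in the steps above.

```python
import numpy as np

def build_mask(over_srgb8):
    """Desaturate, Levels 180..200, then Invert (values on the 0..255 scale)."""
    gray = over_srgb8.mean(axis=-1)                          # desaturate
    ramp = np.clip((gray - 180.0) / 20.0, 0.0, 1.0)          # levels 180..200
    return 1.0 - ramp                                        # invert: 1 = keep _0

def merge_hdr(over_lin, under_lin, mask, stops=-3.8):
    """Overexposed frame pulled down 3.8 stops where valid, dark frame elsewhere."""
    m = mask[..., None]
    return m * over_lin * (2.0 ** stops) + (1.0 - m) * under_lin

# Synthetic two-pixel frame: one in-range pixel, one blown pixel.
over8 = np.array([[[100, 100, 100], [255, 255, 255]]], dtype=np.float64)
over_lin = (over8 / 255.0) ** 2.2             # rough linearization (not exact sRGB)
under_lin = over_lin * (2.0 ** -3.8)          # pretend dark frame, 3.8 stops lower
mask = build_mask(over8)
hdr = merge_hdr(over_lin, under_lin, mask)
print(mask)   # 1 where we keep the bright frame, 0 where we fall back to the dark one
```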

Changes You Should Make:
That’s the basic approach. Now, when you are actually going through this process you will want to do a few things differently.

  1. You should use the -w flag (lower-case) during the dcraw step. In the steps above I used the -W flag (upper-case) which uses the maximum range possible because I was trying to demonstrate the amount of range in a raw file. But you will probably care about having an image that actually looks good so you should use -w.
  2. If you are taking outdoor shots you might run into the “pink of death” situation. If you search around the web you might see S5 owners complaining about their highlights turning pink or magenta when they push the range. This happens because when the _1 image overexposes it goes to pink. You will notice this in daylight lighting but you will rarely see it in tungsten lighting. When I was taking the test shot above, I had another shot where the _1 file looked like this:

    Yeah. To solve this problem just do a levels operation on the _1 file and the pink should go away like so:

    Of course, that means you will sacrifice some range but it’s better than going pink. For the pink, we start seeing it around 190 in the green channel (on a scale of 0 to 255) in sRGB space. That’s about 0.515 in linear space (on a scale of 0 to 1.0). Or in other words, about one stop.
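
You can check those numbers yourself with the standard sRGB transfer function (the function below is the textbook sRGB-to-linear conversion, nothing Fuji-specific):

```python
import math

def srgb_to_linear(v):
    """Standard sRGB decoding, v in 0..1."""
    return v / 12.92 if v <= 0.04045 else ((v + 0.055) / 1.055) ** 2.4

lin = srgb_to_linear(190 / 255)
print(round(lin, 3))                  # 0.515 in linear space
print(round(math.log2(1 / lin), 2))   # 0.96, i.e. about one stop sacrificed
```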

Other Cameras:
Fuji has kept the idea of the HDR sensor alive in its new models. It calls this “EXR” mode (for EXtended Range). I’ve played around with them a bit and they work pretty well. From what I’ve seen there is one huge catch though: To shoot in EXR mode you have to be in full Auto. I was looking at the HS20 and they have a dial of shooting modes such as Manual, Aperture Priority, Panorama, EXR, etc. So to shoot in EXR mode, you have to have ISO, Exposure, and Aperture all in full Auto. ARG! Why should shooting in EXR mode force me to shoot with auto Aperture? Lame. Of course, if I’m wrong, please tell me. And Fuji, if you’re out there, I’m begging you to have a hidden menu option like “Always EXR Mode” so that I can shoot HDR shots in Aperture Priority.

Final Thoughts

  1. We get about 3.8 stops of extra range with this approach. And if we have pink issues, that drops to about 2.8 stops. If you are shooting a 3-bracket set with steps of 1 stop each then you are only getting an additional 2 stops of range. A 5-bracket set only gives you 4 stops. So it’s somewhere between a typical 3-exposure and 5-exposure bracket. Certainly I can get more range from shooting a 9-bracket set with my D200, but unless I’m shooting architecture it’s probably not worth it.
  2. When you take a 5 shot bracket you should usually merge with all 5 frames (as opposed to just the brightest and darkest). Do you actually need the in-between frames? Not really. Those middle frames help with minimizing hue shifting and reducing noise, but you don’t actually need them for dynamic range unless they are more than 4 stops apart.
  3. The blending is nearly perfect, but not quite. There are no ghosting or camera shake issues (AWESOME!). Still, the two images are actually misaligned by a half pixel.
  4. This is great for skies and sunlight-shadows. When shooting outdoors in harsh lighting I find myself saying “If only I had one or two more stops”. Not any more. How many times have you shot someone with an overexposed sky behind them? The rule of thumb I hear is that your average sky is 3 stops brighter than the ground.
  5. If you can fix the lighting you should. It’s better to get the lighting right the first time than to try and fix it in post. So if you can use a polarizer or graduated neutral density filter to bring your sky down, then you should. And if your outdoor model is close enough then you should use your flash. HDR tonemapping should be your last choice, not your first choice.
  6. Action Shots! We can do HDR action shots now.
  7. One thing I’ve always wanted to do is shoot an ocean sunset. It’s tough because the specular reflections of the waves are extremely bright and they’re always moving. Time to try that.
  8. I’m really excited to go shooting. I would have gone the last few days but it was overcast. Not the best lighting for trying out HDR gear.
  9. The extra range is still not enough for some situations. But even if it’s not “enough”, much more of the frame will be exposed nicely which is still a win.
  10. This workflow is a bit of a pain and could be automated. Making a program to convert from a Fuji HDR-mode RAW file to some kind of real HDR file by tweaking a few sliders should be really easy. I’m debating throwing something like that together and making it freeware. There’s no way I’d sell it: I can’t honestly charge for something that should be included with the camera. But, I’m 90% sure that I’m going to write it for myself. For you Fuji owners, let me know if that would be useful to you.
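
If I do write that tool, the dcraw half of it is trivial to script. Here’s a minimal sketch: it assumes a dcraw binary on your PATH (the hypothetical file name is just for the demo), and uses the same flags as the walkthrough above, with the lower-case -w from the “Changes” section. The merge step would follow, using the mask-and-blend procedure described in this post.

```python
# Sketch of batch-splitting Fuji RAF files with dcraw. Assumes a
# "dcraw" binary on the PATH; the file name below is hypothetical.
import subprocess
from pathlib import Path

DCRAW_FLAGS = ["-w", "-q", "3", "-s", "all", "-6", "-g", "2.4", "12.92"]

def dcraw_command(raf_path):
    """Build the dcraw command line for one RAF file."""
    return ["dcraw", *DCRAW_FLAGS, str(raf_path)]

def split_raf(raf_path):
    """Run dcraw and return the paths of the two PPM sub-frames it writes."""
    subprocess.run(dcraw_command(raf_path), check=True)
    stem = str(raf_path.with_suffix(""))
    return Path(stem + "_0.ppm"), Path(stem + "_1.ppm")

if __name__ == "__main__":
    # Just show the command we would run for a hypothetical file.
    print(" ".join(dcraw_command(Path("DSCF0009.RAF"))))
```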

For everyone with a recent Fuji camera: Go shooting!

  • Michael James

    I went with the D3 to get the better lenses and full frame. I almost got the S5, but the crop sensor stopped me.

    At the time I was making that choice the D3 was the heavyweight full-frame option and the Nikon 14-24mm f/2.8G was the industry standard in every head-to-head lens test for the sharpest wide-angle glass.

    If it were not for that I’d likely have gone with the Fuji S5 Pro. I wish the company could have continued on because they were clearly WAY ahead of the industry and unfortunately they didn’t market it well enough to get the sales high enough to continue on.

    Michael James

  • http://blog.19lights.com John Hable

    Hi Michael. Have to agree with you 100% on the value of marketing. A few years ago (2008?) I was searching for an SLR specifically because I wanted to play around with HDR, and I went with the Nikon D200. The S5 never really came on my radar. Kinda wish I had known about that 3 years ago.

    It’s a shame that they don’t make SLRs anymore, but I remain hopeful that someone will. In three years, which would you rather have: more megapixels, more dynamic range, or a cheaper camera? Someone will figure out that dynamic range is the most important thing to improve. We still have less range than film.

    Btw, last Sunday I finally played around with the S5 with decent lighting and it was pretty shocking what you can do with a legit HDR camera. I threw together a program to batch merge an S5 raw into an EXR file and I’m never going back to regular shooting. Eventually camera makers will come around. But we have to show them that HDR is useful for more than those Van Gogh-wannabe shots that you see on twitter.

  • Michael James

    Since day one dynamic range was the top of my list over megapixels! :)

    To “test” the film vs digital thing I took my Nikon D3 out with a Nikon F4 and F5 and shot two film stocks and tripoded all shots with the same lenses. D3 at camera’s base ISO 200. F5 with 100 stock film (I compensated with shutter speed to match the D3/200iso) and F4 with 200 stock film. Used the same lenses on all three shots on a sunny day near noon.

    After scanning and comparing images I was shocked. Shocked because I set out to prove that these film idiots that won’t let go were wrong. I was ready to post the results to prove that digital had exceeded film.

    Boy was I wrong.

    I shot 14 bit uncompressed NEF on the D3. No matter how I messed with the raws, the highlights clipped sooner. However, the noise data of the D3 was better. BUT!!!! that never made up for the fact that the rolloff of the film stock was so smooth vs the typical nasty clipping/saturation that occurs with digital.

    My end take away was that film is still a very large dynamic range and I get why photographers still use it. But the convenience of digital and lower cost overall really makes shooting digital my preferred method. As much as I appreciate the film argument, the costs and time to process/scan doesn’t work for me.

    If the S5 was full frame I’d sell my D3 and go find one.

    I really wish Canon/Nikon or someone would hurry up and pay attention to that process and idea that Fuji was way out in front with.

  • http://blog.19lights.com John Hable

    Yep, that sounds about right. Film is surprisingly good. I learned about that from a friend of mine (HP Duiker) back at Electronic Arts in 2006 and I’ve been pushing for it in games ever since.

    One thing I’ve been doing in NaturalHDR is putting together a film-like curve. So instead of doing the “crazy hdr thing”, you could emulate that look with a digital camera and software. If you have a legit HDR camera (like the Fuji S5), and you have a film profile and the right software, there is nothing stopping you from a near-perfect simulation of the “film look” with a digital camera.

  • http://19lights.com/wp/2011/10/16/how-hdr-is-that-camera/ How “HDR” is that camera? | 19lights

    [...] CCD sensor. I don’t believe dxomark.com on these cameras. As I mentioned a few posts ago the Fuji S5 can take two images at once and they both seem about 3.7 stops apart. The later cameras do a similar thing and the two images [...]

  • Digitalcoastimage

    Did you ever automate this? Just curious. I bought a FujiFilm S3 Pro over a year ago and have just been doing edits in lightroom. Not sure if Adobe is taking full advantage of that extra DR or not, but it sure feels pretty large when I do edits off the raw files [with the larger file sizes when using that setting]

  • JohnHable

    Hi Michael. Sorry for the uber, uber, uber late reply. Let’s just say I’ve been busy…

    Ultimately, I left this one on the drawing board and sold my S5 several months ago. At the time I thought this sensor design was a brilliant idea that would gain more adoption. But as it turns out it makes more sense to have larger sensors (with more electronics per photosite). Now we have cameras like the new Nikons that have 14 stops of dynamic range without requiring some kind of crazy debayer and merge. So yes, I let that one go.