Anonymous
Hey /p/, HDR is just taking two pictures (or more) of the same thing with different exposures, right? How long until HDR gets implemented as a feature in point-and-shoot cameras? It would take two different exposures at once and combine them into a JPEG.

It would work like this: the camera detects a light area and a dark area in the frame and auto-exposes both spots using spot metering. Alternatively, you could manually choose the spots. Then, the camera would set itself up for two different shutter speeds at a fixed aperture. The first (underexposed) image would be read out before the shutter closes, while the longer, overexposed image would be read out after the shutter closes. Finally, the processor would stitch the two pictures automatically using some sort of algorithm and display the result on the LCD. This way, the user would not need to take two separate pictures (like bracketing) and risk moving the camera. Of course, a tripod would still be useful for long exposures, but handheld would be fine for exposures like 1/1000 and 1/100.
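For the stitching part, I imagine even a per-pixel weighted blend would work. A rough sketch in Python/NumPy; the function names and the weighting scheme are just my guess at what a real merge algorithm might do, not anything from actual camera firmware:

```python
import numpy as np

def fuse_exposures(short_exp, long_exp):
    """Blend a short (dark) and a long (bright) exposure of the same scene.

    Both inputs are float arrays scaled to [0, 1] and already aligned.
    Each pixel is weighted by how well-exposed it is (close to mid-gray),
    so clipped highlights in the long frame and crushed shadows in the
    short frame get little weight.
    """
    def well_exposedness(img):
        # Gaussian weight centered at mid-gray (0.5); sigma 0.2 is a guess
        return np.exp(-((img - 0.5) ** 2) / (2 * 0.2 ** 2))

    w_short = well_exposedness(short_exp)
    w_long = well_exposedness(long_exp)
    total = w_short + w_long + 1e-8  # avoid division by zero
    return (w_short * short_exp + w_long * long_exp) / total
```

Real exposure-fusion algorithms (Mertens et al.) also weight by contrast and saturation and blend in a multi-resolution pyramid to hide seams, but the basic idea is the same.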

Anyway, not that I'm a fan of HDR, but I think it would be a great novelty and marketing feature for future point-and-shoots. Also, people will probably call me a troll, so if I got anything wrong, let me know.
>> Anonymous
Probably soon. It will just have a version of Photomatix built in with all sliders set to max, and everyone will instantly be a pro photographer.
>> Anonymous
The thing is, you have to be able to read the value off the pixel sensor the first time without altering it. The way I understand most camera sensors, reading the data resets the sensor area, so you'd still need two separate exposures. Of course, I learned all this 3 years ago, so I might be wrong.
>> Anonymous
>>191697
Then I guess they could just fire two exposures back to back, like bracketing does. I'm not sure if point-and-shoots have that feature already.
>> eku !8cibvLQ11s
"Hdr" straight outta compt... camera would only require better sensors, and different kind of algorithm to process the data to jpg. Algorithm which wouldn't ditch all the highlights and shadows.

Rare are those times when dynamic range is so big it can't be stuffed in one photo (RAW).
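Something like this, roughly: a toy Python sketch of a tone curve that keeps highlight and shadow detail when squeezing linear sensor data into JPEG range. The operator is Reinhard's classic x / (1 + x); the constants are illustrative, not from any real camera:

```python
import numpy as np

def linear_to_jpeg(linear, exposure=1.0):
    """Map high-dynamic-range linear sensor values into [0, 255].

    A straight linear scale would clip everything above white; the
    Reinhard curve x / (1 + x) instead rolls highlights off smoothly,
    and the gamma lifts shadow detail for display.
    """
    x = np.asarray(linear, dtype=np.float64) * exposure
    tone_mapped = x / (1.0 + x)            # approaches 1, never clips
    display = tone_mapped ** (1.0 / 2.2)   # standard display gamma
    return np.clip(display * 255.0, 0.0, 255.0).astype(np.uint8)
```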
>> Anonymous
IIRC, it's something like the camera can't read the data off the sensor quickly enough. The Nikon D3 shoots at 9 fps, which means it takes about 1/9 of a second to take and process a photo. That's too slow to separate 1/1000 from 1/100 with the shutter still open, because by the time it finished reading the data, the sensor would already have been exposed for 1/100. I also doubt P&S's are going to get 9 fps anytime soon.
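To put numbers on that (a quick back-of-the-envelope check in Python, using only the figures above):

```python
# All numbers come straight from the post; frame_time lumps together
# exposure, readout, and processing.
frame_time = 1 / 9          # D3 at 9 fps: ~111 ms per frame
short_exposure = 1 / 1000   # 1 ms
long_exposure = 1 / 100     # 10 ms

# To capture both exposures in one shutter opening, the sensor would
# have to be read out in the gap between them:
readout_budget = long_exposure - short_exposure   # 9 ms
print(f"budget: {readout_budget * 1000:.0f} ms, "
      f"actual frame time: {frame_time * 1000:.0f} ms")
# budget: 9 ms, actual frame time: 111 ms -- more than 12x too slow
```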

It would be better to just manufacture sensors with a high dynamic range, like film. To market it in the meantime, they could include a setting that darkens highlights and brightens shadows in the RAW.
>> Anonymous
File: 251x251 image (Adobe Photoshop CS3, created 2008-05-28)
>> Depressed Cheesecake !wFh1Fw9wBU
It's not that simple. Tone mapping HDR for display is unsolved in photography because you're trying to convert an abstract 32-bit linear image into a visual 8-bit space. It's a creative process that the camera can't mimic (yet).
>> Anonymous
>> It would work like this: the camera detects a light area and a dark area in the plane and auto expose both spots using spot exposure.

Maybe someone else can shed some light on this, but isn't this what Active D-Lighting and Dynamic Range Optimizer already do on current cameras?