What would adding compression do to the D16?
Topic Rating: +19 (19 votes) 
August 29, 2013
11:55 pm
Lord Bronco

Kickstarter Backer
Forum Posts: 92
Member Since:
March 20, 2012
Offline
76
0

Just for the manly bros: :-))

-LBLaugh

August 30, 2013
3:51 am
wado1942
Guru
Forum Posts: 1107
Member Since:
March 19, 2012
Offline
77
0

Razz16mm said

Except that REC709 has a maximum defined photometric DR of 10 stops as represented by the standard 11 step log gray scale test pattern. Hence the need for some sort of compression scheme when converting from the wider DR CIE space to video.  How this compression is done affects the final appearance and rendering of scene values. Both luminance and chroma have to be modified.

The data has NO idea how many "stops" there are. It's just numbers, regardless of whether the image sensor captured 2 stops or 200. How could it possibly know how to make the input data conform to your 10-stop definition? Unless you intentionally crank up the gain to force clipping, like taking "500" and pushing it up to "1023", or likewise crushing the blacks, the contrast does not change. With a given range of 0-1023, going from one space to the other, 0 stays 0, 1023 stays 1023, 500 stays 500, and so forth.
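The point about code values can be sketched numerically. This is a minimal Python illustration with hypothetical 10-bit values (nothing D16-specific): a unity-gain remap between spaces leaves every code where it was, and only deliberately added gain pushes values past full scale and clips.

```python
# Sketch: remapping 10-bit code values between spaces at unity gain
# neither clips nor discards the captured range; only added gain
# pushed past full scale does. Hypothetical values, not D16 data.

def remap(code, gain=1.0, max_code=1023):
    """Apply gain, then clip to the legal code range."""
    return min(max_code, max(0, round(code * gain)))

samples = [0, 500, 1023]

# Unity gain: 0 stays 0, 500 stays 500, 1023 stays 1023.
assert [remap(c) for c in samples] == [0, 500, 1023]

# Cranked gain (forcing "426" up to full scale) clips everything
# that was above 426 -- that is when latitude is actually lost.
gain = 1023 / 426
clipped = [remap(c, gain) for c in [426, 500, 1023]]
print(clipped)  # [1023, 1023, 1023]
```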

August 30, 2013
4:18 am
pask74
Member
Forum Posts: 854
Member Since:
January 15, 2013
Offline
78
+1

What about a specialized "geeks corner" on the forum?

;-)

The following users say thank you to pask74 for this useful post:

iaremrsir
August 30, 2013
5:11 am
Razz16mm
Guru
Forum Posts: 1541
Member Since:
March 27, 2012
Offline
79
0

wado1942 said

Razz16mm said

Except that REC709 has a maximum defined photometric DR of 10 stops as represented by the standard 11 step log gray scale test pattern. Hence the need for some sort of compression scheme when converting from the wider DR CIE space to video.  How this compression is done affects the final appearance and rendering of scene values. Both luminance and chroma have to be modified.

The data has NO idea how many "stops" there are. It's just numbers, regardless of whether the image sensor captured 2 stops or 200. How could it possibly know how to make the input data conform to your 10-stop definition? Unless you intentionally crank up the gain to force clipping, like taking "500" and pushing it up to "1023", or likewise crushing the blacks, the contrast does not change. With a given range of 0-1023, going from one space to the other, 0 stays 0, 1023 stays 1023, 500 stays 500, and so forth.

The data is referenced to an analog voltage range, in this case the 700 mV analog swing between 0-100 IRE. It is the analog reference that sets the DR, just as it is the analog output range of the sensor that determines the DR of the camera. The data is just a defined numeric scale for the analog value range. Data is like a yardstick that is marked in discrete increments: you can measure in feet, inches, or 1/8ths, but whichever you use, it doesn't change the length of the stick. The analog reference for HD video is still the NTSC/PAL luminance values for a C-phosphor standard CRT display.

20·log10(700 mV / 1 mV) = 56.9 dB, and 56.9 dB ÷ 6 dB per stop ≈ 9.48 stops maximum DR in theory. In practice, gamma correction applied to the display cuts that in half, as there is no current display technology that will show more than 5 stops of photometric dynamic range as measured by a light meter. You can divide that into ever finer gray-scale steps, but at the point where the eye can't distinguish the change it becomes irrelevant. We see analog light values, not data.

In practice what we do as videographers or digital cinematographers is no different than what Ansel Adams did with his photographic prints. We have to decide what visual elements of a scene are important and make sure the luminance values of those elements fall within the limits of observable display values. Light in light out. We are looking at an analog "print" of the original scene with luminance values limited by the performance of the display.
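The dB-to-stops arithmetic above can be checked directly. The post uses 6 dB per stop; the exact per-stop figure is 20·log10(2) ≈ 6.02 dB, which shifts the result only slightly:

```python
import math

# 700 mV full swing referenced to a 1 mV floor.
db = 20 * math.log10(700 / 1)            # ~56.9 dB
stops = db / 6                           # ~9.48 stops at 6 dB per stop
stops_exact = db / (20 * math.log10(2))  # ~9.45 stops at 6.02 dB per stop

print(round(db, 1), round(stops, 2), round(stops_exact, 2))  # 56.9 9.48 9.45
```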

August 30, 2013
8:49 am
wado1942
Guru
Forum Posts: 1107
Member Since:
March 19, 2012
Offline
80
0

Razz16mm said
20·log10(700 mV / 1 mV) = 56.9 dB, and 56.9 dB ÷ 6 dB per stop ≈ 9.48 stops maximum DR in theory. In practice, gamma correction applied to the display cuts that in half, as there is no current display technology that will show more than 5 stops of photometric dynamic range as measured by a light meter. You can divide that into ever finer gray-scale steps, but at the point where the eye can't distinguish the change it becomes irrelevant. We see analog light values, not data.

I understand what you're saying, but you're talking about limitations of display contrast. It has no relevance to what I'm saying. If you have a CCD that captures a 12-stop range and digitize it in 12-bit raw, then demosaic, add gamma correction, etc., that whole range is still there. Now convert to 10-bit and it's still there. Send it to a TV and it's still there. The contrast upon output won't be as it was in the real world, because of the limitations you mentioned, but it's all still there. In 10-bit, each "stop" would be 85 steps apart. If the CCD captured 24 stops, they would be 43 steps apart. You can send that 24-stop image straight to a video DAC and you'll see a murky, gray image on screen, because the display doesn't have the same contrast, but the min, max, and everything in between is still there. The latitude of the sensor doesn't go away just because you sent it via .7V instead of 18V. The only way you'll lose that information, as I've said previously, is if you take, say, 426 and boost it up to 1023, clipping everything that was above 426 in the process. THEN and ONLY THEN will you lose photographic latitude from the camera, because you MADE it do that. You don't compress the image range to "fit" it into the REC709 spec; you have to take that 24-stop image from the camera and push it way into clipping to make it have only a 10-stop range. You HAVE to take 426 and push it up to 1023 (or take 597 and push it down to 0, or a little of both) to lose all that information.

 

BTW, despite the random comments from the others, I have no vendetta against you and find the conversation rather stimulating.
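The steps-per-stop figures quoted in the post assume the stops are spread evenly across the code range, i.e. a log-style encoding (in linear data the spacing per stop is not uniform). A quick check of the arithmetic:

```python
# Codes per stop when N stops are spread evenly over a 10-bit code
# range, as in a log-style encoding. In linear data the per-stop
# spacing would not be uniform.

def steps_per_stop(n_stops, bits=10):
    return (2 ** bits) / n_stops

print(round(steps_per_stop(12)))  # 85 codes per stop for 12 stops
print(round(steps_per_stop(24)))  # 43 codes per stop for 24 stops
```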

August 30, 2013
9:39 am
Razz16mm
Guru
Forum Posts: 1541
Member Since:
March 27, 2012
Offline
81
0

wado1942 said

Razz16mm said
20·log10(700 mV / 1 mV) = 56.9 dB, and 56.9 dB ÷ 6 dB per stop ≈ 9.48 stops maximum DR in theory. In practice, gamma correction applied to the display cuts that in half, as there is no current display technology that will show more than 5 stops of photometric dynamic range as measured by a light meter. You can divide that into ever finer gray-scale steps, but at the point where the eye can't distinguish the change it becomes irrelevant. We see analog light values, not data.

I understand what you're saying, but you're talking about limitations of display contrast. It has no relevance to what I'm saying. If you have a CCD that captures a 12-stop range and digitize it in 12-bit raw, then demosaic, add gamma correction, etc., that whole range is still there. Now convert to 10-bit and it's still there. Send it to a TV and it's still there. The contrast upon output won't be as it was in the real world, because of the limitations you mentioned, but it's all still there. In 10-bit, each "stop" would be 85 steps apart. If the CCD captured 24 stops, they would be 43 steps apart. You can send that 24-stop image straight to a video DAC and you'll see a murky, gray image on screen, because the display doesn't have the same contrast, but the min, max, and everything in between is still there. The latitude of the sensor doesn't go away just because you sent it via .7V instead of 18V. The only way you'll lose that information, as I've said previously, is if you take, say, 426 and boost it up to 1023, clipping everything that was above 426 in the process. THEN and ONLY THEN will you lose photographic latitude from the camera, because you MADE it do that. You don't compress the image range to "fit" it into the REC709 spec; you have to take that 24-stop image from the camera and push it way into clipping to make it have only a 10-stop range. You HAVE to take 426 and push it up to 1023 (or take 597 and push it down to 0, or a little of both) to lose all that information.

 

BTW, despite the random comments from the others, I have no vendetta against you and find the conversation rather stimulating.

Ha ha, I too find the conversation stimulating.  The issue is the different analog references between camera CIE color space and REC709/sRGB display color space. They do not match up even with gamma correction applied. The digital encoding is for completely different ranges of analog values.

When you do the first debayer pass, you compress the camera analog DR values into display analog values or you clip them. How you do this determines how the image looks. You do this while looking at a reference monitor to "print" the values the way you want to see them. A linear gamma compression will result in a very flat gray low contrast image on the display. If you encode to video that way, when you bring up the video contrast to normal  values without applying some sort of non-linear correction to retain desired highlight and shadow details, you will push the broader camera values out of display range at the ends and clip them. 

The camera DR represented by the data may be completely retained in a digital sense, but the analog values the data represents are different. 
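The "compress or clip" choice described in this post can be pictured with a hypothetical sketch: mapping 12 stops of linear scene values into a display range of roughly 5 stops, either by straight linear scaling (everything fits, but the result is flat and gray) or by preserving midtone contrast and clipping the out-of-range values. All figures here are illustrative assumptions, not measured camera or display data.

```python
# Hypothetical sketch of the "compress or clip" choice when a wide-DR
# linear signal is mapped to display range (normalized 0.0-1.0).

def compress_linear(v, scene_max):
    """Straight linear scale: everything fits, but midtones land very
    low, which is why the result looks flat and gray on the display."""
    return v / scene_max

def clip_to_display(v, display_max):
    """Keep midtone contrast; clip what the display can't show."""
    return min(v, display_max) / display_max

scene_max = 2 ** 12    # 12 stops above the floor, linear light
display_max = 2 ** 5   # display shows roughly 5 stops
mid_gray = 2 ** 8      # a value 4 stops below scene max

print(compress_linear(mid_gray, scene_max))    # 0.0625 -> dark, flat
print(clip_to_display(mid_gray, display_max))  # 1.0 -> pinned to white
```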

August 31, 2013
12:32 pm
wado1942
Guru
Forum Posts: 1107
Member Since:
March 19, 2012
Offline
82
0

Razz16mm said

The camera DR represented by the data may be completely retained in a digital sense, but the analog values the data represents are different. 

We're basically in agreement except for what happens in the conversion. In my experience, you don't have to "compress" the dynamic range of the scene; it's just there. If you want the contrast upon output to a TV to be the same as a normal video camera, you need to EXPAND (thus clipping) the range.

August 31, 2013
1:35 pm
Razz16mm
Guru
Forum Posts: 1541
Member Since:
March 27, 2012
Offline
83
0

wado1942 said

Razz16mm said
The camera DR represented by the data may be completely retained in a digital sense, but the analog values the data represents are different. 

We're basically in agreement except for what happens in the conversion.  In my experience, you don't have to "compress" the dynamic range of the scene, it's just there.  If you want the contrast upon output to a TV to be the same as a normal video camera, you need to EXPAND (thus clipping) the range.

This is true with a video camera, because the DR compression has already occurred before the video is encoded. When you select a profile like Cine gamma 1 or 4 on a Sony from the table of choices the camera offers, you select the compression method applied. Of course if the scene itself is within 10 stops then no compression is needed.

Working with raw files, you do this in the raw processor with exposure, gamma, curves, shadow and highlight recovery tools, toning, or conventional brightness and contrast controls. You don't just magically encode 682 discrete steps per stop over 12 stops of DR into 124 steps per stop over 10 without significant crunching.

The LUTs loaded in video cameras or applied in post are developed by observing monitors and instruments calibrated for the destination color space, sRGB/REC709. So the encoded result is REC709 compliant DR by definition. In that sense you are right. But if the CIE wide DR values are not compressed to REC709 range when the video is encoded, the out of range values are not recoverable afterwards as they are not included in the video data. Straight conversion with simple gamma correction that matches the white and black point of the original camera data to display space is still DR compression.
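The general effect behind the "significant crunching" claim can be sketched by counting how many output codes land in each stop when 12-bit linear data is gamma-encoded to 10 bits. The exponent below is an assumed Rec.709-like value for illustration, and the counts it produces are illustrative, not the exact figures quoted above:

```python
# Sketch: how a gamma encode redistributes codes per stop when 12-bit
# linear data is quantized to 10 bits. The exponent is an assumed
# Rec.709-like value; counts are illustrative only.

GAMMA = 1 / 2.4

def encode(v, in_max=4095, out_max=1023):
    return round((v / in_max) ** GAMMA * out_max)

# Count distinct output codes landing in each input stop.
for stop in range(12):
    lo, hi = 2 ** stop, 2 ** (stop + 1)
    codes = {encode(v) for v in range(lo, min(hi, 4096))}
    print(f"stop {stop:2d}: {len(codes)} output codes")
```

The top stop holds half the linear input codes but far fewer output codes after the gamma redistribution, which is exactly the kind of per-stop reshuffling being described.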

 

 

August 31, 2013
3:48 pm
wado1942
Guru
Forum Posts: 1107
Member Since:
March 19, 2012
Offline
84
0

Let me rephrase: if I'm handling raw data, I use a couple of tricks that strip the metadata so I can make the choices myself. I don't have to do anything to recover highlights or shadows. I can see the unaltered, merely deBayered image on my screen and know where the shadows & highlights sit. I just subtract a black frame and add gamma correction, no other curves, just gamma correction (which only changes the mid tones). That by itself gives me a good 2-3 stops more latitude showing in the final product than using normal methods, which retain the camera's metadata. If I don't strip the metadata first, it looks pretty much like how those cameras' normal images look. So the conversion from raw to REC709 in and of itself does not mess up the image; it's the metadata that TELLS the software to mess up everything, so you have to perform all those extra tasks to rescue that detail.
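The gamma-only workflow described here reduces to two operations. This is a minimal sketch with hypothetical 12-bit values and an assumed 2.2 exponent, not the actual D16 pipeline or any particular raw processor's code:

```python
# Sketch of the gamma-only raw grade described above: subtract a
# black frame, then apply a single gamma curve and nothing else.
# Hypothetical 12-bit values and an assumed 2.2 exponent.

GAMMA = 1 / 2.2

def grade(pixel, black, max_in=4095, max_out=1023):
    """Black-frame subtraction followed by gamma correction only."""
    v = max(0, pixel - black)            # remove the fixed black offset
    v = (v / (max_in - black)) ** GAMMA  # gamma lifts the midtones...
    return round(v * max_out)            # ...endpoints map to 0 and 1023

black = 64
print(grade(black, black))  # 0    -> black stays black
print(grade(4095, black))   # 1023 -> white stays white
print(grade(2048, black))   # midtones lifted, nothing clipped
```

Note the curve moves only the midtones: the black and white points pass through unchanged, which is why no highlight or shadow "recovery" step is needed.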
