Author Topic: 50% more gamut with quantum dots  (Read 3471 times)
Fine_Art
Sr. Member
****
Offline

Posts: 1145


« on: February 23, 2013, 08:13:34 PM »

Sony is releasing a new entry-level DSLR on Monday that supports their "TriLuminos" color technology. Basic info about the camera is out there. The fascinating thing to me was this article about the large color gamut:

"The TriLuminos gamut is massive. Unlike HDTV, it’s bigger than AdobeRGB and much bigger than regular sRGB (what most computer screens can show). It is 75.8% of the CIE 1931 colour space. That, by the way, is simply a standard based on what a bunch of test subjects could perceive back in 1931 and it’s been criticised for failing to include a wide enough range of genetic backgrounds and learned visual abilities."
http://www.photoclubalpha.com/2013/02/20/colour-and-power-benefits-of-sony-20-megapixel-sensor/

So of course I have to find out how on earth I am going to see this on a display, which led to this article:

"Sony might have made news for being the first TV manufacturer with a 4K OLED TV at CES last week, but that wasn’t the only first the company was celebrating. Its new Triluminos displays are the first consumer devices (let alone TVs) to make use of quantum dots — a semiconductor technology that uses "tuned" nanocrystals so small that they exhibit quantum properties, emitting light only at predetermined wavelengths. The resulting displays reportedly see as much as a 50 percent increase in color gamut, or the range of colors that the screens can reproduce."

http://www.theverge.com/2013/1/16/3881546/sonys-new-triluminous-tvs-pursue-vibrant-hues-with-quantum-dots

Maybe this new stuff will come color-calibrated out of the box, as long as everything stays within that manufacturer's ecosystem. Does anyone else have a screen calibration system that can handle this gamut?

Viewing my pictures on a 4k Screen with 50% more color gamut is going to be a whole new world!
bill t.
Sr. Member
****
Offline

Posts: 2711


« Reply #1 on: February 23, 2013, 10:26:31 PM »

Cool!  But quite frankly I'm having enough trouble dealing with just the color gamut I already have, much less 50% more!  And when I am not there to observe it, will my quantum monitor dissolve into a sort of amorphous cloud of probability, only re-assembling as a monitor when I return?  Will I sometimes, at random, be able to view fully processed images that I haven't even taken yet?  The quantum world is weird, and not to be trifled with.

But seriously, it sounds like sometime in the next few years we're all going to be spending a lot of money on new and improved display and colorimetric devices.
Jim Kasson
Sr. Member
****
Online

Posts: 1034



« Reply #2 on: February 24, 2013, 11:33:51 AM »

"The TriLuminos gamut is massive... It is 75.8% of the CIE 1931 colour space.

From the diagram on the linked web page, it looks like the 75.8% statistic could be misleading. It is misleading if it's calculated from the illustrated CIE xy chromaticity diagram, which is nowhere near perceptually uniform; the xy diagram assigns too much weight to the greens, and in the xy diagram in the article linked from the OP, most of the gain is in the greens. I can't find the TriLuminos primaries on the web, but if someone will give me a pointer to them, I'll be happy to plot them on a CIE u'v' chromaticity diagram (not perfectly perceptually uniform, but much closer than xy) and post the results.
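To put rough numbers on how much the choice of diagram matters, here's a sketch using Rec.2020's monochromatic primaries as a stand-in for the unpublished TriLuminos set (an assumption, purely for illustration). The triangle-area ratio against sRGB comes out noticeably larger in xy than in u'v':

```python
# Gamut-area ratios depend heavily on the chromaticity diagram you
# measure them in. Stand-in primaries: Rec.2020 (monochromatic) vs sRGB.

def xy_to_uv(x, y):
    """CIE 1976 u'v' from CIE 1931 xy."""
    d = -2 * x + 12 * y + 3
    return 4 * x / d, 9 * y / d

def tri_area(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

srgb = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]        # sRGB R, G, B
wide = [(0.708, 0.292), (0.170, 0.797), (0.131, 0.046)]  # Rec.2020 R, G, B

ratio_xy = tri_area(wide) / tri_area(srgb)
ratio_uv = tri_area([xy_to_uv(*p) for p in wide]) / \
           tri_area([xy_to_uv(*p) for p in srgb])

print(f"area ratio in xy:   {ratio_xy:.2f}")   # xy exaggerates the green gain
print(f"area ratio in u'v': {ratio_uv:.2f}")   # closer to perceptual weighting
```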

Jim

Tim Lookingbill
Sr. Member
****
Offline

Posts: 1227



« Reply #3 on: February 24, 2013, 11:13:56 PM »

Bayer RGGB filters have to be the deciding factor in gamut-capture capability, considering all a sensor does is count captured electrons, filtered through the Bayer pattern, from the noise floor at zero electrons to full saturation at each pixel site.

The voltage charge at each pixel is then read as a grayscale luminance value and converted to 1s and 0s for software like a raw converter, whether in-camera or in post, to reconstruct and assign color intensities as defined by the sensor's filtering of the scene gamut. They'd have to measure the transmissive spectral qualities of the Bayer RGGB filters to know what colors to assign to the grayscale pattern.

Since gamut-capture capability is governed by the sensor's filtering, the only way to know how well it captures a gamut is to examine and measure how smoothly the grayscale luminance patterns that define detail vary near saturation, before that detail blows out into posterization.

IOW, a sensor that defines color in an image by luminance variation has to have its gamut defined downstream, which isn't the source. If Sony doesn't divulge the special spectral qualities of the Bayer filtering that provides the extra gamut-capturing capability, then third-party raw converter engineers have to guess and just trust what Sony says.


Of course the only way to see this smoothness of detail close to full saturation is on a display that can show it.
l_d_allan
Full Member
***
Offline

Posts: 207



« Reply #4 on: February 25, 2013, 08:33:37 PM »

I'm fuzzy on whether this post concerns the sensor of a DSLR, the LCD of a DSLR, computer monitors, or televisions.

To me, a better gamut on a DSLR's LCD is a "who cares". I really don't see how this applies to a sensor, at least with raw captures. If it is about the camera sensor, I suppose that might allow 15-bit or 16-bit actual bit depth instead of the 14-bit depth common in more recent DSLRs. According to the RawDigger utility, my Canon 5D Mark II has about 14,000 different intensity levels for each of the three RGB channels, which requires a 14-bit A-to-D converter.
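As a quick sanity check on those RawDigger numbers: ~14,000 distinct levels per channel does indeed require a 14-bit A-to-D converter, since 2^13 = 8,192 falls short and 2^14 = 16,384 suffices.

```python
import math

levels = 14_000  # approximate per-channel level count reported for the 5D Mark II
bits_needed = math.ceil(math.log2(levels))
print(bits_needed)   # 14
print(2**13, 2**14)  # 8192 16384: 13 bits too few, 14 bits enough
```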

My speculation (I didn't track down the articles) is that it is about an eventual computer monitor that people using Photoshop or other photo-editing software might use. Perhaps the product would be a competitor to high-end Dell, HP, Eizo, etc. monitors that cover more than 100% of the Adobe-1998 gamut.

Or not?
« Last Edit: February 26, 2013, 04:00:46 AM by l_d_allan »

retired in Colorado Springs, CO, USA ... hobby'ist with mostly Canon gear ... let me know if you're in the area and would like a free guided tour of our photographically "target-rich environment"
Jim Kasson
Sr. Member
****
Online

Posts: 1034



« Reply #5 on: February 26, 2013, 10:25:52 AM »

I'm fuzzy on whether this post concerns the sensor of a DSLR, the LCD of a DSLR, computer monitors, or televisions.

Certainly monitors and TVs. Near as I can figure out, Sony is using a backlight for televisions, and perhaps monitors, that emits mainly in three narrow regions of the spectrum, possibly using technology from Nanosys, who have been pitching what sounds like something similar for a while. With such a backlight, all the red filter has to do is block the blue and green spectral peaks, all the blue filter has to do is block the red and green peaks, and all the green filter has to do is block the blue and red peaks. With a broad-spectrum backlight, the only way to get nearly-spectral monitor primaries was to use very narrow filters, which made the display as a whole inefficient. Now that there is not as high an efficiency penalty for moving the primaries out toward the spectral horseshoe, the Sony engineers have done so.
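A toy numerical sketch of the efficiency argument (peak wavelengths and widths here are illustrative Gaussians, not Sony's actual values): with a three-peak backlight, a wide "red" filter passes essentially all of the red peak, while getting an equally narrowband red from a broadband backlight throws most of the light away.

```python
import numpy as np

nm = np.arange(380, 781, 1.0)  # wavelength grid, nm

def gauss(center, width):
    return np.exp(-0.5 * ((nm - center) / width) ** 2)

# Hypothetical quantum-dot backlight: three narrow emission peaks.
qd_backlight = gauss(630, 10) + gauss(530, 10) + gauss(450, 10)
broad_backlight = np.ones_like(nm)  # idealized broadband source

# With the QD backlight, a "red" filter only has to block the other two
# peaks, so its passband can be wide; with a broadband source, an
# equally narrow red emission needs an equally narrow filter.
wide_red = (nm > 580).astype(float)
narrow_red = gauss(630, 10)

def throughput(backlight, filt):
    """Fraction of the backlight's light that survives the filter."""
    return np.trapz(backlight * filt, nm) / np.trapz(backlight, nm)

print(f"QD backlight, wide red filter:      {throughput(qd_backlight, wide_red):.2f}")
print(f"broad backlight, narrow red filter: {throughput(broad_backlight, narrow_red):.2f}")
```

The first throughput is roughly a third (one of three equal peaks); the second is a small fraction, which is the efficiency penalty Jim describes.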

From the OP, it looks like Sony has also announced camera support for a new color space that is congruent, or close to it, with the gamut of their new displays. As you quite rightly point out, this probably has nothing to do with the sensor. Camera sensors sum luminous spectra into three buckets. They don't have gamuts in the sense that we talk about the gamut of an output device. If you could find a sensor which produced no response in some portion of the visible spectrum, you could say that its capture gamut omitted any colors formed by using that part of the spectrum, but, thanks to metamerism, there are an infinite number of spectra that can produce any given color, so that's a rabbit hole we probably don't want to go down. In any event, all the Bayer arrays that I know about do have significant response in at least one color plane from 400 to 800 nm.

However, camera manufacturers provide, in the EXIF data, suggested matrices for conversion of raw data to some color spaces. It sounds like Sony has defined a new color space that fits their new displays, and some of their cameras will probably produce JPEGs in that space and instructions (in the form of new matrices) for raw converters to convert to that space.
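A minimal sketch of what such a conversion looks like: a single 3x3 matrix applied to linear, white-balanced camera RGB. The matrix values here are invented for illustration, not Sony's.

```python
import numpy as np

# Hypothetical camera-RGB -> output-space matrix of the kind a raw
# converter might take from EXIF/maker notes (values are illustrative).
cam_to_out = np.array([
    [ 1.80, -0.60, -0.20],
    [-0.25,  1.45, -0.20],
    [ 0.05, -0.55,  1.50],
])

# Rows sum to 1.0 so neutral (gray) camera values stay neutral.
assert np.allclose(cam_to_out.sum(axis=1), 1.0)

cam_rgb = np.array([0.40, 0.35, 0.30])  # linear, white-balanced camera RGB
out_rgb = cam_to_out @ cam_rgb
print(out_rgb)
```

Defining a new output color space then just means shipping a different matrix; the sensor and the raw data are untouched.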

As a side issue, an RGB color space for a display is limited to primaries which are physically realizable, and can use only positive amounts of each primary. A color space for processing or storage has no such limitations. Examples of RGB color spaces that violate the positive, physical primary constraint are ProPhoto RGB, which employs two physically unrealizable primaries (the "blue" one -- can you call it blue if you can't see it? -- being wildly outside the horseshoe), and the underlying RGB space behind Kodak (Photo CD) YCC, which allowed negative amounts of some of the 709 primaries. So there's no reason why color spaces used in processing have to be tied to color spaces used for display. I suspect Sony's target customer for the new cameras is someone making JPEGs and doing no computer processing, but just displaying the images "as is" on a Sony monitor. For those people, a camera's ability to save pictures in the monitor color space will allow them to see colors on the display that they can't see on competitors' displays... yet.
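You can see the "negative amounts" problem numerically: take a highly saturated green outside the sRGB triangle (here the Rec.2020 green primary, used as a convenient example) and express it in linear sRGB using the standard XYZ-to-sRGB matrix. The red coordinate comes out negative, i.e. no physical sRGB display can show that color.

```python
import numpy as np

# Standard linear XYZ -> sRGB (D65) matrix (IEC 61966-2-1).
xyz_to_srgb = np.array([
    [ 3.2406, -1.5372, -0.4986],
    [-0.9689,  1.8758,  0.0415],
    [ 0.0557, -0.2040,  1.0570],
])

# A highly saturated green outside the sRGB triangle:
# the Rec.2020 green primary, xy = (0.170, 0.797), scaled to Y = 1.
x, y = 0.170, 0.797
xyz = np.array([x / y, 1.0, (1 - x - y) / y])

rgb = xyz_to_srgb @ xyz
print(rgb)  # the red component comes out negative
```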

Jim
« Last Edit: February 26, 2013, 10:40:46 AM by Jim Kasson »

l_d_allan
Full Member
***
Offline

Posts: 207



« Reply #6 on: February 26, 2013, 10:50:40 AM »

Jim K,

Thanks for the reply ... some of which was over my head.

My interpretation is that the Sony gamut might be an attempt at an improved gamut between Adobe-1998 and ProPhoto that would have greater consistency with the "gamut horseshoe" that a "standard human" can actually see.  Or do I have a flawed understanding?

BTW, Sony seems to be at the forefront of camera sensors (à la the Nikon D800), so it got my attention when the OP mentioned a Sony DSLR sensor.

Jim Kasson
Sr. Member
****
Online

Posts: 1034



« Reply #7 on: February 26, 2013, 11:01:43 AM »

My interpretation is that the Sony gamut might be an attempt at an improved gamut between Adobe-1998 and ProPhoto that would have greater consistency with the "gamut horseshoe" that a "standard human" can actually see.  Or do I have a flawed understanding?

I think you've pretty much got it. In Adobe RGB, we have a gamut that doesn't encompass everything we can print. In ProPhoto RGB, we pretty much do, but you can't build a ProPhoto RGB monitor, at least in this universe. So increasing the gamut of the monitor toward encompassing every color we can print is A Good Thing, and Sony is doing that. Judging from the direction they went with the primaries, I think their target is display viewers, not photographers who will use the monitor as a proofing device. There are limitations on what you can do with three primaries: http://blog.kasson.com/?p=1880

Four would give you a lot more flexibility, and some display manufacturers are pursuing that direction: http://gizmodo.com/5441497/sharps-fanciest-new-tvs-the-4+color-le920-series

Tim Lookingbill
Sr. Member
****
Offline

Posts: 1227



« Reply #8 on: February 26, 2013, 11:40:30 AM »

Quote
Camera sensors sum luminous spectra into three buckets. They don't have gamuts in the sense that we talk about the gamut of an output device.

For photographic capture purposes, yes, sensors don't have gamuts, but from a photographic processing POV they do capture a scene's gamut. A sensor acts only as a mirror on reality; however, the harder you push it to record bright, saturated, deep, and intense colors, the more it falls to the display's gamut to determine how far you can go without turning those intense colors' detail into posterized blobs on screen.

In this sense there is no separation of gamut definitions as far as digital capture and processing workflows are concerned.

With regard to straight-through processing from camera to display, it becomes a question of whether Sony's pipeline can map wide-gamut scenes and maintain their detail without posterizing it on screen. Proving the gamut-capture capability of Sony's sensor would entail knowing whether Sony's default pipeline, direct to the wide-gamut display, can compensate for a scene's gamut on the fly.


For instance, the scene gamut of the orange pomegranate flower below seems to have exceeded the gamut-capture capabilities of my Pentax K100D DSLR, but ACR's tools suggest the camera's JPEG pipeline failed in rendering even to sRGB, Adobe RGB, or ProPhoto RGB, so we're left wondering what gamut the actual scene encompassed.

So is ACR a wide gamut processor?
« Last Edit: February 26, 2013, 11:45:25 AM by tlooknbill »
Chairman Bill
Sr. Member
****
Offline

Posts: 1594


« Reply #9 on: February 26, 2013, 11:51:04 AM »

This is going to be fine for women - they can see 100 shades of taupe. Us blokes just see a sort-of-beige. 'Tis overkill for at least 50% of us ;)

Jim Kasson
Sr. Member
****
Online

Posts: 1034



« Reply #10 on: February 26, 2013, 04:10:49 PM »

Tim,

You raise several interesting issues, and I thank you for making me think hard about something that I haven't been paid to think about for close to twenty years.

It is possible to have a kind of compression of the differences between two or more colors that takes place at a sensor level, as opposed to processing that takes place after the raw image is captured. You could have a camera whose filter set was sufficiently peaky that, for saturated colors with a narrow spectrum in the pass-band of one of the filters, only one color plane in the raw image had a useful signal-to-noise ratio (SNR). Small chromaticity variations in the subject would not translate to any chromaticity variations in the captured image, since the chromaticity of the captured image would be whatever chromaticity the software that followed the capture mapped a raw value of x,0,0 (or 0,x,0 or 0,0,x) to. I don't think that's what's happening in your example, since, in the absence of tone compression in the sensor output (brought on by the well filling up or something in the electron-to-digital-value chain saturating), you would see no compression in luminance, which I'm pretty sure I see in the JPEG part of your example.
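Here's a toy simulation of that peaky-filter scenario (all spectra and filter curves are made-up Gaussians, not any real camera's): two saturated reds with genuinely different spectra both land in only one color plane and come out as nearly identical raw triplets, so their chromaticity difference is lost at capture.

```python
import numpy as np

nm = np.arange(380, 781, 1.0)  # wavelength grid, nm

def gauss(center, width):
    return np.exp(-0.5 * ((nm - center) / width) ** 2)

# Hypothetical, very "peaky" camera filter sensitivities.
filters = {"r": gauss(620, 12), "g": gauss(540, 12), "b": gauss(460, 12)}

def raw_triplet(spectrum):
    """Integrate the subject spectrum against each filter, normalize exposure away."""
    t = np.array([np.trapz(spectrum * f, nm) for f in filters.values()])
    return t / t.max()

# Two saturated reds with genuinely different spectra...
flower_a = gauss(640, 5)
flower_b = gauss(650, 5)

ra, rb = raw_triplet(flower_a), raw_triplet(flower_b)
print(ra, rb)
# ...both capture as essentially "x,0,0": only the red plane has signal,
# so the chromaticity difference between them is gone after normalization.
```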

Designing a filter set for a camera requires the management of a complicated set of tradeoffs. (Tradeoffs are a good thing; otherwise, what would the engineers and the product managers argue about?)

Here are a few:

1) Sensitivity (base ISO) -- higher is better. This argues for keeping the filter passbands wide.
2) SNR -- higher is better. This argues for keeping the filter passbands narrow.
3) Accurate conversion to human tristimulus color space. The effect of this is difficult to generalize. Here's one specific that comes to mind: the ability to replicate the excitation of the "long" or "red" cone cells by extremely short-wavelength visible light that produces the color violet in the rainbow. I've not seen a camera that does a credible job with this.
4) Availability, cost, and stability of suitable filter materials.

The last criterion is easier to deal with in a camera that uses beam-splitters and three monochrome sensors rather than a Bayer array or something similar. However, such an approach has adverse cost, size, and ruggedness implications.

You can see how criteria 2) and 4), together with a low valuation of 3) (Velvia, anyone?) could lead a product team to develop a sensor that had the kind of color-specific chromaticity compression that you're talking about.

Your example shows a lot of luminance compression in the red channel, but it also shows luminance compression in the other two channels -- look at the hand. That makes me think that some or all of that compression comes, as you mention, from the image processing pipeline that follows the raw capture. There are lots of places in that chain of operations where such damage might take place.

Does that make sense? Or have I missed your point?

Jim

« Last Edit: February 26, 2013, 04:53:29 PM by Jim Kasson »

Tim Lookingbill
Sr. Member
****
Offline

Posts: 1227



« Reply #11 on: February 26, 2013, 05:06:26 PM »

Quote
Does that make sense? Or have I missed your point?

Jim

Interesting outline of the mechanics of digital sensor behavior as it relates to reproduction according to human vision as a color space, but that wasn't my point.

My point was about Sony, or any other company, claiming improved color-gamut performance for a digital capture-and-viewing device, and how to prove that performance so photographers can tell whether it's happening as claimed.

The orange flower was meant to illustrate that, as long as software is involved in the digital reproduction pipeline, I don't see how they can prove the hardware sensor's gamut-capture capability in a way a photographer would understand, and thus lay down his money to buy such a device.

IOW, I believe it's marketing BS, because it claims a capability that's downright unprovable, or so complex that a photographer wouldn't understand it even if it were explained as plainly as possible, or know whether it's happening as designed.
« Last Edit: February 26, 2013, 05:08:03 PM by tlooknbill »
Jim Kasson
Sr. Member
****
Online

Posts: 1034



« Reply #12 on: February 26, 2013, 05:10:21 PM »

Tim,

I think your skepticism is well-placed. It's early days, though. Maybe as we learn more, there will be some beef.

Jim

digitaldog
Sr. Member
****
Online

Posts: 9214



« Reply #13 on: February 26, 2013, 05:12:31 PM »

IOW I believe it's marketing BS because it claims a capability that's downright unprovable or so complex that a photographer wouldn't understand it if it was explained as plainly as possible or even know if it's happening as designed.

That's a very real possibility! Some of the stuff I see in marketing seems logical until you dig deeper. For example, there are projectors that, in addition to using RGB filters, add CMY. The marketing spin I've seen makes it appear that, man, are you going to see more saturated, wider-gamut color. Build a profile, and the reality is you've got a smaller gamut than a device using RGB filters and just better technology.

I also love when manufacturers tell us how many billions or trillions of colors their device can produce (because of 12 or 14 bits and some simple math), despite the gamut limitations, and despite the fact that most people who know agree we can't see anything close to those numbers of colors (even 16.7 million). Doesn't stop the marketing BS from being applied to the unsuspecting.
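Those "billions of colors" claims really are just exponent arithmetic on addressable code values; the math says nothing about gamut or about how many of those values anyone can distinguish.

```python
bits = 12
addressable = (2 ** bits) ** 3   # per-channel code values, cubed
print(f"{addressable:,}")        # 68,719,476,736 addressable "colors" at 12 bits
print(f"{(2 ** 8) ** 3:,}")      # 16,777,216 -- the familiar "16.7 million" at 8 bits
```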

I'm not saying this is the case here with Sony. I'm saying Tim's skepticism is well deserved.

Andrew Rodney
Author “Color Management for Photographers”
http://digitaldog.net/
Tim Lookingbill
Sr. Member
****
Offline

Posts: 1227



« Reply #14 on: February 26, 2013, 05:16:27 PM »

Just to add: all a photographer wants to do is point his/her camera at a scene and know up front how much work is going to be involved in getting what he/she saw, or what they want to convey.

If they have to shoot product shots with neon green, deep and rich aqua, and other intense colors, they need to know whether they will be limited by their camera's or their display's color gamut, so they don't waste time reading color-theory explanations that don't improve their bottom line.

Rather than just claiming their device can encompass some obscure 3D plot of a color gamut, a company would be more helpful offering a tool to go with it that measures, or at least tells them, whether what they've shot goes beyond the reproduction chain the photographer bought into.

That's when science becomes functional for a photographer.
« Last Edit: February 26, 2013, 05:22:50 PM by tlooknbill »
GlueFactoryBJJ
Newbie
*
Offline

Posts: 10


« Reply #15 on: April 12, 2013, 04:54:55 PM »

First, regarding the camera sensor, increased gamut is irrelevant for 99+% of the pictures taken today. Since they are mostly going to be viewed as sRGB pictures on (at best) sRGB-capable monitors (e.g. on the web), any extended gamut will be lost. However, for photo processing with good software, a higher dynamic range can result in a far better conversion to the sRGB space, including better shadow-tone detail, which is typically lacking in pictures taken directly into an sRGB space.

I say this because of the way I understand bits are allocated to different light levels in the picture (unless I've completely misunderstood Michael Reichmann's ETTR and "OpenRAW" articles), and because of the relatively low-powered converters in even modern DSLR cameras when compared to PCs.

Second, any display that is set to a gamut wider than the picture it is trying to display can be prone to color distortions as the monitor tries to "upconvert" the (for example) sRGB picture to, say, an Adobe RGB space. These distortions will likely get worse as conversions are done to move these sRGB pics to even wider and non-standard color spaces.
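A small sketch of why that kind of mis-tagged "upconversion" distorts color: the same numeric triplet means a different chromaticity in each space. Pure green (0,1,0) encoded as sRGB but rendered as if it were Adobe RGB shifts from xy roughly (0.30, 0.60) to roughly (0.21, 0.71), i.e. visibly oversaturated. The matrices below are the standard published linear RGB-to-XYZ ones for the two spaces.

```python
import numpy as np

# Standard linear RGB -> XYZ matrices (D65 white).
srgb_to_xyz = np.array([
    [0.4124, 0.3576, 0.1805],
    [0.2126, 0.7152, 0.0722],
    [0.0193, 0.1192, 0.9505],
])
adobe_to_xyz = np.array([
    [0.5767, 0.1856, 0.1882],
    [0.2974, 0.6274, 0.0753],
    [0.0270, 0.0707, 0.9911],
])

def chromaticity(xyz):
    """CIE xy chromaticity of an XYZ triplet."""
    return xyz[:2] / xyz.sum()

green = np.array([0.0, 1.0, 0.0])  # the same linear "pure green" code value
print(chromaticity(srgb_to_xyz @ green))   # ~(0.30, 0.60): what it should mean
print(chromaticity(adobe_to_xyz @ green))  # ~(0.21, 0.71): what a mis-tagged
                                           # wide-gamut display would show
```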

This is also a problem for other "legacy" media, such as pics that use Adobe RGB and movies that use Rec. 709 (effectively the sRGB color space). High-quality conversion between the existing standard color spaces (e.g. sRGB) and this non-standard color space becomes even more problematic as you increase the number of different conversions to be made. Yes, you will likely see "more", but the question is, "Is the 'more' accurate?" Answer: probably not.

As an aside, I'd like to note that it should be interesting to see how Rec. 709 movies play on Rec. 2020 TV's.

Anyway, back to the topic at hand. IMO, it is another way for Sony to sell their products, like using more megapixels sells cameras, even though additional megapixels don't help with many of the major problems in existing camera systems, such as dynamic range and noise.

Personally, I'll take better dynamic range and reduced noise over megapixels any day (within reason, given no extensive cropping/zooming, and only needing, say, up to 12"x18" prints). The same with this "new" gamut technology. I'd rather have spot-on sRGB (and Adobe RGB if I'm looking to use a wide-gamut printer) in all screens/cameras sold than a wider gamut that is likely inaccurate. It is just more useful for everyday work.

Anyway, this is my $.02 worth and probably not worth what you are paying for it.

Scott