Author Topic: A7s first impressions  (Read 7160 times)
FranciscoDisilvestro
« Reply #60 on: June 20, 2014, 02:22:22 AM »

Please explain what you mean by that. I thought raw was the numerical representation of the pixel charge, and JPEG a log-scaled translation into 8 bits. What do you mean by encoding in raw?

Raw contains the numerical values output by an analog-to-digital conversion. When a constant quantization step is used, you get a linear encoding of the values. This is the common and easiest approach. The issue with linear encoding is that it is very inefficient at high values and scarce at low values (e.g. only a few bits for the shadows and too many for the highlights).
If you change to a variable quantization step, smaller for low signals and larger for high values, then you use more bits in the shadows and still enough bits in the highlights. With this type of encoding there is no longer a 1:1 relation between bit depth and the dynamic range that can be represented.
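The trade-off described above can be sketched numerically. The toy encoder below is purely illustrative (the square-root spacing and 11-bit output depth are my assumptions, not any manufacturer's actual curve): output codes are spaced on a square-root curve, so one code spans about 1 DN in deep shadows but more than 10 DN near saturation.

```python
import math

FULL_SCALE = 2 ** 14          # 14-bit linear sensor output

def sqrt_encode(signal, out_levels=2 ** 11):
    """Toy variable-step encoding: code values spaced on a square-root
    curve, so steps are small in the shadows, large in the highlights."""
    return round(math.sqrt(signal / FULL_SCALE) * (out_levels - 1))

def sqrt_step(signal, out_levels=2 ** 11):
    """Width, in linear DN, of the code bin that contains `signal`."""
    code = sqrt_encode(signal, out_levels)
    lo = signal
    while lo > 0 and sqrt_encode(lo - 1, out_levels) == code:
        lo -= 1
    hi = signal
    while hi < FULL_SCALE and sqrt_encode(hi + 1, out_levels) == code:
        hi += 1
    return hi - lo + 1

# Shadows get roughly one code per DN; highlights share one code
# across many DN, which is where the bit savings come from.
print("step widths (DN):", sqrt_step(16), sqrt_step(8000), sqrt_step(16000))
```

Because shot noise grows with the square root of the signal, those coarse highlight steps stay below the noise, which is why such curves can cut bit depth without visibly losing highlight information.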

ErikKaffehr
« Reply #61 on: June 20, 2014, 03:21:23 AM »

Thanks for the explanation, much better than what I had managed.

Best regards
Erik

Raw contains the numerical values output by an analog-to-digital conversion. When a constant quantization step is used, you get a linear encoding of the values. This is the common and easiest approach. The issue with linear encoding is that it is very inefficient at high values and scarce at low values (e.g. only a few bits for the shadows and too many for the highlights).
If you change to a variable quantization step, smaller for low signals and larger for high values, then you use more bits in the shadows and still enough bits in the highlights. With this type of encoding there is no longer a 1:1 relation between bit depth and the dynamic range that can be represented.

BartvanderWolf
« Reply #62 on: June 20, 2014, 05:05:06 AM »

There is one point about the DxO dynamic range calculation, as compared to engineering DR, that has always confused me. A common method of determining the read noise of a sensor is to put the lens cap on and take a dark frame. Read noise and shot noise combine in quadrature, and with the lens cap on there is no shot noise, so the read noise is the output of the sensor (provided that the read noise is not clipped, as with Nikon cameras). An SNR of one indicates that some signal is present. Does the signal refer to the input or the output of the sensor?

Hi Bill,

That's how I do it if it needs to be done accurately. I even exclude the possibility of lens electronics interfering by using the body cap, and I cover the viewfinder to prevent light leaking into the mirror box. I also use the shortest possible exposure time (1/8000th sec.) to reduce temporal/thermal noise build-up. To also eliminate pattern noise, one should subtract two such black-frame exposures and correct the StDev by dividing by Sqrt(2).
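The subtract-and-divide step can be illustrated with synthetic frames (the offset and noise magnitudes below are invented for the demo, not measured from any camera): the fixed pattern is common to both frames and cancels in the difference, while each frame contributes its own temporal read noise, hence the division by Sqrt(2).

```python
import numpy as np

rng = np.random.default_rng(42)
shape = (512, 512)
read_noise = 3.0                       # "true" temporal read noise, in DN

# Fixed pattern noise is identical in both frames; temporal noise is not.
pattern = rng.normal(0.0, 5.0, shape)
frame_a = 100 + pattern + rng.normal(0.0, read_noise, shape)
frame_b = 100 + pattern + rng.normal(0.0, read_noise, shape)

# A single frame mixes pattern and temporal noise...
single_std = frame_a.std()
# ...but the difference cancels the pattern; the two independent temporal
# contributions add in quadrature, so divide the StDev by sqrt(2).
diff_std = (frame_a - frame_b).std() / np.sqrt(2)

print(round(single_std, 2), round(diff_std, 2))
```

The single-frame figure overstates the temporal read noise because the pattern noise is still in it; the pair-subtracted figure recovers the value used to generate the frames.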

Since we cannot know if and how analog gain is employed, we have to determine empirically/statistically the input signal in electrons (e-) from the sensor output in ADU or DN, corrected for the gain. However, this only works if the Raw data (ADUs) is not clipped to (presumably) zero by eliminating any Blackpoint Offset. Canon cameras, for example, allow one to analyze the full black-frame noise, whereas Nikon cameras typically clip the black-frame noise. In addition, some manufacturers (e.g. Sony) have started adding processing that records the ADUs with a non-linear response curve in some of their camera models.

This all complicates testing (it's not practical to switch testing methods between models), and is part of the reason that DxO uses a somewhat different method, at least that is what I have read between the lines. Apparently, they use the least exposed parts of the SNR curves to extrapolate and determine the black point at dB = 0 (SNR = 1, because 20*Log(1) = 0 dB) or sometimes, if that is not a typo, dB = 1 (SNR = 1.122, because 20*Log(1.122) = 1 dB). They also calibrate for actual exposure by measuring the actual influx of light, and for ISO sensitivity. They seem to use a lens on the bodies, but they only shoot one exposure level at a time (by blocking the other filters in their setup), so glare should not play a role.
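The dB figures above follow from the amplitude convention for decibels; a two-line check of the 20*Log values quoted:

```python
import math

def snr_to_db(snr):
    """Convert a linear SNR ratio to decibels (amplitude convention)."""
    return 20 * math.log10(snr)

print(snr_to_db(1.0))                 # SNR = 1 sits at 0 dB
print(round(snr_to_db(1.122), 2))     # SNR = 1.122 sits at about 1 dB
```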

My findings with the laborious method of subtracting pairs of images and deriving the gain from the slope of the response curve to calculate electron input correspond to within approx. 0.1 EV with the DxO findings on Dynamic Range.

Cheers,
Bart
FranciscoDisilvestro
« Reply #63 on: June 20, 2014, 05:50:00 AM »

In Nikon cameras the masked pixels are not clipped; you can use those for your calculations.

BartvanderWolf
« Reply #64 on: June 20, 2014, 06:49:42 AM »

In Nikon cameras the masked pixels are not clipped; you can use those for your calculations.

Correct, I forgot to add that.

There is, however, a benefit to sampling actual image sensels. The sensels used for image capture may be affected by surrounding electronics (e.g. amplifier glow) and by pattern noise, so by only using a narrow strip of masked sensels on one side of the sensor array we may get a somewhat biased (positive or negative) picture. It is also important to determine whether the row/column directly adjacent to the unmasked sensels is affected by cross-talk or light leakage; it's sometimes better not to include that one-pixel row/column of the multi-row/column masked area.

Cheers,
Bart
FranciscoDisilvestro
« Reply #65 on: June 20, 2014, 06:41:51 PM »

I also use the shortest possible exposure time (1/8000th sec.) to reduce temporal/thermal noise build-up.


Bart,

I'm not sure it really makes a difference to use a shutter speed above x-sync (e.g. 1/250) unless the sensor is "turned off" as soon as the second curtain closes (I don't really know how long the sensor is active in cameras with a physical shutter).

Jim Kasson
« Reply #66 on: June 20, 2014, 06:56:59 PM »

I'm not sure it really makes a difference to use a shutter speed above x-sync (e.g. 1/250) unless the sensor is "turned off" as soon as the second curtain closes (I don't really know how long the sensor is active in cameras with a physical shutter).

Good point. I've always done it the way Bart does, but never thought much about it. I guess it can't hurt anything.

Jim

BartvanderWolf
« Reply #67 on: June 20, 2014, 07:10:06 PM »

I'm not sure it really makes a difference to use a shutter speed above x-sync (e.g. 1/250) unless the sensor is "turned off" as soon as the second curtain closes (I don't really know how long the sensor is active in cameras with a physical shutter).

Hi Frank,

I have no timing details, but it makes sense to off-load the image data as soon as the second shutter curtain closes, to allow resetting the sensor for the next frame during continuous shooting. I do know that on my 1Ds3 I can only change the timing between pressing the shutter release/mirror-up and the opening of the first shutter curtain (to allow resetting the sensor and closing the aperture), but not the time afterwards, so I assume it is ASAP (why wait?).

Cheers,
Bart
Fine_Art
« Reply #68 on: June 20, 2014, 09:03:41 PM »

Raw contains the numerical values output by an analog-to-digital conversion. When a constant quantization step is used, you get a linear encoding of the values. This is the common and easiest approach. The issue with linear encoding is that it is very inefficient at high values and scarce at low values (e.g. only a few bits for the shadows and too many for the highlights).
If you change to a variable quantization step, smaller for low signals and larger for high values, then you use more bits in the shadows and still enough bits in the highlights. With this type of encoding there is no longer a 1:1 relation between bit depth and the dynamic range that can be represented.

Very interesting, thanks. Is there a description of the curve they use somewhere?
Vladimirovich
« Reply #69 on: June 21, 2014, 10:57:41 AM »

Very interesting, thanks. Is there a description of the curve they use somewhere?

If the question is about Sony, then for example:

http://blog.lexa.ru/2014/02/02/rawdigger_103_raskapyvaya_sony.html
http://blog.lexa.ru/2014/01/25/o_bitnosti_u_kamer_sony.html
http://blog.lexa.ru/2014/01/22/pro_sony_a900_i_compressed_raw.html
http://blog.lexa.ru/2012/12/29/o_sortakh_raw_u_sony.html
http://blog.lexa.ru/2011/10/28/o_lineinosti_raw_i_ettr.html
bcooter
« Reply #70 on: June 21, 2014, 02:56:49 PM »


15 stops or 12, we've worked around this in the real world forever, and a flat, skinny 15-stop clip isn't always useful.

Back to the real world of usability.

This week we were picking up some drives and filters for a production, and though the A7s isn't on the shelf yet, I did some focusing comparisons with the A7r next to my GH3s and EM-1, because the A7s and A7r use the same focus system, though I believe the S is supposed to be slightly improved.

Bottom line: the A7r autofocus is minimally OK in video and stills, nothing spectacular, and for stills not equal to almost any OVF camera, not even the Olympus and Panasonic. The Olympus EM-1 is actually pretty good at video tracking, and the GH3 is a step above them all in video autofocus, especially with touch-screen focus.

Now, this is the 4th time I've briefly tried a Sony A-series camera, because in essence it's perfect for some of our productions: the A7r for stills, the A7s for video. Especially with its high sensitivity, it would be a huge advantage in size and traveling.

Anyway, given their price these small cameras are amazing, regardless of the things I'd like to see.

I think we're a few generations or more off before we see it.

I do think that since the A7s allows real nighttime photography without 2.5k HMIs, and in SD goes to 120 fps, it has a real place.

The thing that always throws me is that if I pick up an A7-anything it feels OK, kind of a rushed-to-market camera, but when you use it directly next to the Olympus and the Panasonic, the Sony feels like a lightweight mockup.

Maybe this is the way electronic cameras should be built. Why make the camera built like a 10-year device when they're gonna change it in 12 months?

Still, Sony is on to something if they just stick with this FE format.

IMO

BC





Telecaster
« Reply #71 on: June 21, 2014, 10:53:27 PM »

The thing that always throws me is that if I pick up an A7-anything it feels OK, kind of a rushed-to-market camera, but when you use it directly next to the Olympus and the Panasonic, the Sony feels like a lightweight mockup.

Maybe this is the way electronic cameras should be built. Why make the camera built like a 10-year device when they're gonna change it in 12 months?

Still, Sony is on to something if they just stick with this FE format.

Yep. My prospective A7r buyer awhile back wussed out on the deal, so I decided to keep it. The 35 & 55mm lenses are really nice. Once Sony/Zeiss comes out with its 28 & 85mm options (they're on the "roadmap") I'll consider it a complete system...for my wants/needs, that is. The camera does have a prototype-ish feel to it. As does the Blackmagic PCC, which kinda is a prototype, I guess. :-) Might as well admit that most cameras are just commodity items now...shoot 'em & boot 'em.

-Dave-
barryfitzgerald
« Reply #72 on: June 22, 2014, 07:37:30 AM »

Yep. My prospective A7r buyer awhile back wussed out on the deal, so I decided to keep it. The 35 & 55mm lenses are really nice. Once Sony/Zeiss comes out with its 28 & 85mm options (they're on the "roadmap") I'll consider it a complete system...for my wants/needs, that is. The camera does have a prototype-ish feel to it. As does the Blackmagic PCC, which kinda is a prototype, I guess. :-) Might as well admit that most cameras are just commodity items now...shoot 'em & boot 'em.

-Dave-

I've still got my 2005-era DSLRs, a meagre 6 MP, but hey, they work and take photos (and are worth very little on the s/h market, thus not worth selling). Things have moved on tech-wise, but back to basics: it's about taking pictures, and they can still do that.
I am glad the debate isn't purely about megapixels (my little tests reveal very little real-world difference from 16 to 24 MP); 12 MP might well be sufficient for many people. On the other hand, the discussion about DR would be better served with practical examples (i.e. photos you might take). Tech talk is fine; real stuff hits harder.

bjanes
« Reply #73 on: June 22, 2014, 02:53:51 PM »

Correct, I forgot to add that.

There is, however, a benefit to sampling actual image sensels. The sensels used for image capture may be affected by surrounding electronics (e.g. amplifier glow) and by pattern noise, so by only using a narrow strip of masked sensels on one side of the sensor array we may get a somewhat biased (positive or negative) picture. It is also important to determine whether the row/column directly adjacent to the unmasked sensels is affected by cross-talk or light leakage; it's sometimes better not to include that one-pixel row/column of the multi-row/column masked area.

Cheers,
Bart


Bart,

Your assertion about the use of the masked pixels is well taken, and some tests that I have done clarify that point. I had previously performed an analysis of my D800e sensor using your method of paired images, using ImagesPlus as outlined by Roger Clark. Since the D800e clips the read noise in a dark frame, one must use an alternative method to determine the read noise. One solution is to plot the variance against the raw data number, using low exposures above the point where the read noise is clipped. The slope is the reciprocal of the gain, and the intercept is equal to the read noise squared. This is the method that Peter Facey used in his analysis of the D3.
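The variance-versus-mean fit can be simulated to show why the slope and intercept recover the gain and read noise. The exposure levels and sample counts below are arbitrary choices of mine; the "true" gain and read noise are seeded with D800e-like values so the fit can be checked against them.

```python
import numpy as np

rng = np.random.default_rng(7)
gain = 3.24            # "true" gain in e- per DN (assumed for the demo)
read_noise = 1.07      # "true" read noise in DN (assumed for the demo)

# Measure mean and variance (both in DN) at several low exposure levels.
means, variances = [], []
for electrons in (50, 100, 200, 400, 800):
    # Shot noise is Poisson in electrons; convert to DN, add read noise.
    frame = rng.poisson(electrons, 200_000) / gain \
            + rng.normal(0.0, read_noise, 200_000)
    means.append(frame.mean())
    variances.append(frame.var())

# variance = mean/gain + read_noise**2, so a straight-line fit recovers
# the gain from the slope and the read noise from the intercept.
slope, intercept = np.polyfit(means, variances, 1)
print(round(1 / slope, 2), round(np.sqrt(intercept), 2))
```

The reciprocal of the fitted slope lands on the gain, and the square root of the intercept on the read noise, which is exactly the procedure described above.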

The results of such a plot are shown below. The read noise is 1.07 ADUs and the gain is 3.24 e- per 14-bit data number.



Here are the masked pixels as viewed in RawDigger with the sensor near saturation.



And the masked pixels for determination of read noise are selected in the image below. The dark-frame pixels have a value of approximately 1.5 DN, which is higher than the value derived from the plot.



The values of these masked pixels are affected by the saturation of the sensor, as shown in the plot below. The x-axis is the pixel value of the unmasked pixels.



Here are the RawDigger values for the masked pixels in a lens-cap exposure. The value is similar to that obtained with the intercept method.



The unanswered question is whether the read noise actually varies with sensor saturation, or whether the larger values of the masked pixels near sensor saturation are due to cross-talk or other factors. There does not appear to be any blooming adjacent to the edge of the mask. What do you think?


Bill
jfirneno
« Reply #74 on: June 22, 2014, 03:08:37 PM »

Michael:
Very interesting article. I know this thread is a few days old, but if you are still following it I have a question.

I'm an A7R (and A850, and several other Sony cameras) user and am extremely interested in adding the A7S for indoor no-flash shooting. Could you comment on whether the A7S seemed more capable of low-light autofocus than the other A7 cameras?

Thanks in advance,
John
bjanes
« Reply #75 on: June 22, 2014, 03:13:47 PM »

Bart,

Sony uses Engineering Dynamic Range. DxO uses 1:1 S/N. So, not directly comparable.

The important point, to my view, is not the absolute measured number under either schema, but rather the comparative numbers.

DxO now has S/N analysis for some 270 cameras of all brands. These are self-consistent, done by one lab presumably using the same methodology.

On that basis the A7s simply is not what many people hoped it would be. That's really what the whole debate is about.

Michael


I think Michael is right here. The engineering definition of DR sets the noise floor at the read noise without any signal (e.g. with the lens cap on the camera). A noise floor at an S:N of one does have a signal.

I performed some calculations for the Nikon D800e using a read noise of 1.04 ADUs, adding various amounts of signal. The shot noise is the square root of the signal, and the total noise can be calculated by adding read noise and shot noise in quadrature. DR is calculated by dividing the sensor saturation value of 15875 by the calculated noise.
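Bill's arithmetic can be replayed directly from the numbers he quotes. Following his wording, shot noise is taken as the square root of the signal, in the same units as the read noise (a unit assumption on my part); DR is then expressed in stops.

```python
import math

SATURATION = 15875     # D800e sensor saturation (DN), from the post
READ_NOISE = 1.04      # read noise in ADUs, from the post

def dr_stops(signal=0.0):
    """DR in stops with `signal` units of signal at the noise floor.
    Shot noise = sqrt(signal); it adds to read noise in quadrature."""
    total_noise = math.sqrt(READ_NOISE ** 2 + signal)
    return math.log2(SATURATION / total_noise)

# Engineering DR: lens cap on, no signal, noise floor = read noise alone.
print(round(dr_stops(0.0), 2))   # about 13.9 stops
# Any signal at the floor adds shot noise and lowers the computed DR.
print(round(dr_stops(25.0), 2))
```

This makes the point of the debate concrete: the same sensor yields a noticeably smaller DR figure as soon as the noise floor is defined with signal (and hence shot noise) present.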



One choice for the noise floor in the DR calculation is the Rose criterion. The Rose criterion (named after Albert Rose) states that an SNR of at least 5 is needed to distinguish image features with 100% certainty; an SNR of less than 5 means less certainty in identifying image details.

Figure 13 of Emil Martinec's post demonstrates an image at various SNRs. In that image, an SNR of 4 does show the text in fair detail.

Bill
Telecaster
« Reply #76 on: June 22, 2014, 03:26:44 PM »

I've still got my 2005-era DSLRs, a meagre 6 MP, but hey, they work and take photos (and are worth very little on the s/h market, thus not worth selling). Things have moved on tech-wise, but back to basics: it's about taking pictures, and they can still do that.

Yup. I still have my Epson R-D1, also a 6 MP camera. It shoots above its megapixel weight. I no longer use it seriously, though, due to its high power draw and its batteries' declining ability to hold a charge. It's not quite in Leica territory build-wise, but it's not that far off. Sony's A7-series cameras are extremely capable but are, if anything, lacking in heft and apparent solidity. IMO anyway...and I tend to prefer lighter-weight cameras. I have to wonder how they'll hold up to long-term pro-level use. In fact I don't think they're intended for that. Thus my "shoot 'em & boot 'em" comment...in a pro context anyway.

-Dave-
michael
« Reply #77 on: June 22, 2014, 03:46:36 PM »

Yes, by between one and 2.5 stops, depending on various factors. DxO shows the A7s to be among the lowest-noise cameras on the market at high ISO.

But also take into account lenses. If you are using slower lenses right now, it might make sense to simply get a faster lens. All depends on what you're shooting, how big you print, your budget, etc.

Michael
jfirneno
« Reply #78 on: June 22, 2014, 05:35:10 PM »

Thanks, Michael. I'm renting one for a party at the end of next month. I've been using the 35mm and 55mm FE lenses on the A7R, plus some A-mount lenses like the 135/1.8 adapted via the LA-EA3. ISO 6400 is all right, but if the A7S can give me 6400 that looks like 3200 on the A7R, then I'll be all set.

Regards,
John