Author Topic: Nick Devlin's article  (Read 10188 times)
dreed
« Reply #60 on: August 30, 2011, 09:34:33 AM »

Quote
That's because it's hard for a computer to see which lines are the relevant ones, and which are not.

Our brains perform a lot of near-magical shortcuts to build effective illusions about reality.

A computer, however, does not have the luxury of seeing things as they should be.

Your first example, telling the computer to convert a specific trapezoid to a rectangle, is easy enough in itself. You find that in the perspective correction tool in Photoshop.

More complex shapes are far from trivial.

I'm not saying that we won't get any of the features you yearn for sometime in the near or distant future, just that these things are not easy to do.

The perspective correction tool is what I was referring to in "sliders".
It is a clumsy method to correct distortion.
As is the "distortion slider".

I'm aware that complex shapes are trivial, but the behaviour of light through a camera lens is not random.

I suppose what I'm saying is rather than try and play with a bunch of knobs to get the picture looking right, I'd rather tell LR what it should look like and have LR work out what it needs to do in order for the picture to look that way.

Including colour.
jani
« Reply #61 on: August 31, 2011, 02:39:55 AM »

Quote
The perspective correction tool is what I was referring to in "sliders".
It is a clumsy method to correct distortion.
As is the "distortion slider".
Yes, they are fairly clumsy, I'm not disagreeing with that.

Quote
I'm aware that complex shapes are trivial,
Surely you mean "non-trivial"?

Quote
but the behaviour of light through a camera lens is not random.
It's not quite random (quantum physics temporarily ignored), but if you pick an arbitrary camera lens, the behaviour of light appears arbitrary.

Even if e.g. Lightroom has a profile for a Nikon AF-S 50mm f/1.8G on a Nikon D3s, that does not mean that your sample of the same lens is identical. Optically speaking, the light will take a different path through your lens compared with the sample that was profiled.

The differences may be minuscule and invisible even to pixel peepers, or they may be easily identifiable.

Even so, correcting for distortions is not the same as correcting perspectives to match your artistic vision of what's "looking right". Only you can make that decision.
Quote
I suppose what I'm saying is rather than try and play with a bunch of knobs to get the picture looking right, I'd rather tell LR what it should look like and have LR work out what it needs to do in order for the picture to look that way.

Including colour.
Unfortunately, at this stage of technological development, that means that you have to tell LR by using "a bunch of knobs", or have someone else do it for you.

Jan
dreed
« Reply #62 on: September 01, 2011, 08:21:25 AM »

Quote
Even so, correcting for distortions is not the same as correcting perspectives to match your artistic vision of what's "looking right". Only you can make that decision. Unfortunately, at this stage of technological development, that means that you have to tell LR by using "a bunch of knobs", or have someone else do it for you.

Right, but this thread is not about what we can do now, but "what if's".

What can the camera do to make taking photographs better?

And I extended that to the post-processing software with the talk about LR.

At the very least, I want to tell LR that a set of edges should be parallel or that a set of four edges should make a rectangle/square and for it to "work it out."
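For the rectangle case, the maths already exists: four user-picked corners and the rectangle they should become determine a unique perspective transform (a homography). A minimal sketch of the "work it out" step in Python/NumPy, using the standard direct linear transform; the function names are mine, not anything Lightroom actually exposes:

```python
import numpy as np

def homography(src, dst):
    """Direct Linear Transform: 3x3 homography mapping each of four
    src (x, y) points onto the matching dst point."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography is the null space of A: last right-singular vector.
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    return Vt[-1].reshape(3, 3)

def warp_point(H, pt):
    """Apply the homography to one point (homogeneous coordinates)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

With the matrix in hand, warping every pixel through it is exactly what Photoshop's perspective correction already does behind its sliders; the difference is only where the numbers come from.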


Something that I wonder about from time to time is sensor-based auto-ISO.

What's that, you might ask?

It's the ability of the sensor to have some parts operate at ISO 100 and other parts at ISO 200, thereby pulling all of the "darker tones" up. I don't know if that's worthwhile, as the introduced noise may mean that you may as well do it in post.
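A rough sketch of what merging such a split-ISO capture could look like, assuming (hypothetically) that even sensor rows were read at base ISO and odd rows one stop higher; something close to this idea later appeared as the Magic Lantern "dual ISO" hack for Canon bodies:

```python
import numpy as np

def merge_dual_iso(raw, hi_gain=2.0):
    """Toy merge for a sensor whose even rows were read at base ISO
    and whose odd rows were read at base * hi_gain. Divide the
    amplified rows back to a common linear scale; the shadows in
    those rows keep their better signal-to-noise ratio."""
    out = raw.astype(float).copy()
    out[1::2] /= hi_gain  # odd rows back to base-ISO scale
    # Hypothetical next steps, omitted here: handle highlights that
    # clipped in the amplified rows, interpolate each row set to
    # full height, and blend the two estimates.
    return out
```

On a uniform scene the two row sets land on the same value after rescaling, which is the sanity check a real implementation would start from.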

Something else that has popped up in my mind of late: if a camera can do face recognition, why can't it do object recognition? So if you're trying to take a picture of your cat chasing a laser pointer, the camera could keep track of the object (the cat) that you initially focused on, as long as it stays in frame, and keep the lens focused on it. It would also apply to birding. This may be substantially harder than recognising and tracking faces, because faces are made up of a fairly typical set of features - and colours too.
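Tracking of this kind doesn't strictly need recognition to get started: even a brute-force template match can follow a distinctive subject from frame to frame. A toy sketch in Python/NumPy; a real camera would search only a small window around the last known position and refresh the template as the subject changes:

```python
import numpy as np

def track(frame, template):
    """Find the template's best match in a grayscale frame by
    normalised cross-correlation. Returns the (row, col) of the
    best-matching window's top-left corner. Brute force over every
    position, so this is only an illustration of the principle."""
    th, tw = template.shape
    t = template - template.mean()
    t_norm = np.linalg.norm(t)
    best, pos = -np.inf, (0, 0)
    for r in range(frame.shape[0] - th + 1):
        for c in range(frame.shape[1] - tw + 1):
            w = frame[r:r + th, c:c + tw]
            w = w - w.mean()
            score = (w * t).sum() / (np.linalg.norm(w) * t_norm + 1e-9)
            if score > best:
                best, pos = score, (r, c)
    return pos
```

By Cauchy-Schwarz the score peaks at 1 only where the window is proportional to the template, which is why the exact location wins over partial overlaps.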
dreed
« Reply #63 on: September 01, 2011, 10:50:31 PM »

What use is image stabilisation to landscape shooters who are always tripod-mounted?

Wouldn't it be more useful to have "wind stabilisation" so that the lens compensated for camera movement due to wind and not hands?

And so that the lens doesn't need to guess about how much movement is required, why not attach a USB device to the camera that monitors wind speed and direction relative to the camera, feeds that information into the camera so that it can then direct the lens to apply correction?

Through experimentation today, it would seem that IS can be of use like this (although it is somewhat limited), but given that wind can actually be measured, why not?
duane_bolland
« Reply #64 on: September 02, 2011, 12:54:20 AM »

I have no interest in any of these five technologies. 
stamper
« Reply #65 on: September 02, 2011, 03:15:30 AM »

Quote

It's the ability of the sensor to have some parts operate at ISO 100 and other parts at ISO 200, thereby pulling all of the "darker tones" up. I don't know if that's worthwhile as the introduced noise may mean that you may as well do it post.

Unquote

In a good-quality full-frame camera this shouldn't be an issue. The difference of one stop should not be noticeable if you consider that noise isn't a problem at ISO 800 in, say, a D700. This issue of noisy shadows at base ISO seems to be a hot one in various forums. It is overblown by posters with a scientific bent rather than a practical one.

stamper
« Reply #66 on: September 02, 2011, 03:16:10 AM »

Quote
I have no interest in any of these five technologies.

And?

jani
« Reply #67 on: September 05, 2011, 05:02:43 AM »

Quote
Right, but this thread is not about what we can do now, but "what if's".
What I'm trying to say is that that particular "what if" may be seriously out of reach for many, many years to come. Mind-reading is really, really hard, and we've only just scratched the surface of it.

Look at how far your average powerful PC or Mac has come on this road, and then consider that processing power in a camera is far, far less because of power requirements.

Wishful thinking is nice, though, and I want a camera with dynamically adapting, floating lenses, so that I don't need to carry a huge backpack, and a fourth-generation neural interface.

Jan
dreed
« Reply #68 on: September 05, 2011, 06:34:05 PM »

Some ways in which LR could "enable" better digital photography...

Introduce the focus equivalent of HDR - call it HFR. The idea being that you can take two (for example) photographs of subjects 50 meters apart, shot at f/2.8 (say), and have LR take the in-focus parts of both photographs and merge them into one that has the in-focus bits of both, plus background from one or the other. This would allow us to shoot at wide apertures (small f-numbers) to reduce the impact of diffraction on image quality without having to sacrifice depth of field.

If that could be made to work, and work in-camera (like HDR is now), then you could pick two boundary points that you want in focus on the LCD screen. The camera could calculate the distance, take into account the lens and f-stop being used, and then step the focus from one point to the other, taking a photograph at each distance interval until a continuum of photographs spans the distance between the two selected points in the field of view.
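The merging half of this "HFR" idea is what's now usually called focus stacking, and its core is small: score each pixel for local sharpness and keep the sharper source. A toy sketch for two frames in Python/NumPy; real stackers also align the frames first and smooth the decision map to avoid seams:

```python
import numpy as np

def laplacian(img):
    """Absolute Laplacian response as a crude per-pixel sharpness
    measure (in-focus regions have more high-frequency detail)."""
    p = np.pad(img.astype(float), 1, mode='edge')
    return np.abs(p[:-2, 1:-1] + p[2:, 1:-1] + p[1:-1, :-2]
                  + p[1:-1, 2:] - 4.0 * p[1:-1, 1:-1])

def focus_merge(a, b):
    """Keep, per pixel, whichever of the two frames is locally
    sharper. Ties (flat regions) fall back to frame a."""
    mask = laplacian(a) >= laplacian(b)
    return np.where(mask, a, b)
```

Extending this to a whole focus-stepped series is just a running merge over each new frame.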

Introduce a new method to modify colour. Rather than move around a bunch of sliders that change hue, saturation, exposure, etc., give the user a palette of colours into which the currently selected pixel/region can be transformed. Maybe allow the user to select which transformations can or cannot be used. This would let me pick the colour that I want the leaves or sky to be, rather than trying to work out which particular sliders will give me the look that I want. Maybe this requires multiple colour choices to be made so that a proper transformation equation can be built? I don't know if this would work, but it seems like an interesting idea to play with...
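The "pick the colour you want" idea can be phrased as fitting a colour transform from example pairs, which also bears out the guess above that several colour choices may be needed before the equation is pinned down. A sketch using a least-squares affine fit in Python/NumPy (function names are mine, not any LR API):

```python
import numpy as np

def fit_color_transform(picked, wanted):
    """Least-squares affine map (3x3 matrix plus offset) sending
    each picked RGB to its wanted RGB. Needs at least 4 picks;
    with fewer, the system is underdetermined."""
    P = np.hstack([np.asarray(picked, dtype=float),
                   np.ones((len(picked), 1))])  # N x 4, homogeneous
    W = np.asarray(wanted, dtype=float)         # N x 3
    M, *_ = np.linalg.lstsq(P, W, rcond=None)   # 4 x 3 transform
    return M

def apply_transform(M, rgb):
    """Push one colour through the fitted transform."""
    return np.append(np.asarray(rgb, dtype=float), 1.0) @ M
```

Once fitted, the same matrix would be applied to every pixel in the selected region, so the user's few picks drive a global, consistent shift rather than a per-pixel paint job.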
dreed
« Reply #69 on: September 14, 2011, 08:53:53 PM »

Whilst shooting over the weekend, it occurred to me that ETTR is rather tricky in situations where the light is constantly changing.

At both sunrise and sunset, it would seem that exposure times are better thought of as calculus equations, because the light level at the start of a (say) 30-second capture can be different than at the end.

But rather than try and build into cameras a method to simulate pixel exposure, maybe the pixels themselves should drive the exposure.

That is, when a pixel on a sensor reaches a given "fullness", let's say 95%, the sensor self-triggers the end of the exposure.

So rather than the user telling the camera how long to keep the shutter open for, the camera tells the user how long the shutter was open for in order for the brightest pixel to fill to a given percentage.
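Whatever the hardware feasibility, the behaviour itself is easy to pin down in software: integrate photon flux per pixel and stop when the brightest pixel hits the chosen fraction of full-well capacity. A toy simulation, with all numbers hypothetical:

```python
import numpy as np

def self_timed_exposure(flux, full_well, stop_frac=0.95, dt=1e-3):
    """Simulate a sensor that ends the exposure the moment its
    brightest pixel reaches stop_frac of full-well capacity.
    flux: photons per second per pixel (array).
    Returns the accumulated image and the exposure time the
    sensor 'chose' - i.e. what the camera would report back."""
    charge = np.zeros_like(flux, dtype=float)
    t = 0.0
    while charge.max() < stop_frac * full_well:
        charge += flux * dt  # photons collected this time step
        t += dt
    return charge, t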

Being ignorant of the physics involved at a microscopic level, I have no idea if the above is at all feasible/possible. But as a photographer who's keen on the ETTR idea, it sounds nice.
ErikKaffehr
« Reply #70 on: September 17, 2011, 08:27:52 AM »

Hi,

Not very feasible! There are some 20-80 megapixels to be scanned, several thousand times a second, on battery power. It would be feasible to design circuitry for the feature, but wouldn't you perhaps prefer to use the silicon area to collect photons?

Don't want to be negative, just trying to put things in another perspective...
Best regards
Erik

Quote
Whilst shooting over the weekend, it occurred to me that ETTR is rather tricky in situations where the light is constantly changing.

At both sunrise and sunset, it would seem that exposure times are better thought of as calculus equations, because the light level at the start of a (say) 30-second capture can be different than at the end.

But rather than try and build into cameras a method to simulate pixel exposure, maybe the pixels themselves should drive the exposure.

That is, when a pixel on a sensor reaches a given "fullness", let's say 95%, the sensor self-triggers the end of the exposure.

So rather than the user telling the camera how long to keep the shutter open for, the camera tells the user how long the shutter was open for in order for the brightest pixel to fill to a given percentage.

Being ignorant of the physics involved at a microscopic level, I have no idea if the above is at all feasible/possible. But as a photographer who's keen on the ETTR idea, it sounds nice.

dreed
« Reply #71 on: September 17, 2011, 11:42:33 AM »

Quote
Hi,

Not very feasible! There are some 20-80 megapixels to be scanned, several thousand times a second, on battery power. It would be feasible to design circuitry for the feature, but wouldn't you perhaps prefer to use the silicon area to collect photons?

Don't want to be negative, just try to put things in another perspective...

I wasn't thinking of scanning the pixels; rather, the pixel itself can trigger the sensor to end the exposure by sending a signal when its photon well reaches a certain level of "fullness".

To put it in more ordinary terms: many bathroom sinks have an "overflow hole" to prevent the sink from overflowing. If water running down that overflow hole could cause the sink to unblock and empty (and for all that to be automatic), then that's closer to what I'm thinking. Now you would just need to arrange 20 million sinks and be able to empty (and measure the amount of water in) each one of them when any one of them starts to drip water through the overflow hole. The analogy isn't perfect and is just meant to illustrate the idea.

The idea being that the sensor itself decides when to close the shutter, based on when any of its pixels reaches a certain threshold of photon capacity.

Undoubtedly this requires a completely different pixel and grid design than what is used today. But then what we use today is more or less designed to fit in with how we've used film rather than starting afresh...