Author Topic: Does a photo give spatial information (the nose job)?  (Read 6013 times)
AlfSollund
Full Member
***

Posts: 128

« on: January 21, 2012, 12:01:10 PM »

Here are questions based on a lengthy perspective discussion. Please feel free to ignore if you like:

1) Does a photo give any spatial information? That is, can we, from a normal 2-D photo, get any kind of information about distances and objects' spatial placement?
2) Does a wide-angle lens give different or more spatial information than a telephoto?

And here are answers:
1) No, none whatsoever. The information can be used to make guesses about spatial relationships, but nothing can be derived for certain. There is no way of knowing the relative spatial positions of objects in a photo, or even whether they are spatially separated at all.
2) No; since the answer to 1) is zero, the answer to 2) will also be zero. So by zooming you are not deriving any more spatial information, you are simply adding different information.

Here is an easy way to prove this. Look at any given photo. Based on the photo itself, there is no way of saying whether this is a photo of a photo, where all objects lie in the same plane, or a photo of real 3-D objects. I can zoom all I like while taking the photo, but the difference will add no clues as to whether I'm seeing a photo of a photo or not.

This is the reason that we, and rangefinder cameras, have two eyes. These give different perspectives and add the information needed to see spatially. Actually, much of what we do when creating a photo is trying to create the illusion of spatial information by having vanishing lines etc. in the photo.

-------
- If you're not telling a story with a photo you're only adding noise -
http://alfsollund.com/
bill t.
Sr. Member
****

Posts: 2693

« Reply #1 on: January 21, 2012, 01:45:32 PM »

I will ignore the fact that I can look at an image any number of generations removed from the original, and still ferret out the relative depth of objects within the scene.  Any ordinary human visual cortex, or even just your average biggest supercomputer in the world, can do that.  OK, it's very subjective, and the objects in the image have to be contained within the set of things recognizable to humans and/or supercomputers.  So all bets are off for abstractions.

But quantification can be had by meticulously analyzing texture and edge contrasts within the scene, thereby inferring relative amounts of defocus for various objects, which tells us something about depth.  We need to know nothing for sure about the image or its history; it's all relative, based on a self-benchmarking source.  Less focused objects are either closer or farther away than more focused ones, and simple trend analysis can be used to infer which direction.  With enough hardware and computing power, it's a piece of cake.  Hah!
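The self-benchmarking edge analysis described above can be sketched in a few lines of Python. This is only a toy illustration, with made-up 4x4 "image" patches and a hypothetical function name: it ranks patches by the variance of a discrete Laplacian, a common crude sharpness measure.

```python
def laplacian_variance(patch):
    """Variance of the 4-neighbour Laplacian over an NxN grayscale patch.
    Higher variance = stronger edge response = (loosely) better focus."""
    n = len(patch)
    values = []
    for y in range(1, n - 1):
        for x in range(1, n - 1):
            lap = (patch[y - 1][x] + patch[y + 1][x] +
                   patch[y][x - 1] + patch[y][x + 1] - 4 * patch[y][x])
            values.append(lap)
    mean = sum(values) / len(values)
    return sum((v - mean) ** 2 for v in values) / len(values)

# A sharp edge (abrupt 0 -> 255 transition) vs. a defocused one (gradual ramp):
sharp = [[0, 0, 255, 255]] * 4
blurred = [[0, 85, 170, 255]] * 4

assert laplacian_variance(sharp) > laplacian_variance(blurred)
```

Real depth-from-defocus systems do far more than this (deconvolution, trained priors), but ranking regions by relative sharpness is the starting point.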

Last but not least, there is a lot of proprietary software out there that can do this stuff on digitally scanned images, mostly developed by whiz kids to convert single-camera motion pictures into (admittedly awful) approximations of what should have been shot as two-camera 3D in the first place.  The algorithms benefit greatly from facial recognition, etc.  Computers do most of the work, with only a little help from organic, carbon-based units.
jerryrock
Sr. Member
****

Posts: 558

« Reply #2 on: January 21, 2012, 02:07:45 PM »

I think this camera does give spatial information.

https://www.lytro.com/science_inside

Gerald J Skrocki
skrockidesign.com
mouse
Full Member
***

Posts: 150

« Reply #3 on: January 21, 2012, 06:48:57 PM »

1) Does a photo give any spatial information? That is, can we, from a normal 2-D photo, get any kind of information about distances and objects' spatial placement?
2) Does a wide-angle lens give different or more spatial information than a telephoto?


Can you tell us exactly how you define "spatial information"?
Wayne Fox
Sr. Member
****

Posts: 2808

« Reply #4 on: January 21, 2012, 07:33:31 PM »

Well, it seems fairly accurate spatial information could be derived from an image if you know the actual dimensional details of the items in the photograph.

John Camp
Sr. Member
****

Posts: 1258

« Reply #5 on: January 21, 2012, 11:36:33 PM »

Yes. Check this:

http://en.wikipedia.org/wiki/Photogrammetry
AlfSollund
Full Member
***

Posts: 128

« Reply #6 on: January 22, 2012, 02:16:14 AM »

Can you tell us exactly how you define "spatial information"?

3-D information about the objects in the photo, such as the distance to the objects and their relative 3-D position with respect to each other and the photographer (even such trivial information as whether one object is in front of or behind another object, where the "object" might also be the photographer). Not included in my definition: that objects must be more than zero distance from the photographer.

Well, it seems fairly accurate spatial information could be derived from an image if you know the actual dimensional details of the items in the photograph.
So how will we do this if I take a photo of a photo? There is only one object in such a photo, and it is a (nearly) 2-D object, namely the photo I photograph. Secondly, there are methods to guess at actual sizes, but it is not possible to know them for sure.
« Last Edit: January 22, 2012, 02:24:40 AM by AlfSollund »

-------
- If you're not telling a story with a photo you're only adding noise -
http://alfsollund.com/
mouse
Full Member
***

Posts: 150

« Reply #7 on: January 22, 2012, 04:31:06 AM »

3-D information about the objects in the photo, such as the distance to the objects and their relative 3-D position with respect to each other and the photographer (even such trivial information as whether one object is in front of or behind another object,

Emphasis mine.  The easiest and most obvious contradiction to your argument is when one object is partially obscured by a closer object.  Surely this indicates the relative 3-D position.  Granted one must have additional information to determine exact distances between objects, but your original argument concerns relative distances; i.e. is one object closer to the camera than another? 

You did ask "Does a photograph give any spatial information"?
That's why I asked for your definition of spatial information.  Relative information is still information.  Bigger is information even if you can't say how much bigger.

And your example of a photograph of a photograph is a red herring.
« Last Edit: January 22, 2012, 04:36:00 AM by mouse »
BJL
Sr. Member
****

Posts: 5121

« Reply #8 on: January 22, 2012, 07:40:19 AM »

In practice even a one-eyed view is often enough to draw some fairly reliable conclusions about subject distances. The obvious case is knowing the relative sizes of various objects in the scene (the famous nose-to-ear ratio), but out-of-focus effects are common visual clues too, and I would think that measurements of blurring at edges could reveal something. To be pedantic, a given degree of OOF blurring is seen with objects at two different distances, one in front of and one behind the plane of exact focus, but combined with other information, like which objects block the view of which others, something can be learnt ...

... though not with the degree of certainty required to get someone to admit that they were wrong in a long forum discussion.
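The front-or-behind ambiguity mentioned above falls straight out of the thin-lens model: the blur-circle diameter scales with |subject distance - focus distance| divided by subject distance, so one blur value matches two depths. A rough Python sketch (function name and numbers are illustrative; plain thin-lens approximation, ignoring diffraction):

```python
import math

def blur_circle_mm(d_m, focus_m, f_mm, n_stop):
    """Circle-of-confusion diameter (mm) for a subject at d_m metres,
    with a lens of focal length f_mm at f/n_stop focused at focus_m."""
    f_m = f_mm / 1000.0
    aperture = f_m / n_stop                # aperture diameter (m)
    magnification = f_m / (focus_m - f_m)  # magnification at the focus plane
    return aperture * magnification * abs(d_m - focus_m) / d_m * 1000.0

# Focused at 2 m with a 50mm f/2 lens: a subject at 1.5 m (in front) and
# one at 3 m (behind) produce the same blur -- one image, two candidate depths.
front = blur_circle_mm(1.5, 2.0, 50, 2.0)
behind = blur_circle_mm(3.0, 2.0, 50, 2.0)
assert math.isclose(front, behind)  # both ~0.21 mm
```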
OldRoy
Sr. Member
****

Posts: 425

« Reply #9 on: January 22, 2012, 09:06:43 AM »

I know better than to attempt to participate in the kind of exchange that frequently occurs on LL; the adjoining focal length/perspective discussion is a case in point. However, nostalgia overcomes my reluctance.

Someone posted a link about photogrammetry. While by no stretch of the imagination am I an expert in this field, I was involved, many years ago (and in retrospect entirely prematurely), in a project to implement stereo photogrammetry using TV technology. This was at a time when the technology was predominantly analogue, with the exception of a few monochrome CCD cameras.

Some of my colleagues were senior academics with a deep understanding of the mathematics and optics involved (projective geometry). Our objective, roughly speaking, was to derive real-time (or nearly so) 3D measurements from a pair, or more, of TV cameras. This is analogous to the technique long employed in the creation of 3D contour maps, where pairs of aerial photographs, taken by a camera of known optical characteristics, are combined in a stereo viewer and light spots aimed at each image are converged. From the x/y data for the location of the reference point in each image, a z value may be calculated. Assuming known geometry of the cameras/images, the equivalent x/y/z values for the terrain can be derived.
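For two parallel, rectified cameras, the z calculation described above collapses to z = f·B / disparity, where f is the focal length in pixels and B the baseline between the camera centres. A minimal sketch with made-up calibration numbers:

```python
def depth_from_disparity(f_px, baseline_m, x_left_px, x_right_px):
    """Depth (m) of a point matched in two parallel, rectified camera images."""
    disparity = x_left_px - x_right_px
    if disparity <= 0:
        raise ValueError("point must be in front of both cameras")
    return f_px * baseline_m / disparity

# Hypothetical rig: 1000 px focal length, 0.5 m baseline. A feature at
# x = 520 px in the left image and x = 500 px in the right image:
print(depth_from_disparity(1000, 0.5, 520, 500))  # 25.0 (metres)
```

Converging light spots in a stereo viewer is exactly the act of finding that matched pair of x coordinates by hand; in an automated system the hard part is the matching, not the arithmetic.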

I put a lot of time and money into this project. It was pretty obvious that the same technique, if it could generate values quickly, could be used in a wide variety of applications. In manufacturing, say, engine blocks, coordinate measuring machines are used to "poke" at key locations on the target, and the deflections are compared to a reference set of measurements. Unfortunately this sort of method (which I believe is still used) is subject to "drift" and a requirement for recalibration. My own proposal was to use a raster-scanned laser to mark the targets. We also had a lot of interest in using this method for surveying the submerged portions of oil platforms. At one point a friend's film effects company that was involved had a visit from someone at NASA to take a look at a stereo TV viewing system we had attached to a manually operated robotic arm. Unfortunately, in the mid 1980s the limitations of the technology were a big, if not entirely insurmountable, obstacle.

I know that this technology has come a long way but I'm always surprised how long it has taken for similar techniques to come into widespread use. At the time I was involved some of my contacts were mainly interested in 3dTV for entertainment use. Of course now most of the hardware required is available off the shelf and the requisite computing power is no longer even a consideration...

Roy
AlfSollund
Full Member
***

Posts: 128

« Reply #10 on: January 22, 2012, 10:02:44 AM »

Emphasis mine.  The easiest and most obvious contradiction to your argument is when one object is partially obscured by a closer object.  Surely this indicates the relative 3-D position.  Granted one must have additional information to determine exact distances between objects, but your original argument concerns relative distances; i.e. is one object closer to the camera than another? 
That's the point. From one image only, you cannot see whether an object is obscured by another or closer to the camera. You can guess, and have an indication. It's quite easy to make objects merely appear to be obscured. You cannot even tell if the object is in front of the photographer. By "front" I mean the direction the lens and nose are pointing when taking the picture.

You did ask "Does a photograph give any spatial information"?
That's why I asked for your definition of spatial information.  Relative information is still information.  Bigger is information even if you can't say how much bigger.
Still no spatial information, only information that can be used for guessing. You cannot say whether one object is bigger than another from a photo. Even the appearance of a human cannot be trusted in 2-D alone. You can only say whether one object is larger than another in the 2-D representation.

-------
- If you're not telling a story with a photo you're only adding noise -
http://alfsollund.com/
Ellis Vener
Sr. Member
****

Posts: 1726

« Reply #11 on: January 22, 2012, 10:16:16 AM »

Spatial information gleaned from a photograph is one of the pillars of photo reconnaissance as well as military and corporate intelligence gathering. Light and shadow length are important.

Ellis Vener
http://www.ellisvener.com
Creating photographs for advertising, corporate and industrial clients since 1984.
BartvanderWolf
Sr. Member
****

Posts: 3440

« Reply #12 on: January 22, 2012, 11:07:02 AM »

Spatial information gleaned from a photograph is one of the pillars of photo reconnaissance as well as military and corporate intelligence gathering. Light and shadow length are important.

That's correct, as is the amount of (de-)focus:
http://spie.org/x648.html?product_id=736131 or
http://www.comp.nus.edu.sg/~zhuoshao/depthRecovery/index.html.

Cheers,
Bart
« Last Edit: January 22, 2012, 11:12:35 AM by BartvanderWolf »
BJL
Sr. Member
****

Posts: 5121

« Reply #13 on: January 22, 2012, 12:20:15 PM »

From one image only, you cannot see whether an object is obscured by another or closer to the camera. You can guess, and have an indication. ... Only information that can be used for guessing.
Almost everything that we claim to know is merely a "guess" by this standard of "not absolutely certain". Many philosophers would declare this to be the case for any claims of "a posteriori", empirical knowledge about the physical world. Skeptically speaking, empirical science is based entirely on "information that can be used for guessing". This is why uncomfortable scientific conclusions about the health effects of cigarette smoking or the evolutionary history of life on earth can be dismissed as mere guesses/conjectures/hypotheses if one so desires.

For example, even the spatial relationships that we think we detect with stereoscopic vision can in fact be illusions (as with glasses-free 3D TV presentation of a computer generated 3D movie), but most people would not then conclude that photogrammetry is pure guesswork.
mouse
Full Member
***

Posts: 150

« Reply #14 on: January 22, 2012, 05:17:39 PM »

From one image only, you cannot see whether an object is obscured by another or closer to the camera. .... Even the appearance of a human cannot be trusted in 2-D alone. You can only say whether one object is larger than another in the 2-D representation.

Are you really contending that, in a photo, if one object partially obscures another, one cannot conclude that the partially obscured object is farther from the camera?  Imagine, for example, a photograph of a person standing in front of, and partially obscuring, the lower part of a tree.  Viewing only the photo, I suppose one could reach alternate conclusions: the person is in front of the tree (closer to the camera) OR the tree is growing out of the person's head.  So granted, a photo cannot give any information about spatial relations in the real world if the observer has no experience with the real world.  In what galaxy do you reside?

You are, I assume, the same AlfSollund who posted several times in a contemporary thread on the subject of perspective.  In that thread you were in firm agreement with the majority on the meaning of perspective and the fact that it depended on distance from observer to subject, and not on the focal length of the lens.  You posted:

Quote
Mouse is 100% correct. Why anyone would spend any efforts in trying to contradict facts is beyond me.

The difference between two photos with different focal lengths is that the tele one has a subset of the content, with exactly equal composition and perspective as the wide (DoF might differ). And nobody other than those ignorant of perspective would want to compare these, since the result is given, as most of us know.

I'm sorry to say that you are arguing with the last 1000 years or so of knowledge on simple geometry 

And with this I do not want to give any more lessons on subjects that should be obvious at grade-school level, and that the Greeks mastered more than 2000 years ago (and probably other cultures before that).

I assume you are also in agreement with the common definition of perspective; that it involves visual distortions in a 2D image which provide information about the relative spatial positioning of objects in the original 3D scene.  Here again it is critical that the observer is cognizant of the significance of such visual distortions based on what he has seen and experienced in the real 3D world.

Now, in a separate thread, you are essentially arguing that perspective does not exist.  To paraphrase, you are arguing that "a photo cannot give any spatial information.  That we cannot derive from a photo any kind of spatial information about distances and objects spatial placement."

From this abrupt turnaround I can only conclude that you are badly confused, are unable to understand the meaning of the words (yours and others') that are being used or, perish the thought, you are simply trolling.
j-land
Newbie
*

Posts: 35

« Reply #15 on: January 22, 2012, 08:29:41 PM »

This comes to mind...
tom b
Sr. Member
****

Posts: 869

« Reply #16 on: January 22, 2012, 08:32:57 PM »

Check out this site!

Cheers,

Ray
Sr. Member
****

Posts: 8852

« Reply #17 on: January 23, 2012, 08:00:22 PM »

I'm surprised that no-one seems to have mentioned the obvious fact that a 'standard lens' provides the closest representation of perspective and distances for the human eye. That's probably why it's called a 'standard lens'.

Of course, a 'standard lens' can be almost any focal length depending on the format of the camera. On a P&S camera it might be 5mm. On the so-called cropped format, around 30mm; on full-frame 35mm format, around 50mm; on medium format, around 80mm; on 4"x5" format, around 150mm; and on 8"x10" format, around 320mm.

In other words, one camera's standard lens could be another camera's wide-angle or telephoto, within the limitation of the lens image circle for wide angles, and other design factors.
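For what it's worth, the usual convention is that a 'standard' focal length is roughly the diagonal of the format's image area, which lands close to most of the figures above (the dimensions below are nominal; the 35mm diagonal is actually about 43mm, with 50mm being the popular rounding-up):

```python
import math

def format_diagonal_mm(width_mm, height_mm):
    """Diagonal of the image area -- the conventional 'standard' focal length."""
    return math.hypot(width_mm, height_mm)

for name, (w, h) in {
    "full-frame 35mm": (36.0, 24.0),
    "APS-C crop": (23.6, 15.7),
    "6x6 medium format": (56.0, 56.0),
    "4x5 inch (nominal image area)": (121.0, 97.0),
}.items():
    print(f"{name}: standard lens ~{format_diagonal_mm(w, h):.0f} mm")
```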

Without the use of computer analysis and the aid of other measuring devices, a human being, using his own eyes, is able to guess the relative distances of objects only fairly accurately, and only provided he is familiar with at least some of the objects in the scene.

When people who have been born blind as a result of some defect have their sight restored through modern surgery, one might think what a wonderful blessing that would be. Unfortunately, the reality is that what such people see with their newly gifted sight is a meaningless jumble which can be very disturbing. Apparently, it's a slow and arduous process of relearning that such people have to go through, as a baby does when it gradually works out that an object that is obscured by another is farther away than the object that does the obscuring.

As I look out of my hotel window, a huge ceiling-to-floor window, and peruse the view of skyscrapers, trees, carports, backyards of smaller buildings, parked cars and occasional people walking in the near foreground below, I am able to guess reasonably accurately the distances between myself and any object in the scene. There are hundreds of clues, the most ubiquitous of which are the windows and balconies indicating the position of each floor in a building and the total number of floors. From experience I know that each floor is 3 to 4 metres in height.

The furthest skyscraper appears to my eyes to be 6 or 7 hundred metres away. Regardless of the accuracy of this estimate, I could produce the same estimate when viewing a photo of the scene in front of my eyes taken with a 'standard lens' from the same position.
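The floor-counting estimate is plain similar-triangles geometry: distance ≈ known height / tan(subtended angle). A sketch with hypothetical numbers chosen to illustrate a figure in the 700 m range:

```python
import math

def distance_from_height(real_height_m, angular_size_deg):
    """Distance to an object of known height subtending a given visual angle."""
    return real_height_m / math.tan(math.radians(angular_size_deg))

# A 40-storey tower at ~3.5 m per floor (140 m tall) subtending about
# 11.3 degrees of the visual field works out to roughly 700 m away:
print(round(distance_from_height(40 * 3.5, 11.3)))
```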

If I'm presented a photo of the same scene shot from the same position, but taken with a wide-angle lens, it becomes much more difficult to estimate the distance of that furthest skyscraper. It really does look further away. Instead of 600 metres it could be 2 or 3 kilometres away, if taken with a really wide-angle lens. However, as a photographer, knowing that the shot was taken with a wide-angle lens, I can make allowances for that fact. I might think, although the skyscraper actually looks as though it's 2 or 3 kilometres away, in reality it's probably less than 1 kilometre.

Conversely, if the skyscraper is shot with a telephoto lens, whether an actual telephoto lens or an equivalent crop from a wide-angle shot, estimating the distance to the skyscraper is also problematical because it really does look much closer. Again, knowing that the skyscraper was shot with a telephoto lens, I can make allowances and claim that the skyscraper is really much further away than it appears.

This is why I claim that it is absurd to say that focal length of lens has no bearing on perspective as experienced by the human observer.
mouse
Full Member
***

Posts: 150

« Reply #18 on: January 23, 2012, 09:58:18 PM »

I think I finally understand Ray's problem.  To appreciate it fully you need to read some of his posts in this thread


While the majority of photographers, painters, and others involved in creating or interpreting 2D images have adopted the standard, mathematically defined concept of (linear) perspective, Ray has created his own personal definition of perspective.  He has every right to do so, but it is neither legitimate nor rational for him to argue that the consensus definition is wrong while his is correct.  While Ray may continue to submit any amount of evidence in support of his personal definition, nothing he has written calls into question the legitimacy of the standard definition, nor suggests that his definition is more useful or appropriate.  It is axiomatic that any discussion must remain fruitless if you cannot reach agreement on the basic meaning of the subject under discussion.

We encounter the same problem in this thread with AlfSollund's thesis that a photo cannot convey any spatial information.  I believe the very fact that he makes this statement is evidence that his definition of spatial information is quite different from the concept shared by the majority of photographers (and other observers).  However, absent any precise definition of what Alf means by spatial information (I asked him, but have not yet received a clear answer), it becomes impossible to engage in any sort of logical discussion.
« Last Edit: January 24, 2012, 02:00:15 AM by mouse »
Ray
Sr. Member
****

Posts: 8852

« Reply #19 on: January 24, 2012, 04:01:53 AM »

While the majority of photographers, painters, and others involved in creating or interpreting 2D images have adopted the standard, mathematically defined concept of (linear) perspective, Ray has created his own personal definition of perspective.  He has every right to do so, but it is neither legitimate nor rational for him to argue that the consensus definition is wrong while his is correct.

I assure you that there is no-one more interested than I am in the possibility that I am wrong. If my eyesight and sense of perception are faulty, I'm the first one who wants to know. If my reasoning is faulty or illogical, or the evidence I base my reasoning upon is not factual, I'm the first one who wants to know.

I see no merit in maintaining an illogical and incorrect position on any matter just for the egotistical satisfaction of appearing to be right or winning the argument.

You will find many references on the internet expressing the opinion that the standard lens (45 or 50mm for 35mm format) provides the sense of perspective that most closely matches human vision. However, since I'm not the sort of person to blindly accept any opinion as being true simply because there is a consensus on the matter, I've tested this for myself, taking numerous shots of the same scene at different focal lengths, and found that 50mm (on 35mm format) does indeed more closely match what I see in reality, in terms of spatial relationships and in terms of assessing the real distance to the viewer (me) of the subject that's been photographed.

Now, it could be that I'm part of a minority group of people who are 'perspectively' challenged, and that most people are able to immediately see the true spatial relationships and distances in a photographic image, whatever focal length of lens has been used.

If this is the case, then in my defense I will mention that Leonardo da Vinci appears to have had a similar problem, according to the following extract from the University of Chicago. However, the telescope hadn't been invented in his time.

Quote
Leonardo da Vinci, writing soon after the invention of scientific perspective, dismissed it as perspectiva accidentalis, and in his work Trattato della Pittura noted the distortive effects of perspective in wide angles and the various visual manipulations and elisions that occur from arbitrarily moving the constructed vanishing point in a painting. Leonardo encouraged painters instead to focus on parallel developments in aerial perspective: gradations in color, shadow, and texture to denote three-dimensional relations.

It would be interesting if you could tell me which of the following statements, upon which I base my reasoning, is incorrect.

(1) The standard lens, 50mm for 35mm format, most closely matches the natural perspective of the human eye.

(2) Shots from wide-angle lenses, uncropped, make distant objects appear further away and make close objects appear closer than they actually are in reality.

(3) Cropping any image, whether in camera or in post-processing, is effectively no different than using a longer focal length of lens that provides the same angle of view as the cropping.

(4) The effective focal length of any lens is a relationship between its actual focal length, the size of the sensor, and/or the degree of cropping in post-processing.

(5) It's not the focal length of the lens per se that has any bearing on perspective, but the effective focal length.
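Statements (3) through (5) amount to a single multiplicative rule: the 35mm-equivalent focal length is the actual focal length times the sensor crop factor times any further crop applied in post. A sketch (the helper name is made up):

```python
def effective_focal_length(actual_fl_mm, sensor_crop=1.0, post_crop=1.0):
    """35mm-equivalent focal length per statement (4): actual focal length
    scaled by the sensor crop factor and any additional post-processing crop."""
    return actual_fl_mm * sensor_crop * post_crop

# A 50mm lens on an APS-C body (1.5x), cropped a further 2x in post,
# frames the scene like a 150mm lens on full-frame:
print(effective_focal_length(50, sensor_crop=1.5, post_crop=2.0))  # 150.0
```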

It would help if you could specify which of the above statements is wrong in your opinion. We could then concentrate on the problem area instead of going round in circles.

