Author Topic: The Physics of Digital Cameras  (Read 36656 times)
PierreVandevenne
« Reply #120 on: January 22, 2010, 07:55:43 PM »

Quote from: WarrenMars
is more or less constant, the smaller the pixels the more significant the margin losses. When you're looking at pixel pitches of 3m the area taken up by the cell margins is a significant fraction of the total cell area. It follows that the number of photons detected by the bin is significantly less than those

I am not a native English speaker, so please forgive my ignorance if I put my foot in my mouth :-). As I understand it, pixel pitch is actually a measure of distance. That seems confirmed by different sources (http://en.wikipedia.org/wiki/Dot_pitch, for example). Therefore, I must confess I am a bit puzzled by the fact that your distance unit of choice seems to remain the m. What is your preferred unit for area?
Jonathan Wienke
« Reply #121 on: January 22, 2010, 08:09:01 PM »

Quote from: PierreVandevenne
Thermodynamically speaking, how does one prevent the completely isolated environment from reaching some kind of thermal equilibrium? Adding conversion steps makes the analysis more complex, yes, but what keeps the system unstable? Assuming the system is unstable, and there is a perpetual flow in a perfectly isolated environment, how does it fundamentally differ from a perpetual motion machine in a frictionless environment?

The cool thing about this idea is that it is perfectly fine with starting out in a condition of perfect thermal equilibrium within a closed system (all temperatures of all components exactly the same). The key concept is the asymmetric thermodynamic boundary--a barrier that energetic particles (photons in this case) can easily cross of their own volition in one direction, but not the other. All common conceptions about thermodynamics (heat cannot flow from a cold object to a hot object, etc) are based on the premise that all thermodynamic boundaries are symmetric; i.e. that any given energetic particle has an equal probability of crossing the boundary in either direction unless acted on by an outside force (which requires the expenditure of energy). By creating an asymmetric boundary, the system naturally gravitates to a state where energy concentration is unequal, and that variance in energy concentration can then be exploited in any number of conventional ways. The equilibrium-state ratio of energy concentration on opposite sides of the boundary is inversely proportional to the ratio of the probability that a particle will cross in one direction vs the probability it will cross in the opposite direction. With the normal symmetric boundary, this ratio is 1:1, therefore the equilibrium state is also 1:1, or equal concentration of energy on both sides.

I go into this in more detail in slides 53-69 of the presentation.
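The claimed inverse relationship between crossing probabilities and equilibrium concentrations can at least be illustrated with a toy rate-equation model. This is my own sketch: it takes the existence of such an asymmetric boundary as a given (which is exactly the contested point) and only shows what the post's rule would imply if one existed.

```python
def equilibrium_ratio(p_ab, p_ba, steps=200):
    """Toy model of the post's asymmetric boundary: particles cross
    A->B with probability p_ab per step and B->A with probability
    p_ba.  Returns the steady-state population ratio B/A."""
    a, b = 1.0, 0.0          # start with everything on side A
    for _ in range(steps):
        flow_ab = a * p_ab   # expected flow A -> B this step
        flow_ba = b * p_ba   # expected flow B -> A this step
        a += flow_ba - flow_ab
        b += flow_ab - flow_ba
    return b / a

# Symmetric boundary (1:1 probabilities) -> equal concentrations;
# a 2:1 asymmetry settles at a 2:1 concentration ratio, matching
# the inverse-probability rule stated in the post.
```

The steady state follows from flow balance (a·p_ab = b·p_ba), so the ratio is p_ab/p_ba regardless of the starting distribution.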

Jonathan Wienke
« Reply #122 on: January 22, 2010, 08:30:38 PM »

Quote from: PierreVandevenne
At first sight, there are a lot of similarities between your machine and that one

http://www.lhup.edu/~dsimanek/museum/sucker.pdf

I initially thought of something along similar lines, and after spending a few months doing ray-tracing simulations calculating the properties of various emitter and reflector geometries, I proved to my own satisfaction that you can't concentrate isotropic radiation with reflectors to a higher concentration than that of the source of the radiation. Regardless of the shape of the reflective cavity and the arrangement of the blackbody masses, the flux density of photons measured anywhere in the cavity will be identical in the conditions described in your link.

Using the same ray-tracing code, but adding in calculations for refraction, total internal reflection, and Fresnel's equations (to calculate the proportion of refracted vs reflected energy), I proved to my satisfaction that concentrating isotropic radiation can be accomplished with refraction or total internal reflection (though the latter is more effective). The experiments I conducted appear to validate my theory, though not necessarily beyond all possibility of doubt.
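For reference, the refracted-versus-reflected energy split mentioned here is given by Fresnel's equations. A minimal sketch of the unpolarized case (standard optics; my own illustration, not Jonathan's actual ray-tracing code):

```python
import math

def fresnel_reflectance(n1, n2, theta_i_deg):
    """Fraction of unpolarized power reflected at an n1 -> n2
    dielectric interface (the remainder is refracted/transmitted).
    Returns 1.0 beyond the critical angle: total internal reflection."""
    ti = math.radians(theta_i_deg)
    s = n1 / n2 * math.sin(ti)   # Snell's law: n1 sin(ti) = n2 sin(tt)
    if s >= 1.0:
        return 1.0               # total internal reflection
    tt = math.asin(s)
    # Fresnel amplitude coefficients for s- and p-polarization
    rs = (n1 * math.cos(ti) - n2 * math.cos(tt)) / \
         (n1 * math.cos(ti) + n2 * math.cos(tt))
    rp = (n1 * math.cos(tt) - n2 * math.cos(ti)) / \
         (n1 * math.cos(tt) + n2 * math.cos(ti))
    return 0.5 * (rs ** 2 + rp ** 2)

# Normal incidence at an air -> glass (n = 1.5) interface gives the
# familiar ~4% reflection loss:
r0 = fresnel_reflectance(1.0, 1.5, 0.0)
```

For rays inside the denser medium at steep angles the function returns 1.0, which is the total-internal-reflection regime the later posts rely on.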

Daniel Browning
« Reply #123 on: January 22, 2010, 08:33:45 PM »

Quote from: WarrenMars
Due to the relatively low electron counts in the small cells other noise sources such as thermal noise and readout error become more significant, in the above example: 4 times more significant in fact!

I agree. Readout error is the most significant issue for smaller pixel sizes. In order to have the same noise power at any given spatial frequency, read noise has to decrease at the same rate as the pixel size, so a 3 micron pixel must have half the read noise of a 6 micron pixel for both to have the same noise power at any common level of detail. Fortunately, many smaller pixels do indeed have lower read noise -- often even lower than is needed just to match a larger pixel. But only at base amplification. At high gain the trend reverses, and larger pixels often have less read noise than scaled small pixels.
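The scaling claim above can be checked in a couple of lines: binning n pixels adds their independent read noises in quadrature, so four small pixels match one large pixel only if each has half its read noise. The noise figures below are made up purely for illustration.

```python
import math

def binned_read_noise(sigma_pixel, n=4):
    """Read noise of n binned pixels: independent noise sources add
    in quadrature, i.e. sigma_total = sigma_pixel * sqrt(n)."""
    return sigma_pixel * math.sqrt(n)

sigma_large = 4.0                 # hypothetical 6 um pixel, e- RMS
sigma_small = sigma_large / 2.0   # the claimed requirement for 3 um pixels

# A 2x2 bin of such small pixels then matches the large pixel exactly:
matched = binned_read_noise(sigma_small)
```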

I think it should also be mentioned that thermal noise is only significant in a minority of photographic applications (shutter speeds longer than 10 seconds).

Quote from: WarrenMars
Although it sounds fine in theory, in the real world  there are various problems with the pixel binning solution.

The main problem is that there is always some loss at the margins of the pixel.

Although the idea that there is always some loss sounds fine in theory, in the real world, actual shipping cameras prove that sensor designers have achieved equal QE across a huge variety of pixel sizes (including 3 microns) for typical focal lengths and f-numbers, with more than a full order of magnitude between their areas.

Quote from: WarrenMars
Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

This can be solved. It's a Simple Matter Of Programming. If you want the full return from smaller pixels, you have to accept the larger files, slower processing, etc. But if you only want the full return some of the time, and the rest of the time you want smaller files, normal quality, etc., then you just select the compressed raw option. It will give you the same file size and processing speed as if you had larger pixels. Look at REDCODE, for example. It compresses 9.5 MP into a 1 MB compressed raw file. Sure, the quality is not quite as high as an 11 MB lossless-compressed raw file -- but whatever quality issues there are (and they are minor) do not reach into the lower spatial frequencies, which is all that larger pixels could offer anyway.

--Daniel
col
« Reply #124 on: January 22, 2010, 08:53:11 PM »

I wrote:

Quote
a good case can be made that what actually matters is the total number of detected photons for the entire image, independent of the size or number of pixels.

Quote from: WarrenMars
Although it sounds fine in theory, in the real world  there are various problems with the pixel binning solution.

The main problem is that there is always some loss at the margins of the pixel. You can talk about micro lenses that can bend the light around the depletion zone, but 1) they ain't gonna bend all the light, 2) there's still gonna be appreciable loss at the micro lens boundaries. In your example above, if the small pixels were square with side length 1, then the large pixel would have a boundary length 8 and the 4 small pixels a total boundary length 16. The margin losses inherent in the pixel binning are then DOUBLE those of the unbinned. Since the width of pixel margins is more or less constant, the smaller the pixels the more significant the margin losses. When you're looking at pixel pitches of 3m the area taken up by the cell margins is a significant fraction of the total cell area. It follows that the number of photons detected by the bin is significantly less than those detected at the large pixel. Hence photon noise is greater at the bin.

Due to the relatively low electron counts in the small cells other noise sources such as thermal noise and readout error become more significant, in the above example: 4 times more significant in fact!

Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

And yet, Warren, my statement above still stands as absolutely correct as regards "shot noise" from photon counting statistics. All you are saying is that, for a given overall sensor size, the total number of photons collected may be less when the number of pixels is increased, due to a greater proportion of dead area between the pixels. Just so. My statement makes no comment on the many variables that determine how many photons are detected. All I said was that the effective shot noise in the image as a whole is set by the total number of detected photons in the image. You need to read and understand exactly what I say before rushing in and declaring that it is untrue.
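Both points here fit in a few lines of arithmetic: shot noise depends only on the total detected photon count, while the margin objection just changes how large that count is. The flux and margin numbers below are illustrative, not from any real sensor.

```python
import math

def detected_photons(flux, pitch_um, margin_um):
    """Photons detected by one pixel: flux (photons/um^2) times the
    active area left after a dead margin on all four sides."""
    active_area = (pitch_um - 2.0 * margin_um) ** 2
    return flux * active_area

def shot_noise_snr(n_photons):
    """Photon shot noise is Poisson: SNR = N / sqrt(N) = sqrt(N)."""
    return math.sqrt(n_photons)

# One 6 um pixel vs a 2x2 bin of 3 um pixels, 0.25 um margin each:
big = detected_photons(100.0, 6.0, 0.25)        # 3025 photons
bin4 = 4 * detected_photons(100.0, 3.0, 0.25)   # 2500 photons

# The bin collects fewer photons (more total margin), so sqrt(N)
# gives it a lower SNR -- but the size of the gap depends entirely
# on the margin width, which microlenses largely recover in practice.
```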

Colin


PierreVandevenne
« Reply #125 on: January 22, 2010, 08:58:17 PM »

Quote from: Jonathan Wienke
Using the same ray-tracing code, but adding in calculations for refraction, total internal reflection, and Fresnel's equations (to calculate the proportion of refracted vs reflected energy), I proved to my satisfaction that concentrating isotropic radiation can be accomplished with refraction or total internal reflection (though the latter is more effective). The experiments I conducted appear to validate my theory, though not necessarily beyond all possibility of doubt.

In a non-closed system, at first sight, I can believe your system could generate _some_ electricity forever. In a closed system, for a while. But intuitively, I can't agree with the 0.288 W per cm^3. That would lead to 2.88 kW per 10,000 cm^3 (a one square meter, 1 cm high layer) and 288 kW (2.88 MW (!) if you increase efficiency as planned) for a cubic meter of device. Since we can basically assume that all the energy we receive comes from the sun, we are already at about twice the solar constant per square meter in orbit. Granted, you could take the energy elsewhere (note that thermal energy input will definitely be limited by the area of the exchanger, not its volume!) without bad long-term consequences for the planet, since we'll be giving it back one way or another when using it (too bad it isn't a solution to global warming as well ;-))... Still, being in the neighborhood of the machine is likely to be somewhat uncomfortable if it is that efficient at converting heat into electricity.
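The unit scaling in that objection checks out; Pierre's figures can be reproduced directly from the claimed power density:

```python
power_density = 0.288            # claimed W per cm^3 (from the slides)
layer_cm3 = 100 * 100 * 1        # 1 m^2 area x 1 cm thick = 10,000 cm^3
cube_cm3 = 100 ** 3              # 1 m^3 = 1,000,000 cm^3

layer_kw = power_density * layer_cm3 / 1000.0   # ~2.88 kW per layer
cube_kw = power_density * cube_cm3 / 1000.0     # ~288 kW per cubic meter
```

For comparison, the solar constant is about 1.36 kW/m^2 in orbit, which is what makes even the single-layer figure striking.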

Also, I can't think of a geometry that would allow both dense stacking and temp gradient preservation...
col
« Reply #126 on: January 22, 2010, 10:36:48 PM »

Quote from: Jonathan Wienke
I initially thought of something along similar lines, and after spending a few months doing ray-tracing simulations calculating the properties of various emitter and reflector geometries, I proved to my own satisfaction that you can't concentrate isotropic radiation with reflectors to a higher concentration than that of the source of the radiation. Regardless of the shape of the reflective cavity and the arrangement of the blackbody masses, the flux density of photons measured anywhere in the cavity will be identical in the conditions described in your link.

Using the same ray-tracing code, but adding in calculations for refraction, total internal reflection, and Fresnel's equations (to calculate the proportion of refracted vs reflected energy), I proved to my satisfaction that concentrating isotropic radiation can be accomplished with refraction or total internal reflection (though the latter is more effective). The experiments I conducted appear to validate my theory, though not necessarily beyond all possibility of doubt.

Hi Jonathan,

There is something that concerns me about the very fundamentals of what you propose.

Refer to Page 5 of your PDF document, headed "Theory of Operation"

The ray tracing from emissive surface "A" is just fine. Photons leaving surface "A" travel for a brief distance through air (or vacuum), and then strike the refractive layer, and are refracted at the interface. No problems.

The ray tracing from surface "B" may just be a bit dodgy, because you have apparently assumed that it is possible for the photons to start their journey from just inside the refractive material. Whether that is valid is unclear. Consider the situation if the emissive surface "B" was simply pressed hard up against the refractive layer. In that case, there would still be a very small air gap, and your ray tracing would be wrong - in fact, the ray trace would look identical to the rays leaving emissive material "A", and the device would therefore not work. As I understand it, what you actually do (in effect) is to "paint" the emissive surface onto the refractive surface. It could be argued that the photons still originate outside of the refractive material, and will therefore be refracted as they enter it, just as if there were an infinitesimally small air gap between the two. If this is true, your device will behave symmetrically, and will not work.

Your experimentally measured temperature difference of 0.018 K strikes me as very small, and does not support your theory beyond all possible doubt.

What do you think?

Colin
« Last Edit: January 22, 2010, 10:39:21 PM by col »
Jonathan Wienke
« Reply #127 on: January 23, 2010, 08:18:12 AM »

Quote from: col
Hi Jonathan,

There is something that concerns me about the very fundamentals of what you propose.

Refer to Page 5 of your PDF document, headed "Theory of Operation"

The ray tracing from emissive surface "A" is just fine. Photons leaving surface "A" travel for a brief distance through air (or vacuum), and then strike the refractive layer, and are refracted at the interface. No problems.

The ray tracing from surface "B" may just be a bit dodgy, because you have apparently assumed that it is possible for the photons to start their journey from just inside the refractive material. Whether that is valid is unclear. Consider the situation if the emissive surface "B" was simply pressed hard up against the refractive layer. In that case, there would still be a very small air gap, and your ray tracing would be wrong - in fact, the ray trace would look identical to the rays leaving emissive material "A", and the device would therefore not work. As I understand it, what you actually do (in effect) is to "paint" the emissive surface onto the refractive surface. It could be argued that the photons still originate outside of the refractive material, and will therefore be refracted as they enter it, just as if there were an infinitesimally small air gap between the two. If this is true, your device will behave symmetrically, and will not work.

It is easy to verify experimentally that it is possible to optically bond an emissive surface to refractive material so that refraction does not occur between the emissive surface and the refractive material. If you lay a triangular prism on top of a sheet of printed text, it is possible to observe the total internal reflection effect--when attempting to view the text from certain angles, the text will disappear and be replaced by a reflection of whatever is on the other side of the prism from the viewer. In contrast, if you paint the surface of the prism, then you can view the painted surface from any angle you like, and it will never "disappear" due to total internal reflection.

The same principle is employed in fingerprint scanners. You have a triangular prism with a 90° angle and two 45° angles. You have a light source attached to one 45° surface and the sensor attached to the other 45° surface. When nothing is being scanned, total internal reflection causes the photons emitted from the light source to bounce off the inner surface of the prism and reflect to the sensor. But when a finger is placed on the surface, a temporary optical bond forms between the tops of the fingerprint ridges and the prism. This optical bond enables photons to travel from inside the prism directly into the finger being scanned without being refracted. As a result, the sensor sees an image of the light source where the ridges are not in contact with the prism, and the ridges where they are in contact with the prism. The brightness difference between the direct reflection of the light source and its reflection off the skin is used to determine which pixels are fingerprint, and which are background.

The diagrams below illustrate the concept; imagine the black circle to be a fingerprint ridge pressed against the prism surface and creating the optical bond. Instead of being reflected directly to the detector, the photons are mostly absorbed by the fingerprint ridge. If you have a triangular glass prism, it is easy to visually verify this for yourself experimentally.

[attachment=19680:Prism_TIR.gif]  [attachment=19681:Prism_OB.gif]

As long as the emissive surface is optically bonded to the layer of refractive material (basically the difference between painting the surface of the prism directly instead of just laying it on a painted surface), the photons will not be refracted as they are emitted from the emissive surface into the refractive material.
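The fingerprint-scanner geometry works because 45° exceeds the critical angle of glass. A quick check via Snell's law, assuming a typical glass index of n ≈ 1.5 (the actual prism material is not specified in the posts):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Smallest internal angle (measured from the normal) at which
    total internal reflection occurs at an n_inside -> n_outside
    interface: sin(theta_c) = n_outside / n_inside."""
    return math.degrees(math.asin(n_outside / n_inside))

theta_c = critical_angle_deg(1.5)   # ~41.8 deg for typical glass

# Light hitting the prism's inner face at 45 deg > theta_c is totally
# reflected toward the sensor -- unless an optically bonded fingerprint
# ridge removes the glass-air interface at that spot.
```

The same function gives about 48.8° for water (n ≈ 1.33), which is the mirror-like surface effect in the water experiment described later in the thread.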

Quote
Your experimentally measured temperature difference of 0.018 K strikes me as very small, and does not support your theory beyond all possible doubt.

I regard the experiments as interesting, but not as ironclad beyond-a-shadow-of-a-doubt proof.
« Last Edit: January 23, 2010, 08:37:07 AM by Jonathan Wienke »

Jonathan Wienke
« Reply #128 on: January 23, 2010, 04:34:14 PM »

Here's another refraction / total internal reflection experiment that is easy to try:

Fill a rectangular or square transparent container until it is almost full of water. Observe the surface of the water from just below water level, and you will see that it behaves like a mirror from that vantage point. If you hold your finger a few millimeters above the water's surface, you will not be able to see it through the surface when your viewpoint is just below the surface level of the water. But if you dip your finger partway into the water, you will have no trouble at all seeing the part of your finger below water level. The reason is that the water-air boundary forms a refractive interface where refraction and total internal reflection occur, but no such refractive interface exists between the water and your finger. As a result, you can clearly see the portion of your finger below water level, and no part of your finger below water level is hidden by reflections. Any photons being emitted or reflected by your finger inside the water go directly into the water without being refracted or reflected.
« Last Edit: January 23, 2010, 04:36:50 PM by Jonathan Wienke »

col
« Reply #129 on: January 23, 2010, 06:18:42 PM »

Quote from: Jonathan Wienke
Here's another refraction / total internal reflection experiment that is easy to try:

Fill a rectangular or square transparent container until it is almost full of water. Observe the surface of the water from just below water level, and you will see that it behaves like a mirror from that vantage point. If you hold your finger a few millimeters above the water's surface, you will not be able to see it through the surface when your viewpoint is just below the surface level of the water. But if you dip your finger partway into the water, you will have no trouble at all seeing the part of your finger below water level. The reason is that the water-air boundary forms a refractive interface where refraction and total internal reflection occur, but no such refractive interface exists between the water and your finger. As a result, you can clearly see the portion of your finger below water level, and no part of your finger below water level is hidden by reflections. Any photons being emitted or reflected by your finger inside the water go directly into the water without being refracted or reflected.

Yes, through this simple yet clever example, as well as from the excellent examples in your previous post, you convince me that the photons leaving the emissive layer will not be refracted upon entering the refractive substrate, provided the two layers are "optically bonded", which can be achieved by painting the emissive layer onto the refractive substrate. Alternatively, a thin refractive layer could be "painted" or similarly deposited onto a thicker emissive substrate, which (as I understand it) is what you would actually do in your final embodiment of this device.

OK. My next question is whether you have made a serious attempt to calculate what temperature difference you should be getting in your constructed model. I'm being lazy here, in that I could do a back-of-envelope calculation on this myself, but I suspect you have already done it. My gut feeling is that, if all your claims are true, and if there is the slightest chance that a useful amount of electrical power could ever be produced, then you should be measuring more than a 0.018 K temperature difference in the prototype model. Do back-of-envelope calculations suggest you should get a dT of more than 0.018 K?

Another question. Your surfaces "B" were made from 6" Edmund Optics IR Fresnel lenses, painted black on the grooved side. I know what a Fresnel lens is, but you make no mention of this in your Background and Theory sections, so you have confused me a little here. Your Theory of Operation clearly shows a simple, parallel-sided slab (or layer) of refractive material, not a Fresnel lens.

Cheers, Colin

   

Jonathan Wienke
« Reply #130 on: January 23, 2010, 07:58:18 PM »

Quote from: col
Alternatively, a thin refractive layer could be "painted" or similarly deposited onto a thicker emissive substrate, which (as I understand it) is what you would actually do in your final embodiment of this device.

Yes, that is what I had in mind. The refractive layer wouldn't have to be much more than a few wavelengths thick to work as intended, though the wavelengths in question range all the way to about 60 µm.

Quote
OK. My next question is whether you have made a serious attempt to calculate what temperature difference you should be getting in your constructed model. I'm being lazy here, in that I could do a back-of-envelope calculation on this myself, but I suspect you have already done it. My gut feeling is that, if all your claims are true, and if there is the slightest chance that a useful amount of electrical power could ever be produced, then you should be measuring more than a 0.018 K temperature difference in the prototype model. Do back-of-envelope calculations suggest you should get a dT of more than 0.018 K?

Another question. Your surfaces "B" were made from 6" Edmund Optics IR Fresnel lenses, painted black on the grooved side. I know what a Fresnel lens is, but you make no mention of this in your Background and Theory sections, so you have confused me a little here. Your Theory of Operation clearly shows a simple, parallel-sided slab (or layer) of refractive material, not a Fresnel lens.

I used the Fresnel lenses because they were the cheapest thing I could find that claimed to be reasonably transparent in the long-wave infrared region where room-temperature thermal emission occurs. I wasn't using them as lenses, I was simply using them as a field-expedient refractive coating for my "B" surface. However, they are far from ideal for that purpose, as the following spectral transmission graph shows:



Ideally, transmission should be near 100% from 3-60 µm. If you compare this graph with the graph in slide 74 showing emitted power distribution at 300 K, you'll see that there are several bands where the Fresnel lens material is opaque (and therefore highly emissive) at wavelengths active at 300 K, which greatly reduces maximum efficiency. On top of that, the experiment was conducted in air, not a vacuum, reducing efficiency even further. Given the inefficiency of the lens material for the intended purpose, and the less-than-optimal experimental conditions (not having access to a vacuum pump and a suitably-sized container), 0.018 K is within the realm of plausibility. I'd obviously be much happier if I'd measured a greater temperature difference, but it's not that far out of line from what I was realistically expecting to get.
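The 3-60 µm requirement follows from where room-temperature blackbody emission sits. Wien's displacement law puts the 300 K peak near 9.7 µm; this is a standard calculation, not taken from the slides:

```python
WIEN_B_UM_K = 2897.77  # Wien's displacement constant, in um*K

def peak_wavelength_um(temp_k):
    """Wavelength of peak blackbody spectral emission (Wien's law):
    lambda_max = b / T."""
    return WIEN_B_UM_K / temp_k

peak = peak_wavelength_um(300.0)   # ~9.66 um, well inside 3-60 um
```

So any opaque bands in the lens material near 10 µm cut directly into the most thermally active part of the spectrum, which is consistent with the efficiency concern above.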

It all comes down to how much money I can get the wife to approve for abstract science experiments she doesn't really understand, and what kind of materials and facilities I can get access to for trying some experiments under more optimal conditions with materials better suited to the purpose.

Jonathan Wienke
« Reply #131 on: January 23, 2010, 08:28:27 PM »

Quote from: PierreVandevenne
In a non-closed system, at first sight, I can believe your system could generate _some_ electricity forever. In a closed system, for a while. But intuitively, I can't agree with the 0.288 W per cm^3. That would lead to 2.88 kW per 10,000 cm^3 (a one square meter, 1 cm high layer) and 288 kW (2.88 MW (!) if you increase efficiency as planned) for a cubic meter of device.

The calculations behind that power density figure are on slides 22-23 of the presentation, and I think they are fairly conservative. One thing to note, though: the calculation is for the size of the converter device itself, not for the heat exchanger needed to supply it with enough heat energy to keep functioning. The size of the heat exchanger will vary dramatically, depending on whether the heat is being extracted from air or from a liquid such as water.

Quote
Since we can basically assume that all the energy we receive comes from the sun, we are already at about twice the solar constant per square meter in orbit. Granted, you could take the energy elsewhere (note that thermal energy input will definitely be limited by the area of the exchanger, not its volume!) without bad long-term consequences for the planet, since we'll be giving it back one way or another when using it (too bad it isn't a solution to global warming as well ;-))... Still, being in the neighborhood of the machine is likely to be somewhat uncomfortable if it is that efficient at converting heat into electricity.

Today's internal combustion engines are about 50% efficient. To run an electric car of comparable horsepower powered by one of my devices, you'd need an air heat exchanger and airflow roughly comparable to an internal combustion engine's radiator and the airflow needed to keep the engine cool. The difference would be that the air would be cooled instead of heated as it passed through the heat exchanger.

IMO, this idea is a possible solution for carbon-based fuel consumption and the pollution it generates. If you can extract enough energy from ambient air to power your car as you drive (cooling the air in the process), then there is no reason to burn any fuel, and driving your car will have essentially zero environmental impact. It's the ultimate "green" technology if it can be built cheaply enough to be economically feasible.

Consider this question: would you pay an extra 10000 euro for your next car if you NEVER had to stop at a fuel station again?

WarrenMars
« Reply #132 on: January 24, 2010, 08:19:57 AM »

We seem to have gone a long way from the original topic, which is ok, but I'm out.

Thank you to all who contributed to this thread and who read my web pages. Yes there was a fair amount of ridicule but no more than I deserved, considering the arrogance of my original post and the presence of a number of factual errors in my site. I hereby apologise for these shortcomings. Now some more humble pie before I go:

Thanks to ridicule from some of you I have forced myself to research more deeply than I had previously, and I have found that some of the fundamental assumptions I had made were incorrect. In particular: RAW files are not gamma compressed, the ideal F number is not 1, and pixels contain only 1 colour, not 4. Thanks also to Jonathan and Colin for alerting me to the limited electron carrying capacity per pixel.

The Gods don't like hubris and I have been justly embarrassed, however it has been said that: "The man who never made a mistake never made anything!" Furthermore I don't believe this thread has been a waste of time, far from it! In particular the magic number of f/0.5 is surely worth the price of admission on its own.

What doesn't break you only makes you stronger and I am off to rewrite my web pages in the light of my deeper understanding. The thrust of my original analysis remains correct, however it must be seen now as an idealised theoretical analysis rather than a study of real-world cameras. I will address the issue of real world cameras in the rewrite and I will produce real world data to verify my analysis. For the moment I am modifying Dave Coffin's DCRAW application in order to produce the required TRUE RAW; something I haven't been able to find anywhere.

I will post a new thread in this forum when I have finished the rewrite and will appreciate any constructive criticism at that time.
See you folks around.
Jonathan Wienke
« Reply #133 on: January 24, 2010, 01:53:57 PM »

Quote from: WarrenMars
I will post a new thread in this forum when I have finished the rewrite and will appreciate any constructive criticism at that time.
See you folks around.

Fair enough. It's been interesting...

Theresa
« Reply #134 on: January 25, 2010, 10:23:27 AM »

Quote from: WarrenMars
Then there is the problem of over-large file sizes that take too long to process, slow up your camera, slow up your computer and take up too much room.

With a modern computer, such as one with a quad-core chip, 6GB of memory, and copious hard drive space, processing is VERY fast. The files aren't "overly large" since the computer has no problem coping with them. I even had a four-year-old computer that was fast enough. The only time I see complaints about digital photos being too big is when someone is trying to work on a 24GB with an old computer that is worth maybe one fourth as much as the camera. Older Macs are no faster than old PCs; it's just a fact of life that a hi-def camera requires a fast computer.
joofa
« Reply #135 on: January 25, 2010, 03:37:08 PM »

Quote from: WarrenMars
Thank you to all who contributed to this thread and who read my web pages. Yes there was a fair amount of ridicule but no more than I deserved, considering the arrogance of my original post and the presence of a number of factual errors in my site. I hereby apologise for these shortcomings.

Warren, you don't have to apologize. It takes a lot of courage to admit a mistake and I really appreciate your boldness in doing it online.

Quote from: WarrenMars
Thanks to ridicule from some of you I have forced myself to research more deeply than I had previously, and I have found that some of the fundamental assumptions I had made were incorrect. In particular: RAW files are not gamma compressed, the ideal F number is not 1, and pixels contain only one colour, not four. Thanks also to Jonathan and Colin for alerting me to the limited electron-carrying capacity per pixel.

It is sad that you were ridiculed, especially by those whose own understanding in some of these matters is not correct.
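To make Warren's corrected point concrete: in a Bayer sensor each photosite records one colour channel only, and the "true raw" he is after is simply that mosaic before any demosaicing. A self-contained toy sketch (an RGGB pattern is assumed here; the data is synthetic, not real camera output):

```python
# Toy illustration of "true raw": each photosite in a Bayer mosaic records
# exactly one colour channel (RGGB pattern assumed), not three or four.
def bayer_channel(row, col):
    """Return which channel an RGGB photosite at (row, col) records."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# A 4x4 sensor patch: one channel per site, before demosaicing.
mosaic = [[bayer_channel(r, c) for c in range(4)] for r in range(4)]
for row in mosaic:
    print(" ".join(row))

# In an RGGB patch, half the sites are green, a quarter red, a quarter blue.
flat = [ch for row in mosaic for ch in row]
print(flat.count("G"), flat.count("R"), flat.count("B"))  # → 8 4 4
```

Recovering three colours per output pixel is the job of the demosaicing step, which is exactly what a "true raw" dump would skip.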

Quote from: WarrenMars
For the moment I am modifying Dave Coffin's DCRAW application in order to produce the required TRUE RAW; something I haven't been able to find anywhere.

Now that is something courageous. Unfortunately, DCRAW is written in a poor programming style and is very difficult to read: a good example of code that is functional but otherwise badly written. Good luck with it.

« Last Edit: January 25, 2010, 04:13:20 PM by joofa »

Joofa
http://www.djjoofa.com
Download Photoshop and After Effects plugins
Jonathan Wienke
« Reply #136 on: January 25, 2010, 06:30:07 PM »

Quote from: PierreVandevenne
In a non-closed system, at first sight, I can believe your system could generate _some_ electricity forever; in a closed system, for a while. But intuitively I can't agree with the 0.288 W per cm^3. That would lead to 2.88 kW for a 10,000 cm^3 layer (one square metre, 1 cm high) and 288 kW (2.88 MW (!) if you increase efficiency as planned) for a cubic metre of device.

I double-checked my calculations behind the 0.288 W/cm^3 estimate and did find some unit-conversion errors. I've corrected those, and have a revised estimate of 0.171 W/cm^3 power density, based on the following assumptions:
  • Ambient temperature of 300 K (26.85 °C)
  • Foil thickness of 0.086 mm
  • Refractive index of surface B coating = 2.37
  • Surface B refractive coating thickness of 40 microns
  • Emissivity/absorptivity of foil surface 95% of blackbody
  • 56 foil layers per centimeter (alternating A and B), totalling 56 cm^2 of surface A area and 56 cm^2 of surface B area within 1 cm^3, with sufficient spacing to prevent conduction losses due to surfaces touching each other
  • Thermocouples operating at 50% of the Carnot efficiency limit

Slides 22 and 23 of the presentation files have been updated to reflect the revised power density estimate. 0.171 W/cm^3 is equivalent to 171 kW/m^3.
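For what it's worth, the revised figure is at least dimensionally consistent; a quick sanity check of the conversion, using only numbers quoted above (nothing new assumed):

```python
# Sanity check: convert the revised power density from W/cm^3 to kW/m^3,
# using only figures quoted in the post above.
w_per_cm3 = 0.171
cm3_per_m3 = 100 ** 3               # 1,000,000 cm^3 in one cubic metre
kw_per_m3 = w_per_cm3 * cm3_per_m3 / 1000.0
print(round(kw_per_m3, 6))          # → 171.0

# 56 foil layers per centimetre, each spanning a 1 cm x 1 cm face, gives
# 56 cm^2 of surface A (and 56 cm^2 of surface B) per cubic centimetre.
layers_per_cm = 56
area_per_cm3 = layers_per_cm * 1.0  # cm^2 of each surface type per cm^3
print(area_per_cm3)                 # → 56.0
```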
« Last Edit: January 25, 2010, 06:30:43 PM by Jonathan Wienke »

DarkPenguin
« Reply #137 on: January 25, 2010, 07:12:58 PM »

Quote from: joofa
Now that is something courageous. Unfortunately, DCRAW is written in a bad software programming style. Very difficult to read. A good example of a code that is functional but otherwise poorly written. Good luck with it.

http://www.libraw.org/
col
« Reply #138 on: January 25, 2010, 07:26:02 PM »

Quote from: WarrenMars
We seem to have gone a long way from the original topic, which is ok, but I'm out.
Thank you to all who contributed to this thread and who read my web pages. Yes there was a fair amount of ridicule but no more than I deserved, considering the arrogance of my original post and the presence of a number of factual errors in my site. I hereby apologise for these shortcomings.
I will post a new thread in this forum when I have finished the rewrite and will appreciate any constructive criticism at that time.
See you folks around.

Best of luck with everything. Sure, you made a few mistakes, but you forced us all to think and learn, and some of the topics that came up were interesting.
dwdallam
« Reply #139 on: January 27, 2010, 03:38:22 AM »

Quote from: Plekto
If they were using a material other than glass, though, that would change. The question is: what other materials could we possibly use besides glass and plastic?

Diamond.
