Author Topic: Synthetic HDR  (Read 26713 times)
kal
Jr. Member, Posts: 62
« Reply #20 on: March 30, 2007, 02:49:33 AM »

Quote
There's no violation of information theory here. I'm not peddling perpetual motion. ;-) There's simply a tradeoff between resolution and bit depth -- normally, this would result in a poor quality image, but by restricting the effect to deep shadow areas, there should be no visible degradation to brighter parts of the image, whilst retaining more shadow detail due to the greater dynamic range. It's not inventing any information -- just kind-of spreading it out in a way that looks better to human visual perception, if that makes any sense.

Some more thoughts:

- did you try using different convolution kernels?

- in your wiki you compared synthetic HDR with Levels; did you compare it with Levels + low pass filtering applied to (previously) dark areas?
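The tradeoff described in the quote above -- coarser spatial sampling in exchange for finer tonal precision, restricted to deep shadows -- might be sketched roughly like this. This is a hypothetical illustration, not st326's actual algorithm; the threshold and radius are made-up parameters, and the box average stands in for whatever convolution kernel is actually used:

```python
import numpy as np

def box_mean(img, radius):
    """Box average with edge replication, pure NumPy."""
    k = 2 * radius + 1
    p = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def synthetic_hdr_shadows(img, shadow_thresh=0.05, radius=2):
    """Keep pixels above the threshold bit-for-bit; replace deep-shadow
    pixels with a local float average, which carries sub-LSB precision
    (resolution traded for bit depth, only where it is least visible)."""
    img = np.asarray(img, dtype=np.float64)
    out = img.copy()
    mask = img < shadow_thresh          # restrict the effect to deep shadows
    out[mask] = box_mean(img, radius)[mask]
    return out
```

Everything brighter than the threshold comes back unchanged, which matches the "no visible degradation to brighter parts" claim; only the shadow pixels are spread out spatially.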

laughingbear
Full Member, Posts: 214
« Reply #21 on: March 30, 2007, 06:38:06 AM »

Quote
It's a little different in philosophy to traditional HDR, I suppose, but I can see this being really good for dealing with printing difficult images. I'm not so concerned with getting into the more stylised (dare I say it a bit cliched?) applications of HDR -- my thing is fine art photography, and my motivation for this is trying to make better prints (better performances of the score, to misquote Ansel Adams), if you see what I mean.

I am no scientist by any means, I am a musician, but yes, I can see what you mean! This was my spontaneous thought as well, as I am much more interested in print results than anything else.

Your in-depth knowledge and the way you tackle the challenge speak for themselves!

I regret that I can't contribute on a technical level, but maybe just one thought: you mention Photoshop's HDR function, and I would like to think that "Photomatix" http://www.hdrsoft.com/ offers a higher standard for comparison.

Quote
Nevertheless, I will go ahead and write the plugin. I intend to give it away for free, so people can use it or not, it's entirely up to them.

Wow. This is a very generous touch indeed.

You appear to be one of those people with 72-hour days instead of 24.
John Sheehy
Sr. Member, Posts: 838
« Reply #22 on: March 30, 2007, 07:47:19 AM »

Quote
I think a PS plugin could potentially do a much better job, without any of the artifacts associated with the HDR merge or tone mapping, because I can figure out all of the maths to get exact solutions. My ideal would be something like, put PS into 16-bit mode (if it isn't already), then have the plugin synthesize any missing low-order precision, whilst keeping the image bit-for-bit exactly the same as the original except for shadow data. This should mean that you'd get essentially no artifacts whatsoever, just much more (cleaner) shadow information to play with in dodging and burning. It's a little different in philosophy to traditional HDR, I suppose, but I can see this being really good for dealing with printing difficult images. I'm not so concerned with getting into the more stylised (dare I say it a bit cliched?) applications of HDR -- my thing is fine art photography, and my motivation for this is trying to make better prints (better performances of the score, to misquote Ansel Adams), if you see what I mean.

I've had an idea for something like this for a while, but I haven't gotten around to writing it.  Basically, a filter would simply average local pixels, with a radius proportional to the darkness of the pixel (or the average of its neighbors).  Then you could dodge and burn, as you suggest, or use something like the shadow/highlight tool in PS to bring out the shadows.  For a greyscale camera, the same routine could be used as one that works on RGB color images, but for RAW CFA data it would probably best be done before demosaicing, while the noise is still confined to individual pixels.  I've made a simple FilterMeister filter that does a median effect based on the ratio of a pixel to its surroundings, rather than the difference, which has the most effect in shadows, especially for hot pixels; that would be good pre-processing before reducing the frequency limit of the shadows.
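A rough sketch of the darkness-proportional averaging described above (my illustration, not John's FilterMeister code; it assumes a greyscale float image in [0, 1] and, for simplicity, uses the pixel's own value rather than the neighborhood average to set the radius):

```python
import numpy as np

def darkness_adaptive_average(img, max_radius=4):
    """Average each pixel over a box whose radius grows as the pixel gets
    darker: radius 0 (no change) at white, max_radius at black."""
    h, w = img.shape
    out = np.empty((h, w), dtype=np.float64)
    for y in range(h):
        for x in range(w):
            # darker pixel -> larger averaging window
            r = int(round((1.0 - img[y, x]) * max_radius))
            out[y, x] = img[max(0, y - r):y + r + 1,
                            max(0, x - r):x + r + 1].mean()
    return out
```

Highlights pass through untouched while shadows are progressively smoothed, which is the behaviour the post is after before dodging and burning.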

Maybe I'll try my hand at it in FilterMeister this weekend.
John Sheehy
Sr. Member, Posts: 838
« Reply #23 on: March 30, 2007, 07:56:39 AM »

Quote
I uploaded a 16-bit TIFF for the source image, partly because that's the exact same image I started with when I was doing the synthetic HDR experiments, and partly because most third-party software makes a mess of decoding monochrome DNG files (this is the Megavision's native format, but Adobe's own DNG decoder insists on trying to decode them as colour images (it tries to interpolate for a Bayer matrix that actually isn't there), so you lose quality and gain weirdness unless you process them with Megavision's software).

You really don't need the services of a RAW converter for a greyscale camera, if you have access to an uncompressed DNG.  You can load uncompressed DNGs into photoshop literally with "load as" and ".raw", and supply the correct parameters in the dialogue.  You can blackpoint the image if not done already in the RAW file, and then use "Levels" for gamma-adjustment.

If DCRAW supports your digital back, then all you need to do is use the "-D" option to get the literal RAW data into PSD format (with the parameter for 16-bit PSD).
st326
Jr. Member, Posts: 62
« Reply #24 on: March 30, 2007, 07:56:50 AM »

Quote
I am no scientist by any means, I am a musician, but yes, I can see what you mean! This was my spontaneous thought as well, as I am much more interested in print results than anything else.

Your in-depth knowledge and the way you tackle the challenge speak for themselves!

I regret that I can't contribute on a technical level, but maybe just one thought: you mention Photoshop's HDR function, and I would like to think that "Photomatix" http://www.hdrsoft.com/ offers a higher standard for comparison.
Wow. This is a very generous touch indeed.

You appear to be one of those people with 72-hour days instead of 24.

As I've said, I'm not that impressed with Photoshop's merge to HDR function. The best way to do this is to actually solve the equations and come up with an exact solution -- I'll have a go at it over the weekend and see what comes out.
st326
Jr. Member, Posts: 62
« Reply #25 on: March 30, 2007, 07:59:10 AM »

Quote
You really don't need the services of a RAW converter for a greyscale camera, if you have access to an uncompressed DNG.  You can load uncompressed DNGs into photoshop literally with "load as" and ".raw", and supply the correct parameters in the dialogue.  You can blackpoint the image if not done already in the RAW file, and then use "Levels" for gamma-adjustment.

If DCRAW supports your digital back, then all you need to do is use the "-D" option to get the literal RAW data into PSD format (with the parameter for 16-bit PSD).

Adobe's RAW support stuffs up Megavision images, unfortunately. Superficially, it will read the files, but it tries to interpolate them as if they had a Bayer matrix, which they don't, so you end up with a loss of resolution and some weird colour artifacts.
« Last Edit: March 30, 2007, 03:52:26 PM by st326 »
st326
Jr. Member, Posts: 62
« Reply #26 on: March 30, 2007, 03:34:53 PM »

Quote
Adobe's RAW support stuffs up Megavision images, unfortunately. Superficially, it will read the files, but it tries to interpolate them as if they had a Bayer matrix, which they don't, so you end up with a loss of resolution and some weird colour artifacts.

Hmm... I just had another go at it, and it seems to work now -- Adobe have likely fixed something in one of their recent updates. This is definitely good, because I rather prefer ACR to the Megavision software. I'll give it another try.
« Last Edit: March 30, 2007, 03:52:52 PM by st326 »
Tim Gray
Sr. Member, Posts: 2002
« Reply #27 on: March 30, 2007, 05:39:38 PM »

I used to do blends from a single exposure, but haven't since I started to use the parametric controls in Lightroom.

If I use something like:

Exposure 0
Recovery 0
Fill light 21
Black 0
Brightness 104

Highlights -100
Lights +70
Darks +82
Shadows +100

I seem to get something that looks a lot like the HDR image (blown highlights included) -- or am I not pixel-peeping sufficiently? In this case I could blend multiple images using the CTRL ALT ~ method and not increase the blown pixels.

The point is that there appear to be very few 0,0,0 pixels in the shadows, so it's not surprising that a fair amount of detail can be recovered given the bit depth of the image.
Monito
Jr. Member, Posts: 96
« Reply #28 on: March 30, 2007, 06:08:30 PM »

Quote
As I've said, I'm not that impressed with Photoshop's merge to HDR function.
You can't make a good HDR without good source images.  I followed the link to your writeup.  I'm sorry to say, but your source images aren't a good set for shadow detail.  You have taken too many that are too dark and not enough with increased exposure for shadow detail.

MonitoPhoto (Landscape, Architecture, Portraits: Halifax, Nova Scotia)
Monito
Jr. Member, Posts: 96
« Reply #29 on: March 30, 2007, 06:18:45 PM »

You encountered a memory problem merging seven 12-bit, 16-MPixel images on a 4 GB machine.  Have you set your Photoshop to cross the 1.5 GB Photoshop memory limit?  I don't have the details on how to do it, since I've not had the memory to do it.

I have merged 9 images that are 16 bit TIFF files, 12 MPixel each (4368 x 2912) on the 1.5 GB machine without difficulty.  That's nine files, 75 MB each, producing a 131 MB PSD file, which after some workups became a 303 MB PSD file.  Not too different from the merge you attempted.  Photoshop CS2.

st326
Jr. Member, Posts: 62
« Reply #30 on: March 30, 2007, 06:20:16 PM »

Quote
You can't make a good HDR without good source images.  I followed the link to your writeup.  I'm sorry to say, but your source images aren't a good set for shadow detail.  You have taken too many that are too dark and not enough with increased exposure for shadow detail.

Possibly so, but the point of the article was recovering shadow detail from images that *don't* already have sufficient shadow detail. The traditional HDR image was really just given as a comparison.
st326
Jr. Member, Posts: 62
« Reply #31 on: March 30, 2007, 06:29:11 PM »

Quote
You encountered a memory problem merging seven 12-bit, 16-MPixel images on a 4 GB machine.  Have you set your Photoshop to cross the 1.5 GB Photoshop memory limit?  I don't have the details on how to do it, since I've not had the memory to do it.

I have merged 9 images that are 16 bit TIFF files, 12 MPixel each (4368 x 2912) on the 1.5 GB machine without difficulty.  That's nine files, 75 MB each, producing a 131 MB PSD file, which after some workups became a 303 MB PSD file.  Not too different from the merge you attempted.  Photoshop CS2.

Yes, Photoshop keeled over with a crash when I tried it. I didn't realise it had a 1.5 GB limit -- I'll look into that. I specifically built this machine for dealing with large images: not so much the 32 MB files from the Megavision as the 300 MB images from the Better Light scan back I use. I quite frequently end up working on 1 GB+ Photoshop files -- it doesn't take much when the base image is that big, by the time you create a couple of adjustment layers.

I was a bit surprised at the crash, to be honest. I wasn't expecting a problem with that many images. In one previous project I shot about 70 frames (again with the Megavision back) of the same street scene, then composited them. I'm pretty sure I managed to composite about 30 or so frames at a time, then did a comp-of-comps to get the final image. I did choose 8-bit images, admittedly, because the originals had plenty of contrast anyway and didn't need much adjustment, but even so, keeling over on just seven 16-bit monochrome 4k x 4k images wasn't impressive.
Monito
Jr. Member, Posts: 96
« Reply #32 on: March 30, 2007, 08:02:16 PM »

I read the article, st326.  A very interesting use of convolution.  You trade away resolution to get greater bit depth.

I wonder if kind of the opposite could be done.  Astronomers use image stacking for noise reduction, which one might loosely describe as using multiple images to get better resolution in the luminance (and colour) space with the same dynamic range.  The better resolution matters mainly in the blacks, where noise is relatively large, so it is a noise-reduction system.

I wonder if one could use stacking of images to obtain a higher-resolution picture of a scene.  It might involve shifting the camera ever so slightly between shots to increase the variation, so that each ray is not going through exactly the same bit of glass or dust.  Perhaps lightly tapping the tripod might be enough.

So instead of using perhaps four 10-MPixel images taken with an 85mm lens in a 2 x 2 pano to make a 32-MPixel image (some lost due to overlap for stitching), one could make four 10-MPixel images using a 50mm lens and stack them for higher resolution, perhaps approaching 40 MPixel.

I think in principle it could be done. One way might be to superimpose the four images to align the layers (if there has been camera movement; it might not even be necessary to deliberately shift the camera).  From each pixel, create a 2x2 area in the super-image; for each 2x2 area, take one pixel from each of the four layers to fill it out.

The process I have outlined seems naive, and I'd have to think about the math a bit, since there might be a more refined solution.  It seems naive because the assignment to individual cells of the 2x2 areas is arbitrary.
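The naive 2x2 assignment sketched above can be written down directly. This is only an illustration of the interleaving step under the stated assumption that the four frames are already registered; the cell ordering is the arbitrary part the post mentions:

```python
import numpy as np

def interleave_2x2(a, b, c, d):
    """Four aligned same-size frames -> one double-resolution image, with
    each frame supplying one cell of every 2x2 block of the super-image."""
    a, b, c, d = (np.asarray(f) for f in (a, b, c, d))
    h, w = a.shape
    out = np.empty((2 * h, 2 * w), dtype=a.dtype)
    out[0::2, 0::2] = a   # top-left cells
    out[0::2, 1::2] = b   # top-right cells
    out[1::2, 0::2] = c   # bottom-left cells
    out[1::2, 1::2] = d   # bottom-right cells
    return out
```

A more refined solution would estimate the actual sub-pixel offsets between frames and assign cells accordingly, rather than fixing the order in advance.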

John Sheehy
Sr. Member, Posts: 838
« Reply #33 on: March 30, 2007, 09:28:12 PM »

Quote
Adobe's RAW support stuffs up Megavision images, unfortunately. Superficially, it will read the files, but it tries to interpolate them as if they had a Bayer matrix, which they don't, so you end up with a loss of resolution and some weird colour artifacts.

I'm not talking about RAW converters at all; you don't need RAW conversion for greyscale.  RAW converters do all sorts of color magic; greyscale RAW is pretty much ready to go, except that black may not be zero, depending on the camera or back, and gamma may not be applied.  IOW, you can do everything a converter would do by simply loading the raw RAW image, and using the Levels tool.

It doesn't matter whether the DNG converter thinks the image is color or greyscale.  Just because a converter interprets the DNG as CFA doesn't necessarily mean that the DNG file itself has a mosaiced image in it.  DNG stores greyscale and CFA in the same format; only the tags should differ.  If you convert the DNG to uncompressed DNG (if it isn't already), you can rip the grey RAW bitmap right out of it in the Photoshop ".raw" dialogue.  It usually sits in literal English reading order, right after the header, up to the end of the file.
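The "rip the bitmap out" step might look like this in code. An illustration only: it assumes, as the post describes, that the pixel data is a single contiguous block running to the end of the file, and it guesses big-endian 16-bit samples -- the real byte order and offset would have to be checked against the DNG's TIFF tags:

```python
import numpy as np

def grey_from_uncompressed_dng(data, width, height):
    """Take the last width*height 16-bit samples of the file bytes as the
    greyscale bitmap, per the 'header first, pixels to end-of-file' layout."""
    dtype = np.dtype('>u2')              # assumed: big-endian 16-bit samples
    n = width * height * dtype.itemsize  # bytes of pixel data expected
    pixels = np.frombuffer(data[-n:], dtype=dtype)
    return pixels.reshape(height, width)
```

This mirrors what the Photoshop "open as .raw" dialogue does when you supply the dimensions, bit depth, and header-skip by hand.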
« Last Edit: March 30, 2007, 09:29:37 PM by John Sheehy »
st326
Jr. Member, Posts: 62
« Reply #34 on: March 30, 2007, 09:45:29 PM »

Quote
I read the article, st326.  A very interesting use of convolution.  You trade away resolution to get greater bit depth.

I wonder if kind of the opposite could be done.  Astronomers use image stacking for noise reduction, which one might loosely describe as using multiple images to get better resolution in the luminance (and colour) space with the same dynamic range.  The better resolution matters mainly in the blacks, where noise is relatively large, so it is a noise-reduction system.

I wonder if one could use stacking of images to obtain a higher-resolution picture of a scene.  It might involve shifting the camera ever so slightly between shots to increase the variation, so that each ray is not going through exactly the same bit of glass or dust.  Perhaps lightly tapping the tripod might be enough.

So instead of using perhaps four 10-MPixel images taken with an 85mm lens in a 2 x 2 pano to make a 32-MPixel image (some lost due to overlap for stitching), one could make four 10-MPixel images using a 50mm lens and stack them for higher resolution, perhaps approaching 40 MPixel.

I think in principle it could be done. One way might be to superimpose the four images to align the layers (if there has been camera movement; it might not even be necessary to deliberately shift the camera).  From each pixel, create a 2x2 area in the super-image; for each 2x2 area, take one pixel from each of the four layers to fill it out.

The process I have outlined seems naive, and I'd have to think about the math a bit, since there might be a more refined solution.  It seems naive because the assignment to individual cells of the 2x2 areas is arbitrary.

Actually, there are a few medium format backs that automatically do 'multi-shot' captures, where they move the image sensor a few microns in the relevant direction between captures. Usually it's done to get around moiré fringing on colour sensors, but it can also be used to give greater resolution.

The only real problem is that relatively few lenses have enough resolution at the film plane for it to make a noticeable difference -- some certainly do, particularly lenses made recently by Schneider and Rodenstock specifically for use with mini-view cameras, but most aren't really quite sharp enough. I dare say most of my Bronica lenses would work for it, but they are (relatively) modern designs.

I've certainly had some good results making stitched panos with my Schneider shift/tilt lens on the Bronica/Megavision -- I can get about 6500x4000 out of it from two shots. I've not tried going for 4 shots, but my guess is I'd get about 6k x 6k, because you can't shift quite as far if you try to do it in both directions at once. Mind you, for huge resolution there's not much that can beat the Better Light, so I tend to use that wherever I can physically get it to the location.

I know that in astronomy it's common to do interferometry with multiple telescopes, simulating very large apertures to get very fine definition -- I'm not sure what kind of image processing they actually use. I did once do some work for a lens manufacturer that had an interferometer used to measure lens and mirror surfaces to insane accuracies, but it seemed to rely on the quantum nature of light, so I'm not sure what would be involved.
st326
Jr. Member, Posts: 62
« Reply #35 on: March 30, 2007, 09:47:08 PM »

Quote
I wonder if kind of the opposite could be done.

Hmm... I'm not sure if the transform is invertible directly, or what exactly it would mean if it was. I'll think about that further. Food for thought.
Ray
Sr. Member, Posts: 8939
« Reply #36 on: March 30, 2007, 11:01:11 PM »

Sorry for the delay. You've got me confused here, Sarah. I thought the purpose of the exercise was to retrieve shadow detail.

Here's your tonemapped PSD version which seems to be greatly lacking in shadow detail, but does have a few blown highlights. I've converted it to grayscale to reduce file size.

[attachment=2207:attachment]

It's a bit cheeky to ask whether you're working on a calibrated monitor, in view of your qualifications and expertise, but I do wonder.

This is my attempt to bring out shadow detail without blowing highlights too much, using straight processing in PS, selection, levels and curves etc.

[attachment=2208:attachment]

As you can see, I've brought out much more shadow detail than is evident in your tonemapped version whilst also preserving highlights, although I admit the shiny metal parts are a bit lacklustre. I might work some more on that.
st326
Jr. Member, Posts: 62
« Reply #37 on: March 31, 2007, 12:02:15 AM »

Quote
Sorry for the delay. You've got me confused here, Sarah. I thought the purpose of the exercise was to retrieve shadow detail.

Here's your tonemapped PSD version which seems to be greatly lacking in shadow detail, but does have a few blown highlights. I've converted it to grayscale to reduce file size.

[attachment=2207:attachment]

It's a bit cheeky to ask whether you're working on a calibrated monitor, in view of your qualifications and expertise, but I do wonder.

This is my attempt to bring out shadow detail without blowing highlights too much, using straight processing in PS, selection, levels and curves etc.

[attachment=2208:attachment]

As you can see, I've brought out much more shadow detail than is evident in your tonemapped version whilst also preserving highlights, although I admit the shiny metal parts are a bit lacklustre. I might work some more on that.

Not a bad result, Ray, but the shadow area looks a bit over-grainy -- something that seems pretty much completely absent in the synthetic HDR image. But then, that was kind-of the point of the exercise. :-)

Actually, for what it's worth, my own rendition of this photo is completely different:



My visualisation was always aimed at having completely black shadows, which is why I used the black towel as a background. This is actually a comp of two separate exposures about 2 or 3 stops apart. I initially played around with HDR merge because I had problems with highlights blowing out, and I tried using that technique to get them under control. I didn't find the result usable, but I got interested in the idea of pulling that much more out of shadow areas generally -- not for this particular image, because it didn't fit my visualisation, but as a general principle.

When I had the idea of using convolution in a binning style and then using HDR techniques to re-merge the image, I looked through the images I had to hand, and this one was the best I could find in terms of having a large amount of dynamic range to start with, as well as having been shot as an HDR sequence (which I didn't really use).
« Last Edit: March 31, 2007, 12:13:02 AM by st326 »
Ray
Sr. Member, Posts: 8939
« Reply #38 on: March 31, 2007, 12:51:43 AM »

Okay! Got you. Your tonemapped image does contain less noise in the shadows, when one brings them out. (Ignore my previous post if you like.)

However, I have to say, if I was converting a RAW file of this image and saw that degree of noise in my rendition below, I'd do another conversion either using luminance smoothing in ACR or noise reduction in RSP. A small adjustment at the conversion stage can have a quite noticeable effect on noise without reducing resolution to any significant degree that can't be compensated for with appropriate sharpening.

The mathematics of lower quantization errors might be clear to you, but this issue won't be resolved for me in practical terms until you can provide a RAW file that will open in ACR, and preferably RSP as well. I don't suppose you could borrow a Canon 5D and take a shot of a particularly contrasty scene, could you?

If we are talking about fewer quantization errors in the shadows, I believe a simple dual conversion in ACR can provide a marginal improvement, but so marginal in my experience, that the added difficulties of getting a good blend without halos does not make that procedure worth the trouble, in my view. But perhaps my skills in PS are not up to the job.


Below are the lower right corners. My image, of course, is the one with less blown highlight area. The noise reduction in your tone-mapped image is significant, and I failed to get a similar noise reduction in my image using Neat Image. I suppose one could argue that whatever noise reduction one achieves at the conversion stage, further noise reduction can be achieved with your method. If that's the case, I look forward to using your plug-in.

[attachment=2209:attachment]  [attachment=2210:attachment]
st326
Jr. Member, Posts: 62
« Reply #39 on: March 31, 2007, 01:00:54 AM »

Quote
Okay! Got you. Your tonemapped image does contain less noise in the shadows, when one brings them out. (Ignore my previous post if you like.)

However, I have to say, if I was converting a RAW file of this image and saw that degree of noise in my rendition below, I'd do another conversion either using luminance smoothing in ACR or noise reduction in RSP. A small adjustment at the conversion stage can have a quite noticeable effect on noise without reducing resolution to any significant degree that can't be compensated for with appropriate sharpening.

The mathematics of lower quantization errors might be clear to you, but this issue won't be resolved for me in practical terms until you can provide a RAW file that will open in ACR, and preferably RSP as well. I don't suppose you could borrow a Canon 5D and take a shot of a particularly contrasty scene, could you?

If we are talking about fewer quantization errors in the shadows, I believe a simple dual conversion in ACR can provide a marginal improvement, but so marginal in my experience, that the added difficulties of getting a good blend without halos does not make that procedure worth the trouble, in my view. But perhaps my skills in PS are not up to the job.
Below are the lower right corners. My image, of course, is the one with less blown highlight area. The noise reduction in your tone-mapped image is significant, and I failed to get a similar noise reduction in my image using Neat Image. I suppose one could argue that whatever noise reduction one achieves at the conversion stage, further noise reduction can be achieved with your method. If that's the case, I look forward to using your plug-in.

[attachment=2209:attachment]  [attachment=2210:attachment]

Phew, it was a long road, but we got there in the end! ;-)

I'm uploading the DNG now. As it seems that recent versions of ACR don't screw it up, you might as well have a play with it if you like. I'd certainly be interested in seeing what NR tools can do with it. It's worth reiterating that the synthetic HDR algorithm really isn't intended to be a noise reduction algorithm, though admittedly it does seem to have that effect (quite strongly). I'll have a think about adding a bit of for-real NR filtering when I do the plugin. It could be that I could do some despeckling or median filtering or something alongside the basic algorithm. I'll have some fun with that over the weekend. :-)