Author Topic: The State of Vector Conversion Image Enlargement?  (Read 6615 times)
Hening Bettermann — « Reply #20 on: August 14, 2012, 01:12:57 PM »

Bart,

To what degree would this vectorizing be able to replace Super Resolution stacking, and thus make multiple shots unnecessary? SR doubles the pixel count, which does not sound like much in comparison.

I hope this is not hijacking the thread.

Good light - Hening

hjulenissen — « Reply #21 on: August 14, 2012, 01:22:59 PM »

Quote
SR doubles the pixel count, which does not sound much in comparison.
I don't believe there is such a hard limit in SR. It is more a setup-dependent point of diminishing returns, depending on:
-number of images
-lens PSF

-h
Hening Bettermann — « Reply #22 on: August 14, 2012, 03:00:31 PM »

What you write sounds reasonable, in principle. But in the real world there is, to my knowledge, only one app which does this: PhotoAcute, and that is limited to 2 times the original pixel count.
Good light - Hening

hjulenissen — « Reply #23 on: August 14, 2012, 03:15:50 PM »

Quote
What you write sounds reasonable, in principle. But in the real world there is, to my knowledge, only one app which does this: PhotoAcute, and that is limited to 2 times the original pixel count.
Good light - Hening
Try googling "image super resolution software". I got 20,600,000 results, many of which seem to be commercial applications that claim to do multi-frame super-resolution.

http://photoacute.com/tech/superresolution_faq.html
Quote
Q: What levels of increased resolution are realistic?
A: It is highly variable, depending on the optical system, exposure conditions, and what post-processing is applied. As a rule of thumb, you can expect an increase of 2x effective resolution from a real-life average system (see MTF measurements) using our methods. We've seen up to 4x increases in some cases. You can get even higher results under controlled laboratory conditions, but that's only of theoretical interest.

At some point, it does not make sense to keep doubling the number of exposures for a diminishing return. Most camera setups run into the lens MTF/camera shake limit sooner or later (a naive implementation of SR really only addresses the limitation of sensor resolution, relying on aliasing for its results):
Quote
Digital cameras usually have anti-aliasing filters in front of the sensors. Such filters prevent the appearance of aliasing artifacts, simply blurring high-frequency patterns. With the ideal anti-aliasing filter, the patterns shown above would have been imaged as a completely uniform grey field. Fortunately for us, no ideal anti-aliasing filter exists and in a real camera the aliased components are just attenuated to some degree.

I think that the problem you are raising is kind of backwards. If super-resolution allows you to fuse N images into one that has twice the native resolution of your camera, this super-resolution image is going to be the ideal starting point for further image scaling. Anyhow, anyone can claim that their algorithm does "400% upscaling" or "1600% upscaling". That is trivial. The question is how the result looks. That is not trivial.
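As an illustration of the multi-frame idea, here is a minimal shift-and-add sketch (not what PhotoAcute actually implements — real SR must estimate the sub-pixel offsets and deal with the PSF and noise; here the offsets are assumed known and exact):

```python
import numpy as np

def shift_and_add(frames, offsets, scale=2):
    """Interleave low-res frames onto a grid with `scale`x the resolution.
    frames: list of (H, W) arrays; offsets: matching (dy, dx) shifts in
    low-res pixels, assumed to be multiples of 1/scale."""
    h, w = frames[0].shape
    out = np.zeros((h * scale, w * scale))
    hits = np.zeros_like(out)
    for frame, (dy, dx) in zip(frames, offsets):
        iy = int(round(dy * scale)) % scale
        ix = int(round(dx * scale)) % scale
        out[iy::scale, ix::scale] += frame
        hits[iy::scale, ix::scale] += 1
    hits[hits == 0] = 1          # avoid dividing by zero at unfilled sites
    return out / hits

# Toy example: sample a known "high-res scene" at four half-pixel shifts,
# then recover the full grid from the four quarter-resolution frames.
hi = np.arange(64, dtype=float).reshape(8, 8)
offsets = [(0, 0), (0, 0.5), (0.5, 0), (0.5, 0.5)]
frames = [hi[int(dy * 2)::2, int(dx * 2)::2] for dy, dx in offsets]
rec = shift_and_add(frames, offsets)
print(np.allclose(rec, hi))  # True: every high-res sample was recovered
```

With ideal point sampling and exact offsets the reconstruction is perfect; in practice the lens/sensor PSF mixes neighboring samples, which is why SR and deconvolution are so closely linked.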

-h
« Last Edit: August 14, 2012, 03:19:38 PM by hjulenissen »
dwdallam — « Reply #24 on: August 14, 2012, 04:04:31 PM »

Quote
Do you have any references, or is this just a wish-list?

-h

It's an extrapolation based on current technology that can reproduce graphics perfectly, though only simple graphics. I also base my comment on the fact that entire automobiles can be constructed in vector programs and scaled to any size without loss.

Yes, there are gaps in my position. I admit that. But if you're saying that mathematical models working in concert with vector algorithms will never be able to reproduce bitmap images perfectly (meaning there is no difference to the human eye at any resolution or viewing distance), then the burden of proof is on you.
« Last Edit: August 14, 2012, 04:11:02 PM by dwdallam »

dwdallam — « Reply #25 on: August 14, 2012, 04:07:41 PM »

Quote
Are you saying to re-sample at 600ppi and then do the sharpening etc as opposed to re-sampling at 300ppi?

Yes, and also don't underestimate the benefit of sharpening and printing at 600 PPI. You can sharpen more (with a small radius) at 600 PPI (or 720 PPI for Epsons) because artifacts will probably be too small to see. What's more, boosting the modulation of the highest spatial frequencies also lifts the other/lower spatial frequencies, so the total image becomes more defined.
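To make the small-radius point concrete, here is a bare-bones unsharp mask sketch (numpy only; the sigma and amount values are illustrative). At 600 PPI a given pixel radius covers half the physical distance it would at 300 PPI, so the overshoot halos it produces are half the size on paper:

```python
import numpy as np

def gaussian_kernel(sigma):
    # 1-D Gaussian, truncated at ~3 sigma and normalized to sum to 1.
    r = int(3 * sigma) + 1
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

def unsharp_mask(img, sigma=1.0, amount=0.8):
    # Separable Gaussian blur (rows, then columns), then add back the
    # difference between the original and the blur, scaled by `amount`.
    k = gaussian_kernel(sigma)
    blur = np.apply_along_axis(lambda row: np.convolve(row, k, 'same'), 1, img)
    blur = np.apply_along_axis(lambda col: np.convolve(col, k, 'same'), 0, blur)
    return img + amount * (img - blur)

# A vertical step edge: sharpening overshoots on both sides of the edge,
# which is exactly the halo that becomes invisible at high enough PPI.
img = np.zeros((9, 9))
img[:, 5:] = 1.0
sharp = unsharp_mask(img, sigma=1.0, amount=0.8)
print(sharp.max() > 1.0, sharp.min() < 0.0)
```

The overshoot above and undershoot below the original 0..1 range is the edge-contrast boost; whether it reads as "crisp" or as a halo depends on how many printed microns one sharpened pixel spans.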

With PZP you can balance between the amount of vector edge sharpening and USM like sharpening, whatever the image requires, and you can add noise at the finest detail level.

Cheers,
Bart

dwdallam — « Reply #26 on: August 14, 2012, 04:23:09 PM »

I just read this and it is interesting. I'm going to try it and see what happens. What do you all think?

http://www.digitalphotopro.com/technique/software-technique/the-art-of-the-up-res.html

BartvanderWolf — « Reply #27 on: August 14, 2012, 05:25:50 PM »

Quote
Are you saying to re-sample at 600ppi and then do the sharpening etc as opposed to re-sampling at 300ppi?

Absolutely, yes. Of course, this assumes that there is something to sharpen at 600/720 PPI. So when the native resolution for the output size drops below 300/360 PPI, there would be little detail to sharpen at that highest level unless one uses PZP or similar resolution-adding(!) applications, but the effect will still carry over to lower spatial frequencies (after all, a 2-pixel radius at 600 PPI is still a 1-pixel radius at 300 PPI, but with more pixels to make smoother edge contrast enhancements).

Another thing is that at 600/720 PPI one has another possibility to enhance overall resampling quality, and that is with deconvolution sharpening targeted at the losses inherent in upsampling. But that's another subject, a bit beyond the scope of this thread, which deals with the specific situation of vector types of sharpening (although with blending one can get the best of both worlds).

Cheers,
Bart
hjulenissen — « Reply #28 on: August 15, 2012, 01:16:27 AM »

Quote
It's an extrapolation based on current technology that can reproduce graphics perfectly, but they are simple graphics. I also based my comment on the fact that entire automobiles can be constructed in vector programs and scaled accordingly to any size without loss of that image.
Then you are missing the point (in my humble opinion). Scaling a vector model should be relatively easy. Building a vector model from simple graphics (like text) is also doable. Building a vector model from noisy, complex, unsharp real-world images is very hard (I have tried).

There is also the problem that even though a nice vector model of, say, a car can be reproduced at any scale to produce smooth, sharp edges, blowing it up won't produce _new details_. The amount of information is still limited to the thousands or millions of vectors that represent the model. At some scale, it might be possible to "guess" the periphery of a leaf in order to smoothly represent it on finer pixel grids. But a leaf contains new, complex structures the closer you examine it. Unless that information is encoded into the pixels of the camera, good luck estimating it. You might end up with a "cartoonish" or "bilateral-filtered" image where large-scale edges are perfectly smooth, while small-scale detail is very visibly lacking.

http://en.wikipedia.org/wiki/Image_scaling

(Image enlarged 3× with the nearest-neighbor interpolation)

(Image enlarged in size by 3× with hq3x algorithm)
The results obtained using specialized pixel art algorithms are striking, but in my opinion the reason why they work so well is that the source image really is a "clean" set of easily vectorized objects, rendered with a limited color map. This is a narrow subset of the pixels that a general image can contain, and these algorithms do not work well on natural images (I have tried).
Quote
Yes, there are gaps in my position. I admit that. But if you're saying that mathematical models working in concert with vector algorithms will never be able to reproduce bitmap images perfectly (meaning there is no difference to the human eye at any resolution or viewing distance),
The Shannon-Nyquist sampling theorem actually supports the idea that a properly anti-aliased image can be reproduced at any sampling rate (= pixel density). The thing is that "properly anti-aliased" actually means a band-limited waveform, i.e. fine detail must be removed. If you can live with that, everything else reduces to simple linear filters that fit nicely into existing CPU hardware.
http://en.wikipedia.org/wiki/Nyquist–Shannon_sampling_theorem
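That claim is easy to check numerically. The sketch below resamples a band-limited, periodic test signal by zero-padding its FFT spectrum (a simplifying assumption: real images are neither periodic nor perfectly band-limited, so practical resamplers need windowing and edge handling):

```python
import numpy as np

def fft_upsample(x, factor):
    """Upsample a periodic, band-limited signal by zero-padding its spectrum.
    Assumes no energy at the Nyquist bin (true for the test signal below)."""
    n, m = len(x), len(x) * factor
    X = np.fft.fft(x)
    Y = np.zeros(m, dtype=complex)
    h = n // 2
    Y[:h] = X[:h]        # positive frequencies, copied unchanged
    Y[m - h:] = X[h:]    # negative frequencies, moved to the new tail
    return np.fft.ifft(Y).real * factor

# A band-limited signal: 3 cycles over 16 samples, well under Nyquist (8).
coarse = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
fine = fft_upsample(coarse, 4)
# Compare against direct samples of the same continuous waveform at 4x density.
target = np.cos(2 * np.pi * 3 * np.arange(64) / 64)
print(np.max(np.abs(fine - target)))  # ~1e-15: exact to machine precision
```

The catch, as stated above, is the premise: the waveform must already be band-limited. The exactness disappears the moment the signal contains detail above the original Nyquist rate — which is precisely the detail upscalers are asked to conjure.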

I interpret your position to be that, say, a VGA color image (640x480 pixels) at 24 bits per pixel can be upscaled to any size/resolution and be visually indistinguishable from an image at that native resolution. I am very skeptical of such a view. Do you really think that future upscaling will make a $200 Ixus look as good as a D800?

I shall give you an exotic example; hopefully you will see the general point that I am making. Say that you are shooting an image of a television screen showing static noise, using a 1 megapixel camera. You obtain 1 megapixel of "information" about that static. Now shoot the same television using a 0.3 megapixel camera. The information is limited to 0.3 megapixel. As there is (ideally) no correspondence between pixels at different scales, the low-res image simply does not contain the information needed to recreate the large one, and no algorithm in the world can guess the accurate outcome of a true white-noise process.

http://en.wikipedia.org/wiki/Information_theory

Say that you have high-rez image A and high-rez image B. When downsampled, they produce an identical image, C (may be unlikely, but clearly possible). If you only have image C, should an ideal upscaler produce A or B?

I think that I have introduced sufficient philosophical and algorithmic issues that your claim that it is only a matter of cpu cycles is weakened.
Quote
then the burden of proof is on you.
You put out certain claims. I am skeptical of those claims. The burden of proof obviously is on you. I shall try to support my own claims.

http://en.wikipedia.org/wiki/Philosophical_burden_of_proof
Quote
"When debating any issue, there is an implicit burden of proof on the person asserting a claim. "If this responsibility or burden of proof is shifted to a critic, the fallacy of appealing to ignorance is committed"."

-h
« Last Edit: August 15, 2012, 01:37:21 AM by hjulenissen »
Hening Bettermann — « Reply #29 on: August 16, 2012, 05:38:42 AM »

Quote BartvanderWolf, August 14, 2012, 05:25:50 PM:
Another thing is that at 600/720 PPI one has another possibility to enhance overall resampling quality, and that is with deconvolution sharpening,

Bart,
now we have 3 ways of upsizing/improving sharpness under discussion:
Super Resolution, deconvolution, and vector uprezzing. How would you suggest combining them?
(By the way, is deconvolution limited by a certain minimum resolution, like 600 dpi?)

Good light! Hening

hjulenissen — « Reply #30 on: August 16, 2012, 06:20:38 AM »

This article seems to provide an overview of SR vs. deconvolution vs. "upscaling":
http://www-sipl.technion.ac.il/new/Teaching/Projects/Spring2006/01203207.pdf

The natural order, I believe, would be super-resolution → deconvolution → upsampling, since SR and deconvolution are strongly connected to physical characteristics of the camera (and SR is strongly connected to the PSF/deconvolution).

Most of the literature is concerned with blind deconvolution. The devotion of users on this forum suggests that people are willing to estimate the PSF independently. That should greatly simplify the problem.

Quote
(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)
Think of deconvolution as "sharpening done better", at the cost of more parameters to input/estimate and more CPU cycles. Like sharpening, you can do deconvolution on an image of any resolution.
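As a concrete, if bare-bones, sketch of what "sharpening done better" means: the Richardson-Lucy iteration below deconvolves an image with a known PSF, using FFT-based circular convolution. This is an illustrative toy — the PSF is an assumed synthetic Gaussian, and production tools add noise regularization and edge treatment:

```python
import numpy as np

def convolve(img, psf):
    # Circular convolution via the FFT; assumes the PSF is the same shape as
    # the image and centered at the origin (wrap-around convention).
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(psf)))

def richardson_lucy(blurred, psf, iters=100):
    # psf_adj[k] = psf[-k mod N]: the flipped PSF used in the correction step.
    psf_adj = np.roll(np.flip(psf), 1, axis=(0, 1))
    est = np.full_like(blurred, blurred.mean())      # flat initial estimate
    for _ in range(iters):
        ratio = blurred / np.maximum(convolve(est, psf), 1e-12)
        est = est * convolve(ratio, psf_adj)         # multiplicative update
    return est

# Synthetic test: two point sources blurred by a wrapped Gaussian PSF.
n = 16
truth = np.zeros((n, n))
truth[8, 8], truth[4, 10] = 1.0, 0.5
y, x = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
d2 = np.minimum(y, n - y) ** 2 + np.minimum(x, n - x) ** 2
psf = np.exp(-d2 / 2.0)
psf /= psf.sum()
blurred = convolve(truth, psf)
restored = richardson_lucy(blurred, psf)
# The restored image should be closer to the truth than the blurred input.
print(np.linalg.norm(restored - truth) < np.linalg.norm(blurred - truth))
```

The extra inputs compared with plain sharpening are exactly what the posts above discuss: you need the PSF (measured or estimated), and you pay in iterations of CPU time.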

-h
« Last Edit: August 16, 2012, 06:30:59 AM by hjulenissen »
nemophoto — « Reply #31 on: August 16, 2012, 09:26:39 AM »

This is a topic that comes up periodically. It essentially boils down to individual taste and experience. I for one have used Genuine Fractals (now Perfect Resize) and Alien Skin Blow Up for years. As a matter of fact, my experience with GF goes back 15+ years, when you had to save the image in a dedicated file format. Now I mostly use Blow Up, partially because of my frustration with on-screen rendering speed in Perfect Resize since they adopted OpenGL and the GPU for rendering.

For one of my clients, I regularly use the software to create images for in-store posters, and am generally enlarging 150-200%. Most recently, though, for some shows, I created 40x60 blowups for printing on canvas with my iPF8300. These images, at 300 dpi, often hit the scales at 900MB, and sometimes 1.4GB, and required enlarging about 340%. (One enlargement was even greater because I had accidentally shot the image in MEDIUM JPEG on my 1Ds Mark II, so it was a 9MP file.) If you pixel peep at 100%, you will see what appear to be nasty and weird artifacts. However, the truer view is 50%, and really, at that size, 25%. Then you'll see what most viewers see in printed form from the proper distance of about 3'-5'. That said, many people still got within inches.

I think these programs offer a superior result over bicubic and the like, but the results are most noticeable when enlargement approaches 200%. For something in the 110-135% range, it doesn't make sense to use a plugin that takes longer to render for marginally better results. One of the main things one gains is edge sharpness. Our eyes perceive contrast (and therefore sharpness) before we even start to perceive things like color, and this is the strong suit of programs such as Perfect Resize and Blow Up 3.

BartvanderWolf — « Reply #32 on: August 16, 2012, 10:34:26 AM »

Quote Hening Bettermann, August 16, 2012, 05:38:42 AM:
Bart,
now we have 3 ways of upsizing/improving sharpness under discussion:
Super Resolution, Deconvolution, and vector uprezzing. How would you suggest to combine them?

Hi Hening,

1. Single-image capture produces inherently blurry images. Even if not by subject motion or camera shake, we have to deal with lens aberrations, diffraction, anti-aliasing filters, area sensors, and Bayer CFA demosaicing; in other words, blurry images. A prime candidate to address that capture blur is deconvolution, because the blur can usually be characterized quite predictably with prior calibration, and the Point Spread Function (PSF) that describes that combined mix of blur sources can be used to reverse part of the blur in the capture process. If done well, it will not introduce artifacts that can later become a problem when blown up in size and thus visibility.

2. Second would be techniques like Super Resolution, but that usually requires multiple images with sub-pixel offsets, so it may not be too practical for some shooting scenarios. Single-image SR depends heavily on suitable image elements in the same image that can be reused to invent credible new detail at a smaller scale, so not all images are suitable. I would also put fractal-based upscaling in that category: sometimes it works, sometimes it doesn't, and even within the same image there are areas that work better than others.

3. Third is a combination of good-quality upsampling with vector-based resampling. The vector-based approach favors edge detail (which is very important in human vision), so it is best combined with high-quality traditional resampling for upsampling. The traditional resampling should balance ringing, blocking, and blurring artifacts. Downsampling is best done with good-quality pre-filtered downsampling techniques (to avoid aliasing artifacts).

Quote
(btw is deconvolution limited by a certain minimum resolution like 600 dpi?)

No, there is no real limit other than that it is a processing intensive procedure, and therefore it takes longer when the image is larger. So ultimately memory constraints and processing time are the practical limitations, but fundamentally it is not limited by size.

Quote
Good light!

Same to you, Cheers,
Bart
Hening Bettermann — « Reply #33 on: August 16, 2012, 02:33:17 PM »

Thanks to all three of you for your answers!

The sequence SR → deconvolution would match my current workflow: SR in PhotoAcute, output as DNG, then conversion to TIFF in Raw Developer with deconvolution, then editing the TIFF - so vector uprezzing could be added there if required.
The sequence deconvolution → SR would, ideally, require deconvolution independent of raw conversion, since raw input is best for SR - at least in PhotoAcute.

Anyway, it looks like multiple frames for SR are still required.

Good light! - Hening.

hjulenissen — « Reply #34 on: August 16, 2012, 03:06:10 PM »

Quote
Anyway, it looks like multiple frames for SR are still required.
Yes, I like to think of multiple frames as required by SR by definition. Clever upscaling using a single image is just... clever upscaling. SR exploits the slight variations in aliasing between several slightly shifted images, which reveal different details about the true scene.

-h
bill t. — « Reply #35 on: August 16, 2012, 05:49:41 PM »

By coincidence, a charity I help asked me to print some really grungy, over-processed stock shots. My Alien Skin Blow Up 3 trial was still valid, so I used it. I was amazed how nicely the stair-stepping and over-sharpened grizzle dissolved into something not exactly credible, but hugely more acceptable than the original. Also, Blow Up 3 has by far the nicest integration into LR and PS of any similar program I have used. It is also the fastest, and seems to handle large files with ease.

So a grudging thumbs up. If you do service printing, you need it. It may not always produce the technically best results, but I think that for commercial customers the exceptional smoothness of the result would trump most every other issue. It offers no specific JPEG artifact reduction, though; it would be nice if it had that too.