Pages: « 1 ... 10 11 [12] 13 14 ... 20 »
Author Topic: If Thomas designed a new Photoshop for photographers now...  (Read 59133 times)
MHMG (Sr. Member, 596 posts)
« Reply #220 on: May 13, 2013, 04:22:53 PM »
+1 to Jeff Schewe's remarks about Live Picture (LP). It was way ahead of its time (including soft proofing), with features that still haven't been duplicated by other image editing programs. For example, a brush behavior that PS and LR still don't offer even as an option (as far as I have been able to figure out): brush size was fixed relative to screen/window size, so that when you zoomed in on an image the brush stayed the same size on screen. In other words, just like a real brush in hand, not a virtual brush stuck to a set number of image pixels that changes apparent size as image magnification changes. Also, there was no visual on-screen distinction between viewing an image at odd magnifications versus evenly divisible pixel counts (25%, 50%, 100%, etc.), as there still is today in both PS and LR4; I could inspect output sharpness at any desired magnification. This capability undoubtedly owed something to the interpolation processing of the pyramid file structure (remember the HP/Kodak FlashPix format initiative, based on the IVUE pyramid file format used in LP?) as well as to a superb anti-aliasing screen-draw algorithm in the LP software.
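The screen-fixed brush Mark describes differs from a pixel-fixed one only in how the radius is converted at paint time. A minimal sketch of the idea (function name hypothetical):

```python
def brush_radius_in_image_pixels(screen_radius_px: float, zoom: float) -> float:
    """Convert a screen-fixed brush radius into image pixels.

    zoom is the magnification factor: 1.0 = 100%, 0.5 = 50% (zoomed out).
    Because the radius is defined in screen pixels, zooming in (zoom > 1)
    makes the brush cover fewer image pixels, like a real brush in hand.
    """
    return screen_radius_px / zoom

# A 40 px on-screen brush paints an 80 px region of the image at 50% zoom,
# but only a 20 px region at 200% zoom.
```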
 
My personal recollection of that era is perhaps a bit foggy now, but LP's exclusive use of the layers metaphor (like cartoon animation cel overlays) and non-destructive editing gave it a photographer's "darkroom dodging and burning" feel that PS simply didn't have, in large part because personal computer hardware of the time couldn't give PS any real-time fluidity with big image files. So times have changed for better and for worse, but I still personally view LP as the cleanest and most elegant software I've ever used on a computer, bar none. Any programming team wanting to produce a new image editor would do well to grab some old Mac hardware and take time to play with the final release, LP 2.6. A pity that LP was managed by bean counters and marketing "experts" into an untimely death.

As others have already stated, the current versions of LR and PS are very mature, and whether by corporate marketing decree or by simple software evolutionary cycles, I still need PS for a sophistication of layers and masks that LR doesn't possess at this time. I can't do everything I need to do in LR. Part of this has to do with my active interest in fine art printmaking. A wedding or sports photographer, for example, who needs to deliver high-quality files, and lots of them, to the client is going to be thrilled with LR. But someone wanting to sculpt a single image to the very highest print standard (that's my goal) still needs PS to reach this very personal and somewhat obsessive/compulsive level of finesse!

Speaking of blue sky stuff, I can't think of an easier, and thus better, metaphor than "layers and masks". Why do we need to throw out this concept simply because it has been around the imaging industry for a long time? It is brilliant, so IMHO any truly competent image editor needs to have it. I'm aware of onOne Perfect Layers, but LR without layers and mask sophistication on a par with PS is incomplete and insufficient for my needs; their absence is the only reason I have to keep returning to PS.

While on the subject of "tried and true" image editing features like layers and masks, there is a parallel debate going on currently among computer OS designers over "files and folders". Many OS designers now say the files and folders concept is an antiquated metaphor, confusing to the young generation of smartphone users, and new mobile OSs for smartphones and tablets are increasingly designed by teams who feel we should dispense with this time-honored analogy to paper filing cabinets for records management. Seriously? The files and folders paradigm works, and it ported very well to digital records management. Why throw it away and hide where our files are kept, so that each individual application has to outsmart us to find them? Stupid, stupid, stupid. This movement to do away with files and folders will cause all sorts of file migration (and migraine) headaches for digital librarians and archivists in the near future. Hence a personal plea to all the software engineers following this thread: KEEP both the "boringly conventional" files/folders and layers/masks concepts solidly in place in whatever new image editing program you choose to give us.

Lastly, I'd like to see two small but refined updates to both PS and LR. I'd like a more robust Info palette that shows Lab and LCh values not only for the source file being edited but for the destination as well, plus the delta E and delta ab differences between them. PS still can't do that: it can give CMYK "proof" values, but Lab or LCh only for the source image data, not for the destination profile. And we need much better metadata viewers and editors: a floating palette that can be customized to show/hide metadata fields of our own choosing, with metadata editing right on that palette. PS and LR, indeed just about all software on the market today, are simply awful at metadata organization and viewing. Plenty of room for improvement there.
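The simplest version of the delta readout Mark asks for is the CIE76 difference between source and destination Lab values; real proofing math is more involved, and the function names here are illustrative:

```python
import math

def delta_e76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

def delta_ab(lab1, lab2):
    """Chromatic-only difference: ignores the lightness channel."""
    (_, a1, b1), (_, a2, b2) = lab1, lab2
    return math.hypot(a1 - a2, b1 - b2)

# Source pixel vs. its soft-proofed destination value:
delta_e76((50.0, 0.0, 0.0), (50.0, 3.0, 4.0))   # 5.0
```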

cheers,
Mark
http://www.aardenburg-imaging.com
« Last Edit: May 13, 2013, 04:53:12 PM by MHMG »
LKaven (Sr. Member, 788 posts)
« Reply #221 on: May 13, 2013, 05:03:29 PM »

Quote from: MHMG on May 13, 2013, 04:22:53 PM
Speaking of blue sky stuff, I can't think of an easier and thus better metaphor for "layers and masks". Why do we need to throw out this concept simply because it has been around for a long time in the imaging industry? It is brilliant, so IMHO, any truly competent image editor needs to have it.

It's not a question of throwing out the metaphor. In my mind, it's a question of building that metaphor as one possibility among many on top of a generalized (dataflow) architecture. Photoshop layers should not be the ground abstraction; they belong at an upper level. I think you'll see in the long run there are many better ways to go that are also "layer-like" but don't follow slavishly from the original Photoshop implementation. The original implementation is an ongoing hack ("Apply Image"? "Groups"? "Smart Objects"? Ad hoc blend modes? It's pretty far from brilliant by today's software design curriculum). You can do much better without throwing out the things you like about it. You can even offer a "compatibility module" for those who want to preserve their historical files.
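The dataflow idea can be sketched in a few lines: make the editor's ground abstraction a graph of pure nodes, and implement a conventional layer stack as just one wiring of that graph. Toy code, with single floats standing in for pixels:

```python
class Node:
    """One step in a dataflow graph: a pure function of its input nodes."""
    def __init__(self, fn, *inputs):
        self.fn, self.inputs = fn, inputs
    def render(self):
        return self.fn(*(n.render() for n in self.inputs))

def source(value):
    """A leaf node that just produces a value."""
    return Node(lambda: value)

def blend_over(opacity):
    """Normal blend at a fixed opacity."""
    return lambda base, top: base * (1 - opacity) + top * opacity

def layer_stack(base, layers):
    """A Photoshop-style layer stack, built *on top of* the graph."""
    node = base
    for top, opacity in layers:
        node = Node(blend_over(opacity), node, top)
    return node

result = layer_stack(source(0.2), [(source(1.0), 0.5)]).render()  # 0.6
```

Other "layer-like" constructs (groups, smart objects) then become alternative graph builders rather than special cases bolted onto the stack.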
jrsforums (Sr. Member, 722 posts)
« Reply #222 on: May 13, 2013, 05:53:58 PM »

Quote from: MHMG on May 13, 2013, 04:22:53 PM
While on the subject of "tried and true" image editing features like layers and masks, I can think of a parallel debate going on currently with computer OS software designers. It has to do with "files and folders". [...] KEEP both the "boringly conventional" files/folders and layers/masks concepts solidly in place on whatever new image editing software program you choose to give us.

If I remember correctly, the original Lightroom beta (alpha?) did not use the physical folder/file structure we have now. I don't remember exactly what it was, but many complained about it to the Adobe team, which, smartly, changed it.

John
Torbjörn Tapani (Newbie, 46 posts)
« Reply #223 on: May 13, 2013, 07:45:16 PM »

Good lens correction: mustache-type distortion, deconvolution of motion blur, CA, coma, sharpness maps to even out edge sharpness or field curvature, correcting nervous bokeh, oval highlights, flare/veiling removal, etc. Stuff characteristic of lenses that can be anticipated.

Selecting sets of images and creating stitches and/or stacks still editable as raw, as with smart objects: removing objects, random noise, dark-frame subtraction and the like. Stitches could be spherical panos or whatever. A combination of lens corrections, stitching and stacking could maybe produce a Brenizer bokeh pano still editable as raw (even spherical, hah, that would be awesome).

While we're at it, could a focus-bracketed stack maybe have live DoF control for tilt/shift effects? Lytro lite. Focus peaking to see where we place fake focus, or to apply the correct amount of sharpening.

Frequency-based retouching tools, like a live Apply Image with a slider for radius and a quick way of viewing high/low pass, but done on the fly. Like a pair of channels with a bias between them. Then a retouching brush with options for content-aware, texture, tone and clone, not eleventy different ones. Were you to sample a swatch, it's just a brush.
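The frequency-split idea reduces to a low/high pass pair whose sum reproduces the image exactly, with the radius as the live slider. A 1-D sketch using a running mean (a real tool would use a 2-D Gaussian per channel):

```python
def split_frequencies(signal, radius):
    """Split a signal into low/high bands that recombine exactly.

    The low band is a running mean of width 2*radius + 1 over an
    edge-padded copy; the high band is the residual, so low + high
    reproduces the signal exactly.
    """
    width = 2 * radius + 1
    padded = [signal[0]] * radius + list(signal) + [signal[-1]] * radius
    low = [sum(padded[i:i + width]) / width for i in range(len(signal))]
    high = [s - m for s, m in zip(signal, low)]
    return low, high

scanline = [0.1, 0.9, 0.2, 0.8, 0.3]
low, high = split_frequencies(scanline, radius=1)
recombined = [m + h for m, h in zip(low, high)]   # equals scanline
```

Dragging the radius slider just re-runs the split; texture edits go on the high band, tone edits on the low band.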
Gulag (Full Member, 182 posts)
« Reply #224 on: May 13, 2013, 08:35:51 PM »

Quote from: Torbjörn Tapani on May 13, 2013, 07:45:16 PM
Good lens correction. [...] Frequency based retouching tools, like a live apply image with a slider for radius and quick way of viewing high/low pass but it's just done on the fly. [...] Were you to sample a swatch it's just a brush.

My uneducated curiosity is whether your prints will command seven or eight figures a pop if all the items on your wishlist come true in the next release. In the meantime...


“For art to be art it has to cure.”  - Alejandro Jodorowsky
plugsnpixels (Sr. Member, 295 posts)
« Reply #225 on: May 14, 2013, 01:16:55 AM »

This is a noble effort. I've been through exercises like this before (with another graphics/imaging/vector app), and what I learned was that everyone desires a different feature set because everyone's work is a bit different. We ended up thinking a modular approach might work best: you have the core app and install advanced modules of interest as time goes on (not plug-ins; those would still be gravy). The modules could come from the same developer or from third parties.

Another thing that came to mind when reading this thread: aren't any of the other existing apps sufficient for photographers specifically? Or are they more geared toward creative post-processing than utility work?

It seems to me that adding functionality to Lightroom is the quickest way forward, assuming it remains subscription-free and its developers are given latitude to make it the best it can be for this specific purpose.

Free digital imaging ezine
http://www.plugsandpixels.com
hjulenissen (Sr. Member, 1666 posts)
« Reply #226 on: May 14, 2013, 01:17:28 AM »

Quote
Problem is not so much the precision of the rendering pipeline; the problem is stacking.

Especially if one of the steps in the stack involves blur in one way or another (think USM, local contrast enhancement, etc.). In an interpreted pipeline this would not only increase sampling requirements disproportionately and exponentially, it would also disrupt the parallelism of the graphics card's internals. Additionally, some of the newer sharpening techniques rely on iteration. If you want to implement those types of functions, it becomes progressively problematic if the entire pipeline is interpreted.
There are GPU implementations of things like blurring that seem to exploit the hardware quite well.

If you have a pipeline of N jobs, each divisible into M (partially overlapping input or output) threads, this might or might not map well onto given GPU hardware. I don't think that the presence of sharpening means that the GPU is out of the question.

Doing stuff on the GPU seems to be difficult and error-prone, and a significant percentage of applications seem not to map well onto current GPUs (meaning the power consumption and price of a GPU don't justify using it).
Quote
Secondly, what you also want to determine is the effect the user expects to see when they change some previous step.
If they use a parametric brush on some particular location in the image, and then decide to turn on lens corrections or apply a perspective correction, what should the position (and form) of the brush do? And what if you stack images for panorama stitching and the user does the same?

Note how simple misunderstandings can occur:
If I ask you to "blend" image A and B, do you interpret that as:
1. start with A and blend B on top (not commutative),

or do you interpret that as:
2. create a mix of A and B (commutative).

What if A and/or B have masks?
A good point.

I guess that Lightroom solves this the "easy" way by having a fixed pipeline.

A Photoshop substitute might have to be more flexible. I still think it is possible to have a "default" (Lightroom-esque) pipeline while being able to move components around in it. Perhaps a "linear" mode would be possible, in which edits are applied in the same order that they are tweaked by the user.
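The two readings of "blend" quoted above really do diverge once alpha enters the picture; a tiny numeric demo with single-float pixels (helper names hypothetical):

```python
def over(base, top):
    """Reading 1: composite top over base using top's alpha (not commutative).

    Assumes the resulting alpha is non-zero.
    """
    cb, ab = base
    ct, at = top
    alpha = at + ab * (1 - at)
    color = (ct * at + cb * ab * (1 - at)) / alpha
    return color, alpha

def mix(a, b):
    """Reading 2: an even mix of the two inputs (commutative)."""
    return (a[0] + b[0]) / 2, (a[1] + b[1]) / 2

A = (0.2, 1.0)   # opaque dark pixel
B = (0.8, 0.5)   # half-transparent light pixel
over(A, B)       # (0.5, 1.0)
over(B, A)       # (0.2, 1.0): order matters
mix(A, B)        # (0.5, 0.75), same either way
```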
Quote
And finally, the size requirements for a Photoshop image are usually significantly different from what our graphics cards are currently designed for. Even if hardware improves and becomes cheaper, you should still expect a 10-year time frame, if it happens at all, because graphics cards are built around certain output requirements for gaming, video, and medical imaging. Well, I suppose password cracking could be added, but I'm not sure how that will affect the imaging capabilities of graphics cards.
Current graphics cards have >1 GB of memory; I don't think buffer storage is the issue. Rather, the (in)flexibility of the processing hardware, the state of the implementation languages, debuggability, and the testing matrix caused by significantly different hardware on the market seem like the obstacles.
Quote
But, any workflow that allows one to go back to previous steps could be called "parametric", and as such, as long as the expectations of the user are reasonable when deciding to redo a previous step, the application could be entirely "parametric". And a final result could be rendered based on recomputing the entire chain.
Wouldn't the problem you hinted at still be a problem? If I did sharpening as step #2, then choose to "redo" sharpening as step #274, where in the re-rendered pipeline should the sharpening be applied?

-h
« Last Edit: May 14, 2013, 01:36:37 AM by hjulenissen »
hjulenissen (Sr. Member, 1666 posts)
« Reply #227 on: May 14, 2013, 01:20:59 AM »

Quote
The thing that Jeff said that started me on this way of thinking was -- paraphrasing -- you don't want to do in a pixel processor what you can do in a parametric processor, partially because limited precision in the pixel processor can damage the image. That implies that the implementer doesn't always know just how much error can be tolerated.

Jim
It is probably simpler to calculate visibility thresholds in a fixed pipeline than in a non-fixed pipeline.

Or perhaps photoshop is simply limited by prior architecture decisions and the need for speed?

-h
Schewe (Sr. Member, 5426 posts)
« Reply #228 on: May 14, 2013, 01:30:36 AM »

Quote from: plugsnpixels on May 14, 2013, 01:16:55 AM
Another thing that came to mind when reading this thread was, aren't any other existing apps sufficient for photographers specifically? Or are they more geared toward creative post-processing than utility work?

Not really... Adobe and Photoshop have had a really long run of being best of breed, which has pretty much dried up any competition. I downloaded GIMP and Pixelmator to test them out: yep, both will do some interesting things; nope, neither is a replacement for Photoshop. Seriously, Photoshop's position in the industry has minimized third-party development. Could the Photoshop CC decision change things? Yep, but don't count on substantial changes really quickly.

hjulenissen (Sr. Member, 1666 posts)
« Reply #229 on: May 14, 2013, 01:30:56 AM »

Quote
I was using the example as a way to crystallize the discussion, not as a concrete product proposal, but thanks for bringing practicality into the picture.
I guess that I saw that.
Quote
I don't think doing intermediate calcs in FP (maybe not DP FP, but FP) is necessarily impractical. We are seeing a proliferation of DSP-derived processors on graphics adapters. Many of those processors support FP, and there is a trend to make the results of calculations available to programs running in the main processors. Indeed, you can buy add-in cards that do DSP-like processing that have no connection to a display; they're expensive and power hogs, but that should change. Image processing is relatively easily parallelized.
Doing single-precision fp on the CPU is somewhat simpler than doing fixed-point on the CPU, but slower. If the vector "pump" is 128 bits (SSE) or 256 bits (AVX), a fair guess is that 32-bit single-precision float would run at 1/4 the speed of 16-bit integer. That is not entirely accurate, since not all vector arithmetic completes in a single cycle and one floating-point operation may map to more than one fixed-point operation, but I think it is accurate enough for this discussion.

So would 0.25x speed be worth it for having 32-bit float instead of 16-bit integer?
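The 1/4 guess can be decomposed with simple register-lane arithmetic (an illustration, not a benchmark):

```python
def lanes(register_bits, value_bits):
    """How many values fit in one SIMD register."""
    return register_bits // value_bits

sse_int16 = lanes(128, 16)   # 8 values per operation
sse_fp32  = lanes(128, 32)   # 4 values per operation
avx_fp32  = lanes(256, 32)   # 8 values per operation

# Lane count alone gives 16-bit integer a 2x advantage per register width;
# the 1/4 estimate above additionally assumes each float operation costs
# roughly twice its fixed-point counterpart.
lane_ratio = sse_fp32 / sse_int16   # 0.5
```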
Quote
Another thing about the intermediate image processing that could ameliorate its inherently slower speed than custom-tweaked code: a lot can be done in the background. In order for a program to feel crisp to the user, all that's necessary is to update the screen fast. The number of pixels on the screen is in general fewer than the number in the file, so there's less processing to keep the screen up to date than to render the whole file. Just in case the user decides to zoom in, the complete image should be computed in the background. This also avoids an explicit rendering step, which could be an annoyance for the user.

All this background/foreground stuff makes life harder for the programmers. On the other hand, think of the time they'll save not tweaking code.
I (and you?) tend to focus on the processing pipeline, the image-processing mathematics and such. That tends to make up a surprisingly small percentage of the people, resources and lines of code in a commercially successful application. There are umpteen factors that affect people's happiness with a product.

There is well-defined theory that makes (many) image-processing tasks into satisfying "riddles", which appeals to me, while the results are still (usually) ultimately judged by our vision. I don't think there is anything like that in user interaction, marketing, QA, and all the other activities that go into a product like Photoshop (though I don't claim to know).

-h
Wayland (Jr. Member, 75 posts)
« Reply #230 on: May 14, 2013, 01:49:59 AM »

Quote from: Schewe on May 14, 2013, 01:30:36 AM
Not really... Adobe and Photoshop have had a really long run of being best of breed, which has pretty much dried up any competition. [...]

PhotoLine deserves a good look, though. Given a couple more years of development along its current lines, I think it could be a very viable replacement.

Wayland.
aka. Gary Waidson
Enter theWaylandscape...
thoricourt (Newbie, 4 posts)
« Reply #231 on: May 14, 2013, 02:25:07 AM »

Since we are in blue sky country, as a companion application to LR, I would like to have/do the following:

must haves
-----------
ACR integrated & updated regularly
Bridge keyword/metadata setup
adjustment layers (current list is OK + WB)
smart objects for ALL filters, including HDR tone-mapping
ALL filters for 16bit
16 & 32 bit
Sharpen on steroids (as a minimum unsharp mask, smart sharpen, camera shake)
source/creative/output sharpening (à la PK Sharpener)
Blur
Noise reduction
Layers: all existing layer functions, styles
ACR adjustments as layers
channels
mask tweaking (refine edge style)
crop, crop overlays
lens correction: wide angle adaptation, CA, etc.
blending: panorama, hdr, stacking for DOF & NR
gradients (linear, radial)
type
brush with current functionality
tone adjustment: levels, curves
color adjustment: hue, saturation, color picker, color match, color conversion
cloning, spotting and healing tools + content-aware, content aware move
eraser
filters: gaussian blur, liquify, dust & scratches, median, high pass, warp, apply image
all existing selection tools + quick mask, luminosity (à la Lobster)
pen/path
color management, soft proofing
printing
save as to formats ensuring files can be read in 50 years time (e.g. layered or flattened tiff)
actions, batch processing
info, histogram
third party plug-ins compatibility
history
user defined number of undos
preferences save
user defined actions, brushes, etc saved in one location and importable in application updates
tablet integration
mini-bridge or some sort of browser
64bit & multicore
perpetual license
equivalent price as in the US

nice to haves but not required
------------------------------
Bridge
Filter gallery
read video only to extract image
Puppet warp
editable keyboard shortcuts
all brush parameters as currently implemented
choice of user interface color scheme

shouldn't have
----------------
3D
video
face recognition

It is "funny" that we have LR and PS, and we are still defining a third application. It seems a waste of time and energy, since we already had two tools that more or less pleased everyone... before CC.
But hey, since Jeff asked, I am more than grateful and happy to have my say.

Good day to you all!
plugsnpixels (Sr. Member, 295 posts)
« Reply #232 on: May 14, 2013, 02:40:27 AM »

Yes, PhotoLine would be one of the top contenders. But for years users have been trying to impress upon its developers the need for a decent GUI, standardized tool and menu labeling, and a website overhaul with better tutorials. Without these, few take it seriously enough to even try it.
opgr (Sr. Member, 1125 posts)
« Reply #233 on: May 14, 2013, 02:43:47 AM »

Quote from: hjulenissen on May 14, 2013, 01:17:28 AM
There are GPU implementations of stuff like blurring that seem to exploit the hardware quite well.

"Seems" being the operative word. There are implementations based on the lower-resolution versions in a mipmap, but those don't scale and align properly. A good example of such bad blurring can be found in Apple's Core Image.

Quote from: hjulenissen on May 14, 2013, 01:17:28 AM
Perhaps a "linear" mode would be possible (in which edits are applied in the same order that they are tweaked by the user).

Yes, linear or nodal are both possible, as long as there is a reasonable expectation as to what happens when returning to previous or earlier edits. Some form of caching is going to be required.

Quote from: hjulenissen on May 14, 2013, 01:17:28 AM
Current graphics cards have >1 GB of memory; I don't think buffer storage is the issue. Rather, the (in)flexibility of the processing hardware, the state of the implementation languages, debuggability, and the testing matrix caused by significantly different hardware on the market seem like the obstacles.

Yes, to all of your points. But as for storage: if you turn an 80 Mpx MFDB file into a 32-bit floating-point representation with 4 components, what do you get? Then you also need to store all kinds of processing-pipeline state, intermediate caching of results, etc. And then you want to create an HDR panorama stitch from maybe 16 of those files...
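The arithmetic behind that rhetorical question, for concreteness:

```python
# One 80 Mpx MFDB frame as 4 x 32-bit float components,
# before any pipeline state or intermediate caches.
pixels = 80e6
components = 4
bytes_per_component = 4                      # 32-bit float

gb_per_frame = pixels * components * bytes_per_component / 1e9   # 1.28 GB
gb_for_pano = 16 * gb_per_frame              # ~20.5 GB of source data alone
```

So a 16-frame HDR pano already dwarfs the memory of a 2013-era graphics card before a single intermediate result is cached.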

Quote from: hjulenissen on May 14, 2013, 01:17:28 AM
Wouldn't the problem you hinted at still be a problem? If I did sharpening as step #2, then choose to "redo" sharpening as step #274, where in the re-rendered pipeline should the sharpening be applied?

Yes, but there can be a clear difference between "re-editing" an existing adjustment and "adding" a new one. That said, I am not much of a proponent of flexibility for the sake of flexibility. It is not particularly useful to let the user keep stacking sharpening upon sharpening, and then have them complain about the result.

Implement a clear set of processing steps to guarantee optimal results,
and implement reasonable flexibility to guarantee creativity.

I personally believe that LR doesn't currently strike the right balance between the two, and adding pixel editing could force a redesign without having to start entirely from scratch.

Regards,
Oscar Rysdyk
theimagingfactory
hjulenissen (Sr. Member, 1666 posts)
« Reply #234 on: May 14, 2013, 02:53:21 AM »

Quote from: opgr on May 14, 2013, 02:43:47 AM
I personally believe that LR doesn't currently find the right balance between the two, and adding pixel editing could force this to be re-designed, without entirely having to start from scratch.
I would really like a plugin (pixel-level if need be) that really plugged into the LR processing chain, as opposed to exporting and re-importing. I.e., when I adjust the exposure compensation slider in Lightroom, the raw file would be re-processed on the fly: the early LR blocks, then the external plugin, then the final LR blocks. Ideally, the external editor would only generate a set of LR-compatible scripts (MATLAB, Python, OpenCL, whatever) that would run equally well on anyone else's LR installation (or LR version 11).

Of course, this could mean having to redo plugin settings (such as when spatial warping is applied in front of a pixel operation), but that would be on an as-needed basis.
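The re-processing chain described above can be sketched as three stages re-run in full on every slider change (all names hypothetical; each stage is a stand-in one-liner, not LR's actual processing):

```python
def lr_front(raw, exposure_ev):
    """Stand-in for LR's early parametric blocks (exposure only)."""
    return raw * (2 ** exposure_ev)

def external_plugin(value, strength):
    """Stand-in for the externally scripted stage plugged into the chain."""
    return value * (1 + strength)

def lr_back(value):
    """Stand-in for LR's final blocks (here: clip to display range)."""
    return min(value, 1.0)

def render(raw, exposure_ev, plugin_strength):
    """Re-run LR blocks, then the plugin, then LR blocks, on each edit."""
    return lr_back(external_plugin(lr_front(raw, exposure_ev), plugin_strength))

# Moving the exposure slider re-renders through the plugin in place:
render(raw=0.25, exposure_ev=1.0, plugin_strength=0.5)   # 0.75
```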

-h
plugsnpixels (Sr. Member, 295 posts)
« Reply #235 on: May 14, 2013, 02:56:57 AM »

Oscar, I just realized that was you! I have a couple of your old plug-ins listed on my site. Glad to see you're back! Let's list the new apps. You too, Schewe!
hjulenissen (Sr. Member, 1666 posts)
« Reply #236 on: May 14, 2013, 03:00:18 AM »

Quote from: opgr on May 14, 2013, 02:43:47 AM
"Seems" being the operative word. There are implementations based on the lower-resolution versions in a mipmap, but those don't scale and align properly. A good example of such bad blurring can be found in Apple's Core Image.
Are you saying there are no high-quality, reasonably efficient (overlapping I/O) image-processing algorithms running on GPUs? I looked into this a few years back, and expected there to have been some progress since.
Quote
Yes, to all of your points. But as for storage: if you turn an 80 Mpx MFDB file into a 32-bit floating-point representation with 4 components, what do you get? Then you also need to store all kinds of processing-pipeline state, intermediate caching of results, etc. And then you want to create an HDR panorama stitch from maybe 16 of those files...
* It seems that 5-6 GB is available right now.
* If you are working on massive projects, could the software not work on sensible tiles, dumping intermediate results to system memory?
* Perhaps users stitching multiple 80 MP MFDB images would be willing to purchase several GPUs?

http://www.nvidia.com/object/personal-supercomputing.html

GPUs are no doubt being hyped, and many customers have unreasonable expectations ("why doesn't Adobe rewrite Photoshop in CUDA, then it would be 100x faster?"). But I am hoping something good will come of it. Perhaps the unification of CPU and GPU (first physically, then memory access, then instructions) will make it easier to reuse hardware resources made for games in graphics and other DSP applications.

-h
opgr (Sr. Member, 1125 posts)
« Reply #237 on: May 14, 2013, 03:15:17 AM »

Quote from: plugsnpixels on May 14, 2013, 02:56:57 AM
Oscar, I just realized that was you!

Yes, it is me.

Thanks.
LKaven (Sr. Member, 788 posts)
« Reply #238 on: May 14, 2013, 03:27:42 AM »

Quote from: hjulenissen on May 14, 2013, 03:00:18 AM
GPUs are no doubt being hyped, and many customers have unreasonable expectations

My computer does a thermal shutdown when I try to play SimCity 5 on anything but the lowest quality setting.
opgr (Sr. Member, 1125 posts)
« Reply #239 on: May 14, 2013, 03:42:14 AM »

Quote from: hjulenissen on May 14, 2013, 03:00:18 AM
Are you saying there are no high-quality, reasonably efficient (overlapping I/O) image-processing algorithms running on GPUs? I looked into this a few years back, and expected there to have been some progress since.

I wouldn't be qualified to answer.

The few examples I have seen either do blur incorrectly and/or cache results.

Clearly, progress is fast: video resolutions are getting higher, and quality demands and expectations in our industry are getting lower, so eventually it will all merge. But if the card is doing caching logic, the application might as well control or copy that behaviour in some meaningful, user-centric way.
