Author Topic: Open hardware/software, a response to Devlin  (Read 645 times)
markmfredrickson
Newbie
*
Offline

Posts: 1


« on: August 14, 2011, 07:17:20 PM »

In my opinion, Nick Devlin starts from a false premise in his recent article: he claims that software is what is holding back modern cameras and that hardware is the easy part of camera manufacturing.

I disagree.

While it may be true that current business practices create high costs for the programming required by modern camera hardware, those costs are the result of a closed process on the part of camera manufacturers. Wishing to keep the camera a closed, proprietary device, manufacturers saddle themselves with the responsibility and cost of creating all of the software necessary to run and use it. Third-party developers are shut out of the process, and there is no opportunity for skilled entrepreneurs to add value to camera hardware through additional software.

Consider instead if camera manufacturers were to use an open system, one in which at least the specifications, if not the entire software stack, were made available. This would be similar to current mobile devices such as the iPhone/iPad (which Devlin argues is an opportunity for third-party development) or the even more open Android platform. In the case of the iPhone, the OS is largely closed, but specifications and libraries are provided to third-party developers. The Android platform, while controlled to some degree by Google, is even more open. In both cases these platforms are thriving: developers produce new applications, and hardware manufacturers continue to reap profits because they do not have to provide every tool themselves. It strikes me as entirely possible that a modern camera could run Android and immediately gain access to the pool of experienced, motivated developers already producing software for that platform.

As a proof of concept, the Frankencamera demonstrates what an open-source camera could do, but its academic creators do not have the R&D budgets and fabrication facilities of the big players. Producing the optics and hardware is the competitive advantage of the camera manufacturers. By opening the doors to third-party developers, they could decrease costs and offer more to consumers.

I agree with Devlin that the time is right to consider new software features in modern cameras. I disagree that we need to look to the major manufacturers to provide them. We need the opportunity to create them ourselves.

While I disagree with his premise, I do agree with Devlin's conclusion: these features will probably not be produced, in either case because camera manufacturers will keep the system locked down, proprietary, and feature-poor.
Logged
dreed
Sr. Member
****
Offline

Posts: 1251


« Reply #1 on: August 15, 2011, 05:22:36 AM »

I'm going to disagree with you on this and agree with Nick.

The camera manufacturers are tasked with designing a working software package that mostly meets the needs of most of the people who pick up and use the camera. Notice the word "most."

The problem for the manufacturer is that, faced with a finite display area, they are limited in what information they can show you.

Some example software improvements that I'd like to see:
- dial in the exact duration of the shutter opening, with granularity of at least 1/10th of a second
- a histogram calculated from the raw capture data, rather than from the JPEG
- the camera telling me what focal length (in mm) a zoom lens was set to for a given shot when I review it
- the camera telling me, live, how many mm a zoom lens is currently zoomed to
- live mode with clipping warnings for highlights and blacks
- live mode with focus-mask highlighting
- removal of the restriction to half- or third-stop increments, allowing any aperture or shutter time

... all of the above is now just software.
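The raw-histogram item above can be sketched in a few lines. This is a minimal illustration in Python, not camera firmware: the `raw_values` list, the bit depth, and the bucket count are all assumptions standing in for real sensor data.

```python
def raw_histogram(raw_values, bit_depth=14, bins=64):
    """Bucket linear raw sensor values into equal-width buckets."""
    max_value = (1 << bit_depth) - 1
    counts = [0] * bins
    for v in raw_values:
        v = min(max(v, 0), max_value)            # clamp to the sensor's range
        counts[v * bins // (max_value + 1)] += 1
    return counts

def clipped_fraction(raw_values, bit_depth=14):
    """Fraction of photosites at full well, i.e. blown in the raw data."""
    max_value = (1 << bit_depth) - 1
    return sum(1 for v in raw_values if v >= max_value) / len(raw_values)
```

The point of doing this on raw data rather than the JPEG is that the JPEG has already had a tone curve and white balance applied, so its histogram can show clipping that isn't in the raw file (or hide clipping that is).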

And perhaps when shooting in live mode, allow us to choose an exposure setting that maximises captured information without clipping. This shouldn't be that difficult (at least for shorter exposure times) as cameras with live view are already doing "exposure simulation" so it would seem a simple matter to move from "optimise live exposure" to "generate optimum photograph exposure plan".
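The "optimum exposure plan" idea can be sketched as a simple scaling rule: assuming a linear sensor response, the shutter time that places the brightest raw value just under clipping follows directly from one metered (exposure-simulated) frame. The function name and the headroom factor here are illustrative assumptions:

```python
def ettr_shutter(current_shutter_s, raw_peak, bit_depth=14, headroom=0.99):
    """Scale the shutter time so the brightest raw value lands just under
    clipping, assuming sensor response is linear in exposure time."""
    max_value = (1 << bit_depth) - 1
    if raw_peak <= 0:
        return current_shutter_s  # nothing metered; keep the current setting
    return current_shutter_s * (headroom * max_value) / raw_peak
```

For example, if a 1/100 s simulated frame peaks at 4096 on a 14-bit sensor, the peak sits two stops below clipping, so the sketch roughly quadruples the shutter time.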

It is tempting to suggest that we should be able to upload codecs for raw photographs or video into the camera, so that we could replace the NEF/CR2 engine with a DNG one, or AVCHD with MPEG-2, and so on. Why not just build all of that in? I can imagine there would be engineering constraints on how much code can fit into the camera. Similarly, uploadable codecs would allow the replacement of JPEG output with PNG output. But there may also be performance penalties if the software is designed to support this kind of plug-and-play approach.
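A plug-in codec scheme like the one described could look something like a registry mapping a format name to an encoder function. This is a hypothetical sketch, not any real camera's API; the "fake-raw" codec just packs 16-bit samples so the flow can be exercised end to end:

```python
ENCODERS = {}  # format name -> encoder function

def register_encoder(fmt):
    """Decorator that installs an encoder under a format name."""
    def wrap(fn):
        ENCODERS[fmt] = fn
        return fn
    return wrap

@register_encoder("fake-raw")
def encode_fake_raw(pixels):
    # Stand-in "codec": pack each sample as a 16-bit big-endian integer.
    out = bytearray()
    for p in pixels:
        out += p.to_bytes(2, "big")
    return bytes(out)

def save_capture(pixels, fmt):
    """Route a capture through whichever codec is loaded for fmt."""
    if fmt not in ENCODERS:
        raise ValueError("no codec loaded for " + fmt)
    return ENCODERS[fmt](pixels)
```

An uploaded DNG or MPEG-2 module would simply register itself under its own format name; the performance concern in the text is real, since an indirection like this sits on the hot path of every capture.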

To get a full idea of just how customisable software can be, look at applications such as Microsoft Word. Apart from some of the very basic menus (File, etc.), the entire button-based interface can be rearranged. Why not allow something similar with cameras? Features such as "My Menu" on more recent Canon cameras go some distance toward this, but essentially all of the menus should be customisable by a sufficiently proficient photographer. How many unused menu items does your camera have? Do we want that much flexibility? It's hard to tell.
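A data-driven menu along these lines is straightforward to sketch. The menu items and field names below are invented for illustration; the point is that hiding and reordering become user configuration rather than firmware changes:

```python
# Hypothetical menu definition; a real camera would have many more entries.
DEFAULT_MENU = [
    {"id": "quality", "label": "Image Quality"},
    {"id": "beep",    "label": "Beep"},
    {"id": "wifi",    "label": "Wi-Fi Setup"},
]

def apply_customisation(menu, hidden_ids, order=None):
    """Hide unwanted items and optionally impose a new ordering."""
    items = [m for m in menu if m["id"] not in hidden_ids]
    if order:
        rank = {mid: i for i, mid in enumerate(order)}
        items.sort(key=lambda m: rank.get(m["id"], len(rank)))
    return items
```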

On the voice approach, I wouldn't mind if I could take the lens cap off and say "bird" and the camera instantly switches from being in a mode optimised for landscape photography to one optimised for taking bird photographs, regardless of what the mode dial says. That type of functionality could easily make the difference between getting the shot and not.

Manufacturers have shown a little ingenuity by allowing us to tune the "auto ISO" range. Why limit it to that? Why not let me set an auto-aperture range of, say, f/6.3 to f/11 (maybe even different ranges per lens) for when I'm shooting hand-held, with the camera free to open up a bit more when the resulting increase in shutter speed significantly improves the chance of a sharp photo? And why can't the camera tell me whether camera movement while the shutter was open was enough to blur the details? I'd rather know that the photo is sharp than the exact position (to the meter) of where I was when I took it.
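The auto-aperture idea can be sketched with the standard exposure relation 2^EV = N²/t (EV referenced to ISO 100). The function below, with illustrative names and defaults, prefers the narrowest aperture in the user's range that still keeps the shutter at or above a handholdable speed:

```python
def pick_aperture(ev, apertures=(6.3, 8.0, 11.0), min_shutter=1/125):
    """Return (f-number, shutter seconds) for an EV100 reading, preferring
    the narrowest aperture that still allows a handholdable shutter."""
    best = None
    for n in sorted(apertures, reverse=True):  # narrowest (largest N) first
        t = n * n / (2 ** ev)                  # from 2**ev = N*N / t
        if t <= min_shutter:
            return n, t
        best = (n, t)                          # remember in case none qualify
    return best  # fall back to the widest aperture even if shutter is slow
```

At a bright EV the camera stays at f/11; as light falls it opens up toward f/6.3 before letting the shutter drop below the handheld threshold, which is exactly the trade-off described above.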
« Last Edit: August 15, 2011, 10:14:25 AM by dreed » Logged