Yes, I'm going to say "those who wrote ZFS" because I don't think it would be very cool to name drop.
Right, because the ZFS developers are such wildly popular, famous people that naming them would actually hurt your credibility.
I don't think it's cool to appeal to a false (unnamed) authority. None of how ZFS works is some proprietary secret; it's an open source project.
For such "very well known issues with ZFS and RAIDZ performance," it's curious you had to go all the way to the people who wrote it for clarification.
Mentioning the details of the hardware used for testing would just see this thread descend into a debate over the pros and cons of various hardware, drivers, etc.
Peer review. It's actually about understanding the problem better. What matters most is the platform and pool version, the benchmarking tools used, and the *relative* values the various tests produce between a single disk and a multidisk RAIDZ.
My strong suspicion is that you've overstated the importance of small-file random IO in a photographer's context and diminished the value of large-file sequential IO, which is where RAIDZ performs quite well. Your data might suggest a net-neutral result for RAIDZ on Lightroom catalog performance, which entails small random IO.
But while I'm not suggesting the Lightroom catalog go on a RAIDZ array, it might be a valuable test to confirm or deny this, because ZFS is a reality on Mac OS X, and soon RAIDZ will be too, as a commercial product.
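For concreteness, here's a minimal sketch of the kind of relative test I mean. The paths, sizes, and counts are arbitrary assumptions, and a real tool such as bonnie++ or iozone would be more rigorous; what matters is the single-disk to RAIDZ ratio for each workload:

```python
import os
import time

# Hypothetical mount points: substitute your own single-disk and RAIDZ pools.
TARGETS = {"single disk": "/Volumes/single", "raidz": "/Volumes/raidz"}

def sequential_mb_per_s(path, size_mb=256):
    """One large file, written sequentially: the case where RAIDZ does well."""
    chunk = os.urandom(1 << 20)  # 1 MiB
    name = os.path.join(path, "seq.bin")
    start = time.time()
    with open(name, "wb") as f:
        for _ in range(size_mb):
            f.write(chunk)
        os.fsync(f.fileno())
    elapsed = time.time() - start
    os.remove(name)
    return size_mb / elapsed

def small_files_per_s(path, count=1000, size_kb=16):
    """Many small synchronous writes: closer to catalog-style random IO."""
    chunk = os.urandom(size_kb << 10)
    start = time.time()
    for i in range(count):
        name = os.path.join(path, f"small_{i}.bin")
        with open(name, "wb") as f:
            f.write(chunk)
            os.fsync(f.fileno())
        os.remove(name)
    return count / (time.time() - start)

# The absolute numbers matter less than the single-disk : RAIDZ ratio.
for label, path in TARGETS.items():
    print(label, round(sequential_mb_per_s(path)), "MB/s sequential,",
          round(small_files_per_s(path)), "small files/s")
```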
Further, correcting you on various technical points above would not help anyone. But it seems like you've definitely had lots of ZFS Kool-Aid to drink.
Except I thoroughly enjoy being corrected on technical points, because I have an innate affinity for being technically correct. The thing is, you haven't provided a single reference or explanation for any of your claims. But you appear to have an ample supply of memes and non-responses.
Ok, the way you're talking here makes it clear that you're completely unfamiliar with Lightroom and its workflow.
You're right, that's why I keep the 1.0 beta email list archive handy for easy reference. But did you have a question? Or do you think these pot shots you take are adequate distractions from not answering questions you've been asked?
Since you're an open source freak, I'll suggest that you look at darktable (which I'm unfamiliar with), since I believe it is similar in workflow and design to Lightroom.
Pass; I'm reasonably pleased with LR, but thanks for the suggestion. It's the most useful data you've provided so far, insofar as it's the only data you've provided so far. I'd never heard of darktable before.
If I were only working with Photoshop (or GIMP), then what you describe would be relevant.
Perhaps. But you're welcome to explain why you think so. I will submit the following:
a.) Lightroom benefits more than Photoshop from disks with good small-file random IO performance. This includes high-RPM disks, RAID 0 arrays, and SSDs. Such a fast disk may not be fault tolerant.
b.) The LR Catalog contains image file metadata. In normal (default) operation, metadata is not automatically saved to image XMP (sidecar or in DNG). So the catalog is an important file to back up, and Lightroom has a feature to do this. But if you want a more frequent backup than those options provide, it might be nice to automate pushing the file to more reliable storage (a sketch of this follows the list below).
c.) The LR preview data is expendable. If your fast disk dies or needs to be reformatted, you can rebuild the previews and wait it out.
However, Lightroom stores the previews as individual files (inside the previews .lrdata package), so a syncing program (such as those mentioned) would push only new and changed previews from the fast media to more reliable storage. Totally optional; it might save you a few hours one day, but it's certainly not the end of the world if you don't back these up.
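For what it's worth, here's a minimal sketch of the automation I mean, assuming made-up paths for the catalog, the previews package, and the safer destination. A tool like rsync would do the same job:

```python
import os
import shutil
import time

# Hypothetical paths: substitute your own catalog and backup locations.
CATALOG = "/Volumes/fast/Lightroom/Catalog.lrcat"
PREVIEWS = "/Volumes/fast/Lightroom/Catalog Previews.lrdata"
BACKUP = "/Volumes/safe/LR-backup"

def push_catalog():
    """Copy the catalog to reliable storage with a timestamped name.
    Run this while Lightroom is closed so the catalog isn't mid-write."""
    stamp = time.strftime("%Y%m%d-%H%M%S")
    os.makedirs(BACKUP, exist_ok=True)
    shutil.copy2(CATALOG, os.path.join(BACKUP, f"Catalog-{stamp}.lrcat"))

def sync_previews():
    """Push only new or changed preview files, keyed on size and mtime."""
    for root, _dirs, files in os.walk(PREVIEWS):
        rel = os.path.relpath(root, PREVIEWS)
        dest_dir = os.path.join(BACKUP, "previews", rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(dest_dir, name)
            s = os.stat(src)
            if (not os.path.exists(dst)
                    or os.stat(dst).st_size != s.st_size
                    or os.stat(dst).st_mtime < s.st_mtime):
                shutil.copy2(src, dst)

push_catalog()
sync_previews()
```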
DNGs are RAW files.
I think it's confusing to conflate DNG and Raw. DNG can contain entirely non-Raw data: e.g. JPEG and TIFF can be converted into DNG. Thus DNG could contain output-referred data, or camera-referred linear-demosaiced data, or camera-referred mosaiced data.
So if you're worried about bit rot, moving to DNG will allow you to detect when bit rot occurs and potentially correct it, for example if you put the original file inside the DNG (assuming that only part of the DNG data, and not the original data, has rotted).
There's a potential for ambiguity in that the GUI may not distinguish which data is corrupt, in which case you don't have an easy way to correct the problem. I think the error detection and correction need to be more automated than this. Since we need duplicates anyway for fault tolerance, it makes sense for the file system to simply manage the error detection and correction, including replacing the corrupt image with a known good copy, and leave me alone.
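As a rough user-space sketch of that model (ZFS does this internally, per block, with its own checksums; the paths and manifest here are made-up assumptions):

```python
import hashlib
import json
import shutil

# Hypothetical setup: the same photos live on two disks, and a manifest of
# checksums was recorded when the files were first imported (made-up paths).
REPLICAS = ["/Volumes/disk1/photos", "/Volumes/disk2/photos"]
MANIFEST = "/Volumes/disk1/manifest.json"  # {"IMG_0001.dng": "ab12...", ...}

def sha256(path):
    """Checksum a file in chunks so large images don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def scrub():
    """Verify every replica against its recorded checksum and silently
    repair any rotted copy from an intact one."""
    with open(MANIFEST) as f:
        recorded = json.load(f)
    for name, want in recorded.items():
        paths = [f"{root}/{name}" for root in REPLICAS]
        good = [p for p in paths if sha256(p) == want]
        if not good:
            print(f"{name}: every copy is rotted, cannot repair")
            continue
        for p in set(paths) - set(good):
            shutil.copy2(good[0], p)  # replace the corrupt copy

scrub()
```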
This is one of those things that makes me wish every camera generated DNG files, because that way the data is protected from the moment the camera generates it.
It would be nice. But it's important to distinguish between the ability to detect an error and the ability to correct it. A camera-generated DNG with a checksum would allow for error detection but not correction (of that particular DNG). Detection may be better than nothing, but I think we should expect more than just being notified of a problem.
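To make the distinction concrete, here's a minimal sketch of detection-only protection, with hypothetical paths; once verify() flags a file, there is no second copy to restore it from:

```python
import hashlib
import json
import os

PHOTOS = "/Volumes/photos"              # hypothetical image folder
MANIFEST = "/Volumes/photos/sums.json"  # written once at import time

def sha256(path):
    """Checksum a file in chunks so large images don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):
            h.update(block)
    return h.hexdigest()

def record():
    """Run once at import: remember a checksum for every DNG."""
    sums = {n: sha256(os.path.join(PHOTOS, n))
            for n in os.listdir(PHOTOS) if n.lower().endswith(".dng")}
    with open(MANIFEST, "w") as f:
        json.dump(sums, f)

def verify():
    """Detection only: we learn *that* a file rotted, but with a single
    copy there is nothing to restore it from."""
    with open(MANIFEST) as f:
        for name, want in json.load(f).items():
            if sha256(os.path.join(PHOTOS, name)) != want:
                print(f"{name}: bit rot detected, and no way to correct it")

# record() once at import; verify() periodically thereafter.
```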
This is why it's a good idea to periodically "Save Metadata to Files".
Some call it scene-referred. I distinguish between the dynamic range of the scene vs the camera, but it's often a smallish distinction.