I know that all hard drives can (and probably eventually will) fail, but this has happened to me so infrequently that rebuild times are a non-issue.
Complete disk failure accounts for only a fraction of data-loss problems. Other problems include silent data corruption, sector read errors, and user-induced data loss.
You can see from the article "How Many Copies Do You Need" that the question is ultimately answered by money, not strategy. Also very noteworthy: of the viable storage media available to us as photographers, consumer SATA disks are the least reliable. That's not merely due to disk failures, which are obvious and, in a JBOD context, mean everything on that disk is lost. The other errors give the user no practical way of determining which files have been adversely affected. They frequently find out only when they try to access a file and it's damaged.
I buy new, larger drives every couple years, and always run Diskwarrior on them.
Disk Warrior checks a tiny fraction of the disk surface related to the file system. It's like verifying the integrity of a card catalog at a library. It does nothing to check, let alone repair, the contents of the library itself (the books, i.e. your data).
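To make the card-catalog analogy concrete, here's a minimal sketch of what content-level verification looks like, as opposed to file-system-level checks: hash every file once, save the manifest, and re-hash later to find files whose contents changed without any file-system error. This is purely illustrative and not how Disk Warrior works; the function names are my own.

```python
# Content-level verification sketch: hash every file and compare against
# a previously saved manifest. A directory-repair tool never reads this
# data; it only checks the "card catalog" (file-system metadata).
import hashlib
import os

def sha256_of(path: str) -> str:
    """Hash a file's contents in 1 MB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def build_manifest(root: str) -> dict:
    """Map each file's path (relative to root) to its content hash."""
    manifest = {}
    for dirpath, _, names in os.walk(root):
        for name in names:
            full = os.path.join(dirpath, name)
            manifest[os.path.relpath(full, root)] = sha256_of(full)
    return manifest

def verify(root: str, manifest: dict) -> list:
    """Return the files whose contents no longer match the manifest."""
    return [p for p, digest in manifest.items()
            if sha256_of(os.path.join(root, p)) != digest]
```

Run `build_manifest` when the files are known-good, store the result with your backups, and any nonempty result from `verify` tells you exactly which "books" went bad, long before you try to open one.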
Bigger drives are slower drives when it comes to a restore: the rate at which they're getting faster isn't keeping up with the rate at which they're getting bigger.
For me JBOD with offsite backups has been working well.
It works well in that when any single disk fails, it's obvious to whoever organized the system exactly what has been lost, and thus what must be restored from backup. But it takes considerable effort and complexity to organize independent disks: to know what is on them and, more importantly, which disk a particular desired image is located on. JBOD can work OK for smaller catalogs. When you're talking about 10 disks, their 10 backup disks, and possibly another 10 backup disks, it's a mess. Ten disk icons mounted on the desktop all at once for normal work, and possibly 20 during backups? That's asking for user-induced data loss, which is actually quite common too. Vastly more common than disk failure.
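One way to tame the "which disk is that image on?" problem is an offline catalog: scan each disk once while it's mounted, record what lives where, and search the catalog instead of mounting ten drives. A rough sketch, with the disk names and paths being hypothetical:

```python
# Offline JBOD catalog sketch: index each disk once, then look up files
# by name without mounting every drive. Disk labels like "disk01" are
# whatever names you've given your physical drives.
import os

def index_disk(disk_name: str, mount_point: str, catalog: dict) -> None:
    """Record every file under mount_point as living on disk_name."""
    for dirpath, _, names in os.walk(mount_point):
        for name in names:
            rel = os.path.relpath(os.path.join(dirpath, name), mount_point)
            catalog.setdefault(name, []).append((disk_name, rel))

def locate(catalog: dict, filename: str) -> list:
    """Return (disk, relative path) pairs for a given file name."""
    return catalog.get(filename, [])
```

The catalog itself is small enough to keep on your boot drive (and in every backup), so locating an image never requires having the right disk icon already on the desktop.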
You didn't mention internet-based offsite storage; perhaps you're storing disks offsite. In any case, average U.S. bandwidth is 7 Mbps down and 1 Mbps up. For 2 TB of data, that's 26 days of continuous downloading with no other internet usage, and 185 days to upload that same 2 TB. Some people are lucky enough to have 2-3x that bandwidth, but that's still well over a week to restore 2 TB from offsite, and two months of uploading.
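The arithmetic behind those figures is straightforward (decimal units: 1 TB = 10^12 bytes, 1 Mbps = 10^6 bits/second):

```python
# Transfer-time arithmetic for the figures above.
def transfer_days(size_tb: float, rate_mbps: float) -> float:
    bits = size_tb * 10**12 * 8           # total bits to move
    seconds = bits / (rate_mbps * 10**6)  # at the given line rate
    return seconds / 86400                # 86400 seconds per day

print(round(transfer_days(2, 7)))   # 2 TB down at 7 Mbps -> 26 days
print(round(transfer_days(2, 1)))   # 2 TB up at 1 Mbps -> 185 days
```

Real-world numbers will be worse, since no connection sustains its rated speed continuously for weeks.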
For active photographers, cloud storage is totally untenable except as a last resort. So higher availability locally is reasonable.
One reason I don't want to use RAID is because of the reduced capacity (ie, if your RAID drives total 8TB, you'll only have 4TB or so of storage due to the built-in redundancy; I'd rather use all of the space and have off-site redundancy).
Just as long as you realize that you're exchanging local redundancy, referred to as data availability, for a distinctly non-available off-site backup. The counterargument to wanting more capacity instead of redundancy is that you now have more to lose when a failure happens.
Silent data corruption is particularly insidious because it replicates itself into all downstream backups and archives. In the very common case where photographers recycle (or rotate) their media for various purposes, the migration in effect guarantees replicating corrupted data. Conventional RAID does not correct or prevent SDC.
RAID 1, 5, and 6 will all self-correct for reported sector read errors: the array rebuilds the unreadable sector from the mirror or parity. In better implementations, it's self-healing. Silent corruption is different, because the drive returns wrong data without reporting any error for the array to act on.