eas wrote:
SSDs can be a big win for some applications, and the improved I/O performance can pay for itself in power-consumption reductions, but SSDs aren't always a clear win price/performance-wise.
Just because they're not always more cost-effective doesn't mean they aren't still faster.
While it's true that high sequential read speeds are easier to achieve with magnetic disks (6 magnetic disks in a RAID array can probably match the 500-600 MB/s sequential read speeds of an $895 Fusion-IO drive, while costing about half as much), the same is not true for random read/write performance. It's impractical to match the random read/write performance of an SSD with magnetic disks: you'd need so many of them that the array ends up costing more than the SSD.
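To make that concrete, here's a back-of-envelope sketch of the random-IOPS gap. The numbers are rough assumptions on my part (a 7200 RPM SATA disk does on the order of 100-150 random IOPS; an X25-M-class SSD is quoted at tens of thousands of random read IOPS; disk price is a guess), not benchmarks:

```python
# Rough, assumed figures -- adjust for your actual hardware and prices.
hdd_iops = 120       # one 7200 RPM SATA disk, small random reads
ssd_iops = 35_000    # X25-M-class SSD, 4 KB random reads (vendor ballpark)
hdd_price = 75       # assumed USD per disk

disks_needed = ssd_iops / hdd_iops
array_cost = disks_needed * hdd_price
print(f"disks to match one SSD on random reads: ~{disks_needed:.0f}")
print(f"approximate cost of that array: ~${array_cost:,.0f}")
```

Even if my per-disk numbers are off by 2-3x, you're still talking about a hundred-plus spindles to match one SSD on random reads, which is why the "just RAID more disks" approach stops making sense.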
Of course, whether you actually need that performance is another question. But as a user of a 160GB Intel x25-m in my home desktop, I can say that it *does* make sense at home, if you can afford it. It's really an amazing difference.
Quote:
What I'd really like right now is a way to transparently use SSDs to durably buffer writes for the tables on our PostgresDBs to reduce the # of random IOs our HDDs have to handle, but I haven't wanted to mess with Solaris to play with ZFS, and I don't know of a Linux solution.
See btrfs, which is Linux's answer to ZFS. Unfortunately, it's not stable yet, and won't be for some time. It's intended to be the next-gen filesystem, with ext4 acting as the intermediate solution.
However, if all you want to do is buffer writes, shouldn't the OS be able to handle that with write buffering, or by writing to an in-memory table and then periodically copying that to an on-disk table? Admittedly this is less reliable, since memory goes *poof* in a failure scenario...
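The "in-memory table, periodically copied to disk" idea could be sketched roughly like this. This is a hypothetical toy using SQLite stand-ins rather than a real PostgreSQL setup, just to show the shape of the trade-off; the table and function names are mine:

```python
import sqlite3
import threading

# In-memory staging table: inserts here cost no disk I/O at all.
mem = sqlite3.connect(":memory:", check_same_thread=False)
mem.execute("CREATE TABLE buffer (k TEXT, v TEXT)")
lock = threading.Lock()

def write(k, v):
    # Fast path: the insert only touches RAM.
    with lock:
        mem.execute("INSERT INTO buffer VALUES (?, ?)", (k, v))

def flush(disk_path):
    # Drain the in-memory table to disk in one batch, turning many
    # random writes into a single sequential burst.
    disk = sqlite3.connect(disk_path)
    disk.execute("CREATE TABLE IF NOT EXISTS data (k TEXT, v TEXT)")
    with lock:
        rows = mem.execute("SELECT k, v FROM buffer").fetchall()
        mem.execute("DELETE FROM buffer")
    disk.executemany("INSERT INTO data VALUES (?, ?)", rows)
    disk.commit()
    disk.close()

# Caveat: anything written since the last flush is lost on a crash --
# the *poof* problem. That's exactly the durability gap an SSD-backed
# write log would close.
```

So the OS-buffering approach trades durability for speed, while the original question was about getting both, which is why a durable SSD buffer (or something like ZFS's intent log on an SSD) is the more interesting answer.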