Linode Community Forums
Posted by Guspaz: Tue Aug 25, 2009 5:03 pm
Senior Member | Joined: Tue May 26, 2009 3:29 pm | Posts: 1691 | Location: Montreal, QC
If space is not a concern, SSDs can already be cheaper than spindle-based disks in some environments.

I recently read an article testing Intel's X25-E drive under high-load conditions. It concluded that, in IOPS, a single X25-E was as fast as eighteen spindle-based 15K RPM SAS disks in RAID.

Obviously, the SSD costs a fraction of what *18* high-end server drives do. So if you're concerned about performance rather than capacity, putting a few SSDs in RAID-1 or RAID-10 is already the cheaper option in servers.
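Back-of-the-envelope, that claim is plausible (every figure below is a rough assumption for illustration, not a benchmark number):

```python
# Back-of-the-envelope IOPS comparison. Assumed figures, not measurements:
# ~180 random IOPS per 15K SAS spindle, and Intel's quoted ~35,000
# random-read / ~3,300 random-write IOPS for a single X25-E.
SAS_15K_IOPS = 180
X25E_READ_IOPS = 35_000
X25E_WRITE_IOPS = 3_300

print(f"Spindles to match one X25-E on random reads:  ~{X25E_READ_IOPS / SAS_15K_IOPS:.0f}")
print(f"Spindles to match one X25-E on random writes: ~{X25E_WRITE_IOPS / SAS_15K_IOPS:.0f}")
```

On these assumed figures, the random-write ratio works out to roughly the 18 spindles the article quoted; for random reads the gap is far wider still.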

Probably more reliable too; SSDs have no moving parts, so they tend to fail more predictably.


Posted by Xan: Tue Aug 25, 2009 5:09 pm
Senior Member | Joined: Sun Feb 08, 2004 7:18 pm | Posts: 562 | Location: Austin
Absolutely right. I run a heavily database-driven service at a dedicated provider. (It was born on Linode, and eventually had to move out, so it can definitely be considered a Linode success story.)

Anyway, we were hitting a wall on I/O performance and were looking at spending a lot of money on a massive server upgrade to get more spindles. Instead, we dropped in a single X25-E, and suddenly I/O is not the system bottleneck. It was an unbelievable upgrade. Relative to hard drives, it's cake to stick more CPU power and memory into 1U servers.


Posted by glg: Wed Aug 26, 2009 10:29 am
Senior Member | Joined: Fri Jan 09, 2009 5:32 pm | Posts: 634
Guspaz wrote:
If space is not a concern


That clearly isn't the case if you read these forums.


Posted by bji: Wed Aug 26, 2009 11:16 am
Senior Member | Joined: Thu Aug 28, 2003 12:57 am | Posts: 273
glg wrote:
Guspaz wrote:
If space is not a concern


That clearly isn't the case if you read these forums.


If each and every Linode customer got their own 80 GB Intel X25-M drive, that would be an additional $250 or so per customer. If the drive is expected to last 3 years before being replaced (a conservative estimate), that's an extra $7 per month or so. Throw in another $3 a month just for "administrative" costs (i.e. Linode's profits on their outlay to buy the SSD up front) and that's $10 per month.
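That arithmetic checks out; as a quick sanity check (this just re-runs the post's own assumptions):

```python
# Sanity-check the amortization above. Figures are the post's assumptions:
# a ~$250 drive, a conservative 3-year service life, $3/mo admin margin.
drive_cost = 250          # USD, 80 GB Intel X25-M
lifespan_months = 3 * 12  # 3-year replacement cycle
admin_margin = 3          # USD/month overhead ("administrative" costs)

hardware_per_month = drive_cost / lifespan_months
total_per_month = hardware_per_month + admin_margin
print(f"Hardware: ${hardware_per_month:.2f}/mo, total: ${total_per_month:.2f}/mo")
```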

I would HAPPILY pay an extra $10 per month for 3.5x the disk space of my Linode 540 plan. The fact that the drive would be 10x faster than my existing Linode shared drive would be a nice bonus.

Of course, the big problem is that it's not possible to put 30 or 40 SSD drives in a single server. The good news is that the cost per GB of SSD drives gets lower the larger the drive gets, so buying a few large capacity drives and splitting users across them is even more economical than my $10 per month estimate. And SSD drives are smaller, quieter, cooler, and use less power than platter drives. So getting a few TB of SSD drives into a Linode host ought to be doable in the not too distant future.

Perhaps Linode should start on the high end. That would be a *great* way to have a high end plan - a Linode 2880 equivalent with SSD drives. Charge an extra $10/mo on top of the standard Linode 2880 plan, and be the only hosting provider that I know of that hosts a plan on SSDs. And the performance would blow the doors off of a standard Linode 2880 for the types of workloads I expect people are using Linode 2880s for ...


Posted by Guspaz: Thu Aug 27, 2009 4:27 pm
Senior Member | Joined: Tue May 26, 2009 3:29 pm | Posts: 1691 | Location: Montreal, QC
Except they wouldn't use the X25-M, they'd use the X25-E. So, roughly speaking, multiply your lifespan by ten (30 years), since SLC cells support 10x the writes. At that point, the lifespan concern isn't the drive wearing out, but how long before the drive is too out of date to be useful.
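A rough way to see why wear stops mattering (a sketch; the cycle counts and workload are assumptions rather than datasheet figures, though SLC parts like the X25-E were commonly rated around 100,000 program/erase cycles versus ~10,000 for MLC):

```python
# Rough endurance estimate (all figures assumed):
# lifetime ~= capacity * P/E cycles / daily write volume.
capacity_gb = 32             # X25-E capacity
pe_cycles_slc = 100_000      # assumed SLC program/erase rating
daily_writes_gb = 100        # an aggressive 100 GB written per day
write_amplification = 2      # assumed controller overhead factor

total_writable_gb = capacity_gb * pe_cycles_slc
years = total_writable_gb / (daily_writes_gb * write_amplification) / 365
print(f"Estimated wear-out horizon: ~{years:.0f} years")
```

Even with a punishing write load and pessimistic write amplification, the horizon lands in the decades, which is why obsolescence wins.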


Posted by eas: Tue Sep 15, 2009 12:59 am
Junior Member | Joined: Sat Mar 21, 2009 3:45 am | Posts: 48
SSDs can be a big win for some applications, and the improved I/O performance can pay for itself in power-consumption reductions, but SSDs aren't always a clear win price/performance wise.

What I'd really like right now is a way to transparently use SSDs to durably buffer writes for the tables on our Postgres databases, to reduce the number of random I/Os our HDDs have to handle. But I haven't wanted to mess with Solaris to play with ZFS, and I don't know of a Linux solution.


Posted by Guspaz: Tue Sep 15, 2009 11:02 am
Senior Member | Joined: Tue May 26, 2009 3:29 pm | Posts: 1691 | Location: Montreal, QC
eas wrote:
SSDs can be a big win for some applications, and the improved I/O performance can pay for itself in power-consumption reductions, but SSDs aren't always a clear win price/performance wise.


Just because they're not always more cost-effective doesn't mean they aren't still faster.

While it's true that high sequential read speeds are easier to achieve with magnetic disks (6 magnetic disks in a RAID array can probably match the 500-600 MB/s sequential read speeds of an $895 Fusion-io drive, while costing about half as much), the same is not true for random read/write performance. It's impractical to match the random read/write performance of an SSD with magnetic disks, since you'd need so many of them that it ends up being more expensive.
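To put rough numbers on the cost-per-IOPS gap (every price and IOPS figure here is an assumption for illustration, not a quoted spec):

```python
# Dollars per random-read IOPS, HDD array vs. PCIe SSD (figures assumed):
hdd_array_cost = 450         # six magnetic disks at ~half the SSD's price
hdd_array_iops = 6 * 180     # ~180 random IOPS per spindle
ssd_cost = 895               # the Fusion-io card mentioned above
ssd_iops = 100_000           # assumed random-read IOPS for a PCIe SSD

print(f"HDD array: ${hdd_array_cost / hdd_array_iops:.2f} per IOPS")
print(f"SSD:       ${ssd_cost / ssd_iops:.5f} per IOPS")
```

Sequential bandwidth per dollar favors the spindles; random IOPS per dollar favors the SSD by a couple of orders of magnitude on these assumptions.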

Of course, whether you actually need that performance is another question. But as a user of a 160GB Intel x25-m in my home desktop, I can say that it *does* make sense at home, if you can afford it. It's really an amazing difference.

Quote:
What I'd really like right now is a way to transparently use SSDs to durably buffer writes for the tables on our Postgres databases, to reduce the number of random I/Os our HDDs have to handle. But I haven't wanted to mess with Solaris to play with ZFS, and I don't know of a Linux solution.


See btrfs, which is Linux's answer to ZFS. Unfortunately, it's not stable yet, and won't be for some time. It's intended to be the next-gen filesystem, with ext4 acting as the intermediate solution.

However, if all you want to do is buffer writes, shouldn't the OS be able to handle that with write buffering, or writing to an in-memory table and then periodically copying that to an on-disk table? Admittedly this is less reliable since memory goes *poof* in a failure scenario...
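That "in-memory table, flushed periodically" idea can be sketched as a simple write-behind buffer (a toy illustration only, not a Postgres or kernel feature; all names here are made up). The point is that repeated writes to the same key coalesce in memory, so the disk sees one batched write instead of many random ones:

```python
import threading
import time

class WriteBehindBuffer:
    """Toy write-behind cache: absorb writes in memory, flush in batches.

    As noted above, anything sitting in the buffer is lost if the box
    crashes -- this trades durability for fewer random I/Os on disk.
    """

    def __init__(self, flush, interval=5.0):
        self._pending = {}             # key -> latest value (writes coalesce)
        self._lock = threading.Lock()
        self._flush = flush            # callback that does the real disk write
        self._interval = interval      # seconds between background flushes

    def put(self, key, value):
        with self._lock:
            self._pending[key] = value

    def flush_now(self):
        """Swap out the pending batch and hand it to the writer in one go."""
        with self._lock:
            batch, self._pending = self._pending, {}
        if batch:
            self._flush(batch)         # one batched write instead of many
        return len(batch)

    def start_periodic_flush(self):
        """Flush in the background every `interval` seconds."""
        def loop():
            while True:
                time.sleep(self._interval)
                self.flush_now()
        threading.Thread(target=loop, daemon=True).start()
```

The OS page cache does something similar for free; a userspace buffer like this only adds value because it can coalesce at the application's unit of work (rows, not pages).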


Posted by jed: Tue Sep 15, 2009 12:02 pm
Senior Member | Joined: Sat Mar 28, 2009 4:23 pm | Posts: 415 | Website: http://jedsmith.org/
bji wrote:
If each and every Linode customer got their own 80 GB Intel X25-M drive, that would be an additional $250 or so per customer.

And then when you migrate to a new host, we get to ticket the datacenter to pull your drive and move it.

Ick.

_________________
Disclaimer: I am no longer employed by Linode; opinions are my own alone.


Posted by bji: Tue Sep 15, 2009 1:23 pm
Senior Member | Joined: Thu Aug 28, 2003 12:57 am | Posts: 273
jed wrote:
bji wrote:
If each and every Linode customer got their own 80 GB Intel X25-M drive, that would be an additional $250 or so per customer.

And then when you migrate to a new host, we get to ticket the datacenter to pull your drive and move it.

Ick.


Or buy a new one, and leave the old one in place for another customer to upgrade to.


Posted: Tue Sep 15, 2009 5:35 pm
Senior Member | Joined: Mon Jun 16, 2008 6:33 pm | Posts: 151
Xan wrote:
I run a heavily database-driven service at a dedicated provider. (It was born on Linode, and eventually had to move out, so it can definitely be considered a Linode success story.)

Congrats! Very curious :D

Are you able to provide any more details? (Or even just some vague hints?)


Posted: Tue Sep 15, 2009 8:35 pm
Senior Member | Joined: Thu Dec 04, 2008 10:55 am | Posts: 57 | Location: New Jersey
http://www.youtube.com/watch?v=96dWOEa4Djs

A must watch for SSD enthusiasts.


Posted: Tue Sep 15, 2009 10:18 pm
Senior Member | Joined: Fri Oct 24, 2003 3:51 pm | Posts: 965 | Location: Netherlands
Great video - highly recommended - even for people who aren't SSD enthusiasts (yet).

_________________
/ Peter


Posted by eas: Wed Sep 16, 2009 3:34 am
Junior Member | Joined: Sat Mar 21, 2009 3:45 am | Posts: 48
Guspaz wrote:
eas wrote:
SSDs can be a big win for some applications, and the improved I/O performance can pay for itself in power-consumption reductions, but SSDs aren't always a clear win price/performance wise.


Just because they're not always more cost-effective doesn't mean they aren't still faster.

While it's true that high sequential read speeds are easier to achieve with magnetic disks (6 magnetic disks in a RAID array can probably match the 500-600 MB/s sequential read speeds of an $895 Fusion-io drive, while costing about half as much), the same is not true for random read/write performance. It's impractical to match the random read/write performance of an SSD with magnetic disks, since you'd need so many of them that it ends up being more expensive.

Of course, whether you actually need that performance is another question.


This is where I get off. I am constitutionally ill-suited for any sort of technical discussion where cost is not considered an important variable, even more so when the required performance envelope isn't defined.

Quote:
Quote:
What I'd really like right now is a way to transparently use SSDs to durably buffer writes for the tables on our Postgres databases, to reduce the number of random I/Os our HDDs have to handle. But I haven't wanted to mess with Solaris to play with ZFS, and I don't know of a Linux solution.


See btrfs, which is Linux's answer to ZFS. Unfortunately, it's not stable yet, and won't be for some time. It's intended to be the next-gen filesystem, with ext4 acting as the intermediate solution.

However, if all you want to do is buffer writes, shouldn't the OS be able to handle that with write buffering, or writing to an in-memory table and then periodically copying that to an on-disk table? Admittedly this is less reliable since memory goes *poof* in a failure scenario...


The key word in the passage you quoted was "durably" and I'll throw in "consistency" for good measure. In a perfect world, I'd rather spend money on SSDs than hardware RAID controllers.

I know btrfs is being held out as Linux's answer to ZFS, but as you say, it's not well proven yet. It's also not clear to me whether it can transparently put hot blocks on faster storage the way I understand ZFS can. I also wonder about its future: work on btrfs was largely funded by Oracle, and Oracle is acquiring Sun...


Posted: Wed Sep 16, 2009 12:43 pm
Senior Member | Joined: Thu Dec 04, 2008 10:55 am | Posts: 57 | Location: New Jersey
I just thought of something... I think they'd have to rename RAID to allow for SSDs (RAED, since "inexpensive" hardly applies)... unless, of course, the price of SSDs drops below $700 for 256 GB any time soon!

Damn I'm funny.


Posted by ArbitraryConstant: Thu Dec 31, 2009 8:51 pm
Senior Member | Joined: Sat Feb 10, 2007 7:49 pm | Posts: 96 | Website: http://www.arbitraryconstant.com/
Xan wrote:
We complain plenty about lack of disk space here already, and with good reason. It'll be a very long time before SSDs reach parity with spinning platters on the $/GB front.

But maybe the future lies in some relatively small amount of SSD space for each of us for the main system, and then access to a much larger (spinning platter) SAN.

What probably makes more sense is Linodes built the way they are, with SSD space made available over iSCSI. It'll be small and it'll be expensive, so what you'd probably want to do is run just your database or other I/O-intensive application on it.

The IOPS you can push through a Linode are probably one of its biggest limitations at the moment.


Last edited by ArbitraryConstant on Thu Dec 31, 2009 8:53 pm, edited 1 time in total.

Powered by phpBB® Forum Software © phpBB Group