Linode Community Forums

Would you like to see SSD based VPS on Linode?
Absolutely and ASAP  18%  [ 17 ]
Sure, that'd be something I may be keen on  16%  [ 15 ]
Whatever for? No need  11%  [ 10 ]
No, it'd be stupid and expensive  27%  [ 25 ]
I'll consider it if it's reasonable  18%  [ 17 ]
Yes  3%  [ 3 ]
No  7%  [ 7 ]
Total votes : 94
PostPosted: Fri Jul 29, 2011 11:57 am 
Senior Member

Joined: Thu May 21, 2009 3:19 am
Posts: 336
I'm not concerned with wearing out SSDs; I don't think any "normal" user or business is going to be able to do that. It's how they fail that concerns me. In every actual, true failure I've heard about (not the "users/techs/know-it-alls" complaining on Newegg), the drive just stops working. No graceful failure: one second it's working, the next it's not.

We have several SSDs in workstations, and for over a year the only thing we've run into is an issue with XP not having TRIM support. We were troubleshooting a workstation with an SSD, trying to figure out why it was so slow. It turns out slapping that drive into a Win7 machine and forcing TRIM to run on it restored the performance completely.


PostPosted: Fri Jul 29, 2011 12:26 pm 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
I've had more than a few spinning HDDs fail on me without warning too.

It should be noted that any SSD formatted by XP is probably misaligned, which causes rather large performance penalties. Moving it to Win7 alone isn't enough unless you re-aligned it.
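For anyone who wants to check their own drives: "aligned" just means the partition's starting byte offset is a multiple of the flash page size, typically 4 KiB. A quick sanity check (my own sketch, nothing vendor-specific):

```python
def is_aligned(start_sector, sector_size=512, page_size=4096):
    """True if a partition's byte offset is a multiple of the flash page size."""
    return (start_sector * sector_size) % page_size == 0

# XP's default first-partition start (sector 63) is misaligned:
print(is_aligned(63))    # False
# The Vista/Win7 default (sector 2048, a 1 MiB boundary) is aligned:
print(is_aligned(2048))  # True
```

That sector-63 default is exactly why a drive formatted under XP stays slow even after TRIM runs: every filesystem block straddles two flash pages until you repartition.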


PostPosted: Fri Jul 29, 2011 6:29 pm 
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
In a proper storage situation (e.g. not a single drive in a desktop machine), I'd actually prefer sudden complete failure. It's the weird slow flake-out drive failures that scare me, the ones where you're like "huh, that was weird" and don't swap out the drive because it's fine now...

_________________
Code:
/* TODO: need to add signature to posts */


PostPosted: Fri Jul 29, 2011 6:52 pm 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
Yeah, if you've got redundancy (RAID-1, RAID-5, RAID-6, RAIDZ, etc), then a sudden complete failure is the best kind, because you know to swap out the drive for a replacement right away rather than saying "Oh, well, the drive is corrupting data, but the block checksums are handling it, I can wait a bit before replacing"...


PostPosted: Tue Aug 02, 2011 3:44 pm 
Senior Member

Joined: Sat Feb 10, 2007 7:49 pm
Posts: 96
Website: http://www.arbitraryconstant.com/
Something to keep in mind is that Linode would RAID SSDs just like they RAID hard drives. Individual users wouldn't need to worry about lifetime issues any more than they worry about any other hardware failure.

The other thing is... some years back, I noticed disk performance was gradually getting worse and worse. I assumed this was because Linode kept getting more cores per server, but wasn't able to increase disk resources in the same way.

More recently disk performance has been uncannily steady, even though contention ratios have most certainly ballooned well beyond where they were when disk performance looked bad.

This implies they've done something to the storage hierarchy. Quite possibly something like bcache, a Linux project which uses SSDs as a non-volatile caching layer.

Come to think of it, the latency I'm seeing for synchronous writes almost requires something like that to explain. :)
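If anyone wants to probe this themselves, here's a rough sketch (mine, not anything official from Linode) that times small write+fsync round trips. On a plain spinning disk you'd expect several milliseconds per sync; a non-volatile cache layer like bcache or a battery-backed RAID controller can acknowledge much faster:

```python
import os
import time
import tempfile

def sync_write_latency(path, size=4096, trials=50):
    """Average seconds per small write + fsync -- a rough proxy for
    synchronous write latency to stable storage."""
    buf = os.urandom(size)
    with open(path, "wb") as f:
        start = time.perf_counter()
        for _ in range(trials):
            f.write(buf)
            f.flush()
            os.fsync(f.fileno())  # don't return until the device says it's durable
        elapsed = time.perf_counter() - start
    os.remove(path)
    return elapsed / trials

with tempfile.TemporaryDirectory() as d:
    latency = sync_write_latency(os.path.join(d, "probe"))
    print(f"avg synchronous write latency: {latency * 1000:.2f} ms")
```

Consistent sub-millisecond numbers on a busy shared host would be hard to explain with bare spinning disks.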


PostPosted: Wed Aug 03, 2011 12:02 am 
Senior Member

Joined: Thu Jul 22, 2010 8:23 pm
Posts: 60
They have definitely done something, but from my testing, only to the 512MB plans.

I ran UNIXBench across Linode's entire range of offerings, and the disk speed on the 512MB VMs was blistering compared to the larger Linodes.


Summary:

Linode 512MB
1 parallel copy of test
File Copy 1024 bufsize 2000 maxblocks 311812.9 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 82966.6 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 1048007.7 KBps (30.0 s, 2 samples)
Pipe Throughput 1795144.2 lps (10.0 s, 7 samples)

Linode > 768MB
1 parallel copy of test
File Copy 1024 bufsize 2000 maxblocks 76893.8 KBps (30.0 s, 2 samples)
File Copy 256 bufsize 500 maxblocks 19415.0 KBps (30.0 s, 2 samples)
File Copy 4096 bufsize 8000 maxblocks 276594.9 KBps (30.0 s, 2 samples)
Pipe Throughput 86488.9 lps (10.0 s, 7 samples)

Disk I/O is the number-one bottleneck for VMs, so I'm not surprised Linode is keeping quiet about what they're doing to keep their disk speed well ahead of their competitors... A quick test on a Rackspace VM I manage showed ~30MB/s writes at a 64k block size, compared to Linode's 120MB/s.
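For anyone wanting to reproduce the comparison, here's roughly the test I mean, sketched in Python rather than dd (the 64k block size matches the test above; the file size is just my choice):

```python
import os
import time
import tempfile

def write_throughput(path, block_size=64 * 1024, total=64 * 1024 * 1024):
    """Sequential write throughput in MB/s, dd-style: write `total` bytes
    in `block_size` chunks, then fsync so we aren't just timing the page cache."""
    buf = b"\0" * block_size
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(total // block_size):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())
    elapsed = time.perf_counter() - start
    os.remove(path)
    return (total / (1024 * 1024)) / elapsed

with tempfile.TemporaryDirectory() as d:
    print(f"{write_throughput(os.path.join(d, 'bench')):.1f} MB/s")
```

Run it a few times at different hours; on shared hosts a single sample mostly tells you about your neighbours.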


PostPosted: Wed Aug 03, 2011 11:01 am 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
Linode has always been quiet about the underlying infrastructure. They haven't exactly hidden it, and they've talked about it before, but the idea is that the Linode customer doesn't worry about the underlying hardware. We get our VMs, and Linode worries about what's underneath. Heck, the hardware changes over time anyhow.


PostPosted: Wed Aug 03, 2011 11:29 am 
Senior Member

Joined: Tue Feb 19, 2008 10:55 am
Posts: 164
If one is to believe eBay, all things considered, migrating their VPSes to SSD has dramatically increased performance and costs the same as 15k disks.

Not that I care, other than to troll the know-it-alls who poo-poo the idea even though they've never run a DC, while eBay says SSD is cost effective. My Linode is plenty fast enough as it is.


PostPosted: Wed Aug 03, 2011 11:44 am 

Joined: Wed Aug 03, 2011 11:36 am
Posts: 1
Website: http://thatlinuxbox.com/blog/
Location: Gainesville, FL
glg wrote:
Really? Because, by far, the biggest complaint people post here is "want more disk", not "want faster disk".


Yes, I'd much rather see a low-cost / slower / cheaper storage option added.

Seems like this could be implemented via the Extras, at something like 10 GB for $1/mo.


PostPosted: Wed Aug 03, 2011 1:14 pm 
Senior Member

Joined: Wed Mar 03, 2010 2:04 pm
Posts: 111
ArbitraryConstant wrote:
Something to keep in mind is that Linode would RAID SSDs just like they RAID hard drives. Individual users wouldn't need to worry about lifetime issues any more than they worry about any other hardware failure.


True, but to my earlier point: given the way the SSDs failed in my case, it's unclear whether it was an issue with the drive itself or something in the firmware that went haywire. In the latter case, writing some bad instruction to a RAID 1 composed of identical disks running the same firmware could destroy the whole array. In official terms, that would suck.


PostPosted: Wed Aug 03, 2011 2:23 pm 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
eBay said that the SSDs were the same cost as their 15k RPM FC disks, not their 15k RPM SAS disks.

Let's compare pricing on NewEgg:

15k RPM SAS 300GB: $250
15k RPM FC 300GB: $1800

Linode does not use FC disks. FC disks cost as much as enterprise SSDs.


PostPosted: Thu Aug 04, 2011 11:31 am 
Senior Member

Joined: Tue Feb 19, 2008 10:55 am
Posts: 164
It's difficult to compare the overall cost over, say, a year by just looking at the initial purchase price. eBay looked at space, electricity, performance, and I can't remember what else. But you know this already, because you run a DC similar to eBay's and Linode's.

eBay didn't even know what effect SSDs would have before they started using them.


PostPosted: Fri Aug 05, 2011 10:14 am 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
chesty wrote:
It's difficult to compare the overall cost over, say, a year by just looking at the initial purchase price. eBay looked at space, electricity, performance, and I can't remember what else. But you know this already, because you run a DC similar to eBay's and Linode's.

eBay didn't even know what effect SSDs would have before they started using them.


Well, presumably they knew it would be faster, and they probably decided to do a feasibility study to see if it was practical, and then found out that the cost wasn't too bad.


PostPosted: Sat Aug 06, 2011 10:14 pm 
Senior Member

Joined: Sat Feb 10, 2007 7:49 pm
Posts: 96
Website: http://www.arbitraryconstant.com/
fiat wrote:
I ran UNIXBench across Linode's entire range of offerings, and the disk speed on the 512MB VMs was blistering compared to the larger Linodes.

Hm...

The pipe throughput was significantly faster on the 512 as well, which is peculiar, because that doesn't hit the disk at all. It implies some other difference between the hosts, but I don't think similar optimizations are ruled out by your numbers.
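For the curious, pipe throughput really is pure CPU/memory: here's a rough toy version of the measurement (my own sketch, not UNIXBench's implementation):

```python
import os
import threading
import time

def pipe_throughput(total=64 * 1024 * 1024, chunk=64 * 1024):
    """Shove `total` bytes through an OS pipe and report MB/s.
    This never touches disk, so it reflects CPU/memory contention only."""
    read_fd, write_fd = os.pipe()
    buf = b"\0" * chunk

    def writer():
        sent = 0
        while sent < total:
            sent += os.write(write_fd, buf)
        os.close(write_fd)  # closing signals EOF to the reader

    t = threading.Thread(target=writer)
    start = time.perf_counter()
    t.start()
    received = 0
    while True:
        data = os.read(read_fd, chunk)
        if not data:  # writer closed its end
            break
        received += len(data)
    t.join()
    os.close(read_fd)
    return (received / (1024 * 1024)) / (time.perf_counter() - start)

print(f"pipe throughput: {pipe_throughput():.0f} MB/s")
```

If this differs wildly between the 512 and the bigger plans, the hosts themselves (CPU generation or contention) differ, independent of any storage changes.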


PostPosted: Fri Dec 23, 2011 4:27 am 
Newbie

Joined: Mon Mar 22, 2010 7:53 pm
Posts: 2
I think Linode should at least give users the option to choose between SSD and HDD for swap. Or, maybe better, an option to switch the main storage between SSD and HDD.

But Linode disk IO is really fast.



Powered by phpBB® Forum Software © phpBB Group