Linode Forum
Linode Community Forums
 Post subject: Bandwidth and Diskspace
Posted: Sat Feb 27, 2010 1:57 am
Newbie

Joined: Thu Dec 03, 2009 3:24 am
Posts: 4
Hello :)

Well, I don't know exactly how Linode manages its hardware/service, but I think it would be a nice feature to have user-defined bandwidth/disk space allocations.

For example, I use 30 GB out of my 200 GB monthly bandwidth transfer, but I've already filled 5 GB of my disk space and I have a growing database (of files, not MySQL).

So I'm wasting about 170 GB of my monthly transfer, but I'll soon have run out of disk space. Because of that fear (running out of disk space), I have kept my previous shared hosting account, where I have 800 GB (I know about the overselling, and I know the difference in hardware quality), so that I can move some files from my Linode over there and free up some disk space here. I'm paying $7/month for that shared server; I would rather give that money to Linode instead and set up a more reasonable arrangement for myself here.

By user-defined I mean that, in my case for example, I could choose "I need 100 GB monthly transfer (or even less)" and receive more disk space in return.

Thanks for your attention


 Post subject:
Posted: Sat Feb 27, 2010 2:04 am
Senior Member

Joined: Sun Dec 27, 2009 11:12 pm
Posts: 1038
Location: Colorado, USA
For $7/month, keep your shared host and use it as a storage server.

BTW: Who gives you 800G of disk space for $7/month?


 Post subject:
Posted: Sat Feb 27, 2010 12:24 pm
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
vonskippy wrote:
BTW: Who gives you 800G of disk space for $7/month?


Someone who is overselling, and not by a little.


 Post subject:
Posted: Sat Feb 27, 2010 12:34 pm
Senior Member

Joined: Sat May 03, 2008 4:01 pm
Posts: 569
Website: http://www.mattnordhoff.com/
The problem is, Linode's servers only have so much disk space. They aren't being stingy for fun; it's because they'd run out otherwise. Giving someone an extra few hundred GB of bandwidth is no problem, but the disks are only so large.


 Post subject:
Posted: Sat Feb 27, 2010 12:44 pm
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
mnordhoff wrote:
The problem is, Linode's servers only have so much disk space. They aren't being stingy for fun; it's because they'd run out otherwise. Giving someone an extra few hundred GB of bandwidth is no problem, but the disks are only so large.


It would be awesome if they'd get some SAN, but that doesn't seem to be in their plans. Probably not cost effective.


 Post subject:
Posted: Sat Feb 27, 2010 4:46 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
glg wrote:
It would be awesome if they'd get some SAN, but that doesn't seem to be in their plans. Probably not cost effective.

If it's just an expansion option, that would be interesting, but for primary storage I'm not convinced that a SAN structure is a good solution yet.

From a little observation of some other cases using them, it would appear that I/O issues can be even worse (or at least less predictable) in a heavily shared environment than on Linodes with their local disk. For at least one provider, I found references from customers whose best hdparm numbers (however imperfect that benchmark may be) were about equivalent to my worst numbers here at Linode. Disk already has serious resource contention, so I'm not sure I'd want to lose any more ground.

From what I can glean, the ability to do throttling and I/O resource control seems to be improving, but even then it's being done at the destination, so the network to the SAN can still become congested. Maybe another generation or two.

-- David


 Post subject:
Posted: Sat Feb 27, 2010 9:38 pm
Senior Member

Joined: Fri Jan 09, 2009 5:32 pm
Posts: 634
db3l wrote:
From what I can glean, the ability to do some throttling and I/O resource control seems to be improving, but even then its being done at the destination so the network to the SAN can still become congested. Maybe another generation or two.


I don't think you need another generation; you just need better stuff that's available today. I run some really high-availability/high-performance systems for my day job. All the app/DB storage is SAN, and we have never had a performance problem with it.


 Post subject:
Posted: Sat Feb 27, 2010 11:23 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
glg wrote:
I don't think you need another generation, you just need better stuff that's available today. I run some really high availability/high performance stuff for my day job. All the app/db storage is SAN. We have never had a performance problem with the SAN.

I'll of course defer to more practical experience, but I would be curious about the SAN/client ratio it operates under. I still think SANs work best for high storage/throughput to reasonable numbers of clients, but in a VPS environment with a high number of independent clients, SAN contention is still a real issue.

At current sizes, you could fit 64 Linode 360s per TB of SAN. I'm not sure what SAN size would be best (larger is more economical, but it also puts a lot of eggs in one basket and increases contention). So let's assume somewhere between a 5 and 10 TB SAN as a starting point; that yields perhaps 320-640 individual VPSes hitting that SAN. Still seems problematic to me. Plus, the host-to-SAN bandwidth needs to replace 8-16 hosts' worth of local disk I/O bandwidth, which is not trivial (1.5 Gbps per host at SATA 1.5). Obviously there's some benefit from statistical sharing among all the VPSes, but if too many try to do something at the same time, it could be a real problem.
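For what it's worth, the back-of-the-envelope numbers above check out if you assume a Linode 360's 16 GB of local disk; a quick sketch:

```python
# Rough capacity math for the SAN sizing argument above.
# Assumes 16 GB of disk per Linode 360 (the plan size at the time).

GB_PER_TB = 1024
disk_per_vps_gb = 16
vps_per_tb = GB_PER_TB // disk_per_vps_gb  # 64 VPSes per TB of SAN

for san_tb in (5, 10):
    print(f"{san_tb} TB SAN -> {san_tb * vps_per_tb} VPSes")

# Host-to-SAN bandwidth: each host's SATA 1.5 link must be replaced.
sata_gbps = 1.5
for hosts in (8, 16):
    print(f"{hosts} hosts -> {hosts * sata_gbps:.0f} Gbps aggregate disk bandwidth")
```

That's 320-640 VPSes on one SAN, and 12-24 Gbps of aggregate host-to-SAN bandwidth to match local disk.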

-- David


 Post subject:
Posted: Sun Feb 28, 2010 2:41 am
Senior Member

Joined: Mon Apr 27, 2009 7:36 pm
Posts: 59
Website: http://www.xenscale.com
Location: Boise, ID
Here is my experience with the whole SAN thing. I had a client that used SANs for their VMs on a different (bigger than Linode) provider.

Said provider used Citrix XenServer, which recommends a SAN. Since theirs was iSCSI-based, my problem wasn't contention, which seemed to be just fine since we had great throughput; it was that the network liked to drop every now and then, which has a tendency to break DB tables with MyISAM.

I've since moved that client to Linode and have been happy ever since.


Powered by phpBB® Forum Software © phpBB Group