Linode Community Forums
Author Message
PostPosted: Tue Apr 23, 2013 6:07 pm 
Senior Member

Joined: Sun Mar 07, 2010 7:47 pm
Posts: 1970
Website: http://www.rwky.net
Location: Earth
I've been discussing the issues with Linode over the past day or so, and we've just about resolved them for my nodes.

One of them was on a node where another user was severely abusing disk IO, so I migrated to another host, and it's helped a lot: tasks that were taking 7 minutes are now taking 3 (which is how long they should take). I timed a MySQL database dump: it was taking ~60 seconds; now it's taking ~20.

One important note: both of these were on E5-2630L hosts and the new ones are E5-2670. Either there is a noticeable difference between the two, or the E5-2630L hosts were heavily loaded.

I'll post more details about the UnixBench scores later.

_________________
Paid support
How to ask for help
1. Give details of your problem
2. Post any errors
3. Post relevant logs.
4. Don't hide details (e.g. your domain); it just makes things harder
5. Be polite or you'll be eaten by a grue


PostPosted: Tue Apr 23, 2013 6:33 pm 
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
This doesn't seem to be a universal problem, oddly enough.

Here's my Munin processing time over the past month or so. This machine (now a Linode 2048) moved from an L5520 to an E5-2630L somewhere in the middle of the month:

[image: Munin processing time graph]

Also, here's CPU usage for another machine (now a Linode 1024), which moved from an L5420 to an E5-2650L:

[image: CPU usage graph]

And the pingdom page load time over a similar period:

[image: Pingdom page load time graph]

I'm not saying the problem doesn't exist, but it doesn't seem universal.

BTW: On the UnixBench tests, y'all are running them from a completely idle system (e.g. Finnix), right?

_________________
Code:
/* TODO: need to add signature to posts */


PostPosted: Tue Apr 23, 2013 8:21 pm 
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
Well, is Munin CPU-bound? If you moved to a substantially faster CPU in the middle of the month and saw no change in the performance of a CPU-bound process, that would indicate a problem.


PostPosted: Tue Apr 23, 2013 9:10 pm 
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
I'm definitely seeing/feeling performance hits on more than one migrated node (all 512 to 1024), though it can be hard to benchmark accurately, especially without access to the old node. One in particular, with a lot of application-level historical data, had been quite stable for more than a year (and across more than one host), but most of the last week or so ranks in the top 10 worst performance days of the past 15 months. And this is on a node that was migrated to the E5-2670.

I can only guess at this point that it's CPU-related, even though my own guest CPU load is only 6-10% on average. But with the memory increase, the working set (including the database) can now pretty much fit into memory, something I do see reflected in my I/O graphs, which have dropped significantly on average.

I just don't get it, since by the specs it's hard to see how the new node couldn't at least match the prior performance. I guess it could be a busy host, but I've had that in the past (and even migrated once because of it), and it didn't produce application-level results as bad as I'm currently seeing. And back then it was generally I/O wait causing my problems, but I see very little of that right now, especially with my lower I/O rate.

I do see significantly higher CPU steal percentages than I recall on older nodes, or than I currently see on non-upgraded nodes. I don't know if that's real or just being reported more accurately, nor how much of an impact it's having. Though I wonder if it wasn't a mistake to increase guests to 8 cores - maybe the overhead of the extra contention among all the guests across the larger set of cores actually hurts the average case more than access to the extra cores helps.
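For anyone who wants to check steal on their own guest, the cumulative counters live in the first `cpu` line of /proc/stat (field layout per the Linux proc(5) man page). A minimal Python sketch, assuming the standard field order; note these are totals since boot, whereas top/vmstat show interval deltas:

```python
def steal_fraction(cpu_line):
    """Fraction of CPU time stolen by the hypervisor, from a /proc/stat 'cpu' line.

    Fields after the 'cpu' label are jiffies spent in:
    user nice system idle iowait irq softirq steal [guest guest_nice]
    """
    fields = [int(x) for x in cpu_line.split()[1:]]
    total = sum(fields[:8])  # guest time is already folded into user, so stop at steal
    return fields[7] / total if total else 0.0

# On a live system:
# with open("/proc/stat") as f:
#     print("{:.1%} steal since boot".format(steal_fraction(f.readline())))
```

Sampling the line twice a few seconds apart and diffing the counters gives the interval steal percentage that monitoring tools report.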

It's unfortunate, since I was really enthusiastic about the upgrades, but at the moment I sort of wish I hadn't done any - things were more stable and predictable before. Maybe that's just down to changing hosts and could have happened before too - it just seemed so unlikely that moving to a new node could end up being a step backwards. I just hope whatever is going on settles down so I can stop focusing on it and worrying about the change in my application performance.

-- David


PostPosted: Tue Apr 23, 2013 10:36 pm 
Senior Member

Joined: Tue Oct 20, 2009 9:02 am
Posts: 56
I'm also seeing performance hits.

What's going on with Linode these days?
An upgrade that feels like a downgrade...


PostPosted: Wed Apr 24, 2013 5:59 am 
Senior Member

Joined: Sun Mar 07, 2010 7:47 pm
Posts: 1970
Website: http://www.rwky.net
Location: Earth
hoopycat wrote:
BTW: On the UnixBench tests, y'all are running them from a completely idle system (e.g. Finnix), right?


In my case, close enough: Ubuntu with all services stopped (even cron!) except ssh.

I've resolved my problems with two servers thanks to Linode support.
One server had an abusive user hitting the disk; for the other I don't know the reason for the poor performance. Both were on E5-2630L hosts and are now on E5-2670.
After the upgrade, reports and backups that used to take about half an hour were taking an hour. After moving to the E5-2670 hosts, they're back down to half an hour.

Here is a Munin graph of the server load for the month:

[image: Munin server load graph]

You can see an increase around week 16, which is when I first upgraded (ignore the big spike; that was a test). After the spike it drops back down to normal; this is the new E5-2670 host.

The UnixBench scores for the E5-2630L were:
UnixBench (w/ all processors) 375.5
UnixBench (w/ one processor) 153.3
The UnixBench scores for the E5-2670 are:
UnixBench (w/ all processors) 1062.4
UnixBench (w/ one processor) 361.4

Now, this isn't the best UnixBench score I've ever seen on a Linode; the best I've seen is:
UnixBench (w/ all processors) 1431.4
UnixBench (w/ one processor) 524.5
which was on an Intel(R) Xeon(R) CPU L5630 @ 2.13GHz.

However, the reason seems to be that the UnixBench file copy tests perform worse on the new E5 hardware than on the old, even though the Dhrystone and Whetstone tests show a good improvement on the new hardware. UnixBench penalizes the overall score heavily for the poor file copy results.
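That penalty follows from how UnixBench combines results: the overall index is the geometric mean of the per-test index scores, so one badly depressed test (like file copy) drags the whole score down disproportionately. A toy sketch with made-up index scores:

```python
import math

def overall_index(index_scores):
    """Geometric mean of per-test index scores, the way UnixBench
    combines individual test indices into the overall score."""
    return math.exp(sum(math.log(s) for s in index_scores) / len(index_scores))

# Two fast CPU tests plus one weak file-copy result: the geometric mean
# lands near 585, well below the arithmetic mean of ~733, so the single
# poor test dominates the final figure.
fast_cpu_weak_io = [1000.0, 1000.0, 200.0]
```

So a host whose file copy tests suffer (e.g. from shared disk contention) can post a mediocre overall index even when its raw CPU throughput has clearly improved.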

Another point: I've not had reports of performance problems on larger nodes (2GB/4GB), but I've not inspected them closely enough to be sure they're unaffected; it could be they're just affected less. The two servers in this post were 512s (now 1GBs).

The servers in question were a high-availability pair, so downtime for one server wasn't a problem. For those of you with problems and a non-HA setup, you'll probably have to migrate to a new host, which means downtime :(

If you're interested, the benchmarks can be viewed in detail here:
E5-2630L http://serverbear.com/benchmark/2013/04 ... JhJS7lnJEq
E5-2670 http://serverbear.com/benchmark/2013/04 ... GxQJufITKJ
L5630 http://serverbear.com/benchmark/2013/02 ... EnWwLgKnlV

Linode support were very polite and helpful during the diagnosis and were apologetic about having to migrate again. All in all my faith in Linode's quality support has been retained!

tl;dr
UnixBench appears to be less accurate on the new processors. The E5-2670 > E5-2630L. Ask to be migrated to an E5-2670 host. Linode support is still awesome.



Last edited by obs on Sat May 18, 2013 3:49 am, edited 1 time in total.

PostPosted: Fri May 17, 2013 9:41 am 
Senior Member

Joined: Fri Dec 07, 2007 1:37 am
Posts: 385
Location: NC, USA
Figured I should follow up since the scheduled maintenance (whatever it was) seems to have helped. Not as good as it used to be, but doesn't feel "broken" like it did right after migrating. Maybe I was just spoiled on a really lightly loaded host before.

Migrated for 2xRAM on 4/12, maintenance occurred on 5/7.

[image: performance graph spanning the 4/12 migration and 5/7 maintenance]


PostPosted: Fri May 17, 2013 10:46 pm 
Senior Member

Joined: Sat Sep 25, 2010 2:25 am
Posts: 75
Website: http://www.ruchirablog.com
Location: Sri Lanka
I have been a customer of Linode for the past 3 years, and performance was good on those L5520 nodes. I have had 3 Linodes: two 512s and a 2048. I upgraded one 512 VPS and the 2048. I got moved to the E5-2630L too, and the performance is just terrible. I have been a long-time advocate of Linode: I have written 3 highly positive reviews about Linode and referred 14 people (got credited after 3 months). But now Linode is nothing special except the control panel and reliability.

I have moved 2 of those non-mission-critical nodes to RamNode and I'm pleased. However, I'm hosting my production sites on Linode's trusty old 512MB; I won't upgrade that.

Here's what my Google Webmaster Tools graph looks like after I migrated to RamNode. Performance is consistent, and the last drop happened after I installed PHP APC.

[image: Google Webmaster Tools graph]

_________________
www.ruchirablog.com


PostPosted: Sun May 19, 2013 1:11 am 

Joined: Sun May 19, 2013 1:05 am
Posts: 1
adergaard wrote:
I'm also seeing performance hits.

What's going on with Linode these days?
An upgrade that feels like a downgrade...


I so wish I'd seen this whole thread earlier.
I never would have gone down the "free upgrade" road.
Performance has been awful ever since, and support keeps saying it must be my fault: some configuration change or whatnot...

It would probably have helped many if this info had been shared more widely. This thread clearly shows that the upgrade was a downgrade for some/many who did it...


PostPosted: Wed May 29, 2013 6:30 am 
Senior Newbie

Joined: Sun Aug 07, 2011 9:35 am
Posts: 10
Upgraded 5 linodes, a mix of web servers and databases, and they're all using more CPU.

Haven't changed any application settings.


PostPosted: Wed May 29, 2013 11:43 am 
Senior Member

Joined: Sun Sep 13, 2009 11:37 pm
Posts: 65
When I migrated, my server times were absolutely abysmal. It was running terribly: PHP pages that used to take 200ms to serve were sometimes taking 3 or 4 seconds. I finally isolated it to a CPU issue by trying simple pings to my nginx server. These went from ~30ms on my old Linode to often over a second on the new one.

Anyways, I requested a switch and landed on a server where I'm back to ~30ms pings or so. So I'm happy for now, but I don't understand how I could ever have had 1-second nginx pings in the first place. Methinks there is some grave misconfiguration that Linode needs to suss out.
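For comparing hosts like this, timing the TCP handshake to the web server gives a rough latency number without involving PHP or nginx processing at all. A minimal Python sketch; the hostname in the comment is a placeholder, substitute your own server:

```python
import socket
import time

def tcp_connect_ms(host, port, timeout=2.0):
    """Milliseconds for one TCP handshake to host:port."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

def median_connect_ms(host, port, samples=5):
    """Median of several handshakes, to smooth out one-off spikes."""
    times = sorted(tcp_connect_ms(host, port) for _ in range(samples))
    return times[len(times) // 2]

# e.g. median_connect_ms("www.example.com", 80)
```

Taking the median rather than a single sample matters on a contended host, since steal tends to show up as occasional large outliers rather than a uniform slowdown.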


PostPosted: Wed May 29, 2013 11:50 am 
Senior Member

Joined: Fri Nov 02, 2012 4:20 pm
Posts: 60
It would be much more helpful if you posted what CPU you were on when you had poor performance.


PostPosted: Wed May 29, 2013 7:53 pm 
Senior Newbie

Joined: Sun Aug 07, 2011 9:35 am
Posts: 10
Was on L5520 and moved to E5-2670 in Tokyo

http://ark.intel.com/compare/40201,64595


PostPosted: Tue Jun 04, 2013 5:11 pm 
Senior Newbie

Joined: Fri Oct 09, 2009 1:24 pm
Posts: 15
I am also seeing increased CPU load on a 512MB since the upgrades/migration to 1024MB. In particular, there seems to be more "steal" than before. My guess is that's related to the bump to 8 cores.

I seem to recall that in the past each host node had Linodes of a particular size, e.g. you wouldn't have 512MB Linodes on the same node as 2048MB Linodes. Does anyone know if that is still the case? If not, that change along with the 8-core bump could explain smaller Linodes getting squeezed.


PostPosted: Tue Jun 04, 2013 5:27 pm 
Senior Member

Joined: Fri Nov 02, 2012 4:20 pm
Posts: 60
blah,

I don't believe that's true anymore. I think you could run into a situation where there are multiple sizes of Linode on the same box.

Edit: I don't know that there's been any official word on this from Linode.


Powered by phpBB® Forum Software © phpBB Group