Linode Community Forums
Posted: Sat May 18, 2013 12:35 pm
Senior Member

Joined: Wed Oct 20, 2010 12:35 pm
Posts: 111
Location: United Kingdom
MichaelMcNamara wrote:
I was getting TTFB (time to first byte) values of about 1500 ms or greater until I migrated from Apache to Nginx using W3TC on WordPress. With W3TC there's no need to fire up PHP for every page visit; Nginx just serves up the static HTML disk-cache files. It brought my TTFB down to around 150 ms (that's a huge performance increase!).

http://blog.michaelfmcnamara.com/2012/1 ... x-php-fpm/
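For anyone wondering what "Nginx just serves the disk cache" looks like in practice: the usual approach is a try_files rule that checks W3TC's page cache before falling back to PHP. A hypothetical sketch (the cache path shown is W3TC's conventional disk-enhanced location; check your own install's settings):

```nginx
# Hypothetical sketch: serve W3TC's disk-enhanced page cache directly,
# hitting WordPress/PHP only on a cache miss.
location / {
    try_files /wp-content/cache/page_enhanced/$host$uri/_index.html
              $uri $uri/ /index.php?$args;
}
```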


A better option is just to ditch WordPress and use Pelican. It can import your WordPress blog automatically and is ridiculously fast (static HTML only). Plus you can now do your blogging with Vim and Git :). Much nicer.

http://docs.getpelican.com/en/3.1.1/
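If you want to try that route, the importer ships with Pelican. A rough sketch, assuming Pelican is installed and `wordpress-export.xml` is the file you get from WordPress's Tools -> Export (filenames here are placeholders):

```shell
# Hypothetical sketch: convert a WordPress export into Pelican content,
# then render the whole blog to static HTML.
pelican-import --wpfile -o content wordpress-export.xml   # posts -> reST files
pelican content -o output -s pelicanconf.py               # build static site
```

From there the `output/` directory is plain files, so any web server (or Nginx alone) can serve it with no PHP involved.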

Stever wrote:
+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM and now the CPU and disk IO are horrid.

I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware, after migrating it's at 30 minutes and counting. Watching htop I'd say I'm getting at least 90% cpu steal :(


Sounds to me like the new nodes are getting hammered by all the migrations. I'd expect it to calm down somewhat once this huge mass of migrations has finished; I'd imagine it's putting quite a load on their internal network, and on the host machines' disk I/O in particular.
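Incidentally, you don't have to eyeball htop for steal: the kernel exposes it directly in /proc/stat (field 9 of the aggregate "cpu" line), and `vmstat 1` prints it live as the `st` column. A quick sketch:

```shell
# "Steal" jiffies since boot: time the hypervisor ran other guests while
# this one had runnable work. It's field 9 of the "cpu" line in /proc/stat.
steal=$(awk '/^cpu /{print $9}' /proc/stat)
total=$(awk '/^cpu /{s=0; for(i=2;i<=NF;i++) s+=$i; print s}' /proc/stat)
echo "steal: ${steal} of ${total} jiffies since boot"
# For a rolling per-second view, watch the "st" column of: vmstat 1
```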


Posted: Sun May 19, 2013 10:28 pm
Senior Newbie

Joined: Wed Jan 28, 2009 11:24 am
Posts: 11
Website: http://www.limetech.org
Stever wrote:
+1 on wishing I could go back and not take the free upgrade. I didn't really need the RAM and now the CPU and disk IO are horrid.

I'm sitting here waiting on a manual tar|gzip backup that took maybe 5 minutes on the old hardware, after migrating it's at 30 minutes and counting. Watching htop I'd say I'm getting at least 90% cpu steal :(


I had exactly the same issue post-upgrade. I opened a support ticket and had a migration to a new machine queued within 10 minutes, and I haven't had a problem since. I think I'm back on the old hardware (but with the increased RAM); I wasn't CPU bound anyway, and it's far better than being completely unusable like it was post-migration.


Posted: Tue May 21, 2013 11:59 am
Junior Member

Joined: Mon Jan 30, 2012 3:21 am
Posts: 29
Location: Glendale, CA
Guspaz wrote:
It doesn't help that linodes now have 8 virtual cores; I'm not sure what the reasoning behind that decision was. Even though the core count is doubled on the new hosts, keeping the virtual core count the same would probably have reduced contention.

This reminds me of an experience with VMware (I know it works a little differently than Xen, but here goes):

We constantly had "discussions" between the network group (in charge of the VMware infrastructure) and the database admins. The DBAs wanted more CPU cores (and more RAM) for their systems, while the network group wanted fewer cores (the RAM request was understandable). To settle the issue, a test environment was set up with both configurations and the DBAs were asked to benchmark them. It turned out that the guests with fewer vCPUs performed better (2 cores vs. 6 cores, on a host with four 6-core Xeon processors). The issue in VMware was co-scheduling: all of a guest's "requested" cores had to have an available cycle before the host would give the guest any CPU time, so a 2- or 4-core guest got its cycles sooner than a 6-core one (with multiple guests on the host; the only difference was the DB machines). So, at least with VMware, more is not always better.


Posted: Fri May 24, 2013 9:42 am
Senior Member

Joined: Sat May 03, 2008 4:01 pm
Posts: 569
Website: http://www.mattnordhoff.com/
Xen isn't stupid like that, though.

Edit: Xen may or may not have other interesting performance issues, but it 100% does not have that one.

_________________
Matt Nordhoff (aka Peng on IRC)


Posted: Tue Jun 04, 2013 6:08 pm
Senior Member

Joined: Tue May 26, 2009 3:29 pm
Posts: 1691
Location: Montreal, QC
I was more suggesting that the number of threads involved was a bit nuts. Assuming each linode represents 8 threads on the host machine, and that there are still 40 linodes per lowest plan host machine, you've got a 16-core server managing 320 threads (virtual cores), or 20 virtual cores per real core.

Now, that's no worse than when we had 40x4 threads on 4x2 real cores, but there was an opportunity to reduce the contention (by doubling the real core count while keeping the virtual core count the same, at 4 per linode).

I'm saying I'm not sure what the point of doubling the virtual core count was.
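Spelling out the arithmetic above (all figures are assumptions from this thread about host layout, not published specs):

```shell
# Assumed figures: 40 linodes per host, 8 vCPUs each, 16 real cores per host.
linodes=40; vcpus=8; cores=16
echo "$(( linodes * vcpus )) virtual cores on ${cores} real cores"
echo "$(( linodes * vcpus / cores )) virtual cores per real core"
# Old layout for comparison: 40 linodes x 4 vCPUs on 8 real cores
echo "$(( 40 * 4 / 8 )) virtual cores per real core (old hosts)"
```

Both layouts work out to 20:1, which is the point: the oversubscription ratio stayed the same when halving the vCPU count per linode would have cut it.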


Posted: Fri Jun 07, 2013 4:58 pm
Senior Member

Joined: Wed Mar 17, 2004 4:11 pm
Posts: 554
Website: http://www.unixtastic.com
Location: Europe
Guspaz wrote:
I'm saying I'm not sure what the point of doubling the virtual core count was.


It was marketing, plus it was a change that could be made without buying any extra hardware.

I doubt there were many people who were CPU bound before the upgrade.

