Linode Community Forums
Author Message
Posted: Thu Oct 27, 2011 1:29 pm
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
Net-burst wrote:
3) Update your recovery distro, because it is really very, very outdated. Because of this, I'm using a 2 GB portion of precious HDD space to house another deployment, which I'll be able to use in an emergency to diagnose/repair my production deployment.

Finnix 100 was released less than a year ago, so I wouldn't exactly call it "very, very outdated" quite yet. What's missing from version 100 that's nice to have for recovery? (Aside from the pvops-compatible kernel.)

_________________
Code:
/* TODO: need to add signature to posts */


Posted: Thu Oct 27, 2011 5:21 pm
Senior Newbie

Joined: Fri Jan 28, 2011 10:21 am
Posts: 17
Location: Kiev, Ukraine
hoopycat wrote:
Finnix 100 was released less than a year ago, so I wouldn't exactly call it "very, very outdated" quite yet. What's missing from version 100 that's nice to have for recovery? (Aside from the pvops-compatible kernel.)

Hm. I haven't checked it in a while. The last time I used Finnix, it destroyed my ext4 partitions because it had an outdated e2fsprogs or something like that...
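A quick sanity check before trusting a rescue environment's filesystem tools could be scripted along these lines. This is just a sketch, not anything from the thread; the version threshold is an assumption based on 1.41 being the first e2fsprogs series with stable ext4 support:

```python
import re
import subprocess

def parse_e2fsprogs_version(banner: str) -> tuple:
    """Extract the numeric version from e2fsck's banner,
    e.g. 'e2fsck 1.41.14 (22-Dec-2010)' -> (1, 41, 14)."""
    match = re.search(r"(\d+(?:\.\d+)+)", banner)
    if match is None:
        raise ValueError("no version found in %r" % banner)
    return tuple(int(part) for part in match.group(1).split("."))

def e2fsck_is_recent_enough(minimum=(1, 41, 0)) -> bool:
    """Run `e2fsck -V` and compare against a minimum version.
    (e2fsprogs 1.41 was the first series with ext4 support.)"""
    out = subprocess.run(["e2fsck", "-V"], capture_output=True, text=True)
    banner = out.stderr or out.stdout  # e2fsck prints its version to stderr
    return parse_e2fsprogs_version(banner) >= minimum
```

Running something like this from the recovery environment before touching a disk would have flagged the stale e2fsprogs up front.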


Posted: Thu Oct 27, 2011 5:26 pm
Senior Newbie

Joined: Fri Jan 28, 2011 10:21 am
Posts: 17
Location: Kiev, Ukraine
OverlordQ wrote:
Net-burst wrote:
3) Update your recovery distro, because it is really very, very outdated. Because of this, I'm using a 2 GB portion of precious HDD space to house another deployment, which I'll be able to use in an emergency to diagnose/repair my production deployment.

2 gigs? You can fit a bare Debian install in like 400 megs.
A barebones install of Arch Linux fits in about 150 megs, but with all the tools and software I need it jumps to around 500-750. The rest of the space is generally there just in case, for example, for when I need to recompile a kernel from within the recovery deployment.


Posted: Thu Oct 27, 2011 5:29 pm
Senior Newbie

Joined: Fri Jan 28, 2011 10:21 am
Posts: 17
Location: Kiev, Ukraine
rsk wrote:
Net-burst wrote:
2) Add write barrier support. It will definitely help with data integrity for people who run unstable systems or use the Fremont data center with its built-in reboots :)

Erm... I believe the RAIDs in the servers are battery-backed?
That I don't know. But it won't hurt to have a bigger degree of data security.


Posted: Thu Oct 27, 2011 6:19 pm
Senior Member

Joined: Sat Aug 30, 2008 1:55 pm
Posts: 1739
Location: Rochester, New York
Net-burst wrote:
Arch Linux


Understood. Carry on.


Posted: Fri Oct 28, 2011 1:57 pm
Senior Member

Joined: Tue Nov 24, 2009 1:59 pm
Posts: 362
Net-burst wrote:
rsk wrote:
Net-burst wrote:
2) Add write barrier support. It will definitely help with data integrity for people who run unstable systems or use the Fremont data center with its built-in reboots :)

Erm... I believe the RAIDs in the servers are battery-backed?
That I don't know. But it won't hurt to have a bigger degree of data security.

I'm not an expert in this regard, but since our "disks" are LVM units on top of a RAID that you share with all the other Linodes, I believe barriers would have to flush the controller's cache, killing performance for everyone. And I believe that battery-backed controllers exist for exactly this reason: to make data safe without the need to flush caches.

_________________
rsk, providing useless advice on the Internet since 2005.


Posted: Sat Oct 29, 2011 2:05 pm
Junior Member

Joined: Sat Feb 21, 2009 6:25 pm
Posts: 26
I'm a relatively new UNIX admin. I've never been root before.

One nice service would be "Pretend to be a cracker and look for vulnerabilities." I.e., someone would check my Linode for me and make sure there aren't any security flaws.

It probably could be automated via a script.

For example, I was *SHOCKED* at how many people try to grab phpmyadmin, which I never installed. It would have been nice to hear "Hey! Don't use phpmyadmin!"
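A small piece of that could indeed be automated. As a sketch only (this is not any Linode service, and the port list is just an illustration), a Python self-check that reports which TCP ports on your own host accept connections:

```python
import socket

def port_is_open(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
        sock.settimeout(timeout)
        return sock.connect_ex((host, port)) == 0

def self_check(host: str, ports=(21, 23, 3306, 5432, 8080)) -> list:
    """Return the subset of `ports` that are reachable on `host`.
    Anything listed here that you didn't deliberately expose
    (say, MySQL on 3306) deserves a closer look."""
    return [port for port in ports if port_is_open(host, port)]
```

Only run something like this against hosts you own; probing other people's machines is exactly the kind of thing that gets abuse reports filed.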


Posted: Sat Oct 29, 2011 2:30 pm
Senior Newbie

Joined: Thu Dec 17, 2009 8:45 pm
Posts: 15
Smark wrote:
2. Linode site reliability. No one has mentioned this, but every time I lose the connection to my Linode (which is in Dallas) I immediately check linode.com, which is unavailable. I assume this is because it is also located in Dallas. I usually check status.linode.com but can't remember whether that is also down. If this isn't actually an issue and is just something weird on my end, ignore this.


I've had a Linode in Dallas for 6+ months and it is very solid. I monitor with multiple services that check the server every minute from multiple locations around the world (e.g. Pingdom, wasitup), and I've had almost no interruptions in service (besides a couple of short outages that were my fault: config errors, OS-level issues). I saw a couple of 3-to-4-minute network glitches this summer, where my server stayed up and running but I could not reach the network for a couple of minutes. I think both of those were in the middle of the night. I have no idea how widespread the glitch was; it's possible it was just the host I was on, though as mentioned, my Linode didn't go down.

I just wanted to say that if there are / were any problems with Dallas, they are / were very infrequent and very short.
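The minute-by-minute checks those monitoring services perform amount to something like this Python sketch (the URL and timeout are placeholders, and real services also record latency and alert you):

```python
import urllib.error
import urllib.request

def is_up(url: str, timeout: float = 10.0) -> bool:
    """Return True if `url` answers with any HTTP status.
    A 500 still means the server is reachable; only connection
    failures and timeouts count as 'down' here."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # got a response, just not a 2xx/3xx
    except (urllib.error.URLError, OSError):
        return False
```

Run from several external vantage points on a schedule, this is essentially what Pingdom-style checks do.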

Jamie

_________________
Paw Dogs


Posted: Sat Oct 29, 2011 2:37 pm
Senior Newbie

Joined: Thu Dec 17, 2009 8:45 pm
Posts: 15
fsk wrote:
I'm a relatively new UNIX admin. I've never been root before.

One nice service would be "Pretend to be a cracker and look for vulnerabilities." I.e., someone would check my Linode for me and make sure there aren't any security flaws.


Google SATAN

http://en.wikipedia.org/wiki/Security_A ... g_Networks

Make sure you know what you are doing; it is easy to end up probing hosts and networks you don't have the right to access or scan.

_________________
Paw Dogs


Posted: Sat Oct 29, 2011 2:42 pm
Senior Newbie

Joined: Thu Dec 17, 2009 8:45 pm
Posts: 15
sirpengi wrote:
3) domain registration. yeah, this has been asked for before too.


IMO there is no reason for Linode to get into registration; there are many good registrars out there right now.

I'm using Moniker, though I think they cater to people who have more than just a few domains. I only pay a tiny amount more than what the registrar pays for domains. There isn't much profit in domain registration; many registrars use it as a loss leader for their other services.

I think for domain registration to be worth it for Linode, they would likely have to charge more than you would pay at other registrars.

Jamie

_________________
Paw Dogs


Posted: Sun Oct 30, 2011 6:42 am
Senior Newbie

Joined: Fri Jan 28, 2011 10:21 am
Posts: 17
Location: Kiev, Ukraine
hoopycat wrote:
Net-burst wrote:
Arch Linux


Understood. Carry on.

You may not believe it, but Arch is quite a good server platform and is used quite frequently as such by Russians. Also, if you look closely, you will see that almost all VPS providers offer not only Debian/Ubuntu/CentOS but also Arch and Gentoo. Heck, Google uses Gentoo quite a bit; their ChromeOS is entirely based on Gentoo :)


Posted: Sun Oct 30, 2011 6:47 am
Senior Newbie

Joined: Fri Jan 28, 2011 10:21 am
Posts: 17
Location: Kiev, Ukraine
rsk wrote:
I'm not an expert in this regard, but since our "disks" are LVM units on top of a RAID that you share with all the other Linodes, I believe barriers would have to flush the controller's cache, killing performance for everyone. And I believe that battery-backed controllers exist for exactly this reason: to make data safe without the need to flush caches.

I'm also not an expert, but if the VM suddenly shuts down or hangs, you will have corrupted data if there was a write to the filesystem that wasn't flushed. And this can happen to anyone, because disk writes are also cached inside the VM. The battery-backed RAID is there to sustain the integrity of the RAID itself and to flush all the data from the RAID's own cache. But we also have the VM's cache :)


Posted: Sun Oct 30, 2011 1:22 pm
Senior Member

Joined: Wed May 13, 2009 1:18 am
Posts: 681
Net-burst wrote:
I'm also not an expert, but if the VM suddenly shuts down or hangs, you will have corrupted data if there was a write to the filesystem that wasn't flushed. And this can happen to anyone, because disk writes are also cached inside the VM. The battery-backed RAID is there to sustain the integrity of the RAID itself and to flush all the data from the RAID's own cache. But we also have the VM's cache :)

The combination of journaling filesystems and the BBU RAID should prevent any filesystem corruption, but you could certainly have application level data that did not make it to the disk if it was only held in in-memory buffers (at any level) at the time of failure. But that's the application's fault, as without application flush requests, there's never any guarantee about data consistency on media.

On any system, those applications that require such consistency should be handling it themselves with explicit flushing, and nothing you can impose externally can correct things if they don't. Consistency has to start at the top, from the application. Databases (at least ACID-compliant ones), for example, usually have their own level of journaling which is flushed prior to writing any actual record data. (I suppose arguably that case could then be considered corruption on restart, but the database will just replay the journal and no data will be lost) Even a simple logging application needs to use flush if it wants any assurance that the data has been written, regardless of what's happening beneath it at system level. The flush may not turn out to be sufficient, but it's required.
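What that explicit flushing looks like in practice, as a minimal Python sketch (the file name is made up; the same pattern applies to write(2)/fsync(2) in C):

```python
import os

def durable_append(path: str, record: str) -> None:
    """Append a record and don't return until it has been
    handed off to the storage stack for real."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(record + "\n")
        f.flush()              # drain Python's userspace buffer to the kernel
        os.fsync(f.fileno())   # ask the kernel to push it down to the device
```

Without the flush/fsync pair, the data may still live only in the process's buffers or the page cache when the VM dies; with it, you are trusting whatever sits below, which is exactly where barriers or a BBU come in.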

The problem with modern disks, and what write barriers were introduced to help address, is that the disks themselves may cache and reorder data writes, so that even when the filesystem driver believes it has written data to its journal or in proper order (which the application level flush is then trusting to mean its data is on physical media), it may only exist in the drive's cache, and a sudden outage may end up with that data never making it to the disk media. The write barrier prevents the filesystem driver from writing any further data until the disk guarantees prior data has hit the media successfully (and assuming the disk isn't fibbing, which some have in the past). Unless you're on an LVM volume, which I believe does not currently pass barrier requests through to the media.

However, the BBU on the arrays in the Linode case solves this in a separate way. It's there to ensure that at a minimum any data currently held by its cache is persisted until the following reboot, at which point it will be immediately written to media prior to any other operations. So (at least theoretically) there's no way for data not to reach media once the filesystem driver has handed it to the drive array, and barriers would offer no particular benefit, except likely slowing down the application while it waits in a shared environment for the data to be written to media. Some early measurements had the hit as high as 30% for some workloads and I don't think that was even in a shared environment. Now, the BBU isn't quite an absolute guarantee (it could fail, or the disks could be offline longer than it can maintain the cache - probably a few days at most) but it's pretty darn good, and the most critical applications will have their own way of dealing with actual corruption in such rare cases, ala databases above.

Perhaps a more succinct way to think of it is that barriers were introduced when you couldn't trust your disks, but rather than barriers, a BBU just lets you trust your disks again. And without the performance hit barriers introduce.

While I'm not 100% sure, I also don't believe barriers have any impact on the point of higher level application data consistency since applications would still need to have flushed their internal data (otherwise the filesystem driver might not yet have chosen to write the data itself). The barrier option in ext4 for example, affects the journal commit record (and data sent to the disk prior to that point), but not unflushed in-memory cache data. So appropriate flushing is needed barrier or not. I'd certainly want any application whose consistency I cared about to be written to explicitly flush data it required to be stored.

-- David


Posted: Sat Nov 12, 2011 1:58 pm
Senior Newbie

Joined: Thu Sep 22, 2011 1:10 pm
Posts: 16
I've been with Linode for my company for a while, like 5 years. Really, I think everything runs fine; I can't find anything I would really like to improve upon...

but, if I had a wish list it would be this.

1. Another storage option. I'd primarily only use it for backups, so cheap is good, even if it's slow.
2. Some other Linode option. That isn't very clear, but there are some tasks that I do not need to run very often, so I do them at the office on a KVM virtual machine. I might run them once a week or once a month, and they need 8 GB or 16 GB of RAM. It would be handy to be able to do them on a Linode that I only spin up once in a blue moon and only pay for while it's running... or something like that. Storage for this one isn't all that important; it's mostly CPU/RAM for temporary usage. Or maybe a way to temporarily spin up CPU and/or RAM on a current Linode for 24 hours? I know this is a nightmare on your side to figure out how to do :)
3. I really can't think of a 3rd one; you guys run everything very well.

_________________
-Abzstrak


Posted: Sun Nov 13, 2011 12:35 pm
Senior Member

Joined: Sun Jan 18, 2009 2:41 pm
Posts: 830
Note that larger plans than the Linode 4096 are available; they're just not advertised on the front page.


Powered by phpBB® Forum Software © phpBB Group