jebblue wrote:
sednet wrote:
matthewlai wrote:
I would set up an NFS server (or multiple, depending on demand) at each data center, have a script that syncs unix users with their user database, and set up quotas and traffic shaping depending on the pricing scheme.
You have no need to sync user IDs between the client and the server, and doing so would be a total nightmare as the various client ranges would overlap. Just export a correctly sized file system to the correct clients and let the client end worry about the user-ID-to-user-name mapping. You don't need quotas at all if the exported file systems are the correct size to begin with.
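For example, the server side of that could be as simple as one export per customer in `/etc/exports`, restricted to that customer's address (paths and IPs here are illustrative, not from the thread):

```
# /etc/exports -- one correctly sized filesystem per customer,
# each exported read-write only to that customer's own IP
/srv/exports/cust_a  203.0.113.10(rw,sync,no_subtree_check)
/srv/exports/cust_b  203.0.113.11(rw,sync,no_subtree_check)
```

Since each backing filesystem is already sized to what the customer paid for, the filesystem itself is the quota.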
jebblue wrote:
NFS isn't reliable.
NFS as a protocol is perfectly reliable and in common use all over the world. It's still the standard way to share file systems between unix machines.
It isn't reliable.
Uh, what? I'll say it's reliable enough for us to have used it as the backend datastore for a large VMware cluster for over three years. With no "reliability" issues at that.
Having worked with several storage vendors, something like this is definitely doable for Linode. I don't know what type of hardware Linode is running on (last I knew, years ago, it was 2U servers), but running fiber for storage, plus an entirely separate fiber network just for the SAN, would be a huge resource sink for every node. I could see this being solved in one of two ways:
1. Deploy storage and have a specific subset of hosts connected to it. If you need SAN storage, your Linode will have to migrate.
2. Deploy a NAS (I've had very good luck with NetApp FAS) and make everything available via NFS.
I'll speak to #2 in more detail... NetApp does offer automation through their Data ONTAP SDK. You can provision storage on the fly. Set up a filer and create the disk aggregates manually, then automate the rest. A user needs storage? They can submit a ticket or order it as a Linode Extra. Run a script that creates a new volume (say 100GB) and makes it available via NFS to a specific IP or via security settings.
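As a rough sketch, the scripted part could just turn the ticket data into the commands to run on the filer. The filer name, aggregate name, volume-naming scheme, and the exact CLI syntax below are my assumptions for illustration, not anything confirmed in this thread:

```python
# Hypothetical sketch: build Data ONTAP 7-mode-style CLI commands
# for one provisioning request (to be run on the filer, e.g. via ssh).
# "aggr0" and the vol_<id> naming scheme are assumptions.

def build_provision_commands(customer_id, size_gb, client_ip):
    """Return the filer commands to create and export one customer volume."""
    vol = f"vol_{customer_id}"
    return [
        # create a volume of the requested size in the aggregate
        f"vol create {vol} aggr0 {size_gb}g",
        # export it read-write to this customer's IP only
        f"exportfs -io rw={client_ip} /vol/{vol}",
    ]

for cmd in build_provision_commands("cust42", 100, "203.0.113.10"):
    print(cmd)
```

The same shape works whether the commands are sent over ssh or rebuilt as SDK calls; the point is that everything after the manual aggregate setup is scriptable.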
We use NetApp FAS filers at work for our Windows CIFS shares, and I've done this automation via PowerShell. The admin picks which filer, how big, what share name, etc., and the script does the rest. Their automation tools are very powerful.
If you're going to sit there and say "oh gosh, NFS is just so unreliable, let's just use a SAN," you're likely misinformed, have been using bad implementations, or have very little experience with today's offerings.