Linode Block Storage (Newark beta)

Linode Staff


What is the Linode Block Storage service?

The Linode Block Storage service allows you to create and attach additional storage volumes to your Linode instances. These storage volumes persist independently of Linode instances and can easily be moved from one Linode to another without the need to reboot. Volumes attached to Linodes appear as block devices and can be formatted and mounted just like any other block device.

Block Storage Volumes are highly available with 3x replication. They're fast - built on great engineering, NVMe/SSD hardware, and a fast network. They're affordable - $0.10 per GB (free during the beta) and no usage fees. They're cloud: elastic, scalable, expandable, resizable, etc. You can hot-plug them into and out of running Linodes. Oh, and you can boot off of them, too.

What can I do?

* Create Volumes
* Remove Volumes
* Resize Volumes
* Attach a Volume to a Linode
* Detach a Volume from a Linode
* Add Volumes to your Linode Configuration Profile block device assignments (changes take effect next boot)

Snapshotting, cloning, and Volume backups are not implemented - but may be in the future.

Can I attach a Volume to multiple Linodes?

Nope. A Volume can only be attached to one Linode at a time.

How big of a Volume can I create?

Between 1 GB and 1024 GB for now. This is a beta, after all. After the beta, the max volume size may be larger.

How many Volumes can I attach to a Linode at the same time?

Up to 8.

Can I mount Volumes across datacenters?

No. Volumes and instances must be in the same region.

Is there API support?

Yes. Documentation coming soon!

How does the beta work?

* The beta is free - there will be no storage costs.
* Volumes can only be created in our Newark, NJ datacenter. You will need to have at least one Linode there.
* Let's be honest - this is a beta. You probably shouldn't store any data on it that you can't afford to lose.

How can I get in on this?

The Block Storage beta is public. You can click Manage Volumes off the Linode index page. We'd very much appreciate any testing and feedback. You're welcome to reply to this thread or create a new one in the beta category, or email us, or open a ticket - whatever works.

Thanks,

-Chris

48 Replies

Congratulations on getting this to beta, Caker. This is amazing - I can't wait to try it out.

So much yes!

I can't use it yet, since we're in Fremont and need way more than 100GB, plus the lack of backups is worrisome, but I'm excited about the potential to use it when it's out of beta. It was very badly needed at Linode, and it looks fantastic!

I'd love to see a feature that allows sharing the same block storage across multiple Linodes, that'd be sweet. Any chance it could be supported in the future?

Could you also address the lack of backup options? I wouldn't mind no snapshotting and cloning, but being able to back up at intervals or on demand like we can do with regular nodes is pretty critical.

Richardo: thanks man! … been a fun and challenging project. The team deserves the credit.

archon810: It's doubtful that this will ever be integrated into the Linode Backup Service as you know it today. In part because they're just two different systems, and also because we can't, for instance, include backing up 100s of GBs of data on a $5 Linode's backup service which currently only costs a couple bucks.

If we did anything, it would probably be to automate creating a NEW volume and copying the data over. But the rates (cost) would be the same. You could automate this yourself, for now, by creating a new volume, attaching it to the Linode with the volume you want to back up, copying the data, and unmounting, unplugging.
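A rough sketch of that do-it-yourself copy, assuming the source volume is already mounted at /mnt/data and the newly created backup volume has been formatted and mounted at /mnt/backup (paths are examples only):

rsync -a /mnt/data/ /mnt/backup/    # copy everything, preserving permissions and timestamps
umount /mnt/backup                  # then detach the backup volume from the Manager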

I think a popular use of these Volumes will be people performing backups TO them.

-Chris

I've been waiting for this feature for almost a year and I'm so excited to see how effective it is. This feature was badly needed, and unfortunately I had to refer a lot of clients to other hosts because their primary requirement was storage and they didn't want to spend big on a Linode just for storage. Congratulations on this and I really hope that it brings some good news (i.e. bigger volumes and availability in more data centers) as soon as it comes out of its beta.

@caker Thanks, understood. I'd probably pay extra for automating these things and being able to restore snapshots though, even if it costs the same as the data itself because of the ease of use and speed.

How does rebooting such a volume work? Will it automatically reconnect to the mount point inside the Linode using some Linode customizations and tools, or will it rely on manually running the mount command? Right now we know if the Linode is up and operational, it will have the required storage, but with the system split into 2 parts, I'm going to need to prepare to handle each part going down individually and handling such new cases gracefully.

Will you be publishing best practices for using this new block storage?

Are there any plans for HDD block storage?

Like archon810, I am focused on Fremont, CA, so I can't use it yet, but it's great to see it arrive as an option :)

> How does rebooting such a volume work? Will it automatically reconnect to the mount point inside the Linode using some Linode customizations and tools, or will it rely on manually running the mount command?

Volumes are managed via your Configuration Profile block device assignments. When you attach a volume through the "attach" workflow, it's automatically added to your running config profile and then hotplugged. On reboots, volumes referenced by the booting config profile are also attached on boot. Managing /etc/fstab entries is still up to you, however.
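As a minimal sketch of the manual side, assuming a volume labeled "myvolume" that shows up under /dev/disk/by-id/ (the exact device path depends on the label you chose - check that directory after attaching):

mkfs.ext4 /dev/disk/by-id/scsi-0Linode_Volume_myvolume    # one-time: create a filesystem
mkdir -p /mnt/myvolume
mount /dev/disk/by-id/scsi-0Linode_Volume_myvolume /mnt/myvolume

And an example /etc/fstab entry so the filesystem is mounted again on boot (the volume itself re-attaches via the config profile, but the mount does not happen automatically):

/dev/disk/by-id/scsi-0Linode_Volume_myvolume /mnt/myvolume ext4 defaults,noatime 0 2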

> Congratulations on this and I really hope that it brings some good news (i.e. bigger volumes and availability in more data centers) as soon as it comes out of its beta.

Thanks! The 100G size max is only for during the beta. Once we launch, max volume size will be much larger… As for availability in other DCs after beta, we'll work on getting this deployed as fast as we can - once we're good to go.

> Are there any plans for HDD block storage?

Unlikely.

Thanks for the feedback and questions!

-Chris

Yeay, finally a software backup option that's cheaper than running a second Linode :D

* Will the price stay at $0.10 per GB, or decrease / increase on bigger volumes?

* Is it possible to hot-swap a volume from one Linode to a second one without losing its contents (without reformatting or rebooting)?

Sounds very good - although I could certainly appreciate a (cheaper) HDD-based tier. But I appreciate that Linode favors high quality over the lowest possible price point.

Any ETA for Frankfurt?

I currently use S3QL with some unnamed storage provider and I'd enjoy having a few GB somewhat more local.

I tested the block storage (local read and write speed plus wget of larger external files) and all seems good. Your instructions for new users are extremely helpful on mounting the volume, but what I found missing is that I had to reboot my server after attaching the volume - without rebooting, the mount was failing, saying the drive was not found. I'm now looking forward to the production version. Can you please let us know how long it will take for the storage feature to come out of beta? If it's very near, I want to wait a little longer before starting to use it for my file storage; otherwise I'm going to start using it in production (understanding that the storage is in beta, but my files aren't that important, and even if the data is lost I can restore it easily).

Hi,

I would like cheaper block storage based on HDD, but I wouldn't necessarily agree that such a thing is low quality. Perhaps disk reads/writes would be slower, but I'd find it more worth it if the service was, say, $0.05 per GB. With that said, I think its price is good as it currently is, considering the redundancy and the use of SSDs. I'm also hopeful that, if the disk space on Linode plans increases, the block storage price would decrease to match it.

I think it's pretty cool that you can boot from your block storage disks. Perhaps this could introduce an option in the future, to have a diskless Linode, with a disk or disks provided by block storage only. I could find that pretty useful, and that could open up other options such as quick upgrades of ram and CPU without migrating disks, and without using much, if any, storage on the hardware hosting the Linode. I'd imagine that if such a thing were available, the resources would be lower in price. At the current price point of block storage, for the $5.00 plan, this would amount to the resources without a disk costing $2.60, and the disk costing $2.40. This is all speculation and ideas, but it's fun to think of new and interesting things.

I look forward to being able to test and/or use the block storage service in Dallas, where I already have a Linode I wouldn't have to pay extra for. I already have some plans and uses for such a service, as my main concern, at this point, is disk space far more than other resources. Having a potential diskless Linode as an option would also be fun, and something I could easily use, too, using only block storage disks.

Linode just keeps getting better as time goes on, something I'm glad to see!

Blake

@Tech10:

I would like a cheeper block storage based on HDD, but I wouldn't necessarily agree that such a thing is low quality. Perhaps disk read/writes would be slower, but I'd find it more worth it if the service was, say, 0.05 per GB.

I think at that point that's when you'd use FUSE and S3. Maybe it's just me, but I see block storage like this mainly used for things like large databases or an intermediate place for backups before pushing them out to S3 or Glacier. If, for example, you're moving infrequently accessed data to it then you're probably better off using S3 standard or infrequent access, both of which are cheap and generally fast enough.

@carmp3fan, I would not necessarily think so. I am mostly providing services (email, nextcloud etc) for a relatively small group of users, and having block storage in the datacenter would work very smooth in that application - HDD would be perfectly ok for those kind of needs.

I am currently using S3 storage with one of the big providers. S3QL actually works very well in that kind of application, since it uses a (large) local cache that is very quick in delivering the items requested often (recent emails, files added and shared in nextcloud etc). You notice waiting times rarely, mostly when looking for a rarely touched file. I was secretly hoping Linode would provide an s3 backend, but whatever way they do their block storage, I'll buy. Nothing beats having it in the same datacenter. And I rather give my (little) money to Linode anyway :)

@johnnychicago:

@carmp3fan, I would not necessarily think so. I am mostly providing services (email, nextcloud etc) for a relatively small group of users, and having block storage in the datacenter would work very smooth in that application - HDD would be perfectly ok for those kind of needs.

I don't disagree on email (somewhat do for Nextcloud considering you can use it directly from S3), but I still don't see it as a big reason for moving to it. I've been using Linode for my own mail server for years and even with my hoarding tendencies for the last 10+ years, I've only accumulated 4.7G, so I run fine on a 1024.

If you'll notice, I didn't say anything about a web server - mostly because it would likely be cheaper and faster to have the storage-intensive photos and videos stored in AWS than on your low-end Linode. That assumes it's easy to do with whatever software you are using.

@johnnychicago:

I am currently using S3 storage with one of the big providers. S3QL actually works very well in that kind of application, since it uses a (large) local cache that is very quick in delivering the items requested often (recent emails, files added and shared in nextcloud etc). You notice waiting times rarely, mostly when looking for a rarely touched file. I was secretly hoping Linode would provide an s3 backend, but whatever way they do their block storage, I'll buy. Nothing beats having it in the same datacenter. And I rather give my (little) money to Linode anyway :)

I've not used S3QL, but it looks like something I should try. I really wish someone would create something (I'm not capable) that would use both S3 and Backblaze B2 - kind of like RAID across providers.

This is very awesome. Do we know roughly how long the beta will last, and whether the service will be available in more DCs than NJ?

Thanks for getting this going on the east coast first guys. Especially helpful since most of our services are population centric and we want to cater to the bulk of population in the USA. Great work!

Is the block storage a raw format that will allow us to format it with whatever file system we like?

For example, I'm working on a FreeBSD deployment and would like to use the block storage.

@impact:

Is the block storage a raw format that will allow us to format it with whatever file system we like?

For example, I'm working on a FreeBSD deployment and would like to use the block storage.

They are treated as block devices just like the normal disks of your Linode, so you can format them with whatever file system you want.

> block devices and can be formatted and mounted just like any other block device
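For example, a rough sketch on FreeBSD - the device name /dev/da1 is only an assumption; check dmesg or geom disk list for the actual name after attaching:

gpart create -s gpt da1          # GPT partition table on the new volume
gpart add -t freebsd-ufs da1     # one partition using all available space
newfs -U /dev/da1p1              # UFS with soft updates
mkdir -p /mnt/volume
mount /dev/da1p1 /mnt/volume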

Is there any ETA on when this will come out of beta? I am a bit desperate to move my server back over to Linode because the support where I am currently is absolute trash. But I need Block Storage! :(

So excited, I hope it goes fully live soon.

Spun up a $5 Linode with 20GB block storage for benchmark testing at https://community.centminmod.com/threads/linode-block-storage-early-access.11985/#post-52259 - considerably slower than local SSD storage!

I would definitely expect it to be slower, as it's remote storage. But your tests say 1/5th the speed? Ouch. I hope that was just a bad scenario. :/

Will block storage be available on all regions after beta phase?

Any chance we could get a date for the block storage? Even an estimated date would be useful. We are having a hard time planning ahead without knowing when this service will be available. We are very excited about the block storage and would love to start using it asap. :) Thank you

Hello,

We've got the second beta cluster in the pipeline. 90% certain it will go to us-west (Fremont, CA). It's looking about 4 or 5 weeks out.

After this one, the remaining DCs will go much more quickly and simultaneously.

-Chris

Great news Caker,

Bit sad to see the 1/5th speeds of local SSD. I understand it's network drives but was hoping it would have a lot smaller gap so it could be used for Databases and such. Will have to see what it is like once it comes out of beta.

@viion:

Great news Caker,

Bit sad to see the 1/5th speeds of local SSD. I understand it's network drives but was hoping it would have a lot smaller gap so it could be used for Databases and such. Will have to see what it is like once it comes out of beta.

Probably related to the triple redundancy feature. A similar performance drop was seen when I tested another provider's Ceph-based triple-redundant SSD storage.

@caker:

Hello,

We've got the second beta cluster in the pipeline. 90% certain it will go to us-west (Fremont, CA). It's looking about 4 or 5 weeks out.

After this one, the remaining DCs will go much more quickly and simultaneously.

-Chris

Great news!

Any updates? When will it become GA?

We're still working to get this spread across more datacenters, but there's no ETA or announcements to be made at this time.

Update: we should have the Fremont cluster online within the next week or two (is my guess). The hardware is installed and is currently being provisioned. After some initial testing, we'll turn it up for everybody. After that: on to the remaining locations!

-Chris

I contacted support about a timeline a week ago. We're very excited about implementing it (Fremont), and this was the response:

"Sorry but we still don't have a definite time frame for that. I'm hoping before 2018 but I can't give any time frame with certainty. I wouldn't want to mislead you."

Sounds like it could be 2 - 3 months at least. I'm still hopeful Fremont will roll out sooner (sounds like a week or two!) and that the estimate above is for a system-wide implementation, but we've made other plans to deal with our storage needs until 2018 to be safe.

Can't wait for this to get going, even just in beta.

Is the Block Storage SSD?

When I look in the volumes area of the Linode Manager, I can now see Fremont, CA listed.

Which datacenter is next? What kind of ETA?

@axchost:

Is the Block Storage SSD?

The new Block Storage build in Fremont is using spinning disks, but we’re still working on what the final version of Block Storage will look like.

-Blake

@TheJosh:

When I look in the volumes area of the Linode Manager, I can now see Fremont, CA listed.

Which datacenter is next? What kind of ETA?

We don't currently have an idea of what data center is next or when the full release will be, but I can confirm that Fremont will be up and running soon. You can follow our blog at blog.linode.com for info about our releases.

-Blake

@bmartin:

The new Block Storage build in Fremont is using spinning disks, but we’re still working on what the final version of Block Storage will look like.

-Blake

I thought all block storage was going to be SSD (per announcement first post on this thread). Does spinning disk mean it's going to be cheaper?

@mjrpes:

@bmartin:

The new Block Storage build in Fremont is using spinning disks, but we’re still working on what the final version of Block Storage will look like.

-Blake

I thought all block storage was going to be SSD (per announcement first post on this thread). Does spinning disk mean it's going to be cheaper?

Hey there! Block Storage in Newark is all SSD, however with the Block Storage beta in Fremont we are building it using spinning drives. I'm sorry we weren't entirely clear on that point. As for how or if this would affect the price, I don't have any word on that right now.

That being said, this is currently in beta and we're testing systems out. I can't say what the final product will look like exactly.

If you have any other questions, please let us know.

@scrane:

Hey there! Block Storage in Newark is all SSD, however with the Block Storage beta in Fremont we are building it using spinning drives. I'm sorry we weren't entirely clear on that point. As for how or if this would affect the price, I don't have any word on that right now.

That being said, this is currently in beta and we're testing systems out. I can't say what the final product will look like exactly.

If you have any other questions, please let us know.

Have the capacity/size caps been expanded beyond 1TB per volume in Newark yet?

@zigmoo:

@scrane:

Hey there! Block Storage in Newark is all SSD, however with the Block Storage beta in Fremont we are building it using spinning drives. I'm sorry we weren't entirely clear on that point. As for how or if this would affect the price, I don't have any word on that right now.

That being said, this is currently in beta and we're testing systems out. I can't say what the final product will look like exactly.

If you have any other questions, please let us know.

Have the capacity/size caps been expanded beyond 1TB per volume in Newark yet?

Nope, we're still at 1TB per volume in Newark for now.

Just curious: is 1TB not sufficient for your usage? What size volume would you need to create for block storage to be useful to you?

-Jim

It seems that resizing a volume while it is attached to a running Linode breaks things, resulting in an unclean shutdown (crash?) when it is re-attached. I've been able to reproduce it twice with these steps:

1. Start with a volume attached to a running Linode, unmounted

2. Resize the volume in the manager

3. Detach the volume. This will appear in dmesg/console:

[1791385.124252] sd 0:0:2:3: [sdc] Synchronizing SCSI cache
[1791385.127212] sd 0:0:2:3: [sdc] Synchronize Cache(10) failed: Result: hostbyte=0x04 driverbyte=0x00

4. Re-attach the volume. The Linode will shut down. I'm not sure if there's anything printed in the console at this point since I was using SSH instead of Lish. The SSH session just died.

I just created a new Linode in the Newark datacenter specifically to use Block Storage, but when I try to add a volume there, it fails with a message saying "Block Storage is currently at capacity". Will more capacity be added soon? If not, is there some other solution for provisioning a Linode with a large amount of disk space but modest CPU and network?

I don't have an ETA but we plan on having more capacity soon. We don't have any Linode packages that specialize in storage, but larger Linodes do have more storage space if you're looking for more disk space immediately.

I was able to create a volume this morning, finally. It works as advertised, but the synchronous write performance is disappointing - between 75 and 150 kB/s in my tests. My application involves many small files, so this performance is important.

# dd if=/dev/zero of=/mnt/test/test1.img bs=1024 count=1000 oflag=dsync
1000+0 records in
1000+0 records out
1024000 bytes (1.0 MB, 1000 KiB) copied, 7.40577 s, 138 kB/s

This is with default ext4 settings on a 70GB volume / partition.

oflag=dsync is always going to be super slow; I wouldn't call it a safe measurement of real-world usage.

OK, I tried conv=fdatasync instead of oflag=dsync and the performance is much better, about 300 MB/s for the volume and a bit faster for a native linode partition.
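For reference, the two test commands look roughly like this (file paths and sizes are examples only):

dd if=/dev/zero of=/mnt/test/test1.img bs=1024 count=1000 oflag=dsync    # syncs after every 1 KiB write - dominated by per-write latency
dd if=/dev/zero of=/mnt/test/test2.img bs=1M count=1000 conv=fdatasync   # one sync at the end - closer to streaming throughput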

(With oflag=dsync writing to the native partition was about 1.3 MB/s, about 10 times faster than writing to the volume.)

We're continuing Block Storage beta discussion over in the Fremont post, located here:

https://forum.linode.com/viewtopic.php?f=26&t=15333

Going to lock this thread out.

Thanks,

-Chris

I have FreeBSD 12.1 deployed on a Linode. I've created a volume via the Block Storage service and have attached it to my Linode, but it is not showing up. The howto references /dev/disk/ something, but there is no /dev/disk tree and the drive is not visible. Is there an additional step I have to do?

Linode Staff

Hey @tech43 — since this is an old, obsolete thread, I've gone ahead and reposted your question and provided an answer here:

How do I use Block Storage with FreeBSD?

Hope this helps!
