Linode KVM (beta) - Update 2

Linode Staff

Please see the original announcement for background information on the Linode KVM Beta.

What's new?
* The Linode KVM beta is now available in Fremont, Dallas, Atlanta, Newark, London, and Singapore!

* The Linode Backup Service is now compatible with KVM Linodes.

* In addition to booting Direct-from-Disk, you can also choose grub-legacy and grub2 from the kernel drop-down.

What's coming soon?
* Full-virt mode, so OSes that don't have PV/virtio drivers can be run

* Graphical mode support

* And much, much more!

How do I get in on this?

Click here to apply and we'll move the Linode you specify over to the KVM beta.

Enjoy!

-Chris

13 Replies

Sounds like those "coming soon" items mean virtualized Windows will be possible too.

We just benched a London KVM node at 28% faster than its Xen equivalent. That's using our standard PlushForums platform benchmark. Perhaps some of this is due to an underutilised beta node, but it's impressive to say the least. What's the timeline on the rollout?

Nice!

Our goal is to have KVM generally available, with a self-serve convert tool and a (temporary) default KVM/Xen account setting, in the next few weeks.

-Chris

Here are a few quick stats from testing 2048 KVM and Xen Linodes; the former was faster:

2048 KVM: http://serverbear.com/benchmark/2015/05/16/5s36u9p1rivCvxaY

# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=sb-io-test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --numjobs=1 --name=test
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.8
Starting 1 process
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/351.7MB/0KB /s] [0/89.9K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=27818: Sat May 16 14:52:54 2015
  write: io=4096.0MB, bw=357662KB/s, iops=89415, runt= 11727msec
  cpu          : usr=10.88%, sys=54.83%, ctx=23686, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=1048576/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=357662KB/s, minb=357662KB/s, maxb=357662KB/s, mint=11727msec, maxt=11727msec

Disk stats (read/write):
  sda: ios=0/1043161, merge=0/2, ticks=0/388840, in_queue=388513, util=96.86%

# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=sb-io-test --bs=4k --iodepth=64 --size=4G --readwrite=randread --numjobs=1 --name=test
test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.8
Starting 1 process
Jobs: 1 (f=1): [r(1)] [100.0% done] [363.6MB/0KB/0KB /s] [92.1K/0/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=27821: Sat May 16 14:54:25 2015
  read : io=4096.0MB, bw=372728KB/s, iops=93181, runt= 11253msec
  cpu          : usr=10.69%, sys=53.97%, ctx=21300, majf=0, minf=73
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=4096.0MB, aggrb=372727KB/s, minb=372727KB/s, maxb=372727KB/s, mint=11253msec, maxt=11253msec

Disk stats (read/write):
  sda: ios=1040114/0, merge=0/0, ticks=388053/0, in_queue=387773, util=96.72%

# openssl speed sha256 rsa2048
Doing sha256 for 3s on 16 size blocks: 7791936 sha256's in 3.00s
Doing sha256 for 3s on 64 size blocks: 4392355 sha256's in 3.00s
Doing sha256 for 3s on 256 size blocks: 1901286 sha256's in 3.00s
Doing sha256 for 3s on 1024 size blocks: 587538 sha256's in 3.00s
Doing sha256 for 3s on 8192 size blocks: 79200 sha256's in 3.00s
Doing 2048 bit private rsa's for 10s: 6626 2048 bit private RSA's in 10.00s
Doing 2048 bit public rsa's for 10s: 217859 2048 bit public RSA's in 10.00s
OpenSSL 1.0.1k 8 Jan 2015
built on: Tue Mar 24 20:38:55 2015
options:bn(64,64) rc4(16x,int) des(idx,cisc,16,int) aes(partial) blowfish(idx)
compiler: -I. -I.. -I../include  -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -DTERMIO -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
sha256           41556.99k    93703.57k   162243.07k   200546.30k   216268.80k
                  sign    verify    sign/s verify/s
rsa 2048 bits 0.001509s 0.000046s    662.6  21785.9

2048 Xen: http://serverbear.com/benchmark/2015/05/16/ZlLN1IveIGcDLSqb

# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=sb-io-test --bs=4k --iodepth=64 --size=4G --readwrite=randwrite --numjobs=1 --name=test
test: (g=0): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.8
Starting 1 process
Jobs: 1 (f=1): [w(1)] [100.0% done] [0KB/95664KB/0KB /s] [0/23.1K/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1971: Sat May 16 14:59:54 2015
  write: io=4096.0MB, bw=65381KB/s, iops=16345, runt= 64152msec
  cpu          : usr=4.67%, sys=19.04%, ctx=106232, majf=0, minf=9
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=0/w=1048576/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
  WRITE: io=4096.0MB, aggrb=65380KB/s, minb=65380KB/s, maxb=65380KB/s, mint=64152msec, maxt=64152msec

Disk stats (read/write):
  xvda: ios=1/1039389, merge=0/2627, ticks=0/3946317, in_queue=3945197, util=99.98%

# ./fio --randrepeat=1 --ioengine=libaio --direct=1 --gtod_reduce=1 --filename=sb-io-test --bs=4k --iodepth=64 --size=4G --readwrite=randread --numjobs=1 --name=test
test: (g=0): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=64
fio-2.2.8
Starting 1 process
Jobs: 1 (f=1): [r(1)] [100.0% done] [138.1MB/0KB/0KB /s] [35.6K/0/0 iops] [eta 00m:00s]
test: (groupid=0, jobs=1): err= 0: pid=1986: Sat May 16 15:01:48 2015
  read : io=4096.0MB, bw=213625KB/s, iops=53406, runt= 19634msec
  cpu          : usr=12.97%, sys=52.78%, ctx=20564, majf=0, minf=73
  IO depths    : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.1%, >=64=0.0%
     issued    : total=r=1048576/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
     latency   : target=0, window=0, percentile=100.00%, depth=64

Run status group 0 (all jobs):
   READ: io=4096.0MB, aggrb=213624KB/s, minb=213624KB/s, maxb=213624KB/s, mint=19634msec, maxt=19634msec

Disk stats (read/write):
  xvda: ios=1036241/0, merge=1795/0, ticks=870066/0, in_queue=869010, util=99.58%

# openssl speed sha256 rsa2048
Doing sha256 for 3s on 16 size blocks: 7056604 sha256's in 2.99s
Doing sha256 for 3s on 64 size blocks: 3775456 sha256's in 3.00s
Doing sha256 for 3s on 256 size blocks: 1593011 sha256's in 3.00s
Doing sha256 for 3s on 1024 size blocks: 515112 sha256's in 3.00s
Doing sha256 for 3s on 8192 size blocks: 69063 sha256's in 3.00s
Doing 2048 bit private rsa's for 10s: 5920 2048 bit private RSA's in 10.01s
Doing 2048 bit public rsa's for 10s: 191342 2048 bit public RSA's in 9.99s
OpenSSL 1.0.1k 8 Jan 2015
built on: Tue Mar 24 20:38:55 2015
options:bn(64,64) rc4(16x,int) des(idx,cisc,16,int) aes(partial) blowfish(idx)
compiler: -I. -I.. -I../include  -fPIC -DOPENSSL_PIC -DOPENSSL_THREADS -D_REENTRANT -DDSO_DLFCN -DHAVE_DLFCN_H -m64 -DL_ENDIAN -DTERMIO -g -O2 -fstack-protector-strong -Wformat -Werror=format-security -D_FORTIFY_SOURCE=2 -Wl,-z,relro -Wa,--noexecstack -Wall -DMD32_REG_T=int -DOPENSSL_IA32_SSE2 -DOPENSSL_BN_ASM_MONT -DOPENSSL_BN_ASM_MONT5 -DOPENSSL_BN_ASM_GF2m -DSHA1_ASM -DSHA256_ASM -DSHA512_ASM -DMD5_ASM -DAES_ASM -DVPAES_ASM -DBSAES_ASM -DWHIRLPOOL_ASM -DGHASH_ASM
The 'numbers' are in 1000s of bytes per second processed.
type             16 bytes     64 bytes    256 bytes   1024 bytes   8192 bytes
sha256           37761.09k    80543.06k   135936.94k   175824.90k   188588.03k
                  sign    verify    sign/s verify/s
rsa 2048 bits 0.001691s 0.000052s    591.4  19153.4

Unixbench scores from your tests: 2293.7 KVM vs 993.4 Xen. Nice!

-Chris
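
As a quick sanity check of the arithmetic, the figures above (fio aggregate throughput from the benchmark output, Unixbench totals from the previous reply) work out to roughly a 5.5x random-write, 1.7x random-read, and 2.3x Unixbench advantage for KVM:

```python
# Figures copied from the fio and Unixbench results posted above.
kvm = {"randwrite_kbs": 357662, "randread_kbs": 372728, "unixbench": 2293.7}
xen = {"randwrite_kbs": 65381, "randread_kbs": 213625, "unixbench": 993.4}

for metric in kvm:
    ratio = kvm[metric] / xen[metric]
    print(f"{metric}: KVM is {ratio:.2f}x Xen")
```

Single runs on a lightly loaded beta host, so treat the ratios as indicative rather than definitive.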

Any reason, other than being compatible with Win 3.1, that you're using the pc-i440fx qemu hardware model and not q35?

Also PIIX (PATA!) for disk, rather than AHCI for full virt?

Edit: grammr

Are your backing qemu processes (carefully!) wrapped using, say, SELinux or AppArmor to mitigate or limit some kinds of guest-to-host escape bugs?

@trippeh:

Are your backing qemu processes (carefully!) wrapped using, say, SELinux or AppArmor to mitigate or limit some kinds of guest-to-host escape bugs?
Yes - although I won't go into detail.

The chipset stuff, mentioned in your previous post, we're looking into.

-Chris

@caker:

@trippeh:

Are your backing qemu processes (carefully!) wrapped using, say, SELinux or AppArmor to mitigate or limit some kinds of guest-to-host escape bugs?
Yes - although I won't go into detail.
Good enough for me

@caker:

The chipset stuff, mentioned in your previous post, we're looking into.

I'm not a hundred percent sure the Q35 variant is ready for prime time, but it's been around for some years now.

Just wondering if it has been considered and dismissed, and if so, why.

AHCI seems to work as a disk bus even with the i440fx model, though. With NCQ it may get closer to virtio performance. Doing some testing.

But it won't run Win 3.1 ;-)

As suspected, the AHCI-based emulation is quite a bit faster than the PIIX IDE emulation in local quick-and-dirty testing (fio random 70% read / 30% write mix). Close to virtio here, actually, but the SSD I'm testing against is slow, so that's not very telling.

And it should work with most operating systems from the last 10 years or so.

@caker:

Our goal is to have KVM generally available, with a self-serve convert tool […]

The self-serve convert tool seems to be in place now. It's a button called "Upgrade to KVM".

To test, I created a new Xen linode and to my surprise it was 50% faster than my old Xen linode created back in August.

Then migrated to KVM and this time it was 62% faster than the new Xen linode from which it migrated.

I don't think this is possible just by using KVM. There must be new underlying hardware as well.
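
For what it's worth, those two speedups compound multiplicatively: if the new Xen Linode is 1.50x the old one, and the KVM Linode is 1.62x the new Xen one, the KVM Linode ends up roughly 2.4x the old Xen Linode. A quick check:

```python
old_to_new_xen = 1.50   # new Xen Linode vs. old Xen Linode (+50%)
new_xen_to_kvm = 1.62   # KVM Linode vs. new Xen Linode (+62%)

overall = old_to_new_xen * new_xen_to_kvm
print(f"KVM vs. old Xen: {overall:.2f}x")  # 1.50 * 1.62 = 2.43
```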

@sanvila:

To test, I created a new Xen linode and to my surprise it was 50% faster than my old Xen linode created back in August.

Then migrated to KVM and this time it was 62% faster than the new Xen linode from which it migrated.

I don't think this is possible just by using KVM. There must be new underlying hardware as well.
Or simply fewer users and a minimal amount of production workloads on the hardware, for now.

I notice there is no longer a watchdog device available (I can't find it). Could this be added? It is available on the Xen Linodes as xen_wdt.

This is useful to catch full system crashes and make the hypervisor reset the guest if it has not pinged the watchdog device in a while (using a watchdog daemon, for example "watchdog"). Just adding the device to the VMs by default should not be harmful - it does not activate until something open()s /dev/watchdog inside the guest.

On my home and work KVM setups I use the "i6300esb" qemu device, configured to "forcefully reset the guest" on timeout.

As for real hardware, pretty much all computers have one, often two, you just have to find & configure it.

Edit: cosmetic
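
For anyone curious what the guest side of this looks like, a ping is just a one-byte write to the device. Below is a minimal, hypothetical sketch in Python (in practice you would normally run a watchdog daemon instead); the /dev/watchdog path comes from the standard Linux watchdog API, and the "V" magic-close byte only disarms the timer if the driver was built without NOWAYOUT:

```python
import os

def ping_watchdog(path="/dev/watchdog", disarm=False):
    """Reset the watchdog timer by writing one byte to the device.

    Merely opening the device arms the watchdog; writing the magic
    byte 'V' immediately before closing asks the driver to disarm it
    again (supported unless the kernel enforces NOWAYOUT).
    """
    fd = os.open(path, os.O_WRONLY)
    try:
        os.write(fd, b"V" if disarm else b"\0")
    finally:
        os.close(fd)

# A typical daemon loop pings well within the watchdog timeout, e.g.:
#     while True:
#         ping_watchdog()
#         time.sleep(10)
```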
