Linode Community Forums
PostPosted: Sat Feb 08, 2014 2:40 pm 

Joined: Fri May 29, 2009 8:40 am
Posts: 37
Has anyone managed to create a high availability setup for IPv6 on Linode?

I've tried the basics of moving floating IPv4 addresses between Linodes - and that's easy enough. Just enable IP failover for the relevant IPs, reboot as needed, and use arping to force other hosts to update their ARP cache. It works nicely.
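For anyone reading along, the IPv4 side can be wrapped in a small helper like this sketch (the address, the /24 prefix and eth0 are placeholders for your own setup; this assumes the iputils version of arping, where -U sends unsolicited/gratuitous ARP):

```shell
# Hypothetical takeover helper: bring up the floating IPv4 address and
# announce it with gratuitous ARP so neighbours refresh their caches.
takeover_v4() {
  addr="$1"; iface="$2"
  # Add the floating address (ignore the error if it's already present)
  ip addr add "$addr/24" dev "$iface" 2>/dev/null || true
  # -U: unsolicited (gratuitous) ARP, -c 3: send three announcements
  arping -U -c 3 -I "$iface" "$addr"
}
```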

However, I've had no such luck doing the same with IPv6. It's possible to transfer IPv6 pool addresses to other Linodes, but I can't get the IPv6 NDP caches on other hosts to update. For other Linodes on the same network this isn't too bad - their neighbour caches don't last that long. For external connectivity, however, this results in 20-30 minutes of traffic going to the old Linode.

I've tried using arpsend/ndsend to get the neighbours' caches updated:

Code:
root@devon:~# arpsend -U -i 2a01:7e00::2:9995 eth0

18:37:13.021656 IP6 fe80::f03c:91ff:fe6e:afd5 > ff02::1: ICMP6, neighbor advertisement, tgt is 2a01:7e00::2:9995, length 32


But sadly this traffic isn't being seen on other servers - probably related to why ping6 ff02::1%eth0 doesn't work either.
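A quick way to test whether link-local multicast works at all is a hypothetical helper like this (eth0 is a placeholder) - if it gets no replies while unicast ping6 to the same neighbours works, the all-nodes address is probably being filtered somewhere on the network:

```shell
# Ping the all-nodes link-local multicast group on a given interface.
# On an unfiltered link, every IPv6 host on the segment should answer.
check_allnodes() {
  iface="$1"
  ping6 -c 2 "ff02::1%$iface"
}
```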

Anyone with any experience on this?


PostPosted: Wed May 28, 2014 7:16 pm 

Joined: Tue Dec 27, 2005 1:33 am
Posts: 43
Location: USA
I've found a solution. If you ping the IPv6 default gateway from the pool address, it forces the router to refresh its NDP cache for the pool address, permitting connectivity from hosts that are off-subnet. This doesn't fix the NDP caches of other Linodes on the same subnet, but as you observed, their NDP caches time out much more quickly than the router's. Oddly, it takes about 5-10 pings to have any effect, but I've done extensive testing and this technique works every time, so I'm deploying it to production.

This is the command I'm using:

Code:
ping6 -c 1 -w 15 -I $MY_POOL_ADDRESS fe80::1%eth0 > /dev/null


This pings the default gateway from the pool address until it gets a response or 15 seconds elapse. If 15 seconds elapse without a response, there's probably some other problem with your connectivity.

You need a version of ping that supports the -I option. On Debian, this means you need the iputils-ping package rather than the inetutils-ping package.

Edit: I should mention that if you try to run ping6 immediately after adding the pool address to your interface, ping6 might fail to bind to the pool address because DAD (Duplicate Address Detection) hasn't completed yet. So you'll need to either wait until DAD completes before pinging, or just disable DAD.
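Putting the ping command and the DAD caveat together, a takeover step might look like this sketch (eth0, the /64 prefix length and fe80::1 as gateway are assumptions from my setup; the nodad flag on ip skips Duplicate Address Detection so ping6 can bind to the address straight away):

```shell
# Hypothetical takeover step: add the pool address without DAD, then
# ping the gateway from it so the router refreshes its NDP entry.
takeover_v6() {
  pool="$1"
  # nodad: skip Duplicate Address Detection so the address is usable immediately
  ip -6 addr add "$pool/64" dev eth0 nodad
  # Ping the gateway from the pool address (one reply or a 15 s deadline)
  ping6 -c 1 -w 15 -I "$pool" fe80::1%eth0 > /dev/null
}
```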


PostPosted: Sat May 31, 2014 9:36 am 

Joined: Fri May 29, 2009 8:40 am
Posts: 37
As I forgot to reply to this post - I did contact support regarding this problem, and they confirmed that the all-nodes multicast address (ff02::1) is filtered, which is why it's impossible to get IPv6 high availability working the right way. I believe it's on their todo list - but there's no ETA.

Thank you for this workaround! I can finally do some high availability without having to worry about IPv6 traffic disappearing for up to 30 minutes.

The only downside is that I'll have to flush the NDP cache of other Linodes - but at least that's a solvable problem.
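For flushing those caches, something like this sketch should do it on each of your own Linodes (the address and interface are placeholders); it drops the stale entry so the next packet triggers fresh neighbour discovery toward the new holder:

```shell
# Hypothetical helper: flush the cached NDP entry for one destination
# address so traffic stops going to the old holder of that address.
flush_neighbour() {
  addr="$1"; iface="$2"
  ip -6 neigh flush to "$addr" dev "$iface"
}
```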


PostPosted: Thu Mar 05, 2015 2:26 am 

Joined: Tue Dec 27, 2005 1:33 am
Posts: 43
Location: USA
Unfortunately, my workaround doesn't work anymore. Although fe80::1 replies to the pings, Linode's routers continue to send traffic to the wrong host for about 30 minutes. -Alex-, have you found out anything new by chance?


PostPosted: Sat Mar 07, 2015 4:09 pm 

Joined: Fri May 29, 2009 8:40 am
Posts: 37
Sadly I'm living with IPv6 being a second-class citizen on Linode, and IPv6 being a bit of a pain as well.

I've got both servers with the relevant IPv6 addresses as additional IPs on the loopback interface:

Code:
iface lo inet6 loopback
   up ip -6 addr add 2a01:7e00:etc:etc/128 dev lo preferred_lft 0


Now the high availability monitor can either add the same address to eth0 on one of the servers, or the pool address if you've got a routed subnet to a pool IP.

The one quirk of this is that while traffic is still going to the old server, the old server will keep responding to existing traffic until the router's cache expires and points to the new server. A big disadvantage is that if you're doing this with pool addresses alongside other servers of your own, those servers' NDP caches will keep sending traffic to the old server for quite a while! It won't stop until traffic to that IP has stopped long enough for the entry to time out (a few minutes), or until you flush the cache entry for that destination IP.
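The takeover/release steps the monitor performs might be sketched like this (eth0, fe80::1 and the gateway-ping trick from earlier in the thread are assumptions from my setup; the address stays on lo as a /128 everywhere, and only the active node also holds it on eth0):

```shell
# Hypothetical promote/demote pair for the high availability monitor.
promote() {
  addr="$1"
  # Answer neighbour discovery for the service address on eth0
  ip -6 addr add "$addr/128" dev eth0 nodad
  # Refresh the router's NDP entry by pinging the gateway from it
  ping6 -c 1 -w 15 -I "$addr" fe80::1%eth0 > /dev/null
}
demote() {
  addr="$1"
  # Stop answering on eth0; the /128 on lo stays in place
  ip -6 addr del "$addr/128" dev eth0
}
```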

This allows me to perform scheduled maintenance by turning off keepalived on the server ~30 minutes in advance. Sadly it doesn't help for real high availability: after unexpected downtime, traffic will eventually flow to the right server, but not immediately.

The only reason I'm tolerating this as a solution is that it should happen very rarely, IPv6 traffic is a small percentage of overall traffic, and Happy Eyeballs will hopefully favour IPv4 until the IPv6 address is reachable again.

I don't like it, and I hope that Linode will at some point take this issue seriously. IPv6 just feels like an afterthought in multiple ways.


Powered by phpBB® Forum Software © phpBB Group