Olaf Schreck
2016-09-19 11:10:02 UTC
I have configured a Debian 7 server for IPv6 (in addition to IPv4).
I can ping6 www.google.com and other addresses, fine. BUT the server
reproducibly loses IPv6 connectivity after roughly 20 minutes, and I can't
figure out why this happens. Clues, anyone?
My hoster (Hetzner) routes the 2a01:4f8:191:XXXX::/64 network to the
server. Following their instructions, I assign a static address from that
block and set the default route to fe80::1, either manually
ip -6 addr add 2a01:4f8:191:XXXX::5/64 dev eth0
ip -6 route add default via fe80::1 dev eth0
or in /etc/network/interfaces like this
iface eth0 inet6 static
    address 2a01:4f8:191:XXXX::5
    netmask 64
    gateway fe80::1
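For completeness, these are the commands I use to verify that the address, route, and neighbour entry actually landed (interface name eth0 as above):

```shell
# Inspect the IPv6 address and routes on eth0.
ip -6 addr show dev eth0
ip -6 route show dev eth0
# The neighbour cache entry for the fe80::1 gateway is also worth
# watching; a stale/FAILED state here would point at an NDP problem.
ip -6 neigh show dev eth0
```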
This works, I can ping6. But it reproducibly stops working after 20 minutes,
confirmed using this command
while true; do date; ping6 -c3 -w5 www.google.com; sleep 10; done
I'm sure it's not Google rate-limiting my pings; I get the same results with
various other IPv6 addresses that I'm authorized to ping.
To restore IPv6 connectivity (IPv4 keeps working throughout), I can either
reboot or re-add the default route with these commands (order is important):
ip -6 route del default
ip -6 route del fe80::1 dev eth0
ip -6 route add fe80::1 dev eth0
ip -6 route add default via fe80::1 dev eth0
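As a stop-gap until the root cause is found, the four commands above could be wrapped in a small watchdog loop. This is only a sketch of what I'm running interactively, assuming the eth0/fe80::1 setup described above:

```shell
#!/bin/sh
# Hypothetical watchdog: whenever ping6 fails, tear down and re-add
# the default route (order matters, as noted above).
while true; do
    if ! ping6 -c3 -w5 www.google.com >/dev/null 2>&1; then
        date
        echo "IPv6 down, re-installing default route"
        ip -6 route del default
        ip -6 route del fe80::1 dev eth0
        ip -6 route add fe80::1 dev eth0
        ip -6 route add default via fe80::1 dev eth0
    fi
    sleep 60
done
```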
That buys another 20 minutes of IPv6. 100% reproducible, and it's only the
routing that needs to be fixed.
I have excluded:
- no router advertisements from the hoster: no such packets seen with
tcpdump, and the server is not configured to accept RAs
- no ip6tables rules, and default ACCEPT everywhere
- no cronjob or other periodical script that could be responsible
- no "security software" or similar that would interfere
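For the RA check mentioned above, this is the tcpdump filter I'd use (ICMPv6 type 134 is a Router Advertisement; the interface name is an assumption):

```shell
# Capture only ICMPv6 Router Advertisements (type 134) on eth0.
tcpdump -n -i eth0 'icmp6 and ip6[40] == 134'
# Neighbor Solicitations/Advertisements (types 135/136) around the
# 20-minute mark might also reveal a failing NDP exchange:
tcpdump -n -i eth0 'icmp6 and (ip6[40] == 135 or ip6[40] == 136)'
```

Note the `ip6[40]` byte match only works when there are no extension headers before the ICMPv6 header, which is the common case for RA/NS/NA packets.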
Important data point: this server has two Ethernet interfaces, so there are
two link-local fe80::/64 routes, one for eth0 and one for eth1. I suspected
the problem might be related to this, so I disabled IPv6 on the second
interface completely with sysctl net.ipv6.conf.eth1.disable_ipv6=1.
And that resulted in stable and flawless IPv6 connectivity!
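For reference, this is how I made the workaround persistent across reboots (file location is the Debian convention; only a config fragment):

```shell
# /etc/sysctl.d/local-ipv6.conf -- disable IPv6 on eth1 only,
# leaving eth0 untouched. Applied at boot, or manually via:
#   sysctl -p /etc/sysctl.d/local-ipv6.conf
net.ipv6.conf.eth1.disable_ipv6 = 1
```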
While this workaround is OK for this server, I have another one that shows
the same symptoms. But that server needs IPv6 on the other interfaces as
well, so the workaround does not apply there.
I'd much rather learn why this happens, or what part of the configuration I
may be missing. Clues or further debugging hints are very welcome. Thanks!
Olaf