IPV6
Internet Protocol version 6

Packet loss upstream when using IPv6

terrapen
Visitor


I'm experiencing substantial packet loss upstream of my modem (second hop) when using IPv6.  I have two business class connections--one at home and one at the office--and both modems are in the same neighborhood, just down the road.  Both modems are Netgears and both experience similar levels of loss.

 

I called Biz Class support and the tech insists that I do not have IPv6 and that Comcast does not support IPv6 at this time.  Very frustrating.

 

At any rate, here are some traceroutes to ipv6.google.com that demonstrate the issue.

 

From the office:

[Screenshot: Screen Shot 2016-01-19 at 10.26.15 AM.png]

 

From my home:

[Screenshot: Screen Shot 2016-01-19 at 10.30.29 AM.png]

 

 

Forum Contributor

Re: Packet loss upstream when using IPv6

The tech is confused: they don't support STATIC IPv6, but they do support dynamic.  If you get that answer in the future, ask for an escalation so the tech's manager can straighten them out.

 

What is your downstream and upstream bandwidth service level please, and what is the exact model # of the Netgear?

terrapen
Visitor

Re: Packet loss upstream when using IPv6

I have Netgear CG3000DCR modems at both locations.

Hardware revision: 1.04

Firmware revision: V3.01.05

 

Both locations are on the Business Deluxe 75/15 plan.

terrapen
Visitor

Re: Packet loss upstream when using IPv6

Well, this is frustrating.  Comcast Support responded to my escalation with a generic email saying that they logged into my modems, tested some basic website connectivity, and that it worked fine.  They said I should do some basic troubleshooting and contact an IT professional.   LOL.

 

How can I escalate this?  If I reply to their ticket, it's just going to get a service tech sent out.  I don't believe there is any problem with my CPE and I suspect that my signal levels are already fine.  It feels like a Comcast network issue but I'm not sure how to get this handled.

Trusted Forum Contributor

Re: Packet loss upstream when using IPv6

Terrapen,

 

It is helpful to understand that processing of an ICMP echo (ping) request is a low priority activity for most systems. This is in contrast to actual connection data traffic, the highest priority. Traceroute is, in effect, a series of probes sent with ascending hop count limits.

 

1. A Request timeout response (lost packet) most likely indicates the target system was busy doing its primary task, handling user data, not pings.

2. The result for pinging any hop after the local router (also a router) follows the same logic - a router's primary purpose is forwarding packets. Ping responses can easily be delayed or even time out because the router is doing its job.
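To make the hop-limit mechanics concrete, here is a toy Python sketch of how traceroute discovers one router per probe. This is a pure simulation with made-up hop names, not real network code: a probe whose hop limit expires at an intermediate router elicits a reply from that router, and the probe that reaches the end elicits a reply from the destination.

```python
# Simplified model of traceroute: each probe carries an increasing hop
# limit (TTL); the router that decrements it to zero answers with an
# ICMP Time Exceeded, revealing one hop per probe. No packets are sent.

def trace(path, max_hops=30):
    """path: ordered list of router names ending at the destination."""
    hops = []
    for ttl in range(1, max_hops + 1):
        if ttl < len(path):
            hops.append(path[ttl - 1])   # intermediate router replies
        else:
            hops.append(path[-1])        # destination itself replies
            break
    return hops

route = ["gateway", "cmts", "core1", "ipv6.google.com"]
print(trace(route))   # each hop appears once, destination last
```

Any hop that is too busy to answer (or configured not to) shows up as a `*` in real output, which is exactly the ambiguity discussed above.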

 

Bottom Line: What problem are you having that prompts you to use traceroute?

Forum Contributor

Re: Packet loss upstream when using IPv6

xz4gb8, please do not spread incorrect information.

 

https://en.wikipedia.org/wiki/Ping_(networking_utility)

 

"...

Echo reply

The echo reply is an ICMP message generated in response to an echo request; it is MANDATORY for all hosts and routers, and must include the exact payload received in the request...."

 

Forum Contributor

Re: Packet loss upstream when using IPv6

terrapen,

 

  Not saying this is definitely your problem, but the Netgear device you are using has been reported in many threads on this board as being prone to drop traffic at higher bandwidth.   I have one of these devices but at the lower business level and it works fine.  It is unfortunate, as this is the only device that properly handles DHCP-PD.   Typically, CPU starvation is what does it in for these SOHO devices, and that is what I suspect is your problem.

Trusted Forum Contributor

Re: Packet loss upstream when using IPv6

Tmittelstaedt,

 

terrapen has shown a diagnostic tool response showing end to end pings exhibit 0% packet loss, indicating that all intermediate nodes are doing their primary job of forwarding packets, even while not responding to ICMP ECHO requests.

 

terrapen has yet to describe any problem with actual business use of his connection. 

 

Your wikipedia reference to ICMP ECHO is correct, but misleading.  Implementation of ECHO certainly is a mandatory host requirement. However, there is a practical requirement that a router give preference to forwarding packets rather than answering ECHO requests. This can result in indefinitely delayed responses under high traffic conditions. This is another security consideration (denial of service) besides the one mentioned in the wikipedia entry which prompts implementers to discard ECHO requests in contradiction to RFC 1122.

 

Please indicate what incorrect information you perceive in my original post. I will immediately correct any such.

 

terrapen
Visitor

Re: Packet loss upstream when using IPv6

Here is the issue: IPv6 connectivity/performance is poor and my browser (which uses an automatic algorithm to choose IPv4 or IPv6, based on performance) uses IPv4 at least ~90% of the time.  I am not impressed with the end-user performance of the Comcast v6 network.

 

As for ping responses, these checks were being run in a residential area in the middle of a weekday.  I highly doubt that the routers were so loaded that they couldn't respond to ICMP requests.  Sorry, not buying that.

 

As for my Netgear, it was not heavily loaded at the time of testing.  I maintain latency and throughput graphs of my connection and can verify.

Forum Contributor

Re: Packet loss upstream when using IPv6

"A Request timeout response (lost packet) most likely indicates the target system was busy doing its primary task, handling user data, not pings"

 

You also mentioned echo replies timing out, which is another no-no.

 

Routers must respond to echo requests.  It is OK to delay the replies but the router cannot do so indefinitely.

 

Keep in mind that all of these lines (including ethernet) are essentially serial lines.  Meaning that the router can only put 1 packet at a time into the pipe.  That is why all routers have output queues - because quite often more packets arrive at the router destined to a particular output interface "faster" than that interface can accommodate them.

 

The way that all routers work is AS LONG AS the router CPU is NOT overloaded (or other problems like that) the router will ONLY drop UDP.  There are various UDP flooding tools out there which you can use to demonstrate this - I have even ported one of them to Android if you're interested - but that is how it is and it can be easily demonstrated if you care to learn more about how these routers work.

 

If a router recieves a UDP packet followed by an ICMP packet both destined for an output interface, if that interface is "full" (ie: in the middle of clocking out a packet) then both packets are put into the output queue.  The router isn't allowed to cease transmission of the current packet, abort it, then select the ICMP packet - that is what they are talking about when they say a router is allowed to DELAY processing of ICMP.  But, once the current packet has been completely transmitted, the router is not permitted to put the UDP packet in front of the ICMP packet - it must select the ICMP packet next from the queue and transmit it - then the UDP packet follows.  If the UDP packet has expired by then, then so be it - it's dropped.
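The queueing behavior described above can be sketched as a strict FIFO in a few lines of Python. This is a toy model with invented packet records, not router firmware: packets go out in arrival order (ICMP is never reordered behind later traffic), and a packet that expires while waiting is simply dropped at dequeue time.

```python
from collections import deque

# Toy model of a router's FIFO output queue: transmit strictly in
# arrival order, one packet at a time (serial line), and drop any
# packet whose lifetime expired while it sat in the queue.

def drain(queue, now, link_delay_per_pkt=1):
    sent, dropped = [], []
    q = deque(queue)
    while q:
        pkt = q.popleft()                     # strict FIFO: no reordering
        if pkt.get("expires", float("inf")) <= now:
            dropped.append(pkt["id"])         # expired while queued
        else:
            sent.append(pkt["id"])
            now += link_delay_per_pkt         # serial line: one at a time
    return sent, dropped

queue = [
    {"id": "udp-1", "expires": 0},   # already expired when its turn comes
    {"id": "icmp-1"},                # queued after udp-1, sent in order
    {"id": "udp-2"},
]
print(drain(queue, now=0))
```

The ICMP packet is sent exactly when its turn in the queue comes up - delayed by whatever is ahead of it, but never pushed behind traffic that arrived later.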

 

But in the last analysis, all of this is academic.  It is a fact that multiple people have complained on these forums about that Netgear model indiscriminately dropping packets at higher packet rates.  You can pull up the posts in the equipment forum if you care to do so.  That indicates only one thing - the Netgear CPU is overloaded.  When that happens the rules are thrown out the window and the router will not behave predictably.

 

He should NOT be running this device at 100Mbt down, simple as that.  He should be running the Cisco BWG.  Unfortunately DHCP-PD is broken on that device.  So his choice is to either run a Netgear that drops traffic at high data rates with working DHCP-PD - or run a Cisco BWG that works fine at high data rates (likely because it has a faster internal CPU) but has broken DHCP-PD.   Or, to discard the rented equipment entirely and run his own router  (most of the new wireless AC stuff has plenty fast CPUs for him) behind a bridged DOCSIS3 device, in which case he can get working DHCP-PD but he will give up the ability to have static IPv4.

 

I am through with defending Comcast here.  The problems in the 3 devices - the SMC, the Cisco, the Netgear - and IPv6 have been well known for years.  We even had promises a few years ago to get the Cisco DHCP-PD fixed - but that never happened - we also had the same promises to get DHCP-PD fixed on the SMC - that also never happened - only the code in the Netgear for DHCP-PD got fixed.  I can dig up those posts and promises in the archives as well as you can.  And on top of all of this the SMC reboots itself every 6 hours if IPv6 is enabled on it.

 

I would like Comcast to get the Cisco BWG's firmware fixed.  Cisco would do it if Comcast pressed the issue.  Comcast buys hundreds of thousands of these devices from Cisco, the BWG contract is worth millions to Cisco - if Comcast told Cisco to jump on this issue Cisco would say "how high"   The reality is Comcast doesn't give a tinker's dam about IPv6.

Forum Contributor

Re: Packet loss upstream when using IPv6

terrapen,

 

You claim your Netgear "was not loaded at the time of the test"; however, to be perfectly frank, you simply don't understand router loading.

 

When you initiate a request - such as for, let's say, a 50 KB file - down a 100Mbt connection, that file takes less than a second to transmit.  HOWEVER, during the milliseconds that this file is transferring, the data is coming at the Netgear at full throttle - at the full 100Mbt - and during that period of time the Netgear's CPU is overloaded and dropping packets.

 

When you run your fancy graphs that "prove" the router is not loaded, those graphs are sampling every minute - or longer - or at best every 30 seconds - and there is absolutely no possible way that they would show any clipping or drops.  None at all.  The damage is happening in between sampling periods and it is being averaged in with the seconds where no data is being sent.
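The arithmetic behind that claim is easy to check. A minimal Python sketch, using assumed numbers (a 10 ms burst at 100 Mbit/s line rate, one graph sample per minute), shows how a full-throttle burst all but vanishes once it is averaged over the sampling interval:

```python
# Why coarse sampling hides bursts: a 10 ms spike at full line rate,
# averaged over a 60-second sampling interval, looks like almost nothing.
# Illustrative arithmetic only; the numbers are assumptions.

LINK_BPS = 100_000_000          # 100 Mbit/s line rate
BURST_SECONDS = 0.010           # a 10 ms burst at full throttle
SAMPLE_SECONDS = 60             # one data point per minute on the graph

burst_bits = LINK_BPS * BURST_SECONDS
avg_bps = burst_bits / SAMPLE_SECONDS
utilization = avg_bps / LINK_BPS

print(f"average over the sample: {avg_bps:.0f} bit/s "
      f"({utilization:.5%} of line rate)")
```

A burst that saturated the link (and the CPU) for 10 ms registers as well under 0.1% utilization on a per-minute graph - indistinguishable from idle.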

 

Now, I have told you what I suspect your problem is.  I've told you where to go in these forums to research the issue and understand it.  You are renting this gear and if you honestly think I'm wrong you can call into Comcast and open a ticket and get your router swapped out FOR FREE and then you will have "proof" that I'm wrong because if I AM wrong, once you are on the Cisco BWG the problems won't go away.

 

The fact that you continue to argue, to me it just means that you are afraid I'm right and you are wrong and you're trying to turn this into a hissing match of some kind.

 

Am I 100% positive I'm right?  Of course not.  There are no guarantees in troubleshooting these issues.  But, you have stated that you have tried doing it your way and gotten nowhere.  It certainly won't hurt you to try it my way and see what happens.

Forum Contributor

Re: Packet loss upstream when using IPv6

 

xz4gb8

 

One last "fun fact" about ICMP flooding:

 

Today, no operating system ships with a working ping flood tool.  None of the Windows OSes ever implemented the -f (flood) option in their console ping commands.  Linux does have a -f option, but ping floods are redirected through the iptables stack, which can only process packets at a certain rate - this effectively limits a ping -f flood.

 

It is possible to boot FreeBSD with ipfw shut off - and use ping -f as the superuser.  At that point, your ability to flood is restricted to the efficiency of the network drivers in the OS and many of them aren't the greatest.  But even then there is still processing that goes on, on each packet before it's sent.

 

The only way you can really flood ICMP these days is with a custom-built program that creates the ICMP packets from scratch and uses the raw socket interface in the OS to send them - or by installing a packet driver in the OS that runs at a privileged ring (where device drivers run).  And that program must be run as superuser on a Unix/Linux system.  It's NOT possible to do it on ANY Windows OS.

 

Microsoft introduced raw sockets into Windows XP - then withdrew them with a patch after Service Pack 1 - then put a neutered version back in later that only allows raw socket access for UDP - not ICMP.  No successive Windows OS allows raw socket access for ICMP.

 

Android, like Linux, only permits raw socket access to the root user - which means to run a real ICMP flood tool on Android you must be rooted.

 

There is a reason that these OSes are built like this which should be obvious to you now, if you have read my posts.

Trusted Forum Contributor

Re: Packet loss upstream when using IPv6

Further analysis of data from terrapen

 

In ping, ping6, traceroute, traceroute6, and the like, indication of non-response is based upon no response being received before a specified timeout - whether the default built into the command or one specified by a command argument.  The practical effect is to place a finite time limit on command completion. There is no way to determine whether a particular ECHO response was never sent or was just delayed longer than the specified timeout and not received. An ECHO response may even have been sent in a timely fashion, but was delayed along the return path - which is not necessarily the direct inverse of the forward path.
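The point is easy to see in code. This minimal Python sketch (simulated timestamps, no real network I/O) shows that a probe tool's verdict is the same whether a reply was never sent or merely arrived after the timeout:

```python
# A probe tool only knows "no reply arrived before my timeout".
# It cannot distinguish a reply that was never sent from one that
# was merely late. Timestamps here are simulated, not measured.

def classify(sent_at, reply_at, timeout):
    """reply_at is None if no reply ever arrived."""
    if reply_at is None or reply_at - sent_at > timeout:
        return "timeout"                 # reported as 'lost' either way
    return "ok"

print(classify(0.0, None, 1.0))      # reply genuinely lost
print(classify(0.0, 1.3, 1.0))       # reply sent, but 300 ms too late
print(classify(0.0, 0.011, 1.0))     # normal 11 ms RTT
```

The first two cases are indistinguishable from the tool's point of view, which is why a `*` in traceroute output proves so little on its own.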

 

So, for all of these commands, a missing response means only that something untoward has happened, but we don't really know what. So we should look for more interesting data.

 

terrapen has provided traceroute6 data from two different client systems to ipv6.google.com which shows 0% packet loss end-to-end and average RTTs small enough, and close enough, to be nearly statistically identical.  An RTT of 11 msec generally poses no humanly discernible performance problems.  This suggests that some other mechanism is in play. Early implementations of Happy Eyeballs have been known to favor selection of IPv4. This does not necessarily mean that IPv6 performance is bad; it may mean that the version of Happy Eyeballs you are using is biased more toward selection of IPv4.
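The core of the Happy Eyeballs idea (RFC 6555) can be sketched as a simple race with a head start for IPv6. This is an illustrative Python model, not a browser's actual implementation: the connection times and the head-start value are assumed inputs, and real implementations race live TCP connection attempts instead.

```python
# Sketch of the Happy Eyeballs selection idea: IPv6 wins unless it is
# slower than IPv4 by more than a grace window. Times are simulated
# inputs; the 300 ms head start is an assumed value, not a measured one.

HEAD_START = 0.300   # seconds of grace given to IPv6 before IPv4 may win

def pick_family(v6_connect_s, v4_connect_s, head_start=HEAD_START):
    """Return the address family the racing algorithm would settle on."""
    if v6_connect_s <= v4_connect_s + head_start:
        return "IPv6"    # IPv6 finished within its grace window
    return "IPv4"        # IPv6 too slow; fall back to IPv4

print(pick_family(0.050, 0.030))   # both fast -> IPv6 preferred
print(pick_family(0.900, 0.030))   # v6 sluggish -> browser falls back
```

Under this model, even modest extra delay on the IPv6 path - well below anything a user would notice - is enough to tip a connection-time race toward IPv4, which would match the ~90% IPv4 selection terrapen reports.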

 

Since all this Happy Eyeballs stuff is done at connection establishment, looking at ongoing data transfers (page updates and the like) is necessary to be more definitive.  Try using http://speedtest.xfinity.com to several targets and compare IPv4 and IPv6 numbers.  Tell us what you find.

NetDog
New Member

Re: Packet loss upstream when using IPv6

Is this still an issue?  RF is good right?

terrapen
Visitor

Re: Packet loss upstream when using IPv6

We can close this issue.  I've switched to biz class fiber from a competitor.  Latency is now 1-3ms and speeds are 12x what I had before.  IPv6 seems preferred ~80-90% of the time now.

Forum Contributor

Re: Packet loss upstream when using IPv6

As I said repeatedly (and the guy refused to believe it) the Netgear is limited in its CPU power.  When he was forced to scrap the Netgear for a fiber router - surprise surprise surprise, everything went back to normal.

 

It's like beating your head against a wall.  Sigh.