Saturday, April 11, 2009

Google 'local peering' live?

From some traceroute data [target: www.blogger.com]:
9 unknown.uni.net.za (155.232.253.254) 2.341 ms 2.055 ms 2.237 ms
10 64.233.174.40 151.205 ms 151.276 ms 151.457 ms
Note the jump in RTT, from ~2 ms to ~151 ms, as we go from the TENET router to the Google router (64.233.160.0/19 belongs to Google).
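
As a quick way to check that claim, here's a minimal Python sketch (my own illustration, not part of the original trace) that tests the hop-10 address against Google's 64.233.160.0/19 allocation and works out the size of the RTT jump, using the standard-library ipaddress module:

import ipaddress

google_prefix = ipaddress.ip_network("64.233.160.0/19")
hop9_rtt = 2.341                                  # first probe to the TENET router (ms)
hop10_ip, hop10_rtt = "64.233.174.40", 151.205    # first probe to the next hop (ms)

print(ipaddress.ip_address(hop10_ip) in google_prefix)   # True: the hop sits inside Google's prefix
print(f"RTT jump: {hop10_rtt - hop9_rtt:.0f} ms")        # ~149 ms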

Compare this to traceroute data for a company that peers less aggressively [target: www.microsoft.com]:
8 v2750-tiger-brie.uni.net.za (155.232.145.226) 2.341 ms 2.254 ms 2.337 ms
9 unknown.uni.net.za (196.32.209.25) 153.050 ms 152.938 ms 152.223 ms
Here the next hop is a TENET-controlled router overseas.
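
To make that comparison mechanical rather than eyeballed, here's a small sketch (again my own, and the 50 ms threshold is an arbitrary choice) that scans traceroute output for the first hop where the best RTT jumps sharply, i.e. where the path presumably leaves the country:

def first_big_jump(traceroute_text, threshold_ms=50.0):
    """Return (prev_hop, hop, jump_ms) for the first hop whose best RTT
    exceeds the previous hop's best RTT by more than threshold_ms, else None."""
    prev = None
    for line in traceroute_text.splitlines():
        parts = line.split()
        if not parts or not parts[0].isdigit():
            continue
        # RTT values are the tokens immediately followed by "ms"
        rtts = [float(v) for v, unit in zip(parts, parts[1:]) if unit == "ms"]
        if not rtts:
            continue
        hop, best = int(parts[0]), min(rtts)
        if prev is not None and best - prev[1] > threshold_ms:
            return prev[0], hop, best - prev[1]
        prev = (hop, best)
    return None

trace = """\
 9 unknown.uni.net.za (155.232.253.254) 2.341 ms 2.055 ms 2.237 ms
10 64.233.174.40 151.205 ms 151.276 ms 151.457 ms"""
print(first_big_jump(trace))   # (9, 10, ~149): the long-haul link sits between hops 9 and 10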

So perhaps Google has started 'peering locally' and is transparently tunneling the data overseas, allowing IS [and/or other ISPs] to avoid paying for each user's international bandwidth. Of course Google is therefore paying for it, but with the standard international bandwidth pricing feedback still damping end-user usage, it probably isn't seeing a huge explosion in bandwidth demand.

Contrast this with the local YouTube traffic caching, where having locally-available video means that videos load faster: the better user experience translates into higher usage, and demand keeps increasing. For the user this is a virtuous circle, but to the various I[C]T[S] departments it must seem more like a vicious circle.

However, for organisations like TENET where the pipe is purchased old-school (i.e. 'per Mbps' rather than 'per GB'), this will be a boon, because up to 40 Mbps [peak] is now being handed off locally. Interestingly, the upstream/outbound traffic, presumably web crawling, is still going through the Ubuntunet point in London.
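
To put that 40 Mbps [peak] figure in perspective, a quick back-of-envelope sketch (my own arithmetic, not numbers read off the graphs) of the monthly volume it would represent if sustained flat-out:

mbps = 40
seconds_per_month = 30 * 24 * 3600                    # 2 592 000 s
gb_per_month = mbps / 8 * seconds_per_month / 1000    # Mbps -> MB/s -> decimal GB
print(f"{gb_per_month:,.0f} GB/month")                # 12,960 GB, i.e. roughly 13 TB

Under a per-GB contract that sort of volume would be metered directly; on a per-Mbps pipe, what matters is simply that those 40 Mbps no longer have to fit through the London link.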

Some graphs:
The London peering session going [relatively] quiet late Tuesday.


A 'local peering' session [presumably with Google] in Cape Town going live late Tuesday:

And some of the local caching traffic seems strongly correlated (via Mk. 1 eyeball-based peak matching on other graphs) with the local peering session.

The takeaway: Google is probably peering locally, but seems to be doing so in a way that avoids hitting ISPs all at once with the economic problem of converting expensive international traffic into cheap local traffic.
