Note the jump in RTT as we go from the TENET router to the Google router (126.96.36.199/19 belongs to Google).
9 unknown.uni.net.za (188.8.131.52) 2.341 ms 2.055 ms 2.237 ms
10 184.108.40.206 151.205 ms 151.276 ms 151.457 ms
Compare this to traceroute data for a less aggressively peering company [target: www.microsoft.com]:
 8 v2750-tiger-brie.uni.net.za (220.127.116.11) 2.341 ms 2.254 ms 2.337 ms
 9 unknown.uni.net.za (18.104.22.168) 153.050 ms 152.938 ms 152.223 ms
Here the next hop is a TENET-controlled router overseas.
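The RTT jump that distinguishes the two traces can be spotted programmatically rather than by eye. A minimal sketch (the parsing regex and the 50 ms threshold are my own assumptions, not anything TENET uses):

```python
import re

# Match a traceroute hop line: hop number, host (with optional "(ip)"),
# then one or more "<rtt> ms" samples.
HOP_RE = re.compile(r"^\s*(\d+)\s+(\S+)(?:\s+\(([\d.]+)\))?((?:\s+[\d.]+\s+ms)+)")

def median(xs):
    xs = sorted(xs)
    n = len(xs)
    return xs[n // 2] if n % 2 else (xs[n // 2 - 1] + xs[n // 2]) / 2

def rtt_jumps(traceroute_text, threshold_ms=50.0):
    """Return (hop, host, jump_ms) for hops whose median RTT rises sharply --
    a rough signal that the path has crossed an international link."""
    hops = []
    for line in traceroute_text.splitlines():
        m = HOP_RE.match(line)
        if not m:
            continue
        rtts = [float(x) for x in re.findall(r"([\d.]+)\s+ms", m.group(4))]
        hops.append((int(m.group(1)), m.group(2), median(rtts)))
    return [
        (h1, host, r1 - r0)
        for (h0, _, r0), (h1, host, r1) in zip(hops, hops[1:])
        if r1 - r0 > threshold_ms
    ]

# Applied to the Microsoft-bound hops quoted above:
sample = """\
 9 unknown.uni.net.za (188.8.131.52) 2.341 ms 2.055 ms 2.237 ms
10 184.108.40.206 151.205 ms 151.276 ms 151.457 ms"""
for hop, host, jump in rtt_jumps(sample):
    print(f"hop {hop} ({host}): +{jump:.1f} ms")  # → hop 10 (184.108.40.206): +149.0 ms
```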
So perhaps Google has started 'peering locally' and is transparently tunneling the data overseas, allowing IS [and/or other ISPs] to avoid paying for each user's international bandwidth. Of course Google is therefore paying for it instead, but with standard international bandwidth pricing still providing feedback that damps usage, they probably aren't seeing a huge explosion in bandwidth demand.
Contrast this to the local YouTube traffic caching, where having locally-available video means that videos load faster: better user experience translates to higher usage, and the demand increases. For the user this is a virtuous circle, but to the various I[C]T[S] departments it must seem more like a vicious circle.
However, for organisations like TENET, where the pipe is purchased old-school (i.e. 'per Mbps' rather than 'per GB'), this will be a boon, because up to 40 Mbps [peak] is now being handed off locally. Interestingly, the upstream/outbound traffic, presumably web crawling, is still going through the Ubuntunet point in London.
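For a sense of scale on the 'per Mbps' vs 'per GB' distinction, here is the volume that a link saturated at the 40 Mbps peak figure would carry in a month (the flat-rate assumption is mine; real traffic obviously isn't flat):

```python
# Rough conversion: a link running flat-out at 40 Mbps for a 30-day month.
mbps = 40
seconds_per_month = 30 * 24 * 3600
gb_per_month = mbps / 8 * seconds_per_month / 1000  # Mbps -> MB/s -> GB
print(round(gb_per_month))  # → 12960
```

Nearly 13 TB/month of volume-billed traffic avoided, in the worst case, which is why handing it off locally matters under 'per Mbps' purchasing.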
The London peering session going [relatively] quiet late Tuesday.
A 'local peering' session [presumably with Google] in Cape Town going live late Tuesday:
And some of the local caching traffic seems strongly correlated (via Mk. 1 eyeball-based peak matching on other graphs) to the local peering session.
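The Mk. 1 eyeball matching could be firmed up with a simple Pearson correlation over the two traffic series. A minimal sketch, assuming the series are aligned, equal-length samples (the values below are invented for illustration; real input would be the five-minute averages behind the graphs):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Made-up illustrative samples (Mbps): caching traffic vs. local peering traffic.
caching = [5, 12, 30, 38, 25, 10, 6, 14, 33, 40]
peering = [4, 10, 28, 36, 22, 9, 5, 13, 31, 39]
print(f"r = {pearson(caching, peering):.3f}")
```

A coefficient near 1 would support the claim that the caching traffic and the peering session move together; the eyeball method is fine for a first pass, but this removes the guesswork when peaks are noisy.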
The takeaway: Google is probably peering locally, but is trying to avoid flooding ISPs with bandwidth-conversion economic problems.