Showing posts with label traffic.

Friday, September 18, 2009

End of an era

TENET now runs all institutional traffic over SEACOM:


Bandwidth usage on the SAT-3 portions has dwindled to almost nothing; the SAT-3 connectivity should terminate sometime in the next month or so.


It's interesting to see some BGP traffic on the Telia backup link - perhaps routing changes for some of the institutional ASes propagating through and settling down?

Wednesday, July 22, 2009

Mmmm mirrors

Wow.
1.2+ Gbps traffic from the mirror.ac.za front-end in London.

Saturday, April 11, 2009

Google 'local peering' live?

From some traceroute data [target: www.blogger.com]:
9 unknown.uni.net.za (155.232.253.254) 2.341 ms 2.055 ms 2.237 ms
10 64.233.174.40 151.205 ms 151.276 ms 151.457 ms
Note the jump in RTT as we go from the TENET router to the Google router (64.233.160.0/19 belongs to Google).

Compare to traceroute data for a less aggressively-peering company [target: www.microsoft.com]:
8 v2750-tiger-brie.uni.net.za (155.232.145.226) 2.341 ms 2.254 ms 2.337 ms
9 unknown.uni.net.za (196.32.209.25) 153.050 ms 152.938 ms 152.223 ms
Where the next hop is a TENET-controlled router overseas.
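That latency jump is easy to spot programmatically. A minimal sketch, assuming a Unix traceroute binary is on the path; the 100 ms threshold and the parsing are just illustrative choices, not anything TENET actually runs:

import re
import subprocess

def rtt_jumps(target, threshold_ms=100.0):
    """Run traceroute and flag hop pairs where the best RTT jumps sharply.

    A big jump (e.g. ~2 ms to ~150 ms, as in the traces above) usually
    marks the first trans-oceanic hop on the path.
    """
    out = subprocess.run(["traceroute", "-n", target],
                         capture_output=True, text=True).stdout
    hops = []  # (hop number, address, best RTT in ms)
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+(\S+)\s+(.*)", line)
        if not m:
            continue
        rtts = [float(x) for x in re.findall(r"([\d.]+)\s*ms", m.group(3))]
        if rtts:
            hops.append((int(m.group(1)), m.group(2), min(rtts)))
    for (h1, a1, r1), (h2, a2, r2) in zip(hops, hops[1:]):
        if r2 - r1 > threshold_ms:
            print(f"hop {h1} ({a1}, {r1:.1f} ms) -> hop {h2} ({a2}, {r2:.1f} ms)")

rtt_jumps("www.blogger.com")  # the target used in the traces above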

So perhaps Google has started 'peering locally' and is transparently tunneling the data overseas, letting IS [and/or other ISPs] avoid paying for each user's international bandwidth. Of course Google is then paying for it instead, but with the usual high price of international bandwidth still damping usage, they probably aren't seeing a huge explosion in bandwidth demand.

Contrast this to the local YouTube traffic caching, where having locally-available video means that videos load faster: better user experience translates to higher usage, and the demand increases. For the user this is a virtuous circle, but to the various I[C]T[S] departments it must seem more like a vicious circle.

However, for organisations like TENET where the pipe is purchased old-school (i.e. 'per Mbps' rather than 'per GB'), this will be a boon, because up to 40 Mbps [peak] is now being handed off locally. Interestingly, the upstream/outbound traffic, presumably web crawling, is still going through the Ubuntunet point in London.
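For a rough sense of scale, a back-of-envelope sketch; the 40 Mbps figure is the peak hand-off mentioned above, and the flat-out assumption is an upper bound, not a measurement:

# Back-of-envelope: what 40 Mbps of locally handed-off traffic would amount
# to if it all had to cross the international pipe instead.
peak_mbps = 40                      # peak local hand-off observed (see above)
seconds_per_month = 30 * 24 * 3600

# If that traffic ran flat-out at the peak rate all month (an upper bound):
monthly_tb = peak_mbps * 1e6 * seconds_per_month / 8 / 1e12
print(f"Upper bound: {monthly_tb:.1f} TB/month kept off the international link")
# ~13 TB/month. Under 'per Mbps' procurement the saving is simply 40 Mbps of
# headroom on the pipe; under 'per GB' billing the same traffic would be
# metered and charged by volume.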

Some graphs:
The London peering session going [relatively] quiet late Tuesday.


A 'local peering' session [presumably with Google] in Cape Town going live late Tuesday:

And some of the local caching traffic seems strongly correlated (via Mk. 1 eyeball-based peak matching on other graphs) to the local peering session.

The takeaway: Google is probably peering locally, but is trying to avoid suddenly flooding ISPs with the economic problems that come with traffic converting from international to local bandwidth.

Wednesday, February 25, 2009

UCT's extra bandwidth redux

[from a comment on the last post:]
Of course, SANREN is what's supposed to provide the drastically improved last-mile.
SANReN will eventually provide the core of the new network, and maybe the access leg too. Some Gauteng institutions have access via the fibre ring there and are happy. Many large institutions elsewhere don't have that yet (and are waiting, waiting, waiting) and are unhappy.

However, if the UCT link was increased from ~32 to ~44 Mbps (coincidentally this is like going from an E3 to a DS-3), what's to prevent it from being increased further? Why not a DS-4 (~274 Mbps) :)
BTW, the Google caches are already live. [We don't know for sure that these are GGCs, but see below.] Their effectiveness seems to be improving (presumably their caches are still filling), but they do still use quite a lot of incoming (international) bandwidth.
If these are Google caches (GGCs), I doubt they will have reduced international bandwidth usage much. The local usage would spike because access is so much quicker and easier.
Traceroute (ICMP, rather than UDP) to v1.cache.googlevideo.com
mt.google.com is also mapping to a TENET IP block, so Google Maps tiles should be served from somewhere local [TENET or GGC or who-knows-what].

kh.google.com [Google Earth] is still mapping to the Google IP addresses overseas.

mail.google.com and docs.google.com still map to the Google IP blocks overseas.

So either TENET has some sort of fancy caching and line-rate traffic redirection going on, or Google is doing something interesting.
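Those mappings are easy to re-check. A small Python sketch; treating 155.232.0.0/16 as "the TENET range" is an assumption based on the uni.net.za hops in the traceroutes above, and 64.233.160.0/19 is the Google block noted in the peering post above, so the real assignments may differ:

import socket
from ipaddress import ip_address, ip_network

# Assumed prefixes (see lead-in): TENET range guessed from the uni.net.za
# hops above, Google block from the earlier traceroute post.
TENET_GUESS = ip_network("155.232.0.0/16")
GOOGLE_BLOCK = ip_network("64.233.160.0/19")

def classify(hostname):
    addrs = {info[4][0] for info in socket.getaddrinfo(hostname, None, socket.AF_INET)}
    for addr in sorted(addrs):
        ip = ip_address(addr)
        if ip in TENET_GUESS:
            label = "local (TENET range?)"
        elif ip in GOOGLE_BLOCK:
            label = "Google (overseas block)"
        else:
            label = "other"
        print(f"{hostname:30s} {addr:15s} {label}")

for name in ["v1.cache.googlevideo.com", "mt.google.com",
             "kh.google.com", "mail.google.com", "docs.google.com"]:
    classify(name)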

Hopefully the local build-out will be finished soon.

Tuesday, February 24, 2009

UCT's extra bandwidth

UCT seems to have gotten some extra temporary bandwidth today. Looking at the traffic graphs, we see a spike at 14h00 (GMT+2).



Most of the increased usage seems to be international traffic, and so the international portion of UCT's bandwidth allocation is finally pegged against its 26 Mbps limit.



Which is interesting.

Because it means that until now, the international traffic has been crowded out by national traffic. Much of which will be to the Akamai cluster at IS, and the upcoming Google Global Caches (GGCs) and local servers.

So more international bandwidth isn't going to help unless the actual "last-mile" to UCT is drastically improved.
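To make the crowding-out argument concrete, a toy model; the 32/44 Mbps figures echo the follow-up post above, while the demand numbers and the "national traffic takes the link first" policy are purely illustrative assumptions, since the real shaping policy isn't visible from the graphs:

def intl_throughput(link_mbps, intl_cap_mbps, national_demand, intl_demand):
    # Toy model: national and international traffic share one access link,
    # international is additionally capped, and national takes the link first.
    national = min(national_demand, link_mbps)
    leftover = max(link_mbps - national, 0.0)
    return min(intl_demand, intl_cap_mbps, leftover)

# Before the bump: a ~32 Mbps link; national demand fills part of it and
# leaves international well short of its 26 Mbps cap.
print(intl_throughput(32, 26, national_demand=15, intl_demand=40))   # -> 17 Mbps
# After the bump to ~44 Mbps, the same demand lets international peg its cap.
print(intl_throughput(44, 26, national_demand=15, intl_demand=40))   # -> 26 Mbps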

Friday, October 10, 2008

AMS-IX breaks 500 Gbps!

[from the AMS-IX status page]

The number 1 internet exchange in the world by traffic volume - AMS-IX in Amsterdam - has reached 500 Gbps!



Notice the well-known flat summer trend. It's interesting to project what the max will reach by December: 550-600 Gbps?
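One way to sanity-check that guess, using the ~350 Gbps figure from the November 2007 post further down this page - a crude compound-growth sketch that ignores seasonality entirely:

# Two data points from this page: ~350 Gbps (Nov 2007) and ~500 Gbps (Oct 2008).
months = 11
monthly_growth = (500 / 350) ** (1 / months)     # roughly 3.3% per month

december = 500 * monthly_growth ** 2             # two more months of growth
print(f"Compound-growth projection for December: ~{december:.0f} Gbps")
# A shade over 530 Gbps on the average trend; with the flat summer behind
# them, faster-than-average growth toward 550-600 Gbps wouldn't be surprising.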

Tuesday, September 09, 2008

TENET peers with Google

TENET's UK frontend Ubuntunet [wiki] is now peering with Google.

This is only in the UK for now; presumably they'll peer locally once Google gets their act together. Let's hope Google isn't waiting for the Seacom cable to land too.

Thursday, April 10, 2008

Extra transit for TENET

Viewing the monitoring graphs for TENET's London node, a new transit circuit through DataHop [web?] is apparent:


Peak speeds are about 50 Mbps. Combined with the Telia peak of ~100 Mbps, this means TENET currently has ~150 Mbps peak transit.

Note the very small outbound. Mmmm, eyeballs.

Friday, November 02, 2007

AMS-IX tops 350 Gbps

AMS-IX just keeps carrying more and more traffic. Their latest figures top 350 Gbps:

Take note, Teraco / JINX!

Wednesday, September 12, 2007

Monderman's ideas implemented

Wired wrote about Hans Monderman's traffic engineering a while back in their 12.12 issue.

And now it turns out that Bohmte in western Germany is the second town after Drachten to implement his Shared Space design [Reuters].

