Sunday, November 29, 2009

Ramping up Research Traffic on SEACOM

While TENET has enjoyed high-speed interchange of traffic with GÉANT before, that traffic has mostly come from its nodes co-located in London (e.g. the local mirrors).

However, it looks like things are heating up - traffic of up to 700 Mbps between the SANReN ring in Gauteng...
and GÉANT in Europe...

Interestingly, none of the TENET graphs show which client networks this is going to, so perhaps this is just an internal test or mirror population?

Friday, October 02, 2009

Return of CINX

The ISPA has re-launched CINX (the Cape Town Internet Exchange). PR here.
Nothing on the old CINX and why it was discontinued, except for a rather oblique statement that there is now "more than enough Internet traffic to justify an exchange for the city".

No graphs for the exchange yet, but the TENET folks are on the ball and you can view their port graphs here. 10 Mbps in Cape Town doesn't seem like much compared to the 100 Mbps of JINX (which looks suspiciously flat, although there is a 175 Mbps spike in there).

Excellent news for local providers.

Friday, September 18, 2009

End of an era

TENET now runs all institutional traffic over SEACOM:


Bandwidth usage on the SAT-3 portions has dwindled to almost nothing; the SAT-3 connectivity should terminate sometime in the next month or so.


Interesting to see some BGP traffic on the Telia backup link - perhaps routing changes for some of the institutional ASes propagating through and settling down?

Thursday, August 06, 2009

Google preparing for full roll-out?

Google shuffles around some more DNS records et voilà:
  • mail.google.com now resolves to a Google-owned IP (one of the cpt01s01 series; a quick way to check this is sketched below)
  • mt.google.com now resolves to a Google-owned IP (as above, so below)
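A minimal way to verify this sort of DNS shuffle - a hedged sketch in Python; the records have long since changed, and the cpt hint only shows up if Google publishes matching PTR records:

import socket

# Resolve the hostnames from the post, then reverse-resolve the answers to see
# whether the PTR names hint at a Cape Town (cpt...) node. Purely illustrative.
for name in ("mail.google.com", "mt.google.com"):
    try:
        addrs = sorted({ai[4][0] for ai in socket.getaddrinfo(name, 80, proto=socket.IPPROTO_TCP)})
    except socket.gaierror as exc:
        print(f"{name}: lookup failed ({exc})")
        continue
    for addr in addrs:
        try:
            ptr = socket.gethostbyaddr(addr)[0]
        except (socket.herror, socket.gaierror):
            ptr = "(no PTR record)"
        print(f"{name:16} -> {addr:15} -> {ptr}")

Back then the giveaway was the cpt01s01-style naming; today the same check simply shows wherever Google's DNS load-balancing happens to point you.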
Was the hosting of the Google Maps tiles on the [presumed] GGC caching box just a fig-leaf, a ploy for respectability? Since only YouTube access still seems to be directed towards those caching boxen, perhaps that traffic was a quite-substantial chunk of the original Google traffic to TENET's network, the rest of which is now easily handled by the local peering arrangement.

It might allow easier filtering or shaping for the IT/ICTS departments so inclined, but why else keep the YouTube traffic on those boxes? Google's purported 'boxenertia'?


However, with GMail now locally hosted, all of the major services are being served from local machines. Perhaps this means Google is getting ready for a full-bore load test, in preparation for a proper roll-out?

Wednesday, July 29, 2009

Google tuning

Even though Google's new Cape Town nodes are in limited production / test mode, users have been connecting both to the local datacentre and to overseas datacentres [e.g. London?].

Presumably Google is still tuning the DNS based on usage while simultaneously expanding the datacentre capacity.

Interestingly, it is only mail.google.com that returns a non-local host. Apparently mail is not amenable to normal caching; otherwise it would have been handled through the [presumed] GGCs, or even cached transparently within the Cape Town datacentre.

Let's hope that Google get some cheap capacity on SEACOM even though it has higher latency than SAT-3 (~215 ms vs ~150 ms). Since most of the front-end logic (e.g. client-side JavaScript) would be served by local machines, interactivity should not be compromised too much.

Wednesday, July 22, 2009

Mmmm mirrors

Wow.
1.2+ Gbps traffic from the mirror.ac.za front-end in London.

Tuesday, July 07, 2009

Google local peering starting (slowly)

Google now serves [almost] all traffic to TENET sites directly from either:
  • peering connection, or
  • some sort of GGC-like nodes [Google Maps, YouTube videos]
Name resolution and tracerouting from UCT hosts confirms that all Google services are being served from local [South African] servers. This has huge implications for bandwidth savings and for the quality of user interaction, thanks to the lower latency for content that is already available in the local datacentre and caches.

Issues
Strangely, while www.youtube.com resolves to a locally-hosted server (64.233.179.100 currently), youtube.com (with no leading www.) resolves to a 208.65.153.xxx address, seemingly in Richmond, VA.

Clarification
The caching nodes [presumably GGC nodes] currently serve only certain services (map tiles, YouTube videos) and are generally co-located within a client network. The other Google services are served from machines that are located in Cape Town (based on ping times) and that sit administratively within a Google-owned IP block.

Further clarification
Assume this is a "small" trial run (limited number of users on TENET, maximum goodwill from helping the poor academics) before a full roll-out. The caching nodes already save TENET up to 45 Mbps of international transport! There is no local visibility of Google's ZA IP block from IS's route server.

Friday, April 17, 2009

TENET peering at LINX

More interesting traceroute data [target www.yahoo.com]:
8 v2750-tiger-brie.uni.net.za (155.232.145.226) 2.732 ms
9 unknown.uni.net.za (196.32.209.25) 152.780 ms
10 ge-1-1-0.pat1.the.yahoo.com (195.66.224.129) 154.808 ms
11 so-0-1-0.msr1.ird.yahoo.com (66.196.65.33) 163.932 ms
12 gi-1-1.bas-b1.ird.yahoo.com (87.248.101.1) 164.020 ms
13 f1.us.www.vip.ird.yahoo.com (87.248.113.14) [open] 165.543 ms
The TENET router in London (196.32.209.25) connects to the Yahoo! AS, and the 195.66.224.0/19 router address is a giveaway that it's through LINX.
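As a sanity check, the prefix membership is easy to test programmatically; a quick Python sketch using the hop addresses from the traceroute above:

import ipaddress

linx_block = ipaddress.ip_network("195.66.224.0/19")  # the LINX allocation noted above

for hop in ("196.32.209.25", "195.66.224.129"):
    verdict = "inside the LINX block" if ipaddress.ip_address(hop) in linx_block else "outside the LINX block"
    print(f"{hop:15}  {verdict}")

196.32.209.25 falls outside and 195.66.224.129 inside, i.e. the hand-off to Yahoo! happens on the LINX peering LAN.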

Testing other networks [target www.facebook.com] yields more data:
9 unknown.uni.net.za (196.32.209.25) 153.603 ms
10 linx.br01.lon1.tfbnw.net (195.66.225.69) 153.336 ms
11 xe-x-x-x.br01.ash1.tfbnw.net (204.15.22.245) 229.906 ms
12 te-13-0.csw06a.ash1.tfbnw.net (204.15.23.55) 233.942 ms
13 www.11.06.ash1.facebook.com (69.63.186.12) [open] 262.342 ms
where hop 10's hostname confirms the LINX connection, and the subsequent ash1 (Ashburn) hops show that we're still ending up on the Virginia servers.

The LINX looking glass also shows routes to TENET both directly and via AS1299 (Telia) and AS2914 (NTT).

Traffic graphs confirm significant traffic through LINX


Several other autonomous systems are reachable via LINX peering sessions, including the hosting providers Hurricane Electric and LeaseWeb, and the CDN Limelight Networks [erstwhile YouTube provider, and streamer of the 2009 US Presidential inauguration].

Clearly this is in preparation for the new SEACOM pipe, although in the meantime TENET is already saving on transit requirements.

Saturday, April 11, 2009

Google 'local peering' live?

From some traceroute data [target: www.blogger.com]:
9 unknown.uni.net.za (155.232.253.254) 2.341 ms 2.055 ms 2.237 ms
10 64.233.174.40 151.205 ms 151.276 ms 151.457 ms
Note the jump in RTT as we go from the TENET router to the Google router (64.233.160.0/19 belongs to Google).

Compare to traceroute data for a less aggressively-peering company [target: www.microsoft.com]:
8 v2750-tiger-brie.uni.net.za (155.232.145.226) 2.341 ms 2.254 ms 2.337 ms
9 unknown.uni.net.za (196.32.209.25) 153.050 ms 152.938 ms 152.223 ms
Where the next hop is a TENET-controlled router overseas.
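A rough way to automate this 'spot where the RTT jumps' comparison - a sketch in Python, assuming a Unix-style traceroute binary is on the path (output formats vary, and the 100 ms threshold is an arbitrary choice):

import re
import subprocess

def rtt_profile(host, max_hops=20):
    """Run the system traceroute (numeric mode) and return (hop, ip, best_rtt_ms) tuples."""
    out = subprocess.run(
        ["traceroute", "-n", "-m", str(max_hops), host],
        capture_output=True, text=True,
    ).stdout
    hops = []
    for line in out.splitlines():
        m = re.match(r"\s*(\d+)\s+([\d.]+)\s+(.*)", line)
        if not m:
            continue  # header line, or a hop that only answered with '* * *'
        rtts = [float(x) for x in re.findall(r"([\d.]+)\s*ms", m.group(3))]
        if rtts:
            hops.append((int(m.group(1)), m.group(2), min(rtts)))
    return hops

def first_big_jump(hops, jump_ms=100.0):
    """First hop where the RTT jumps sharply - a crude proxy for 'traffic has left the country'."""
    for (_, _, prev_rtt), (hop, ip, rtt) in zip(hops, hops[1:]):
        if rtt - prev_rtt > jump_ms:
            return hop, ip
    return None

for target in ("www.blogger.com", "www.microsoft.com"):
    print(target, "->", first_big_jump(rtt_profile(target)))

Run against the two targets above in 2009, this would have flagged hop 10 (the Google router) for www.blogger.com and hop 9 (TENET's own London router) for www.microsoft.com - exactly the distinction being made here.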

So perhaps Google have started 'peering locally' and are transparently tunnelling the data overseas, allowing IS [and/or other ISPs] to avoid paying for each user's international bandwidth. Of course Google are then paying for it themselves, but with the standard international bandwidth pricing feedback damping the usage, they probably aren't seeing a huge explosion in bandwidth demand.

Contrast this to the local YouTube traffic caching, where having locally-available video means that videos load faster: better user experience translates to higher usage, and the demand increases. For the user this is a virtuous circle, but to the various I[C]T[S] departments it must seem more like a vicious circle.

However, for organisations like TENET where the pipe is purchased old-school (i.e. 'per Mbps' rather than 'per GB'), this will be a boon, because up to 40 Mbps [peak] is now being handed-off locally. Interestingly the upstream/outbound traffic, presumably web crawling, is still going through the Ubuntunet point in London.

Some graphs:
The London peering session going [relatively] quiet late Tuesday.


A 'local peering' session [presumably with Google] in Cape Town going live late Tuesday:

And some of the local caching traffic seems strongly correlated (via Mk. 1 eyeball-based peak matching on other graphs) with the local peering session.

The takeaway: Google is probably peering locally, but is trying to avoid flooding ISPs with bandwidth-conversion economic problems.

Tuesday, March 17, 2009

iPhone + Geolocation + P2P

Given Apple's newly announced embeddable Google Maps for the iPhone, not to mention the new P2P support, perhaps it's not too much to ask for a cool new proximity-based alerting system?

Such a system could be used for geocaching or even friend-finding (the real kind of friend-finding where you already know the person...)

TENET has new transit providers

The TENET traffic graphs show NTT as a new provider:

And a 10G interface, to boot! This is likely in preparation for the arrival of SEACOM, as TENET's international bandwidth is the current bottleneck.

Traffic on Datahop has been halved, and traffic on Telia is at a quarter to a third of its normal rate. So they are either being phased out or being kept as backup providers.

Friday, March 13, 2009

UCT permanent upgrade to 43 Mbps

The bandwidth test was obviously successful as UCT has upgraded their connection to 43 Mbps. Graphs at the usual place [notice the jump around 12.30pm].

ICTS has issued a remarkably restrained notice. Did they not trumpet this to the whole campus because:
  • it's a drop in the bucket compared to the demand?
  • they don't want to stimulate further demand?
  • they don't want any recognition?
Whatever the reason, we still look forward to the upcoming 1 Gbps temporary link for TSN 63 (UCT).

Wednesday, March 11, 2009

Google resolving locally

When resolving www.google.co.za from within TENET IP space, one now receives A records indicating TENET IP addresses.

There are still some holdouts - kh.google.com / mwx.google.com and mail.google.com still resolve to normal Google IP addresses. Perhaps they'll transition soon.

Monday, March 09, 2009

Wits trounces UCT

I'm not writing about sports now, but rather about Wits's recent climb to the number 1 position on the unofficial bandwidth rankings:

Their monitoring graph shows peaks above 60 Mbps, and TENET's order status page shows an upgrade of their backbone link to 57920 kbps.

UCT's monitoring page shows the same old peaks, with an upcoming 43008 kbps link as a consolation prize. This is obviously what was under test over the last week or two.

And it's interesting to see a new 'five-thirds' formula mentioned. Taking UCT's international commit of 26 Mbps and multiplying by 5/3 yields about 43 Mbps - in other words, the international commit is 3/5 of the total, so fully 40% of traffic is expected to be local! TENET: how about showing some flow statistics for traffic to the IS Akamai clusters?
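The arithmetic, spelled out (figures from the order status pages above):

# 'five-thirds' formula: total access link = international commit * 5/3,
# i.e. international traffic is assumed to be 3/5 (60%) of the total, local 2/5 (40%).
international_commit_mbps = 26
total_mbps = international_commit_mbps * 5 / 3
print(f"~{total_mbps:.0f} Mbps total, {1 - 3/5:.0%} assumed local")  # ~43 Mbps total, 40% assumed local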

Thursday, March 05, 2009

gov.za proxy fail

The *.gov.za services that are proxied behind iproxy1.gov.za all seem to have fallen over, while the services that are proxied behind iproxy2.gov.za seem to be fine:

iproxy1
cemis.wcape.gov.za
www.westerncape.gov.za
www.services.gov.za
www.search.gov.za

iproxy2
www.gov.za
www.info.gov.za

Could be a proxy issue, or maybe Telkom playing games again, or even a local failure. Weiiiird.
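The quick-and-dirty availability check behind an observation like this - a sketch in Python, with the hostnames taken from the lists above and an arbitrary timeout:

import urllib.request

hosts = [
    "cemis.wcape.gov.za", "www.westerncape.gov.za", "www.services.gov.za",
    "www.search.gov.za", "www.gov.za", "www.info.gov.za",
]
for host in hosts:
    try:
        with urllib.request.urlopen(f"http://{host}/", timeout=10) as resp:
            print(f"{host:28} HTTP {resp.status}")
    except Exception as exc:  # timeouts, DNS failures, HTTP errors, ...
        print(f"{host:28} FAILED ({type(exc).__name__}: {exc})")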

Wednesday, March 04, 2009

Google to merge YouTube AS

Peeringdb.com always yields interesting nuggets [login:guest, password:guest] if you watch which companies are performing updates.

YouTube, although a Google company, still maintains a separate AS, but it looks like that is going to change [sourced from the above-linked peering record]:
Notes: Peering will be migrating to Google (AS15169) by EOY 2009.
Many have wondered how long YouTube would peer independently of Google, and many have also speculated on the technical complexity of merging their operations to allow peering via both ASes. I assume Google (AS15169) will start announcing AS36561's routes (if they haven't already), and YouTube will start pre-pending [padding] their own announcements to make the Google-originated routes preferable.
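To see why prepending has that effect, here is a toy sketch of the relevant tie-break only - real BGP best-path selection considers local-pref, origin, MED and more before it ever gets to AS-path length:

# Toy illustration: with all else equal, the shorter AS path wins, so an
# announcement prepended by AS36561 loses to the same prefix originated by AS15169.
def best_path(candidates):
    """candidates: (label, as_path) pairs for the same prefix; shortest AS path wins."""
    return min(candidates, key=lambda c: len(c[1]))

youtube_prepended = ("via YouTube", [36561, 36561, 36561])  # origin prepended twice
google_origin = ("via Google", [15169])
print(best_path([youtube_prepended, google_origin]))  # ('via Google', [15169])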

YouTube is peering in two exchanges that Google seems to be missing from:
  • Equinix Newark
  • PAIX Dallas
One could speculate on whether Google will keep these connections active for AS 15169, as they already privately peer at Equinix Dallas and PAIX New York (amusingly this is almost the opposite Company/Location matching).

Wednesday, February 25, 2009

UCT's extra bandwidth redux

[from a comment on the last post:]
Of course, SANREN is what's supposed to provide the drastically improved last-mile.
SANReN will eventually provide the core of the new network, and maybe the access leg too. Some Gauteng institutions have access via the fibre ring there and are happy. Many large institutions elsewhere don't have that yet (and are waiting, waiting, waiting) and are unhappy.

However, if the UCT link was increased from ~32 to ~44 Mbps (coincidentally this is like going from an E3 to a DS-3), what's to prevent it from being increased further? Why not a DS-4 (~274 Mbps) :)
BTW, the Google caches are already live. [We don't know for sure that these are GGCs, but see below.] Their effectiveness seems to be improving (presumably their caches are still filling), but they do still use quite a lot of incoming (international) bandwidth.
If these are Google caches (GGCs), I doubt they will have reduced international bandwidth usage much. The local usage would spike because access is so much quicker and easier.
Traceroute (ICMP, rather than UDP) to v1.cache.googlevideo.com
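For anyone wanting to reproduce the ICMP-flavoured traceroute, a rough sketch using scapy (needs root privileges; the hostname is the one from the post and may no longer resolve):

from scapy.all import IP, ICMP, sr1

target = "v1.cache.googlevideo.com"
for ttl in range(1, 21):
    reply = sr1(IP(dst=target, ttl=ttl) / ICMP(), timeout=2, verbose=0)
    if reply is None:
        print(f"{ttl:2d}  *")                      # no answer within the timeout
    elif reply.type == 0:                          # echo-reply: we reached the target
        print(f"{ttl:2d}  {reply.src}  (reached)")
        break
    else:                                          # usually time-exceeded from a router
        print(f"{ttl:2d}  {reply.src}")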
mt.google.com is also mapping to a TENET IP block, so Google Maps tiles should be served from somewhere local [TENET or GGC or who-knows-what].

kh.google.com [Google Earth] is still mapping to the Google IP addresses overseas.

mail.google.com and docs.google.com still map to the Google IP blocks overseas.

So either TENET has some sort of fancy caching and line-rate traffic redirection going on, or Google is doing something interesting.

Hopefully the local build-out will be finished soon.

Tuesday, February 24, 2009

UCT's extra bandwidth

UCT seems to have gotten some extra temporary bandwidth today. Looking at the traffic graphs, we see a spike at 14h00 (GMT+2).



Most of the increased usage seems to have been international traffic, and so the international portion of UCT's bandwidth allocation is finally pegged against its 26 Mbps limit.



Which is interesting.

Because it means that, until now, the international traffic has been crowded out by national traffic - much of which will be to the Akamai cluster at IS, and to the upcoming Google Global Caches (GGCs) and local servers.

So more international bandwidth isn't going to help unless the actual "last-mile" to UCT is drastically improved.

Wednesday, February 04, 2009

User-generated video in South Africa

There are two major upcoming events in South Africa that will, both directly and indirectly, drive uptake of services like YouTube. Those events are:
  1. The upcoming South African general election
  2. The 2010 FIFA World Cup
The general election
A natural candidate [excuse the pun] for election advertisements, documentaries on parties and candidates, and other related propaganda. This will be more effective if one of the mobile operators does streaming for their users, perhaps in collaboration with YouTube.

The soccer world cup
Unless FIFA does something crazy with the streaming rights, this will be a tough one to get right. The SABC have all of the broadcast rights, and will probably moan at anything unusual or technically creative. As above, partnership with one of the mobile operators will be most important. Especially considering how many people have mobiles vs computers. Of course, since many of those mobile phones are really old, perhaps some aatv-style filtering will be necessary...ahem.

Monday, February 02, 2009

Speculations on Google's peering timing

Take the following two events:
  • Google should start peering locally "soon". [Events seem to have overtaken the original plans for implementation by the end of January, so perhaps mid-February?]
  • SEACOM is coming online "in June".
These might seem unremarkable events individually, but viewed together they provoke interesting questions. At the root of all of them has to be some speculation about the timing of Google's technical entry into this market (ignoring their sales operation), prompted by this observation:
Local Google peering is very close to, but before, the SEACOM launch.

Some obvious simple questions, with possible answers:
  • Q: Are Google using SAT-3 or SEACOM?
    • A: Google is likely to be using SAT-3.
      • It's unlikely that SEACOM have any segments finished and ready for operations yet.
      • However Google probably don't need a full path from South Africa to London; the 'express path' to Kenya would help operations both in Kenya and in South Africa, so any costs involved in bringing content to Kenya or South Africa could be amortized over both operations.
      • Google would also like some redundancy in their paths, especially considering the Egypt-to-France portion will be the responsibility of Egypt Telecom.

  • Q: Does Google have a big pipe?
    • A: Probably not.
      • 10 Gbps IRUs are available on SEACOM, but SAT-3's total capacity is 120 Gbps. Even 1 Gbps is a large chunk of the available capacity.
      • Pricing on SAT-3 circuits is also very high, although it'll be under pressure because of the imminent SEACOM launch.
      • However, if Google have negotiated a long-term deal with Telkom, they may have secured better pricing.
      • Presuming they buy a circuit on SEACOM, one assumes it will be a 10 Gbps IRU.

  • Q: Why hasn't Google peered in South Africa before this?
    • A: It was too expensive for the possible benefit to Google.
      • ISPs must secure bandwidth to provide service to their customers. Those customers want connection to Google, so ISPs wanting to keep customers happy bought the bandwidth.
      • Google started local [service / sales] operations a while back (witness the recent Yellow Pages / Entelligence / Google spat) and are ramping up their presence in the country. Further usage of YouTube, Google Earth & Google Maps, GMail and other bandwidth-intensive services would help drive traffic (and hence advertising earnings), and usage of Google Docs would help steal market share from Microsoft in a developing market that Microsoft itself recognises is vulnerable (hence its pre-paid Office offering).
      • Google can afford to run the service at a high marginal cost for a while, until SEACOM comes online.
      • Also, it seems the development of the GGC has only recently come to fruition.

  • Q: Why hasn't Google waited until June (e.g. for better pricing)?
    • A: Competitive advantage.
      • Come June, every Tom, Dick, Harry, Yahoo! and Microsoft will presumably be buying large circuits to tie in local datacentres or extend peering.
      • Moving first allows better publicity (even via word of mouth as opposed to above-the-line advertising) and can be used by their sales team as a competitive advantage.
Anything else?

Friday, January 16, 2009

ASN fail by default

Ahhh TENET, you continue to be the source of interesting and alternately amusing/depressing news:
[from the REN-news mailing list]
As of the 1st of January, all the Region Internet Registries (RIR's),
that being, AfriNIC, APNIC, LACNIC, RIPE and ARIN have adopted a policy which allocates 32bit ASN's by default
...
[A]lmost no networking equipment out there
currently supports these.
...
This means that should you get a 32bit ASN instead of a 16bit ASN, it
will not be useable to peer with or announce to the TENET network or
most other networks at this point.
I suppose this is a bit like the USG jump-starting DNSSEC by requiring it be present for the .gov TLD. But only a little bit, because in this case the pressure on the vendors is going to be coming from annoyed customers.

And although there is a backwards-compatibility shim built in (old 16-bit-only speakers see any 32-bit ASN as the placeholder AS23456, per RFC 4893), it doesn't help much in practice, so Metcalfe's Law works against one here: everyone you need to peer with also needs updated equipment and/or software.
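For completeness, a small illustrative Python helper showing the asdot notation for 32-bit ASNs and what a 16-bit-only speaker ends up seeing (the AS_TRANS value is from RFC 4893; everything else is just a sketch):

AS_TRANS = 23456  # RFC 4893 placeholder that old BGP speakers see in place of any 4-octet ASN

def asdot(asn):
    """Render an ASN in asdot notation: plain for <= 65535, high16.low16 above that."""
    if not 0 <= asn <= 0xFFFFFFFF:
        raise ValueError("ASN out of range")
    return str(asn) if asn <= 0xFFFF else f"{asn >> 16}.{asn & 0xFFFF}"

def as_seen_by_old_speaker(asn):
    """What a 16-bit-only BGP implementation ends up with in its AS paths."""
    return asn if asn <= 0xFFFF else AS_TRANS

for asn in (15169, 36561, 65536, 393216):
    print(f"{asn:>7}  asdot={asdot(asn):>8}  old speakers see AS{as_seen_by_old_speaker(asn)}")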