With the landing of several undersea cables in Africa in the last three years, many a pundit has hailed a new dawn of telecommunications on the continent. The cables brought massive bandwidth capacity that enabled faster and cheaper communications. Before their arrival, satellite was used to connect Africa to the rest of the world. These satellite links had the following characteristics:
- Expensive: satellite transponder leasing commanded a premium because of the extremely high demand for capacity. This demand peaked circa 2005, when operators were even buying capacity on satellites that existed only on paper, not yet built and launched.
- Congested: due to the cost and scarcity of capacity, many back-haul pipes were oversubscribed, making satellite communications slow and irritating to use.
The arrival of cheap and abundant terrestrial capacity led many to declare that satellite was destined for the history books and that there would be no market for satellite broadband in the years to come.
Three years down the line, reality has hit home as the following facts dawned:
- The undersea cables resolved the back-haul issue, but they did not address the last-mile access problem. There is a lot of capacity at the landing stations that cannot be distributed to end users because there is no good last-mile infrastructure in place. Spectrum scarcity has made things worse.
- Even on the existing last-mile networks and those being built to meet this demand, reliability has been a key issue due to poorly designed networks and fiber cuts. Industry leaders now seem to agree with this, as seen here.
- No regulatory framework was set up to harness the advantages brought by the availability of bandwidth. Regulators failed to come up with new policies and laws on matters such as infrastructure sharing and spectrum re-farming and sharing.
The result is that the consumer has not benefited much, as ISPs and NSPs continue to offer mediocre services. There are reports of some ISP customers getting as low as 92% availability, which translates to about 29 days of outage in a year.
Will Satellite Make a Comeback?
Before we answer this question, we need to be aware of the key advantages that satellites provide. These are:
- Very high availability. No technology beats satellite when it comes to availability. Downtime is rare and far between, and the majority of well-designed satellite systems achieve the proverbial 99.999% ("five nines") availability, which is only about five minutes of outage in a year.
- Satellites offer instant availability of service over a large area without the need to lay additional infrastructure.
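Availability percentages translate directly into yearly downtime, and the arithmetic is worth spelling out (note that 99.999%, the "five nines", works out to only about five minutes of outage a year, while 92% works out to about 29 days). A quick sketch:

```python
# Yearly downtime implied by an availability percentage.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525,600 minutes

def downtime(availability_pct):
    """Minutes of outage per year for a given availability percentage."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

# "Five nines" satellite availability vs. a 92%-availability ISP:
print(round(downtime(99.999), 1))          # 5.3 minutes per year
print(round(downtime(92) / (24 * 60), 1))  # 29.2 days per year
```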
With the advent of Ka-band satellites, the landscape is about to change: these will bring with them large amounts of bandwidth and make it available instantly across large geographical regions. The key advantages of Ka-band satellites over the more traditional Ku-band and C-band satellites are:
- In terms of capacity, one Ka-band satellite is equal to about 100 Ku-band satellites, yet they cost about the same to manufacture and put into orbit. This means that capacity on Ka-band will be much cheaper, to the point of giving fiber capacity real competition.
- Ka-band utilizes spot-beam technology that enables the use of smaller antennas (as small as 60 cm) and cheaper modems. At the moment, a full Ka-band kit is price-competitive with terrestrial technology equipment. Intelsat is developing a spot-beam architecture utilizing all bands that will allow two satellites to cover all populated continents of the world. Read more on the Intelsat Epic™ project here.
With more capacity available to offer higher speeds at much lower cost, with equipment that is cheaper and competitive with terrestrial offerings, and with reliability that terrestrial services can only dream of, what will prevent satellite broadband from making a comeback?
If recent events are anything to go by, satellite broadband is already making a comeback in Africa. The recent launch and uptake of capacity on the Yahsat 1B and Hylas 2 satellites over Africa will avail high-speed capacity whose quality rivals that of the majority of terrestrial services, latency notwithstanding. The main reason why I think Ka-band will be a game changer in the African broadband market is that operators have realized that it is one thing to roll out terrestrial infrastructure and quite another to operate and maintain it. Operational costs of the newly laid terrestrial wired and wireless networks are becoming prohibitively high due to vandalism and sabotage, and the terrestrial networks present too many points of failure to offer a reliable service. Ka-band satellite will offer cheaper bandwidth that is more reliable and easy to access and install anywhere on the continent. It takes an average of three weeks to survey and install a fiber cable in a city like Nairobi; it takes about two hours to fully set up a Ka-band dish and connect to the Internet. Once the fiber hype dies, Ka-band broadband via satellite will be a hit in Africa.
Anyone dismissing my argument should look at the following links on the roll-out and expansion of Ka-band satellite in Europe, the Middle East and the USA, regions we consider to be far better "wired up" than Africa.
- Hughes' announcement of the launch of EchoStar 17 to offer 100 Gbps broadband services in North America, after the successful launch and sale of capacity on the Spaceway satellites: http://bit.ly/ShARi6
- ViaSat's Ka-band 100 Gbps broadband service offering in North America: http://bit.ly/ShBAjj
- Avanti Communications' announcement of the launch of Hylas 2 to offer services in the Middle East, Africa, Europe and the Caucasus: http://bit.ly/ShBYhL
The landing of various undersea cables on African shores in the last three or so years heralded a new dawn of high-speed communications, offering clearer international calls and faster broadband speeds. These cables have turned bandwidth, once a scarce commodity, into a near oversupply. Indeed, as I write this, a huge chunk of the undersea capacity is unlit.
With the arrival of these cables, telcos and ISPs that once depended on expensive FSS (Fixed Satellite Service) capacity moved their traffic to the more affordable undersea cables. The cost of bandwidth came down, but not to the level envisioned by consumers: most prices fell by about 40%, not the expected 90%. I had warned of the possibility of prices not falling by 90%, as was the expectation and hype, in a previous blog post in 2009.
Satellite At Inflection Point
Incidentally, while the cables were arriving, some major developments were happening in the satellite world. However, amid the hype and excitement over the undersea cables, the majority of us didn't notice or care about these changes that were set to revolutionize satellite communications. They have created a lot of excitement in the telecommunications world but are largely ignored here, as we believe the undersea cables are the future.
The Situation In Kenya (and by extension Africa)
With several cables landing on the Kenyan coast, it would be expected that quality of service from ISPs would be at its best. However, this is not the case: quality of service has greatly deteriorated over time due to a poorly maintained last-mile network. We have bandwidth at the shoreline but are unable to fully utilize its potential. The majority of operators in Kenya embarked on ambitious plans to lay last-mile fiber around the country, and the same is happening in other African countries, albeit at a slower pace. These are good steps; however, these telcos have oversimplified the issue of last-mile access to that of laying cable on poles or burying it underground. That is just 5% of the entire job of last-mile provision; the other 95% lies in maintaining the network, which sadly none of the telcos were prepared for. They thought that after laying the cables, money would start flowing in. Last-mile cable cuts due to civil works are currently among the biggest causes of downtime in Kenya: hardly a day ends without incidents of cable cuts as roads are expanded and new buildings come up, or natural calamities such as trees falling on overhead cables and the flooding of cable manholes.
Collectively, as Africa, we seem to underestimate the size of this continent; operators do not know what it will take to wire Africa to the same levels as Europe or the US. The map below shows the task ahead of us as far as wiring Africa is concerned, and it is not going to be an easy job: Africa is the size of the US, China, India, Europe and Japan put together. Click on it for a larger image.
This size poses a challenge to laying last-mile networks in Africa, and the lack of reliable electric power supply also limits how far these networks can grow from the major cities. Click on this map here to see how far behind Africa is as far as power supply distribution is concerned; the composite was taken by NASA at night as each part of the Earth moved into darkness.
As seen above, it will take quite a large amount of investment to bring this continent to the levels of other continents as far as connectivity is concerned. Even when this is done, it will be an expensive affair that will keep connectivity costly, as investors will need a return on their investment.
However, all is not lost, as the once-derided satellite service is now making a comeback and will soon give terrestrial services a run for their money. Already, the US and Europe are undergoing a major shift towards using satellite to provide broadband service. Currently, there are more investors putting their money into satellite launches than into laying undersea cables.
Below are some of the developments in satellite that will herald this comeback but have sadly slipped past most of us.
Ka-Band Spot Beam Technology
Unlike the Ku-band and C-band satellites that were in use before the arrival of cables in Africa, Ka-band satellites use spot-beam technology, which allows frequency re-use and the provision of hotter beams. What this means is that satellite capacity can be greatly expanded through frequency re-use, and CPE equipment is now cheaper thanks to a hotter beam/signal. A single Ka-band satellite is now equivalent to about 100 Ku-band satellites for the same cost. The two main reasons why satellite was ditched for fiber were the cost of equipment and the cost of bandwidth. Due to these two developments, satellite operators will soon be able to offer prices as low as 300 USD per Mbps, down from about 6,000 USD per Mbps.
The reason why Ka-band was not commercially viable for some time is that the technology had not matured enough for viable commercialization. Ka-band, which operates in the 17–30 GHz range, is susceptible to weather interference, but techniques now exist to counter this, greatly improving reliability. The high operating frequency also meant more expensive receiving equipment (modems), but advances in technology have allowed the manufacture of affordable 200 USD modems today.
More Efficient Modulation Schemes
In communications, the amount of data that can be sent over a transmission channel depends on the noise on that channel and the modulation scheme used. There have been great advances in modulation techniques and noise suppression, allowing more data to be pushed over smaller channels. This includes the use of turbo coding, which is so far mankind's best shot at reaching the Shannon limit. One recent and notable development was by Newtec, which managed to push 310 Mbps over a 36 MHz transponder. That translates to 8.6 Mbps/MHz; previously, the most you could do was about 2.4 Mbps/MHz. Read the Newtec story here.
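To put the Newtec figure in perspective, the Shannon–Hartley theorem bounds what any modulation and coding scheme can achieve: C = B·log₂(1 + SNR). A small sketch; the SNR figure below is computed from the theorem, not taken from the Newtec announcement:

```python
import math

def shannon_capacity_mbps(bandwidth_mhz, snr_db):
    """Shannon-Hartley bound: C = B * log2(1 + SNR)."""
    snr_linear = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr_linear)

def min_snr_db(spectral_efficiency):
    """Minimum SNR (dB) needed to reach a given spectral efficiency (bps/Hz)."""
    return 10 * math.log10(2 ** spectral_efficiency - 1)

# 310 Mbps over a 36 MHz transponder = 8.6 bps/Hz of spectral efficiency,
# which needs roughly 26 dB of SNR even with perfect coding.
print(round(min_snr_db(310 / 36), 1))  # 25.9 dB
```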
Combine this with Ka-band spot-beam frequency re-use and you will have satellite capacity that is cheaper than fiber bandwidth; add the reach that a satellite footprint provides, and you will have instant broadband available across the entire continent.
The O3B Project
At around the same time the cables were landing, a Google-backed broadband project was being announced. The project, dubbed O3B (the Other 3 Billion), takes its name from the unconnected 3 billion people in the world. Google believes that this is the most viable way to avail broadband to the masses; otherwise, how do you explain the fact that Google has never invested in fiber capacity to Africa or the developing countries? I wrote about the O3B project in a previous blog post that you can read here.
O3B will utilize satellites that are closer to the Earth, hence the term MEO, which stands for Medium Earth Orbit. The fact that these satellites are closer means that latency on the links will be much lower (about 200 ms) than on traditional geostationary satellite capacity (about 600 ms). This will enable higher throughput at lower latencies. To read more on the relationship between latency and throughput, read this tutorial here.
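The latency advantage matters because a single TCP connection's throughput is bounded by window size divided by round-trip time. A sketch, assuming a common 64 KB receive window with no window scaling:

```python
def max_tcp_throughput_mbps(window_bytes, rtt_ms):
    """Classic single-flow TCP bound: throughput <= window / round-trip time."""
    return window_bytes * 8 / (rtt_ms / 1000) / 1e6

WINDOW = 64 * 1024  # bytes; a common default when TCP window scaling is off

print(round(max_tcp_throughput_mbps(WINDOW, 600), 2))  # 0.87 Mbps over GEO
print(round(max_tcp_throughput_mbps(WINDOW, 200), 2))  # 2.62 Mbps over MEO
```

Cutting the round trip from 600 ms to 200 ms triples the ceiling on every single flow, which is why "lower latency" translates directly into "higher throughput".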
Satellite Refueling
The majority of the satellites in orbit have a lifespan of about 15 years. This lifespan is determined by the amount of fuel a satellite can carry: the fuel lasts about 15 years, and once it is depleted, the satellite can no longer be maneuvered and is therefore not usable. As I write this, Intelsat, the world's largest commercial satellite fleet operator, has signed up for satellite refueling services from MDA Corp to extend the life of some satellites by about 5 years. What this means is that operators can get more money out of their satellites due to the extended life, and they can therefore offer cheaper bandwidth.
Combine the advantages of Ka-band spot beams, efficient modulation, MEO satellites and the ability to refuel satellites, and you have the solution to the myriad problems afflicting consumers in Africa today as far as reliable, high-speed broadband connectivity is concerned.
By the end of 2014, satellites will offer cheaper and more reliable bandwidth than undersea fiber optic cables. This is the reason why investors are flocking to launch satellites rather than lay cables. These include Yahsat, an Abu Dhabi company; Avanti Communications, launching in Europe and Africa; and Hughes, which is launching Ka-band satellites over the US mainland for broadband connectivity. If undersea cables were as good as local operators touted, why are investors putting money into launches over the US and Europe, which are more wired than Africa? Locally, the Nigerian government has launched its own communications satellite that it says will extend broadband reach in the country faster than terrestrial technologies.
Watch this space…
The announcement by Safaricom that it is doing away with its unlimited Internet bundle did not come as a surprise to me. I had discussed the historical reasons behind the billing model used by ISPs and mobile operators in a previous blog post here in Feb 2011.
The billing model used in unlimited Internet offerings is flawed because the unit of billing is not a valid, quantifiable measure of consumption of the service. An ISP or mobile operator charging a customer a flat fee for the size of an Internet pipe (measured in Kbps) is equivalent to a water utility company charging you based on the radius of the pipe coming into your house rather than the quantity of water you consume (download) or sewerage released (upload).
What would happen if the local water company billed users a flat fee based on the radius of the pipe going into their homes rather than the volume of water consumed? Flow capacity grows with the pipe's cross-sectional area, which scales with the square of the radius: a user whose pipe is 1% wider than the neighbor's can draw about 2% more water, and one 2% wider about 4% more, yet their bills differ by only 1% and 2% respectively. Consumption outruns billing, so the heaviest users are subsidized by everyone else. The result is that a small group of about 1% of users ends up consuming about 70% of all the water. (A rough way to see the 70% figure: if utility grows logarithmically with capacity, a doubling of resource yields ln 2 ≈ 0.693, suggesting that roughly 69% of the utility is enjoyed by the small fraction of heaviest consumers.) This matches the figure issued by Bob Collymore, the CEO of Safaricom, who said that 1% of unlimited users consume about 70% of the resources. For that group, costs could outstrip revenues by as much as 70:1, which does not make any business sense; not even a hypothetical NGO giving 'free' Internet through donor funding could carry such a cost-to-revenue ratio. Why ISPs and mobile operators thought billing by the size of the pipe to the Internet could make money is beyond me.
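The mismatch between the bill and the consumption can be sketched numerically. This assumes flow capacity scales with the pipe's cross-sectional area (radius squared), while the bill scales linearly with radius:

```python
# A per-radius bill grows linearly, but flow capacity grows with
# cross-sectional area, i.e. with the square of the radius.
def capacity_gain_pct(radius_increase_pct):
    r = 1 + radius_increase_pct / 100
    return (r ** 2 - 1) * 100  # percent extra flow capacity

print(round(capacity_gain_pct(1), 1))   # 2.0   -> 1% bigger bill, ~2% more water
print(round(capacity_gain_pct(50), 1))  # 125.0 -> 50% bigger bill, 125% more water
```

The gap between the two curves widens as the pipe gets bigger, which is exactly where the heavy users sit.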
Bandwidth Consumption Is Not Linear
One mistake that network engineers make is to assume that a 512 Kbps user will consume double what a 256 Kbps user does, and therefore to advise the billing team that charging the 512 Kbps user twice the price of the 256 Kbps user will cover all costs. This is not true. There are activities a 256 Kbps user cannot comfortably do online, such as watching YouTube videos, while a 512 Kbps user can watch them without a problem. The result is that the 512 Kbps user watches many more YouTube videos as the 256 Kbps user, frustrated by all the buffering, stops attempting to watch online videos altogether. Consequently, the consumption of the 512 Kbps user ends up much higher than double that of the 256 Kbps user. Beyond YouTube, websites can detect your link speed and present differentiated rich content based on it; I'm sure some of us have been offered the 'basic' version of Gmail when it detects a slow link. The big-pipe user never gets asked whether he wants lighter web pages: rich content is downloaded to his browser by default, while the small-pipe user gets less content even though they are both connected to the same website. The difference in content downloaded over the 512 Kbps and 256 Kbps links is therefore far from linear: it grows much faster than the ratio of the two speeds.
Nature of Contention: It's a Transport Problem, Not a Network Problem
The second mistake that network engineers make is to assume that if you put a group of customers in a very fat IP pipe and let them fight it out for speeds under an IP-based QoS mechanism, each customer will, over time, get a fair chance of getting some bandwidth out of the pool. The problem is that nearly all network QoS equipment characterizes a TCP flow as a host-to-host (H2H) connection and not a port-to-port (P2P, not to be confused with peer-to-peer) connection. There could be two users with one H2H connection each, yet one of them might possess about 3,000 P2P flows. The problem here is that bandwidth is consumed by the P2P flows, not the H2H connections, so the user with the 3,000 P2P flows ends up taking most of the bandwidth. This explains why peer-to-peer traffic (which establishes thousands of P2P flows) is a real bandwidth hog.
So what happens when an ISP dumps the angelic you into a pipe with other, malevolent users running peer-to-peer traffic such as BitTorrent? They will hog all the bandwidth, and the equipment and policies in place will not be able to ensure fair allocation of bandwidth to all users, including you. A few users running BitTorrent end up enjoying massive amounts of bandwidth while the rest, doing normal browsing, suffer. That explains why some users on the Safaricom network could download over 35 GB of data per week, as per comments by Bob Collymore. Please read more on how TCP H2H and P2P flows work here. Many ISPs engage engineers proficient in layer-3 operations (CCNPs, CCIPs, CCIEs, etc.) to provide expertise on what is a layer-4 issue of TCP H2H and P2P flows; you cannot control TCP flows using layer-3 techniques. IP network engineers are being assigned the duties of transport engineers.
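The effect of per-flow sharing can be sketched with a toy model. The link size and flow counts below are illustrative assumptions, not Safaricom's actual numbers:

```python
# TCP converges to a roughly equal share per *flow*, not per user, so a
# user who opens thousands of flows crowds out single-flow users.
def user_share_mbps(link_mbps, user_flows, total_flows):
    return link_mbps * user_flows / total_flows

LINK = 100         # Mbps in the shared pipe (illustrative)
TOTAL = 3000 + 10  # one torrent user with 3,000 flows + 10 single-flow browsers

print(round(user_share_mbps(LINK, 3000, TOTAL), 1))  # 99.7 Mbps for the torrent user
print(round(user_share_mbps(LINK, 1, TOTAL), 2))     # 0.03 Mbps for each browser
```

No amount of per-host layer-3 policy fixes this, because the unfairness lives at the flow level.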
At the end of the day, there will be a very small fraction of 'happy' customers and a large group of dissatisfied and angry ones. The flat-rate revenues from the few happy customers cannot cover all costs as the unhappy customers churn. If, on the other hand, these bandwidth hogs paid by the GB, the story would be very different. This is what operators are realizing now and moving with speed to implement. Safaricom is not the only one affected: Verizon, AT&T and T-Mobile in the US are all at different stages of doing away with unlimited service due to its unprofitable nature.
The fast growth of the Internet routing table in the Default-Free Zone (DFZ) is becoming a major concern to IP carriers all over the world. The number of active BGP entries on border routers is increasing at a quadratic if not exponential rate (see figure below), putting the future unhindered scalability of the Internet in doubt. In spite of the use of the Border Gateway Protocol (BGP), inter-domain routing no longer scales well, as the volume of routing information keeps growing by the day, and it is not clear whether the current routing technology will keep pace with this growth cost-effectively. Today it costs more for border routers to exchange routing information than it did a few years ago, due to the investment in more powerful routers needed to keep up with this growth.
The depletion of the IPv4 address space and the inevitable adoption of the IPv6 addressing scheme mean that routers will have to exchange much larger routing tables, because the vast IPv6 address space requires even more prefixes to be announced in the Default-Free Zone. This problem will be compounded by the desire of network operators to announce more specific (and hence longer-prefix) routes to their critical infrastructure, such as DNS and Content Delivery Networks (CDNs), within the now wider IPv6 prefixes. This tendency to announce very specific routes with longer prefixes stems from the desire to prevent prefix hijacking by malicious Autonomous Systems (ASes), as happened in 2008 when an AS owned by Pakistan Telecom announced the YouTube IP space with a longer prefix, redirecting YouTube traffic to Pakistan because it was the more specific route. With cybercrime rates increasing worldwide, network engineers want to ensure high availability of their networks on the Internet and end up announcing very long prefixes, which has the effect of making the Internet routing table unnecessarily large. This is why I still think the old rule of eBGP routers filtering any route longer than a /22 should still be in force. A peek at some Internet routing tables will show the existence of even /25s.
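The /22 filter I am advocating is trivial to express. A sketch using Python's standard ipaddress module; the prefixes shown are made-up examples:

```python
import ipaddress

MAX_PREFIX_LEN = 22  # reject anything more specific than a /22

def accept_route(prefix):
    """Return True if the announced prefix passes the length filter."""
    return ipaddress.ip_network(prefix).prefixlen <= MAX_PREFIX_LEN

print(accept_route("196.200.0.0/16"))  # True
print(accept_route("41.79.96.0/25"))   # False -- too specific for the DFZ
```

In practice this lives in a router's inbound route-map rather than in application code, but the logic is the same.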
The growing size of the Internet and its inevitable changes and failures lead to a high rate of routing table updates that stresses router CPUs, and there have been several proposals to modify BGP to make it scale. The current routing tables are linear, and it is high time logarithmic-scale routing was introduced that can summarize routes in a logarithmic fashion. By this I mean that summarization of prefixes should be much more intense at the longer end and less intense as the prefixes become shorter.
The above can be achieved in three ways, namely:
Aggregation proxies: In this approach, ISPs announce or redistribute routes to their networks via a non-BGP protocol to an aggregation proxy. The proxy receives many long prefixes and aggregates them into shorter ones for eventual announcement via BGP. The regional allocation of IPs through organizations such as LACNIC, RIPE, AfriNIC and the rest makes aggregation proxies a very viable path, because the regional allocation of IP space is not random (e.g. any IP starting with 196. or 41. is from an African ISP). AfriNIC could therefore host aggregation proxies that speak to African border routers via a non-BGP protocol, and the proxy could then announce a single entry for, say, the 196 range to the Internet. The other regional aggregation servers in the Americas, Europe and Asia could then have filters to reject any other announcement of the African IPs, because that would be IP hijacking. The downside to aggregation proxies is that paths will be longer, as the proxy introduces an extra hop; the trade-off between a massive reduction of the routing table size and path elongation has to be weighed to see if this is a viable alternative.
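The aggregation step itself is straightforward. A sketch with Python's ipaddress module, using made-up prefixes from the 196/8 African range:

```python
import ipaddress

# Four adjacent /24s, as an aggregation proxy might receive from ISPs...
long_prefixes = [ipaddress.ip_network(p) for p in (
    "196.200.0.0/24", "196.200.1.0/24", "196.200.2.0/24", "196.200.3.0/24")]

# ...collapsed into a single shorter announcement for the DFZ.
aggregated = list(ipaddress.collapse_addresses(long_prefixes))
print(aggregated)  # [IPv4Network('196.200.0.0/22')]
```

Four routing table entries become one; done across a whole regional registry's space, the savings compound.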
DNS-like lookup system: This system would apply to non-routable prefixes. In this concept, all the long prefixes are retained and recorded in a DNS-like lookup system in which a particular IP space is mapped to a specific border router. Anyone wishing to communicate with this IP space does a lookup to get a next-hop IP address and sends the traffic to it. As a result, the long prefixes are not routable on the Internet, but the lookup system knows a router from which the traffic can be forwarded without the use of inter-domain routing information. In simple terms, this would be like a DNS not for domain names but for long-prefix IP spaces. This proposal would eliminate the need to have long prefixes in the Internet routing table, and a bar could be set to filter anything longer than, say, a /19 from being announced in the now cleaner DFZ. This would have the advantage of returning control of what appears in the DFZ routing table to regional organizations such as AfriNIC, as opposed to AS managers, who can sometimes be selfish.
Locator-Identifier split (Loc/ID): Whereas the two methods above overlay the existing BGP and enhance it, this approach replaces inter-domain routing as we know it. The Locator-Identifier split proposes scrapping IP addressing as we know it and coming up with a two-level routing architecture to replace the current hierarchical inter-domain system. The argument behind Loc/ID is that IP-based routing is not scalable because the IP address assigned to a device serves the dual role of being both its locator and its unique identifier. By splitting it into a locator section and an ID section, and then summarizing the locators in the DFZ, considerable reductions in the routing table can be achieved, because routing in the DFZ will be based on locators alone rather than on both locators and identifiers. Cisco recently developed the Locator/ID Separation Protocol (LISP), which it hopes will replace BGP in future, as BGP will no longer be able to scale for a bigger IPv6 Internet. Read more about LISP by clicking here. Cisco is promoting LISP as an open standard rather than a proprietary one, in the hope that the Internet Engineering Task Force (IETF) will adopt it.
In summary, network operators need to be aware of the non-scalability of BGP and start preparing their networks for the adoption of one of the three proposals above. I would, however, bet that the Loc/ID way of doing things will prevail and LISP will replace BGP as the inter-domain routing protocol of choice on the Internet.
O3B Networks is a next-generation company founded by Greg Wyler in 2007. O3B is an acronym for the "Other 3 Billion", denoting the 3 billion people in the world who still lack reliable means of communication.
O3B plans to launch a Medium Earth Orbit (MEO) satellite constellation to offer low-latency, fiber-quality broadband connections to regions of the world without much terrestrial infrastructure, such as Africa, South Asia and the Pacific. O3B has strong financial backing from big guns such as SES World Skies, Google, HSBC, Liberty Global, Allen and Company, Northbridge Venture Partners, the Development Bank of Southern Africa, Sofina and Satya Capital.
The plan is to launch a constellation of 8 satellites into medium orbit by 2013 and offer connectivity to mobile operators and Internet service providers. O3B will be a wholesaler, selling bulk capacity only to providers of video, data and voice services and not directly to end users.
O3B believes that the MEO satellites will offer lower latency (and therefore higher throughput) to most underserved markets where fiber capacity has not reached and will not reach for a long time. To see how lower latency relates to higher throughput, see this tutorial here.
The O3B idea is a great one and deserves all the support it can get so as to make it a success.
A while back, several investors financially backed the Iridium project, which was to offer mobile voice communication via 66 Low Earth Orbit (LEO) satellites from anywhere on Earth. However, the project was a failure almost before it started, bogged down by delays, design problems and poor marketing. Iridium estimated that it would easily sign up the roughly 600,000 customers needed to break even and that the market for its service would be massive. In the end it attracted only 22,000 customers, falling far short of break-even. With calls costing about $7 per minute and handsets costing about $3,000, the project was doomed from day one.
The Iridium project failed not because the concept did not work; it did. The failure was due to the firm's managers underestimating the impact GSM technology would have on the voice market. It is GSM that killed Iridium, by offering very cheap calls from small, cheap handsets that could be used even inside a building or car (as opposed to Iridium sets, which could only be used outside in open space).
Many a pundit has expressed fears that the O3B project will join the likes of Iridium in the not-so-enviable club of spectacular project failures. I would, however, like to differ with the critics who have predicted O3B's failure based on previous projects such as Iridium and Teledesic, because O3B is being implemented at the right time in history. Broadband Internet and voice are now mass-market products, unlike when Iridium was being implemented. When Teledesic and Iridium were being launched, broadband and voice were niche markets and very expensive for the average person; mobile communication was classified as a luxury, and data transmission was done only by big corporations such as banks and oil companies. Today things are very different, and I think this is what makes O3B different.
The last mile challenge
One of the biggest problems in African telecoms is the availability of a reliable and extensive last mile. Africa has never been attractive to investors wishing to invest in last-mile infrastructure because of two factors:
- The African rural population density is very low, meaning that traditional last-mile access technologies such as WiMAX will not return the investment and are therefore not commercially viable.
- The African continent is massive. The African land mass is equal to the USA, India, China, Europe and many other countries put together. The image below says it all; click it for a bigger and clearer version.
As a proponent of the enhancement of African rural communications, I believe the O3B project will help bridge the existing gap between the rural and city populations of Africa by overcoming the last-mile commercial viability challenge and leveraging the satellite footprint to offer cost-effective coverage to nearly every spot on the continent. This means that everyone in Africa will have instant, lower-latency access to the Internet. According to O3B, it will avail fiber-quality satellite connections offering a lower latency of about 190 ms (thanks to MEO satellites) and high-capacity links at a low price of about $500 per Mbps. This is very competitive by any standards in developing countries. This capacity can then be distributed via methods such as 3G, LTE, WiMAX, Wi-Fi, etc.
Apart from providing the rural population with broadband Internet, the O3B satellites will also provide cellular operators with much-needed GSM trunking services at a lower cost, enabling faster deployment of mobile networks in rural Africa. The cost of network expansion to rural Africa will therefore drop drastically, which will speed up connectivity and spur development. To see how connectivity aids development in rural Africa, download the Commonwealth rural connectivity report here.
The Google dimension
Many people were surprised when Google decided to back the O3B project. They failed to see the interest Google (which to many is just a search engine) had in the provisioning of affordable connectivity in developing countries. I believe the backing from Google was the game changer. On its blog, Google says its mission is to “organize the world’s information and make it universally accessible and useful.” Google has succeeded in organizing the world’s information, and O3B will help it make that information universally accessible. Google believes that by funding such projects, it will extend its reach to the whole world and grow the market for Google phones, operating systems such as Android and Chrome OS, cloud services, advertising and its search engine. Google’s backing of open source software development will also mean that cost barriers to ICT adoption will be eliminated for the other three billion people in the world.
I therefore believe that the Google-backed O3B project has come at the right time in history and will play a much bigger role in bridging the digital divide in Africa than fiber optics, whose coverage extends to less than 20% of the continent. Google’s participation in this project will also reduce the cost barrier to the adoption of ICTs in developing regions such as Africa.
In early 2009, the African continent was heavy with expectation as several fiber optic submarine cables landed on its shores. It was hoped that this would avail copious amounts of bandwidth to the continent and reduce its dependency on the existing satellite-based connectivity to the Internet.
This development was also accompanied by sweeping changes in the telecoms sector in Africa such as:
- Market liberalization and the end of dependency on the incumbent operator for international connectivity.
- ISPs and NSPs offering ‘smart’ pipes instead of the more traditional ‘dumb’ pipes hence moving up the IT value chain.
- The rapid increase in the value of broadband connectivity to businesses and individuals worldwide. Corporates wanted bigger and faster pipes to do business and cut costs, while individuals wanted to access the media-rich content on the Internet such as streaming video and social networks.
- Deeper penetration of mobile wireless networks into Africa, carrying with them mobile data services.
The above developments could not be sustained over the existing satellite bandwidth, which was limited in quantity and speed; hence the need to deploy submarine cables.
Industry analysts predicted a massive drop of up to 90% in connectivity costs as consumers migrated to the undersea cables. The Kenyan and South African blogosphere was awash with predictions and expectations of cheaper broadband in these two countries. However, this was not to be, as ISPs were slow to adjust prices downwards and gave various reasons why they could not do so. Some of the reasons were:
- The cost of providing broadband connectivity is made up of many other costs, not just the cost of international backbone connectivity.
- The submarine cable utilization was low and the economies of scale could not come into play to lead to a reduction in pricing.
- They needed to first recoup their investment in the new submarine connectivity systems before the end user could enjoy lower pricing.
- And so on.
What they did instead was to offer more bandwidth at the old price. This ensured that they sustained positive cash flows.
This did not go down well with most consumers who felt cheated by the ISPs. The fact of the matter is that prices did indeed drop by a considerable margin even though this drop wasn’t what was promised or envisaged.
I am however of the opinion that, as consumers, we ignore the fact that the ISPs made promises to us based on US and EU pricing models which simply do not apply in Africa. Whereas a user in the US pays an average of $3.33 per Mbps and a user in Japan pays $0.27 per Mbps, his counterpart in Nigeria will pay $2,400 and in Kenya $700 for the same capacity. The question that arises is: why this big difference in pricing?
The answer lies in historical factors of infrastructure development in Africa and the issue of local content.
History of Infrastructure
The years between 1995 and 2001 witnessed intense investment in ICTs in the United States and Europe, characterized by many start-ups and massive capital investment in Internet infrastructure based on speculation of an impending IT explosion. These companies envisioned a huge market for high-speed broadband Internet.
In their investment quest, many of these businesses dismissed standard and proven business models, focusing on increasing market share at the expense of the bottom line and rushing madly to acquire other companies, leading many of them to fail spectacularly.
This period before the burst saw the laying of hundreds of thousands of kilometers of fiber optic cable, both on land and under sea, as companies invested on pure speculation rather than strategic market research. When the envisaged market failed to materialize, these companies could not recoup their investments or turn a profit and went bust, culminating in what is known today as the dot-com bubble burst. Casualties included WorldCom, Tyco, Global Crossing, Adelphia Communications and many more.
When these companies went bankrupt, their massive investments in national and international fiber optic networks lay underutilized and were bought at throw-away prices by new investors such as Comcast and Sprint. So low were the prices that some cable was bought for 60 US cents per Gbps per kilometer in 2002, compared to the 37 US dollars per Gbps per kilometer the Seacom cable cost to build in 2009.
Because of the heavy investment in the cables connecting Africa, the operators have no option but to offer consumers prices that ensure profitability for the investors; any attempt to emulate their American counterparts’ pricing would mean failing to break even, let alone make a profit.
Local content
I believe one of the key factors differentiating African Internet from the US or EU version is the aspect of local content. By local content I do not mean content such as regional news or websites in local languages; I mean content hosted locally, within the African continent’s local loop networks.
Because nearly all Internet content is hosted outside Africa, we are fully dependent on international backhaul to access it. A user in Atlanta, Georgia does not need to cross the US shores to get cnn.com because CNN hosts its content in Atlanta (and many other US cities), so to him it is a local connection. A user in the UK will traverse a few local loops within the UK to access sky.com and will therefore not need international capacity. The same US and UK users will do only a few hops to reach a Verio.net hosting server (where nation.co.ke is hosted). The concept of what the ‘Internet’ is in the US or UK is therefore totally different from what it is in Nairobi, because a Nairobi user still has to leave the continent to access nation.co.ke, hosted on Verio.net servers. It is therefore more expensive for the African user to access the Internet, because he always has to traverse international links that are private commercial ventures.
However, if we work on developing local content by hosting it within the African continent, we can drastically cut the dependence on international capacity. If, for example, the nation.co.ke website were hosted somewhere in Nairobi, we would not need international fiber capacity to access it. Now take this and apply it to the majority of the websites we visit (Facebook, CNN, Soccernet, etc.). If we had good hosting and cache services locally, only cache updates would need to utilize the international capacity, as all African traffic would remain within Africa. A good example is a user in the UK accessing cnn.com (a US website): they do not need to leave the UK because cnn.com is cached in London. The same is not true for the African user.
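The savings from local caching can be sketched with a toy model. The site names and request mix below are purely illustrative, not real traffic data; the point is that only a cache miss consumes expensive international capacity, while repeat visits are served locally:

```python
# Toy model of an in-continent web cache.  A miss crosses the
# undersea cable once; every subsequent hit stays within Africa.
# Site names and the request sequence are illustrative only.

class LocalCache:
    def __init__(self):
        self.store = {}                 # locally cached copies
        self.international_fetches = 0  # trips over the undersea cable

    def get(self, url: str) -> str:
        if url not in self.store:           # miss: use international capacity
            self.international_fetches += 1
            self.store[url] = f"content of {url}"
        return self.store[url]              # hit: served locally

cache = LocalCache()
requests = ["cnn.com", "facebook.com", "cnn.com", "nation.co.ke",
            "facebook.com", "cnn.com", "nation.co.ke", "cnn.com"]
for url in requests:
    cache.get(url)

print(f"{len(requests)} requests, "
      f"{cache.international_fetches} international fetches")
# → 8 requests, 3 international fetches
```

In this sketch, eight user requests cost only three international fetches; with realistic traffic, where popular sites are requested millions of times, the share of traffic that must leave the continent shrinks even further.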
There is therefore a need for a paradigm shift in the efforts being put to make African broadband Internet cheaper. I believe it does not lie in the laying of more submarine cables from Africa to Europe and US. The solution lies in the provision of reliable data centers within Africa in which content can be hosted cutting down the cost of Internet access drastically.
To illustrate that international capacity is not the solution: the Internet traffic traversing the trans-Atlantic cables between the US and Europe accounts for less than 30% of the total traffic on those cables, with the rest being business data (VPNs, etc.) and voice traffic. All this is because, to the US and EU, the Internet is a local network.
We need to make the Internet local in Africa.
Last week I presented a paper on the state of broadband in Africa at the 2010 Nigeria IT collaboration conference, with emphasis on the fact that even with the arrival of fiber optic cables, VSAT still has a major role to play in Africa in the long term. I also presented on the need for development and hosting of local content within Africa. Here is another blog link with excerpts: http://itrealms.blogspot.com/2010/09/vsat-remains-cheapest-accessible.html