Not a day passes on my Twitter timeline without several complaints from frustrated people about the quality of Internet connectivity they are getting from their current ISPs. Many of these complaints stem from the fact that the ISP did not deliver on its promise to avail bandwidth to the customer at the subscribed speeds. The second most common complaint is the lack of proper customer support, as the ISP becomes evasive in resolving customer issues or explaining what is happening to the customer's link.
When Internet service provision in Kenya started in the late 90's, there was no data last mile network to speak of; existing voice circuits were conditioned to deliver data to customers via dial-up or leased lines. Dial-up lines delivered 9.6Kbps on 64Kbps channels because the analog modem circuitry of the day had not been designed with the Shannon-Hartley theorem in mind. Digital leased lines could however deliver link speeds of 64Kbps and above. The circuits were 64Kbps because the Nyquist theorem states that, for a voice signal not to suffer distortion, the sampling rate should be at least double the highest frequency component of the signal. For voice, most of the power is concentrated at frequency components below 4kHz, so the sampling frequency is 8kHz. In Pulse Code Modulation, each sample is represented by 8 bits, so the bit rate = 8000 samples/sec x 8 bits/sample = 64Kbps. This 64Kbps is what was available on a single leased line copper pair for data use.
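The arithmetic above can be checked in a few lines of Python; the figures are the standard telephony values from the paragraph, nothing else is assumed:

```python
# PCM bit-rate arithmetic for a single voice channel, as described above.
voice_bandwidth_hz = 4_000                  # highest significant voice frequency
sampling_rate_hz = 2 * voice_bandwidth_hz   # Nyquist: sample at least twice the top frequency
bits_per_sample = 8                         # PCM represents each sample with 8 bits

bit_rate_bps = sampling_rate_hz * bits_per_sample
print(bit_rate_bps)  # 64000 -> the classic 64Kbps channel
```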
Due to the poor maintenance of the copper telephony infrastructure and a monopoly market for the Internet backbone by KP&TC, Internet services were erratic and rudimentary to say the least. Bad Internet was the norm and good internet was unheard of in Kenya.
With the liberalization of the last mile and Internet backbone provision, many players came into the market to offer these services. Service levels did improve, but only for a while, and users were back to the dark days of slow Internet in no time.
Where did ISPs go wrong?
Whereas voice providers offered a 64Kbps circuit for voice communication, they billed by the minute. The longer you made a call, the more you paid; if you were not making a call, the 64Kbps was still dedicated to you and you paid a monthly flat fee for it. ISPs on the other hand delivered the 64Kbps and decided to charge a flat monthly bill irrespective of the amount of data transferred. In short, ISPs were offering unlimited data from the onset. Due to the poor management of the incumbent operator Kenya Posts and Telecommunications Corporation (KP&TC), ISPs easily transferred this element of resource over-utilization to the incumbent as they made profits using a flawed billing method. When the market was liberalized, the ISP now owned the end-to-end infrastructure (last mile, core and backbone) and any inefficiency in resource utilization could not be absorbed by anyone else but them. The rain started beating ISPs immediately after the liberalization of the Internet industry in Kenya, as their books now started reflecting these inefficiencies as losses and high operating costs. Many were loss making, many fell by the wayside, many merged.
The perpetuation of the unlimited Internet promise to customers is the reason why cost cutting, overselling and poor customer support ail the ISP business today. Without a differentiated level of service and billing by the quantity of data transferred, ISPs continue to bleed as a small percentage of customers take advantage of this and overuse the service, greatly affecting the quality of service the rest receive. When Safaricom abandoned unlimited data plans early last year, they said that about 1% of users consumed over 70% of the network resources, causing the remaining 99% to experience poor service. Safaricom understood this flawed model of doing business and no longer offers unlimited data plans on its mobile platform; I see them doing the same for their fiber and wireless customers in the near future.
Cheap is expensive
At the moment, international capacity is wholesaling at 200 USD per Mbps. Any ISP selling dedicated service priced below this is cheating its customers. I see adverts of ISPs selling 1Mbps for as low as 10 USD. This capacity must be shared between several clients to retail at that price. Depending on how many the other users are and how heavy their downloads are, customers will generally never hit even a quarter of the subscribed bandwidth on a good day. The end result is frustration and anger as customers accuse the ISP of treachery and poor service.
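To see why, here is a rough sketch of the arithmetic. The 200 USD and 10 USD figures are from the paragraph above; everything else is simple division and ignores all the ISP's other costs:

```python
# Minimum sharing (contention) ratio needed just to cover the wholesale
# bandwidth cost, ignoring every other overhead the ISP carries.
wholesale_usd_per_mbps = 200   # wholesale price of 1 Mbps of international capacity
retail_usd_per_customer = 10   # advertised "1 Mbps" retail price

min_contention_ratio = wholesale_usd_per_mbps / retail_usd_per_customer
print(min_contention_ratio)        # 20.0 -> at least 20 customers share each Mbps

# Worst case, with all 20 downloading at once, each effectively gets:
print(1.0 / min_contention_ratio)  # 0.05 Mbps, a twentieth of the promise
```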
The fact is that anyone who needs dedicated Internet will have to pay not less than 500 USD for unlimited 1Mbps in Kenya today. Should users wish to pay less for this speed, then it must be capped by download volume, as it is not possible to get both dedicated speed and unlimited downloads without the ISP making a loss. Don't be fooled by the marketing gimmicks and jargon used by ISPs. Nothing good comes cheap. End users who have come to this understanding are reaping the fruits of their choice to pay hefty prices for Internet that works.
Sacrificed at the altar of cheap service is customer support: an ISP cannot charge 10 USD for a 1Mbps link and afford to hire sufficient support staff and cover its overheads to offer any semblance of customer support. The lack of responses from many ISPs in Kenya today to customer support queries is deliberate, not a result of poor planning or failure to cope with the number of incoming calls. Trust me, customers who pay a premium for Internet have their calls answered and issues resolved faster than their 10 USD counterparts.
What needs to be done?
Lower prices are not sustainable in the long run, and offering cheap but poor service is not a good business strategy either. There is enough proof in the market to support this. Safaricom did not join the voice and data price-cutting bandwagon started by its rivals; they were the most expensive in the market (they still are), yet they continue to turn a profit year after year. Why? Because it is not true that users are after cheap service; they are after good, acceptable service and are willing to pay a premium for it.
ISPs need to offer good service, and to do this they will need to increase their pricing from the current ridiculously low figures to levels that will offer the end user acceptable service and make them a profit. The current service by most ISPs is so poor that upgrading from one plan to another has no noticeable effect on service levels. This has to stop, and it can only stop if ISPs increase their pricing.
I know I am not going to be popular for saying this but it’s the truth and unless ISPs bite the bullet, they will continue to offer mediocre services to their customers.
In the recent past, there has been news of certain countries blocking particular websites, or the entire Internet, from being accessed by their citizens. We have seen countries in the Middle East block YouTube, Google and social media websites such as Facebook and Twitter during the Arab Spring and after the recent release of a movie that touched on the Muslim religion. We have also seen countries such as China block access to Facebook for political reasons. Just last week, Syria blocked Internet and mobile access by its citizens as the civil war raged on.
The distributed nature of the Internet ecosystem means that there is more than one path to and from an Internet resource such as a server hosting a website. Distributed content delivery and hosting also means there exists more than one copy of the same website or content on several servers located in geographically distinct regions. For example, if you tried to access a YouTube video from an Internet connection in Kenya, the video could be served from the Google cache servers on Mombasa Road, while a person accessing the same video in the UK could get it from a content server in London. This poses a challenge to anyone who might want to block access to the video.
How the Internet works in layman's terms
The Internet relies on a routing protocol called Border Gateway Protocol (BGP). Each Internet service provider has IP addresses that it gives users who want to connect to the Internet. All of an ISP's IP addresses then belong to what is called an Autonomous System (AS), identified by an AS number that belongs to the ISP. All ISPs in the world announce their IP addresses under their AS numbers. To find your ISP's AS number click here.
As an example, assume ISP 1 has the IP addresses from 220.127.116.11 to 18.104.22.168 (a total of 16382 addresses) under AS 1, ISP 2 has the range from 22.214.171.124 to 126.96.36.199 (also 16382 addresses) under AS 2, and so on up to, say, ISP 100 with the range x.x.x.x to y.y.y.y on AS 100. If YouTube is hosted on IPs that belong to ISP 40 with AS number 40, and a customer on ISP 1 wants to access YouTube, then the routers on each AS will have what is called a routing table that tells them which AS to send traffic to for a particular IP address. A BGP routing table looks something like this:
- To reach the IP range from 188.8.131.52 to 184.108.40.206 on AS 1, send this traffic to the BGP router advertising AS 1
- To reach the IP range from 220.127.116.11 to 18.104.22.168 on AS 2, send this traffic to the BGP router advertising AS 2
- To reach IP addresses on AS n, send this traffic to the router advertising AS n
- To reach all other IP addresses that I do not know how to reach, ask a few knowledgeable routers at some big ISPs which, because of their size, might know.
This means that very many IP addresses can be addressed by the common AS number they share; a single ISP can use one AS number to cover all its customers. The YouTube IP belonging to AS 40 can therefore be reached by the customer on AS 1 if the AS 1 router knows the route to AS 40 from its routing table.
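As a sketch, the routing-table logic above can be modelled in a few lines of Python. The prefixes and AS labels here are invented for illustration; real BGP route selection involves far more than prefix matching:

```python
import ipaddress

# A toy BGP-style routing table: prefix -> the AS that advertises it.
routing_table = {
    ipaddress.ip_network("10.1.0.0/18"): "AS1",
    ipaddress.ip_network("10.2.0.0/18"): "AS2",
    ipaddress.ip_network("0.0.0.0/0"):   "upstream",  # default: ask a bigger ISP
}

def next_hop(destination: str) -> str:
    """Pick the most specific (longest) matching prefix, as routers do."""
    addr = ipaddress.ip_address(destination)
    matches = [net for net in routing_table if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(next_hop("10.2.3.4"))  # AS2: falls inside 10.2.0.0/18
print(next_hop("8.8.8.8"))   # upstream: only the default route matches
```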
The above is a simplified explanation of what an Internet routing table looks like. From this we see there are three critical conditions that must be fulfilled for an ISP user such as you and me to reach, or be reached from, the Internet. These are:
- A user must have an IP address
- This IP address must belong to an AS
- This AS must be announced by BGP to other BGP speaking routers on the Internet.
How then can Internet access be blocked?
The above means that a user without an IP address cannot access the Internet, but it would be nearly impossible to remove all IP addresses from devices in a country if the powers that be do not want them to connect to the Internet.
The easiest way to make these users unreachable, and unable to reach the Internet, is to stop announcing their IP addresses and AS number via BGP. This means that if an ISP is asked by the government to stop announcing its AS, users on that ISP cannot access the Internet. All a government needs to do is threaten to withdraw the ISP's operating license for non-compliance and boom, the entire country is without Internet access!
The diagram below shows how about 57 Syrian ASes containing thousands of IP addresses stopped being reachable on 29th November 2012 after the government 'asked' ISPs to stop announcing them on the net. The few remaining ASes were most probably government-run networks.
On the other hand, a government might want to block access to a particular website. This they can do in several ways.
- By asking ISPs to install filters that can detect and filter traffic to and from the particular IP addresses that host the website. This is usually a long, drawn-out process and can take months to implement. Iran and China have such systems in place. Nokia Siemens was in the news in 2010 facing criticism from the EU for supplying Iran with such equipment.
- If a government wants to block with immediate effect without involving the ISP, it can do so by illegitimately advertising a more specific route to the website and discarding the traffic upon receipt. In this method, a government announces an AS with a smaller IP block than the one the website belongs to. Let's say for example there is an AS number 78 advertising the block 22.214.171.124 to 126.96.36.199 (8190 IP addresses), and a government comes up with an AS number 94 advertising a similar but more specific block, say 188.8.131.52 to 184.108.40.206 (4094 IP addresses). If the website address is 220.127.116.11, which is part of this block, there will be two AS numbers, 78 and 94, announcing that they know how to reach the website IP. So which AS is chosen? The one with the more specific route (fewer IP addresses on it), in this case the malicious government AS number 94. User traffic from that country to the website can then be picked up by the government router and discarded. Pakistan Telecom (the government-controlled incumbent) inadvertently announced routes to YouTube on the Internet in 2008; instead of the specific route applying only to Pakistani ISPs, it leaked to the wider Internet, causing a worldwide YouTube outage as all YouTube traffic was now being routed to a BGP-speaking router in Pakistan. See how it happened here.
- Countries or organizations that control the root name servers for top-level domains (TLDs) such as .com and .net can also block access to websites under those TLDs by not answering domain name queries to the root servers for particular domain names. The root server method is what the hacktivist group Anonymous wanted to use to bring down the Internet: if they attacked all 13 existing root servers and brought them down long enough, the DNS resolution system would collapse, leading to a worldwide Internet blackout. This method of blacking out access to certain websites can only be used by countries or organizations controlling these root servers, such as the USA.
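The "more specific route wins" rule in the second method above can be sketched as follows. The prefixes and AS labels are made up for illustration only:

```python
import ipaddress

# A legitimate AS advertises a /19; a hijacker advertises a /20 that
# also covers the website's address. Routers prefer the longest (most
# specific) matching prefix, so the hijacker's route wins.
advertisements = {
    ipaddress.ip_network("203.0.96.0/19"):  "AS78 (legitimate)",
    ipaddress.ip_network("203.0.112.0/20"): "AS94 (hijacker)",
}

website = ipaddress.ip_address("203.0.113.10")   # inside both blocks
matches = [net for net in advertisements if website in net]
winner = max(matches, key=lambda net: net.prefixlen)
print(advertisements[winner])  # AS94 (hijacker): traffic is diverted and can be discarded
```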
There are numerous other ways for a country to block Internet access or access to certain websites, some legitimate and some illegitimate like example 2 above. All in all, it is very easy to block entire countries from the Internet should the need arise.
If you had just enough electricity to either heat your house during winter or power your PC and give you an Internet connection, what would you choose?
In a recent survey, a group of Americans were asked this question and 63% of them chose the Internet connection over staying warm. In another case, a man dug up his neighbor's lawn to pass a fiber cable to his house; when the neighbor sued him for damaging his well-manicured lawn, the defendant argued that the Internet was a utility service and therefore had right of way. The courts thought otherwise and ordered him to pay for the damage done. Some ISPs in Kenya have faced difficulties when laying fiber to the building, as landlords demand monthly fees for hosting the ISPs' cables in their buildings. ISPs have resisted paying this monthly 'rent', arguing that companies like Kenya Power or the water distributors do not pay a similar consideration to deliver their services to tenants. The ISPs want the landlords to treat their Internet cables as utility cables and not charge for routing them through the buildings.
The question that arises is whether Internet connectivity can be considered a public utility like water and electricity. A public utility can be defined as "a business that furnishes an everyday necessity to the public at large." Electricity and water are both considered public utilities. In strictly legal terms, there is also a regulatory component in the public utility definition, but here I am concerned with the "everyday necessity" portion. With a utility service like electricity, I want to flip a switch and expect electricity, and to consume it in quantities that satisfy my need while leaving enough available to satisfy other people's (the public's) needs too.
I believe the answer to the question of whether the Internet is a public utility depends on many factors. The first is geography. As much as Africa has made great strides in Internet penetration, we are still very far behind our European or Japanese counterparts when it comes not just to availability of the Internet but also to its use; it's one thing to have the Internet available and another to use it. Statistics show that Africa contributes just about 2% of total Internet traffic and less than 0.1% of the content. Africa is still fighting hunger, disease and lack of clean water; to try to classify the Internet as a utility might seem insensitive and counterproductive. Or is it?
In the developed world, penetration in some countries is close to 100% (Norway at 97% and Monaco at 100.6%), compared to Africa's highest penetration rate of 51% in Morocco and lowest in South Sudan at 0%. It might seem counterintuitive to classify the Internet as a utility in South Sudan, for example; however, doing so might actually spur its penetration levels.
The reasons for declaring it a utility are different for developed and developing countries. Whereas the developed country population is already hooked on the Internet and uses it in daily life, in developing countries it is still a luxury that not many can afford. However, more and more people in developing countries are spending a bigger chunk of their incomes to gain connectivity.
Declaring the Internet a utility in a developed country would mostly spur usage, while in a developing country it would spur penetration. The problem that will arise in both, however, is that all public utilities must be closely regulated. When the FCC in the US attempted to declare the Internet a public utility in 2010, it faced a lot of opposition because of the raft of regulatory measures it had put in place. At stake was how far the FCC could go in dictating how Internet providers manage traffic on their multibillion-dollar networks. The FCC said that its intentions were misunderstood and all it wanted was to guarantee net neutrality. The issue of net neutrality arises from the fact that some ISPs were giving higher preference to traffic from their own services or friendly partners and lower priority to traffic from rival networks; e.g. Comcast was giving video traffic from its sister companies higher priority than traffic of a similar nature from, say, Netflix or YouTube. Whether Comcast is justified in doing this is a discussion for another day.
So the answer to whether the Internet can be classified as a public utility depends on many factors. My opinion is this: for the sake of increasing penetration levels, it should be classified as a utility but should be spared the close regulation imposed on other utilities such as water and electricity. This is because, unlike water and electricity, which lack distinct differentiators from one supplier to another (clean water is clean water, 240 volts AC is 240 volts AC), the Internet has unlimited ways in which value addition and differentiation can be done. A regulatory framework to manage this value addition could be cumbersome and self-defeating, and market forces should be left to determine which ISP wins the market.
With the landing of several undersea cables in Africa in the last three years, many a pundit has hailed a new dawn of telecommunications on the continent. The cables brought with them massive bandwidth capacities that enabled faster and cheaper communications. Before the arrival of these undersea cables, satellite was used to connect Africa to the rest of the world. These satellite links had the following characteristics:
- They were expensive, because satellite transponder leasing was costly due to the extremely high demand for capacity. This demand peaked circa 2005, when operators were even buying capacity on satellites that were still on paper, not yet built and launched.
- Due to the cost and scarcity of capacity, many back-haul pipes were congested, making satellite communications slow and irritating to use.
The arrival of cheap and abundant terrestrial capacity led many to declare that satellite was destined for the history books and that there would be no market for satellite broadband in the years to come.
Three years down the line, reality has hit home as the following facts dawned:
- The issue of back-haul was resolved by the undersea cables, but these cables did not address the last mile access problem. There is a lot of capacity at the landing stations that cannot be distributed to end users because there is no good last mile infrastructure in place. Spectrum scarcity has also made things worse.
- Even on the existing last mile networks and those being put up to meet this demand, reliability has been a key issue due to poorly designed networks and fiber cuts. Industry leaders now seem to agree with this fact, as seen here.
- No regulatory framework was set up to harness the advantages brought by the availability of bandwidth. Regulators failed to come up with new policies and laws on issues such as infrastructure sharing and spectrum farming and sharing.
The result is that the consumer has not benefited much, as ISPs and NSPs continue to offer mediocre services. There are reports of some ISP customers getting as low as 92% availability, which translates to about 29 days of outage in a year.
Will Satellite make a comeback?
Before we answer this question, we need to be aware of the key advantages that satellites provide. These are:
- Very high availability. No technology beats satellite when it comes to availability. Downtime is rare and far between, allowing the majority of well-designed satellite systems to achieve the proverbial 99.999% availability (about 5 minutes of outage in a year).
- Satellites offer instant availability of service over a large area without the need to lay additional infrastructure.
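Availability percentages translate into yearly downtime as follows; a quick sketch, using the 92% figure quoted earlier and the "five nines" benchmark:

```python
# Convert an availability percentage into downtime per year.
MINUTES_PER_YEAR = 365 * 24 * 60  # 525600

def downtime_minutes_per_year(availability_pct: float) -> float:
    return (100.0 - availability_pct) / 100.0 * MINUTES_PER_YEAR

print(downtime_minutes_per_year(92.0) / (24 * 60))  # ~29.2 days: the poor-ISP figure
print(downtime_minutes_per_year(99.999))            # ~5.3 minutes: "five nines"
```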
With the advent of Ka-band satellites, the landscape is about to change, as these will bring with them large amounts of bandwidth and make it available instantly to large geographical regions. The key advantages of Ka-band satellites over the more traditional Ku-band and C-band satellites are:
- In terms of capacity, one Ka-band satellite is equal to about 100 Ku-band satellites, yet they cost about the same to manufacture and put into orbit. This means that capacity on Ka-band will be much cheaper, to the point of giving fiber capacity competition.
- Ka-band utilizes spot beam technology that enables the use of smaller antennas (as small as 60cm) and cheaper modems. At the moment, a full Ka-band kit is competitive on price with terrestrial technology equipment. Intelsat is developing a spot beam architecture utilizing all bands that will allow 2 satellites to cover all populated continents of the world. Read more on the Intelsat Epic™ project here.
With more capacity available to offer higher speeds at much lower costs, with equipment cheaper and more competitive with terrestrial offerings, and with reliability that terrestrial services can only dream of, what will prevent satellite broadband from making a comeback?
If recent events are anything to go by, satellite broadband is already making a comeback in Africa. The recent launch and uptake of capacity on the Yahsat 1B and Hylas-2 satellites over Africa will avail high-speed capacity whose quality rivals that of the majority of terrestrial services, latency notwithstanding. The main reason why I think Ka-band will be a game changer in the African broadband market is that operators have realized it is one thing to roll out terrestrial infrastructure and another to operate and maintain it. Operational costs of the newly laid terrestrial wired and wireless networks are becoming prohibitively high due to vandalism and sabotage, and the terrestrial networks have too many points of failure to offer any reliable service. Ka-band satellite will offer cheaper bandwidth that is more reliable and easy to access and install anywhere on the continent. It takes an average of 3 weeks to survey and install a fiber cable in a city like Nairobi; it takes about 2 hours to fully set up a Ka-band dish and connect to the Internet. Once the fiber hype dies, Ka-band broadband via satellite will be a hit in Africa.
Anyone dismissing my argument should look at the following links on the roll-out and expansion of Ka-band satellite in Europe, the Middle East and the USA, regions we consider far better "wired up" than Africa.
- Hughes' announcement of the launch of EchoStar 17 to offer 100Gbps broadband services in North America, after the successful launch and sale of capacity on the Spaceway satellites: http://bit.ly/ShARi6
- Viasat Ka-band 100Gbps broadband service offering in North America: http://bit.ly/ShBAjj
- Avanti Communications' announcement of the launch of Hylas-2 to offer services in the Middle East, Africa, Europe and the Caucasus: http://bit.ly/ShBYhL
In the past few weeks, there has been a rise in consumer complaints aimed at mobile operators. These complaints relate to the expiry of data bundles after a fixed period of time from activation. The general feeling in the market is that if a customer purchases a data bundle, the mobile operator has no right whatsoever to expire that bundle, and the customer should use it for however long he or she wishes. A user who buys a 100MB data bundle could therefore take as long as he wishes (say 12 or even 36 months) to consume it.
This argument is from a customer’s perspective and they have every right to argue that way because they have spent their hard-earned money to purchase these bundles. However, how does the mobile operator view it?
Upstream and Downstream Contractual Obligations
When a customer purchases a data bundle, an implied contract automatically comes into existence between the consumer and the mobile operator. The consumer commits to pay for the data bundle and the mobile operator commits to delivering the data to the customer's mobile device at the agreed speed and volume. For a binding contract to be formed there must be:
- An offer which is accepted and for which valid consideration is given;
- An intention to create a legal relationship; and
- Certainty of terms.
From the above, there are additional points to note in such a contract:
- The offer must be communicated; that is why you get a confirmation message from the mobile operator notifying you that you are about to purchase xx MB of data and asking you to confirm that this is correct.
- The acceptance must also be communicated; that is why you also get a message confirming the bundle purchased.
- Certainty of terms means that there must be certainty as to the parties, subject matter, and price. This means a purchaser of a data bundle is assumed to be aware of what he is purchasing and what the terms in the contract mean; in this case, it is assumed that someone purchasing 100MB knows what an MB is and how much he can do with an MB of data. A user cannot purchase 100MB and expect to download and upload anything more than that.
On the flip side, the mobile operator also enters into a contract with the upstream data provider who connects the operator to the Internet. This contract is informed by the fact that downstream customers have brought in business by purchasing bundles. The same rules of contract apply. In this case, however, the mobile operator does not purchase bundles as such but purchases capacity in Mbps (megabits per second). The fact that the mobile operator has purchased "capacity" means that it will pay for the capacity to transmit or receive whether it uses it or not; capacity is defined as the actual or potential ability to perform, yield, receive or contain. Unlike bundles, capacity does not denote quantity but ability.
The mobile operator therefore anticipates that customers who have purchased bundles will connect and use them, and commits to sufficient capacity to enable this to happen.
What would happen if mobile users purchased bundles and none of them used the bundles for, say, 6 months? During this time, the mobile operator will have purchased capacity to connect the customers to the Internet (capacity in terms of pipe, equipment and human resources). If none of the users consume their purchased bundles, the mobile operator will still incur costs towards this capacity: staff have to be paid their 6 months' salary, the upstream provider must be paid whether the pipes were used or not, and operating overheads and equipment depreciation will also be incurred during these silent six months. Being a commercial venture, this is not sustainable, as for the firm to be profitable, revenues must be higher than costs.
The mobile operator therefore has to protect itself against such scenarios by setting a time limit within which purchased data bundles can be used. There is the argument that the customer has prepaid the bundles and the mobile operator has positive cash flow in the whole transaction. However, this positive cash flow exists only at the onset; as time goes on, the operator faces diminishing returns from the transaction. Data bundles therefore have to have an expiry date to ensure that the mobile operator does not run into losses. In the same way, bottled water and cheese have a 'use by' date; the two do not essentially expire in the sense of usability, but the date is set to strike a balance between committed resources/capacity and consumption rate to ensure positive returns to the investors.
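The diminishing-returns argument can be made concrete with a toy model. All figures are hypothetical, chosen only to show the shape of the problem:

```python
# One-off bundle revenue versus the recurring monthly cost of keeping
# capacity (pipe, equipment, staff) available, whether or not it is used.
bundle_revenue_once = 250_000      # USD prepaid by customers (hypothetical)
capacity_cost_per_month = 100_000  # USD per month (hypothetical)

for month in range(1, 7):
    net = bundle_revenue_once - capacity_cost_per_month * month
    print(f"month {month}: net {net:+,} USD")
# From month 3 the prepaid revenue is exhausted; every further idle month
# deepens the loss, which is why bundles carry an expiry window.
```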
The landing of various undersea cables on African shores in the last three or so years has heralded a new dawn of high-speed communications, offering clearer international calls and faster broadband speeds. These cables have moved bandwidth, once a scarce commodity, into near oversupply. Indeed, as I write this, quite a huge chunk of the undersea capacity is unlit.
With the arrival of these cables, telcos and ISPs that once depended on expensive FSS (Fixed Satellite Service) capacity moved their traffic to the more affordable undersea cables. The cost of bandwidth came down, but not to the level envisioned by consumers, as most prices fell by about 40% and not the expected 90%. I had warned of the possibility of prices not coming down by 90%, despite the expectation and hype, in a previous blog post in 2009.
Satellite At Inflection Point
Incidentally, while the cables were arriving, some major developments were happening in the satellite world. However, due to the hype and excitement around the arrival of undersea cables, the majority of us didn't notice or care about these changes that were set to revolutionize satellite communications. These changes have created a lot of excitement in the telecommunications world but are largely ignored here, as we believe that the undersea cables are the future.
The Situation In Kenya (and by extension Africa)
With several cables landing on the Kenyan coast, it would be expected that quality of service from ISPs would be at its best. However, this is not the case, as the quality of service has greatly deteriorated over time due to a poorly maintained last mile network. We have bandwidth at the shorelines but are unable to fully utilize its potential. The majority of operators in Kenya embarked on ambitious plans to lay last mile fibre cable around the country; the same thing is happening in other African countries too, albeit at a slower pace. These are good steps. However, these telcos have oversimplified the issue of last mile access to one of laying cable on poles or burying it underground. That's just 5% of the entire job of last mile provision; the other 95% lies in maintaining the network, which sadly none of the telcos were prepared for. They thought that after laying the cables, money would start flowing in. Last mile cable cuts due to civil works are currently one of the biggest causes of downtime in Kenya today; hardly a day ends without incidents of cable cuts as roads are expanded, new buildings come up, and natural calamities strike, such as trees falling on overhead cables and the flooding of cable manholes.
Collectively as Africa, we seem to underestimate the size of this continent; operators do not know what it will take to wire Africa to the same levels as Europe or the US. The map below shows the task ahead of us as far as wiring Africa is concerned; it is not going to be an easy job. Africa is the size of the US, China, India, Europe and Japan put together. Click on it for a larger image.
This size poses a challenge as far as laying last mile networks in Africa is concerned. The lack of reliable electric power supply also limits how far these networks can grow from the major cities. Click on this map here to see how far behind Africa is in power supply distribution; it is a composite of images taken by NASA at night, sequentially, as each part of the earth moved into night time.
As seen above, it will take quite a large amount of investment to bring this continent to the levels of other continents as far as connectivity is concerned. Even when this is done, connectivity will remain an expensive affair, as investors will need a return on their investment.
However, all is not lost, as the once derided satellite service is now making a comeback and will soon give terrestrial services a run for their money. Already, the US and Europe are undergoing a major shift towards the use of satellite to provide broadband service. Currently there are more investors putting their money into satellite launches than into laying undersea cables.
Below are some of the developments in Satellite that will herald this comeback but have sadly slipped past most of us.
Unlike the Ku- and C-band satellites that were in use before the arrival of cables in Africa, Ka-band satellites use spot beam technology, which allows frequency re-use and the provision of hotter beams. What this means is that satellite capacity can be greatly expanded due to frequency re-use, and CPE equipment is now cheaper thanks to a hotter beam/signal. A single Ka-band satellite is now equivalent to about 100 Ku-band satellites for the same cost. The two main reasons why satellite was ditched for fibre were the cost of equipment and the cost of bandwidth. Due to these two developments, satellite operators will soon be able to offer prices as low as 300 USD per Mbps, down from about 6,000 USD per Mbps.
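To see why spot-beam frequency re-use multiplies capacity, here is a back-of-the-envelope sketch. All the figures in it (spectrum, spectral efficiency, beam count, re-use pattern) are illustrative assumptions, not vendor specifications:

```python
# Toy comparison of wide-beam vs spot-beam satellite capacity.
# Every number here is an illustrative assumption, not real satellite data.

def satellite_capacity_mbps(bandwidth_mhz, efficiency_bps_per_hz, beams, reuse_factor):
    """Total throughput when the allocated spectrum is re-used across beams.

    reuse_factor: fraction of beams that may share the same frequencies
    without interfering (spot beams are narrow, so re-use is possible).
    """
    return bandwidth_mhz * efficiency_bps_per_hz * beams * reuse_factor

# A traditional Ku-band satellite: one wide beam covering the footprint, no re-use.
ku = satellite_capacity_mbps(500, 2.0, beams=1, reuse_factor=1.0)

# A Ka-band spot-beam satellite: the same 500 MHz re-used across ~80 narrow
# beams in a 4-colour pattern (each frequency/polarisation used in 1/4 of beams).
ka = satellite_capacity_mbps(500, 2.0, beams=80, reuse_factor=0.25)

print(f"Ku-band wide beam : {ku:,.0f} Mbps")
print(f"Ka-band spot beams: {ka:,.0f} Mbps ({ka / ku:.0f}x the capacity)")
```

Under these assumed numbers, the same spectrum yields 20x the capacity; with more beams and higher-order modulation the multiple grows further, which is where figures like "one Ka-band satellite is worth ~100 Ku-band satellites" come from.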
The reason why Ka-band was not commercially viable for some time was that the technology had not matured enough for viable commercialization. Ka-band, which operates in the 17–30 GHz range, is susceptible to weather interference, but there now exist techniques to counter this, greatly improving reliability. The high operating frequency meant more expensive detecting equipment (modems), but advances in technology have allowed for the manufacture of affordable 200 USD modems today.
More Efficient Modulation Schemes
In communications, the amount of data that can be sent over a transmission channel depends on the noise on that channel and the modulation scheme used. There have been great advances in modulation techniques and noise suppression, allowing more data to be pushed over smaller channels. This includes the use of Turbo coding, which is so far mankind's best shot at reaching the Shannon limit. One recent and notable development was by Newtec, which managed to push 310 Mbps over a 36 MHz transponder, translating to 8.6 Mbps/MHz; previously the best you could do was about 2.4 Mbps/MHz. Read the Newtec story here.
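The Newtec figure can be sanity-checked against the theoretical ceiling. The sketch below computes the achieved spectral efficiency and compares it to the Shannon limit C/B = log2(1 + SNR); the 30 dB SNR figure is my own assumption for illustration, not from the Newtec demo:

```python
import math

# Spectral efficiency of the Newtec demo vs the Shannon limit.
# The SNR figure below is an illustrative assumption.

transponder_mhz = 36
achieved_mbps = 310

efficiency = achieved_mbps / transponder_mhz       # Mbps/MHz, i.e. bit/s/Hz
print(f"Achieved efficiency: {efficiency:.1f} bit/s/Hz")   # ~8.6

# Shannon limit per unit bandwidth: log2(1 + SNR), at an assumed 30 dB SNR.
snr_db = 30
snr_linear = 10 ** (snr_db / 10)
shannon_limit = math.log2(1 + snr_linear)          # bit/s/Hz
print(f"Shannon limit at {snr_db} dB SNR: {shannon_limit:.1f} bit/s/Hz")
```

Under that assumed SNR the ceiling is roughly 10 bit/s/Hz, so 8.6 bit/s/Hz really is operating close to the Shannon limit, which is why Turbo and similar near-capacity codes matter.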
Combine this with Ka-spot beam frequency re-use and you will have satellite capacity that is cheaper than fiber bandwidth, if you add this to the reach that satellite foot print provides, you will have instant broadband available on the entire continent.
At around the same time the cables were landing, a Google-backed broadband project was being announced. The project, dubbed O3b (Other 3 Billion), is named after the unconnected 3 billion people in the world. Google believes that this is the most viable way to avail broadband to the masses. Otherwise, how do you explain the fact that Google has never invested in fiber capacity to Africa or the developing countries? I wrote about the O3b project in a previous blog post that you can read here.
O3b will utilize satellites that are closer to the earth, hence the term MEO, which stands for Medium Earth Orbit. The fact that these satellites are closer means that latency on the links will be much lower (about 200 ms) compared to traditional geostationary satellite capacity (about 600 ms). This will enable higher throughput at lower latencies. To read more on the relationship between latency and throughput, read this tutorial here.
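The latency/throughput relationship can be illustrated with the classic single-flow TCP bound: a sender can only have one window of unacknowledged data in flight per round trip, so throughput is at most window/RTT. The 64 KB window below is the historical default without window scaling; modern stacks scale windows, so treat this as a sketch of the principle rather than real-world numbers:

```python
# Why lower latency raises TCP throughput: at most one window of
# unacknowledged data can be in flight per round trip, so a single flow
# is capped at window_size / RTT. 64 KB is the classic unscaled window.

WINDOW_BYTES = 64 * 1024

def max_tcp_throughput_mbps(rtt_ms):
    """Upper bound on a single TCP flow's throughput for a given RTT."""
    return (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1e6

for name, rtt in [("GEO satellite", 600), ("O3b MEO", 200), ("fibre", 20)]:
    print(f"{name:13s} RTT {rtt:3d} ms -> max {max_tcp_throughput_mbps(rtt):5.2f} Mbps per flow")
```

Halving the RTT doubles the per-flow ceiling, so cutting latency from ~600 ms to ~200 ms triples what a single unscaled TCP connection can achieve.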
Majority of the satellites in orbit have a lifespan of about 15 years. This lifespan is determined by the amount of fuel a satellite can carry: the fuel lasts about 15 years, and once it is depleted, the satellite cannot be maneuvered and is therefore not usable. As I write this, Intelsat, the world's largest commercial satellite fleet operator, has signed up for satellite refueling services from MDA Corp to extend the life of some satellites by about 5 years. What this means is that operators can get more money out of their satellites due to the extended life, and they can therefore offer cheaper bandwidth.
Combine the advantages of Ka-band spot beams, efficient modulation, MEO satellites and the ability to refuel satellites, and you have the solution to the myriad problems afflicting consumers in Africa today as far as reliable and high speed broadband connectivity is concerned.
By the end of 2014, satellites will offer cheaper and more reliable bandwidth than undersea fiber optic cables. This is the reason why investors are flocking to launch satellites rather than lay cables. These include Yahsat, an Abu Dhabi company; Avanti Communications, launching over Europe and Africa; and Hughes, which is launching Ka-band satellites over the US mainland for broadband connectivity. If undersea cables were as good as was touted by local operators, why are investors putting money into launches over the US and Europe, which are far more wired than Africa? Closer home, the Nigerian government launched its own communication satellite that it says will extend broadband reach in the country faster than terrestrial technologies.
Watch this space….
The announcement by Safaricom that it is doing away with its unlimited Internet bundle did not come as a surprise to me. I had discussed the historical reason behind the billing model used by ISPs and mobile operators in a previous blog post here in Feb 2011.
The billing model used in unlimited Internet offerings is flawed. This is because the unit of billing is not a valid and quantifiable measure of consumption of the service. An ISP or mobile operator charging a customer a flat fee for a given size of Internet pipe (measured in Kbps) is equivalent to a water utility company charging you based on the radius of the pipe coming into your house and not the quantity of water you consume (download) or sewerage released (upload).
What would happen if the local water company billed users a flat-rate fee based on the per-centimeter radius of the pipe going into their homes rather than the volume of water consumed? Since flow scales with the cross-sectional area of the pipe, a user whose pipe radius is 1% larger than the neighbour's enjoys about 2% more water flow into their house (do the math!), yet their bills will differ by only 1%. Likewise, a 2% difference in radius yields about a 4% difference in consumption but only a 2% difference in billing. The result is that a small group of about 1% of users ends up consuming about 70% of all the water. This figure is arrived at as follows: a marginal unit increase in resource leads to a near doubling of marginal utility, a logarithmic gain (ln 2 = 0.693, which means that about 69% of the utility is enjoyed by about 1% of consumers). This matches the figure issued by Bob Collymore, the CEO of Safaricom, who said that 1% of unlimited users consume about 70% of the resources. This essentially means costs could outstrip revenues by 70:1, which does not make any business sense. Not even a hypothetical NGO giving 'free' Internet through donor funding could carry such a cost-to-revenue ratio. Why ISPs and mobile operators thought billing by the size of the pipe to the Internet could make money is beyond me.
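Both halves of the argument can be sketched numerically: the radius-versus-flow mismatch, and the concentration of usage in a tiny fraction of users. The Pareto shape parameter below is chosen purely for illustration so that the simulated top 1% lands in the neighbourhood of the quoted ~70% figure; it is not fitted to any real traffic data:

```python
import random

# 1. Flow through a pipe scales with cross-sectional area (r^2), so a 1%
#    wider pipe carries ~2% more water while a per-radius bill rises only 1%.
r_ratio = 1.01
flow_ratio = r_ratio ** 2
print(f"1% more radius -> {100 * (flow_ratio - 1):.1f}% more flow")

# 2. Heavy-tailed usage: a toy population with Pareto-distributed
#    consumption, where a tiny fraction of users accounts for most traffic.
#    The shape parameter 1.08 is an illustrative choice, not measured data.
random.seed(1)
usage = sorted((random.paretovariate(1.08) for _ in range(100_000)), reverse=True)
top1_share = sum(usage[:1000]) / sum(usage)   # share of the top 1% of users
print(f"Top 1% of users consume {100 * top1_share:.0f}% of traffic")
```

The exact share varies with the seed and shape parameter, but the qualitative point is robust: under any heavy-tailed usage distribution, flat-rate billing makes the lightest users subsidise a handful of extreme consumers.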
Bandwidth Consumption Is Not Linear
One mistake that network engineers make is to assume that a 512Kbps user will consume double what a 256Kbps user does, and therefore advise the billing team that charging the 512Kbps user twice the price of the 256Kbps user will cover all costs. This is not true. There are activities that a 256Kbps user will not be able to do online, like comfortably watch YouTube videos; a 512Kbps user can do so without a problem. The result is that the 512Kbps user will watch far more YouTube videos, while the 256Kbps user, frustrated by all the buffering, stops attempting to watch online videos altogether. Consumption by the 512Kbps user therefore ends up much higher than double that of the 256Kbps user. Beyond YouTube, websites can detect your link speed and present differentiated rich content based on it. I'm sure some of us have been given the option to load a 'basic' version of Gmail when it detects a slow link. The big-pipe guy never gets asked if he would like lighter web pages; rich content is downloaded to his browser by default, while the small-pipe guy gets less content downloaded to his browser even though they are both connected to the same website. The difference in content downloaded by two people on 512K and 256K links is therefore not linear or even double, but grows much faster than the difference in link speed.
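A toy threshold model captures this non-linearity: each activity needs a minimum link rate, and a user only generates traffic for activities their link can sustain. All the rates and volumes below are made-up illustrative figures:

```python
# Toy model: consumption jumps, rather than scales, with link speed,
# because activities unlock only above minimum sustainable rates.
# All figures are illustrative assumptions.

ACTIVITIES = {
    # activity: (minimum sustainable rate in kbps, data consumed per day in MB)
    "email and light browsing": (64, 50),
    "rich web pages":           (200, 200),
    "YouTube video":            (400, 1500),
}

def daily_consumption_mb(link_kbps):
    """Total daily traffic from every activity the link can sustain."""
    return sum(mb for min_rate, mb in ACTIVITIES.values() if link_kbps >= min_rate)

for speed in (256, 512):
    print(f"{speed} kbps user: ~{daily_consumption_mb(speed)} MB/day")
```

Under these assumed figures, doubling the link from 256 to 512 kbps multiplies consumption sevenfold, not twofold, because the 512 kbps link crosses the video threshold. Pricing that scales linearly with pipe size cannot track costs that jump like this.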
Nature Of Contention: It's A Transport, Not A Network, Problem
The second mistake that network engineers make is to assume that if you put a group of customers in a very fat IP pipe and let them fight it out for speeds under an IP-based QoS mechanism, then over time each customer will get a fair chance of drawing some bandwidth from the pool. The problem is that nearly all network QoS equipment characterizes a TCP flow as a host-to-host (H2H) connection and not a port-to-port (P2P, not to be confused with peer-to-peer) connection. There could be two users with one H2H connection each, but one of them might possess about 3000 P2P flows. Bandwidth is consumed by the P2P flows and not the H2H connections, so the user with the 3000 P2P flows ends up taking most of the bandwidth. This explains why peer-to-peer traffic (which establishes thousands of P2P flows) is a real bandwidth hog.
So what happens when an ISP dumps the angelic you in a pipe with malevolent users running peer-to-peer traffic such as BitTorrent? They will hog all the bandwidth, and the equipment and policies in place will not be able to ensure fair allocation of bandwidth to all users, including you. A few users running BitTorrent end up enjoying massive amounts of bandwidth while the rest, doing normal browsing, suffer. That explains why some users on the Safaricom network could download over 35GB of data per week, as per comments by Bob Collymore. Please read more on how TCP H2H and P2P flows work here. Many ISPs engage engineers proficient in layer 3 operations (CCNPs, CCIPs, CCIEs etc.) to provide expertise on what is a layer 4 issue of TCP H2H and P2P flows. You cannot control TCP flows by using layer 3 techniques; IP network engineers are being assigned the duties of transport engineers.
At the end of the day, there will be a very small fraction of 'happy' customers and a large group of dissatisfied and angry customers. The flat-rate revenues from the few happy customers cannot cover all costs as the unhappy customers churn. If, on the other hand, these bandwidth hogs paid by the GB, the story would be very different. This is what operators are realizing now and are moving with speed to implement. Safaricom is not the only one affected; Verizon, AT&T and T-Mobile in the US are all at different stages of doing away with unlimited service due to its unprofitable nature.