The announcement by Safaricom that it's doing away with its unlimited Internet bundle did not come as a surprise to me. I discussed the historical reasons behind the billing model used by ISPs and mobile operators in a previous blog post here in Feb 2011.
The billing model used in unlimited Internet offerings is flawed because the unit of billing is not a valid, quantifiable measure of consumption of the service. An ISP or mobile operator charging a customer a flat fee for a given size of Internet pipe (measured in Kbps) is equivalent to a water utility charging you based on the radius of the pipe coming into your house rather than the quantity of water you consume (download) or sewage released (upload).
What would happen if the local water company billed users a flat-rate fee based on the radius of the pipe going into their homes rather than the volume of water consumed? A user whose pipe radius is 1% larger than the neighbour's enjoys about 2% more water flow into their house (do the math!), yet their bills differ by only 1%, the difference in radius. Likewise a 2% difference in radius yields roughly a 4% difference in consumption but only a 2% difference in billing. The result is that a small group of about 1% of users ends up consuming about 70% of all the water. This figure is arrived at as follows: a marginal unit increase in resource leads to a near doubling of marginal utility, a logarithmic gain (ln 2 ≈ 0.693, meaning about 69% of the utility is enjoyed by about 1% of consumers). This matches the figure issued by Bob Collymore, the CEO of Safaricom, who said that 1% of unlimited users consume about 70% of the resources. It essentially means costs could outstrip revenues by 70:1, which does not make any business sense. Not even a hypothetical NGO giving 'free' Internet through donor funding could carry such a cost-to-revenue ratio. Why ISPs and mobile operators thought billing by the size of the pipe to the Internet could make money is beyond me.
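The radius-versus-volume arithmetic can be checked with a short script. The model is my own simplification: I assume flow scales with the pipe's cross-sectional area, i.e. with the square of the radius.

```python
# Toy comparison: billing by pipe radius vs. billing by volume consumed.
# Assumption (mine, not the water company's): volumetric flow scales with
# cross-sectional area, i.e. with radius squared.

def relative_flow(radius):
    return radius ** 2  # area-proportional flow, in arbitrary units

base = relative_flow(1.00)
bigger = relative_flow(1.01)  # neighbour with a 1% wider pipe

extra_flow = (bigger - base) / base   # ~2% more water consumed
extra_bill = (1.01 - 1.00) / 1.00     # but only 1% more on a per-radius tariff

print(f"extra consumption: {extra_flow:.2%}, extra bill: {extra_bill:.2%}")
```

The gap between consumption growth and billing growth is exactly the subsidy that heavy users enjoy under a flat per-pipe tariff.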
Bandwidth Consumption Is Not Linear
One mistake that network engineers make is to assume that a 512Kbps user will consume double what a 256Kbps user does, and therefore advise the billing team that charging the 512Kbps user twice the price of the 256Kbps user will cover all costs. This is not true. There are activities a 256Kbps user cannot comfortably do online, like watching YouTube videos, which a 512Kbps user can do without a problem. The 512Kbps user will therefore watch far more YouTube videos, while the 256Kbps user, frustrated by all the buffering, stops attempting to watch online videos altogether. The result is that the 512Kbps user's consumption ends up much more than double that of the 256Kbps user. Beyond YouTube, websites can detect your link speed and serve differentiated rich content based on it. I'm sure some of us have been offered the 'basic' version of Gmail when it detects a slow link. The big-pipe user is never asked whether he wants lighter web pages; rich content is downloaded to his browser by default, while the small-pipe user gets less content downloaded to his browser even though they are both connected to the same website. The difference in content downloaded over the 512K and 256K links is therefore not linear or even double, but takes a more logarithmic shape.
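A toy model of this superlinear consumption, entirely my own illustration: the exponent below is arbitrary, chosen only to show that consumption can grow faster than link speed.

```python
# Hypothetical model (my assumption): monthly consumption grows superlinearly
# with link speed because faster links unlock heavier content such as video.
# k and alpha are illustrative parameters, not measured values.

def monthly_gb(speed_kbps, k=0.01, alpha=1.5):
    return k * speed_kbps ** alpha

low = monthly_gb(256)
high = monthly_gb(512)
print(f"512K user consumes {high / low:.2f}x the 256K user")  # not the 2x a linear model predicts
```

Under any exponent above 1, doubling the price for double the speed systematically undercharges the faster user.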
Nature Of Contention: It's a Transport Problem, Not a Network Problem
The second mistake that network engineers make is to assume that if you put a group of customers in a very fat IP pipe and let them fight it out for speeds under an IP-based QoS mechanism, each customer will with time get a fair chance at some bandwidth from the pool. The problem is that nearly all network QoS equipment characterizes traffic as host-to-host (H2H) connections and not port-to-port (P2P, not to be confused with peer-to-peer) flows. Two users could have one H2H connection each, but one of them might possess about 3,000 P2P flows. Since bandwidth is consumed by the P2P flows and not the H2H connections, the user with the 3,000 P2P flows ends up taking most of the bandwidth. This explains why peer-to-peer traffic (which establishes thousands of P2P flows) is a real bandwidth hog.
So what happens when an ISP dumps the angelic you in a pipe with malevolent users running peer-to-peer traffic such as BitTorrent? They will hog all the bandwidth, and the equipment and policies in place will not be able to ensure fair allocation of bandwidth to all users, including you. A few users running BitTorrent end up enjoying massive amounts of bandwidth while the rest, doing normal browsing, suffer. That explains why some users on the Safaricom network could download over 35GB of data per week, as per comments by Bob Collymore. Please read more on how TCP H2H and P2P flows work here. Many ISPs engage engineers proficient in layer 3 operations (CCNPs, CCIPs, CCIEs, etc.) to provide expertise on a layer 4 issue of TCP H2H and P2P flows. You cannot control TCP flows using layer 3 techniques; IP network engineers are being assigned the duties of transport engineers.
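The per-flow arithmetic can be sketched as follows. If a QoS box divides the pipe fairly per TCP flow rather than per user, one user with 3,000 flows dwarfs everyone else (the numbers are illustrative):

```python
# Sketch: a 100 Mbps pipe shared fairly *per flow*, not per user.
# One BitTorrent user opens 3,000 flows; 49 normal users open 1 flow each.

PIPE_MBPS = 100
flows = {"torrent_user": 3000, **{f"user_{i}": 1 for i in range(49)}}

total_flows = sum(flows.values())              # 3,049 flows in the pipe
per_flow = PIPE_MBPS / total_flows             # each flow gets an equal share
share = {user: n * per_flow for user, n in flows.items()}

print(f"torrent user: {share['torrent_user']:.1f} Mbps "
      f"({share['torrent_user'] / PIPE_MBPS:.0%} of the pipe)")
print(f"each normal user: {share['user_0']:.4f} Mbps")
```

Per-flow fairness looks fair at layer 4 but is wildly unfair per customer, which is exactly the contention problem described above.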
At the end of the day, there is a very small fraction of 'happy' customers and a large group of dissatisfied and angry ones. The flat-rate revenues from the few happy customers cannot cover all costs as the unhappy customers churn. If, on the other hand, these bandwidth hogs paid by the GB, the story would be very different. This is what operators are realizing now and moving with speed to implement. Safaricom is not alone in this; Verizon, AT&T and T-Mobile in the US are all at different stages of doing away with unlimited service due to its unprofitable nature.
White space refers to frequencies that are allocated for telecommunications but are not used in active transmission. They nevertheless play a crucial role in enabling interference-free communication. In layman's terms, if there are two adjacent FM radio stations, say Hope FM at 93.30MHz and BBC-Africa at 93.90MHz, the two are separated by white space bandwidth equal to the difference between them (93.90MHz - 93.30MHz = 0.6MHz, or 600KHz). If you look at the entire FM or TV spectrum, there are a lot of white space frequencies not in active use; they serve as guard bands that let listeners tune clearly and avoid hearing two radio channels at the same time. The same is true for television (TV) transmission.
TV transmission uses the UHF frequency range of 470MHz-806MHz (for example, KTN Kenya transmits at 758-764MHz, which is channel 62 on the ITU chart. Remember this logo here? That rainbowy 62 wasn't a fashion statement). Each TV station is allocated 6MHz, of which only three points are used for picture, color and audio; the rest is white space. Taking the KTN example, 759.25MHz is used for the video, 762.83MHz for the color and 763.75MHz for the audio in the TV channel. The rest is what is known as white space and lies idle, though it serves as a guard band.
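The figures above reduce to simple arithmetic, using the station frequencies quoted in the text:

```python
# Guard band between the two FM stations from the text, and the width of
# an analog TV channel allocation (the KTN channel 62 example).

fm_gap_khz = round((93.90 - 93.30) * 1000)  # 600 kHz between Hope FM and BBC-Africa
tv_channel_mhz = 764 - 758                  # KTN's allocation is 6 MHz wide

print(f"FM guard band: {fm_gap_khz} kHz")
print(f"TV channel width: {tv_channel_mhz} MHz")
```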
It is this wasteful nature of analog TV and radio broadcasting that has prompted the concerted push by CCK to move to digital transmission which, unlike analog, does not have white spaces and therefore does not waste precious frequencies. The push to digital TV is informed by the fact that if all stations transmit digital signals, they will free up the white spaces for other uses.
From the consumer's perspective, the push from analog to digital transmission is attractive because it brings clearer pictures and richer content (such richness includes being able to set reminders for future programs, scroll through what's next, etc., just as you can now do on satellite TV such as DStv but not on the local free-to-air stations).
Kenya has set a target of 2012 to complete the analog-to-digital TV migration, and there is already a lot of progress on that front. Once the migration is complete, it will release the UHF band for other uses such as broadband Internet, SCADA and remote metering systems, and many more. Once released, these frequencies will be unlicensed, meaning anyone can use them without prior approval by CCK. In the USA, for example, the fact that white space will be unlicensed has seen the FCC face legal proceedings from wireless microphone manufacturers (whose devices use white space), because more powerful transmission sources, such as white space base stations on tall buildings, would interfere with them. The FCC has nevertheless gone ahead and allowed the use of white space for other purposes.
Because white space sits at lower frequencies (470MHz-806MHz) than existing last-mile solutions such as WiMAX (2,000MHz and 5,000MHz ranges) or Wi-Fi, white space signals can travel much farther and around physical objects. It is estimated that a Wi-Fi hotspot that switches to the white space frequency range can increase its coverage area by 16 times, enabling wide reach. The lower frequencies will also make detection of white space signals easier and less power-hungry: the higher the frequency detected, the more complex the equipment and the more power required. This explains why your phone's battery drains faster with Wi-Fi or Bluetooth (higher frequency) turned on than when you tune to FM stations (lower frequency) on the same phone.
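The 16-times figure follows from simple scaling. My assumptions here: free-space propagation with identical transmit power and antennas, so range scales inversely with frequency and coverage area with the square of range.

```python
# Back-of-envelope coverage comparison: 2.4 GHz Wi-Fi vs. a mid-UHF white
# space channel. Assumes free-space path loss and identical radio hardware,
# which is a rough idealization.

wifi_mhz = 2400        # 2.4 GHz Wi-Fi
whitespace_mhz = 600   # illustrative mid-UHF white space frequency

range_gain = wifi_mhz / whitespace_mhz  # 4x the radius at the same path loss
area_gain = range_gain ** 2             # coverage area scales with radius squared

print(f"range gain: {range_gain}x, coverage area gain: {area_gain}x")
```

Real deployments will see less than the ideal figure because of antenna sizes and regulatory power limits, but the square-law relationship between range and area is what makes lower frequencies so attractive.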
The wide coverage, cheaper equipment and lower power requirements will present endless possibilities for extending broadband coverage beyond towns and areas with WiMAX, Wi-Fi or GSM coverage. The use of white space will also lead to cheaper Internet, as investment in infrastructure will be minimal: one base station will be enough to cover an entire city and beyond, lowering last-mile CAPEX. This is unlike WiMAX, which would need about 5-10 base stations to cover a city like Nairobi.
The Institute of Electrical and Electronics Engineers (IEEE) recently announced the finalization of the 802.22 white space standard, known as Wireless Regional Area Networks (WRAN) or Super Wi-Fi (Microsoft calls its version White-Fi), which will deliver speeds of up to 22Mbps per channel. This paves the way for equipment manufacturers to design interoperable white space transceivers, and I believe we will soon see them in phones, laptops and tablet computers. Last April, a test white space network was established at Rice University in Houston, in which one base station was able to provide broadband connectivity to 3,000 residents of East Houston.
White space broadband will be a game changer in this country when it comes to last-mile connectivity, as the current networks are neither extensive in reach nor reliable and affordable. The new white space systems will offer wide coverage, especially in rural Kenya, whose low population density has made investors shun extending services there due to the CAPEX required to set up networks for the rural folk. White space utilization will make it possible for these investors to recoup their investments in rural areas, as they will now cover larger areas cheaply.
The Ka band is part of the K band in the microwave region of the electromagnetic spectrum. The Ka symbol refers to "Kurz-above", in other words the band directly above the K band (from the German kurz, meaning short). The 30/20 GHz band is used in communications satellites. Similarly, Ku stands for the "Kurz-under" band.
In the past two years or so, there has been a lot of furor over the utilization of the Ka band for commercial purposes such as broadband and audio-visual broadcast. The higher operating frequencies of Ka band and the spot beam design mean that the cost per bit transmitted on a Ka band satellite is significantly lower than that of Ku or C band satellites. This fact makes Ka band an attractive alternative to the more expensive Ku and C band and positions satellite communications as a worthy challenger to the now ubiquitous and cheaper fiber optic cables.
This lower cost per bit has seen investors and satellite operators come up with Ka band satellite launch projects such as the O3b project (see my take on O3b here), ViaSat, Yahsat and the Hughes Spaceway. Major satellite operators have also announced plans to launch Ka band birds in the near future, with the notable exception of Intelsat, which has so far been silent about its Ka band plans apart from its partial investment in the WildBlue Ka satellite that provides high-speed broadband in North America.
Intelsat's non-commitment to Ka band was perhaps the first sign that it knew something other operators didn't. For now it is left to our fertile imagination why the largest satellite fleet owner did not jump onto the Ka band-wagon (pun intended).
Recent events and realizations, however, show why Intelsat was right. The Ka band satellite projects have faced unforeseen hitches that have caused delays and, for some satellites, indefinite postponement.
Lack of key components.
During the design of a satellite, very few components are commercial off-the-shelf (COTS). This means that the majority of the components have to be specifically designed and manufactured for that particular satellite; no other satellite will have exactly the same component characteristics. One of the most important components in a satellite is the Traveling Wave Tube (TWT). These are RF amplifiers, and you can read more about them by clicking here. Apparently the two major manufacturers of TWTs for satellite applications (L3 Communications and Thales of France) have run into difficulties in the design and manufacture of Ka band TWTs. The high demand for Ka band TWTs is also straining their manufacturing capacity, making the acquisition of TWTs a critical path in these satellite projects.
Interest from airlines.
One thing no one foresaw was the strong interest in Ka services from airlines. Because the cost per bit for Ka band is significantly lower, airlines could now offer affordable high-speed broadband on board flights, and demand for on-board Internet services is skyrocketing thanks to devices such as iPads and smartphones. These airlines were willing to pay upfront and sign long-term contracts with satellite operators, as in the agreement between JetBlue and ViaSat signed early this year. This has sent some satellite designers back to the drawing board to design satellites capable of seamless cross-beam, and sometimes cross-satellite, hand-over so that aircraft are always connected. The immense opportunity presented by airlines is a lucrative market that satellite operators are keen to play in, especially given the toughening competition on land from terrestrial fiber optic cables. In a few years, broadband on aircraft will be ubiquitous thanks to this development.
This has made Ka band less competitive for broadband providers than the existing Ku band satellites, because demand for Ka is now far bigger than was anticipated.
The assumption that Ka band will present cheaper customer premises equipment (CPE) than Ku band was also far-fetched. The assumption that because the antenna is smaller it will therefore be cheaper is wrong. This only holds true for receive-only systems such as DTH TV like DStv. When it comes to broadband connectivity, where there is a need to transmit, a smaller dish presents two problems:
- The smaller dishes being presented to consumers of Ka band carry a bigger risk of interference if not precisely pointed: a small dish produces a wide beam that can cause adjacent satellite interference. Ka band installation technicians will need to be extremely precise in pointing the smaller dishes, and this extreme precision is sometimes lacking, especially in Africa and Asia where adherence to standards is lax. I foresee a situation where Ka band installations will continue to be done on Ku-size dishes such as the 1.2m or 1.8m dish. The smaller, cheaper dishes will simply not work well unless they are on automatic-pointing or gyroscopic systems on maritime vessels and aircraft, or on receive-only systems.
- Because of the high operating frequencies of Ka band, the tolerances for RF equipment design will have to be very tight. Designing an RF system that is both cheap and precise is simply not possible. The existing cheap Ka band RF systems are not the best and do not offer good enough tolerances for the extremely high data transfer figures touted for Ka band services, because their poor design introduces noise. You cannot buy a 50-dollar RF system and expect to transmit at 10Mbps.
So far, delays in Ka band satellite launches have hit ViaSat-1, YahSat-1B, the eight-satellite O3b constellation and many more.
The market needs to wake up to the fact that Ka band services will come, but they will need time to mature into what industry analysts say they will be. It will not be an overnight success story, but there is hope that the development of Ka band technology and systems will eventually lead to cheaper satellite broadband.
The fast increase of the Internet routing table size in the Default-Free Zone (DFZ) is becoming a major concern to IP carriers all over the world. The number of active BGP entries on border routers is increasing at a quadratic if not exponential rate (see figure below), putting the future unhindered scalability of the Internet in doubt. In spite of the use of the Border Gateway Protocol (BGP), inter-domain routing no longer scales: the volume of routing information keeps growing by the day, and it is not clear whether the current routing technology will keep pace with this growth cost-effectively. Today it costs border routers more to exchange routing information than it did a few years ago, due to the investment in more powerful routers needed to keep up with this growth.
The depletion of the IPv4 address space and the inevitable adoption of IPv6 addressing mean that routers will now have to exchange much larger routing tables, because the vast IPv6 address space will see even more prefixes announced in the Default-Free Zone. This problem is compounded by the desire of network operators to announce more specific (and hence longer-prefix) routes to their critical infrastructure such as DNS and Content Delivery Networks (CDNs) within the now wider IPv6 prefixes. This tendency to announce very specific routes using longer prefixes stems from the desire to prevent prefix hijacking by malicious Autonomous Systems (ASes), as happened in 2008 when an AS owned by Pakistan Telecom announced the YouTube IP space with a longer prefix, redirecting YouTube traffic to Pakistan because it was the more specific route. With cybercrime rates increasing worldwide, network engineers want to ensure high availability of their networks on the Internet and end up announcing very long prefixes that make the Internet routing table unnecessarily large. This is why I still think the old rule of eBGP routers filtering any route longer than a /22 should still be in force. A peek at some Internet routing tables will show the existence of even /25s.
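The /22 filter argued for above can be sketched with Python's standard ipaddress module. The prefixes below are illustrative, not real announcements:

```python
# Sketch of an eBGP inbound filter: drop any announced prefix longer than
# /22 before it enters the routing table. Example prefixes are made up.
import ipaddress

MAX_PREFIXLEN = 22
announced = ["196.200.0.0/16", "41.220.64.0/22", "41.220.68.128/25"]

accepted = [p for p in announced
            if ipaddress.ip_network(p).prefixlen <= MAX_PREFIXLEN]

print(accepted)  # the /25 is rejected, the /16 and /22 pass
```

Real filters would be expressed as router prefix-lists rather than Python, but the decision rule is the same single comparison per route.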
The growing size of the Internet and its inevitable changes and failures lead to a large rate of routing table updates that stresses router CPUs, and there have been several proposals to modify BGP to make it scale. The current routing tables are linear, and it's high time logarithmic-scale routing was introduced that can summarize routes in a logarithmic fashion. By this I mean that summarization of prefixes should be much more intense at the longer end and less intense as the prefixes become shorter.
The above can be achieved in three ways namely:
Aggregation proxies: In this approach, ISPs announce or redistribute routes to their networks via a non-BGP protocol to an aggregation proxy. This proxy receives many long prefixes and aggregates them into shorter ones for eventual announcement via BGP. The regional allocation of IPs through organizations such as LACNIC, RIPE, AfriNIC and the rest makes aggregation proxies a very viable path, because the regional allocation of IP space is not random (e.g. any IP starting with 196. or 41. is from an African ISP). AfriNIC could therefore host aggregation proxies that speak to African border routers via a non-BGP protocol, and the proxy could then announce a single entry for, say, the 196 range to the Internet. The other regional aggregation servers in the Americas, Europe and Asia could then filter out any competing announcement of the African IPs, because that would be IP hijacking. The downside to aggregation proxies is that paths will now be longer, as the proxy introduces an extra hop; the trade-off between a massive reduction in routing table size and path elongation has to be weighed to see if this is a viable alternative.
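The aggregation step such a proxy would perform can be sketched with Python's standard ipaddress module (the prefixes are illustrative):

```python
# Sketch of an aggregation proxy's core job: collapse many adjacent long
# prefixes into the shortest covering aggregates before announcing via BGP.
import ipaddress

long_prefixes = [
    "196.10.0.0/24", "196.10.1.0/24", "196.10.2.0/24", "196.10.3.0/24",
]
aggregates = list(ipaddress.collapse_addresses(
    ipaddress.ip_network(p) for p in long_prefixes))

print([str(a) for a in aggregates])  # four /24s become one /22
```

Four table entries become one; repeated across a whole regional registry's allocations, this is where the logarithmic-style compression described above would come from.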
DNS-like lookup system: This system would apply to non-routable prefixes. In this concept, all the long prefixes are retained and recorded in a DNS-like lookup system in which a particular IP space is mapped to a specific border router. Anyone wishing to communicate with this IP space does a lookup to get a next-hop IP address and sends the traffic there. As a result, the long prefixes are not routable on the Internet, but the lookup system knows a router from which the traffic can be forwarded without the use of inter-domain routing information. In simple terms, this is like a DNS not for domain names but for long-prefix IP spaces. This proposal would eliminate the need to have long prefixes in the Internet routing table, and a bar could be set to filter anything longer than, say, a /19 from being announced on the now cleaner DFZ. This would have the advantage of returning control of what appears in the DFZ routing table to regional organizations such as AfriNIC, as opposed to AS managers, who can sometimes be selfish.
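A minimal sketch of such a lookup system, with hypothetical prefixes and border-router addresses of my own invention:

```python
# Sketch of a DNS-like mapping service: long prefixes stay out of the DFZ
# and are resolved on demand to a border-router next hop. All addresses
# here are illustrative.
import ipaddress

mapping = {
    ipaddress.ip_network("41.220.68.128/25"): "196.10.0.1",  # border router A
    ipaddress.ip_network("41.220.70.0/24"):  "196.10.0.2",   # border router B
}

def resolve_next_hop(dst_ip):
    """Return the border router for the longest prefix covering dst_ip."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [net for net in mapping if dst in net]
    if not matches:
        return None  # not in the mapping; fall back to normal routing
    return mapping[max(matches, key=lambda net: net.prefixlen)]

print(resolve_next_hop("41.220.68.130"))
```

The longest-prefix-match rule mirrors what routers already do; the difference is that the lookup happens in an out-of-band service instead of inflating every DFZ router's table.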
Locator-Identifier split (Loc/ID): Whereas the above two methods overlay and enhance the existing BGP, this approach replaces inter-domain routing as we know it. The Locator-Identifier split proposes scrapping IP addressing as we know it and coming up with a 2-level routing architecture to replace the current hierarchical inter-domain system. The argument behind Loc/ID is that IP-based routing does not scale because the IP address assigned to a device now serves the dual role of being both its locator and its identifier. By splitting the address into a locator section and an ID section, then summarizing the locators in the DFZ, considerable reductions in the routing table can be achieved, because routing in the DFZ will be based on locators alone and not on both locators and identifiers. Cisco recently developed the Locator/ID Separation Protocol (LISP), which it hopes will replace BGP in future, as BGP will no longer be able to scale to a bigger IPv6 Internet. Read more about LISP by clicking here. Cisco is promoting LISP as an open standard rather than a proprietary one, in the hope that the Internet Engineering Task Force (IETF) will adopt it.
In summary, network operators need to be aware of the non-scalability of BGP and start preparing their networks for the adoption of one of the three proposals above. I would, however, bet that the Loc/ID way of doing things will prevail and that LISP will replace BGP as the inter-domain routing protocol of choice on the Internet.
Happy IPv6 day!
The last five years have witnessed unprecedented changes in the whole idea of what the Internet is about. Initially it was viewed as a vast library of information and was even dubbed the 'information superhighway' where people would search for information. This has since begun to change as the Internet evolves from the static, indexed system it once was into a more intelligent system that can 'learn' a user's preferences and serve relevant information accordingly, or that uses the social web, where vast ecosystems of interlinked relationships deliver information to the user without the user necessarily having searched for it. Today we learn of world events through automatic Twitter updates that pop up on our PCs and mobile devices, and of our friends' events through picture uploads on Facebook. The opportunities for further development of the Internet are vast and limited only by our imagination.
With all this change happening on the Internet, one problem remains: we still pay to access it. This is one area that has remained fundamentally the same since the birth of the Internet. True, there is no comparison between the cost of Internet access now and what it was 10 years ago, but we nevertheless still pay (I remember paying KSh 10 per minute in 1998 in a cyber cafe in Nairobi). On the other hand, we do not pay for other forms of information and entertainment access such as radio, TV and billboards; they are all free.
Why is the Internet not free when it shares the same basics with radio, TV and billboards? They all utilize infrastructure (be it radio transmitters, billboard masts and tarpaulins, wireless spectrum, real estate, etc.). In the case of radio or TV, all the end user needs to buy is the receiver set to start enjoying the services; there are no contracts, obligations or money to be paid to anyone unless it's pay TV such as DStv. This is not true when it comes to accessing the Internet.
I believe the problem lies in the way the Internet was originally organized. On one side was the person who needed information and on the other the person who had it. In between was the ISP, acting as a conduit through which the exchange could happen. The catch was that the ISP charged both parties to enable them to communicate. "Well, there is nothing wrong with this approach; the ISP is a commercial venture and needs to make money," you might say. That is true, but this is the very reason why radio listeners don't pay for radio while Internet users still pay. We also don't pay to be called on the phone; the person who wishes to call me (and therefore pass information to me) pays for the call. Yet we still pay for the Internet. The setup in radio is the same: on one side the seeker of information (the listener), on the other the owner of the information or entertainment, and in between them the radio station. But in this case the radio station charges only the owner of the information and lets the listener enjoy the service for free, provided he buys a receiver capable of tuning in to the station. "Well, what the listener would have been charged is taken care of by adverts on radio," you might say again. However, the Internet is full of adverts too, sometimes more than radio carries for the same quantity of information exchanged.
Internet access should be free
In May last year we saw Facebook launch Facebook Zero (0.facebook.com). This service grants free access to users who visit the Facebook website from their mobile devices. Facebook negotiated with several mobile providers to zero-rate access to its website: the operator does not bill the customer and instead sends the bill to Facebook. Effectively, the user has free Internet access to the Facebook website. This service was recently launched by some local mobile operators, where users have unlimited access to Facebook and can also tweet for free even with no airtime or valid data bundles loaded on their phones.
Now, if ISPs shifted their focus from billing the consumer to billing the provider of information, as in the example above, ISPs would still recover their cost of doing business and turn a guaranteed profit.
This business model has worked very well in the Internet content sector, where companies such as Google, Yahoo!, Hotmail, Facebook and Wikipedia offer their services to the public for free and instead make money from adverts or from public goodwill donations (in the case of Wikipedia). Users do not pay to own a Yahoo! email account or to use the Google search engine.
With the advent of cloud computing, hosting content has become dirt cheap, and the costs once tied to infrastructure and data-center investments are now much lower. The growing penetration of the Internet also means there is a bigger audience for Internet adverts, and these companies now make more money per advert than before. So we have a case where content providers' margins grow over time while ISPs' margins continue to decline. It's time ISPs took a piece of the content providers' pie by making them pay for public access to their information. Because content providers make more money when more people visit, they should spend money to attract people to their content by subsidizing, or entirely paying for, the cost of connection.
For this to happen, ISP billing models must also change. Billing by pipe speed is no longer sustainable, and the majority of ISPs are now moving towards volume-based billing, where a user purchases data volume bundles and the bigger the bundle, the higher the pipe speed. So if a user has a bundle and all he does is visit sites that have zero-rated their access, such as Facebook Zero, his bundle remains intact. This will stir competition among content providers, because users will flock to the free websites. If my ISP charges me to access Yahoo! while Gmail is free (because Google is paying the cost on my behalf), what will ever make me visit the Yahoo! page again? Yahoo! then stands to lose advert revenue (because fewer people visit it) and will also be forced to offer free access to its pages by paying the ISP on behalf of the end user. The same will happen to big content providers in competition with each other, such as news websites and social networks.
The recent declaration by the ITU that makes broadband Internet access a basic human right, alongside food, shelter and water, will not be achieved under the current commercial nature of the Internet. Governments now have to start factoring access-infrastructure set-up costs into their budgets, the same way they budget for roads and access to clean water.
So at the end of the day, Internet access will be free to end users just as using the road network is free, with content providers paying for access and governments paying for access infrastructure.
O3b Networks is a next-generation company founded by Greg Wyler in 2007. O3b is an acronym for the "Other 3 Billion", denoting the 3 billion people in the world who still lack reliable means of communication.
O3b plans to launch a Medium Earth Orbit (MEO) satellite constellation to offer low-latency, fiber-quality broadband to regions of the world without much terrestrial infrastructure, such as Africa, South Asia and the Pacific. O3b has strong financial backing from big guns such as SES World Skies, Google, HSBC, Liberty Global, Allen & Company, North Bridge Venture Partners, the Development Bank of Southern Africa, Sofina and Satya Capital.
The plan is to launch a constellation of 8 satellites into medium orbit by 2013 and offer connectivity to mobile operators and Internet service providers. O3b will be a wholesaler, selling bulk capacity only to providers of video, data and voice and not directly to end users.
O3b believes that MEO satellites will offer lower latency (and therefore higher throughput) to most underserved markets where fiber capacity has not reached and will not reach for a long time. To see how lower latency relates to higher throughput, see this tutorial here.
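The link between latency and throughput comes from TCP: with a fixed receive window, throughput cannot exceed the window size divided by the round-trip time. A rough sketch with illustrative RTT figures (the window size and round-trip times below are my assumptions, not O3b's published numbers):

```python
# Why lower latency means higher throughput for a single TCP connection:
# max throughput = window_size / RTT. Figures are illustrative.

WINDOW_BYTES = 64 * 1024  # a common default 64 KB TCP receive window

def max_throughput_mbps(rtt_ms):
    return (WINDOW_BYTES * 8) / (rtt_ms / 1000) / 1e6

geo = max_throughput_mbps(500)  # illustrative GEO satellite round trip
meo = max_throughput_mbps(150)  # illustrative MEO round trip

print(f"GEO: {geo:.2f} Mbps, MEO: {meo:.2f} Mbps per connection")
```

Window scaling and parallel connections can work around the limit, but for ordinary single-connection traffic, cutting the round trip from GEO to MEO distances multiplies achievable throughput by the same factor.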
The O3B idea is a great one and deserves all the support it can get so as to make it a success.
A while back, several investors financially backed the Iridium project, which was to offer mobile voice communication from anywhere on earth via 66 Low Earth Orbit (LEO) satellites. However, the project was a failure even before it started, bogged down by delays, design problems and poor marketing. Iridium estimated that it would easily sign up the roughly 600,000 customers it needed to break even and that the market for its service would be massive. In the end it attracted only 22,000 customers, failing even to break even. With calls costing about $7 per minute and handsets costing about $3,000, the project was doomed from day one.
The Iridium project failed not because the concept did not work; it did. The failure was due to the firm's managers underestimating the impact GSM technology would have on the voice market. It is GSM that killed Iridium, by offering very cheap calls from small, cheap handsets that could be used even inside a building or car (as opposed to Iridium sets, which could only be used outside in open space).
Many a pundit has expressed fears that the O3b project will join the likes of Iridium in the not-so-envious club of spectacular project failures. I would, however, like to differ with the critics who have predicted O3b's failure based on the failure of previous projects such as Iridium and Teledesic, because O3b is being implemented at the right time in history. Broadband Internet and voice are now mass-market products, unlike when Iridium was being implemented. When Teledesic and Iridium were launched, broadband and voice were niche markets, very expensive for the average person; mobile communication was classified as a luxury, and data transmission was only done by big corporations such as banks and oil companies. Today things are very different, and I think this is what makes O3b different.
The last mile challenge
One of the biggest problems in African telecoms is the availability of a reliable and extensive last mile. Africa has never been attractive to investors wishing to invest in the last mile because of two factors:
- The African rural population density is very low, meaning that traditional last-mile access technologies such as WiMAX will not return on investment and are therefore not commercially viable.
- The African continent is massive. The African land mass is equal to that of the USA, India, China, Europe and many other countries put together. The image below says it all; click it for a bigger and clearer version.
As a proponent of enhancing African rural communications, I believe the O3b project will help bridge the gap between the rural and city populations of Africa by overcoming the last-mile commercial-viability challenge and leveraging the satellite footprint to offer cost-effective coverage to nearly every spot on the continent. This means that everyone in Africa could have instant, lower-latency access to the Internet. According to O3b, they will avail fiber-quality satellite connections offering a lower latency of about 190 ms (thanks to the MEO satellites) and high-capacity links at a low price of about $500 per Mbps, which is very competitive by any standard in developing countries. This capacity can then be distributed via methods such as 3G, LTE, WiMAX, Wi-Fi etc.
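The 190 ms figure is consistent with simple propagation arithmetic. A rough sketch, assuming a bent-pipe hop at the roughly 8,000 km altitude O3b announced, with the satellite directly overhead and ignoring slant range and processing delay (which is why the real quoted figure is higher):

```python
C_KM_PER_S = 299_792  # speed of light in vacuum, km/s

def min_rtt_ms(altitude_km):
    """Best-case propagation RTT for a bent-pipe satellite hop:
    up and down on the request, up and down on the reply."""
    return 4 * altitude_km / C_KM_PER_S * 1000

print(round(min_rtt_ms(8062)))   # MEO at ~8,062 km -> ~108 ms floor
print(round(min_rtt_ms(35786)))  # GEO at 35,786 km -> ~477 ms floor
```

So even in the best case a GEO link can never get below roughly half a second of round-trip delay, while MEO leaves comfortable headroom under the 190 ms O3b quotes.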
Apart from providing the rural population with broadband Internet, the O3b satellites will also provide cellular operators with much-needed GSM trunking services at a lower cost, and therefore enable faster deployment of mobile networks in rural Africa. The cost of network expansion to rural Africa will drastically reduce, and this will speed up connectivity and spur development. To see how connectivity aids development in rural Africa, download the Commonwealth rural connectivity report here.
The Google dimension
Many people were surprised when Google decided to back the O3b project; they failed to see the interest Google (which to many is just a search engine) had in the provision of affordable connectivity in developing countries. I believe the backing from Google was the game changer. On their blog, Google says its mission is to "organize the world's information and make it universally accessible and useful." Well, Google has succeeded in organizing the world's information, and O3b will help make it universally accessible. Google believes that by funding such projects, it will extend its reach to the whole world and grow the market for Google phones, operating systems such as Android and Chrome OS, cloud services, advertising and search. Google's backing for open-source software development will also mean that cost barriers to ICT implementation will be eliminated for the other 3 billion people in the world.
I therefore believe that the Google-backed O3b project has come at the right time in history and will play a much bigger role in bridging the digital divide in Africa than fiber optics, whose coverage extends to less than 20% of the continent. Google's participation in this project will also reduce the cost barrier to adoption of ICTs in the developing world, Africa included.
Last week, the Canadian government passed a law that forces all Internet service providers (ISPs) to bill their consumers by metering their downloaded/uploaded traffic volumes from 31st March (see article here). This means that a user will now pay the ISP not based on the size of the pipe he has bought, but on the amount of traffic downloaded and uploaded. This is the same approach used by electricity and water companies, where a user pays for the quantity of electricity or water used.
For a very long time, ISPs the world over have been billing their customers based on the size of the Internet pipe a consumer purchases. There is something very flawed with this approach: the unit of billing is not a valid and quantifiable measure of consumption of the service. If ISPs ran water services this way, they would be billing you for the diameter of the water pipe coming into your house rather than the amount of water you consume. If they ran the electricity supply, they would bill you based on the thickness (hence resistance) of the cables dropping into your house from the electricity pole. That's just plain flawed.
How did they end up here?
The first question anyone might ask is: why did ISPs start billing consumers using this model, and why have they stuck with it?
The answer is historical. Before the Internet was born, telcos provided only voice services to consumers. These telcos had extensive terrestrial hybrid networks with the widest reach (the last mile being mostly copper pairs). The technology at the time was based on circuit-switched networks, collectively known as Public Switched Telephone Networks (PSTNs), which dedicated 64 Kbps per voice circuit (per phone conversation), because this was the minimum bandwidth required for speech encoding with minimal distortion. The PSTN networks were therefore offering 64 Kbps at the copper endpoints.
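The 64 Kbps figure itself falls out of basic PCM arithmetic (the G.711 codec): a roughly 4 kHz voice band sampled at the Nyquist rate with 8 bits per sample:

```python
# PCM arithmetic behind the 64 Kbps voice channel (G.711):
# a ~4 kHz voice band sampled at the Nyquist rate, 8 bits per sample.
voice_band_hz = 4000
sample_rate_hz = 2 * voice_band_hz      # Nyquist rate: 8000 samples/s
bits_per_sample = 8
bitrate_bps = sample_rate_hz * bits_per_sample
print(bitrate_bps)                      # -> 64000, i.e. 64 Kbps
```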
When ISPs started to offer Internet access to the public, they did not own networks on which to take the service to consumers. Because PSTNs were extensive in their reach, ISPs leased PSTN lines to deliver Internet services to end users. The PSTNs needed billing mechanisms for the leased voice lines (now converted to data lines), and the only way they could quantify what they had leased out was in terms of 64 Kbps channels, because each 64 Kbps channel occupied by the ISP was a channel the telco could not use for voice communications.
Sadly, the ISPs and mobile broadband operators were too lazy to come up with their own billing models and simply passed on to their consumers the same model the telcos used on them. ISP customers were now paying for Internet services based on how many voice lines had been converted to data lines, not on how much data was passing through those lines.
At the onset, this approach seemed to work well, because ISPs became very profitable businesses. Their growth was phenomenal and nothing could stop it. The strain of network loading was passed on to the telcos by the ISPs, who didn't own these networks anyway. The fact that most of these telcos were government-owned incumbents, with profitability not high on their list of priorities, meant that they effectively absorbed some ISP-related costs through their ignorance and inefficient business practices. This all changed when ISPs started to build and run their own networks from the core to the last mile. The ISPs started feeling the strain that multimedia such as gaming, audio/video streaming and peer-to-peer networks were putting on their networks. The advent of full movie and TV streaming services such as Netflix and Hulu has made things even worse for ISPs. Attempts to slow down bandwidth-hungry traffic such as peer-to-peer have been met with hue and cry, and some users even encrypted this type of traffic to avoid detection and eventual throttling by the ISP.
Current statistics show that 20% of the users on the Internet consume 80% of the resources (the classic Pareto principle). This means that 80% of Internet users do not get what they pay for, thanks to the 20% who hog Internet resources such as bandwidth (especially last-mile frequencies on wireless platforms such as WiMAX and 3G), server CPU time, network equipment CPU time and so on. At the end of the day, Internet users are becoming a frustrated lot as resources that were once abundant become scarce. This negatively impacts an ISP's bottom line.
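A toy calculation, with made-up usage numbers, shows how such a skew is measured: sort users by consumption and take the heaviest fifth's share of the total:

```python
# Made-up usage sample illustrating the 20/80 skew described above.
usage_gb = [100, 90, 70, 60, 10, 9, 8, 7, 6, 6, 5, 5, 4, 4, 4, 3, 3, 2, 2, 2]

usage_gb.sort(reverse=True)
top = usage_gb[: len(usage_gb) // 5]   # heaviest 20% of 20 users = 4 users
share = sum(top) / sum(usage_gb)
print(f"{share:.0%}")                  # -> 80%
```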
This has had the effect of commoditizing these scarce resources, with last-mile frequencies now being auctioned for millions of dollars (Kenya's CCK auctioned 3G frequencies to Safaricom for $20 million) and peering agreements becoming more expensive. This has put a strain on ISP margins as costs escalate, and they have no option but to re-examine their business models, especially how they bill for their services. The days when a user bought a pipe and used it with abandon are coming to an end, as ISPs will now bill based on GBs used so as to remain profitable and offer an acceptable level of service across all users.
The effect of this approach on the market will vary depending on the level at which an ISP plays. Smaller ISPs, which are basically resellers for bigger ISPs, do not own networks (and therefore the scarce resources), and the metered billing approach will be a big blow to them because their business models ride on an effect called 'network loading'. Smaller ISPs have fewer customers connected at any one time, so they can over-sell the bandwidth they have bought from the bigger operators more aggressively than the bigger operators can. This over-sell means the small ISPs can squeeze more money out of, say, a 1 Mbps pipe than a bigger ISP, because their lower network loading factor allows higher over-sell ratios. Smaller ISPs will therefore oppose usage-based billing more strongly than big ISPs.
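The over-sell arithmetic works roughly like this; the plan speed and loading factors below are made-up illustrations, not real operator figures:

```python
# Over-sell sketch: a lower loading factor (fewer subscribers online
# at peak) lets a reseller sell the same upstream pipe to more people.
def max_subscribers(upstream_mbps, plan_mbps, loading_factor):
    """loading_factor: fraction of subscribers active at peak."""
    return int(upstream_mbps / (plan_mbps * loading_factor))

print(max_subscribers(1, 0.256, 0.10))  # small ISP, 10% concurrency -> 39
print(max_subscribers(1, 0.256, 0.50))  # big ISP, 50% concurrency   -> 7
```

The same 1 Mbps upstream pipe supports five times as many 256 Kbps subscribers for the lightly loaded reseller, which is exactly the margin that metered wholesale billing erodes.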
This means that, should many countries go the Canadian way, smaller ISPs will have to merge or go under, while bigger ISPs will return to profitability.
Some people have argued that the metered-usage approach is counter to innovation and development, saying that the rapid development of the Internet was brought about partly by non-usage-based billing. Services such as online learning and cloud computing will be affected by a metered Internet, they add. What the proponents of this line of thought don't seem to see is that an ISP is a for-profit business and not a charity. If a cloud computing provider wants to offer its services to end users via an ISP's network, it should step in and subsidize the consumer's cost of connecting to the service, not let the consumer foot the entire connectivity bill while the provider pays close to nothing to avail its services on the ISP's network. If an online learning resource (say TED.com) wants users to benefit from its vast bank of educational videos for 'free', it should not let the consumer foot the entire bill of accessing the videos; it should chip in by offering subsidized access to its videos.
ISPs should therefore also develop models for making providers of services such as Hulu and Netflix pay for access to their services over ISP networks, rather than focusing only on billing the end user while these providers get away free. This will ensure that metered services remain relatively cheap for end users, as ISPs will also be making some money from the content providers.
A paradigm shift in how ISPs conduct their business is therefore needed if they are to survive in the marketplace. ISPs will have no option but to adopt metered billing (billing by volume) and to expire unused volume bundles after a specified period to spur activity and revenue flow. Otherwise, they will find it difficult to make money offering "unlimited" Internet services.
In early 2009, the African continent was heavy with expectation as several fiber optic submarine cables landed on its shores. This, it was hoped, would avail copious amounts of bandwidth to the continent and reduce its dependency on the existing satellite-based connectivity to the Internet.
This development was also accompanied by sweeping changes in the telecoms sector in Africa such as:
- Market liberalization and the end of dependency on the incumbent operator for international connectivity.
- ISPs and NSPs offering ‘smart’ pipes instead of the more traditional ‘dumb’ pipes hence moving up the IT value chain.
- The rapid increase in the value of broadband connectivity to businesses and individuals worldwide. Corporates wanted bigger and faster pipes to do business and cut costs, while individuals wanted to access media-rich content on the Internet such as streaming video and social networks.
- Deeper penetration of mobile wireless networks into Africa, carrying with them mobile data services.
The above could not be sustained over the existing satellite bandwidth, which was limited in quantity and speed; hence the need to deploy submarine cables.
Industry analysts predicted a massive drop of up to 90% in connectivity costs as consumers migrated to the undersea cables. The Kenyan and South African blogosphere was awash with predictions and expectations of cheaper broadband in these two countries. However, this was not to be, as ISPs were slow to adjust prices downwards and gave various reasons why they could not, some of which are:
- The cost of providing broadband connectivity is made up of many other costs, not just the cost of international backbone connectivity.
- Submarine cable utilization was low, so economies of scale could not come into play to reduce pricing.
- They needed to first recoup their investment in the new submarine systems before the end user could enjoy lower pricing.
- etc etc
What they did instead was offer more bandwidth for the old price, ensuring they sustained positive cash flows.
This did not go down well with most consumers, who felt cheated by the ISPs. The fact of the matter is that prices did drop by a considerable margin, even though the drop wasn't what was promised or envisaged.
I am, however, of the opinion that as consumers we ignore the fact that the ISPs made promises to us based on US and EU pricing models that are simply not applicable in Africa. Whereas a user in the US pays an average of $3.33 per Mbps and a user in Japan pays $0.27 per Mbps, his counterpart in Nigeria will pay $2,400 and in Kenya $700 for the same capacity. The question that arises is: why this big difference in pricing?
The answer lies in historical factors of infrastructure development in Africa and the issue of local content.
History of Infrastructure
The years between 1995 and 2001 witnessed intense investment in ICTs in the United States and Europe, characterized by many start-ups and massive capital investment in Internet infrastructure based on speculation of an impending IT explosion. These companies envisioned a huge market for high-speed broadband Internet.
In their investment quest, many of these businesses dismissed standard and proven business models, focusing on increasing market share at the expense of the bottom line and rushing madly to acquire other companies, leading many of them to fail spectacularly.
This period before the burst saw the laying of hundreds of thousands of kilometers of fiber optic cable, both on land and undersea, as companies invested on pure speculation rather than strategic market research. When the envisaged market failed to materialize, these companies could not get a return on their investments or become profitable, and went bust, culminating in what is known today as the dot-com bubble burst. Some of the casualties include WorldCom, Tyco, Global Crossing, Adelphia Communications and many more.
When these companies went bankrupt, their massive investments in national and international fiber optic networks lay underutilized and were bought for throwaway prices by new investors such as Comcast and Sprint. So low were the prices that some cable was bought for 60 US cents per Gbps per kilometer in 2002, compared with the 37 US dollars per Gbps per kilometer the Seacom cable cost to build in 2009.
Because of the heavy investment in the cables connecting Africa, the operators have no option but to offer consumer prices that ensure profitability for the investors, because any attempt to emulate their American counterparts' pricing would lead to a failure to break even, let alone make a profit.
I believe one of the key differentiating factors between African Internet and the US or EU version is local content. By local content I do not mean content such as regional news or websites in local languages; I mean content hosted locally, within the African continent's local-loop networks.
The fact that nearly all Internet content is hosted outside Africa means that we are fully dependent on international backhaul to access it. A user in Atlanta, Georgia does not need to cross US shores to get cnn.com, because CNN hosts its content in Atlanta (and many other US cities); to him, it's a local connection. A user in the UK will traverse a few local loops within the UK to access sky.com and will therefore not need international capacity. The same US and UK users will make only a few hops to reach a verio.net hosting server (where nation.co.ke is hosted). The concept of what the 'Internet' is in the US or UK is therefore totally different from what it is in Nairobi, because a Nairobi user still has to leave the continent to access nation.co.ke on Verio.net's servers. It is more expensive for the African user to access the Internet because he always has to traverse international links that are private commercial ventures.
However, if we work on developing local content by hosting content within Africa, we can drastically cut our dependence on international capacity. If, for example, the nation.co.ke website were hosted somewhere in Nairobi, we would not need international fiber capacity to access it. Now apply this to the majority of the websites we visit (Facebook, CNN, soccernet etc.). If we had good hosting and cache services locally, only cache updates would need to use the international capacity, as all African traffic would remain within Africa. A good example is a user in the UK accessing cnn.com (a US website): they do not need to leave the UK because cnn.com is cached in London. The same is not true for the African user.
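The caching idea can be sketched in a few lines; this is a hypothetical illustration (the URL and the 1 MB-per-object assumption are made up), not any particular cache implementation:

```python
# Minimal sketch of a local web cache: only a cache miss consumes
# international capacity; repeat requests are served locally.
international_mb_used = 0

def fetch_international(url):
    """Simulate pulling an object over the expensive undersea link."""
    global international_mb_used
    international_mb_used += 1          # assume 1 MB per object
    return f"<content of {url}>"

cache = {}

def get(url):
    if url not in cache:                # miss: cross the ocean once
        cache[url] = fetch_international(url)
    return cache[url]                   # every later hit stays local

for _ in range(1000):                   # 1,000 local users read the same page
    get("http://cnn.com/front-page")
print(international_mb_used)            # -> 1, not 1000
```

One international fetch serves the whole local population, which is the whole argument for data centers and caches on the continent.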
There is therefore a need for a paradigm shift in the efforts to make African broadband cheaper. I believe the answer does not lie in laying more submarine cables from Africa to Europe and the US; it lies in providing reliable data centers within Africa in which content can be hosted, cutting the cost of Internet access drastically.
Just to prove that international capacity is not the solution: Internet traffic traversing the trans-Atlantic cables between the US and Europe accounts for less than 30% of the total traffic on those cables, with the rest being business data (VPNs etc.) and voice traffic. All this is because, to the US and EU, the Internet is a local network.
We need to make the Internet local in Africa.
A dumb pipe is a pipe on an operator's network that simply transfers bits between a customer and the Internet. The term "dumb" refers to the operator's inability to restrict services and applications to its own portal; it primarily just provides raw bandwidth and network speed.
A smart pipe, on the other hand, refers to an operator's network that leverages existing or unique service capabilities, as well as the operator's own customer relationships, to provide value above and beyond mere data connectivity. The term "smart" refers to the operator's ability to add value through additional (and often unique) types of services and content beyond simple bandwidth and network speed.
An example of a smart pipe is a link from an ISP that provides quality of service (QoS) for mission-critical applications such as VoIP and business traffic while throttling recreational traffic such as peer-to-peer transfers or gaming. An example of a dumb pipe is a link from your ISP that you use to make 'free' Skype calls or download torrents, to the detriment of critical business transactions.
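As a toy illustration of the smart-pipe idea (the traffic classes and packets below are purely hypothetical, not a real traffic shaper), a strict-priority scheduler always transmits business traffic before recreational traffic:

```python
import heapq

# Strict-priority scheduler: lower number = higher priority.
PRIORITY = {"voip": 0, "business": 1, "p2p": 2, "gaming": 2}

queue = []
for seq, (traffic_class, packet) in enumerate([
    ("p2p", "torrent chunk"),
    ("voip", "call frame"),
    ("business", "ERP update"),
    ("gaming", "game state"),
]):
    # seq breaks ties so equal-priority packets keep arrival order
    heapq.heappush(queue, (PRIORITY[traffic_class], seq, packet))

drained = []
while queue:
    _, _, packet = heapq.heappop(queue)
    drained.append(packet)
print(drained)  # ['call frame', 'ERP update', 'torrent chunk', 'game state']
```

Real smart pipes implement this in the network gear with weighted queues rather than strict priority, so low classes are throttled rather than starved, but the ordering principle is the same.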
With the ongoing price wars and erosion of revenues being experienced by ISPs in Kenya, the need to maintain healthy margins is now a critical issue for all ISPs. The first thing that comes to mind in such a situation is reducing the cost of doing business, such as international bandwidth costs, human resources and process-inefficiency costs. Indeed, some ISPs have already announced intentions to cut costs, such as here.
I feel, however, that there is one more trick up ISPs' sleeves that could dramatically improve their margins and revenues: adapting their networks to offer smart pipes as opposed to plain old bandwidth pipes to the Internet. By offering smart pipes, ISPs will preserve the value of their pipes while enabling new business models and generating new revenue streams.
Many ISPs in the country still offer broadband connectivity in a 'utility company' fashion, running the infrastructure without adding any 'richness' to the services they offer. This has not only led to poor customer experience and eventual customer flight, but has also alienated the ISP world from mainstream IT by failing to expose network capabilities to the larger enterprise IT community through service delivery frameworks and web services.
Other players who a while back were considered harmless are now encroaching on ISP turf by offering value-added services over ISP infrastructure, while the Kenyan ISP has refused to move its products and services up the value chain. A case in point is Skype or Google, which enable users to make 'free' voice calls over the Internet. I say 'free' because it is free to the end user, but it costs the ISP providing the pipe through which these calls are made. The Skype EULA actually makes a user agree to his PC becoming a Skype supernode, further making the ISP bear the cost of 'free' calls made by people who are not even its customers.
I know some of you may be asking, "How does it cost the ISP? After all, the customer is paying for the pipe." The free calls cost the ISP because of the difference between how an ISP is billed for the Internet and how it bills its customers.
To explain this, note that ISPs pay separately for international connectivity capacity (fat pipes via fiber or satellite) and for the Internet connectivity carried through those pipes; these are two separate costs. The cost of leasing the 'fat pipes' is constant irrespective of usage, whereas the cost of the Internet connection is billed on usage (e.g. the 95th percentile of transferred data). So here is an ISP billing the customer a flat monthly fee for a specific pipe capacity to the Internet, while the ISP itself is not billed a flat fee for the wholesale connectivity it purchases from higher-tier carriers. This means any variation in the cost of providing connectivity is absorbed not by the end user but by the ISP, and when the ISP absorbs this cost, it is 'free' to the user. This is why ISPs such as Safaricom and iWayAfrica consider the volume of data transferred a critical determinant of what the customer should pay, rather than billing purely on the speed/capacity of the pipe.
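The 95th-percentile ("burstable") billing mentioned above works by sampling the link rate (typically every 5 minutes), discarding the top 5% of samples, and billing for the highest remaining rate. A minimal sketch, with made-up sample values:

```python
# 95th-percentile billing sketch: sort the 5-minute rate samples,
# throw away the top 5%, and bill the highest surviving rate.
def billable_mbps(samples_mbps):
    ranked = sorted(samples_mbps)
    cutoff = int(len(ranked) * 0.95) - 1   # index of the 95th-percentile sample
    return ranked[cutoff]

# A month modelled as 100 samples: a steady 10 Mbps with 5 short bursts.
samples = [10] * 95 + [80, 90, 95, 100, 100]
print(billable_mbps(samples))  # all 5 bursts fall in the discarded 5% -> 10
```

Short bursts are forgiven, but sustained load is not: if the bursts occupied more than 5% of the samples, the billed rate would jump to the burst level. That asymmetry is exactly why steady 'free' call and torrent traffic hurts the ISP's wholesale bill while the customer's flat fee stays unchanged.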
OK, back to the discussion…
The ISPs have a once-in-a-lifetime opportunity to offer smart pipes that add value to their offering, effectively making them 'value' pipes. In the ever-growing world of convergence and cloud computing, ISPs also have a critical role to play, as do other players in the ICT sector. With many end-user application developers now offering SaaS, it's time ISPs started offering NaaS (Network as a Service), which will help them offer virtualized bandwidth services on common infrastructure. This will drive down the cost of building and maintaining individual networks (as I write this, Lang'ata Road is full of trenches as ISPs try to outdo each other in building their networks). Offering NaaS, value-added services and QoS to customers will lower operating costs while increasing the value customers derive from ISP services, leading to customer loyalty and increased revenue streams.
If ISPs share infrastructure and keep their costs in check, they can focus on offering better service to their customers instead of being in the business of building networks. I agree there is plenty of value in offering network services, but there is also value in ISPs functioning as aggregators of content and exposers of web and business services through smart pipes. As Adan Pope, CTO/CSO of Telcordia, once said: "The term 'pipe' alone puts ISPs in the position of being in the job of conveyance while others provide value." And that's just wrong. ISPs need to take advantage of the unique position they find themselves in and make money; if they don't, someone else will, as happened in the mobile industry, where Apple now provides services to iPhone users that the telcos should have provided had they been smart enough. We are also seeing Nokia take the same route with its Ovi Store, albeit with some collaboration with mobile operators. Google, Skype, Nymgo, Dropbox and many others are now offering 'value' over ISP pipes while ISPs keep complaining about diminishing returns on investment and customer churn.
Wikipedia defines network neutrality (also net neutrality or Internet neutrality) as a principle proposed for user-access networks participating in the Internet that advocates no restrictions by Internet service providers (ISPs) or governments on content, sites, platforms, the kinds of equipment that may be attached, or the modes of communication.
Tim Berners-Lee, who invented the World Wide Web, says this of net neutrality:
If I pay to connect to the Net with a certain quality of service, and you pay to connect with that or greater quality of service, then we can communicate at that level.
This means that if I pay my local Kenyan ISP to connect to the Internet at, say, 4 Mbps, and another user in the USA does the same, both of us should enjoy equal access to all content on the Internet without hindrance of any kind. If I host a website accessible via my 4 Mbps line and the user in the US also hosts his own website, then the quality of access to both websites should be equal; neither mine nor his should be slower.
Those against net neutrality say that if I want my website to be accessed better and faster, I need to pay my ISP more to make that happen, failing which traffic to my website via my 4 Mbps link will be treated with bias relative to those who have paid.
All this brouhaha about net neutrality came from large traffic carriers such as Sprint, Level 3, BT and Verizon, who argued that companies such as Facebook, Yahoo and Google are using their infrastructure to make money while the carriers get a raw deal. This led to Verizon partnering with Google and agreeing to give Google traffic higher priority over its network than, say, traffic from Yahoo. If your ISP is Verizon, or peers with Verizon, your experience when accessing Yahoo will then differ from your experience when accessing Google. This effectively creates a 'private Internet'.
Where is the danger in this, you may ask? In the joint statement in which Google and Verizon try to deny that they want to privatize the Internet (available here), somewhere between the many lines of legal mumbo-jumbo, they say:
“This means that for the first time, wireline broadband providers would not be able to discriminate against or prioritize lawful Internet content, applications or services in a way that causes harm to users or competition.”
This effectively creates a loophole for throttling 'unlawful' content on the Internet, because they do not have the mandate to define what is lawful and what is unlawful. The organizations or governments operationalizing this have a free hand to define the lawfulness of content. MySpace traffic could then be deemed unlawful by, say, a Facebook-friendly ISP… get the trend?
If, for example, Saudi Arabia decides all Christian websites are unlawful, then Saudi ISPs can throttle access to those websites in Saudi Arabia. If the US government decides a website such as WikiLeaks is unlawful, it can go ahead and instruct all American ISPs (read: most ISPs in the world) to throttle traffic to WikiLeaks because… wait for it… it does not pay the ISPs to carry its traffic. Internet censorship will have found a back door to the world! It would also mean that if Facebook paid Level 3 to prioritize its traffic, it would kill off competition from MySpace, because the MySpace experience on a Level 3 network would be poor.
The network owners have argued that network neutrality is unnecessary because there is sufficient competition in the broadband market to deter bad behavior. They argue that if Verizon degraded access to a site or discriminated against one service in favor of another, it would anger customers, who would move to another network operator in the area.
For this theory to work, consumers must have robust competition and multiple choices. But such competition and choice do not exist in Africa, and are unlikely to exist in the foreseeable future, because most ISPs on the continent end up connecting to the same backbone cable to the Internet, which peers with the same big providers in the US and EU :-(. There is insufficient competition between different technologies and ISPs to produce any kind of deterrent should one operator block our access to the free-flowing Internet. This leaves the African user in a very bad situation.
What does this mean to African Internet content and users?
Africa is a net content importer: the majority (over 99%) of all Internet content viewed in Africa comes from outside the continent. This came about because our local backbone infrastructure was, and continues to be, too poor to effectively host content locally. Lack of local content, bandwidth scarcity and unreliable power supply have made the continent a net consumer of traffic rather than a generator. The situation is so bad that even our local media houses, who produce close to 100% local news, have to host their websites and portals in the US or Europe. African ISPs peer with European or American Tier 2 networks, which eventually peer with Tier 1 ISPs such as Verizon and Sprint, the same carriers adversely mentioned in preferential-traffic deals with companies such as Google.
Africa recently joined the rest of the world on the Internet by laying high-capacity submarine fiber optic cables. These are expected not only to provide faster access to foreign content, but also to avail African content to the world and to Africa itself. The lack of IXPs (Internet Exchange Points), however, means that the African user still accesses most local content via the Internet in Europe or the US. Without net neutrality, Africa-originated traffic will not be treated as an equal on the Internet playing field with, say, traffic from Google or Microsoft, because the latter will be paying millions of dollars to have their traffic receive preferential treatment from the large data carriers. This, coupled with the fact that the continent generates slightly less than 2% of total world traffic, will mean African content is nearly invisible on the Internet.
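To see why the lack of local IXPs hurts even before any neutrality question arises, consider the round-trip time penalty of "tromboning" Nairobi-to-Nairobi traffic through Europe. Here is a minimal back-of-the-envelope sketch in Python; the distances and the local-IXP figure are illustrative assumptions, not measurements, and the speed figure is the common rule of thumb that light in fiber travels at about two-thirds the speed of light in vacuum:

```python
# Illustrative sketch (hypothetical figures): the best-case latency cost of
# routing "local" African traffic through a European exchange instead of a
# local IXP. Propagation delay only; queuing and transit delays come on top.

FIBER_SPEED_KM_S = 200_000  # ~2/3 of c, a common rule of thumb for optical fiber

def rtt_ms(one_way_km: float) -> float:
    """Best-case round-trip time in milliseconds over fiber of the given length."""
    return 2 * one_way_km / FIBER_SPEED_KM_S * 1000

# Hypothetical paths for a Nairobi user fetching Nairobi-hosted content:
via_europe = rtt_ms(6800)    # tromboned via e.g. London (~6,800 km each way)
via_local_ixp = rtt_ms(10)   # exchanged at a local IXP (~10 km each way)

print(f"via Europe:    {via_europe:.1f} ms minimum RTT")
print(f"via local IXP: {via_local_ixp:.1f} ms minimum RTT")
```

With these assumed figures, the trombone path carries roughly 68 ms of unavoidable propagation delay on every round trip, before congestion or transit pricing enters the picture at all.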
The result is stifled growth of local content and of the continent's development in general. With legacy telecoms systems also migrating to the ubiquitous IP on which the Internet is based, African telecoms traffic (primarily voice) to the world will also be disadvantaged, shutting the continent off from the world once more.
Sadly, the continent's industry players have been very silent on the matter. There has not been a single conference or meeting of minds on the continent to discuss or advocate net neutrality. Only the South African government has tried to discuss it, and even then not from the general perspective of carriers throttling non-paying providers' traffic, but from the consumer perspective.
This silence gives me the feeling that the absence of net neutrality will suit despotic African governments, who will ride the wave and ban access to information that poses a threat to their rule and hold on power. I hope I am wrong.
African ISPs, content developers and consumers need to speak out for net neutrality. Local NGOs that are very vocal on issues such as human rights should also come out and advocate for a neutral Internet: thanks to it, they were able to communicate with the world when governments curtailed their freedom of speech. NGOs in America are already doing so, such as here, here, here and here.
Here in Kenya we are so busy laying infrastructure (fiber, 3G, LTE etc.) and talking politics that we forget this fundamental threat to the very cause of the telecoms revolution in the country. Let us do all we can to protect the public Internet from going private.