Archive

Archive for the ‘Broadband and Internet’ Category

Ka-band Satellite Broadband: Hit or Hype?

August 21, 2012 7 comments

60cm Ka-band dish on a rooftop, capable of delivering up to 15Mbps

With the landing of several undersea cables in Africa in the last three years, many a pundit has hailed a new dawn of telecommunications on the continent. The cables brought with them massive bandwidth capacity that enabled faster and cheaper communications. Before their arrival, satellite was used to connect Africa to the rest of the world, and these satellites had the following characteristics:

  1. They were expensive, because the extremely high demand for capacity made transponder leasing costly. This demand peaked around 2005, when operators were even buying capacity on satellites that existed only on paper, not yet built or launched.
  2. Due to the cost and scarcity of capacity, many back-haul pipes were congested, making satellite communications slow and irritating to use.

The arrival of cheap and abundant terrestrial capacity led many to declare that satellite was destined for the history books and that there would be no market for satellite broadband in the years to come.

Three years down the line, reality has hit home as the following facts dawned:

  1. The undersea cables resolved the issue of back-haul, but they did not address the last-mile access problem. There is a lot of capacity at the landing stations that cannot be distributed to end users because no good last-mile infrastructure is in place. Spectrum scarcity has made things worse.
  2. Even on the existing last-mile networks and those being built to meet this demand, reliability has been a key issue due to poorly designed networks and fiber cuts. Industry leaders now seem to agree with this, as seen here.
  3. No regulatory framework was set up to harness the advantages brought by the availability of bandwidth. Regulators failed to come up with new policies and laws on matters such as infrastructure sharing and spectrum re-farming and sharing.

The result is that the consumer has not benefited much, as ISPs and NSPs continue to offer mediocre services. There are reports of some ISP customers getting as low as 92% availability, which translates to about 29 days of outage in a year.

Will Satellite Make a Comeback?

Before we answer this question, we need to be aware of the key advantages that satellites provide. These are:

  1. Very high availability. No technology beats satellite when it comes to availability. Downtime is rare and far between, letting the majority of well-designed satellite systems achieve the proverbial 99.999% availability (about 5 minutes of outage in a year); see the sketch after this list.
  2. Satellites offer instant availability of service over a large area without the need to lay additional infrastructure.
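
For the avoidance of doubt, here is a minimal Python sketch that converts an availability percentage into annual downtime; it reproduces both the 92% ISP figure above and the five-nines target.

```python
# Convert an availability percentage into expected annual downtime.

MINUTES_PER_YEAR = 365 * 24 * 60

def annual_downtime_minutes(availability_pct: float) -> float:
    """Expected minutes of outage per year for a given availability."""
    return MINUTES_PER_YEAR * (1 - availability_pct / 100)

for pct in (92.0, 99.99, 99.999):
    mins = annual_downtime_minutes(pct)
    print(f"{pct}% available -> {mins:,.0f} min/year (~{mins / 60 / 24:.1f} days)")

# 92%     -> ~29 days of outage a year (the ISP figure quoted above)
# 99.999% -> ~5 minutes of outage a year (the 'five nines' target)
```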

With the advent of Ka-band satellites, the landscape is about to change, as these will bring with them large amounts of bandwidth and make it available instantly over large geographical regions. The key advantages of Ka-band satellites over the more traditional Ku-band and C-band satellites are:

  1. In terms of capacity, one Ka-band satellite is equal to about 100 Ku-band satellites, yet they cost roughly the same to manufacture and put into orbit. This means that capacity on Ka-band will be much cheaper, to the point of giving fiber capacity competition.
  2. Ka-band utilizes spot-beam technology that enables the use of smaller antennas (as small as 60cm) and cheaper modems. At the moment, a full Ka-band kit is price-competitive with terrestrial technology equipment. Intelsat is developing a spot-beam architecture utilizing all bands that will allow 2 satellites to cover all populated continents of the world. Read more on the Intelsat Epic™ project here.

With more capacity available to offer higher speeds at much lower cost, with equipment cheaper and more competitive with terrestrial offerings, and with reliability that terrestrial services can only dream of, what will prevent satellite broadband from making a comeback?

If recent events are anything to go by, satellite broadband is already making a comeback in Africa. The recent launch and uptake of capacity on the Yahsat 1B and Hylas-2 satellites over Africa will avail high-speed capacity whose quality rivals that of the majority of terrestrial services, latency notwithstanding. The main reason why I think Ka-band will be a game changer in the African broadband market is that operators have realized that it is one thing to roll out terrestrial infrastructure and another to operate and maintain it. Operational costs of the newly laid terrestrial wired and wireless networks are becoming prohibitively high due to vandalism and sabotage, and the terrestrial networks have too many points of failure to offer any reliable service. Ka-band satellite will offer cheaper bandwidth that is more reliable and easy to access and install anywhere on the continent. It takes an average of 3 weeks to survey and install a fiber cable in a city like Nairobi; it takes about 2 hours to fully set up a Ka-band dish and connect to the Internet. Once the fiber hype dies, Ka-band broadband via satellite will be a hit in Africa.

Anyone dismissing my argument should look at the following links on the roll-out and expansion of Ka-band satellite in Europe, the Middle East and the USA, regions we consider to be far better "wired up" than Africa.

  1. Hughes' announcement of the launch of EchoStar 17 to offer 100Gbps broadband services in North America, after the successful launch and sale of capacity on the Spaceway satellites: http://bit.ly/ShARi6
  2. ViaSat's Ka-band 100Gbps broadband service offering in North America: http://bit.ly/ShBAjj
  3. Avanti Communications' announcement of the launch of Hylas-2 to offer services in the Middle East, Africa, Europe and the Caucasus: http://bit.ly/ShBYhL

Why Your Mobile Data Bundle Expires

July 23, 2012 13 comments

In the past few weeks, there has been a rise in consumer complaints aimed at mobile operators. These complaints relate to the expiry of data bundles after a fixed period of time from activation. The general feeling in the market is that if a customer purchases a data bundle, the mobile operator has no right whatsoever to expire that bundle, and the customer should use it for however long he or she wishes. A user who buys a 100MB data bundle could therefore take as long as he wishes (say 12 or even 36 months) to consume it.

This argument is from the customer's perspective, and customers have every right to argue that way because they have spent their hard-earned money on these bundles. However, how does the mobile operator view it?

Upstream and Downstream Contractual Obligations

When a customer purchases a data bundle, an implied contract automatically comes into existence between the consumer and the mobile operator. The consumer commits to pay for the data bundle, and the mobile operator commits to delivering the data to the customer's mobile device at the agreed speed and volume. For a binding contract to be formed there must be:

  • An offer which is accepted and for which valid consideration is given;
  • An intention to create a legal relationship; and
  • Certainty of terms.

From the above, additional points to note in such a contract are:

  • An offer must be communicated; that is why you get a confirmation message from the mobile operator notifying you that you are about to purchase xx MB of data and asking you to confirm that this is correct.
  • The acceptance must also be communicated; that is why you also get a message confirming the bundle purchased.
  • Certainty of terms means that there must be certainty as to the parties, subject matter, and price. This means a purchaser of a data bundle is assumed to be aware of what he is purchasing and what the terms in the contract mean; in this case, it is assumed that someone purchasing 100 MB knows what an MB means and how much he can do with an MB of data. A user cannot purchase 100MB and expect to download and upload anything more than that.

On the flip side, the mobile operator also enters into a contract with the upstream data provider who connects the operator to the Internet. This contract is informed by the fact that downstream customers have brought in business by purchasing bundles. The same rules of contract apply. However, in this case, the mobile operator does not purchase bundles as such but purchases capacity in Mbps (megabits per second). The fact that the mobile operator has purchased "capacity" means that it will pay for the capacity to transmit or receive whether it uses it or not. Capacity is defined as the actual or potential ability to perform, yield, receive or contain; unlike bundles, capacity does not denote quantity but ability.

The mobile operator therefore anticipates that customers who have purchased bundles will connect and use the bundles and goes ahead and commits to sufficient capacity to enable this happen.

What would happen if mobile users purchased bundles and none of them used the bundles for, say, 6 months? During this time, the mobile operator will have purchased capacity to connect the customers to the Internet (capacity in terms of pipe, equipment and human resources). If none of the users consume their purchased bundles, the mobile operator will still incur costs towards this capacity: staff have to be paid their 6 months' salary, the upstream provider has to be paid irrespective of whether the pipes were used, and operating overheads and equipment depreciation will also be incurred during these silent six months. Being a commercial venture, this is not sustainable, as for the firm to be profitable, revenues must be higher than costs.
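
To put a rough number on the idle-capacity problem, here is a minimal sketch. The 155Mbps (STM-1) link size is my own illustrative assumption, not a figure from any operator.

```python
# How much data could a leased pipe of a given capacity carry in a month if
# it ran flat out? Illustrative only; real pipes are never 100% utilized.

def monthly_volume_gb(capacity_mbps: float, utilization: float = 1.0) -> float:
    """Data volume (GB) a pipe can move in a 30-day month at a given utilization."""
    seconds = 30 * 24 * 3600
    bits = capacity_mbps * 1e6 * utilization * seconds
    return bits / 8 / 1e9  # bits -> bytes -> GB

# A 155 Mbps (STM-1) upstream link, a common leased unit, could carry:
print(f"{monthly_volume_gb(155):,.0f} GB/month at full utilization")
print(f"{monthly_volume_gb(155, 0.0):,.0f} GB/month if subscribers stay idle")
# The operator pays the same lease fee in both cases.
```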

The mobile operator therefore has to put in place measures to protect itself against such scenarios by setting a time limit within which the purchased data bundles can be used. There is the argument that the customer has prepaid the bundles and the mobile operator has positive cash flow in the whole transaction. However, this positive cash flow is only at the onset, and as time goes on, the operator faces diminishing returns from the transaction. Data bundles therefore have to have an expiry date to ensure that the mobile operator does not run into losses. It is the same way bottled water or cheese have a "use by" date: neither essentially expires in the sense of usability, but the date is set to strike a balance between committed resources/capacity and consumption rate to ensure positive returns to the investors.

Africa Will Ignore Satellite Communications At Its Own Peril

May 25, 2012 7 comments

The landing of various undersea cables on African shores in the last three or so years has heralded a new dawn of high-speed communications, offering clearer international calls and faster broadband speeds. These cables have turned bandwidth, once a scarce commodity, into something in near oversupply. Indeed, as I write this, quite a huge chunk of the undersea capacity is unlit.

With the arrival of these cables, telcos and ISPs that once depended on expensive FSS (Fixed Satellite Service) capacity moved their traffic to the more affordable undersea cables. The cost of bandwidth came down, but not to the level envisioned by consumers: most prices fell by about 40% and not the expected 90%. I had warned of the possibility of prices not coming down by 90%, as was the expectation and hype, in a previous blog post in 2009.

Satellite At Inflection Point

Incidentally, as the cables were arriving, some major developments were happening in the satellite world. However, due to the hype and excitement around the arrival of the undersea cables, the majority of us didn't care to notice these changes that were set to revolutionize satellite communications. These changes have created a lot of excitement in the telecommunications world but are largely ignored here, as we believe that the undersea cables are the future.

The Situation In Kenya (and by extension Africa)

With several cables landing on the Kenyan coast, it would be expected that quality of service from ISPs would be at its best. However, this is not the case: quality of service has greatly deteriorated over time due to poorly maintained last-mile networks. We have bandwidth at the shorelines, but we are unable to fully utilize its potential. The majority of operators in Kenya embarked on ambitious plans to lay last-mile fiber cable around the country, and the same thing is happening in other African countries too, albeit at a slower pace. These are good steps; however, these telcos have oversimplified the issue of last-mile access to that of laying cable on poles or burying it underground. That is just 5% of the entire job of last-mile provision; the other 95% lies in maintaining the network, which sadly none of the telcos were prepared for. They thought that after laying the cables, money would start flowing in. Last-mile cable cuts due to civil works are currently one of the biggest causes of downtime in Kenya today; hardly a day ends without incidents of cable cuts as roads are expanded, new buildings come up, and natural calamities strike, such as trees falling on overhead cables and the flooding of cable manholes.

Collectively as Africa, we seem to underestimate the size of this continent; operators do not know what it will take to wire Africa to the same levels as Europe or the US. The map below shows the task ahead of us as far as wiring Africa is concerned; it is not going to be an easy job. Africa is the size of the US, China, India, Europe and Japan put together.

True size of Africa

This size poses a challenge as far as laying last-mile networks in Africa is concerned, and the lack of reliable electric power supply also limits how far these networks can grow from the major cities. Click on this map here to see how far behind Africa is as far as power supply distribution is concerned; it was taken by NASA at night, sequentially, as each part of the earth moved into night-time.

As seen above, it will take quite a large amount of investment to bring this continent to the levels of other continents as far as connectivity is concerned. Even when this is done, it will be an expensive affair, which will make connectivity expensive, as investors will need a return on their investment.

However, all is not lost, as the once-derided satellite service is now making a comeback and will soon give terrestrial services a run for their money. Already, the US and Europe are undergoing a major shift in the use of satellite to provide broadband service. Currently there are more investors putting their money into satellite launches than into laying undersea cables.

Below are some of the developments in Satellite that will herald this comeback but have sadly slipped past most of us.

Ka-Band Commercialization

Unlike the Ku- and C-band satellites that were in use before the arrival of cables in Africa, Ka-band satellites use spot-beam technology, which allows frequency re-use and the provision of hotter beams. What this means is that satellite capacity can be greatly expanded through frequency re-use, and CPE equipment is now cheaper thanks to the hotter beam/signal. A single Ka-band satellite is now equivalent to about 100 Ku-band satellites for the same cost. The two main reasons why satellite was ditched for fiber were the cost of equipment and of bandwidth; due to these two developments, satellite operators will soon offer prices as low as 300 USD per Mbps, down from about 6,000 USD per Mbps.
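
A back-of-envelope sketch of why spot beams multiply capacity follows. Every number in it (spectrum, spectral efficiency, beam count, re-use plan) is an illustrative assumption, not the spec of any real satellite; the multiplier grows with the number of beams, which is how real high-throughput designs push towards the ~100x figure.

```python
# Why frequency re-use across spot beams multiplies satellite capacity.
# All numbers are illustrative assumptions.

spectrum_mhz = 500            # spectrum allocated to the satellite (assumed)
efficiency_bps_per_hz = 2.0   # modem spectral efficiency (assumed)

# Traditional wide-beam design: the spectrum is used once over the footprint.
wide_beam_gbps = spectrum_mhz * 1e6 * efficiency_bps_per_hz / 1e9

# Ka spot-beam design: the same spectrum is re-used in every beam, since
# narrow beams pointed at different areas do not interfere with one another.
beams = 80                    # number of spot beams (assumed)
reuse_colours = 4             # colours in the frequency re-use plan (assumed)
spot_beam_gbps = (spectrum_mhz / reuse_colours) * 1e6 \
    * efficiency_bps_per_hz * beams / 1e9

print(f"Wide beam:  {wide_beam_gbps:.1f} Gbps")
print(f"Spot beams: {spot_beam_gbps:.1f} Gbps (~{spot_beam_gbps / wide_beam_gbps:.0f}x)")
```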

The reason why Ka-band was not commercially viable for some time was that the technology had not matured enough for viable commercialization. Ka-band, which operates in the 17-30GHz range, is susceptible to weather interference, but there now exist techniques to counter this, greatly improving reliability. The high operating frequency also meant more expensive detecting equipment (modems), but advances in technology have allowed for the manufacture of affordable 200 USD modems today.

More Efficient Modulation Schemes

In communications, the amount of data that can be sent over a transmission channel depends on the noise on that channel and the modulation scheme used. There have been great advances in modulation techniques and noise suppression, allowing more data to be pushed over smaller channels. This includes the use of turbo coding, which is so far mankind's best shot at reaching the Shannon limit. One recent and notable development was by Newtec, who managed to push 310Mbps over a 36MHz transponder, which translates to 8.6Mbps/MHz; previously the most you could do was about 2.4Mbps/MHz. Read the Newtec story here.
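
As a sanity check of that Newtec figure against the Shannon limit, C = B log2(1 + SNR), here is a small sketch. The 36MHz transponder is from the story above; the SNR values are assumptions for illustration.

```python
# Shannon capacity of a 36 MHz transponder at various signal-to-noise ratios.

import math

def shannon_capacity_mbps(bandwidth_mhz: float, snr_db: float) -> float:
    """Channel capacity: C = B * log2(1 + SNR), in Mbps for B in MHz."""
    snr = 10 ** (snr_db / 10)
    return bandwidth_mhz * math.log2(1 + snr)

for snr_db in (10, 20, 26, 30):
    print(f"SNR {snr_db} dB -> Shannon limit "
          f"{shannon_capacity_mbps(36, snr_db):.0f} Mbps")

# Pushing 310 Mbps through 36 MHz (8.6 bps/Hz) needs an SNR of roughly 26 dB
# at the Shannon limit, which is why better coding AND cleaner links matter.
```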

Combine this with Ka spot-beam frequency re-use and you have satellite capacity that is cheaper than fiber bandwidth; add to this the reach that a satellite footprint provides, and you have instant broadband available over the entire continent.

MEO Satellites

At around the same time the cables were landing, a Google-backed broadband project was being announced. The project, dubbed O3b (Other 3 Billion), is named for the unconnected 3 billion people in the world. Google believes that this is the most viable way to avail broadband to the masses; otherwise, how do you explain the fact that Google has never invested in fiber capacity to Africa or the developing countries? I wrote about the O3b project in a previous blog post that you can read here.

O3b will utilize satellites that are closer to the earth, hence the term MEO, which stands for Medium Earth Orbit. The fact that these satellites are closer means that latency on the links will be much lower (about 200ms) compared to traditional geostationary satellite capacity (about 600ms). This will enable higher throughput at lower latencies. To read more on the relationship between latency and throughput, read this tutorial here.
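
The latency-throughput relationship fits in one line: a single TCP connection cannot move data faster than its window size divided by the round-trip time. A minimal sketch, assuming the classic 64KB TCP window with no window scaling:

```python
# Maximum throughput of one TCP flow = window size / round-trip time.

WINDOW_BYTES = 64 * 1024  # classic maximum TCP receive window (assumed)

def max_tcp_throughput_mbps(rtt_ms: float) -> float:
    """Upper bound on a single TCP flow's throughput for a given RTT."""
    return WINDOW_BYTES * 8 / (rtt_ms / 1000) / 1e6

print(f"GEO satellite (~600 ms RTT): {max_tcp_throughput_mbps(600):.2f} Mbps")
print(f"MEO / O3b    (~200 ms RTT): {max_tcp_throughput_mbps(200):.2f} Mbps")
# Cutting the RTT by two-thirds triples what one TCP flow can carry.
```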

Satellite Refueling

The majority of the satellites in orbit have a lifespan of about 15 years. This lifespan is determined by the amount of fuel a satellite can carry: the fuel lasts about 15 years, and once it is depleted, the satellite cannot be maneuvered and is therefore not usable. As I write this, Intelsat, the world's largest commercial satellite fleet operator, has signed up for satellite refueling services from MDA Corp to extend the life of some satellites by about 5 years. What this means is that operators can get more money out of their satellites due to the extended life, and therefore they can now offer cheaper bandwidth.

Combine the advantages of Ka-band spot beams, efficient modulation, MEO satellites and the ability to refuel satellites, and you have the solution to the myriad problems afflicting consumers in Africa today as far as reliable, high-speed broadband connectivity is concerned.

By the end of 2014, satellites will offer cheaper and more reliable bandwidth than undersea fiber optic cables. This is the reason why investors are flocking to launch satellites rather than lay cables. These include Yahsat, an Abu Dhabi company; Avanti Communications, launching in Europe and Africa; and Hughes, which is launching Ka-band satellites over the US mainland for broadband connectivity. If undersea cables were as good as local operators touted, why are investors putting money into launching over the US and Europe, which are more wired than Africa? Locally, the Nigerian government launched its own communication satellite that it says will extend broadband reach in the country faster than terrestrial technologies.

Watch this space….

Why Unlimited Internet is a Big Revenue Drain for Operators

April 19, 2012 9 comments

The announcement by Safaricom that it is doing away with its unlimited Internet bundle did not come as a surprise to me. I had discussed the historical reason behind the billing model used by ISPs and mobile operators in a previous blog post here in Feb 2011.

The billing model used in unlimited Internet offerings is flawed. This is because the unit of billing is not a valid and quantifiable measure of consumption of the service. An ISP or mobile operator charging a customer a flat fee for a size of Internet pipe (measured in Kbps) is equivalent to a water utility company charging you based on the radius of the pipe coming into your house and not the quantity of water you consume (download) or sewerage released (upload).

What would happen if the local water company billed users a flat-rate fee based on the radius of the pipe going into their homes rather than the volume of water consumed? A user with a pipe radius 1% larger than the neighbour's enjoys about 2% more water flow into their house (do the math, or see the sketch below!), yet their bills will differ by only 1%. Likewise, a 2% difference in radius yields a 4% difference in consumption but only a 2% difference in billing. The result is that a small group of about 1% of users ends up consuming about 70% of all the water. This figure is arrived at as follows: a marginal unit increase in resource leads to a near doubling of marginal utility, a logarithmic gain (ln 2 = 0.693, suggesting that about 69% of the utility is enjoyed by about 1% of consumers). This matches the figure issued by Bob Collymore, the CEO of Safaricom, who said that 1% of unlimited users were consuming about 70% of the resources. This essentially means costs could outstrip revenues by 70:1, which does not make any business sense. Not even a hypothetical NGO giving "free" Internet through donor funding could carry such a cost-to-revenue ratio. Why ISPs and mobile operators thought billing by the size of the pipe to the Internet could make money is beyond me.
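
Here is the math done, as a sketch. It follows the article's assumption that flow scales with the pipe's cross-sectional area (radius squared) while the flat-rate bill scales linearly with the radius.

```python
# Flat-rate billing by pipe radius vs consumption by cross-sectional area.

def flow(radius: float) -> float:
    return radius ** 2   # consumption ~ area (article's assumption)

def bill(radius: float) -> float:
    return radius        # flat-rate bill ~ size (radius) of pipe

base = 1.0
for bump_pct in (1, 2, 10, 100):
    r = base * (1 + bump_pct / 100)
    extra_flow = (flow(r) / flow(base) - 1) * 100
    extra_bill = (bill(r) / bill(base) - 1) * 100
    print(f"+{bump_pct:>3}% radius -> +{extra_flow:6.1f}% consumption, "
          f"+{extra_bill:6.1f}% bill")

# The gap between consumption and billing widens with pipe size: big pipes
# are systematically under-billed under flat-rate pricing.
```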

Bandwidth Consumption Is Not Linear

One mistake that network engineers make is to assume that a 512Kbps user will consume double what a 256Kbps user does, and therefore to advise the billing team that billing the 512Kbps user twice the price of the 256Kbps user will cover all costs. This is not true. There are activities that a 256Kbps user will not be able to do online, like comfortably watch YouTube videos, which a 512Kbps user can do without a problem. The result is that the 512Kbps user will watch many more YouTube videos as the 256Kbps user, frustrated with all the buffering, stops attempting to watch online videos altogether. Consumption by the 512Kbps user therefore ends up much higher than double that of the 256Kbps user. Beyond YouTube, websites can detect your link speed and present differentiated rich content based on it. I'm sure some of us have been given the option to load a "basic" version of Gmail when it detects a slow link. The big-pipe guy never gets asked whether he would like lighter web pages: rich content is downloaded to his browser by default, while the small-pipe guy gets less content downloaded to his browser even though they are both connected to the same website. The difference in content downloaded by two people on 512K and 256K links is therefore not linear, or even double, but takes a more logarithmic shape.

Nature of Contention: It's a Transport, Not a Network, Problem

The second mistake that network engineers make is to assume that if you put a group of customers in a very fat IP pipe and let them fight it out for speeds using an IP-based QoS mechanism, each customer will in time get a fair chance at some bandwidth from the pool. The problem is that nearly all network QoS equipment characterizes a TCP flow as a host-to-host (H2H) connection and not a port-to-port (P2P, not to be confused with peer-to-peer) connection. There could be two users with one H2H connection each, but one of them might possess about 3,000 P2P flows. The problem here is that bandwidth is consumed by the P2P flows and not the H2H flows, so the user with the 3,000 P2P flows ends up taking most of the bandwidth. This explains why peer-to-peer traffic (which establishes thousands of P2P flows) is a real bandwidth hog; see the sketch below.
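
A minimal sketch of why per-flow "fairness" starves single-flow users: a scheduler that shares the pipe equally among flows gives each user bandwidth in proportion to how many flows they open. The pipe size is an assumption; the flow counts follow the example above.

```python
# Per-flow fair sharing: each user's share is proportional to their flow count.

PIPE_MBPS = 100  # shared pipe size (assumed for illustration)

users = {"torrent_user": 3000, "browser_user_1": 1, "browser_user_2": 1}

total_flows = sum(users.values())
for name, flows in users.items():
    share = PIPE_MBPS * flows / total_flows
    print(f"{name:>15}: {flows:>4} flows -> {share:8.4f} Mbps")

# The torrent user with 3000 flows takes ~99.9% of the pipe even though the
# scheduler is being perfectly 'fair' to each individual flow.
```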

So what happens when an ISP dumps the angelic you in a pipe with other malevolent users running peer-to-peer traffic such as BitTorrent? They will hog all the bandwidth, and the equipment and policies in place will not be able to ensure fair allocation of bandwidth to all users, including you. A few users doing BitTorrent end up enjoying massive amounts of bandwidth while the rest doing normal browsing suffer. That explains why some users on the Safaricom network could download over 35GB of data per week, as per comments by Bob Collymore. Please read more on how TCP H2H and P2P flows work here. Many ISPs engage engineers proficient in layer-3 operations (CCNPs, CCIPs, CCIEs, etc.) to provide expertise on a layer-4 issue of TCP H2H and P2P flows. You cannot control TCP flows by using layer-3 techniques; IP network engineers are being assigned the duties of transport engineers.

At the end of the day, there will be a very small fraction of "happy" customers and a large group of dissatisfied and angry customers. The few happy customers' flat-rate revenues cannot cover all costs as the unhappy customers churn. If, on the other hand, these bandwidth hogs paid by the GB, the story would be very different. This is what operators are realizing now and moving with speed to implement. Safaricom is not the only one affected: Verizon, AT&T and T-Mobile in the US are all at different stages of doing away with unlimited service due to its unprofitable nature.

White Space and Its Possible Impact on Broadband in Kenya

October 23, 2011 5 comments

White space refers to frequencies that are allocated for telecommunications but are not used in active transmission. They nevertheless play a crucial role in enabling interference-free communication. In layman's terms, if there are two adjacent FM radio stations, say Hope FM at 93.30MHz and BBC-Africa at 93.90MHz, the two are separated by white-space bandwidth equal to the difference between them (93.90MHz - 93.30MHz = 0.6MHz, or 600kHz). If you look at the entire FM spectrum, there is a lot of white-space frequency not in active use; it serves as guard bands that enable listeners to tune clearly and avoid hearing two radio channels at the same time. The same is true for television (TV) transmission.

TV transmission uses the UHF frequency range of 470MHz-806MHz (for example, KTN Kenya transmits at 758-764MHz, which is channel 62 on the ITU chart. Remember this logo here? That rainbowy 62 wasn't a fashion statement). Each TV station is allocated 6MHz, out of which only three points are used for picture, colour and audio; the rest is white space. Taking the KTN example, 759.25MHz is used for the video, 762.83MHz for the colour and 763.75MHz for the audio in the TV channel. The rest just lies in waste, though serving as guard bands.

It is because of this wasteful nature of analog TV and radio broadcasting that there is a concerted push by CCK to move to digital transmission, which, unlike analog, does not need white spaces and therefore doesn't waste precious frequencies. The push to digital TV is informed by the fact that if all stations transmit digital signals, they will free up the white spaces for other uses.
From the consumer's perspective, the push from analog to digital transmission also brings clearer pictures and richer content (such richness includes being able to set reminders for future programmes, scroll through what's next, etc., just as you can do now on satellite TV such as DStv but not on the local free-to-air stations).

Kenya has set a target of 2012 to complete the analog-to-digital TV migration, and there is already a lot of progress on that front. Once the migration is complete, it will have released the UHF band for other uses such as broadband Internet, SCADA and remote metering systems, and many more. Once released, these frequencies will be unlicensed, meaning anyone can use them without prior approval by CCK. In the USA, the fact that white space will be unlicensed has seen the FCC face legal proceedings from wireless microphone manufacturers (whose products use white space), because more powerful transmission sources, such as white-space base stations on tall buildings, would interfere with them. The FCC has nevertheless gone ahead and allowed the use of white space for other purposes.

Because white space sits at lower frequencies (470MHz-806MHz) than existing last-mile solutions such as WiMAX (the 2,000MHz and 5,000MHz ranges) or Wi-Fi, white-space signals can travel much farther and around physical objects. It is estimated that a Wi-Fi hotspot that changes to the white-space frequency range can increase its coverage area by 16 times, enabling wide reach; a sketch of the underlying physics follows below. The lower frequencies also make detection of white-space signals easier and less power-hungry (the higher the frequency detected, the more complex the equipment and the more power required; this explains why your phone's battery drains faster if Wi-Fi or Bluetooth (higher frequencies) is turned on than if you tune to FM stations (lower frequencies) on the same phone).
The wide coverage, cheaper equipment and lower power requirements will present endless possibilities for extending broadband coverage beyond towns or areas with WiMAX, Wi-Fi or GSM coverage. The use of white space will also lead to cheaper Internet, as investment in infrastructure will be minimal: one base station will be enough to cover an entire city and beyond, making last-mile infrastructure CAPEX lower. This is unlike WiMAX, which would need about 5-10 base stations to cover a city like Nairobi.
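
Here is a sketch of the physics behind that "16 times" figure: free-space path loss grows with frequency, so for the same loss budget the reachable radius scales as 1/f and the covered area as the square of the frequency ratio. The frequencies below are my own illustrative choices (2.4GHz Wi-Fi vs a 600MHz white-space channel).

```python
# Free-space path loss comparison: Wi-Fi at 2.4 GHz vs white space at 600 MHz.

import math

def fspl_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB (distance in km, frequency in MHz)."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

wifi_mhz, whitespace_mhz = 2400.0, 600.0

# Radius at which white space hits the same path loss Wi-Fi hits at 1 km:
budget = fspl_db(1.0, wifi_mhz)
radius_ws = 10 ** ((budget - 32.44 - 20 * math.log10(whitespace_mhz)) / 20)

print(f"Same loss budget reaches {radius_ws:.1f} km at {whitespace_mhz:.0f} MHz")
print(f"Coverage area gain: ~{radius_ws ** 2:.0f}x")  # (2400/600)^2 = 16
```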

The Institute of Electrical and Electronics Engineers (IEEE) recently announced the finalization of the 802.22™ white-space standard, known as Wireless Regional Area Networks (WRAN) or Super Wi-Fi (Microsoft calls its version White-Fi), that will deliver speeds of up to 22Mbps per channel. This paves the way for equipment manufacturers to design interoperable white-space transceivers, and I believe we will soon see them in phones, laptops and tablet computers. Last April, a test white-space network was established at Rice University in Houston, in which one base station was able to provide broadband connectivity to 3,000 residents of East Houston.

White-space use for broadband connectivity will be a game changer in the country when it comes to last-mile connectivity, as the current networks are not very extensive in reach, and neither do they offer reliable and affordable connectivity. The new white-space systems will offer wide coverage, especially in rural Kenya, whose low population density has made investors shun extending services there due to the CAPEX required to set up networks to serve the rural folk. White-space utilization will now make it possible for these investors to recoup their investments in rural areas, as they will be able to cover larger areas cheaply.

What Is Causing the Ka-band Satellite Launch Delays?

October 14, 2011 Leave a comment

The Ka band is part of the K band of the microwave portion of the electromagnetic spectrum. The Ka symbol refers to "kurz-above" (from the German kurz, meaning short), in other words the band directly above the K band; similarly, Ku stands for the "kurz-under" band. The 30/20GHz band is used in communications satellites.

In the past two years or so, there has been a lot of furor over the utilization of the Ka band for commercial purposes such as broadband and audio-visual broadcast. The higher operating frequencies of the Ka band and the spot-beam design mean that the cost per bit transmitted on a Ka-band satellite is significantly lower than that of Ku- or C-band satellites. This makes Ka band an attractive alternative to the more expensive Ku and C bands and positions satellite communications as a worthy challenger to the now ubiquitous and cheaper fiber optic cables.

This lower cost per bit has seen investors and satellite operators come up with Ka-band satellite launch projects such as the O3b project (see my take on O3b here), ViaSat, Yahsat and the Hughes Spaceway. Major satellite operators have also announced plans to launch Ka-band birds in the near future, with the notable exception of Intelsat, who have so far been silent about their Ka-band plans apart from a partial investment in the WildBlue Ka satellite that provides high-speed broadband in North America.

Intelsat's non-commitment to Ka band was perhaps the first sign that they knew something other operators didn't. For now it is left to our fertile imagination as to why the largest satellite fleet owner did not jump onto the Ka band-wagon (pun intended).

Recent events and realizations, however, show why Intelsat was right. The Ka-band satellite projects have faced unforeseen hitches that have caused delays and the indefinite postponement of some satellites.

Lack of key components.
During the design of a satellite, very few components are COTS (commercial off-the-shelf). This means that the majority of the components have to be specifically designed and manufactured for that particular satellite, and no other satellite will have exactly the same component characteristics. One of the most important components in a satellite is the Traveling Wave Tube (TWT). These are RF amplifiers, and you can read more about them by clicking here. Apparently the two major manufacturers of TWTs for satellite applications (L3 Communications and Thales of France) have run into difficulties in the design and manufacture of Ka-band TWTs. The high demand for Ka-band TWTs is also straining their manufacturing capacity, making the acquisition of TWTs a critical path in these satellite projects.

Interest from airlines.
One thing that no one foresaw was the strong interest in Ka services from airlines. Because the cost per bit for Ka band is significantly lower, airlines could now offer affordable high-speed broadband on board flights, and the demand for on-board Internet services is skyrocketing thanks to devices such as iPads and smartphones. These airlines were willing to pay upfront and sign long-term contracts with satellite operators, as in the agreement between JetBlue and ViaSat signed early this year. This has sent some satellite designers back to the drawing board to design satellites capable of seamless cross-beam, and sometimes cross-satellite, hand-over to keep aircraft always connected. The immense opportunity presented by airlines is a lucrative market in which satellite operators are keen to play, especially given the toughening competition on land from terrestrial fiber optic cables. In a few years, broadband on aircraft will be ubiquitous thanks to this development.
This has, however, made Ka band less competitive for broadband providers than the existing Ku-band satellites, because the demand for Ka is now bigger than was anticipated.

Cheaper CPEs?
The assumption that Ka band will present cheaper customer premises equipment (CPE) than Ku band was also far-fetched. The assumption that because the antenna is smaller it will therefore be cheaper is wrong. This would only hold true for receive-only systems such as DTH TV like DStv. When it comes to broadband connectivity, where there is a need to transmit, a smaller dish presents two problems:

  • The smaller dish being presented to consumers of Ka band carries a bigger possibility of interference if not precisely pointed: the small dish produces a wide beam that can cause adjacent satellite interference (see the beamwidth sketch after this list). Ka-band installation technicians will need to be extremely precise in pointing the smaller dishes, and sometimes this extreme precision is lacking, especially in Africa and Asia, where adherence to standards is lax. I see a situation where Ka-band installations will continue to be done on Ku-size dishes such as the 1.2m or 1.8m dish. The smaller, cheaper dishes will simply not work well unless they are on automatic-pointing or gyroscopic systems on maritime vessels and aircraft, or on receive-only systems.
  • Because of the high operating frequencies of the Ka band, the tolerances for RF equipment design will have to be very tight. The design of a cheaper yet more tolerant RF system is simply not possible: the existing cheap Ka-band RF systems are not the best and do not offer good enough tolerances for the extremely high data-transfer figures touted for Ka-band services, because their poor design introduces noise. You cannot buy a 50-dollar RF system and expect to transmit at 10Mbps.
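
A sketch of why small dishes need such precise pointing. A parabolic antenna's half-power beamwidth is roughly 70 x (wavelength / diameter) degrees, and geostationary satellites are parked about 2 degrees apart; the frequencies below are typical uplink values, used here as assumptions.

```python
# Half-power beamwidth of a parabolic dish: ~70 * (lambda / D) degrees.

C = 3e8  # speed of light, m/s

def beamwidth_deg(freq_ghz: float, dish_m: float) -> float:
    wavelength = C / (freq_ghz * 1e9)
    return 70 * wavelength / dish_m

for label, f_ghz, d_m in [("Ka 0.6 m", 30.0, 0.6),
                          ("Ka 1.2 m", 30.0, 1.2),
                          ("Ku 1.2 m", 14.0, 1.2)]:
    print(f"{label}: ~{beamwidth_deg(f_ghz, d_m):.2f} deg beamwidth")

# The 60 cm Ka dish is no sharper than a 1.2 m Ku dish; any pointing error
# eats directly into the adjacent-satellite interference margin.
```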

So far, launch delays have been experienced by ViaSat-1, YahSat-1B, the eight-satellite O3b constellation and many more.
The market needs to wake up to the fact that Ka-band services will come, but they will need time to mature into what industry analysts say they will be. It will not be an overnight success story, but there is hope that the development of Ka-band technology and systems will eventually lead to cheaper satellite broadband.

Will the Current BGP Scale a Bigger Internet?

August 8, 2011 2 comments

The rapid increase in the size of the Internet routing table in the Default-Free Zone (DFZ) is becoming a major concern to IP carriers all over the world. The number of active BGP entries on border routers is increasing at a quadratic, if not exponential, rate (see figure below), and the future unhindered scalability of the Internet is in doubt. In spite of the use of the Border Gateway Protocol (BGP), inter-domain routing does not scale well any more: the volume of routing information keeps growing by the day, and it is not clear whether the current routing technology will keep pace with this growth and still do so cost-effectively. Today it costs more for border routers to exchange routing information than it did a few years ago, due to the investment in more powerful routers needed to keep up with this growth.

The depletion of the IPv4 address space and the inevitable adoption of the IPv6 addressing scheme mean that routers will now have to exchange much larger routing tables, because the vast IPv6 address space requires even more prefixes to be announced in the Default-Free Zone. This problem will be compounded by the desire of network operators to announce more specific (and hence longer-prefix) routes to their critical infrastructure, such as DNS and Content Delivery Networks (CDNs), within the now wider IPv6 prefixes. This tendency to announce very specific routes using longer prefixes stems from the desire to prevent prefix hijacking by malicious Autonomous Systems (ASes), as happened in 2008 when an AS owned by Pakistan Telecom announced the YouTube IP space with a longer prefix, leading to YouTube traffic being redirected to Pakistan because it was the more specific route. With cybercrime rates increasing worldwide, network engineers want to ensure high availability of their networks on the Internet and end up announcing very long prefixes, which has the effect of making the Internet routing table unnecessarily large. This is the reason why I still think the old rule of eBGP routers filtering any route to a network longer than a /22 should still be in force; a peek at some Internet routing tables will show the existence of even /25s.

Active BGP prefix growth (source: http://bgp.potaroo.net/as2.0/bgp-active.txt)

The growing size of the Internet, with its inevitable changes and failures, leads to a large rate of routing table updates that stresses router CPUs, and several proposals have been made to modify BGP to make it scale. The current routing tables are linear, and it is high time logarithmic-scale routing was introduced that can summarize routes in a logarithmic fashion. By this I mean that the summarization of prefixes should be much more intense at the longer end and less intense as the prefixes become shorter.

The above can be achieved in three ways, namely:

Aggregation proxies: Here, ISPs will announce or redistribute routes to their networks via a non-BGP protocol to an aggregation proxy. The proxy will receive many long prefixes and aggregate them into shorter ones for eventual announcement via BGP. The regional allocation of IPs through organizations such as LACNIC, RIPE, AfriNIC and the rest makes aggregation proxies a very viable path, because the regional allocation of IP space is not random (e.g. any IP starting with 196. or 41. is from an African ISP). AfriNIC could therefore host aggregation proxies that speak to African border routers via a non-BGP protocol, and these proxies could then announce a single entry of, say, the 196 range to the Internet. The other local aggregation servers in the Americas, Europe and Asia could then have filters to reject any inbound traffic to the African IPs, because that would be IP hijacking. The downside to aggregation proxies is that paths will now be longer, as the proxy introduces an extra hop; the trade-off between a massive reduction in routing table size and path elongation has to be weighed to see if this is a viable alternative. A toy example of the aggregation step follows below.
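
This is a minimal sketch of what the aggregation step would do, using Python's ipaddress module: many long prefixes in, few short ones out. The 196.x prefixes are chosen only because the post cites the 196 range as African space; they are illustrative, not real announcements.

```python
# Aggregating long, specific prefixes into supernets before announcement.

import ipaddress

# Long, specific announcements received from member networks (illustrative):
long_prefixes = [
    ipaddress.ip_network("196.200.0.0/24"),
    ipaddress.ip_network("196.200.1.0/24"),
    ipaddress.ip_network("196.200.2.0/23"),
    ipaddress.ip_network("196.200.4.0/22"),
]

# collapse_addresses merges adjacent/overlapping networks into supernets:
aggregated = list(ipaddress.collapse_addresses(long_prefixes))
print("Announce upstream:", aggregated)  # -> [196.200.0.0/21]
```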

DNS-like lookup system: This system will apply to non-routable prefixes. In this concept, all the long prefixes are retained and recorded in a DNS-like lookup system in which a particular IP space is mapped to a specific border router. Anyone wishing to communicate with that IP space will do a lookup to get a next-hop IP address and send the traffic to it. As a result, the long prefixes are not routable on the Internet, but the lookup system knows a router to which the traffic can be forwarded without the use of inter-domain routing information. In simple terms, this will be like a DNS not for domain names but for long-prefix IP spaces; a toy version is sketched below. This proposal will eliminate the need to have long prefixes in the Internet routing table, and a bar can be set to filter anything longer than, say, a /19 from being announced in the now cleaner DFZ. This will have the advantage of returning control of what appears in the DFZ routing table to regional organizations such as AfriNIC, as opposed to AS managers, who can sometimes be selfish.
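
Here is a toy version of that idea: long prefixes never enter the DFZ table; instead a mapping service resolves a destination IP to the border router that can deliver it. All names and addresses below are made up for illustration.

```python
# A toy 'DNS for prefixes': map non-routable long prefixes to border routers.

import ipaddress

# Mapping service: long (non-routable) prefix -> border router to forward to.
MAPPINGS = {
    ipaddress.ip_network("196.200.4.0/25"): "borderrouter1.example.net",
    ipaddress.ip_network("196.200.4.128/25"): "borderrouter2.example.net",
}

def resolve_next_hop(destination: str):
    """Return the border router responsible for a destination IP, if any."""
    ip = ipaddress.ip_address(destination)
    for prefix, router in MAPPINGS.items():
        if ip in prefix:
            return router
    return None  # fall back to normal DFZ routing

print(resolve_next_hop("196.200.4.7"))    # borderrouter1.example.net
print(resolve_next_hop("196.200.4.200"))  # borderrouter2.example.net
```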

Locator-Identifier split (Loc/ID): Whereas the two methods above overlay the existing BGP and enhance it, this approach replaces inter-domain routing as we know it. The Locator-Identifier split proposes scrapping IP addressing as we know it and coming up with a 2-level routing architecture to replace the current hierarchical inter-domain system. The argument behind Loc/ID is that IP-based routing is not scalable because the IP address assigned to a device is overloaded: it serves the dual role of being both the device's locator and its identifier. By splitting the address into a locator section and an ID section, and then summarizing the locators in the DFZ, considerable reductions in the routing table can be achieved, because routing in the DFZ will be based on locators alone rather than on both locators and identifiers. Cisco recently developed the Locator/ID Separation Protocol (LISP), which it is hoped will replace BGP in future, as BGP will no longer be able to scale a bigger IPv6 Internet. Read more about LISP by clicking here. Cisco is currently promoting LISP as an open standard rather than a proprietary one, in the hope that the Internet Engineering Task Force (IETF) will adopt it.

In summary, network operators need to be aware of the non-scalability of BGP and start preparing their networks for the adoption of one of the three proposals above. I would, however, bet that the Loc/ID way of doing things will prevail and that LISP will replace BGP as the inter-domain routing protocol of choice on the Internet.
