
Netflix experience on Ka-Band VSAT in Kenya

January 8, 2016

Yesterday I, like most people here, woke up to the news that Netflix, the American multinational provider of on-demand Internet streaming media, has expanded into several countries including Kenya. Social media reaction, in my view, centred on whether these newcomers will 'disrupt' the market currently dominated by Multichoice's DStv. The jury on what exactly 'disruption' means as applied in that discussion is, however, still out.

My views on their foray into Kenya aside, I decided to test the service on my home VSAT link. Before doing so, I read up on how the service works, in case I had made any wrong assumptions. There I found that the minimum recommended bandwidth is 3 Mbps for SD-quality video and 5 Mbps for HD-quality video.

The particulars of the link are as follows:

  • Ka-band service off the Avanti Hylas-2 satellite at 31 degrees East (somewhere above Uganda)
  • 74 centimeter elliptical dish with a 1 watt Ka-band radio
  • Hughes HN9260 satellite router
  • 15 Mbps download and 2 Mbps upload speed
  • Netgear AC2350 Nighthawk X4 WiFi router

With this VSAT kit I achieved a strong enough signal to support a DVB-S2 carrier at 8PSK 8/9 on the downlink and a TDMA/FDMA return carrier of 2048 ksps at QPSK 4/5 from the remote terminal.
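As a rough check on that return carrier, the raw bit rate implied by those modulation parameters can be worked out. A sketch in Python; it ignores DVB and TDMA framing overhead, so treat the result as an upper bound:

```python
# Rough return-link capacity from the carrier parameters above.
# Ignores framing overhead, so this is an upper bound.
symbol_rate = 2_048_000        # 2048 ksps
bits_per_symbol = 2            # QPSK carries 2 bits per symbol
fec_rate = 4 / 5               # 4/5 coding: 80% of the bits are payload

throughput_bps = symbol_rate * bits_per_symbol * fec_rate
print(f"Return link ≈ {throughput_bps / 1e6:.2f} Mbps")   # ≈ 3.28 Mbps
```

That leaves comfortable headroom above the 2 Mbps upload speed of the plan.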

The 74 centimeter dish mounted on a perimeter wall with a clear view of the western sky. From Nairobi the look angle is a favourable 88.5 degrees

I registered an account and selected a 58-minute SD-quality documentary titled "Rise of the Drones" and proceeded to view it. It took about 3 seconds to open the stream and playback started.

The Netflix main screen opened in the Firefox browser

The picture quality was as expected for an SD video on my old laptop; I could not, however, find a way to check the stream's resolution.

Video quality was consistent throughout the session, with no downward adjustment of picture quality.

I watched it to the end without a single "Netflix and chill while it buffers" moment, and the stream's download indicator stayed about 5 minutes ahead of the play position throughout.


The progress bar (the lighter shade of grey ahead of the red play-duration bar) showing the roughly 5-minute lead

The VSAT link's Cacti graph for the 58-minute session showed that the stream consumed an average of just below 3 Mbps, with a peak of 3.7 Mbps. Calculating the area under the graph, the total data downloaded during this time was about 1.3 GB.

Cacti graph utilization during the 58 minutes of documentary streaming.
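As a quick sanity check of those figures, simple arithmetic (assuming decimal units and the "just below 3 Mbps" average from the graph):

```python
# Sanity-check the session figures: average bitrate x duration = data volume,
# and how many ~3 Mbps SD streams a 15 Mbps link could carry in theory.
avg_mbps = 2.9                 # "just below 3 Mbps" from the Cacti graph
duration_s = 58 * 60           # 58-minute documentary

data_gb = avg_mbps * duration_s / 8 / 1000   # Mbit -> MB -> GB (decimal)
print(f"Session data ≈ {data_gb:.2f} GB")    # ≈ 1.26 GB, matching ~1.3 GB

link_mbps, sd_stream_mbps = 15, 3
print(f"Concurrent SD streams: {link_mbps // sd_stream_mbps}")  # 5 in theory
```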

The above results mean that in a multi-viewer scenario, where more than one person is using Netflix on the LAN, the VSAT's 15 Mbps capacity can support 4 concurrent SD viewers without a problem, limited only by the WiFi router's capability.

Update: I did Netflix for the entire day on Saturday the 9th (via an HDMI streaming dongle on the TV) with my kids, on our usual DStv-style TV schedule (punctuated with sessions of outside play, reading/study, quiet time and no TV during meals). We had consumed 19.4 GB by the time we went to sleep.

Angani outage. What really happened

November 8, 2015

On Thursday, many subscribers of the local Angani cloud service noticed prolonged inaccessibility of services hosted at its two data center locations. The Angani cloud service was down. Being one of their customers, I was also affected.

For most of the morning, social media was filled with questions about what could possibly have happened to cause such a massive outage. Later in the day, word started going around about what might be happening at Angani. Many speculated that the recent low-key exit of one of its founders and CEO, Phares, could have a direct correlation to the outage. At the time it was mere speculation. Bloggers such as Kachwanya wrote that boardroom wrangles had led to Phares' ouster as CEO.

Since then, little has been said about what exactly happened; most of the discussion has been about who has any information at all.

This is what happened

From what I gather, problems started when a new group of investors put their money into the company and took seats on the board. There was tension between the co-founders, of whom only Phares and Brian came from a cloud computing background. This tension spilled into daily operations and eventually led to Phares' ouster from the CEO role.

Brian could be said to have single-handedly set up the Angani infrastructure. More engineers were hired to work under him, but their level of experience meant that Brian still ran most of the platform. Because Angani (unlike many cloud providers) did not own a data center and depended on third-party hosting in a commercial shared facility, Brian built the system to be secure even against physical access.

When the two left, the new management team asked them to hand over the passwords. The two said they would need a signed document showing that they had handed over the passwords and were therefore free of any liability. The board declined. The two then left the passwords with their lawyer and informed the Angani board to collect them there upon signing the chain-of-custody forms. The board declined again and instead brought in an external consultancy called Shape Blue to attempt to break into the system and regain control. They also proceeded to sue both Brian and Phares. One of the Angani communiqués indicated that Shape Blue was brought in after the crash; in fact, they were brought in earlier to break into the system.

Because of the security Brian had designed into the systems, the consultants managed to change the root password but lost access to the system in the process. As a result, even the passwords held by the lawyer, had the board collected them, were now useless.

Despite the lawsuits, the two cut short their holiday in Malindi because they had no laptops with them and so could not help remotely. They got to Nairobi only to be informed that their help would not be required immediately, which was a little disrespectful. Brian has been willing to help, but the lack of goodwill from the Angani team keeps him away.

The Angani team has refused to sign anything with Brian, who is willing to help, and is propagating the theory that the system was crashed by Brian rather than by an attempted break-in by Shape Blue. The only way out of this situation, I believe, is for the two groups to sit down with a mediator and find a way to restore service and save the now tarnished local hosting scene.

(c) image techweekeurope.co.uk

Data centers and the environment: The case of Facebook

September 24, 2015
A Facebook data center engineer

There has been increased uptake and use of the Internet, especially social media, around the world. This has led to rapid deployment of infrastructure to support the increased demand.

This infrastructure consumes power. It is estimated that the data centers powering the Internet world-over consume about 1.3% of the world's total electric power. This might seem small, until you consider that Facebook alone consumed about 532 million kWh in 2011 (it must be close to double that by now). At current Kenyan electricity tariffs, that is about 10.6 billion shillings in power bills. Google consumed just over 2 billion kWh during the same period to power its servers world-wide. With most of this power coming from coal plants, data centers are attracting the attention of groups such as Greenpeace, which has launched campaigns like 'Unfriend Coal', geared towards forcing Facebook to lower its dependence on coal to power its service.
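A back-of-envelope check of that shilling figure; the tariff here is my illustrative assumption of roughly 20 shillings per kWh, which is what the quoted numbers imply:

```python
# Back-of-envelope check of the power-bill figure quoted above.
# Assumes a blended Kenyan tariff of ~20 shillings per kWh (illustrative).
facebook_kwh_2011 = 532e6
tariff_ksh_per_kwh = 20

bill_ksh = facebook_kwh_2011 * tariff_ksh_per_kwh
print(f"Annual bill ≈ {bill_ksh / 1e9:.1f} billion shillings")  # ≈ 10.6 billion
```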

With pressure piling on data centers to lower their carbon footprints, innovation and new ways of thinking are needed. One of the low-hanging fruits is to build new data centers in regions that use green energy. One of the prime locations now is Iceland, which generates all of its power from geothermal steam and hydro. The cool climate there also means that natural cold air, at about 5.5 degrees C on average, is simply circulated through the data center to cool the equipment, as opposed to forced cooling with air-conditioning systems. A server operating out of Iceland is therefore cheaper to run and has near-zero carbon emissions attached to it. According to Verne Global's findings in 2013, the 10-year energy cost (the length of a standard data center hosting contract) for 1 megawatt of IT load in Keflavik, Iceland is near $3.5 million, compared to nearly $23 million in London, $20 million in Frankfurt, $12.5 million in Chicago and around $6 million in Oslo, Norway. A further bonus is that Iceland's geographical location makes latency from a server there to Europe and to the US nearly equal, at about 40 ms each.

However, companies like Facebook that have already invested a lot of money in data centers in the US cannot simply cart them off to Iceland. They have therefore come up with innovative ways to lower their data center energy costs. With an estimated 25% of a data center's power going to cooling, 10% wasted in the conversion from AC to DC and back to AC, and the IT load taking 46% (25% servers, 8% network and 13% storage), there is a huge opportunity to trim both the IT-load and cooling portions.

IT load efficiency

Facebook did some research and found that servers running low-level loads use power less efficiently than idle servers or servers running at moderate or greater loads. In short, a server should either be kept idle or at moderate/high load, never at low load. The traditional method of distributing load across a group of servers, round robin, is efficient with computing resources but inefficient with power. Facebook therefore developed a new way of doing things, known as Autoscale.

Autoscale is designed to distribute incoming requests so that servers are either idling or running at medium/high capacity, and not in between; it avoids assigning workloads in a way that leaves servers at low capacity. This was informed by a test done by Facebook engineers, which found that a server in idle mode consumes about 60 watts of power. If some light low-level load is applied, consumption jumps from 60 to 130 watts. However, if the same server is run at medium or higher load, it consumes about 150 watts: only a 20-watt difference between low load and high load. This means it is more energy-efficient to give an already moderately busy server some more load (20 extra watts) than to give that load to an idle server (70 extra watts). Autoscale also shrinks the pool of servers sharing the load so as to put as many servers as possible into idle mode. In low-traffic periods, such as around midnight in America, Autoscale dynamically adjusts the size of the active server pool so that each active server gets at least a medium-level CPU load; servers outside the active pool receive no traffic.
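To make the idea concrete, here is a toy sketch of Autoscale-style dispatch. It is not Facebook's implementation; the pool size, capacities and target utilisation are all illustrative numbers of mine:

```python
# Toy sketch of the Autoscale idea (not Facebook's implementation): keep a
# small "active" pool of servers at medium/high load and leave the rest
# fully idle, instead of spreading requests thinly across every server.
import math

SERVERS = 16
CAPACITY_RPS = 100          # requests/sec one server handles at high load
TARGET_UTIL = 0.75          # aim for medium/high load on active servers

def active_pool_size(incoming_rps: float) -> int:
    """Smallest pool that serves the load at or below target utilisation."""
    needed = math.ceil(incoming_rps / (CAPACITY_RPS * TARGET_UTIL))
    return min(max(needed, 1), SERVERS)

def dispatch(request_id: int, incoming_rps: float) -> int:
    """Round-robin, but only within the shrunken active pool."""
    pool = active_pool_size(incoming_rps)
    return request_id % pool     # servers outside the pool stay idle

# At American-midnight traffic, most servers sit idle at ~60 W instead of
# burning ~130 W each on light load.
print(active_pool_size(300))     # 4 active servers, 12 idle
```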

The other method deployed to reduce power consumption is reducing power transformation. There is about a 10-15% loss in the transformers and rectifiers found in UPSs. In most data center setups, mains AC power is fed to a centralized UPS, which converts the AC to DC and back to AC before supplying the servers. This AC-DC-AC conversion loses about 6-12%. One way to lower this loss is to feed the servers directly from mains AC power but place localized UPSs on each rack, capable of about 45 seconds of backup power while the diesel generator starts after a power outage (a very rare occurrence in the developed world). Eliminating centralized UPSs means data centers can save about 10% of their power. Feeding grid AC directly to servers can be a tricky affair, because reactive components on the grid, such as the motors that power everything from escalators to coffee grinders, lower the power factor and increase reactive power. The deployment of synchronous condensers in data centers lowers this reactive power, which is responsible for losses that depend on the power factor of the received power. Facebook has deployed in-house, custom-made reactive power panels that try to bring the power factor as close as possible to unity. Besides improving power quality, these reactors also reduce harmonic distortion in the power system, which causes delays in generators kicking in when a mains power loss is detected.
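A rough comparison of the two power chains described above; the efficiency figures are illustrative, taken loosely from the percentages in the post:

```python
# Illustrative comparison: centralised double-conversion UPS (~10% loss)
# versus direct mains feed with rack-level batteries (near-lossless standby).
it_load_kw = 1000                      # 1 MW of IT load

central_ups_efficiency = 0.90          # AC-DC-AC double conversion
rack_battery_efficiency = 0.99         # assumed standby loss, mains fed direct

grid_draw_central = it_load_kw / central_ups_efficiency
grid_draw_rack = it_load_kw / rack_battery_efficiency
saving_kw = grid_draw_central - grid_draw_rack
print(f"Saving ≈ {saving_kw:.0f} kW per MW of IT load")   # ≈ 101 kW, ~10%
```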

Use of 277 volts instead of 120 or 240 volts

Facebook hardware is also designed to operate at 277 volts AC, as opposed to the standard 120 volts of US mains supply. The reason is simple: with US three-phase power supplied at 480 volts, the single-phase line-to-neutral voltage comes out not at 120 volts but at 277 volts (you can derive this with complex phasors, the cube roots of unity). Stepping 277 volts down to 120 volts through a transformer incurs about 3% transformation loss, so operating the servers at 277 volts rather than 120 volts saves that 3%. The diagram below shows how a server's efficiency improves with the use of a higher voltage.
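The derivation, in standard three-phase arithmetic: the three phases are unit phasors 120° apart (the complex cube roots of unity), and the magnitude of the difference of two such phasors is √3, so line-to-line voltage exceeds line-to-neutral voltage by a factor of √3:

$$V_{LN} = \frac{V_{LL}}{\sqrt{3}} = \frac{480\ \text{V}}{\sqrt{3}} \approx 277\ \text{V}$$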

Hewlett-Packard server power supply efficiency as a function of load (c) Syska Hennessy Group

A server operating at 240 volts (which is what we use in Kenya) is 91% efficient at 50% load, while a similar server operating at 120 volts manages 89%; jacking the supply up to 277 volts improves efficiency to 92%. The reason America uses 120 volts is historical: in the early days of electricity, bulbs were made with carbon filaments that lasted longer at 120 volts than at 230 volts, and because most electricity was used for lighting, it made sense to run the grid at 120 volts. By the time electricity reached Europe and Asia, technology had improved and tungsten filaments could handle the higher, more efficient 240 volts.

Simpler cooling and Humidity control

About 12% of cooling energy consumption goes to delivering the cold air to the point of heat rejection. By using a ductless cooling system that delivers cold air at the center of the data center, with additional smaller cooling units at the racks where the heat is generated, substantial power savings can be achieved.

The use of a vapor seal can also play a critical role in controlling relative humidity, reducing unnecessary humidification and dehumidification. If humidity is too high in the data center, conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear and corrosion can occur. These risks increase exponentially as relative humidity rises above 55 percent. If humidity is too low, the magnitude and propensity for electrostatic discharge (ESD) increases, which can damage equipment or adversely affect operation. Also, tape products and media may have excessive errors when exposed to low relative humidity.

Most equipment manufactured today is designed to draw in air through the front and exhaust it out the rear. This allows equipment racks to be arranged to create hot aisles and cold aisles. This approach positions rows of racks facing each other, with the front of each opposing row drawing cold air from the same aisle (the "cold" aisle). This makes it easier to draw hot air out of the hot aisles before it mixes with the cold air and lowers cooling efficiency.

The other method of lowering cooling costs is the use of multi-step compressors in the cooling systems. Most traditional cooling systems simply switch the compressors on at full load when the thermostat calls for cooling. A four-step compressor operation showed that compressors run at different efficiencies at the various steps; the diagram on the side shows that the compressor in question is most efficient at step 2, so the cooling system is designed so that the compressor operates at step 2 most of the time. Off-the-shelf cooling systems work well but are grossly power-inefficient for data center use.

The Internet is currently moving towards cloud computing. This essentially means that data centers will continue to grow, and soon the power they consume will pile pressure on grids and the environment. The use of green energy sources and innovation will go a long way in reducing the Internet's contribution to global warming.

India blocks access to porn. How did they do it?

August 4, 2015

Yesterday, against a Supreme Court decision, the telecom regulator in India ordered all ISPs licensed and operating in the country to block access to pornographic websites. This followed a private suit that petitioned the government to block the websites as part of ridding India of its negative image as the rape capital of the world (some have suggested, albeit jokingly, that India change its name to Rapistan). According to the suit, unfettered access to pornography is responsible for the high number of rape cases in the country.

Considering that most content on the Internet is now hosted on content delivery networks (CDNs) such as Akamai, and on distributed cloud platforms, how does a country block access to pornography whose source server could be the same one hosting other, non-pornographic websites? A single CDN server run by a company such as Akamai could be hosting both a pornographic website and a religious website; how then is it possible to block one and not the other using common tools that can only block an IP or a port (say 80 or 443)? If the CDN server in my example has the IP 77.220.9.1 (a random IP for illustration) and hosts both the religious content and the porn on the same web server listening on port 80, blocking the IP or the port cuts off all the content on the server, not just the pornographic content. How then did India do it?

Deep Packet Inspection

Ordinarily, most network equipment we interact with (including your home WiFi router) operates at layer 4 of the OSI model and below, which means these devices can only act on layer-4-and-below attributes such as port numbers, IP addresses and MAC addresses. Given the shared nature of most Internet infrastructure today, these tools are ineffective at selectively blocking content, which lives at the application layer. An appliance that operates at layer 7 and above is therefore needed. Simply blocking a CDN's IP addresses, such as Akamai's, would knock out every other website hosted there too.

These appliances are able to 'see' layer 7 traffic, so that requests to our example server 77.220.9.1, which hosts both http://www.religiouswebsite.com and http://www.pornwebsite.com on port 80, can be told apart.

These devices achieve this through what is called Deep Packet Inspection (DPI). Does that mean there is shallow packet inspection? Sort of: when a router sitting at layer 4 looks at a packet's header to see its source and destination addresses, that is a form of shallow packet inspection, since it doesn't venture beyond the headers. With DPI, the appliance goes further, looking into the packet's payload carrying the actual user content, and determines what type of content the packet holds. By use of unique signatures within the payload, the appliance can tell porn apart from non-porn content. Exactly how they do this is a trade secret.
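To give a flavour of the idea (and only a flavour; production DPI appliances use proprietary signatures, not a simple header match), here is a minimal sketch that classifies plaintext HTTP requests to a shared IP by their Host header, with the hostnames taken from the example above:

```python
# Minimal flavour of payload inspection: classify HTTP requests to a shared
# IP by the Host header rather than by IP or port. Real DPI goes far deeper.
BLOCKED_HOSTS = {"www.pornwebsite.com"}          # example from the post

def inspect_http_payload(payload: bytes) -> str:
    """Return 'block' or 'allow' for one plaintext HTTP request."""
    for line in payload.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().decode()
            return "block" if host in BLOCKED_HOSTS else "allow"
    return "allow"

req = b"GET /video HTTP/1.1\r\nHost: www.pornwebsite.com\r\n\r\n"
print(inspect_http_payload(req))                 # block
```

With HTTPS the Host header is encrypted, which is part of why real appliances lean on other signals, such as the server name sent in the clear during the TLS handshake.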

The appliance signatures can be classified as a group in a rule (e.g. adult content or social media) or applied individually, such as signatures that detect Facebook, Twitter, Gmail and so on. These can then be applied in various rules that block or admit the content. The rules can be refined further, for example to block Facebook and Twitter in an office during working hours, or to block them completely 24/7, as China does with both platforms.

DPI can also identify traffic further for more refined control. For example, the appliance might be configured to allow Facebook but block any videos shared on Facebook. It can also block Facebook status posts containing certain keywords while allowing the rest of the content.

This, as you can imagine, gives immense power to any government or institution to block access to, or the posting of, content it deems unfit for public consumption. This power can also be abused by regimes to suppress access to content deemed dangerous to the regime's existence and rule, as in Turkey, where the government blocks Twitter at will whenever it feels threatened.

A layer 7+ appliance output showing the ability to classify content above layer 7 of the OSI model. Worth noting: all these protocols (other than HTTPS) ran on port 80, yet the device identifies each by its DPI signature. It can even tell HTTP browsing apart from HTTP file download, with some appliances able to determine the file's type and size.

Broadband as a value add? Yes, it's about the eyes.

June 5, 2015

The days of ISPs making super profits are long gone. The margins being created by ISPs the world over are thin, and should Internet connectivity prices fall further, through more competition or legislation, ISPs stand to create even thinner margins in future. There will soon be little, if any, revenue- or profit-oriented incentive for ISPs to be in business.

Having worked in the industry for about 12 years now (eons in Internet-growth terms), I have seen the ISP industry evolve both in technology and in its value proposition to customers. Liberalization of the sector in most countries has attracted many investors, creating a stiff, competitive market with diminishing returns on investment. Small ISPs are dying or being bought out because they cannot stay afloat; large ISPs are merging to create economies of scale and survive.

With projects such as Google's Project Loon, Facebook's Internet.org (and its subsequent Internet-by-drones project) and many more aiming to provide nearly free Internet to the world's unconnected, there will soon be no financial incentive for a commercial ISP to go into business at all.

So what do ISPs need to do?

There has been a lot of talk in the market about value addition: that ISPs should stop selling 'dumb pipes' and offer value over and above the Internet pipe. All this has already happened, and ISPs have now been outmaneuvered by OTT providers delivering these value-add services over the very links the ISPs provide to their customers. For example, some years ago all ISPs offered VoIP as a value add; now, with the likes of Skype and Whatsapp calls, ISP-provided VoIP is a dud. Another example is dedicated hosting at ISP-provided 'data centers' (a room with access control and cooling :-) ); with the maturity of cloud services, that offering no longer appeals to customers either. ISPs are at the end of their rope.

If you carefully analyze recent ISP mergers and buyouts in Africa (and beyond, if you have the time), you will realize that buyout decisions are less and less based on an ISP's profitability, revenues or cash-flow position. They are now based on subscriber numbers. But what is the commercial point of buying an unprofitable or low-revenue business? Answer: it's about the eyes.

ISPs are no longer about direct Internet-pipe revenues but about indirect ones: online advertising, OTT services, and content delivery and purchase. This is the very reason giants such as Google and Facebook have entered the ISP business; it's about the eyes. A loss-making ISP with many subscribers is now more attractive to buy than a super-profitable one with few. Unbelievable, isn't it?

End to end control.

OTT operators such as Facebook have been blamed by traditional ISPs for using the ISPs' network infrastructure to do business with the ISPs' end users. Attempts by ISPs to make these operators pay for content delivery have met opposition, for fear that such arrangements would produce a tiered Internet, and with it the demise of net neutrality, one of the key characteristics and a supposed catalyst of the Internet's development. Attempts to camouflage net-neutrality-flouting arrangements in ISP-led offers, such as Facebook's Internet.org, where users on certain networks access Facebook and Whatsapp free outside their data plans, have met resistance too. These companies, forward-thinking as they are, in my opinion foresaw the resistance to offering their content free by paying the traditional ISPs; this is why they are all rushing to roll out their own infrastructure to provide free or near-free Internet to the masses. Beyond the satellite/balloon projects being tested in New Zealand, Google is already piloting high-speed FTTH fiber in select American cities. This will give them end-to-end control of the broadband supply chain and thereby quell concerns about a tiered Internet, assuming they come up with a way to show regulators that they have fair access policies for all third-party traffic.

The future

As I see it, the traditional ISP will die a natural death if it does not adapt to the coming changes. What was once the value add will become the product, and vice versa: Internet broadband will become a value add to content and OTT services. A content provider such as Facebook or Google will offer you free Internet to access its content. As someone once said, if the product or service is free, you are the product. The free Internet will come with privacy strings attached, enabling advertisers to track your habits and serve more targeted adverts. This targeting is getting more accurate and spookier, if the tweet below is anything to go by.

(Embedded tweet)

Using browser safety features to disable cookies won't work, because companies such as Google now use what is known as device fingerprinting to identify you. Device fingerprinting works on the basis that your computer's OS, installed programs (and their installation dates), CPU serial number and hardware configuration (RAM/HDD/attached peripherals), fed into an algorithm, yield a unique identifier. Your computing device is thus unique and can be tracked without the need to set cookies.
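A toy illustration of the principle, with hypothetical attribute values; real trackers use many more signals (fonts, canvas rendering, time zone, plugins) and fuzzier matching than an exact hash:

```python
# Toy device fingerprint: hash a bundle of attributes that rarely change
# together. Attribute values below are made up for illustration.
import hashlib, json

device = {
    "os": "Windows 10 build 19045",
    "installed_programs": ["Firefox 44.0 (2016-01-26)", "VLC 2.2.1"],
    "cpu_serial": "BFEBFBFF000306C3",
    "ram_gb": 8,
    "peripherals": ["HP LaserJet", "Logitech M185"],
}

fingerprint = hashlib.sha256(
    json.dumps(device, sort_keys=True).encode()
).hexdigest()
print(fingerprint[:16])   # stable identifier, no cookie required
```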

Why Is Kenya Power Dumping Pre-paid Meters?

May 19, 2015

Recently, the country's only power utility company announced that it was slowing down the rollout of the prepaid metering system it launched about six years ago. The reason given for this about-turn was that the company is losing revenue: it now collects less from customers on prepaid metering than it did when the same customers were on the postpaid system.

According to Kenya Power's records, about 925,000 of its 3.17 million customers are on prepaid meters. Before the 925,000 moved to prepaid, the company collected about four times more from them than it currently does. The Kenya Power MD stopped short of accusing prepaid customers of meter tampering in explaining the reduced revenues, and the company has decided to classify the reduction as 'unpaid debts' in its books. Yet meter tampering would cut across both prepaid and postpaid users if that is his explanation; if anything, a prepaid user has less opportunity to tamper with a meter than a postpaid user does.

My little accounting knowledge tells me that it is every company's dream to convert all its customers to prepaid. This shifts the cash-flow position to the very favorable one of positive cash flow: you have the customers' money before they consume your service or product. With prepaid metering, Kenya Power was heading for accounting nirvana, so the recent revelations about accumulating 'debts' from prepaid customers came as a shock to many. First and foremost, if you do not buy prepaid tokens, you cannot consume power on credit and pay later; how then is this reduction in revenue from prepaid customers classified as a debt rather than an outright reduction in collected revenue?

Faulty meters?

There are two main brands of power meter used by Kenya Power: Actaris and Conlog. The latter brand was found to be defective three years into the rollout; the meters were erroneously calculating the remaining power tokens, especially after a power outage. You could have, say, 30 kWh remaining on your meter and, after a blackout, the meter would read -30 kWh or some other random negative value. That is what consumers would notice; we cannot say for sure whether the same meters also under-bill. Of course, if a meter under-bills, few consumers would complain or even notice, but they are quick to notice a negative token value because they lose power. Could faulty meters be the problem here? Could Kenya Power be suffering from substandard meters? Here is a blog link to one affected consumer who complained in 2012 about the faulty meters. Kenya Power attempted to replace some Conlog meters, but I still see some in use in the wild.

The reality of estimated billing?

We have all been there: you receive an outrageous bill from Kenya Power. This is because, more often than not, they estimate power consumed and never actually read the meter in your house. When was the last time you saw a Kenya Power meter reader on a motorbike in your estate, if you are on postpaid? According to Kenya Power's books, the average postpaid domestic customer consumed 12 kWh of electricity and paid Sh1,432, while the average prepaid customer consumed 23 kWh and paid roughly Sh756 (see the quick calculation after the list below). This can only mean one of two things:

  • The postpaid customers are over-billed due to poor estimation methods, since meters are seldom read. I have noticed this on my water bill too: when my bill is, say, 600/= and I overpay 2,000/= when settling it, my next bill comes in the region of 2,000/= (estimated from my last payment). So these days I pay the exact amount on the bill, to deny them room to estimate and over-bill me.
  • The prepaid meters are spot-on accurate. This is the most plausible reason, and I explain below.
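Working out the implied unit prices from those quoted averages makes the disparity stark:

```python
# Implied per-unit prices from the averages quoted above.
postpaid_ksh, postpaid_kwh = 1432, 12
prepaid_ksh, prepaid_kwh = 756, 23

print(f"Postpaid: {postpaid_ksh / postpaid_kwh:.0f} KSh/kWh")  # ≈ 119
print(f"Prepaid:  {prepaid_ksh / prepaid_kwh:.0f} KSh/kWh")    # ≈ 33
```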

Prepaid meters are accurate?

Unlike the old-school postpaid meters that measure total 'apparent' power consumed, the new prepaid meters assume an efficient electricity grid and measure the effective, or real, power consumed by the customer's appliances. Where the power distribution grid is inefficient, the voltage and current are not in phase, which leads to a lot of 'wasted' power. On postpaid, consumers pay for the grid's inefficiencies; on prepaid, they do not. This is why revenues have dropped so drastically: consumers now pay for what they consume and not for the wastage on the grid. Perhaps this is what Kenya Power sees as power 'consumed but not paid for' by the prepaid meter users? It could be, because it is not possible to consume more than you have paid for on a prepaid meter; apparent power is consumed but not measured by the meters. This is especially true if you have appliances with electric motors in them, such as washing machines, water pumps and air-conditioning systems. Read more about power factor by clicking here.
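The standard relationship, as a worked example of this argument (the 0.8 power factor is my illustrative figure, not Kenya Power's):

$$P_{\text{real}} = S_{\text{apparent}}\cos\varphi$$

With $\cos\varphi = 0.8$, a motor-heavy load drawing $S = 1\,\text{kVA}$ for an hour registers 1 kWh on a meter billing apparent power but only 0.8 kWh on one billing real power: a 20% difference in billed units, before any meter is tampered with.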

You can read older articles on my blog touching on Kenya Power by clicking the links below:

  1. How Kenya can enjoy lower electricity tariffs
  2. Kenya is ripe for a Demand Response Provider
  3. Kenya Power Needs To Be Penalized For Blackouts
  4. There is need to end the Kenya Power monopoly

What Whatsapp voice means for MNOs

April 1, 2015

Facebook Inc. recently introduced the ability to make voice calls directly in its Whatsapp mobile application. This is currently available on Android and will soon be made available on iOS.

What this means is that mobile users with the updated app can now call each other over available data channels such as Wi-Fi or mobile data. Going by a recent tweet from a user who tried the service on Safaricom, a 7-minute call consumed just about 5 MB of data. If that claim is true, then by using Whatsapp a user can call anyone in the world for less than a shilling a minute, which is lower than most mobile tariffs.

Is this a game changer?

Depends on who you ask. First, let's look at what happens when you make a Whatsapp call. When a user initiates a call to another user over Whatsapp, both of them incur data charges; in the case of the Twitter user referred to above, who consumed 5 MB, the recipient of the call also consumed a similar amount of data for receiving it. If both callers happened to be on Safaricom, then about 10 MB were consumed for the 7-minute call, and the cost of 10 MB is close to what a GSM phone call of the same duration would cost anyway. Effectively, receiving a Whatsapp call now costs the recipient, unlike on GSM, where receiving calls is free. When the phone rings with an incoming Whatsapp call, the first thought that crosses the recipient's mind is whether he or she has enough data 'bundles' to pick up. The danger is that if there are none, or the bundle runs out mid-call, the recipient is billed at the out-of-bundle rate of 4 shillings per MB. Assuming our reference user called someone whose bundles had run out, Safaricom would make 5 shillings from the caller's 5 MB and 20 shillings from the recipient's: a total of 25 shillings for a 7-minute call, translating to about 3.6 shillings a minute, which is in the same range as GSM tariffs.
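Putting that arithmetic in one place (the in-bundle rate of roughly 1 shilling per MB is my assumption, inferred from the "5 shillings for 5 MB" figure; the 4 shillings per MB out-of-bundle rate is from above):

```python
# Effective per-minute cost of the 7-minute call described above, using the
# post's rates. In-bundle rate of ~1 KSh/MB is an assumption.
call_minutes = 7
caller_mb, recipient_mb = 5, 5

caller_cost = caller_mb * 1          # in-bundle
recipient_cost = recipient_mb * 4    # out-of-bundle
total = caller_cost + recipient_cost
print(f"{total} KSh total ≈ {total / call_minutes:.1f} KSh/minute")  # ≈ 3.6
```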

This effectively changes the cost model of making calls: the cost is now borne by both parties, something that might not go down well with most users. I have not made a Whatsapp call myself, as my phone is a feature phone, but I believe that if a "disable calls" option does not exist, Whatsapp will soon introduce one under pressure from users who do not wish to be called via Whatsapp because of the potential cost of receiving a call. That would kill all the buzz.

Will operators block Whatsapp calls?

It is technically possible to block Whatsapp texts and file transfers using layer 7+ deep packet inspection systems such as Allot's NetEnforcer and Blue Coat's Packeteer. I believe an update to detect Whatsapp voice is in the offing, and this will give operators the ability to block it. The question, however, is what would drive them to block it. MNOs will have no problem allowing Whatsapp traffic, as it will most likely be a boon for them if most of the calls are on-net (they get to bill both parties in the call). If, however, most calls are off-net (to recipients on other local mobile networks, or international), then MNOs might block the traffic or give it lower QoS priority, degrading the calls until the quality is too poor to sustain a conversation. They might then run into problems with the regulator should subscribers complain that the operators are unfairly discriminating against Whatsapp voice traffic. Net neutrality rules (I am not sure they are enforceable in Kenya yet) require that all data bits on the Internet be treated equally; it should not matter whether a bit is carrying Whatsapp voice, Bible quotes or adult content. Operators could therefore be punished for throttling Whatsapp voice traffic in favour of their own voice traffic. This presents a catch-22 for them. What they need to do is come up with innovative ways to benefit from this development, such as offering slightly cheaper data tariffs for on-net Whatsapp voice to spur increased Whatsapp usage within the network (and thereby bill both participants).

Worth noting is that it costs an operator more to transfer a bit on 3G than on 4G. Operators who roll out 4G stand to benefit from Whatsapp voice, as they can offer data at a lower cost to themselves, a benefit that can be passed on to subscribers. With VoLTE all the rage now, Whatsapp voice can supplement VoLTE and can even be a cheaper way for operators to offer voice on their LTE networks without further investment in VoLTE-specific network equipment.

In short, any operator who wants to benefit from Whatsapp voice has to go LTE.
