
Will Safaricom Be Declared a Dominant Operator?

April 23, 2016

 


Last week, the Communications Authority said that its self-imposed March deadline to create clear guidelines on how it handles dominance of an operator had lapsed. This was occasioned by the failure to find a suitable international consultant to carry out a research study that would assist the Authority in identifying and developing key market interventions for managing the effects of a dominant player in the market. It is worth noting that the issue of dominance cuts across the broadcasting, postal and telecommunications sectors. A finding of dominance must be based on the context and circumstances of the relevant market, which is why the Authority is engaging a consultant to study the market; it cannot go ahead and declare an entity dominant, or find it abusing its dominance, without this study.

Is dominance a bad thing?

Before I answer that question, I would first like to define dominance. Unfortunately, because no local guidelines are in place, there is no clear and detailed definition of dominance from a Kenyan perspective beyond a brief mention in section 84W of the Kenya Information and Communications Act (KICA). However, internationally recognized definitions do exist.

The European Commission defines dominance as “a position of economic strength enjoyed by an undertaking which enables it to prevent effective competition being maintained in the relevant market by affording it the power to behave, to an appreciable extent, independently of its competitors, customers and ultimately consumers”. An operator can become dominant by virtue of a well-implemented growth strategy, so there is nothing wrong with being a dominant player; it is the abuse of this dominance that attracts attention from regulators. If an operator occupies a dominant position and is declared dominant by way of a gazette notice as per the KICA, several tests can be conducted to see if it is likely to abuse this position. One key test is the existence of barriers to entry into the market in which the operator is dominant: the operator could be dominant because barriers to entry are too high for new entrants to offer effective competition. It could also be dominant because no other investor is interested in that market, better returns being available elsewhere, even though barriers to entry are low. The other test is whether the operator possesses what is known as Significant Market Power (SMP). The European Commission recognizes SMP when an operator controls more than 25% of the market it operates in, assuming a fully competitive market; in countries that are transitioning from a monopoly (like Kenya) the threshold is usually set at 65% of market share (section 84W of the KICA, however, mentions 25% in relation to determining the dominance of an operator, not in explicitly defining whether an operator has SMP). It should be noted, though, that SMP designation is simply a trigger for the application of behavioral or structural conditions by the regulator and not necessarily a prerequisite for dominance.

Abuse of dominance occurs only when the dominant operator engages in behavior that is anti-competitive as recognized by law; such behavior must be harmful to competition, to consumers, or to both.

Competition Authority or Communication Authority?

In the middle of last year, there was confusion over who, between the Competition Authority and the Communications Authority, should deal with anti-competitive behavior by a dominant operator in the telecommunications sector. I did some research on this and came to the conclusion that it is the Communications Authority’s mandate to deal with any ICT operator abusing its dominance. Below are my reasons for coming to this conclusion.

Whereas the Competition Authority deals with all commercial forms of competition across all sectors, its mandate can be said to forbear when it comes to telecommunications, postal services and broadcasting. The main difference in how the two authorities deal with competition is that the Competition Authority mostly acts retrospectively on raised complaints of anti-competitive behavior (ex post regulation), while the Communications Authority acts in a forward-looking manner, trying to prevent anti-competitive behavior by implementing government policy through regulations that modify the behavior of operators (ex ante regulation). Competition policy is typically aimed at preventing market participants from interfering with the operation of competitive markets, while telecommunications, postal and broadcast regulation often manipulates market circumstances and operator behavior to achieve public goals. In short, the Competition Authority controls the market for commercial interests while the Communications Authority controls the market for the public interest.

One point worth noting is that telecommunications, postal and broadcast operators in a regulated environment can invoke what is known as the ‘regulated conduct defense’ to escape the control of the Competition Authority. Under this defense, operators are governed by regulations deemed to be in the public interest, and any activities they carry out within this regulated environment cannot attract liability under general competition law. The defense is, however, not very applicable where the telecommunications, postal or broadcast sector is highly competitive and the regulator forbears from regulation, letting market forces do most of the self-regulation; in such circumstances competition law can be applied to telecom and broadcast operators, as is the case in the USA and the EU.

An Analysis of Safaricom’s position in the market

As per the 2015 Q4 sector statistics, Safaricom controls 64.7% of mobile voice subscribers, 63% of mobile data subscribers and 71.7% of mobile money users. The first step in determining whether Safaricom is a dominant operator involves defining the market it operates in and establishing whether that market possesses barriers to entry that could have caused it to become dominant. The nature of our licensing regime means that Safaricom’s geographical and product market is the same as that of its fellow licensees in the same category of license. It is very clear from the figures above that Safaricom’s large market share triggers the need to analyze whether it is dominant by evaluating whether it possesses market power, a key factor in dominance determination. Market power can be seen in the following:

  • Profitability. Safaricom’s profitability is much higher than that of the rest of the competitors combined.
  • Pricing behavior. Safaricom’s prices are not the lowest in the market, and they do not react to competitor price reductions, promotions or offers.
  • Vertical integration of its operations. Safaricom tightly controls nearly the entire value chain in delivering its products and services.
  • Bundling. Safaricom bundles both competitive and non-competitive products; it also bundles its local loops and essential-facility capabilities with its products (e.g. selling Internet access (a product) via a WiMAX/fiber network it owns and controls (the local loop), with competitors unable to use that WiMAX/fiber network to sell their own internet services).
  • Barriers to market entry that would prevent competitors from taking advantage of its high prices. This is the point I want to focus on below.

Barriers to Market entry

One of the key factors in determining whether an operator is dominant is what happens if it increases the prices of its products and services. If barriers to market entry are high, no new entrant can easily come in, offer lower prices and take customers away. If barriers are low, new entrants can easily come into the market, offer cheaper pricing and make the operator regret the price increase through loss of customers. In my analysis, barriers to market entry in Kenya’s mobile telecommunications sector are very low, especially with the advent of Mobile Virtual Network Operators (MVNOs) and the proposed infrastructure sharing regulations that are coming into place. This means that the Communications Authority has done a splendid job of making it easy for competition to be offered to Safaricom on voice, data and mobile money, should an investor find it attractive to do so. This factor alone, I believe, is sufficient to prevent the regulator from declaring Safaricom dominant, or even from terming some of its actions (like bundling) an abuse of a dominant position. The fact that end users can take advantage of Mobile Number Portability (MNP) and move to the competition to enjoy lower-priced services makes it even easier for competitors to overcome customer inertia and win customers over. The big question, then, is why isn’t the competition significantly eating into Safaricom’s market share?

The answer could lie in Safaricom’s extensive network coverage, which is unmatched. But the new infrastructure sharing rules will poke holes in this answer: they will allow any other mobile operator to use Safaricom’s network under a national roaming agreement, enabling it to offer affordable services across the country wherever there is Safaricom coverage, and they will also allow competitors to use Safaricom’s local loops to offer service. This means that any operator competing with Safaricom will be able to cover the country just as Safaricom does, leaving customers no excuse not to move to a competing operator for better or cheaper service should they wish to.

So with the availability of MNP, infrastructure sharing regulations, MVNO licensing and many other playing-field-leveling regulations set by the regulator, I believe it will be very hard for the Communications Authority to declare Safaricom a dominant operator, let alone one abusing a position of dominance.

 Lion image (c) http://www.daler-rowney.com


Is Universal Access/Service a Government or Operator Obligation?

March 30, 2016

Second to creating a level playing field for all ICT operators, one of the widely accepted objectives of ICT sector regulation in developing countries is to promote universal access to basic ICT services. In developed economies, the objective shifts from universal access to universal service. The difference is that access promotes the notion that every person should have reasonable means of accessing basic ICT services (like a phone booth at the local shopping center), while universal service is about promoting and maintaining the availability of a variety of ICT services to individuals and households. The two terms are combined into what is known as universality.

It is clear that the universal access definition has been overtaken by events, especially in the wake of the mobile communication boom in many developing countries. To a very large extent, it is no longer about ensuring access but about ensuring that a variety of services are delivered to the end user.

Governments’ push to make universality a reality stems from increasing evidence that access to ICTs improves the overall socioeconomic well-being of citizens. However, with the wave of privatization of ICT services such as telecommunications, the operation of telecoms moved from social-welfare-minded government ministries to profit-minded private entities. When privatization took place in the early 1990s, new entrants focused on providing services to profitable market segments based on geography, disposable income and population density (which improves economies of scale and scope). The result was that regions or populations that were not profitable risked being left out of the ICT revolution. To prevent this from happening, regulators were quick to include mandatory service obligations (MSOs) in the licenses issued to new entrants. These obligations required operators to extend their networks (and in effect their services) to areas where the cost of providing the services and maintaining the networks was higher than the revenues realized from those areas. This seemed to be the only practical way to connect the ‘unprofitables’. Other solutions were open to operators, such as cross-product subsidies (which have not worked well because the regulator simultaneously enforces cost-based pricing, making cross-product subsidies difficult to implement). It is worth noting that the definition of universal service varies from country to country; in Finland, for example, it includes the right of every individual to a 1 Mbps broadband internet connection, in addition to other services.

In addition to the measures above, the regulator in Kenya also developed a Universal Service Fund (USF) framework which, according to page 1 of the framework draft document, was “to complement private sector initiatives towards meeting universal access objectives”. The document title and the aim quoted above are conflicting to a keen eye.

If indeed the aim of the USF is to complement the private sector, why is that same private sector being obligated by regulatory instruments to fund it?

The International Telecommunication Union (ITU) lists many ways in which a USF can be funded; one of the more popular is budgetary allocation from the government. Others include Access Deficit Charges (ADCs) and levying a percentage of the monies collected by operators in their business operations into the USF kitty. The ITU states that should a regulator go the revenue-levy route, it must not place an unfair burden on operators in how the levies are collected. For example, the regulator cannot levy a percentage of every call minute or every MB of data used by subscribers; this would make accounting difficult, hence the approach of levying a percentage of operators’ total revenues, which is easier and more transparent.

Several countries have implemented USFs that benefit from government budgetary allocations, among them Chile and Peru. Incidentally, the same countries are hailed as success stories of how universality has improved citizens’ lives. This is because the desire to offer universal service or access is a social obligation of the government, not of private firms. It is in the government’s interest to connect these otherwise unprofitable regions and people, and it can readily do so from the budget.

Chile’s approach has been an interesting case study of how, if done right, USFs can work to meet government objectives. The regulator there took the concession path, having operators bid to provide services on a concession basis and picking the lowest bidder. Most of the winning bids came in at 50% below the budgetary allocations, meaning the approach was financially efficient. Proper policies were put in place to define the penalties, rights and obligations of each winning concessionaire to ensure they delivered.

This is the approach the Kenyan regulator should take, instead of levying operators a percentage of their hard-earned revenues. Operators can claim, per the ITU’s wording, that the regulator has placed an unfair burden on them, given that they are not directly responsible for the economic development of citizens (whether through ICTs or other means). Universality is a social program and therefore falls squarely on the arms of government. Profitability, or the lack of it, from universality is a secondary consequence whose impact cannot be directly measured.

Proponents of operator-funded USFs argue that unseen benefits, such as the multiplier effect of connecting the unprofitable, directly benefit the operators. If that is the case, then the decision to connect these people should be a commercial decision by the operators, not a license requirement. An example of the multiplier effect: I (being of better economic means and living in the city) can now use airtime (read: revenues) to call my rural relatives, who are now connected thanks, supposedly, to the implementation of universality. My act of calling them, in addition to the other people I normally call, adds revenues to operators. The operator should therefore connect my rural relatives because I will call them, not because they will call me. This is a straightforward commercial decision.

Obliging ICT operators to fund the USF is unfair because the socioeconomic benefits of connecting the population are felt across several fronts, such as improved health, education and increased commercial activity, and not just by way of improved operator profits, if any. Universality’s key outcome is not purely an ICT one, and making only ICT players fund it amounts to the unfair burden on operators that the ITU warns against.

It is my opinion, therefore, that the current approach to universal service funding should be revisited and, if possible, a new method of funding through direct government budget allocation adopted. This is already how roads, hospitals and schools are provided. The regulator needs to revisit this for the following reasons:

  • The current market structure, in which one operator makes most of the revenues, is unfair to that operator, as it will contribute the most to the fund. There are also no clear guidelines on how the funds will be utilized, leaving room for abuse.
  • The law fails to accommodate ICT industry players in the Universal Service Advisory Council, meaning they have no say over monies they contributed. This technically makes it a tax.
  • Operators are already extending their networks to seemingly unprofitable regions without government pushing them. Advances in technology and convergence are making what universality defines as unprofitable commercially viable, because it is now much cheaper to build and scale networks. USF objectives need to be reviewed or done away with altogether.

Should the regulator remain adamant about maintaining the USF for various unreasonable and political ends, operators have recourse in the international courts, as Kenya is a signatory to the WTO General Agreement on Trade in Services (GATS), especially its agreement on basic telecommunications.

Netflix experience on Ka-Band VSAT in Kenya

January 8, 2016

Yesterday I, like most people here, woke up to the news that Netflix, the American multinational provider of on-demand Internet streaming media, has expanded into several countries including Kenya. Social media reaction, in my view, was split over whether these newcomers will ‘disrupt’ the market currently dominated by Multichoice’s DStv. The jury on what exactly ‘disruption’ means in that discussion is, however, still out.

My views on their foray into Kenya aside, I decided to test the service on my home VSAT link, after first reading up on how it works in case I had made any wrong assumptions. There, I found that the minimum recommended bandwidth is 3 Mbps for SD-quality video and 5 Mbps for HD-quality video.

The particulars of the link are as follows:

  • Ka-band service off the Avanti Hylas-2 satellite at 31 degrees East (somewhere above Uganda)
  • 74 centimeter elliptical dish with a 1 watt Ka-band radio
  • Hughes HN9260 satellite router
  • 15 Mbps download and 2 Mbps upload speed
  • Netgear AC2350 Nighthawk X4 WiFi router

With the VSAT kit I achieved a signal strong enough to sustain a DVB-S2 carrier at 8PSK 8/9 on the downlink and a TDMA/FDMA return carrier of 2048 ksps at QPSK 4/5 from the remote terminal.
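For the curious, the raw capacity of that return carrier can be estimated from the modulation and coding alone. This is a back-of-the-envelope sketch that ignores roll-off, framing and TDMA burst overheads, so treat the result as an upper bound:

```python
# Raw information rate of a carrier = symbol rate x bits/symbol x FEC rate.
# (Framing, roll-off and TDMA burst overheads are deliberately ignored.)

def carrier_mbps(symbol_rate_msps: float, bits_per_symbol: int, fec: float) -> float:
    return symbol_rate_msps * bits_per_symbol * fec

# Return link: 2048 ksps at QPSK (2 bits/symbol) with FEC rate 4/5
print(f"{carrier_mbps(2.048, 2, 4/5):.2f} Mbps")  # ~3.28 Mbps raw
# The 2 Mbps committed upload speed fits inside this once overheads are deducted.
```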


The 74 centimeter dish mounted on a perimeter wall with a clear view of the western sky. From Nairobi the look angle is a favourable 88.5 degrees.

I registered an account and selected a 58-minute SD-quality documentary titled “Rise of the Drones”. It took about 3 seconds to open the stream and playback started.


The Netflix main screen opened on the Firefox browser

The picture quality was as expected for an SD video on my old laptop; I could not, however, find a way to check the video’s resolution on the stream.

Video quality was consistent throughout the session, with no downward adjustment of picture quality.

I watched it to the end without a single “Netflix and chill as it buffers” moment, and the stream’s download indicator stayed about 5 minutes ahead of the play position throughout.


The progress bar (in lighter shade of grey ahead of the red play duration bar) showing about 5-minute lead

The VSAT link’s Cacti graph for the 58-minute session showed that the stream consumed an average of just below 3 Mbps, with a peak of 3.7 Mbps. Calculating the area under the graph put the total data downloaded during this time at about 1.3 GB.


Cacti graph utilization during the 58 minutes of documentary streaming.

The above results mean that in a multi-viewer scenario, where more than one person is using Netflix on the LAN, the VSAT’s 15 Mbps capacity can support 4 concurrent viewers without a problem, limited only by the WiFi router’s capability.
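The arithmetic behind both claims is easy to check. A quick sketch using the figures above (the 3 Mbps average and 3.7 Mbps peak are my readings off the graph):

```python
avg_mbps, peak_mbps = 3.0, 3.7   # read off the Cacti graph
minutes, link_mbps = 58, 15.0    # session length and VSAT download capacity

session_gb = avg_mbps * minutes * 60 / 8 / 1000  # megabits -> gigabytes
print(f"Session data: ~{session_gb:.2f} GB")     # ~1.30 GB, matching the graph

print(f"Concurrent SD streams at peak rate: {int(link_mbps // peak_mbps)}")  # 4
```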

Update: I did Netflix for the entire day on Saturday the 9th (via an HDMI streaming dongle on the TV) with my kids, following our usual DStv TV schedule (punctuated with sessions of outside play, reading/study, quiet times and no TV during meals). We had consumed 19.4 GB by the time we went to sleep.

Data centers and the environment: The case of Facebook

September 24, 2015

A Facebook data center engineer

There has been increased uptake and use of the Internet, especially social media, across the world. This has led to rapid deployment of infrastructure to support the increased demand.

This infrastructure consumes power. It is estimated that the data centers powering the internet worldwide consume about 1.3% of the world’s total electric power. That might seem small until you consider that Facebook alone consumed about 532 million kWh in 2011 (it must be close to double that amount now). At current Kenyan electricity tariffs, that is about 10.6 billion shillings in power bills. Google consumed just over 2 billion kWh over the same period to power its servers worldwide. With most of this power coming from coal plants, data centers are attracting the attention of groups such as Greenpeace, which launched campaigns like ‘Unfriend Coal’ to push Facebook to lower its dependence on coal.
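As a sanity check on those figures, the quoted bill implies a tariff of roughly 20 shillings per kWh; the tariff here is back-derived from the post’s own numbers, not an official rate:

```python
facebook_kwh_2011 = 532e6   # Facebook's estimated 2011 consumption
bill_kes = 10.6e9           # the quoted bill at Kenyan tariffs

print(f"Implied tariff: ~{bill_kes / facebook_kwh_2011:.1f} KES/kWh")  # ~19.9
```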

With pressure piling on data centers to lower their carbon footprints, innovation and new thinking are needed. One of the low-hanging fruits is to build new data centers in regions that use green energy. One of the prime locations for data centers now is Iceland, which generates all of its power from geothermal steam and hydro. The cool climate also means that natural cold air, about 5.5°C on average, is simply circulated through the data center to cool the equipment, as opposed to forced cooling with air-conditioning systems. A server operating out of Iceland is therefore cheaper to run and has near-zero carbon emissions attached to it. According to Verne Global’s findings in 2013, the 10-year energy cost (the length of a standard data center hosting contract) for 1 megawatt of IT load in Keflavik, Iceland is near $3.5 million, compared to nearly $23 million in London, $20 million in Frankfurt, $12.5 million in Chicago and around $6 million in Oslo, Norway. As a bonus, Iceland’s geographical location makes latency from a server there to Europe and the US nearly equal, at about 40 ms.
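Dividing those 10-year costs by the energy a 1 MW load draws over a decade gives a feel for the implied electricity prices. This assumes the load runs flat out for the whole contract, which is my simplification, not Verne Global’s:

```python
kwh_over_10_years = 1_000 * 24 * 365 * 10  # a 1 MW IT load, running continuously

for city, cost_usd in [("Keflavik", 3.5e6), ("London", 23e6),
                       ("Frankfurt", 20e6), ("Chicago", 12.5e6), ("Oslo", 6e6)]:
    print(f"{city}: ~${cost_usd / kwh_over_10_years:.3f}/kWh")
# Keflavik works out to ~$0.04/kWh against ~$0.26/kWh in London.
```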

However, the likes of Facebook, having already invested a lot of money in data centers in the US, cannot simply cart them off to Iceland. They have therefore come up with innovative ways to lower their data center energy costs. With an estimated 25% of data center power going to cooling, 10% wasted in the conversion from AC to DC and back to AC, and the IT load taking 46% (25% servers, 8% network and 13% storage), there is a huge opportunity to trim both the IT-load and cooling portions.

IT load efficiency

Facebook did some research and found that servers running low-level loads use power less efficiently than idle servers or servers running at moderate or greater loads. In short, a server should either be kept idle or at moderate-to-high load, never at low load. The traditional method of distributing load across a group of servers, round robin, is efficient with computing resources but inefficient with power. Facebook therefore developed a new approach called Autoscale.

Autoscale is designed to distribute incoming requests so that servers are either idling or running at medium-to-high capacity, not in between; it avoids assigning workloads in a way that leaves servers running at low capacity. This was informed by a test done by Facebook engineers, which found that a server in idle mode consumes about 60 watts of power. If some light, low-level load is applied, consumption jumps from 60 to 130 watts. Yet if the same server is run at medium or higher load, it consumes about 150 watts, only a 20-watt difference between low and high load. It is therefore more energy-efficient to give an already moderately busy server some more load (20 extra watts) than to give that load to an idle server (70 extra watts). Autoscale also shrinks the number of servers sharing the load so that as many servers as possible sit idle: in low-traffic periods, such as American midnight, it dynamically adjusts the size of the active server pool so that each active server gets at least a medium-level CPU load, and servers outside the active pool receive no traffic.
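A minimal sketch of the idea, not Facebook’s actual implementation: the power figures are the ones quoted above, the load-level threshold and pool sizes are arbitrary example values.

```python
# Round robin spreads load thinly (every server lands in the wasteful
# low-load band); an Autoscale-style policy concentrates it instead.
import math

IDLE_W, LOW_W, BUSY_W = 60, 130, 150  # watts: idle, low load, medium/high load

def round_robin_power(total_load, pool_size):
    """Spread the load evenly: every server ends up lightly loaded."""
    per_server = total_load / pool_size
    if per_server == 0:
        return pool_size * IDLE_W
    return pool_size * (LOW_W if per_server < 0.5 else BUSY_W)

def autoscale_power(total_load, pool_size):
    """Concentrate the load on the fewest servers that can carry it."""
    active = min(pool_size, max(1, math.ceil(total_load)))
    return active * BUSY_W + (pool_size - active) * IDLE_W

# Ten servers carrying a combined load equal to three servers' capacity:
print(round_robin_power(3.0, 10))  # 10 x 130 W = 1300 W
print(autoscale_power(3.0, 10))    # 3 x 150 W + 7 x 60 W = 870 W
```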

The other method deployed to reduce power consumption is cutting down on power transformation. There is about a 10-15% loss in the transformers and rectifiers found in UPSs. In most data center setups, mains AC power is fed to a centralized UPS, which converts the AC to DC and back to AC to supply the servers; this AC-DC-AC conversion costs about 6-12% in losses. A way to lower this loss is to feed the servers directly with mains AC power, with localized UPSs on each rack that can provide up to 45 seconds of backup power while the diesel generator starts during an outage (a very rare occurrence in the developed world). Eliminating centralized UPSs means data centers can save about 10% of their power. Feeding direct AC grid power to servers can be a tricky affair, because reactive components on the grid, such as the motors that power everything from escalators to coffee grinders, lower the power factor and increase reactive power. Deploying synchronous condensers in data centers lowers this reactive power, which is responsible for some losses depending on the power factor of the received power. Facebook has deployed in-house, custom-made reactive power panels that bring the power factor as close as possible to unity. Besides improving power quality, these reactors also reduced harmonic distortion in the power system, which had been delaying generators from kicking in when a mains power loss was detected.
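To put the double-conversion loss in perspective, here is a rough calculation using the midpoint of the 6-12% range quoted above; the 1 MW facility size is an arbitrary example of mine:

```python
it_load_kw = 1_000.0   # example IT load (my assumption)
ups_loss = 0.09        # midpoint of the 6-12% AC-DC-AC conversion loss

grid_draw_with_ups = it_load_kw / (1 - ups_loss)
print(f"Central UPS:    {grid_draw_with_ups:.0f} kW from the grid")  # ~1099 kW
print(f"Rack-level UPS: {it_load_kw:.0f} kW from the grid")
print(f"Saving: ~{grid_draw_with_ups - it_load_kw:.0f} kW")          # ~99 kW
```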

Use of 277 Volts instead of 120 or 240 Volts

Facebook hardware is also designed to operate at 277 volts AC, as opposed to the standard 120 volts of US mains supply systems. The reason is simple: with US three-phase power supplied at 480 volts line-to-line, the phase-to-neutral voltage comes out not at 120 volts but at 277 volts (you can derive this with the complex cube roots of unity, or simply divide 480 V by √3). Stepping 277 volts down to 120 volts through a transformer costs about 3% in transformation losses, so operating the servers at 277 volts instead of 120 volts saves that 3%. The diagram below shows how a server’s efficiency improves with use of a higher voltage.
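The 277-volt figure drops straight out of three-phase arithmetic. A quick check, including the cube-roots-of-unity view mentioned above:

```python
import cmath, math

v_line_to_line = 480.0
v_phase_to_neutral = v_line_to_line / math.sqrt(3)
print(f"Phase-to-neutral: {v_phase_to_neutral:.0f} V")  # ~277 V

# The same result from phasors 120 degrees apart (cube roots of unity):
a = cmath.exp(2j * math.pi / 3)
v1, v2 = v_phase_to_neutral, v_phase_to_neutral * a
print(f"Line-to-line: {abs(v1 - v2):.0f} V")            # back to ~480 V
```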


Hewlett-Packard server power supply efficiency as a function of load (c) Syska Hennessy Group

A server operating at 240 volts (which is what we use in Kenya) is 91% efficient at 50% load, compared to 89% for a similar server operating at 120 volts; jacking the supply up to 277 volts improves efficiency to 92%. The reason America uses 120 volts is that in the early days of electricity, bulbs had carbon filaments that lasted longer at 120 volts than at 230 volts, and since most electricity was used for lighting, it made sense to run the grid at 120 volts. Later, when electricity reached Europe and Asia, technology had improved and tungsten filaments could handle the higher, more efficient voltage of 240 volts.

Simpler cooling and Humidity control

About 12% of cooling energy consumption goes to delivering the cold air to the point of heat rejection. By using a ductless cooling system that delivers cold air at the center of the data center, with additional smaller cooling systems at the racks where the heat is generated, substantial power savings can be achieved.

The use of a vapor seal can also play a critical role in controlling relative humidity, reducing unnecessary humidification and dehumidification. If humidity is too high in the data center, conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear and corrosion can occur; these risks increase exponentially as relative humidity rises above 55 percent. If humidity is too low, the magnitude of and propensity for electrostatic discharge (ESD) increases, which can damage equipment or adversely affect operation, and tape products and media may suffer excessive errors.

Most equipment manufactured today is designed to draw in air through the front and exhaust it out the rear, which allows equipment racks to be arranged into hot aisles and cold aisles. Rows of racks face each other, with the front of each opposing row drawing cold air from the same aisle (the “cold” aisle). This makes it easier to extract hot air from the hot aisles before it mixes with the cold air and lowers cooling efficiency.

The other method of lowering cooling costs is the use of multi-step compressors in the cooling systems. Most traditional cooling systems simply switch the compressors on at full load whenever the thermostat dictates that cooling should happen. A test of a 4-step compressor showed that compressors operate at different efficiencies at the various steps; the compressor in question was most efficient at step 2, so the cooling system is designed so that the compressor operates at step 2 most of the time. Off-the-shelf cooling systems work well but are grossly power-inefficient for data center use.

The internet is currently moving towards cloud computing. This essentially means that data centers will continue to grow and soon the power consumed by data centers will pile pressure on the grids and the environment. The use of green energy sources and innovation will go a long way in reducing the contribution of the Internet to global warming.

Broadband as a value add? Yes, it’s about the eyes.

June 5, 2015

The days of ISPs making super profits are long gone. The margins being made by ISPs the world over are thin, and should Internet connectivity prices fall further, due either to more competition or to legislation, ISPs stand to make even thinner margins in future. There will therefore be little, if any, revenue- or profit-oriented incentive for ISPs to be in business.

Having worked in the industry for about 12 years now (that’s eons in Internet-growth terms), I have seen the ISP industry evolve both on the technology front and in its value proposition to customers. The liberalization of the sector in most countries has attracted many investors into the industry, creating a stiff, competitive market and with it diminishing returns on investment. Small ISPs are dying or being bought out as they cannot stay afloat, and large ISPs are merging to create economies of scale to survive.

With coming projects such as Google’s Project Loon, Facebook’s Internet.org (and its subsequent internet-by-drones project) and many more that aim to provide nearly free Internet to the world’s unconnected, there will soon be no financial incentive for a commercial ISP to go into business at all.

So what do ISPs need to do?

There has been a lot of talk in the market about value addition, and how ISPs should stop selling ‘dumb pipes’ and offer value over and above the internet pipe. All this has already happened, and at the moment ISPs have been outmaneuvered by OTT providers, who deliver these value-added services over the very links the ISPs provide to their customers. For example, some years ago all ISPs offered VoIP as a value add; now, with the likes of Skype and Whatsapp calls, ISP-provided VoIP is a dud. Another example is dedicated hosting at ISP-provided ‘data centers’ (a room with access control and cooling 🙂); with the maturity of cloud services, such a service no longer appeals to customers. ISPs are at the end of their rope.

If you carefully analyze recent ISP mergers and buyouts in Africa (and beyond, if you have the time), you will realize that buyout decisions are less and less based on an ISP’s profitability, revenues or cash flow position. They are now based on subscriber numbers. But what is the commercial point of buying an unprofitable or low-revenue business? Answer: it’s about the eyes.

ISPs will no longer be about direct, internet-pipe-derived revenues but about indirect revenues, from sources such as online advertising, OTT services, and content delivery and purchase. This is the very reason giants such as Google and Facebook have entered the ISP business: it’s about the eyes. A loss-making ISP with many subscribers is now more attractive to buy than a super-profitable one with few. Unbelievable, isn’t it?

End-to-end control

OTT operators such as Facebook have been blamed by traditional ISPs for using the ISPs’ network infrastructure to do business with the ISPs’ end users. Attempts by ISPs to make these operators pay for delivery of content have met opposition, for fear that such arrangements would result in a tiered internet and, with it, the demise of the net neutrality that has been a key characteristic and supposed catalyst of the internet’s development. Attempts to camouflage net-neutrality-flouting arrangements in ISP-led offers, such as Facebook’s Internet.org, where users on certain networks access Facebook and Whatsapp for free outside their data plans, have also met resistance. Being forward-thinking, these companies, I believe, foresaw the resistance to their initiatives to offer content free by paying the traditional ISPs; that is why they are all rushing to roll out their own infrastructure to provide free or nearly free internet to the masses. At the moment, besides the satellite/balloon projects being tested in New Zealand, Google is already rolling out high-speed fiber-to-the-home (FTTH) in select American cities. This will give them end-to-end control of the broadband supply chain and thereby quell concerns about the creation of a tiered internet, assuming, of course, they come up with a way to show regulators that they have fair access policies for all third-party traffic.

The future

As I see it, the traditional ISP will die a natural death if it does not adapt to the coming changes. What was once a value add will become the product, and vice versa: internet broadband will become a value add to content and OTT services, with a content provider such as Facebook or Google offering you free internet to access its content. As someone once said, if the product or service is free, you are the product. The free internet will come with privacy strings attached, to enable advertisers to track your habits and serve more targeted adverts. This targeting is getting more accurate, and spookier, if the tweet below is anything to go by.

[Tweet screenshot]

Using browser privacy features to disable cookies won’t work, as companies such as Google are now using what is known as device fingerprinting to identify you. Device fingerprinting works on the basis that your computer’s OS, installed programs (and the dates they were installed), CPU serial number and hardware configuration (RAM/HDD/attached peripherals), run through an algorithm, yield a unique identifier for your machine. Your computing device is thus unique and can be tracked without the need to set cookies.
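A toy illustration of the principle (this is not any company’s actual algorithm, just a sketch of the idea): hash a handful of machine attributes into a stable identifier.

```python
import hashlib, json, platform

def device_fingerprint() -> str:
    # Gather attributes that rarely change on a given machine. A real
    # fingerprinter would use many more signals (fonts, plugins, hardware).
    attrs = {
        "os": platform.platform(),
        "arch": platform.machine(),
        "processor": platform.processor(),
        "python": platform.python_version(),  # stand-in for installed software
    }
    blob = json.dumps(attrs, sort_keys=True).encode()
    return hashlib.sha256(blob).hexdigest()[:16]

print(device_fingerprint())  # same machine, same ID -- no cookie needed
```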

Why Is Kenya Power Dumping Pre-paid Meters?

May 19, 2015

Recently, the country’s only power utility company announced that it was slowing down the rollout of the prepaid metering system it launched about 6 years ago. The reason given for this about-turn was that the company is losing revenue: it now collects less from customers on prepaid metering than it did when the same customers were on the postpaid metering system.

According to Kenya Power records, about 925,000 of its 3.17 million customers are on prepaid meters. Before those 925,000 moved to prepaid, Kenya Power was collecting about four times more from them than it currently does. The Kenya Power MD stopped short of accusing prepaid customers of meter tampering in explaining the reduced revenues, and the company has decided to classify the reduction as ‘unpaid debts’ in its books. Yet if tampering were the explanation, it would occur across both prepaid and postpaid users; if anything, the chances of a prepaid user tampering with a meter are lower than for a postpaid user.

My little accounting knowledge tells me that it is every company’s dream to convert all its customers to prepaid. This shifts the cash flow position to a very favorable one: you have the money from customers before they consume your service or product. With a prepaid metering system, Kenya Power was heading for accounting nirvana, so the recent revelations about accumulating ‘debts’ from prepaid customers shocked many. First and foremost, if you do not buy prepaid meter tokens, you cannot consume power on credit and pay later; so how is this reduction in revenues from prepaid consumers classified as a debt rather than an outright reduction in collected revenue?

Faulty meters?

There are two main brands of power meters used by Kenya Power, Actaris and Conlog. The latter brand was found to be defective 3 years into the rollout: the meters were erroneously calculating the remaining power tokens, especially after a power outage. You could have, say, 30 kWh remaining on your meter, and after a blackout the meter would read −30 kWh or some other random negative value. That is what consumers would notice; we cannot say for sure whether the same meters also under-bill in the same breath. Of course, if a meter under-bills, very few consumers would complain or even notice, though they would be quick to notice a negative token value because they would lose power. Could faulty meters be the problem here? Could Kenya Power be suffering from substandard meters? Here is a blog link to one affected consumer who complained in 2012 about the faulty meters. Kenya Power attempted to replace some Conlog meters, but I still see some in use in the wild.

Reality of estimate billing?

We have all been there: you receive an outrageous bill from Kenya Power. This is because, more often than not, they estimate the power consumed and never actually read the meter in your house. When was the last time you saw a Kenya Power meter reader on a motorbike in your estate, if you are on postpaid? According to Kenya Power’s books, the average postpaid domestic customer consumed 12 kWh of electricity and paid Sh1,432, while the average prepaid customer consumed 23 kWh and paid roughly Sh756. This can only mean one of two things (see the quick calculation after the list):

  • Postpaid customers are over-billed due to poor estimation methods, as meters are seldom read. I noticed this on my water bill too: when my bill is, say, 600/= and I overpay 2,000/= while settling it, my next bill will be in the region of 2,000/= (estimated from my last payment). So these days I make sure I pay the exact amount on the bill, to deny them room to estimate and over-bill me.
  • The prepaid meters are spot-on accurate. This is the most plausible reason, and I explain below.
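Dividing the reported payments by the reported consumption makes the gap stark; this is a quick division of Kenya Power’s own figures, nothing more:

```python
postpaid_kes, postpaid_kwh = 1432, 12
prepaid_kes, prepaid_kwh = 756, 23

print(f"Postpaid: ~{postpaid_kes / postpaid_kwh:.0f} KES/kWh")  # ~119 KES/kWh
print(f"Prepaid:  ~{prepaid_kes / prepaid_kwh:.0f} KES/kWh")    # ~33 KES/kWh
```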

Prepaid meters are accurate?

Unlike the old-school postpaid meters that measure total ‘apparent’ power consumed, the new prepaid meters assume an efficient electricity grid and measure the effective, or real, power consumed by the customer’s appliances. Where the power distribution grid is inefficient, voltage and current are not in phase, which leads to a lot of ‘wasted’ power. On postpaid, consumers pay for these grid inefficiencies; on prepaid, they do not. This is why there has been a drastic reduction in revenues: consumers now pay for what they consume and not for the wastage on the grid. Perhaps this is what Kenya Power sees as ‘consumed but unpaid-for power’ by prepaid users? Could be; it is not possible to consume more than what you have paid for on a prepaid meter, yet apparent power is consumed without being measured by these meters. This is especially true if you have appliances with electric motors in them, such as washing machines, water pumps and air-conditioning systems. Read more about power factor by clicking here.
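A textbook illustration of the difference between the two quantities; the voltage, current and power factor here are made-up example values:

```python
voltage, current = 240.0, 10.0   # RMS supply voltage and current
power_factor = 0.8               # typical for motor-heavy household loads

apparent_va = voltage * current        # what an apparent-power meter registers
real_w = apparent_va * power_factor    # what the appliances actually use
print(f"Apparent power: {apparent_va:.0f} VA")  # 2400 VA
print(f"Real power:     {real_w:.0f} W")        # 1920 W
# Billing on apparent power charges the customer for the 480 VA gap;
# billing on real power does not.
```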

You can read older articles on my blog touching on Kenya Power by clicking the links below:

  1. How Kenya can enjoy lower electricity tariffs
  2. Kenya is ripe for a Demand Response Provider
  3. Kenya Power Needs To Be Penalized For Blackouts
  4. There is need to end the Kenya Power monopoly

What Whatsapp voice means for MNOs

April 1, 2015

Facebook Inc. recently introduced the ability to make voice calls directly in its Whatsapp mobile application. This is currently available on Android and will soon be available on iOS.

What this means is that mobile users with the updated app can now call each other over available data channels such as Wi-Fi or mobile data. Going by a recent tweet from a user who tried the service on Safaricom, a 7-minute call consumed just about 5 MB of data. If these claims are true, then by using Whatsapp a user can call anyone in the world for less than a shilling a minute, which is lower than most mobile tariffs.

Is this a game changer?

That depends on who you ask. First, let’s look at what happens when you make a Whatsapp call. When a user initiates a call to another user over Whatsapp, both of them incur data charges; in the case of the Twitter user referred to above, who consumed 5 MB, the recipient of the call also consumed a similar amount of data to receive it. If both callers happened to be on Safaricom, then about 10 MB was consumed for the 7-minute call, and the cost of 10 MB is close to what a GSM phone call of the same duration would cost anyway. Effectively, receiving a Whatsapp call now costs the recipient, unlike on GSM, where receiving calls is free. When the phone rings with an incoming Whatsapp call, the first thought that crosses the recipient’s mind is whether there are enough data ‘bundles’ on the phone to pick up. The danger is that if there are none, or the data bundle runs out mid-call, the recipient will be billed at the out-of-bundle rate of 4 shillings per MB. Assuming our reference user called someone whose data bundle had run out, Safaricom would have made 5 shillings from the caller’s 5 MB and 28 shillings from the recipient, a total of 33 shillings for a 7-minute call, translating to 4.7 shillings a minute, which is more than GSM tariffs.
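The arithmetic is easy to parametrize. A sketch using the figures above; the caller’s in-bundle price of 1 shilling per MB is my assumption, and note that the 33-shilling total implies the recipient burned about 7 MB at the out-of-bundle rate, slightly more than the caller’s 5 MB:

```python
call_minutes = 7
caller_mb, caller_kes_per_mb = 5, 1.0        # caller in bundle (assumed price)
recipient_mb, recipient_kes_per_mb = 7, 4.0  # recipient out of bundle

total = caller_mb * caller_kes_per_mb + recipient_mb * recipient_kes_per_mb
print(f"Total: {total:.0f} KES ({total / call_minutes:.1f} KES/min)")  # 33 KES, ~4.7/min
```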

This effectively changes the cost model of making calls: the cost is now borne by both parties, something that might not go down well with most users. I have not made a Whatsapp call myself, as my phone is a feature phone, but I believe that if a “disable calls” option does not exist, Whatsapp will soon introduce one under pressure from users who do not wish to be called via Whatsapp because of the potential cost of receiving a call. That would kill all the buzz.

Will operators block Whatsapp calls?

It is technically possible to block Whatsapp texts and file transfers using layer-7+ deep packet inspection systems such as Allot’s NetEnforcer and Blue Coat’s Packeteer. I believe an update to detect Whatsapp voice is in the offing, and this will give operators the ability to block it. The question, however, is what would drive them to block it. MNOs will have no problem allowing Whatsapp traffic, as it will most likely be a boon for them if most of the calls are on-net (they get to bill both parties on the call). If, however, most calls are off-net (to recipients on other local mobile networks, or international), then MNOs might block the traffic or give it lower QoS priority, degrading call quality below what can sustain a conversation. They might then run into problems with the regulator should subscribers complain that operators are unfairly discriminating against Whatsapp voice traffic. Net neutrality rules (I am not sure they are enforceable in Kenya yet) require that all data bits on the internet be treated equally; it should not matter whether a bit carries Whatsapp voice, Bible quotes or adult content. Operators could thus be punished for throttling Whatsapp voice in favour of their own voice traffic. This presents a catch-22 for them. What they need to do is come up with innovative ways to benefit from this development, like offering slightly cheaper data tariffs for on-net Whatsapp voice to spur increased Whatsapp usage within the network (and thereby bill both participants).

Worth noting is that it costs an operator more to transfer a bit on 3G than on 4G. Operators who roll out 4G stand to benefit from Whatsapp voice, as they can offer data at a lower cost to themselves and pass this benefit on to subscribers. With VoLTE all the rage now, Whatsapp voice can supplement VoLTE and can even be a cheaper way for operators to offer voice services on their LTE networks without further investment in VoLTE-specific network equipment.

In short, any operator who wants to benefit from Whatsapp voice has to go LTE.