Building Owners Impeding Telecom Services Roll-out Due To Lack Of Laws

January 21, 2018 3 comments

There is this new building in Nairobi that is now inviting new tenants after being recently completed and opened with much fanfare. As usual, the new tenants were expected to fit suitable furniture and fittings into their premises. But there was one problem: the building, in all its glory, lacked suitable telecommunication riser ducts and conduits. Occupants could neither pull cable to create Local Area Networks nor easily connect any floor to the basement where a local ISP had placed its fiber optic switch. The building owners also asked the tenants to bear any costs of modifying the building to enable the setting up of telecommunication infrastructure.

I searched the Kenya building code of 2009 and the National Construction Authority Regulations of 2014 to see if either compels building designers to incorporate telecommunication ducts as part of a building's services, in the same manner they specify requirements for plumbing, electrical cabling, ventilation and heating/cooling. Sadly, neither makes it mandatory for designers to incorporate into their designs the paths, risers, ducts, trays or any other cable containment mechanisms that would enable the easy pulling, organizing and routing of telecommunication cables to any part of the building.

For buildings that have incorporated telecommunication cabling space and paths in their design and construction, the MDF or telecommunication room is usually small, poorly ventilated, poorly supplied with power and mostly located in the basement, which can be a long way from the top floor depending on the building's height. Building owners are also charging telecommunication companies hosting fees to place their switches and other equipment in the MDF room. These charges vary from a low of KES 3,000 to a high of KES 25,000 a month, with or without electricity supply. In the same breath, utility companies providing water and electricity are not charged rent by the building owners for the equipment and fixtures, such as electricity meters, they use to avail their products and services to the same tenants. In fact, it's the building owners who pay the utility companies to bring services into their buildings.

With nearly every office needing internet connectivity for normal operations, just as it needs electricity, water and drainage, why do building owners create barriers for telecommunication companies by not making their buildings cable ready, and where they do, go ahead and levy monthly rent for the cabinet hosting the equipment? Some building owners have also gone ahead and signed exclusivity agreements with one provider to host its equipment and avail internet connectivity in the whole building, locking out competitors. I know of operators who have been denied entry to buildings to serve potential customers because a competitor had locked them out with an exclusivity agreement signed with the building owner.

Outside buildings, telecom operators also have to apply and pay for wayleaves and permits from county governments and the Kenya National Highways Authority (KeNHA) to trench, lay cable, and do back-filling. It takes an average of three weeks to obtain a permit from any of these bodies and at a significant cost too. Permits to cross major highways by way of micro-tunneling can take several months to obtain.

The above challenges exist because there is no clear legal framework within which telecom operators can work to avail their services, which can now be considered utility services similar to electricity and water supply. The following needs to be done to make it easier for the operators and the consumers of their services:

  • Amend the Kenyan building regulations to ensure that all commercial and multi-dwelling residential units such as apartments have suitable and standardized telecommunication cable pathways and containment fixtures. MDF room location should also take into consideration the transmission distance limitations of some technologies, such as the maximum transmit distances of electrical/copper-based cabling.
  • Make it illegal for building owners to sign exclusivity agreements with a single provider or a select number of providers and bar the rest from building entry. Every operator should be given equal and reasonable access to their potential customers in any building. The proposed Kenya infrastructure sharing act can incorporate a section that outlaws exclusivity agreements, as they also effectively bar infrastructure sharing.
  • The Kenya Wayleaves Act (Cap 292) should be amended and modernized to reflect current realities. The act assumes that the government is the sole provider of utility services and does not provide for the provision of utilities by private enterprises.
  • All road designs, where necessary and possible, should incorporate buried ducts and manholes along the entire stretch of the road and suitable micro-tunnel crossings to carry any operator's fiber-optic and coaxial cables at a small monthly fee payable to the road owner (KeNHA or the county government). This would avoid instances where each operator has to trench and bury its own cable, sometimes on the same side and section of the road, which often leads to accidental cutting of a competitor's cable as an operator trenches to lay its own. In the last 3 months, I've heard of about 4 instances of this happening in Kenya. Beyond that, frequent trenching of roads inconveniences other road users and pedestrians, especially in cities and towns.
  • County governments should be compelled by law not to charge telecom operators for permits to lay cable. This can be done by amending sections 85 and 86 of the Kenya Information and Communication Act to explicitly state that no fees should be levied by local authorities and to specify timelines within which permits should be granted. The overall economic effect of letting operators lay cable without barriers such as fees and approval delays far outweighs the financial gain county governments make from permit fees.
  • Physical planning departments in both national and county governments should incorporate the current and future real-estate needs of telecommunication infrastructure when designing cities and towns.
  • The Kenya Information and Communication Act should specify harsher punishment for acts of vandalism against telecom infrastructure, such as cutting fiber optic cable or destroying a mobile/wireless base station. In the same breath, it should also compel operators to conduct awareness campaigns for citizens living near telecommunication infrastructure on the dangers of tampering with it.

Recent research shows that access to telecommunication services such as the Internet and telephony has a great impact on the socioeconomic well-being of citizens, especially in developing countries such as Kenya. It is therefore important that operators get full support from the government in their quest to roll out services to citizens, because in doing so they help the government meet its socioeconomic objectives.

 


The Internet of Things is about to change how we live and work

January 2, 2018 Leave a comment

Last week, the Communication Authority of Kenya released its sector statistics report for July to September 2017 showing that Internet penetration has hit 112.7% in the country. This is higher than the mobile penetration rate which stood at 90.4% for the same period.

The availability of Internet access presents a great opportunity for individuals and businesses to improve their lives and operations through efficiencies gained by adopting the Internet as a tool in their daily activities.

As technology evolves, the Internet as we know it is also rapidly changing. It is no longer restricted to virtual interfaces such as web browsers and apps like WhatsApp or YouTube. The Internet is moving out of the screen and into the real world and will soon be part and parcel of our living and working environments. Many items in our environment, from the clothes we wear and our furniture to electric appliances and homes, will become part of the Internet in what is now known as the Internet of Things, or IoT for short.

By connecting all these items to the internet (and ultimately to each other), the IoT will present us with endless possibilities to better our lives as individuals and also to lower costs and create new revenue streams for businesses.

Take for example the idea of connected fabrics and wearables, which will connect all your clothes and other attire to the Internet. Your shirt could detect the chemicals in your sweat and send this information to your email or WhatsApp, telling you that based on those chemicals you are about to come down with an infection. The shirt or wearable would also be able to take your heart rate and blood pressure continuously and warn you, or even share this information directly with your doctor. Another example of IoT-connected fabrics is your favourite football jersey glowing the instant your team or favourite player scores a goal. Connected homes will also present a great opportunity for families. Take for example a fully connected home where the fridge detects that butter is running out and automatically adds it to the shopping list on your phone or tablet. The phone or tablet then sends you a reminder when it detects you approaching a supermarket that stocks the particular brand of butter you love. Imagine also an early morning meeting appointment landing in your calendar and automatically adjusting your wake-up alarm to an earlier time than normal, based on traffic conditions on the route you intend to use from your house to the meeting. The same alarm would also send a signal to switch on the water heater slightly earlier than normal.

On the business front, organizations stand to benefit a great deal from the IoT. Using the butter and connected shirt examples above, the supermarket can place small screens on shopping trolleys that automatically display your shopping list when your phone is near the trolley and automatically delete each item you pick and place in the IoT-enabled trolley. The screen can also offer shop-floor navigation aids to help you easily locate the shelves holding the items on your list. The supermarket can also place sensors at the shelves that make your shirt give you a signal (a vibration or a change in the shirt's colour) when you pass the frozen display area where the butter is kept. Businesses can also adopt IoT in their processes to improve efficiency. For example, insurance companies can use sensors embedded in the cars they insure to accurately gauge driver behaviour on the road and offer lower premiums to good drivers and higher premiums to reckless ones. County governments can also leverage the IoT to improve parking space management in cities and towns. Sensors under each parking slot can be connected to a mobile app or to IoT-enabled cars to indicate free or occupied slots and automatically navigate the driver to the nearest free slot. They can also measure how long a particular car has been parked and automatically bill the car owner on a time-spent basis, as sketched below. The county government could also implement different parking rates based on demand for space and for traffic control (e.g. slots farther from the CBD could be made cheaper than those in the CBD).
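To make the parking example concrete, here is a minimal sketch of time-spent billing with demand-based zone rates. The zones, hourly rates and the round-up-to-the-started-hour rule are my own illustrative assumptions, not any county's actual tariff.

```python
from datetime import datetime

# Hypothetical tariff in KES per started hour; cheaper outside the CBD to steer demand.
RATES_PER_HOUR = {"cbd": 300, "outer": 150}

def parking_fee(zone: str, entry: datetime, departure: datetime) -> int:
    """Bill on a time-spent basis, rounding up to the next started hour."""
    seconds = (departure - entry).total_seconds()
    hours = int(-(-seconds // 3600))  # ceiling division
    return hours * RATES_PER_HOUR[zone]

# A slot sensor records a car arriving at 08:05 and leaving at 10:20.
arrived = datetime(2018, 1, 2, 8, 5)
left = datetime(2018, 1, 2, 10, 20)
print(parking_fee("cbd", arrived, left))    # 3 started hours x 300 = KES 900
print(parking_fee("outer", arrived, left))  # 3 started hours x 150 = KES 450
```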

The IoT can also bring significant efficiencies to the agricultural sector. Aquaculture farmers in Vietnam are already using IoT sensors to detect pond water salinity and automatically switch on fresh water pumps to dilute the pond water to the correct salinity. The pump switching system is also connected to an IoT-enabled mini weather station that delays switching on if rain is forecast.
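A minimal sketch of that pond controller logic might look like the following; the salinity target, tolerance and sensor values are illustrative assumptions rather than figures from the Vietnamese deployments.

```python
# Dilute with fresh water when salinity drifts above target, but hold off if the
# connected mini weather station forecasts rain (which will dilute the pond anyway).
TARGET_SALINITY_PPT = 15.0  # assumed target, in parts per thousand
TOLERANCE_PPT = 2.0

def pump_should_run(salinity_ppt: float, rain_forecast: bool) -> bool:
    if rain_forecast:
        return False
    return salinity_ppt > TARGET_SALINITY_PPT + TOLERANCE_PPT

print(pump_should_run(18.5, rain_forecast=False))  # True: too salty, no rain expected
print(pump_should_run(18.5, rain_forecast=True))   # False: let the rain do the diluting
print(pump_should_run(16.0, rain_forecast=False))  # False: still within tolerance
```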

Despite the great strides made in internet penetration in Kenya, more needs to be done to create a conducive environment for the growth and adoption of the IoT. This could include passing the necessary legislation on cyber security and privacy, two major concerns in the adoption of the IoT. It is estimated that there will be over 75 billion IoT devices in the world by 2025, making IoT-enabled devices ubiquitous. This presents a great opportunity for Kenya to once again lead its peers in new technology adoption.

Categories: IoT

Will Safaricom Be Declared a Dominant Operator?

April 23, 2016 1 comment

 


Last week, the Communication Authority said that its self-imposed March deadline to create clear guidelines on how it handles dominance of an operator had lapsed. This was occasioned by the failure to get a suitable international consultant to carry out a research study that would assist the Communication Authority in identifying and developing several key market interventions for managing the effects of a dominant player in the market. It is worth noting that the issue of dominance cuts across the broadcasting, postal and telecommunications sectors. A finding of dominance must be based on the context and circumstances of the relevant market, which is why the Communication Authority is engaging a consultant to study the market; it cannot go ahead and declare an entity dominant, or as abusing its dominance, without this study.

Is dominance a bad thing?

Before I answer that question, I would first like to define dominance. Unfortunately, because no local guidelines are in place, there is no clear and detailed definition of dominance from a Kenyan perspective other than a brief mention in section 84W of the Kenya Information and Communication Act (KICA). However, internationally recognized definitions do exist.

The European Commission defines dominance thus: "A position of economic strength enjoyed by an undertaking which enables it to prevent effective competition being maintained in the relevant market by affording it the power to behave, to an appreciable extent, independently of its competitors, customers and ultimately consumers". An operator can become dominant by virtue of a well implemented growth strategy, and there is therefore nothing wrong with being a dominant player. It is the abuse of this dominance that attracts attention from regulators. If an operator occupies a dominant position and is declared dominant by way of a gazette notice as per the KICA, several tests can be conducted to see if it is likely to abuse this position. One key test is the existence of barriers to entry into the market in which the operator is dominant: it could be that the operator is dominant because barriers to entry are too high for new entrants to offer effective competition. It could also be that it is dominant because no other investor is interested in that market, being able to get better returns elsewhere, despite low barriers to entry. The other test is whether the operator possesses what is known as Significant Market Power (SMP). The European Commission recognizes SMP when an operator controls more than 25% of the market it operates in, assuming a fully competitive market. In countries that are transitioning from a monopoly (like Kenya) this threshold is usually set at 65% of market share (KICA section 84W, however, mentions 25% in relation to determining the dominance of an operator, not in explicitly defining whether an operator has SMP). It should be noted, though, that an SMP designation is simply a trigger for the application of behavioral or structural conditions by the regulator and not necessarily a prerequisite condition for dominance.

The abuse of dominance can only occur if the dominant operator engages in behavior that is anti-competitive as recognized by law. This abusive behavior should be harmful to competition or consumers or both.

Competition Authority or Communication Authority?

In the middle of last year, there was confusion over who, between the Competition Authority and the Communication Authority, should deal with anti-competitive behavior by a dominant operator in the telecommunications sector. I did some research on this and came to the conclusion that it is the Communication Authority's mandate to deal with any ICT operator abusing its dominance. Below are my reasons for this conclusion.

Whereas the Competition Authority deals with all commercial forms of competition across all sectors, it can be said to forbear when it comes to telecommunications, postal services and broadcasting. The main difference in how the Competition Authority and the Communications Authority deal with competition is that the Competition Authority mostly acts retrospectively on raised complaints of anti-competitive behavior (ex post regulation), while the Communications Authority behaves in a forward-looking manner and tries to prevent anti-competitive behavior by implementing government policy through regulations that modify the behavior of operators (ex ante regulation). Competition policy is typically aimed at preventing market participants from interfering with the operation of competitive markets, while telecommunications, postal and broadcast regulation often manipulates market circumstances and operator behavior to achieve public goals. In short, the Competition Authority controls the market for commercial interests while the Communication Authority controls the market for the public interest.

One point worth noting is that telecommunications, postal and broadcast operators in a regulated environment can use what is known as 'the regulated conduct defense' to avoid falling under the control of the Competition Authority. Under this defense, operators are governed by regulations deemed to be in the public interest, and any activities they carry out within this regulated environment cannot attract liability under general competition laws. The defense is, however, not very applicable where the telecommunications, postal or broadcast sector is highly competitive and the regulator forbears from regulation, letting market forces do most of the self-regulation; in such circumstances, competition laws can be applied to telecom and broadcast operators, as is the case in the USA and the EU.

An Analysis of Safaricom’s position in the market

As per the 2015 Q4 sector statistics, Safaricom controls 64.7% of mobile voice subscribers, 63% of mobile data subscribers and 71.7% of mobile money users. The first step in determining if Safaricom is a dominant operator involves defining and examining the market it operates in and whether that market has barriers to entry that could have caused it to become dominant. The nature of our licensing regime means that Safaricom's geographical and product market is the same as that of its fellow licensees in the same category of license. It is very clear from the figures above that Safaricom's large market share triggers the need to analyze whether it is dominant by evaluating whether it possesses market power, a key factor in dominance determination. Market power can be seen in the following:

  • Profitability. Safaricom’s profitability is much higher than the rest of the competitors combined.
  • Pricing behavior. Safaricom’s prices are not the lowest in the market and they do not react to competitor price reductions, promotions or offers.
  • Vertical integration of its operations. Safaricom tightly controls nearly the entire value chain in delivering its products and services.
  • Bundling: Safaricom bundles both competitive and non-competitive products. It also bundles its local loops and essential facility capabilities with its products (e.g. selling Internet access (a product) via a WiMAX/fiber network it owns and controls (the local loop), with competitors unable to use that WiMAX/fiber network to sell their own internet services).
  • Barriers to market entry that prevent competitors from coming in to take advantage of its high prices. This is the point I want to focus on below.

Barriers to Market entry

One of the key factors in determining whether an operator is dominant is what happens if it increases the prices of its products and services. If barriers to market entry are high, then no new entrant will easily come in, offer lower prices and take customers away from it. If barriers are low, new entrants can easily come into the market, offer cheaper pricing and make the incumbent regret increasing its prices through the loss of customers. In my analysis, barriers to market entry in Kenya's mobile telecommunication sector are very low, especially with the advent of Mobile Virtual Network Operators (MVNOs) and the proposed infrastructure sharing regulations that are coming into place. This means that the Communication Authority has done a splendid job of making it easy for competition to be offered to Safaricom on voice, data and mobile money, should an investor find it attractive to do so. This factor alone, I believe, is sufficient to prevent the regulator from declaring Safaricom dominant, or even terming some of its actions (like bundling) an abuse of a dominant position. The fact that end users can take advantage of Mobile Number Portability (MNP) and move to the competition to enjoy lower priced services makes it even easier for competitors to overcome customer inertia and win customers over. The big question then is why competition isn't significantly eating into Safaricom's market share.

The answer could lie in Safaricom's extensive network coverage, which is unmatched. But the new infrastructure sharing laws will poke holes in this answer, as they will allow any other mobile operator to use Safaricom's network under a national roaming agreement and so offer affordable services across the country wherever there is Safaricom coverage. They will also allow competitors to use Safaricom's local loops to offer service. This means that any operator competing with Safaricom will now be able to cover the country just as Safaricom does, so there will be no excuse for any customer not to move to a competing operator for better or cheaper service should they wish to.

So with the availability of MNP, infrastructure sharing regulations, MVNO licensing, and many other playing field leveling regulations set by the regulator, I believe it will be very hard for the Communication Authority to declare Safaricom a dominant operator or one who is also abusing their position of dominance.

 Lion image (c) http://www.daler-rowney.com

Categories: Uncategorized

Is Universal Access/Service a Government or Operator Obligation?

March 30, 2016 4 comments

Second to creating a level playing field for all ICT operators, one of the widely accepted objectives of regulation of the ICT sector in developing countries is to promote universal access to basic ICT services. In developed economies, the objective changes from universal access to universal service. The difference is that access promotes the notion that every person should have reasonable means of accessing basic ICT services (like a phone booth at the local shopping center) while universal service is about promoting and maintaining availability of a variety of ICT services to individuals and households. Both these terms are combined into what is known as universality.

It is clear that the universal access definition has been overtaken by events, based on recent developments, especially in the wake of the mobile communication boom in many developing countries. To a very large extent, it's no longer about ensuring access but about ensuring that a variety of services are delivered to the end user.

The need by governments to make universality a reality stems from increasing evidence that access to ICTs improves the overall socioeconomic well-being of citizens. However, with the wave of privatization of ICT services such as telecommunications, the operation of telecoms moved from social-welfare-minded government ministries to profit-minded private entities. When privatization took place in the early 1990s, new entrants focused on providing services to profitable market segments based on geography, disposable income and population density (which improves economies of scale and scope). The result was that regions or populations that were not profitable were at risk of being left out of the ICT revolution. To prevent this from happening, regulators were quick to include mandatory service obligations (MSOs) in the licenses issued to new entrants. These obligations required the operators to extend their networks (and in effect their services) to areas where the cost of providing the services and maintaining the networks was higher than the revenues realized from those areas. This seemed to be the only practical solution to connect the 'unprofitables'. Other solutions were available to the operators, such as cross-product subsidies (which haven't worked well because the regulator at the same time enforces cost-based pricing, making cross-product subsidies difficult to implement). It is worth noting that the definition of universal service varies from country to country; in Finland, for example, it includes the right of every individual to access 1 Mbps of broadband internet in addition to other services.

In addition to the measures above, the regulator in Kenya also developed a Universal Service Fund (USF) framework which, according to page 1 of the framework draft document, was meant "to complement private sector initiatives towards meeting universal access objectives". To a keen eye, the document title and the aim I have quoted above are conflicting.

If indeed the aim of the USF was to complement the private sector, why is the same private sector being obligated by regulatory instruments to fund it?

The International Telecommunications Union (ITU) lists many ways in which a USF can be funded; one of the more popular is budgetary allocation from the government. Other ways are the use of Access Deficit Charges (ADCs) and the levying of a percentage of the monies collected by operators in their business operations towards the USF kitty. The ITU states that should a regulator go the revenue levy way, it must not place an unfair burden on the operator in how these levies are collected. For example, the regulator cannot say it will levy a percentage for every call minute or every MB of data used by subscribers; this would make accounting difficult, hence the approach of levying a percentage of the operators' total revenues, which is easier and more transparent.

Several countries have implemented USFs that are beneficiaries of government budgetary allocations. Such countries include Chile and Peru. Incidentally, the same countries are hailed as success stories of how universality has improved the lives of their citizens. This is because the desire to offer universal service or access is a social obligation of the government and not of private firms. It's in the government's interest to connect these otherwise unprofitable regions and people, and it can readily do so from the budget.

Chile’s approach has been an interesting case study of how, if done right, the USFs can work to meet government objectives. The regulator there took the concession path by having operators bid to provide services on a concession basis. The regulator would then pick the lowest bidder. The results were that most of the bids were 50% below the budgetary allocations meaning that the approach was financially efficient. Proper policies were put in place to define the penalties, rights and obligations of each winning concessionaire to ensure they delivered.

This is the approach the Kenyan regulator should take, instead of levying operators a percentage of their hard-earned revenues. Going by the ITU definition, the operators can claim that the regulator has placed an unfair burden on them, since they are not directly responsible for the economic development of citizens (whether through ICTs or other means). Universality is a social program and therefore falls squarely on the arms of government. Profitability, or the lack of it, from universality is a secondary consequence whose impact cannot be directly measured.

Proponents of operator-funded USFs argue that unseen benefits, such as the multiplier effect of connecting the unprofitable, directly benefit the operators. If that is the case, then the decision to connect these people should be a commercial decision by the operators and not a license requirement. An example of the multiplier effect is when I (being of better economic means and living in the city) can now use airtime (read revenues) to call my rural relatives who are now connected, supposedly thanks to the implementation of universality. My act of calling them, in addition to the other people I normally call, adds revenues to operators. The operator should therefore connect my rural relatives because I will call them, not because they will call me. This is a straightforward commercial decision.

Obliging ICT operators to fund the USF is unfair because the socioeconomic benefits accrued from connecting the population are felt across several fronts, such as improved health, education and increased commercial activity, and not just by way of improved operator profits, if any. Universality's key outcome is not purely an ICT one, and making only ICT players fund it is tantamount to the unfair burden on operators mentioned by the ITU.

It is my opinion, therefore, that the current approach to universal service funding should be revisited and, if possible, a new method of funding it through direct government budget allocation adopted. This is already happening in the provision of roads, hospitals and schools. The regulator needs to revisit this for the following reasons:

  • The current market structure, where one operator makes most of the revenues, is unfair to that operator as it will contribute the most to the fund. There are also no clear guidelines on how the funds will be utilized, leaving room for abuse.
  • The law fails to accommodate ICT industry players in the Universal Service Advisory Council, meaning they have no say over monies they contributed. This technically makes it a tax.
  • Operators are already extending their networks to seemingly unprofitable regions without the government needing to push them. Advancements in technology and convergence are making what universality defines as unprofitable commercially viable, because it is now much cheaper to build and scale networks. USF objectives need to be reviewed or done away with altogether.

Should the regulator be adamant about maintaining the USF for various unreasonable and political ends, operators have recourse at the international courts, as Kenya is a signatory to the WTO General Agreement on Trade in Services (GATS), especially the agreement on basic telecommunications.

Netflix experience on Ka-Band VSAT in Kenya

January 8, 2016 7 comments

Yesterday I, like most people here, woke up to the news that Netflix, the American multinational provider of on-demand Internet streaming media, has expanded into several countries including Kenya. Social media reaction, in my view, was split over whether these newcomers will 'disrupt' the market currently dominated by Multichoice's DStv. The jury on what exactly disruption means in that discussion is, however, still out.

My views on their foray into Kenya aside, I decided to test the service on my home VSAT link. This was after I read up on how it works, just in case I had made any wrong assumptions. Here, I found out that the minimum recommended bandwidth is 3 Mbps for SD quality video and 5 Mbps for HD quality video.

The particulars of the link are as follows:

  • Ka-band service off the Avanti Hylas-2 satellite at 31 degrees East (somewhere above Uganda)
  • 74 centimeter elliptical dish with a 1 watt Ka-band radio
  • Hughes HN9260 satellite router
  • 15 Mbps download and 2 Mbps upload speed
  • Netgear AC2350 Nighthawk X4 WiFi router

With the VSAT kit I achieved a strong enough signal to receive a DVB-S2 carrier at 8-PSK 8/9 on the downlink and to transmit a TDMA/FDMA carrier of 2048 ksps at QPSK 4/5 on the return, with respect to the remote terminal.
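As a rough back-of-the-envelope check, the return carrier's information rate can be estimated from the symbol rate, modulation order and FEC rate; this ignores DVB framing and protocol overhead, so real usable throughput will be somewhat lower, and the forward carrier isn't estimated here because its symbol rate isn't given.

```python
# Return link: 2048 ksps, QPSK (2 bits/symbol), FEC rate 4/5.
symbol_rate_sps = 2_048_000
bits_per_symbol = 2
fec_rate = 4 / 5

info_rate_bps = symbol_rate_sps * bits_per_symbol * fec_rate
print(f"~{info_rate_bps / 1e6:.2f} Mbps before framing overhead")  # ~3.28 Mbps
```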


The 74 centimeter dish mounted on a perimeter wall  with a clear view of the western sky. From Nairobi the look angle is a favourable 88.5 degrees

I registered an account and selected a 58-minute SD quality documentary titled "Rise of the Drones" and proceeded to view it. It took about 3 seconds to open the stream and the streaming started.


The Netflix main screen opened on the Firefox browser

The picture quality was as expected for an SD video on my old laptop; I however could not find a way to check the video's resolution on the stream.

Video quality was consistent throughout the session, with no downward adjustment of picture quality.

I watched it to the end without a single “Netflix and Chill as it buffers” moment and the stream download rate indicator was about 5 minutes ahead of the play indicator throughout the time.


The progress bar (in lighter shade of grey ahead of the red play duration bar) showing about 5-minute lead

The VSAT link's Cacti graph for the 58-minute session showed that the stream consumed an average of just below 3 Mbps with a peak of 3.7 Mbps. During this time the total downloaded data was about 1.3 GB, calculated from the area under the graph.


Cacti graph utilization during the 58 minutes of documentary streaming.

The above results mean that in a multi-viewer scenario where more than one person is using Netflix on the LAN, the VSAT's 15 Mbps capacity can support 4 concurrent SD viewers without a problem, limited only by the WiFi router's capability.
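The two figures above are easy to sanity-check: the downloaded volume is the average rate times the session length, and the concurrent-viewer estimate is simply the link capacity divided by the observed per-stream peak. A small sketch of the arithmetic:

```python
avg_rate_mbps = 3.0          # average stream rate read off the Cacti graph
duration_min = 58
link_capacity_mbps = 15.0
peak_per_stream_mbps = 3.7   # observed single-stream peak

data_gb = avg_rate_mbps * 1e6 * duration_min * 60 / 8 / 1e9  # bits to gigabytes
streams = int(link_capacity_mbps // peak_per_stream_mbps)

print(f"~{data_gb:.2f} GB downloaded")     # ~1.31 GB, in line with the graph
print(f"{streams} concurrent SD streams")  # 4
```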

Update: I did Netflix for the entire day on Saturday 9th (via an HDMI streaming dongle on the TV) with my kids, on our usual TV schedule as we do on DStv (punctuated with sessions of outside play, reading/study, quiet times and no TV during meals). We had consumed 19.4 GB by the time we went to sleep.

Data centers and the environment: The case of Facebook

September 24, 2015 1 comment

A Facebook data center engineer

There has been increased uptake and use of the Internet, especially social media, by many around the world. This has led to rapid deployment of infrastructure to support this increased demand.

This infrastructure consumes power. It is estimated that the data centers that power the internet world over consume about 1.3% of the world's total electric power. This might seem small until you consider that Facebook alone consumed about 532 million kWh in 2011 (it must be close to double that amount now). At current Kenyan electricity tariffs, that's about 10.6 billion shillings in power bills. Google consumed just over 2 billion kWh during the same period to power its servers worldwide. With most of this power coming from coal plants, data centers are attracting the attention of groups such as Greenpeace, which launched campaigns such as 'Unfriend Coal' geared towards forcing Facebook to lower its dependence on coal to power its service.
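For what it's worth, the shilling figure is easy to reproduce if you assume a blended tariff of roughly KES 20 per kWh (my assumption, not an official rate):

```python
facebook_kwh_2011 = 532e6            # reported 2011 consumption
assumed_tariff_kes_per_kwh = 20      # assumed blended Kenyan tariff

bill_kes = facebook_kwh_2011 * assumed_tariff_kes_per_kwh
print(f"KES {bill_kes / 1e9:.1f} billion")  # ~10.6 billion, as quoted
```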

With pressure piling on data centers to lower their carbon footprints, innovation and new ways of thinking are needed. One of the low-hanging fruits is to build new data centers in regions that use green energy. One of the prime locations now for setting up data centers is Iceland. The country generates all of its power from geothermal steam and hydro. The cool weather there also means that natural cold air, which averages about 5.5 degrees C, is simply circulated in the data center to cool the equipment, as opposed to using air conditioning systems for forced cooling. This means that a server operating out of Iceland is cheaper to run and has near-zero carbon emissions attached to it. According to Verne Global's findings in 2013, the 10-year energy cost (the length of a standard data center hosting contract) for 1 megawatt of IT load in Keflavik, Iceland is near $3.5 million, compared to nearly $23 million in London, $20 million in Frankfurt, $12.5 million in Chicago and around $6 million in Oslo, Norway. The other bonus is that Iceland's geographical location makes latency from a server there to Europe and the US nearly equal, at about 40 ms.

However, the likes of Facebook, which have already invested a lot of money in data centers in the US, cannot simply cart them to Iceland. They have therefore come up with innovative ways to lower their data center energy costs. It is estimated that about 25% of the power in a data center goes to cooling, about 10% is wasted in the conversion from AC to DC and back to AC, and the IT load takes 46% (25% servers, 8% network and 13% storage), so there is a huge opportunity to lower the IT load and cooling portions.

IT load efficiency

Facebook did some research and found out that servers running low-level loads use power more inefficiently than idle servers or servers running at moderate or greater loads. In short a server should either be kept idle or at moderate/high load, not in low load. The traditional method of load distribution on a group of servers is known as round robin. This method is efficient on computing resources but inefficient on power use. Facebook developed a new way of doing things known as Autoscale.

Autoscale is designed to distribute incoming requests to the servers so that they are either idling or running at medium/high capacity, and not in between. It tries to avoid assigning workloads in a way that results in servers running at low capacity. This was informed by a test done by Facebook engineers. In this test they found that a server in idle mode consumes about 60 watts of power. If some light, low-level load is applied to the server, the power consumption goes from 60 to 130 watts. However, if the same server is run at medium or higher loads, the power consumption is about 150 watts; a 20-watt difference between low load and high load. This means that it's more energy-efficient to give an already moderately busy server some more load (20 watts extra consumed) than to give that load to an idle server (70 watts extra consumed). Autoscale will also reduce the number of servers sharing the load so that it puts as many servers as possible in idle mode. In low-traffic periods, such as around midnight in America, Autoscale dynamically adjusts the size of the server pool in use so that each active server gets at least a medium-level CPU load. Servers not in the active pool don't receive traffic.
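A minimal sketch of that scheduling idea follows. The 60/130/150-watt figures are the ones quoted above; the utilization threshold, server pool structure and packing rule are my own illustrative assumptions, not Facebook's actual Autoscale implementation.

```python
# Prefer topping up an already-busy server (~20 W marginal cost) over waking an
# idle one (~70 W marginal cost); keep as many servers as possible fully idle.
IDLE_W, LOW_W, BUSY_W = 60, 130, 150   # power figures quoted in the post
MAX_UTIL = 0.75                        # assumed utilization at which a server counts as full

def marginal_watts(server_util: float) -> int:
    """Extra power drawn if this server takes on more work."""
    return BUSY_W - LOW_W if server_util > 0 else LOW_W - IDLE_W  # ~20 W vs ~70 W

def assign(request_cost: float, servers: list) -> dict:
    """Route a request to the cheapest (busiest active) server with headroom."""
    candidates = [s for s in servers if s["util"] + request_cost <= MAX_UTIL]
    target = min(candidates, key=lambda s: (marginal_watts(s["util"]), -s["util"]))
    target["util"] += request_cost
    return target

pool = [{"name": "web1", "util": 0.50}, {"name": "web2", "util": 0.0}, {"name": "web3", "util": 0.0}]
print(assign(0.10, pool)["name"])  # "web1": ~20 W marginal cost beats ~70 W to wake web2
```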

The other method deployed to reduce power consumption is reducing power transformation. There is about a 10-15% loss in the transformers and rectifiers found in UPSs. In most data center setups, mains AC power is fed to a centralized UPS. The UPS converts this AC to DC and back to AC to supply the servers with power. This AC-DC-AC conversion results in about a 6-12% loss. A way to lower this loss is to have the servers supplied directly with mains AC power but with localized UPSs on each rack that can give up to 45 seconds of backup power while the diesel generator turns on in case of a power outage (a very rare occurrence in the developed world). Eliminating centralized UPSs means that data centers can save about 10% of their power. Feeding AC power from the grid directly to servers can be a tricky affair, because reactive components in the grid, such as the motors that power everything from escalators to coffee grinders, lower the power factor and increase reactive power. The deployment of reactive synchronous condensers in data centers lowers reactive power, which is responsible for some losses depending on the power factor of the received power. Facebook has deployed in-house, custom-made reactor power panels which try to bring the power factor as close as possible to unity. Other than improving the quality of power, the Facebook reactors also reduce harmonic distortion in the power system, which causes delays in generators kicking in when a power loss from the mains is detected.

Use of 277 Volts instead of 120 or 240 Volts

Facebook hardware is also designed to operate at 277 Volts AC as opposed to the standard 120 Volts of US mains supply systems. The reason behind this is simple: with US three-phase power being supplied at 480 Volts, the phase-to-neutral voltage comes out not at 120 Volts but at 277 Volts (the line-to-line voltage divided by the square root of 3, i.e. 480/1.732 ≈ 277; you can also derive this using complex cube-roots-of-unity phasor components). Stepping 277 Volts down to 120 Volts with a transformer leads to about 3% transformation losses, so operating the servers at 277 Volts rather than 120 Volts saves that 3%. The diagram below shows how a server's efficiency improves with the use of a higher voltage.


Hewlett-Packard server power supply efficiency as a function of load (c) Syska Hennessy Group

A server operating at 240 Volts (which is what we use in Kenya) is about 91% efficient at 50% load, and raising the supply to 277 Volts improves efficiency to about 92%, compared with roughly 89% for a similar server running at 120 Volts at the same load. The reason America uses 120 Volts is that in the early days of electricity, bulbs were made of carbon filaments that lasted longer when operated at 120 Volts than at 230 Volts; because most electricity was used for lighting, it made sense then to run the grid at 120 Volts. Later, when electricity went to Europe and Asia, technology had improved and tungsten filaments could handle the higher, more efficient voltage of 240 Volts.

Simpler cooling and Humidity control

About 12% of the cooling energy consumption goes to delivering the cold air to the point of heat rejection. By using a ductless cooling system that delivers cold air at the center of the data center, with additional smaller cooling systems at the racks where the heat is generated, substantial power savings can be achieved.

The use of a vapor seal can also play a critical role in controlling relative humidity, reducing unnecessary humidification and dehumidification. If humidity is too high in the data center, conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear and corrosion can occur. These risks increase exponentially as relative humidity rises above 55 percent. If humidity is too low, the magnitude of and propensity for electrostatic discharge (ESD) increases, which can damage equipment or adversely affect operation. Tape products and media may also have excessive errors when exposed to low relative humidity.

Most equipment manufactured today is designed to draw in air through the front and exhaust it out the rear. This allows equipment racks to be arranged to create hot aisles and cold aisles. This approach positions rows of racks facing each other, with the front of each opposing row drawing cold air from the same aisle (the "cold" aisle). This makes it easier to extract hot air from the hot aisles before it mixes with the cold air, which would lower cooling efficiency.

The other method of lowering cooling costs is the use of multi-step compressors in the cooling systems. Most traditional cooling systems simply switch the compressors on at full load when the thermostat input dictates that cooling should happen. A test of a 4-step compressor operation showed that compressors operate at different efficiencies at the various steps. The accompanying diagram shows that the compressor in question is most efficient at step 2, so the cooling system is designed such that the compressor operates at step 2 most of the time. Off-the-shelf cooling systems work well but are grossly power-inefficient for use in data centers.
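A toy controller illustrating the multi-step idea is sketched below: run at the most efficient step that can still meet the cooling load. The step capacities and efficiency figures are made-up placeholders, not values read off the chart referenced above.

```python
# (step, cooling capacity in kW, coefficient of performance); step 2 assumed most efficient
STEPS = [(1, 20, 2.8), (2, 40, 3.6), (3, 60, 3.1), (4, 80, 2.9)]

def choose_step(required_kw: float):
    """Pick the most efficient step that still covers the required cooling load."""
    feasible = [s for s in STEPS if s[1] >= required_kw]
    return max(feasible, key=lambda s: s[2]) if feasible else STEPS[-1]

print(choose_step(35))  # (2, 40, 3.6): step 2 covers the load at the best efficiency
print(choose_step(70))  # (4, 80, 2.9): only step 4 can cover the load, so it runs
```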

The internet is currently moving towards cloud computing. This essentially means that data centers will continue to grow and soon the power consumed by data centers will pile pressure on the grids and the environment. The use of green energy sources and innovation will go a long way in reducing the contribution of the Internet to global warming.

Broadband as a value add? Yes, it's about the eyes.

June 5, 2015 Leave a comment

The days of ISPs making super profits are long gone. The margins being created by ISPs the world over are thin. Should Internet connectivity prices go even lower, due to either more competition or legislation, ISPs stand to create even thinner margins in future. There will therefore be little, if any, revenue- or profit-oriented incentive for ISPs to be in business.

Having worked in the industry for about 12 years now (that's eons in Internet growth terms), I have seen the ISP industry evolve both on the technology front and in its value proposition to customers. The liberalization of the sector in most countries has also attracted many investors into the industry, creating a stiff, competitive market and bringing with it diminishing returns on investment. Small ISPs are dying or being bought out as they cannot stay afloat. Large ISPs are also merging to create economies of scale and survive.

With coming projects such as Google's Project Loon and Facebook's Internet.org (and its subsequent Internet-by-drones project), and many more that aim to provide nearly free Internet to the world's unconnected, there will soon be no financial incentive for a commercial ISP to go into business at all.

So what do ISPs need to do?

There has been a lot of talk in the market about value addition and how ISPs should stop selling 'dumb pipes' and offer value over and above just the internet pipe. All this has already happened, and at the moment ISPs have been outmaneuvered by OTT providers who deliver these value-addition services over the very links the ISPs provide to their customers. For example, some years ago all ISPs were offering VoIP as a value add; now, with the likes of Skype and WhatsApp calls, ISP-provided VoIP is a dud. Another example is dedicated hosting at ISP-provided 'data centers' (a room with access control and cooling 🙂 ); with the maturity of cloud services, such a service is also no longer appealing to customers. ISPs are at the end of their rope.

If you carefully analyze all recent ISP mergers and buyouts in Africa (and beyond, if you have the time), you will realize that buyout decisions are based less and less on an ISP's profitability, revenues or cash flow position. They are now based on subscriber numbers. But what is the commercial point of buying an unprofitable or low-revenue business? Answer: it's about the eyes.

ISPs are no longer, and will no longer be, about revenues derived directly from the internet pipe, but about indirect revenues. Sources of these indirect revenues include online advertising, OTT services, and content delivery and purchase. This is the very reason why giants such as Google and Facebook have entered the ISP business: it's about the eyes. A loss-making ISP with many subscribers is now more attractive to buy than a super-profitable one with few subscribers. Unbelievable, isn't it?

End-to-end control

OTT operators such as Facebook have been blamed by traditional ISPs for using the ISPs' network infrastructure to do business with the ISPs' end users. Attempts by ISPs to make these operators pay for delivery of content have been met with opposition, due to fears that such an arrangement could result in a tiered internet and, with that, the demise of the net neutrality that has been one of the key characteristics and a supposed catalyst of the internet's development. Attempts to camouflage net-neutrality-flouting arrangements through ISP-led offers, such as Facebook's Internet.org, where users on certain networks access Facebook and WhatsApp for free outside their data plans, have also met resistance. Being forward thinking, these companies, in my opinion, foresaw the resistance to their initiatives to offer their content for free by paying the traditional ISPs; this is why they are all rushing to roll out their own infrastructure to provide free or nearly free internet to the masses. At the moment, other than the satellite/balloon projects being tested in New Zealand, Google is already rolling out high-speed fiber (FTTH) in select American cities. This will give them end-to-end control of the broadband supply chain and therefore quell concerns about the creation of a tiered internet. This of course assumes they will come up with a way to show regulators that they have fair access policies for all third-party traffic.

The future

As I see it, the traditional ISP will die a natural death if it does not adapt to the coming changes. What was once a value add will become the product, and vice versa. Internet broadband will be a value add to content and OTT services: a content provider such as Facebook or Google will offer you free internet to access its content. As someone once said, if the product or service is free, you are the product. The free internet will come with privacy strings attached, to enable advertisers to track your habits and serve more targeted adverts. This targeting is getting more accurate, and spookier, if the tweet below is anything to go by.

[embedded tweet]

The use of browser safety features to disable cookies won't work, as companies such as Google are now using what is known as device fingerprinting to identify you. Device fingerprinting works on the basis that your computer's OS, installed programs (and the dates they were installed), CPU serial number and hardware configuration (RAM/HDD/attached peripherals), applied to an algorithm, give your computer a unique identifier. Your computing device is therefore unique and can be tracked without the need to set cookies.
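A minimal sketch of the fingerprinting idea: hash a handful of reasonably stable device attributes into a single identifier, with no cookie involved. The attribute list and hashing choice here are illustrative; real trackers combine many more signals (fonts, canvas rendering, time zone, plugins and so on).

```python
import hashlib
import json

def device_fingerprint(attributes: dict) -> str:
    # Canonicalize the attributes so the same device always hashes to the same ID.
    canonical = json.dumps(attributes, sort_keys=True)
    return hashlib.sha256(canonical.encode("utf-8")).hexdigest()

device = {
    "os": "Windows 10 build 1709",
    "installed_programs": ["Firefox 57 (2017-11-14)", "VLC 2.2.8 (2017-08-23)"],
    "cpu_serial": "BFEBFBFF000306C3",
    "ram_gb": 8,
    "peripherals": ["USB mouse", "HP LaserJet"],
}

# Same attributes on every visit -> same identifier, no cookie required.
print(device_fingerprint(device))
```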