
The Need For Corporate Governance In The Telecommunications Sector

May 14, 2016

When I got my first job, I was over the moon to be working for a subsidiary of the then largest telecommunications firm in the world: WorldCom. At that time, about 70% of all Internet traffic flowed through WorldCom's infrastructure. Our letterhead had 'A WorldCom Company' under the logo; however, a few months later that statement changed from 'A WorldCom Company' to 'An MCI Company'. In a span of about 5 months, WorldCom had filed for Chapter 11 bankruptcy protection (at that time the largest in American corporate history) and emerged as MCI Communications after a thorough reorganization.

Things started going downhill for the firm when the Securities and Exchange Commission (SEC) sought information about accounting malpractice at the company, specifically how the then CEO Bernard Ebbers managed to obtain over $400 million in personal loans from the company. The SEC investigation unearthed further accounting malpractice, to the tune of $11 billion, which management had covered up.

All this happened at a time when the Internet was going mainstream as the preferred communication platform for many people in the world. The WorldCom/MCI scandal meant that despite being at the forefront of the Internet revolution, the company missed the future it helped create. Smaller competitors who had no chance of ever beating WorldCom in the long-distance data carrier market ate its lunch. The company missed a massive opportunity due to the greed of a few people in the organization. The American "Hall of Shame", which has the likes of Enron, is actually populated by many telecommunications companies such as Tyco, Adelphia Communications and Global Crossing. The common denominator among all these hall-of-shamers is the lack of corporate governance to protect the corporation and its customers from action (or inaction) by staff that jeopardized the company's existence.

Here in Kenya, the lack of properly instituted corporate governance practices has had its fair share of victims. We mostly know of the mainstream ones such as Chase Bank of Kenya, Imperial Bank and Dubai Bank. Lack of corporate governance does not discriminate, and has also affected many SMEs in addition to many large corporations. Worth noting, however, is that corporate governance frameworks for SMEs are somewhat more focused on ex post interventions, while in large corporations they are more about ex ante guidelines. The latter is more difficult to institute because external regulations (if present) need to be taken into consideration; this is especially true for financial institutions and telecommunications operators.

The need for corporate governance guidelines, and for mechanisms to monitor their application and effectiveness, has never been more relevant in Kenya than today. With the corruption 'epidemic' sweeping the country and impacting its economic outlook, many organizations are implementing measures to ensure that employees, suppliers and other stakeholders behave in a manner that is in the best interest of the company. To make sure these guidelines are effective, many organizations have also set up mechanisms to measure their performance, such as internal audits and scorecards. These measure how well the companies are meeting the governance standards set, as well as other socio-economic and environmental parameters: how well the company adheres to human rights and ethics, and the impact of its commercial activities on the environment.

Corporate Governance in Telecommunications

As highlighted above, the hall of shame is full of telecommunications companies that flouted governance principles in one way or another. Most of the past victims fell to falsified books of accounts. Future victims, however, will fall to corruption through theft and kickbacks by people lower in the company structure, resulting in non-performance. The days when theft was by the CEO and CFO are long gone; the future will be littered with companies brought down by ordinary members of staff with influence over budgets or assets. This poses a challenge especially for telecommunications firms in developing countries such as Kenya, which are experiencing rapid growth in revenues and expenditure. Rolling out telecommunications products and services is very capital intensive, and the large sums of money involved create a situation where the parties involved will try to use the process to unlawfully enrich themselves through inflation of costs, favouring suppliers who are willing to give kickbacks, non-performance and anti-competitive behaviour.

The only example of a local telecommunications company that has instituted corporate governance structures and reporting processes is Safaricom. These have so far been effective, based on their annual sustainability reports that are available here. These reports show a deliberate effort to ensure that all actions taken by stakeholders are in the best interest of the company while remaining socially and environmentally responsible.

Governance and Transparency

Making these reports public also enhances the transparency of the process because it makes all stakeholders party to the results of their adherence. Because staff are stakeholders and also subject to the governance guidelines, an external independent auditor is usually appointed to measure how well the firm has met its governance and ethics obligations in addition to other sustainability goals. The result is usually a detailed report.

Because some of these reports directly mention individuals or organizations involved in improper conduct during the discharge of their duties, they are handled in confidence and are mostly handed to C-level management, and sometimes the board, for discussion and analysis. The analysis can sometimes result in drastic punitive measures being taken against the persons or organizations mentioned.

As you can guess, some effort to frustrate the implementation of action points from the reports' analysis, by people directly or indirectly mentioned in them, is usually observed. This is the case now, where some people and external suppliers claim to have been unfairly treated by the operator. This claim is based on a confidential audit report, yet to be discussed by Safaricom management, that was leaked to the public. The fact that someone within the organization (Safaricom or the auditors, KPMG) leaked the report shows that there exists a group of individuals out to malign the company or settle personal scores over lost opportunities to enrich themselves.

With telecommunications services becoming an essential part of economic development, and of the democratic space as the main means of disseminating free speech in many countries, specific regulation is usually applied to modify the behaviour of telecom operators. These regulations are usually aimed at ensuring a level playing field among operators and ensuring that end users get value for their money and access to quality services. They stop short of institutionalizing operational procedures for the operators. As the market matures and becomes more competitive, the regulator's role transforms into that of a facilitator.

However, nothing stops the regulator from going further and spelling out how these companies are run. This is especially true in countries where operators are deeply involved in malpractice and anti-competitive behaviour. A case in point is Nigeria, where the regulator spelled out mandatory corporate governance principles for all operators to abide by, because internal ethical and governance guidelines either never worked or never existed. The result was a chaotic market fraught with poor service, non-compliance with regulation, corruption and anti-competitive behaviour.

The Kenyan regulator has done a great job of creating an open and level playing field and is now slowly moving away from direct regulation, letting market forces dictate the behaviour of operators. However, if the operators lack a strong corporate governance framework, market forces will work not to the advantage of the consumer but to the advantage of a few corrupt individuals who work for the operators or deal with them as suppliers. This danger is clearly visible to the CEO, and this is why he has spent the better part of the last few years instituting sound ethical and governance principles in the organization. This process has had its fair share of victims who were found guilty of malpractice: if the guilty party was a member of staff, disciplinary action or dismissal was carried out, and if it was a supplier, commercial ties were severed. So far, Safaricom has fired 169 and disciplined 37 members of staff for malpractice.

In their book "Competing For The Future", Prof. Gary Hamel and the late Prof. C.K. Prahalad warn of CEOs who, because of the past and current success of their organizations, tend to believe that they can copy-paste into the future what brought them success in the past. They also warn of CEOs who are blinded by current success and believe it is a result of their Midas touch. These CEOs end up believing that what they don't know isn't worth knowing; a CEO with hubris is the worst situation an organization can find itself in. The leaked audit report was evidence of the forward thinking of the CEO and management, who didn't assume that just because massive profits were being made, everything was OK. It shows that Safaricom was right to commission the report and discover that they risk losing their position as market leader if malpractice continues in the organization. It might seem that the monies found to have been lost in the audit revelations were too little to affect or cripple operations. But the power of compounded incremental change can be profound and sometimes outright dangerous: what seems too little to affect the short term can be very big in the long term. Malpractices need to be nipped in the bud before their effect causes irreversible damage. Nothing sums this up better than the 1.01 law.


A 1% daily improvement or 1% daily deterioration can have a profound effect in a year
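The arithmetic behind the "1.01 law" is easy to verify; a quick sketch in Python compounds a 1% daily change over 365 days:

```python
# The "1.01 law": compound a 1% daily change over a year.
daily_improvement = 1.01 ** 365    # ~37.8: nearly 38x better after a year
daily_deterioration = 0.99 ** 365  # ~0.026: worn down to ~3% of the start

print(f"1.01^365 = {daily_improvement:.2f}")    # 37.78
print(f"0.99^365 = {daily_deterioration:.4f}")  # 0.0255
```

A gap of roughly three orders of magnitude opens between the two curves in a single year, which is why small, compounding malpractices matter.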

Will Safaricom Be Declared a Dominant Operator?

April 23, 2016

 


Last week, the Communications Authority said that its self-imposed March deadline to create clear guidelines on how it handles dominance of an operator had lapsed. This was occasioned by the failure to find a suitable international consultant to carry out a research study that would assist the Authority in identifying and developing several key market interventions for managing the effects of a dominant player in the market. It is worth noting that the issue of dominance cuts across the broadcasting, postal and telecommunications sectors. A finding of dominance must be based on the context and circumstances of the relevant market, and this is why the Communications Authority is engaging a consultant to study the market: it cannot go ahead and declare an entity dominant, or in abuse of its dominance, without this study.

Is dominance a bad thing?

Before I answer that question, I would first like to define dominance. Unfortunately, because no local guidelines are in place, there is no clear and detailed definition of dominance from a Kenyan perspective, other than a brief mention in section 84W of the Kenya Information and Communications Act (KICA). However, internationally recognized definitions do exist.

The European Commission defines dominance as "a position of economic strength enjoyed by an undertaking which enables it to prevent effective competition being maintained in the relevant market by affording it the power to behave, to an appreciable extent, independently of its competitors, customers and ultimately consumers". An operator can become dominant by virtue of a well-implemented growth strategy, and there is therefore nothing wrong with being a dominant player; it is the abuse of this dominance that attracts attention from regulators.

If an operator occupies a dominant position and is declared dominant by way of a gazette notice as per the KICA, several tests can be conducted to see if they are likely to abuse this position. One key test is the existence of barriers to entry in the market in which the operator is dominant: it could be that they are dominant because barriers to entry are too high for new entrants to offer effective competition. It could also be that they are dominant because no other investor is interested in that market, better returns being available elsewhere, despite low barriers to entry. The other test is whether the operator possesses what is known as Significant Market Power (SMP). The European Commission recognizes SMP when an operator controls more than 25% of the market it operates in, assuming a fully competitive market; in countries transitioning from a monopoly (like Kenya) the threshold is usually set at 65% of market share (KICA section 84W, however, mentions 25% in relation to determining the dominance of an operator, not in explicitly defining whether an operator has SMP). It should be noted that SMP designation is simply a trigger for the application of behavioural or structural conditions by the regulator, and not necessarily a prerequisite for dominance.

The abuse of dominance can only occur if the dominant operator engages in behaviour that is anti-competitive as recognized by law. This abusive behaviour must be harmful to competition, to consumers, or to both.

Competition Authority or Communication Authority?

In the middle of last year, there was confusion over who, between the Competition Authority and the Communications Authority, should deal with anti-competitive behaviour by a dominant operator in the telecommunications sector. I did some research on this and came to the conclusion that it is the Communications Authority's mandate to deal with any ICT operator abusing their dominance. Below are my reasons for this conclusion.

Whereas the Competition Authority deals with all commercial forms of competition across all sectors, its mandate can be said to forbear when it comes to telecommunications, postal services and broadcasting. The main difference in how the two authorities deal with competition is that the Competition Authority mostly acts retrospectively on raised complaints of anti-competitive behaviour (ex post regulation), while the Communications Authority behaves in a forward-looking manner and tries to prevent anti-competitive behaviour by implementing government policy through regulations that modify the behaviour of operators (ex ante regulation). Competition policy is typically aimed at preventing market participants from interfering with the operation of competitive markets, while telecommunications, postal and broadcast regulation often manipulates market circumstances and operator behaviour to achieve public goals. In short, the Competition Authority controls the market for commercial interests, while the Communications Authority controls the market in the public interest.

One point worth noting is that telecommunications, postal and broadcast operators in a regulated environment can use what is known as the 'regulated conduct defence' to avoid falling under the control of the Competition Authority. Under this defence, operators are governed by regulations deemed to be in the public interest, and any activities they carry out within this regulated environment cannot attract liability under general competition laws. The defence is, however, not very applicable where the telecommunications, postal or broadcast sector is highly competitive and the regulator forbears from regulation, letting market forces do most of the self-regulation; in such circumstances, competition laws can be applied to telecom and broadcast operators, as is the case in the USA and the EU.

An Analysis of Safaricom’s position in the market

As per the 2015 Q4 sector statistics, Safaricom controls 64.7% of mobile voice subscribers, 63% of mobile data subscribers and 71.7% of mobile money users. The first step in determining whether Safaricom is a dominant operator involves defining the market it operates in and examining whether that market possesses barriers to entry that could have caused Safaricom to become dominant. The nature of our licensing regime means that Safaricom's geographical and product market is the same as that of its fellow licensees in the same licence category. It is very clear from the figures above that Safaricom's large market share triggers the need to analyze whether it is dominant by evaluating whether it possesses market power, a key factor in dominance determination. Market power can be seen in the following (see the sketch after this list):

  • Profitability. Safaricom's profitability is much higher than that of the rest of the competitors combined.
  • Pricing behaviour. Safaricom's prices are not the lowest in the market, and they do not react to competitor price reductions, promotions or offers.
  • Vertical integration of its operations. Safaricom tightly controls nearly the entire value chain for delivering its products and services.
  • Bundling. Safaricom bundles both competitive and non-competitive products; it also bundles its local loops and essential-facility capabilities with its products (e.g. selling Internet access (a product) via a WiMAX/fiber network it owns and controls (the local loop), with competitors unable to use that WiMAX/fiber network to sell their own Internet services).
  • Barriers to market entry that stop competitors from taking advantage of its high prices. This is the point I want to focus on below.
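As a rough illustration of the SMP screen discussed earlier, here is a sketch in Python using the 2015 Q4 shares quoted above; the thresholds are the EC's 25% presumption and the 65% figure commonly used in markets transitioning from a monopoly. An actual finding would need a full market study.

```python
# Rough SMP screen against the thresholds discussed above. Illustrative only.
EC_THRESHOLD = 25.0          # EC presumption in a fully competitive market (%)
TRANSITION_THRESHOLD = 65.0  # common threshold in markets exiting a monopoly (%)

shares = {"mobile voice": 64.7, "mobile data": 63.0, "mobile money": 71.7}

for market, share in shares.items():
    flags = []
    if share > EC_THRESHOLD:
        flags.append("exceeds EC 25% trigger")
    if share > TRANSITION_THRESHOLD:
        flags.append("exceeds 65% transition-market trigger")
    print(f"{market}: {share}% -> {', '.join(flags) or 'below thresholds'}")
```

On these numbers only mobile money crosses the 65% line, while voice sits just below it; this is exactly why the SMP trigger alone cannot settle the dominance question.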

Barriers to Market entry

One of the key factors in determining whether an operator is dominant is what happens if they increase the prices of their products and services. If barriers to market entry are high, no new entrant can easily come in, offer lower prices and take customers away. If barriers are low, new entrants can easily come into the market, offer cheaper pricing and make the incumbent regret the price increase through loss of customers. In my analysis, barriers to entry in Kenya's mobile telecommunications sector are very low, especially with the advent of Mobile Virtual Network Operators (MVNOs) and the proposed infrastructure sharing regulations coming into place. This means the Communications Authority has done a splendid job of making it easy for competition to be offered to Safaricom on voice, data and mobile money, should an investor find it attractive to do so. This factor alone, I believe, is sufficient to prevent the regulator from declaring Safaricom dominant, or even from terming some of their actions (like bundling) an abuse of a dominant position. The fact that end users can take advantage of Mobile Number Portability (MNP), move to the competition and enjoy lower-priced services makes it even easier for competitors to overcome customer inertia and win customers over. The big question, then, is why isn't the competition significantly eating into Safaricom's market share?

The answer could lie in Safaricom's extensive network coverage, which is unmatched. But the new infrastructure sharing laws will poke holes in this answer, as they will allow any other mobile operator to use Safaricom's network under a national roaming agreement, enabling them to offer affordable services across the country wherever there is Safaricom coverage. They will also allow competitors to use Safaricom's local loops to offer service. This means that any operator competing with Safaricom will now be able to cover the country just as Safaricom does, so there will be no excuse for any customer not to move to a competing operator for better or cheaper service should they wish to.

So with the availability of MNP, infrastructure sharing regulations, MVNO licensing and many other playing-field-levelling regulations set by the regulator, I believe it will be very hard for the Communications Authority to declare Safaricom a dominant operator, or one abusing a position of dominance.

 Lion image (c) http://www.daler-rowney.com


Is Universal Access/Service a Government or Operator Obligation?

March 30, 2016

Second to creating a level playing field for all ICT operators, one of the widely accepted objectives of ICT sector regulation in developing countries is to promote universal access to basic ICT services. In developed economies, the objective shifts from universal access to universal service. The difference is that access promotes the notion that every person should have reasonable means of accessing basic ICT services (like a phone booth at the local shopping centre), while universal service is about promoting and maintaining the availability of a variety of ICT services to individuals and households. The two terms are combined into what is known as universality.

It is clear that the universal access definition has been overtaken by events, especially in the wake of the mobile communication boom in many developing countries. To a very large extent, it is no longer about ensuring access but about ensuring that a variety of services are delivered to the end user.

Governments' need to make universality a reality stems from increasing evidence that access to ICTs improves the overall socio-economic well-being of citizens. However, with the wave of privatization of ICT services such as telecommunications, the operation of telecoms moved from social-welfare-minded government ministries to profit-minded private entities. When privatization took place in the early 1990s, new entrants focused on providing services to profitable market segments based on geography, disposable income and population density (which improves economies of scale and scope). The result was that regions or populations that were not profitable were at risk of being left out of the ICT revolution. To prevent this from happening, regulators were quick to include mandatory service obligations (MSOs) in the licences issued to new entrants. These obligations required operators to extend their networks (and in effect their services) to areas where the cost of providing the services and maintaining the networks was higher than the revenues realized from those areas. This seemed to be the only practical solution to connect the 'unprofitables'. Other solutions were open to operators, such as cross-product subsidies (which haven't worked well because the regulator simultaneously enforces cost-based pricing, making cross-product subsidies difficult to implement). It is worth noting that the definition of universal service varies from country to country; in Finland, for example, it includes the right of every individual to a 1 Mbps broadband Internet connection, in addition to other services.

In addition to the measures above, the regulator in Kenya also developed a Universal Service Fund (USF) framework which, according to page 1 of the framework draft document, was meant "to complement private sector initiatives towards meeting universal access objectives". To a keen eye, the document's title and the aim quoted above are in conflict.

If indeed the aim of the USF was to complement the private sector, why is the same private sector being obligated by regulatory instruments to fund it?

The International Telecommunication Union (ITU) lists many ways in which a USF can be funded; one of the more popular is budgetary allocation from the government. Others are the use of Access Deficit Charges (ADCs) and levying a percentage of the monies collected by operators in their business operations for the USF kitty. The ITU states that should a regulator go the revenue levy way, it must not place an unfair burden on the operator in how these levies are collected. For example, the regulator cannot levy a percentage for every call minute or every MB of data used by subscribers; this would make accounting difficult, hence the approach of levying a percentage of operators' total revenues, which is easier and more transparent.

Several countries, including Chile and Peru, have implemented USFs funded by government budgetary allocations. Incidentally, the same countries are hailed as success stories of how universality has improved the lives of citizens. This is because the desire to offer universal service or access is a social obligation of the government, not of private firms. It is in the government's interest to connect these otherwise unprofitable regions and people, and it can easily do so from the budget.

Chile's approach has been an interesting case study of how, if done right, USFs can work to meet government objectives. The regulator there took the concession path, having operators bid to provide services on a concession basis, and would then pick the lowest bidder. Most of the bids came in 50% below the budgetary allocations, meaning the approach was financially efficient. Proper policies were put in place to define the penalties, rights and obligations of each winning concessionaire to ensure they delivered.

This is the approach the Kenyan regulator should take, instead of levying operators a percentage of their hard-earned revenues. Under the ITU definition, the operators can claim that the regulator has placed an unfair burden on them, given that they are not directly responsible for the economic development of citizens (whether through ICTs or other means). Universality is a social program and therefore falls squarely on the arms of government. Profitability, or the lack of it, from universality is a secondary consequence whose impact cannot be directly measured.

Proponents of operator-funded USFs argue that unseen benefits, such as the multiplier effect of connecting the unprofitable, directly benefit the operators. If that is the case, then the decision to connect these people should be a commercial decision by the operators and not a licence requirement. An example of the multiplier effect: I (being of better economic means and living in the city) can now use airtime (read: revenues) to call my rural relatives, who are now connected thanks, supposedly, to the implementation of universality. My calling them, in addition to the other people I normally call, adds revenues to operators. The operator should therefore connect my rural relatives because I will call them, not because they will call me. This is a straightforward commercial decision.

Obliging ICT operators to fund the USF is unfair because the socio-economic benefits accrued from connecting the population are felt across several fronts, such as improved health, education and increased commercial activity, and not just by way of improved operator profits, if any. Universality's key outcome is not purely an ICT one, and making only ICT players fund it amounts to the unfair burden on operators that the ITU warns against.

It is my opinion, therefore, that the current approach to universal service funding should be revisited and, if possible, a new method of funding through direct government budget allocation be adopted. This is already how roads, hospitals and schools are provided. The regulator needs to revisit this for the following reasons:

  • The current market structure, where one operator makes most of the revenues, is unfair to that operator, as they will contribute the most to the fund. There are also no clear guidelines on how the funds will be utilized, leaving room for abuse.
  • The law fails to accommodate ICT industry players in the Universal Service Advisory Council, meaning they have no say over monies they contributed. This technically makes it a tax.
  • Operators are already extending their networks to seemingly unprofitable regions without the government pushing them. Advances in technology and convergence are making what universality defines as unprofitable commercially viable, because it is now much cheaper to build and scale networks. The USF objectives need to be reviewed or done away with altogether.

Should the regulator remain adamant about maintaining the USF for various unreasonable and political ends, operators have recourse in the international courts, as Kenya is a signatory to the WTO General Agreement on Trade in Services (GATS), especially the agreement on basic telecommunications.

Netflix experience on Ka-Band VSAT in Kenya

January 8, 2016

Yesterday I, like most people here, woke up to the news that Netflix, the American multinational provider of on-demand Internet streaming media, has expanded into several countries including Kenya. Social media reaction, in my view, was a tie between those who think the newcomer will 'disrupt' the market currently dominated by Multichoice's DStv and those who don't. The jury is still out on what exactly 'disruption' means as applied in that discussion.

My views on their foray into Kenya aside, I decided to test the service on my home VSAT link. This was after I read up on how the service works, just in case I had made any wrong assumptions. There, I found that the minimum recommended bandwidth is 3 Mbps for SD-quality video and 5 Mbps for HD-quality video.

The particulars of the link are as follows:

  • Ka-band service off the Avanti Hylas-2 satellite at 31 degrees East (somewhere above Uganda)
  • 74 centimeter elliptical dish with a 1 watt Ka-band radio
  • Hughes HN9260 satellite router
  • 15 Mbps download and 2 Mbps upload speed
  • Netgear AC2350 Nighthawk X4 WiFi router

With the VSAT kit, I achieved a strong enough signal to receive a DVB-S2 carrier at 8-PSK 8/9 on the downlink and to transmit on a TDMA/FDMA return carrier of 2048 ksps at QPSK 4/5 from the remote terminal.
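As a back-of-the-envelope check (a sketch that ignores framing and other overheads), the gross information rate of that return carrier follows from the symbol rate, the bits per symbol and the FEC rate:

```python
# Gross information rate of the return carrier (framing overheads ignored).
symbol_rate_sps = 2_048_000  # 2048 ksps
bits_per_symbol = 2          # QPSK carries 2 bits per symbol
fec_rate = 4 / 5             # 4/5 forward error correction

info_rate_bps = symbol_rate_sps * bits_per_symbol * fec_rate
print(f"Return carrier: {info_rate_bps / 1e6:.2f} Mbps")  # ~3.28 Mbps
# The TDMA return carrier is shared between terminals; my plan caps upload at 2 Mbps.
```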


The 74 centimeter dish mounted on a perimeter wall, with a clear view of the western sky. From Nairobi the look angle is a favourable 88.5 degrees.

I registered an account, selected a 58-minute SD-quality documentary titled "Rise of the Drones" and proceeded to view it. It took about 3 seconds to open the stream, and the streaming started.


The Netflix main screen opened on the Firefox browser

The picture quality was as expected for an SD video on my old laptop; I could not, however, figure out how to check the video's resolution on the stream.

Video quality was consistent throughout the session, with no downward adjustment of picture quality.

I watched it to the end without a single "Netflix and chill as it buffers" moment, and the stream's download indicator stayed about 5 minutes ahead of the play indicator throughout.


The progress bar (in a lighter shade of grey, ahead of the red play-duration bar) showing the roughly 5-minute lead.

The VSAT link's Cacti graph for the 58-minute session showed that the stream consumed an average of just below 3 Mbps, with a peak of 3.7 Mbps. Calculating the area under the graph, the total downloaded data for the session was about 1.3 GB.


Cacti graph utilization during the 58 minutes of documentary streaming.

The above results mean that in a multi-viewer scenario, where more than one person is using Netflix on the LAN, the VSAT's 15 Mbps capacity can support 4 concurrent SD viewers without a problem, limited only by the WiFi router's capability.
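The arithmetic behind those two claims is simple; a short sketch using the figures read off the Cacti graph:

```python
# Sanity-check the session volume and the concurrent-viewer estimate.
avg_rate_mbps = 3.0   # average stream rate (the graph showed just below 3 Mbps)
duration_s = 58 * 60  # 58-minute documentary

total_gb = avg_rate_mbps * 1e6 * duration_s / 8 / 1e9
print(f"Approx. data downloaded: {total_gb:.2f} GB")  # ~1.31 GB, matching the graph

link_capacity_mbps = 15.0
peak_rate_mbps = 3.7  # observed peak of a single SD stream
print(f"Concurrent SD viewers: {int(link_capacity_mbps // peak_rate_mbps)}")  # 4
```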

Update: I did Netflix for the entire day on Saturday the 9th (via an HDMI streaming dongle on the TV) with my kids, keeping our usual DStv-style TV schedule (punctuated with sessions of outside play, reading/study, quiet time and no TV during meals). We had consumed 19.4 GB by the time we went to sleep.

Angani outage. What really happened

November 8, 2015

On Thursday, many subscribers of the local Angani cloud service noticed prolonged inaccessibility of their services hosted at its two data center locations. The Angani cloud service was down. Being one of their customers, I was also affected.

For most of the morning, social media was filled with questions about what could possibly have happened to cause such a massive outage. It was later in the day that word started going around about what might be happening at Angani. Many speculated that the recent low-key exit of one of its founders, the CEO Phares, could be directly correlated with the outage. At the time, that was mere speculation. Bloggers such as Kachwanya wrote that boardroom wrangles had led to the ouster of the CEO.

Since then, not much has been said about what exactly happened; most of the discussion has been about who has any information on what happened.

This is what happened

From what I gather, problems started when a new group of investors put their money into the company and took some seats on the board. There was tension among the co-founders, of whom only Phares and Brian came from a cloud computing background. This tension manifested itself in daily operations and led to the ouster of Phares from the CEO role.

Brian could be said to have single-handedly set up the Angani infrastructure. More engineers were hired to work under him, but their level of experience meant that Brian still ran most of the platform. Because Angani (unlike many cloud providers) did not own a data center and depended on third-party hosting, Brian built a system hardened against physical access, this being a commercial shared data center.

When the two left, the new management team sent communication to the effect that the two needed to hand over the passwords. The two said they would need a signed document showing that they had handed over the passwords and were therefore free of any liability. The board member declined. The two then gave the passwords to their lawyer and informed the Angani board to pick them up from there upon signing the chain-of-custody forms. The board declined and instead brought in an external consultancy called ShapeBlue to attempt to hack the system and regain control. They also proceeded to sue both Brian and Phares. One of the Angani communiqués indicated that ShapeBlue was brought in after the crash; in fact, they were brought in earlier, to hack the system.

Due to the security Brian designed into the systems, the consultants managed to change the root password but lost access to the system in the process. Because of this, even the passwords that could have helped them, had they picked them up from the lawyers, were now useless.

Despite the lawsuits, the two cut short their holiday in Malindi because they did not have laptops with them and so could not help remotely. They got to Nairobi only to be informed that their help would not be required immediately, which was a little disrespectful. Brian has been willing to help, but the lack of goodwill from the Angani team keeps him away.

The Angani team has refused to sign anything with Brian, who is willing to help, and is propagating the theory that the system was crashed by Brian rather than being the result of an attempted break-in by ShapeBlue. The only way out of this situation, I believe, is for the two groups to sit down with a mediator and find a way to restore service and save the now-tarnished local hosting scene.

(c) image techweekeurope.co.uk

Data centers and the environment: The case of Facebook

September 24, 2015

A Facebook data center engineer

There has been increased uptake and use of the Internet, especially social media, around the world. This has led to rapid deployment of infrastructure to support the increased demand.

This infrastructure consumes power. It is estimated that the data centers that power the Internet consume about 1.3% of the world's total electric power. This might seem small until you consider that Facebook alone consumed about 532 million kWh in 2011 (it must be close to double that now). At current Kenyan electricity tariffs, that is about 10.6 billion shillings in power bills. Google consumed just over 2 billion kWh during the same period to power its servers worldwide. With most of this power coming from coal plants, data centers are attracting the attention of groups such as Greenpeace, which launched campaigns such as 'Unfriend Coal', geared towards forcing Facebook to lower its dependence on coal to power its service.
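For the curious, the shilling figure works out as follows (a sketch; the blended tariff of about KES 20 per kWh is my assumption, back-derived from the quoted numbers):

```python
# Rough power-bill estimate at an assumed Kenyan tariff.
facebook_kwh_2011 = 532e6
assumed_tariff_kes_per_kwh = 20.0  # assumption: blended commercial tariff

bill_kes = facebook_kwh_2011 * assumed_tariff_kes_per_kwh
print(f"Estimated bill: {bill_kes / 1e9:.1f} billion KES")  # ~10.6 billion
```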

With pressure piling on data centers to lower their carbon footprints, innovation and new ways of thinking are needed. One of the low-hanging fruits is to build new data centers in regions that use green energy. One of the prime locations now is Iceland, which generates all of its power from geothermal steam and hydro. The cool climate also means that natural cold air, about 5.5°C on average, is simply circulated in the data center to cool the equipment, as opposed to forced cooling with air conditioning systems. A server operating out of Iceland is therefore cheaper to run and has near-zero carbon emissions attached to it. According to Verne Global's findings in 2013, the 10-year energy cost (the length of a standard data center hosting contract) for 1 megawatt of IT load in Keflavik, Iceland is near $3.5 million, compared to nearly $23 million in London, $20 million in Frankfurt, $12.5 million in Chicago and around $6 million in Oslo, Norway. A further bonus is that Iceland's geographical location makes latency from a server there to Europe and the US nearly equal, at about 40 ms.

However, the likes of Facebook, having already invested a lot of money in data centers in the US, cannot simply cart them off to Iceland. They have therefore come up with innovative ways to lower their data center energy costs. It is estimated that about 25% of power in a data center goes to cooling and 10% is wasted in the conversion from AC to DC and back to AC, with the IT load taking 46% (25% servers, 8% network and 13% storage). There is thus a huge opportunity to trim both the IT load portion and the cooling portion.

IT load efficiency

Facebook did some research and found that servers running low-level loads use power less efficiently than idle servers or servers running at moderate or greater loads. In short, a server should be kept either idle or at moderate/high load, never at low load. The traditional method of load distribution across a group of servers is round robin, which is efficient with computing resources but inefficient with power. Facebook developed a new way of doing things known as Autoscale.

Autoscale is designed to distribute incoming requests to the servers such that each is either idling or running at medium/high capacity, not in between; it avoids assigning workloads in a way that leaves servers running at low capacity. This was informed by a test done by Facebook engineers, who found that a server in idle mode consumes about 60 watts of power. If some light low-level load is applied, consumption jumps from 60 to 130 watts. However, if the same server is run at medium or higher loads, consumption is about 150 watts: only a 20-watt difference between low load and high load. This means it is more energy-efficient to give an already moderately busy server some more load (20 extra watts) than to give that load to an idle server (70 extra watts). Autoscale also reduces the number of servers sharing the load so as to put as many servers as possible into idle mode. In low-traffic periods, such as the American midnight, Autoscale dynamically adjusts the size of the active server pool so that each active server gets at least a medium-level CPU load; servers not in the active pool don't receive traffic.
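A toy sketch of the idea (my own simplification, not Facebook's implementation): pack the incoming load onto the smallest pool of servers that keeps each active server at a medium-or-higher load, and let the rest idle.

```python
import math

# Toy Autoscale-style consolidation, using the power figures quoted above:
# 60 W idle, 130 W at low load, 150 W at medium/high load.
SERVER_CAPACITY = 100.0  # arbitrary load units one server can handle
MIN_ACTIVE_LOAD = 0.5    # keep every active server at >= 50% (medium) load

def active_pool_size(total_load: float, fleet_size: int) -> int:
    """Smallest pool that serves the load with each server at medium+ load."""
    # Fewest servers that can physically carry the load...
    needed = math.ceil(total_load / SERVER_CAPACITY)
    # ...but never so many that per-server load drops below the medium mark.
    cap = max(int(total_load // (SERVER_CAPACITY * MIN_ACTIVE_LOAD)), 1)
    return min(max(needed, 1), cap, fleet_size)

# Example: a midnight trough of 220 load units on a 10-server fleet.
pool = active_pool_size(220, 10)
print(f"Active servers: {pool}, idling: {10 - pool}")  # 3 active, 7 idling
```

Three busy servers draw roughly 3 x 150 + 7 x 60 = 870 W, versus 10 x 130 = 1,300 W if the same load were spread thinly across all ten; consolidation saves about a third of the power.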

The other method deployed to reduce power consumption is reducing power transformation. There is about a 10-15% loss in the transformers and rectifiers found in UPSs. In most data center setups, mains AC power is fed to a centralized UPS, which converts the AC to DC and back to AC to supply the servers. This AC-DC-AC conversion results in about a 6-12% loss. A way to lower this loss is to supply the servers directly with mains AC power, with localized UPSs on each rack that can give up to 45 seconds of backup power while the diesel generator starts in case of a power outage (a very rare occurrence in the developed world). Eliminating centralized UPSs means data centers can save about 10% of their power. Feeding direct AC power from the grid to servers can, however, be a tricky affair, because reactive components on the grid, such as the motors that power everything from escalators to coffee grinders, lower the power factor and increase reactive power. Deploying synchronous condensers in data centers lowers reactive power, which is responsible for some losses depending on the power factor of the received power. Facebook has deployed in-house, custom-made reactive power panels which try to bring the power factor as close as possible to unity. Besides improving the quality of power, Facebook's reactors also reduce harmonic distortion in the power system, which causes delays in generators kicking in when a loss of mains power is detected.

Use of 277 volts instead of 120 or 240 volts

Facebook hardware is also designed to operate at 277 volts AC as opposed to the standard 120 volts of the US mains supply. The reason is simple: with US three-phase power supplied at 480 volts, the phase-to-neutral voltage comes out not at 120 volts but at 277 volts (480 V divided by the square root of 3; you can derive this with complex phasor arithmetic). Stepping 277 volts down to 120 volts through a transformer costs about 3% in transformation losses, so operating the servers at 277 volts instead of 120 volts saves that 3%. The diagram below shows how a server's efficiency improves with the use of a higher voltage.


Hewlett-Packard server power supply efficiency as a function of load (c) Syska Hennessy Group

A server operating at 240 volts (which is what we use in Kenya) is 91% efficient at 50% load, compared to 89% for a similar server operating at 120 volts; jacking the voltage up to 277 volts improves efficiency to 92%. The reason America uses 120 volts is that in the early days of electricity, bulbs were made with carbon filaments that lasted longer when operated at 120 volts than at 230 volts, and because most electricity was used for lighting, it made sense to run the grid at 120 volts. Later, when electricity went to Europe and Asia, technology had improved and tungsten filaments could handle the higher, more efficient voltage of 240 volts.
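The phase-to-neutral arithmetic is a one-liner (a sketch):

```python
import math

# Phase-to-neutral voltage of a 480 V three-phase supply.
line_to_line_v = 480.0
phase_to_neutral_v = line_to_line_v / math.sqrt(3)
print(f"{phase_to_neutral_v:.0f} V")  # ~277 V, hence 277 V server hardware
```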

Simpler cooling and Humidity control

About 12% of cooling energy consumption goes to delivering the cold air to the point of heat rejection. By using a ductless cooling system, where cold air is delivered at the center of the data center with additional smaller cooling systems at the racks where the heat is generated, substantial power savings can be achieved.

The use of a vapor seal can also play a critical role in controlling relative humidity, reducing unnecessary humidification and dehumidification. If humidity is too high in the data center, conductive anodic failures (CAF), hygroscopic dust failures (HDF), tape media errors and excessive wear and corrosion can occur. These risks increase exponentially as relative humidity rises above 55 percent. If humidity is too low, the magnitude of and propensity for electrostatic discharge (ESD) increases, which can damage equipment or adversely affect operation. Tape products and media may also suffer excessive errors when exposed to low relative humidity.

Most equipment manufactured today is designed to draw in air through the front and exhaust it out the rear. This allows equipment racks to be arranged into hot aisles and cold aisles: rows of racks face each other, with the front of each opposing row drawing cold air from the same aisle (the "cold" aisle). This makes it easier to draw hot air out of the hot aisles before it mixes with the cold air and lowers cooling efficiency.

The other method of lowering cooling costs is the use of multi-step compressors in the cooling systems. Most traditional cooling systems simply switch the compressors on at full load when the thermostat dictates that cooling should happen. Tests of a 4-step compressor showed that compressors operate at different efficiencies at the various steps; the diagram alongside shows that the compressor in question is most efficient at step 2, so the cooling system is designed to run the compressor at step 2 most of the time. Off-the-shelf cooling systems work well but are grossly power-inefficient for data center use.

The Internet is currently moving towards cloud computing, which essentially means that data centers will continue to grow, and soon the power they consume will pile pressure on the grids and the environment. The use of green energy sources and innovation will go a long way in reducing the Internet's contribution to global warming.

India blocks access to porn. How did they do it?

August 4, 2015

Yesterday, against a Supreme Court decision, the telecom regulator in India ordered all ISPs licensed and operating in the country to block access to pornographic websites. This followed a private suit petitioning the government to block the websites as part of ridding India of its negative image as the rape capital of the world (some have suggested, albeit jokingly, that India change its name to Rapistan). According to the suit, unfettered access to pornography is responsible for the high number of rape cases in the country.

Considering that most content on the Internet is now hosted on content delivery networks (CDNs) such as Akamai, and on distributed cloud platforms, how does a country block access to pornography whose source server could be the same one hosting other, non-pornographic websites? That is, a CDN server run by a company such as Akamai could be hosting both a pornographic website and a religious website; how then is it possible to block one and not the other using common tools that block an IP or a port (port 80 or 443)? If, say, the CDN server in my example has the IP 77.220.9.1 (a random IP for illustration purposes) and hosts both the religious content and the porn on the same web server listening on port 80, then blocking the IP or the port loses access to all the content on the server, not just the pornographic content. How then did India do it?

Deep Packet Inspection

Ordinarily, most network equipment we interact with (including your home WiFi router) operates at layer 4 of the OSI model and below, and can therefore act only on layer 4 and lower attributes such as port numbers, IP addresses and MAC addresses. Due to the shared nature of most Internet infrastructure today, these tools are ineffective at selectively blocking content, which lives at the application layer of the OSI model. An appliance that operates at layer 7 and above is therefore needed. Simply blocking the IP addresses of CDNs such as Akamai would cause outages for the other websites hosted there.

These appliances are able to 'see' layer 7 traffic, so that accesses to our example server 77.220.9.1, hosting http://www.religiouswebsite.com and http://www.pornwebsite.com both on port 80, can be told apart.

The devices achieve this through what is called Deep Packet Inspection (DPI). Does that mean there is shallow packet inspection? Sort of: when a router sitting at layer 4 looks at a packet's header to see the packet's source and destination addresses, that is a form of shallow packet inspection, as it doesn't venture beyond the headers. With DPI, the appliance goes further and looks into the payload carrying the actual user content to determine what type of content the packet holds. By using unique signatures within the packet payload, the appliance can tell porn apart from non-porn content. Exactly how they do this is a trade secret.
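To make the idea concrete, here is a minimal sketch (mine, not any vendor's) of how payload inspection can tell apart two sites sharing one IP and port: with plain HTTP, the hostname travels inside the payload as the Host header, so a blocklist can match on it even though the layer 3/4 attributes are identical. (Real DPI engines go much further and match binary protocol signatures; HTTPS encrypts the Host header, which is why appliances key on other cleartext fields such as the TLS SNI.)

```python
import re

# Minimal payload-inspection sketch: both sites resolve to 77.220.9.1:80,
# but the HTTP Host header inside the payload tells them apart.
BLOCKLIST = {"www.pornwebsite.com"}  # the hypothetical names from the example above

HOST_RE = re.compile(rb"^Host:\s*([^\r\n]+)", re.IGNORECASE | re.MULTILINE)

def verdict(payload: bytes) -> str:
    match = HOST_RE.search(payload)
    if match and match.group(1).strip().decode().lower() in BLOCKLIST:
        return "DROP"    # blocked site: drop or reset the connection
    return "FORWARD"     # everything else on the same IP/port passes

print(verdict(b"GET /video HTTP/1.1\r\nHost: www.pornwebsite.com\r\n\r\n"))   # DROP
print(verdict(b"GET / HTTP/1.1\r\nHost: www.religiouswebsite.com\r\n\r\n"))   # FORWARD
```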

The appliance's signatures can be grouped in a rule (e.g. 'adult content' or 'social media') or applied individually, with signatures that detect Facebook, Twitter, Gmail and so on. These can then be attached to rules that block or admit the content. The rules can be refined further, for example to block Facebook and Twitter in an office during working hours, or to block them completely 24/7, as is the case in China, where the two social media platforms are blocked.

DPI can also identify traffic at a finer grain for more refined control. For example, the appliance might be configured to allow Facebook but block any videos shared on Facebook. It can also be used to block Facebook status posts containing certain keywords while allowing the rest of the content.

This, as you can imagine, gives immense power to any government or institution to block access to, or the posting of, content it deems unfit for public consumption. The power can also be abused by regimes to suppress access to content deemed dangerous to the regime's existence and rule, as in Turkey, where the government blocks Twitter at will when it feels threatened.


A layer 7+ appliance output showing the ability to classify content above layer 7 of the OSI model. Worth noting: all these protocols (other than HTTPS) in this output happened on port 80, but the device identifies each protocol using DPI signatures; it can even tell HTTP browsing apart from HTTP file downloads, and some appliances can tell even the type and size of the file.
