Archive

Author Archive

Ideas For CIOs and IT Managers On Securing Their Networks

November 21, 2014 4 comments

There has been a lot of talk about increased cases of cyber criminals accessing information stored on computing networks. Many event organizers have held conference after conference targeting IT managers and CIOs, ostensibly to sensitize them on the matter. Many have gladly drawn attendance cheques in favour of these conference organizers for a seat or two where they go through slide after slide on how to protect their information and data. After the conference, the usual group photo (and many selfies) are taken, not forgetting that one photo where the IT manager or CIO receives a certificate of participation from the organizers and their sponsors.

The reality on the ground is that many conference-certificate-waving CIOs still fail to implement basic measures to protect their networks and information. Their ignorance, however, is no defense, as cyber criminals continue to seek ways into their networks. These criminals try to gain access for two main reasons:

  • To steal information and data from you
  • To use your network as a launch pad for further attacks; this is mostly done by criminals to cover their tracks. A Romanian criminal attacking, say, a US bank will most likely carry out the attack from an unprotected network in Africa or anywhere else.

I would like to put the issue of cyber security into perspective based on my experience in running large networks for the last 10 years or so.

Why are you a target?

You are a target because you are connected to the public internet; it's as simple as that. As long as your IP addresses are routed over the public Internet, you will be a target. It's not because you are a bank, insurance firm, government, the Vatican or even a small two-computer CBO office in Lokichar. You will be attacked for as long as you are online.

How do you tell if you are under attack?

No, when you get attacked, you won't see your computer mouse moving on its own, opening files and spewing thousands of lines of code scrolling on your screen like in the movies. It is hard to tell you are under attack by just sitting at your PC. However, if you measure several key parameters on your network, you can tell if you are under attack (whether the attack is successful or not is not the issue here). The first thing is your firewall's CPU usage. Many firewalls are low CPU users if configured properly (I am using the term firewall loosely here for now); rarely will a properly sized firewall consume more than 25% CPU. If your firewall is consuming more than that, it is either the wrong firewall size for your network or it is wrongly configured. So if your CPU usage deviates from the normal by a huge margin, you are under attack. Below is a graph of my firewall CPU when it was busy fighting off a massive attack. As seen, CPU shot to 100% for some time as cyber criminals initiated a DDoS on all my /20 and /18 public address space on the Internet. If under ordinary operation my CPU was say 85%, that would leave just 15% to fend off possible attacks and give a higher probability of an attack succeeding because of a smaller/less powerful firewall.

CPU usage on the firewall showing a spike in % CPU cycle usage during an attack.
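The "deviates from the normal by a huge margin" test above can be sketched in a few lines. This is a minimal illustration with made-up CPU readings; a real deployment would poll the firewall via SNMP or its management API every few minutes:

```python
from statistics import mean, stdev

def cpu_alerts(samples, baseline_window=6, threshold=3.0):
    """Return indices of samples exceeding baseline mean + threshold * stddev."""
    alerts = []
    for i in range(baseline_window, len(samples)):
        window = samples[i - baseline_window:i]
        mu = mean(window)
        sigma = stdev(window) or 1.0  # avoid div-by-zero on a flat baseline
        if samples[i] > mu + threshold * sigma:
            alerts.append(i)
    return alerts

# Hypothetical CPU readings in %, sampled at regular intervals.
readings = [12, 14, 13, 11, 15, 12, 13, 14, 98, 100, 97, 15, 13]
print(cpu_alerts(readings))  # → [8]: the onset of the spike to ~100% is flagged
```

The useful property is that the check is relative to your own baseline, so it works whether your firewall idles at 10% or at 20%.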

The other symptom that you are under attack is unusually slow network response times. However, network performance should not be used as the only indicator; rather, it should be used together with other symptoms, because there are many factors other than an attack that can slow your network down. Firewall software systems reside in memory for faster access by the firewall engine, so you will rarely note an increase in memory utilization during an attack. A memory utilization increase in firewalls is mostly due to turning on additional features; for example, a firewall's memory utilization increases if you turn on inbound SSL certificate inspection or mail scanning. It is advisable to turn off features you do not use on any device on your network. Also, just because a firewall has a feature you need, it does not mean you have to use it on the firewall device. For example, instead of letting the firewall do email spam scanning, you can turn that off and do it on a dedicated mail scanner Linux box. This frees up CPU power for network protection.

Next Generation firewalls have inbuilt systems that can warn you if they detect suspicious activity. These warnings can be in the form of an email sent to you with details about the attack. A good example is the email below showing an attempted TCP scan for any open SSH port (22) on my network from a criminal in Russia and an ICMP flood attempt by another in China. If the Russian criminal had found port 22 open on any of the scanned IPs, he would then have embarked on hacking the device with that port open; he was, however, blocked at the firewall and the attempt reported.

A screenshot of an email from a  NextGen firewall detailing attempted attacks on the network

Getting a good system that can prompt you of suspicious activity via email or SMS is highly recommended. You do not want to arrive at work in the morning and find a gory cyber crime scene just because you never got alerted when it all started.

Are all firewalls equal?

Of course not. Many IT admins grew up in Cisco environments and sat for Cisco certifications which they proudly display on their CVs; they have therefore been conditioned to believe that anything by Cisco must be the best in the market. That is very far from the truth. From experience, Cisco will offer very good protection up to layer 4 of the OSI model. Beyond that (where most attacks occur), its performance has been very poor even with their attempt to move from the Cisco PIX to the Adaptive Security Appliance (ASA). There are many comparisons online of the ASA vs other firewalls, like this one here which compared the Cisco ASA and Fortinet's Fortigate firewall (which in my opinion is the best firewall in the world).

Next Generation firewalls have an Intrusion Prevention System (IPS) and OSI layer 7 application control with Deep Packet Inspection (DPI). The system is therefore both application and content aware, which is what makes it a Unified Threat Management (UTM) system.

Measures to protect your network

There is no one size fits all solution to tackling the ever-increasing attacks on cyberspace. However based on my experience, the following steps are recommended:

  1. Shut down all unused services on your network. For example, if you have a Linux server with the Domain Name Service (DNS) running yet you do not use it, stop the DNS daemon. This lowers the risk of a criminal gaining access to your network; remember that they need to establish a network/Internet socket to gain access. A socket is made up of an IP address and a port. They have the IP, don't give them the port.
  2. Use non-default ports. If you have to run a service within your network, it is advisable to use non-default ports for it. For example, everyone knows that SSH runs on port 22, so that will be the port a cyber criminal will most likely look for. Running SSH on say port 2222 will contribute to an extent to the security of your service in case the criminals manage to get past the UTM system. In addition to this, avoid using public DNS for domain-name-to-IP mapping of internal services. But how will users access the services and DNS if they are outside the office network? (see point 4 below)
  3. Control access. Even after changing the ports as per the point above, it is also advisable to set access control rules on the services running on your network. This can be done through authentication (username/strong password pair), restricting which IPs can access the ports via access lists, restricting the time of day when the services can be accessed if possible, and using management policies such as frequent mandatory password changes. Also highly recommended is the use of RSA security tokens in addition to the passwords.
  4. Use Virtual Private Networks (VPNs). If you have users who need to access resources in the office network from outside the office (e.g. a traveling salesman), they should do this through a Dial-In VPN service. This service should terminate at your UTM device.
  5. Use a proven UTM appliance. Do your research before falling for marketing ploys: just because it's from Cisco, it does not mean it's the best, and just because it's expensive, it does not mean it can do more/better/faster. "Systems that can scale" is a common buzzword in the ICT world, mostly applied to a system that will grow with your use. In the UTM world, a system that can scale is one which, other than growing with your needs, will also adapt quickly to the changing nature of threats. For example, how long did your UTM vendor take to update their IPS signatures with the Heartbleed vulnerability? A six-hour delay after the discovery of the threat led to the Canada Revenue Agency losing taxpayer data.
  6. Enforce Bring Your Own Device (BYOD) policies. One of the easiest ways for criminals to gain access to your network is through compromised systems belonging to your staff. That iPad your CEO brings, or that smartphone your accountant connects to the office WiFi: is it safe? There are now many BYOD best practice recommendations, including the simplest, which is having such devices connect to a different, policy-controlled VLAN in the office. Many free apps that smartphone users download have back doors through which criminals can gain access to your network if the device is connected via WiFi.
  7. Control resource use. Through policies such as those offered by Microsoft domain controllers, the IT admin can enforce resource use policies such as disabling installation of software onto computers by staff. Many pirated software programs harbour malware and back doors that can be used by criminals.
  8. Use Internet security software. Also commonly known as antivirus programs, each node on a network should have updated Internet security software. These have evolved from plain antivirus detectors into security suites that provide protection from phishing, malware and insecure web browsing. The jury is still out on which is the best security software; I would highly recommend Kaspersky endpoint security software, followed by Sophos.
  9. Gain visibility. A survey showed that over 70% of CIOs have no idea what type of traffic runs on their network. By gaining visibility into what is running on the network and at what time, CIOs can lower the risk of an attack. The graph below shows traffic running on a network, identified by a device that can do Deep Packet Inspection (DPI). A simple system will classify Facebook traffic as HTTP (because it's via port 80 at layer 4); with a DPI device, you can gain insights into exactly what is running on a network and control it. In the example below, because he can now see what's running on the network, a CIO may decide to block Yahoo mail access from the office network if he feels it poses a threat, say because users might download malware or click on spam links in personal emails from within the office network.

    Graph from an application aware DPI device showing protocols at layer 7
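Point 1 in the list above turns on knowing which ports are actually open. A quick self-audit can be sketched with nothing but the standard library; the host and port list here are illustrative, and a connect scan like this should only ever be run against machines you administer:

```python
import socket

def open_ports(host, ports, timeout=0.5):
    """Return the subset of `ports` accepting TCP connections on `host`."""
    found = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 means the connect succeeded
                found.append(port)
    return found

# Audit a handful of well-known service ports on the local machine.
common = [22, 25, 53, 80, 443, 3306]
print(open_ports("127.0.0.1", common))
```

Anything the audit reports that you cannot name a business reason for is a candidate for shutting down.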

What about encrypted traffic?

With the increase in the use of Secure Sockets Layer (SSL) encryption on the open internet after the NSA debacle, many networks are noting a steady rise in encrypted traffic, especially HTTPS. Older UTMs are unable to inspect encrypted traffic, and this poses a great danger to networks. A recent report by Gartner Research says that less than 20% of organizations inspect encrypted traffic entering or leaving their networks. You might be wondering whether it is possible to inspect SSL encrypted traffic: yes, a good UTM system can decrypt most SSL encrypted traffic and confirm certificate authenticity. This ensures that only traffic with genuine encryption certificates enters the network.

Frequently Asked Questions part II

October 24, 2014 3 comments

Further to my previous attempt last year to answer some common questions people ask in relation to everyday technology, I have come up with a second list of answers to more common questions. I will try to make my answers as simple as possible.

Why are smart phones poor at battery power conservation?

Most of you have experienced this: your newly released iPhone or Samsung Galaxy phone has to be recharged every day, compared to your feature phone ('Kabambe') 2G phone which is charged once a week. You have also noticed that when you turn off Wi-Fi and data services on your high-end phone, the battery lasts longer. Other than the number of apps running on your phone, there is another factor that greatly affects your phone's battery consumption rate: signaling. Besides your deliberate use of the phone for calls and data transfer, the phone is also in constant communication with the base station, exchanging what is known as signaling data. In fact, in older, poorly designed mobile networks, the signaling data is more than the actual user data. By turning off data on a phone, you greatly reduce the amount of signaling exchanged between your phone and the base station, and hence conserve power. There have been attempts by base station equipment manufacturers to lower the amount of signaling from phones, but this hasn't worked very well. With the advent of newer technologies such as 4G/LTE, signaling is done more efficiently; in fact, one of the biggest attractions of 4G/LTE is not the faster data rates but the lower power requirements, lower signaling volume and more efficient signaling techniques it uses.

My Internet speed tests do not match my links’ performance

We have all been there: your internet link seems slow and your YouTube videos are buffering, but when you perform a speed test your results are spot on at your subscribed plan of say 10Mbps. Well, let's start with the basics. You call it your link 'to the internet', but have you ever wondered where this internet is located? The fact is that most of the content we consume here in Africa is not hosted within the continent. This means that you have to traverse an undersea cable to get your content. For example, to access bbc.co.uk from Nairobi, you have to go all the way to Mombasa, take an undersea cable either via South Africa or the Suez Canal to Europe and into the United Kingdom to get to the website. So if it's a news clip on the BBC website, the video traffic has to travel all that way. However, when you perform a speed test, the speed test app on your phone or your browser is deliberately redirected by your ISP to a server within the country. So if you are in Nairobi CBD on a Zuku link, the speed test server is on Mombasa Road, and if you are on Safaricom then the speed test server is on Waiyaki Way. Because this server is very near, your data transfer rates will be very fast compared to the same test against a server in Europe. On some speed test websites such as http://www.speedtest.net you can manually select a server; try selecting a server in Nairobi and one in Europe or the USA and note the difference. So when your ISP sells you 10Mbps, it's a 10Mbps circuit and not necessarily 10Mbps to the internet. This state of affairs is slowly changing as content providers such as Google and content delivery networks such as Akamai are now caching traffic locally. This means that Akamai will keep a copy of frequently accessed content on a server in Nairobi, so the trip to fetch the content from the US/EU is cut.
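The near-versus-far effect described above is easy to measure yourself. A rough sketch that times a TCP handshake as a proxy for round-trip latency; the hostnames in the commented example are arbitrary stand-ins for "a nearby server" and "a far-away server":

```python
import socket
import time

def connect_time_ms(host, port=443, timeout=5.0):
    """Time a single TCP handshake to host:port, in milliseconds."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass  # connection established; we only wanted the handshake time
    return (time.perf_counter() - start) * 1000

# Example (needs internet): a server hosted near you should answer the
# handshake far faster than one an undersea cable away, mirroring the
# speed-test discrepancy discussed above.
#   for host in ("www.google.com", "www.bbc.co.uk"):
#       print(host, round(connect_time_ms(host)), "ms")
```

Because no payload is transferred, the handshake time is dominated by distance and routing rather than by your subscribed bandwidth, which is exactly the point being made.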

Why did the inventors of blue LED win the Nobel prize and not the inventors of Red and Green LED?

The 2014 Nobel Prize in Physics went to three scientists who invented the blue light-emitting diode (LED) in the early 1980s. The red and green LEDs had been invented in the 1950s, and their inventors never won the Nobel. The answer is two-fold:

  1. You need red, green and blue LEDs to make white light. It was impossible to make white light out of only the red and green LEDs; blue was needed.
  2. Making a red or green LED was a straightforward process of sandwiching several crystalline elements together; attempting to make a blue LED in the same fashion led to the quick destruction of the LED's structure, causing it to disintegrate immediately due to the elements involved. The winning trio's approach involved growing the crystal elements on each other, as opposed to taking existing element crystals and physically fusing them together.

Other than providing the ability to produce white light (red + green + blue = white), the trio went on to turn their blue LEDs into blue lasers, found in Blu-ray players. Because the wavelength of blue light is shorter than that of red light, the beam can be focused to a smaller spot. This lets you cram more information onto a disc and read it out, giving Blu-rays better picture quality than regular DVDs.
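The spot-size point can be put in rough numbers: the area of a diffraction-limited spot scales with the square of the wavelength, so moving from a typical ~650 nm red DVD laser to a ~405 nm blue-violet Blu-ray laser buys roughly 2.6 times the areal density from the wavelength alone (actual capacities differ by more, since the formats also changed track pitch and encoding):

```python
# Areal data density scales roughly as 1 / wavelength^2 for a
# diffraction-limited spot. Wavelengths are the typical published values.
dvd_nm = 650      # red laser used by DVD
bluray_nm = 405   # blue-violet laser used by Blu-ray
density_gain = (dvd_nm / bluray_nm) ** 2
print(f"~{density_gain:.2f}x density from the shorter wavelength alone")
```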

The white light from LEDs is in use in many areas of your normal life. The screen from which you are reading this article is lit by LEDs, and the white you see on the screen is made possible by combining the three primary colour LED lights. LEDs are also finding increasing use in lighting buildings: an LED light uses 75% less energy and lasts 25 times longer than a traditional incandescent bulb.

Why do cellphones no longer sport the protruding antenna?

Advances in antenna engineering have led to the development of antenna arrays that can work as a single antenna. It is therefore possible to make many tiny antennas on an electronic circuit board and use them as one would a single antenna. The biggest problem in using several antennas was the increased scattering loss and the introduction of noise into the signal. Newer coding techniques that work in noisier channels meant that a signal could still be extracted even in a noisy environment, and the use of micro antenna arrays became possible. This is why modern handsets do not have the protruding antenna 'finger'.

Why isn’t the more efficient and durable Einstein-Szilard refrigerator in production?

Unlike today's refrigerators that use a mechanical compressor with moving parts, the Einstein-Szilard refrigerator, patented in 1930 by Albert Einstein and Leo Szilard, uses an electromagnetic pump with no moving parts; some modifications of the Einstein system also do not need electricity and can use any heat source, such as a flame from paraffin or cooking gas. This design was more efficient, silent and durable, was maintenance free and could last 100 years. The refrigerator patents were bought by Swedish home appliance manufacturer Electrolux, ostensibly to stop its mass production. No manufacturer would want to build something that would never break down and would last 100 years. Electrolux is one of the largest compression-type refrigerator manufacturers in the world; compression-type systems have moving parts and therefore don't last long. In your lifetime you will buy 2-3 of the compression-type refrigerators, as opposed to inheriting one Einstein-Szilard unit from your parents = no cash for manufacturers.

Why do cars turn?

From a simplistic view, when you turn the wheels of a car using the steering wheel, the car should not turn. This is because the two turned wheels would have different circle centers if they turned by the same angle. When the wheels are turned, the inside wheel and the outside wheel trace out circles of different radii, and the car would just skid in the general direction of its inertia. This was solved by a German engineer called Georg Lankensperger, and the solution was patented on his behalf by his agent, Rudolph Ackermann; it is called the Ackermann steering geometry. This is a geometric arrangement of linkages in the steering of a car or other vehicle designed to solve the problem of wheels on the inside and outside of a turn needing to trace out circles of different radii. This is achieved by making sure all the wheels on the car share the same pivot point (denoted by D), as shown below. As the car moves faster, the point denoted by D moves forward to somewhere closer to the driver's seat, and a small twist of the steering wheel has a big turning effect compared to when the car is slower.


Ackermann geometry showing the different angles that the front wheels assume to enable a vehicle to turn. In this diagram, the inner wheel is at 23 degrees while the outer is at 20 degrees, viewed from the common point D. If both wheels turned by the same angle the car would skid forward.
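Because both front wheels pivot about the same point D on the rear-axle line, their angles follow directly from the wheelbase, the track width and the turning radius. A small sketch with made-up vehicle dimensions (the 2.6 m / 1.5 m / 7 m figures are illustrative, not taken from the diagram):

```python
import math

def ackermann_angles(wheelbase, track, turn_radius):
    """Steering angles (degrees) for the inner and outer front wheels,
    both measured to the common pivot point on the rear-axle line."""
    inner = math.degrees(math.atan(wheelbase / (turn_radius - track / 2)))
    outer = math.degrees(math.atan(wheelbase / (turn_radius + track / 2)))
    return inner, outer

# Hypothetical car: 2.6 m wheelbase, 1.5 m track, pivot point 7 m to the side.
inner, outer = ackermann_angles(2.6, 1.5, 7.0)
print(f"inner {inner:.1f} deg, outer {outer:.1f} deg")  # inner turns tighter
```

The inner wheel always ends up at the larger angle, since it sits closer to the pivot point and must trace the tighter circle.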

Are the three wires I see on power lines live, neutral and earth?

We are all familiar with the power socket outlet where the three holes provide contacts for the live wire, the neutral wire and the earth wire. The assumption by many is that the three wires we see on power lines represent the live, neutral and earth. The actual fact is that the three wires on the pole are all live. The earth cable does not leave your premises; it is usually connected to a copper rod buried in the soil somewhere near your house, and sometimes to your metal water pipes (ever got a mild electric shock when you touch a tap in the shower while you have an open wound?). The neutral cable leaves your house and is connected to the body of the transformer that supplies your house. The live wire leaves your house and is interconnected through transformers to the generating units wherever they are located. Domestic users usually have a single live cable and a neutral cable coming into their premises, carrying 240 volts. However, heavy and industrial users who need more power have all three live cables plus a neutral cable coming into their premises, with each of the three carrying 240 volts. Because AC electric current is a sine wave, the three 240-volt sources are 120 degrees apart, and the voltage between them is not 720 volts (240+240+240) but 415 volts (240 x 1.732). We use the value 1.732 because it is the square root of 3 (you can Google why). The reason a heavy user is asked to use three-phase power is that the heavy load is distributed across the three phases as opposed to a single phase. The overall power drawn from the grid is therefore lower, because less waste heat is produced in three wires than if all the power flowed through one wire on a single phase. Kenya Power sets a limit of 2kW load on single phase. One more point to note is that the three wires you see on poles usually carry 11,000 volts each, and this is lowered by the transformer in your neighborhood to 240 volts.
The reason it is transmitted at 11,000 volts is that over long distances this results in lower power losses than transmitting at 240 volts. To minimize the losses, power is brought at 11,000 volts to your neighborhood and lowered to 240 volts by the transformer.
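The 240 x 1.732 = 415 figure above is worth checking: the line-to-line voltage between two phases 120 degrees apart is the phase voltage times the square root of 3, and the same factor drops out if you simply subtract two shifted sine waves and look at the peak of the difference:

```python
import math

phase_v = 240.0
line_to_line = phase_v * math.sqrt(3)
print(f"{line_to_line:.0f} V")  # ~416 V, commonly quoted as 415 V

# Same result from first principles: subtract two sine waves 120 degrees
# apart and take the peak of the difference over one full cycle.
peak = max(
    abs(math.sin(2 * math.pi * t / 1000)
        - math.sin(2 * math.pi * t / 1000 - 2 * math.pi / 3))
    for t in range(1000)
)
print(f"peak ratio ~ {peak:.3f}")  # ~1.732, i.e. sqrt(3)
```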

Why The Plan To Improve Power Generation Capacity Will Backfire

September 15, 2014 4 comments

powerOne of the promises by the current government to the citizens is improved electric power supply and connection of more homes and businesses to the national grid. This is indeed an excellent plan as a good power supply is an enabler of better living standards.

Kenya’s current power generation capacity stands at 1700MW and the country consumes just about 1400MW on average. The government plans to increase this capacity to 5000MW in the next few years; a 194% increase. Plans are already in full gear to try to meet this promise despite some challenges which I believe are more political than technical.

It is estimated that Kenya has a 10,000MW potential from geothermal power and is grossly under-utilizing this potential at the current installed generation capacity of 209MW. The government formed the Geothermal Development Corporation (GDC) to champion the harnessing of this resource several years ago. However, politics has bedeviled the corporation, and to date it has not facilitated the generation of even 1MW directly. Progress is, however, being made towards GDC facilitating the generation of more geothermal power, as it has already engaged reputable drilling and generation companies to do that. The main mandate of GDC is 'to avail steam to power plant developers to generate electric power'.

We have also seen the controversial award of the tender to set up a coal-based power plant at the coast to Centum and its partners. I will not wade into the controversy surrounding the tendering process, but this is also one of the key projects the government is undertaking towards availing 5000MW to the national grid. The tender stipulates certain technical and commercial conditions to be met by the investors, among them the provision of generated power to the national grid at a lower price and the use of high calorific value coal.

On Saturday, Dr. David Ndii wrote an article in the Saturday Nation showing that we as a country do not need 5000MW. In his estimates, we need about 2700MW if history is anything to go by. In his article, he showed that as time passes, the power required to produce one unit of the country's Gross Domestic Product (GDP) is decreasing, not increasing, and that it is a fallacy to imply that increased power output will lead to faster economic development. Power consumption is more a result of development than its cause. I agree with his sentiments.

Assuming these ambitious projects do actually take off, my biggest concern is the effect of this excess capacity on consumers. Simple high school economics might tell us that an over-supply of electric power would lead to a cheaper per-unit cost of power, but this might not be the case; in fact, if these ambitious projects succeed, the per-unit cost of power might go up because of the over-supply. Power generation (and other utility systems) involving the private sector is tricky, and the outcomes might not obey simple laws of economics.

The biggest problem we have with the current arrangement between the government and the independent power producers (IPPs) is the flawed contracts that favor the IPPs and leave the consumer exposed. The contracts are based on the government buying all the power produced by these producers, irrespective of whether the government finds use for it or not. With current consumption of about 1400MW against a production capacity of 1700MW, the 5000MW the government is promising in the next two years will cause a glut. Unfortunately, this glut will not cause prices to go down but to go up, because consumers will have to pay for this purchased but unconsumed power.

SGR is not an ideal consumer

There is no ready market in the short term for this power. Many people say the upcoming projects will need it, and they go ahead to quote the electrification of the standard gauge railway (SGR) system, which is being touted as one of the key consumers of this power. The other viable consumer is Konza techno city, whose ills I've discussed elsewhere.

First things first: a train is a very efficient mode of transport. A freight train can carry 1 ton of goods for 300km on a litre of diesel; this translates to about 16kWh of electric energy to do the same work. At 18 Shs per kWh, that's 288 Shs, compared to today's price of 104 Shs for the litre of diesel doing the same work. The current SGR project is designed to run diesel-electric locomotives, and converting it to a pure electric system would cost a lot. In the US, it is estimated to cost about 292 million shillings per kilometer to convert a traditional rail system to an electric one. This being Kenya, the cost per kilometer is bound to be higher, especially due to corruption. So if the government wants the rail system to be electric, they had better build an all-electric system from day one, as future conversion will be expensive.
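The diesel-versus-electric comparison above, restated as plain arithmetic. The figures are the article's own estimates; the quoted 16 kWh is read as the electric equivalent of the full one-litre haul, since 16 x 18 = 288 matches the figure given:

```python
# Cost to haul 1 ton of freight 300 km, using the figures quoted above.
diesel_litres = 1      # a freight train moves 1 ton 300 km on one litre
diesel_price = 104     # Shs per litre of diesel
electric_kwh = 16      # quoted electric-energy equivalent of the same haul
power_price = 18       # Shs per kWh

diesel_cost = diesel_litres * diesel_price
electric_cost = electric_kwh * power_price
print(f"diesel: {diesel_cost} Shs, electric: {electric_cost} Shs")
print(f"electric is ~{electric_cost / diesel_cost:.1f}x more expensive")
```

On these numbers, electric traction costs nearly three times as much per haul at current tariffs, which is the crux of the argument that the SGR is not a ready consumer for the extra power.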

The idle capacity trap and lopsided contracts

In their paper titled "Manufacturers' Responses to Infrastructure Deficiencies in Nigeria", published by the World Bank, authors Kyu Sik Lee and Alex Anas look at the cost of private infrastructure provision in Nigeria, with a focus on electric power. They noted that over 75% of private power generation capacity remains idle most of the day and is used briefly during peak load periods. This is due to the nature of Nigerian (and by extension African) power usage patterns, which can best be described as sawtooth in shape. The high percentage of idle capacity results in a very high total average cost of private power generation. This idle capacity is a cost that these private operators incur and must pass on to the consumer. The current Kenyan contracts mandate the government to buy the "power produced" and not the "power demanded by the market". This makes it even worse, as consumers will be paying for both the idle capacity and the excess generated power. I wrote about the pitfalls of such lopsided contracts in January 2012 (read the article here), when I said the country needs to be wary of the contract it signed with the wind power producer Lake Turkana Wind Power (LTWP). This contract mandated Kenya to buy power from the wind farm "when they generate". With wind flow and speed prediction not being an exact science, this means that should the wind blow at whatever time and date and the turbines turn, Kenya has to buy that power. This is unreasonable because the wind might start blowing right after Kenya Power has asked another fossil-fueled IPP to supply power to meet peak demand; at that point, while the fossil-fueled IPP is generating to meet the demand, Kenya Power receives notification that the wind is blowing in Turkana and must buy that power too, even though it does not need it, as it cannot ask the fossil-fueled IPP to stop generation so abruptly.
The result is higher power costs, as Kenya Power will now incur costs to both the fossil fuel IPP and the wind power IPP. The wind might also blow when our government-owned hydro dams are full, and Kenya Power still has to buy that power. The government later realized this and sought to change the contract; when LTWP declined, they started facing hurdle after hurdle in trying to set up their wind farm. We know the government is a master of hurdle set-ups if they don't like you.

At the end of the day, the government's plan to produce 5000MW may be well-intentioned, but it has been poorly thought out because the contracts are poorly drafted on purpose. The reason I say this is the endemic corruption in the country. Someone is overlooking these glaring discrepancies in these contracts for their personal benefit, and the consumer will end up paying dearly for these corrupt acts of commission. I am by no means saying Kenya does not need this 5000MW of power. My problem is that this power will be available in the next two years, creating a glut, and that the contracts are flawed. What should have happened is a planned and gradual ramp-up of power generation over a span of 10-15 years, matching economic development to power demand, as power consumption is a result and not a cause of development. Sadly, our politicians want this done within their elective term so as to take all the glory at whatever cost, or to meet unrealistic election promises.

Essar exit from the Kenyan market: A Post mortem

September 2, 2014 2 comments

Last week, Essar Telecom exited the Kenyan market after a short, uneventful life that reminded me of the story of Simon Makonde that we once read in primary school.

In the 2007/2008 financial year, Essar Global together with its local partners invested about 200 million dollars in setting up Essar Telecom Kenya Ltd (ETKL), which traded as Yu Mobile. The sale of ETKL marks the final exit of the Essar group from the telecoms business worldwide, after investing over $1.2 billion and selling all its telecoms businesses for $6.5 billion. Before the sale of ETKL, it had also sold its outsourcing and consulting firm Aegis for about $650 million in June this year.

The short existence of ETKL in the Kenyan telecoms market was marked by a string of bad decisions that led to its eventual downfall. I will discuss some of the reasons that I think led to that downfall, based mostly on my personal interactions with ETKL's trading arm Yu Mobile and its consulting arm Aegis/AGC.

Lack of staff buy-in on the brand

I am no brand expert, far from it; but the staff at ETKL and its consulting arm Aegis/AGC lacked loyalty towards their own brand. I once had a meeting at Essar where I met a senior member of the company's management team. We exchanged business cards and, lo and behold, the cellphone number on his card had Safaricom's NDC 0721. I jokingly asked him if he had ported the line; he said no, and that was followed by a barrage of excuses as to why he didn't have a Yu number. I left it at that. He was not the only member of staff with a non-Yu number; many junior members sported Airtel and Safaricom lines without shame.

Lack of belief in Local talent

In the same meeting mentioned above, we were discussing the roll-out of a service to many parts of Africa, and there was not a single African in the meeting room representing ETKL; all four were non-indigenous Indians. None knew where Kakuma or Bungoma was on the map of Kenya, but they all knew where the Maasai Mara was. I would mention Lokichar and they would go, "is that near Maasai Mara?" Kenya has very talented telecoms and project management staff, and ETKL leaving them out of its management team spelled doom from the word go. As a supplier, I would be unsure of their commitment to any project if the people planning it had no clue what it takes to deliver it. Beyond that, plans were at an advanced stage to move the call center to India, where they believed it was cheaper to run one. I agree that Indian call centers are some of the best in the world; I interact with the Tata/Neotel telecoms call center in India on a frequent basis and they offer world-class service. I however feel that local cultural know-how is very important when offering end-user support to a population that is not very literate or not comfortable speaking English. A Yu Mobile call center in India would have been a flop; shudder to think of the conversation between your local mama mboga and an Indian support executive. Beyond the lack of a local management team, the CEO of ETKL was the wrong choice: here was a very experienced man, used to running going concerns, sourced to run a start-up. The two require very different skill sets. It's common for boards to hire former Chief Operating Officers into CEO positions, but that applies only to firms that are already up and running. Appointing a chief 'operating' officer type of person to head a start-up is a mistake, as they are used to controlling processes, not setting them up, and most are very bad at the latter.
Whereas running a start-up is akin to building a car from various parts and making it run, running an established enterprise is akin to making that car go faster and more efficiently.

Poor supplier relationship

One of ETKL's biggest problems was its poor relationship with suppliers. From its Indian roots, bargaining is the order of the day, and every supplier is pushed to the wall to reduce prices, hurting its margins and profitability. While I was working at a leading telecoms equipment supplier, a policy decision was made never to work with ETKL after they defaulted for over 9 months on a 5 million dollar debt for equipment supplied. That was just one supplier; how much more was owed? The bad relationship with suppliers meant that ETKL lagged behind in providing the services the market demanded. This trait was also present downstream: Yu Mobile agents lacked the necessary support from ETKL in the way of merchandise and products such as airtime scratch cards, and many abandoned the business altogether and focused their energies on providers who were serious.

Lack of a clear market penetration and brand strategy

ETKL entered the market when the price wars between the current Airtel and Safaricom were just starting. ETKL participated in this war for some time before abandoning it and offering free calls within its network and unbelievably low rates for cross-network calls. Their pricing was even way below the CCK ceiling for cross-network termination rates at the time. What informed this decision?

Back in India, the model of offering free calls or dirt-cheap on-network rates works very well. This is because other revenue streams exist for mobile operators besides talk time. For example, Bharti Airtel offered very low call rates because it made more money from Value Added Services (VAS). Just to give you an example, Airtel made about 34% of its annual revenues from many of its 193 million subscribers voting at premium rates for their favorite participants on the Indian version of the 'America's Got Talent' show. The same strategy didn't work very well here because there was no VAS to speak of, and the use of a mobile phone in Kenya is predominantly to talk, if the end-year results of the dominant player are anything to go by.

Other than pricing, the management at ETKL skimped on marketing campaigns, and what they did run got it wrong. One of the reasons why Safaricom's marketing is on point is that it connects with everyone. The use of Swahili words in product naming and campaigns has worked very well for Safaricom; ETKL and Airtel didn't realize this till much later. When Airtel later took notice of the power of local languages in product naming, its connection with the market improved greatly, even with mistakes in the choice of words, as exemplified next. Whereas Safaricom's airtime advance is called Okoa Jahazi, Airtel called theirs Kopa Credo. The use of the word Kopa (Swahili for borrowing out of lack) puts the borrower in a begging frame of mind, while Safaricom's Okoa Jahazi (save the boat) makes the customer feel that by borrowing airtime he furthers the collective will of the nation to move forward and develop (no one operates a Swahili boat/jahazi alone). And while Safaricom called airtime sharing "Sambaza", Swahili for spreading out of abundance or excess, Yu Mobile decided to use the word "Eneza". For those in the know, the word Eneza is associated with the spread of disease and pestilence and has a negative ring to it. These are just some of the simpler mistakes made in their marketing. ETKL's choice of brand colors was also ill-advised. The use of the Kenyan flag's colors, straight from the primary color chart, for a logo, however patriotic, is plain boring and lacks appeal; add yellow to that and you have a disaster. Safaricom and Airtel have done very well in this area; their choice of colors works. Again, I am speaking as a layman here, not as a marketing or branding guru. These are views from the street.

Lack of a technology road map

There was a joke floating around last year, when Safaricom was testing its '4G' LTE service, that Yu Mobile was still on 2G and planned to skip straight to 4G, bypassing 3G, because they don't do odd numbers. The reality of the matter is that ETKL lacked a clear road map for technology roll-out. If you look at the way leading operators run their business, they involve equipment manufacturers such as Ericsson, Nokia and Alcatel-Lucent in their technology road map planning and decision making. The manufacturers pitch what's new, what they can do and at what price, and help the operator decide what to adopt for which market segment; I would guess that over 90% of Safaricom's products were first suggested to them by suppliers and technology partners. With bad blood between ETKL and suppliers, including equipment manufacturers, there was a slim chance they could do any planning to adopt what was good for their customers. The result was patchy coverage across the country, with some form of roaming agreement for Yu lines on the Airtel network where they had no coverage (I'm not sure about the legality of this). It also led to a poor VAS offering, as no vendor or supplier wanted to work with them; other than the disastrous flop that was YuCash, there was nothing else ETKL could offer as value addition to its customers.

In the end, ETKL was bleeding cash; costs far outstripped revenues, and at one point some suppliers sued or went to arbitration to recover monies owed to them. When Mobile Number Portability came, ETKL didn't do enough to woo customers to its network. It couldn't: its network was patchy, didn't cover most of the country, and lacked VAS to offer. They were between a rock and a hard place. The decision to sell a 200 million dollar investment for 120 million is a sign that they had given up all hope of ever making an impact in the Kenyan market and wanted to cut their losses.

Is Safaricom justified in stopping the use of Skinny SIMs by Finserve?

August 26, 2014 3 comments

With the recent licensing of Mobile Virtual Network Operators (MVNOs) by the Communications Authority of Kenya, one of the licensees, Equity Bank (trading as Finserve Africa), has been in the news a lot as it plans its service roll-out.

The idea behind an MVNO is that it leases excess capacity from a 'brick and mortar' mobile network operator (MNO) at wholesale prices and uses this capacity to serve areas the host was unable to reach profitably, or to offer services the host could not offer efficiently or profitably, or both. This might seem tricky at a glance, but in some markets, such as the UK, MVNOs actually offer better service than their hosts. Virgin Mobile UK (an MVNO) has been voted the best 'mobile' operator for several years now, while the MNO that hosts it was voted the worst performer. It's all about service and market perception, not how many base stations or Mobile Switching Centers you own.

Equity Bank's Finserve Africa will ride on Airtel Kenya's mobile network and will have its own MSISDNs and a unique National Destination Code (the 07xx prefix). Drawing lessons from recent history, when Kenya introduced Mobile Number Portability and it failed, Finserve Africa saw it fit not to lure potential customers with new SIM cards that would involve the MNP process or a change of MSISDN. It instead opted to use what is known as a skinny SIM: a paper-thin SIM foil that can be stuck onto a subscriber's existing SIM card, instantly availing an additional MSISDN to the subscriber on the same handset. The subscriber's biggest fear, losing an original MSISDN that has become part of his identity, is therefore taken care of at a very marginal financial and emotional cost.

The way this works is that the skinny SIM is attached to the existing SIM by means of a special self-adhesive, making sure it's in the correct orientation. The SIM card is then placed back into the phone, and each of the two SIMs avails its own SIM menu on the phone. This enables the user to still receive and make calls and access Value Added Services (VAS) on his or her old number, in addition to doing the same on the new number availed by the attached skinny SIM. Market forces therefore come into play in the user's decision on which SIM to use for which service.

Finserve is much more interested in the VAS element provided by the new SIM. It intends to roll out mobile banking and money transfer services that will compete directly with Safaricom's M-PESA and M-Shwari services.

Safaricom’s Outcry

The fact that Finserve made clear from the onset its intention of competing with Safaricom on VAS has sent shivers through the Safaricom boardroom. Key customer stickiness factors for Safaricom were its VAS, especially the money transfer element, and the fact that 'peculiar' Kenyans are emotionally attached to their MSISDNs. Now that skinny SIM technology will enable Finserve to circumvent this, Safaricom feels very threatened and stands to lose a substantial share of the market to Finserve.

Safaricom has alleged that the skinny SIM poses a danger to its M-PESA service, as it can be used to carry out 'man-in-the-middle' attacks on the service and reveal the M-PESA PIN and other transaction details. To back these claims, Safaricom engaged the GSM Association (GSMA) to lend them credence.

One thing that is escaping most people is that the GSMA is an association of the willing. It states on its website thus:

“The GSMA represents the interests of mobile operators worldwide. Spanning more than 220 countries, the GSMA unites nearly 800 of the world’s mobile operators with 250 companies in the broader mobile ecosystem, including handset and device makers, software companies, equipment providers and Internet companies, as well as organizations in industry sectors such as financial services, healthcare, media, transport and utilities. “

It will therefore come to the rescue of its members where lending credence to some statements is concerned. With Safaricom partly owned by Vodafone (a Tier 0 GSMA member), one of the biggest financiers of the GSMA with 500 voting rights, did we really expect it to deny Safaricom's unfounded allegations on the dangers of skinny SIMs? Its articles of association state that a GSMA member must "Operate and/or is allocated frequencies to operate a GSM network". This clearly means only MNOs, and not MVNOs, can be full GSMA members; MVNOs are admitted as associate members, and as it stands Finserve is not a GSMA member. The bottom line is:

  1. The GSMA's response is biased: here is the CAK refereeing a fight between Safaricom and Finserve, and it opts to ask the GSMA, of which Safaricom and its parent are members and Finserve is not, for advice.
  2. The GSMA is not a standards-setting or approval body and therefore cannot be an authority on technical matters; it can give its opinion, but its opinion is based on member interests. The GSMA's opinion cannot stand in a court of law. Should Finserve proceed to court, the GSMA cannot be an expert witness; the best it can be is a friend of the court. The Institute of Electrical and Electronics Engineers (IEEE), which sets many of today's telecommunication technical standards, would have been better placed to answer the Communications Authority of Kenya's queries than an optional-membership organization.

If the current standoff proceeds to court, the burden of proof will be upon Safaricom to show the court that the allegations it is making are true. It will need to show that it is indeed possible to compromise the security of its M-PESA service when a skinny SIM is attached to a Safaricom SIM card. It will also need to prove that this compromise can be used to its competitor's advantage. This is the difficult part: merely proving that the Safaricom SIM can be 'hacked' when a skinny SIM is attached is not enough grounds to stop Finserve from rolling out service. Safaricom needs to prove that this act of compromising the SIM will give Finserve an undue advantage in the market.

Why CCK’s call for Infrastructure sharing is ill informed

April 14, 2014 3 comments

Last week we were treated to a spectacle: the Communications Commission of Kenya (CCK) declaring that mobile network operators stand to have their licenses revoked or not renewed should they fail to open their infrastructure to competitors' use. This call is not only ridiculous and careless, it is also backward, taking us back to the KP&TC days when the government controlled telecoms and kept all operators on a short leash.

The CCK Director General seems to have been bitten by the 'populist' bug, making roadside declarations without carefully thinking through the consequences. For one, the CCK is a regulator; by that definition, it should not dictate how operators go about their business. It should create an environment where operators find it advantageous to follow the laid-down regulations. So instead of threatening non-renewal of operating licenses should they not share infrastructure, how about setting up tax incentives for those who share their infrastructure with others? That way, operators will share infrastructure without coercion if they stand to benefit from the incentives.

Below I outline the reasons why I think CCK is mistaken in issuing vile threats to operators who don’t toe the infrastructure sharing line.

Technical incompatibilities

There is a general assumption that many of the technologies in the GSM market are compatible across manufacturers. This is not entirely true, and a lot of work needs to go into making various systems from different manufacturers work together. This is one hurdle that is difficult to cross. Take a scenario where one operator is using the slightly outdated RADIUS protocol for Authentication, Authorization and Accounting (AAA) while another is using the more advanced Diameter protocol. In this case, the RADIUS user has to upgrade to Diameter, as backward compatibility of Diameter with RADIUS is a problem.

Let's even set aside the more advanced issues of AAA and go to basic mechanical compatibility. Assume CCK forces operators to share base stations. One of the biggest issues that will arise is that when the existing owner designed the mast, he made several assumptions about the loading placed on it by the various antennas and cables, and the mast was designed to take that load without much trouble. Now here comes CCK demanding that additional load be put on the mast in the name of sharing. What happens? The structural integrity of the mast is lost, and it becomes unstable beyond certain loads and wind speeds. This in turn is a hazard in two ways:

  1. The mast will be unstable, posing a danger to neighboring structures such as residential houses, as it will now carry more load than it was initially designed for.
  2. The levels of radio frequency radiation will be higher due to the additional transmitters at that location. This calls for fresh NEMA approvals, and if the site fails the approval test, the mast has to be relocated away from populated areas due to the higher emitted radiation. Note that this radiation might not necessarily be a health hazard so much as a source of interference with other systems, either directly or through the production of harmonics to the nth order. I can bet CCK has never bothered about the effects of harmonic distortion and interference on communication systems. I recently shared an article on how FM radio stations can be the Achilles heel of LTE deployment if harmonic distortion from them is not checked; read it here. Forcing operators to transmit from the same location will only make such issues worse.
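To make the harmonics concern concrete, here is a small illustrative Python sketch. The band edges are round figures I am assuming for illustration (roughly the FM broadcast band and an LTE 800 'digital dividend' band); the point is only that low-order harmonics of FM transmitters can land squarely inside an LTE band.

```python
# Illustrative sketch: which harmonics of FM broadcast carriers fall
# inside an (assumed) LTE 800 band of 791-862 MHz?
FM_BAND = (88.0, 108.0)    # MHz, approximate FM broadcast band
LTE_800 = (791.0, 862.0)   # MHz, assumed LTE digital-dividend band

def offending_harmonics(fm_low, fm_high, band, max_order=10):
    """Return (order, low, high) for harmonic orders whose frequency
    spread overlaps the protected band."""
    hits = []
    for n in range(2, max_order + 1):
        lo, hi = n * fm_low, n * fm_high
        if lo <= band[1] and hi >= band[0]:   # interval overlap test
            hits.append((n, lo, hi))
    return hits

for n, lo, hi in offending_harmonics(*FM_BAND, LTE_800):
    print(f"{n}th harmonic spans {lo:.0f}-{hi:.0f} MHz -> overlaps LTE 800")
```

Running this shows the 8th and 9th harmonics of the FM band overlapping the 800 MHz range, which is why a poorly filtered FM transmitter co-located with an LTE base station can be a real problem.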

The radio frequency planning departments of many mobile operators are usually a beehive of activity as engineers plan their networks to maximize the use of scarce radio spectrum and avoid radio frequency interference (RFI). If CCK forces operators to share infrastructure without coming up with modalities for how these operators will work together to counter RFI, we will have a situation where different RF planning departments work in disharmony, leading to increased cases of RFI on the GSM network, which will in turn lead to poor service.

Legal and commercial issues.

We have all bought an electronic device and asked the seller for a manufacturer warranty. This warranty, however, is only valid if you use the device within set guidelines; otherwise you risk voiding it. For example, you void the warranty of a domestic washing machine if you use it in a commercial setting such as a laundromat. The same applies to telecoms equipment. If operator XYZ has purchased equipment from a manufacturer for use in a particular way, that equipment has to be used within the set guidelines and operating environments or the warranty is void. As it stands, many warranties in force right now will be voided the minute operators share this equipment with the competition, especially if doing so involves interfacing with non-standard protocols or mediation tools and interfaces.

Many operators have also invested heavily in infrastructure roll-out, mostly using financing tools such as loans and special purpose vehicles (SPVs). The legal existence of SPVs is anchored on a well-defined return on investment (ROI) path, which can be disrupted if CCK has its way. I cannot claim to be a finance expert, but I foresee many of these financial tools backfiring on the mobile operators should they be forced by CCK to share assets purchased this way, as their well-anticipated ROI now becomes unpredictable. I welcome comments from finance experts on this matter.

Other than technical infrastructure, the CCK also requires the sharing of sales and marketing infrastructure such as vendors, resellers and agents. Building an agency network takes a lot of effort, time and money. The dedication that one operator has put into building an extensive network, even where others have failed, cannot go unnoticed. The agency and vendor network, and not the technology network, is the key differentiator between many operators in Kenya. It will not be easy for, say, Safaricom to open up its agency network to competition without a legal fight; CCK has no legal mandate to force operators to share agency networks in a willing-buyer, willing-seller market. These same agents have been approached by the competition, and the competition has not offered enough incentive to woo them; I do not think a law would work either. Also, those who tried failed and offered valuable lessons to the rest. When the once successful Mobicom ditched its Safaricom dealership in favour of Orange in 2010, that was the last we heard of them. The agents also know that even if CCK forces their current principal (Safaricom) to let its competition approach them, many will not be willing to take them on board.

For CCK to peg license renewal on a radical new rule such as this contravenes the laws of natural justice: you cannot introduce clauses into a license that put the licensee at a commercial disadvantage, especially if no possibility of future amendment was mentioned in the initial license requirements. There are some specific grandfather clauses that the CCK cannot just wake up and remove from the original licensing requirements, especially after operators have put so much investment into network and capacity building.

One last thing. The fact that CCK is transforming into an Authority (the Communications Authority of Kenya, CAK) also means it can now be a player in the telecoms sector, especially in an equalizing capacity of setting up infrastructure and leasing it to operators commercially. This change to an authority, plus the demand that operators share infrastructure, introduces nemo iudex in causa sua on the part of CAK, especially when disputes arise over infrastructure sharing. It cannot be a judge or arbitrator in an area in which it also has an interest.

The Importance of Local Internet eXchange Points (IXPs)

March 17, 2014 1 comment

Imagine you work for a company on the 2nd floor of a building in Nairobi and you send an email to a neighboring company on the 3rd floor. What would be the typical path your email takes to get to the recipient? Will it just cross the floor to your neighbor's mail server and eventually to his inbox? It's not as simple as that.

The Internet works by use of a specialized routing protocol called the Border Gateway Protocol (BGP). ISPs use BGP to tell each other what networks are behind them, effectively letting other ISPs know which customers and mail servers are on their networks. This action is called announcing or advertising routes. In simple terms, each ISP effectively says to the rest, "The host with IP address x.x.x.x is on my network; if you want to reach it, talk to me." IP x.x.x.x could be a server running your email, web or any other Internet service, or your PC. The routers that receive this announcement keep a record of it in what is known as a routing table. Each ISP has a special router on the border (hence BGP) of its network that 'speaks' BGP and keeps a routing table of all the routes it has learned from announcements made by other ISPs' routers, while at the same time announcing the networks behind it to others.
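As a toy illustration of what a routing-table lookup involves, here is a minimal Python sketch of longest-prefix matching, the rule routers use to pick between overlapping announcements. The prefixes and ISP names are made up for the example.

```python
import ipaddress

# Toy routing table: announced prefix -> the ISP that announced it
# (prefixes and names are made up for illustration)
routing_table = {
    ipaddress.ip_network("197.248.0.0/16"): "ISP-A",
    ipaddress.ip_network("197.248.10.0/24"): "ISP-B",  # more specific
    ipaddress.ip_network("0.0.0.0/0"): "upstream",     # default route
}

def lookup(addr):
    """Longest-prefix match: the most specific announced route wins."""
    ip = ipaddress.ip_address(addr)
    matches = [net for net in routing_table if ip in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routing_table[best]

print(lookup("197.248.10.5"))   # ISP-B: the /24 beats the /16
print(lookup("197.248.99.1"))   # ISP-A: only the /16 covers this host
print(lookup("8.8.8.8"))        # upstream: falls through to the default
```

The same logic, at vastly larger scale, is what every BGP-speaking border router runs for every packet it forwards.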

The above system worked very well in the US and EU, where most of the Internet's infrastructure is located. When less developed regions like Africa started to connect to the Internet, the BGP-speaking routers of African ISPs were talking to US and EU routers, telling them how to reach African networks. There seemed to be no problem with this setup, because African networks were largely net recipients of traffic and sent out very little. With time, however, African networks started generating a considerable amount of traffic (like your email to your neighbor on the 3rd floor). A problem arose because African ISPs were exchanging traffic in the US and EU through more established tier 1 ISPs. This meant that your email to your 3rd-floor neighbor would leave your PC and go to your ISP's network, which would carry the traffic to a tier 1 ISP in the US or EU; that tier 1 would exchange the traffic with another tier 1 connected to your neighbor's ISP, which would then carry it back to Africa and to the mail server on the 3rd floor. This long path poses several problems:

  1. Traffic whose source and destination were both Nairobi left the country for the US or EU and came back. This used expensive international undersea fiber-optic bandwidth in both directions, making email delivery an expensive affair.
  2. Because of this, should there be an undersea fiber-optic cable cut, your email would remain undelivered for the duration of the outage, which can sometimes last days. It would be faster to take the stairs and talk to your neighbor.
  3. Beyond email, some sensitive local traffic, such as banking traffic, ends up crossing international borders, posing the legal challenge of whose law applies if that data is tampered with after it has left the country. Some countries actually forbid banks from exchanging their traffic outside the country's borders, leading to investment in expensive networks that keep such sensitive traffic within the country. The cost of this investment is usually passed on to consumers.
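A rough back-of-envelope calculation shows that the long path costs time as well as money. The distances below are my own rough assumptions for a Nairobi-to-Europe detour versus a local exchange; light in optical fiber travels at roughly two-thirds the speed of light in vacuum.

```python
# Back-of-envelope propagation-delay comparison (distances are rough
# assumptions, and real latency adds router and queuing delays on top).
V_FIBER = 2.0e8  # m/s, approximate speed of light in optical fiber

def rtt_ms(one_way_km):
    """Round-trip propagation delay in milliseconds for a one-way path."""
    return 2 * (one_way_km * 1000) / V_FIBER * 1000

# Assumed ~12,000 km detour via undersea cable and Europe,
# versus ~10 km across Nairobi to a local IXP.
print(f"via Europe:    {rtt_ms(12000):.0f} ms")
print(f"via local IXP: {rtt_ms(10):.1f} ms")
```

Even before counting router hops and congestion, the detour adds on the order of a hundred milliseconds to every round trip, which is why keeping local traffic local is noticeably faster.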

With time, more and more traffic is being locally generated and locally consumed. Your neighbor's ISP now needs to exchange traffic with your ISP in Nairobi, not in the US. They can do this through a local Internet eXchange Point (IXP). Kenya currently has the Kenya Internet eXchange Point (KIXP), which was formed in response to the need for local ISPs to exchange traffic locally. This not only keeps local traffic local but also means we can continue communicating within the country without needing undersea cables. So in the email-to-your-neighbor scenario, the email leaves your PC and goes to your ISP, which now exchanges traffic with your neighbor's ISP at KIXP at Sameer ICT Park on Mombasa Road; your neighbor's ISP then picks up this traffic and delivers it to your neighbor. This is faster, cheaper and more reliable than the traditional way of exchanging traffic outside the country.

IXPs are evolving beyond data exchange points and are increasingly being used to provide content caching for bandwidth-hungry services such as video. Imagine a popular YouTube video that has been shared on social media, and all of a sudden everyone in the country is clicking the link to watch it. Instead of every viewer connecting to a server in the US, the video can be cached locally on Mombasa Road, so that apart from the first two or three people whose requests had to leave the country, every subsequent viewer gets the video from Mombasa Road and not from the US. At the moment, however, KIXP is not offering content caching; this is provided by Google directly, using content cache servers in the same data center as the IXP. Other than KIXP, which is based in Nairobi, a second IXP was launched in Mombasa so that users there wishing to exchange traffic within Mombasa do not have to come to the Nairobi IXP to do so. At the moment, 29 ISPs and enterprise networks such as banks are exchanging traffic in Nairobi, while 8 are doing so in Mombasa.
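The caching idea can be sketched in a few lines of Python. This is a conceptual illustration only (the URL and function names are hypothetical), not how Google's cache servers actually work: the first request for a video goes to the origin, and everyone after that is served locally.

```python
# Minimal sketch of a content cache at an IXP: the first request for a
# video crosses the undersea fiber to the origin; later requests are
# served from the local copy.
local_cache = {}

def fetch_from_origin(url):
    # Stand-in for the real, expensive transfer from a US-based server.
    return f"<video bytes for {url}>"

def fetch(url):
    """Return (where the content came from, the content)."""
    if url in local_cache:
        return ("local cache", local_cache[url])
    data = fetch_from_origin(url)   # expensive international hop
    local_cache[url] = data
    return ("origin", data)

url = "https://example.com/popular-video"   # hypothetical URL
print(fetch(url)[0])   # origin       (the first viewer)
print(fetch(url)[0])   # local cache  (everyone after that)
```

The economics follow directly: one international transfer is amortized over every subsequent local viewer.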

To see a full list of current IXPs worldwide and the amount of traffic they keep local, please click here.

More importantly, from a network engineering perspective, IXPs allow network operators to exchange a considerable amount of traffic amidst today's IPv4 address scarcity. Many IXPs, such as the one in Kenya, bend the rules to allow their members to announce or advertise more specific networks (longer than /24, I think) which would otherwise be filtered by BGP routers on the Internet that aim to keep routing tables small. This means that besides keeping traffic local, IXPs help their members increase their bits-exchanged-per-IP ratio in these difficult times of IPv4 scarcity.
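The filtering being bent here can be sketched as follows. The /24 cutoff is the conventional figure for inbound filters on the public IPv4 Internet, and the prefixes below are made up; an IXP route server relaxing the rule simply raises the accepted prefix length.

```python
import ipaddress

# Made-up announcements arriving at a router
announcements = [
    "197.248.10.0/24",
    "197.248.10.128/25",  # more specific than /24
    "41.90.0.0/16",
]

def accepted(prefixes, max_len=24):
    """Keep only prefixes no more specific than /max_len, as a typical
    transit router's inbound filter would on the public Internet."""
    return [p for p in prefixes
            if ipaddress.ip_network(p).prefixlen <= max_len]

print(accepted(announcements))              # the /25 is dropped
print(accepted(announcements, max_len=25))  # an IXP relaxing the rule keeps it
```

On the global table the /25 would never propagate, but between peers at the exchange it can, letting a member put a half-/24 of address space to work.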
