How The Cloud Lowers Barriers to Entry into the Mobile Telco Space

The announcement by Nokia that it is partnering with Google Cloud, Microsoft Azure and Amazon Web Services (AWS) to develop cloud-native 5G services was a significant and inevitable step, one that all communication equipment manufacturers have to take to remain in the game and meet the ever-increasing demand for modern telecommunication systems.

With the rapid drop in the cost of compute and storage due to the economies of scale that cloud services provide, and the inherent ability to dynamically shift workloads across geographical regions and scale both vertically and horizontally with a single mouse click, services that previously required dedicated hardware that took time and money to set up can now be spun up and consumed in a matter of seconds. The shift from traditional DevOps to cloud-first DevOps also means that new and more efficient system architectures can take advantage of the cloud to offer ‘serverless’ and highly scalable computing, which significantly lowers the cost of running a service on the cloud while at the same time improving service availability and user experience.

Having worked in the telecom sector for some time, I have noted the following to be growing problems:

  1. Telecommunication systems are highly vertically integrated and often deploy proprietary technologies, leaving very little room for interoperability of various vendor systems in one network.
  2. There is very hard coupling between the software and hardware used in telecommunication systems; system upgrades often involve hardware upgrades, making them complex, time-consuming and expensive to carry out.

The move by equipment manufacturers such as Nokia and others to develop their systems for the cloud will solve the above problems through the adoption of the Open Radio Access Network (O-RAN) approach on the cloud to create Cloud RAN. This cloudification of the RAN using O-RAN doesn’t mean that all Cloud RAN is based on O-RAN, but rather that O-RAN on the cloud can solve the two problems I mentioned above in one fell swoop. Cloud RAN decouples hardware from software, enabling the adoption of different lifecycles for hardware and software. This will accelerate the pace of innovation in that space and also enable the mixing and matching of various vendors on the same network more easily than before. It will also shorten time to market for new technologies and services.

Other than the technical efficiencies that the cloud introduces into the telecom network, adopting the cloud on the telco network also lowers the Total Cost of Ownership (TCO) as it leads to lowered Capex and Opex spend through:

  1. Fast deployment of services: A fully virtualized base station enables both the virtualized distributed unit (DU) and centralized unit (CU) to run on a general-purpose processor platform (servers in the cloud). New software and services can be introduced more quickly and easily because the software code doesn’t need adapting for proprietary hardware. Time savings mean cost savings.
  2. High hardware utilization: Pooled software running centrally on the cloud enables more traffic to be processed on the same data center servers. This more efficient use of processing power allows hardware to be dimensioned according to average traffic loads. The cost of over-dimensioning to meet peak load is simply eliminated, lowering network rollout costs and idle capacity costs.
  3. Multi-tenancy of resources: The same physical servers can host different applications and services. Cost savings are achieved by using spare general-purpose compute resources for applications other than RAN.
  4. Lower operational costs: Centralizing baseband functions in the cloud data center reduces the need for radio site visits, cuts the number of hardware platforms needed, consolidates installation and maintenance processes, and lowers radio site leasing costs.
  5. Greater automation: The open interfaces that O-RAN brings allow for the integration of artificial intelligence (AI) and machine learning (ML) algorithms, enabling the network to automatically optimize itself to meet unpredictable demand in mere minutes or even seconds.
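The over-dimensioning saving in point 2 can be made concrete with a toy calculation: standalone sites must each be dimensioned for their own peak load, while a pooled data center is dimensioned for the combined peak, which is lower because sites rarely peak at the same time. All figures below are invented for illustration.

```python
# Toy illustration of pooling gains in a Cloud RAN data center.
# All load figures are invented for the sake of the example.
site_peaks = [100, 80, 120, 90]        # per-site peak load units
combined_peak = 250                    # assumed simultaneous peak of the pool

dedicated_capacity = sum(site_peaks)   # standalone sites: sum of all peaks
pooled_capacity = combined_peak        # shared pool: only the combined peak

saving = 1 - pooled_capacity / dedicated_capacity
print(f"Capacity saved by pooling: {saving:.0%}")  # ~36% with these figures
```

The exact percentage depends entirely on how correlated the sites' traffic peaks are; the point is only that the pooled figure can never exceed, and usually undercuts, the sum of individual peaks.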

The biggest and yet unseen benefit of this cloudification of the base station and mobile network, however, is how equipment manufacturers can now offer mobile networks as a service. The Nokia partnership with Microsoft Azure aims to do exactly that. In this approach, Nokia runs a fully fledged mobile network service on the Azure cloud edge to offer 4G and 5G connectivity to enterprises. The current situation is that large enterprises often depend on a network operator to meet their external connectivity needs while using different technologies for their internal needs. Most enterprises today, for example, depend on a mobile carrier’s 4G network to make calls and connect to enterprise resources when outside the office, and have to switch to a local Wi-Fi or the office LAN when in the office. The move by Nokia and Microsoft will enable an enterprise to run 5G in the office (possibly replacing Wi-Fi) and seamlessly avail the same private mobile network anywhere in the world their staff find themselves. This, as you can imagine, will significantly lower the time and cost of setting up a mobile network by avoiding a high initial capex investment in hardware, paying for the capability as a service on a per-use model instead. This lowered barrier to entry means that the running of 4G and 5G networks will no longer be the preserve of deep-pocketed telecommunication companies but will be open to enterprises, investors and innovators as well.

Another benefit is that this move opens up the previously closely guarded and highly proprietary telco equipment innovation ecosystem to everyone, and not just the limited R&D teams in those companies. This open architecture and cloud approach is bound to spur new innovations in service and value delivery to consumers. I believe this will also have the effect of giving more power to the telco in determining the pace and direction of technology adoption, something that is currently heavily vendor driven. With close to six out of ten network changes being vendor suggested or initiated, more often than not the operator is at the mercy of vendors. The move of the mobile network ecosystem to the cloud on an open architecture will change this, to the benefit of the operator and subscribers.

The Wullf of Mombasa Road

The family and many friends and former colleagues and partners in the tech industry in Kenya received the news of the demise of Kai-Uwe Wullf, the founding CEO of Kenya Data Networks (KDN), with shock and sadness. The company was the first private company to provide communication infrastructure in direct competition with the then Kenya Posts and Telecommunications Corporation (KP&TC) in 2003. At that time, the few upcoming ISPs such as Africaonline, Form-net, SwiftGlobal, Net2000, UUNET, and Wananchi depended on KP&TC copper leased lines to provide internet services. These could either be a digital leased line (Kenstream) at 64Kbps per copper pair or an analog 9.6Kbps dial-up connection that could be turbocharged to 14.4Kbps. These lines were fraught with downtimes and poor service delivery on the part of KP&TC, and this frustrated the nascent ISP sector.

As one of the first users of the internet for email communication in the early 2000s, The Africa Safari Club in Mombasa was a frustrated customer; frequent downtimes and missed bookings meant that the quality of service the club was known for was suffering. On one of his visits to one of the club’s premises, the Sameer Group chairman Mr. Naushad Merali happened to meet the MD of the Africa Safari Club (ASC), and a discussion about the poor state of email services came up. In no time, Naushad and the then ASC MD Mr. Kai-Uwe Wullf would strike a plan to establish a private solution to the poor connectivity. Kai, with his previous background in ICT back in Germany, together with Richard Bell, who was already running Merali’s ISP SwiftGlobal, formed what would later be known as The Kenya Data Networks Company.

KDN provided digital leased lines via wireless WiMAX technology. They also provided frame relay services that enabled ISPs to provide connectivity and email to their customers in a more reliable way and at much higher capacity than was previously possible on copper lines. This, as you would imagine, led to an explosion in the use of internet and email services in the country, as the wireless technology made the service available across large areas of the city and large towns in Kenya.

Starting service in late 2002 and getting a license to officially commence operations in January 2003, KDN was a public data network operator and later got a license to operate an internet gateway. This was also historic, as in the early days of the internet in Kenya only the government had the monopoly to connect the country to international voice and data networks via the Longonot earth station. This service was slow and congested, as it was only 7Mbps in 2002, having grown from 32Kbps in 1995 when the first ISP in Kenya was formed. By the time KDN was a year old at the end of 2003, Kenya was doing 64Mbps to the Internet, a roughly ninefold increase in bandwidth consumption.

In 1995, the African Regional Center for Computing (ARCC) launched a full Internet system with financial support from the British Government’s Overseas Development Agency to pay for an international leased line. They became the first organization in Kenya to offer email addresses under the ARCC domain to customers who were mostly international NGOs. The first email address in Kenya belonged to the ARCC chairman, Dr. Shem Ochuodho. With the increasing popularity of the service, ARCC leased copper telephone lines and connected analog modems that dialed in to their email server to send and receive email. When KP&TC realized that their voice leased lines were being used to send emails and connect to the Internet, they declared the use of their lines for email access, or the direct use of international leased lines, illegal, stopping ARCC in their tracks. With time, KP&TC created data communication products out of the copper lines by offering dial-up lines and ‘always-ON’ copper digital leased lines branded as Kenstream. Sadly, KP&TC could not keep up with the demand for email and internet services, and within no time they were struggling to upgrade their exchanges fast enough. The lack of reliable last-mile connectivity and a congested international gateway meant that the various ISPs came together and formed the Telecommunications Service Providers Association of Kenya (TESPOK) to present a single front in the fight to liberalize the telecommunications sector and allow them to provide some of these services in direct competition with KP&TC, which in 1999 was broken up into several bodies: Telkom Kenya, Posta Kenya, and the Communications Commission of Kenya (CCK), the sector regulator.

After much lobbying by the industry players, CCK granted KDN the first public data license, and Kai went on to turn Merali’s $4 million KDN investment into $236 million in less than 5 years. Thanks to Kai’s leadership at KDN, Kenya had some of the most reliable internet in the region, and its international gateway at some point carried close to 70% of Kenya’s traffic to the Internet. KDN’s game-changing network, very early on in the country’s adoption of the Internet, ushered in the technology boom we enjoy today. From their Mombasa Road office, KDN rolled out wireless networks and pioneered metro networks in Kenya by laying the first Nairobi fiber metro network in May 2005. KDN would also be the first entity to lay an underground fiber-optic cable linking Nairobi and the coastal town of Mombasa, thereby availing the first undersea cable internet to Kenya’s interior with low latency and high bandwidth.

KDN is the grandfather of today’s Liquid Telecom, East Africa’s largest data and internet traffic carrier, handling more than half of the region’s internet and business communication traffic. Kai’s foundational leadership at KDN enabled this country to become an early adopter of the Internet, to the great benefit of the socioeconomic well-being of all citizens.

Kai’s contribution to the growth of the internet and ICT in general did not end with KDN; he went on to take leadership roles at Google and Nashua, and was most recently an advisor at Volt AI. May he rest in peace.

Why are Telcos Repositioning Themselves as IT Companies?

Last week, Safaricom CEO Peter Ndegwa announced major changes in the structure of the company. One of the notable changes was the merging of the Networks and Information Technology departments into one, to be known as the Technology and Information Organization. This is but one of many moves the telco has made in recent times to reposition itself from a purely telecommunications company into an IT service provider, without losing focus on the core services for which it holds a license from the regulator.

Other than Safaricom here in Kenya, Liquid Telecom South Africa relaunched its brand as Liquid Intelligent Technologies and dropped the word Telecom from its name altogether. According to its press release, Liquid is now focusing on becoming a digital services provider in South Africa. There was also a recent announcement by Telkom Kenya that it has a new strategic focus on being the digital transformation partner of choice for enterprise and SMEs.

Why are these telcos all of a sudden reorganizing their service offerings and focus as they move towards becoming integrated digital services providers?

Reason number 1

When it comes to total spend on ICT services, most organizations spend about 15% of their ICT budget on communication and connectivity, with the remaining 85% going to other ICT services. With many organizations embarking on or already in the process of digital transformation, IT budgets are receiving higher and higher allocations as organizations work towards being future ready. These telcos are realizing that they are leaving a lot of money on the customer’s table by providing connectivity alone, and this desire for a bigger slice of enterprise IT spend is the reason for their new focus on providing the enabling technologies for digital transformation.

Reason Number 2

With OTT providers such as Facebook, Google and others building their own telco infrastructure, and in some cases partnering with telcos in rolling out networks and services, the infrastructure space is now awash with new non-traditional players who see infrastructure not as an end in itself, as telcos do, but as a means to the more lucrative end of driving user traffic to their platforms. With deeper pockets, these players are investing money into research and development on new ways of delivering telco services to the masses, and they also have a better grip on the end user than the telco has. For example, an Android phone user watching a YouTube video or running a Google search on the Telkom Kenya network has more touch points with Google than with Telkom Kenya in their online experience. With Google Loon coming into the picture to offer 4G for Telkom, nearly the entire experience will be powered by Google. This consigns many telcos to irrelevance in the ever-increasing online user experience.

Also, with most of these content providers hosting their content locally within the country and bringing it in via their private links, commercial undersea cables and regional carriers have started to see lower-than-projected demand for internet traffic, as most content is locally cached. Facebook, on the other hand, is already zero-rating its WhatsApp and Facebook services on many mobile telco networks, including here in Kenya where many data bundles now include ‘free’ WhatsApp. I discussed this trend in a different blog post here. With Facebook pioneering mobile money transfer via WhatsApp in Brazil and India this year, they will in no time avail it in Kenya and Africa in general. This will be a big upset to M-pesa, because the M-pesa experience is always offline and away from where people spend most of their time online. WhatsApp payments will bring payment into Facebook, Instagram and any online portal where online businesses exist today. The fact that you have to leave the browser or the app to complete a transaction is M-pesa’s Achilles heel. With the traditional telco losing its place in the online experience of its subscribers beyond providing internet bandwidth, its focus had better be on ways to become more relevant to that experience.

In a nutshell, traditional telco revenue models are under threat, and the reorganization we are witnessing is a recognition of this and an attempt to act while there is still time.

Serverless Computing is Changing IT Consumption Models

With the rapid adoption of cloud computing by organizations globally, IT workloads that were previously handled by infrastructure in the customer’s premises have been moved off premises to the cloud. The shift from on-premises to the cloud was informed by the advantages that cloud infrastructure has over on-premises infrastructure, with lowered operating costs and hyper-scalability at the top of the list.

As time went on, it became clear that merely shifting existing IT workloads to the cloud was not optimal, and many applications and systems have now been redesigned to take advantage of cloud architectures. This is because applications were traditionally designed in a monolithic way that never considered the added efficiencies of deconstructing the various functions and runtimes into separate processes that, if well managed, can lead to a lower TCO.

Runtime as a cost

On-premises, powering a server to run an application was a fixed cost. The server would run 24/7, consuming power and being cooled and managed by someone, even if the application it hosted wasn’t used often. The cost of running at high and low loads was nearly equal, and scalability was limited by the time it took to deploy new hardware, which was often measured in weeks or months.

With the advent of the cloud, consumers moved their workloads to virtual machines (VMs), which more or less mimicked the physical servers in their premises, and the cost model was similar in that low- and high-traffic periods cost nearly the same per runtime. The main advantage was that the time to expand capacity was reduced from weeks to a matter of minutes, or seconds if well automated. The per-runtime cost, however, remained mostly unchanged. This made owning and running an application an expensive affair.

Function as a Service (FaaS)

What if cloud providers could charge customers not based on how many servers, and how much computing and storage capacity, they lease, but only when their application runs or is accessed? For example, instead of leasing a VM to host a website that is accessed, say, 10 times a month and paying for the VM lease for the whole month, why not pay for just the 10 instances when the website is accessed and not for the time the VM sat idle?

This approach is known as serverless computing because you are no longer paying for a VM in which to host the website, but only for when the website is serving pages to users. As the website owner, your concern therefore shifts from managing the VM (operating system, web server software, databases, AAA, memory, cache, etc.) to managing only the website contents. The preceding example is simplistic, but if you look at a large organization running several servers that host its business applications, removing the task of managing servers frees up time and money to focus on what really matters, i.e. the efficiency of the application. The customer is now running their applications in a serverless environment. Serverless does not mean the total absence of servers; it means that the headache of managing the underlying infrastructure and platform is removed from the customer, and a new billing method is adopted that charges for when the application runs as opposed to the power and size of the servers.
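To make the shift concrete, here is a minimal sketch of what the customer actually maintains in a FaaS setup, written in the style of the AWS Lambda Python handler convention; the "name" event field is a hypothetical input for illustration.

```python
import json

def handler(event, context):
    """Minimal function-as-a-service handler sketch.

    The platform invokes this once per request; there is no server for
    the customer to provision, patch, or scale, and billing covers only
    the time this function actually runs.
    """
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Deployed behind an HTTP trigger, a function like this is essentially the entire artifact the website owner manages; the operating system, web server and scaling are the provider’s concern.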

This approach is cost effective in the long run, as organizations can significantly lower their computing costs by paying only for the computing power they actually use (when the application runs) rather than for the CPU, memory and storage they continuously lease in a VM. This new billing approach is becoming very attractive because of the lowered IT costs on offer if the system is well designed and optimized for the cloud environment in which it is hosted. The rise in demand for cloud DevOps engineers and cloud architects is fueled by many organizations’ desire to re-architect their applications for the cloud. As many CIOs are realizing the hard way, legacy systems simply ported to the cloud do not derive much benefit from being in the cloud, and can sometimes even cost more to run there if not well architected. The redesign of systems is a big part of moving to the cloud, and serverless architectures have provided a new and efficient way to run applications on the cloud.
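The billing difference for the rarely visited website from the earlier example can be shown with back-of-the-envelope arithmetic. All prices below are assumed figures for illustration, not actual provider rates.

```python
# Illustrative monthly cost: always-on VM vs per-invocation FaaS billing.
# Every price here is an assumed figure, not a real provider rate.
VM_HOURLY_RATE = 0.05          # assumed $/hour for a small VM
HOURS_PER_MONTH = 730

PER_INVOCATION = 0.0000002     # assumed $ per function invocation
PER_GB_SECOND = 0.0000166667   # assumed $ per GB-second of runtime
invocations = 10               # the website is accessed 10 times a month
memory_gb = 0.128              # assumed memory allocated to the function
seconds_per_invocation = 0.2   # assumed run time per request

vm_cost = VM_HOURLY_RATE * HOURS_PER_MONTH
faas_cost = invocations * (
    PER_INVOCATION + PER_GB_SECOND * memory_gb * seconds_per_invocation
)

print(f"VM (always on):   ${vm_cost:.2f}/month")
print(f"FaaS (10 visits): ${faas_cost:.8f}/month")
```

With the assumed rates, the idle VM costs tens of dollars a month while the pay-per-use bill is a fraction of a cent; the gap narrows as traffic grows, which is why the optimization only pays off for well-understood, bursty workloads.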

Some common serverless computing services include AWS Lambda, Microsoft Azure Functions, Google’s Cloud Run and others. These services enable organizations to focus their talent and resources on designing and writing better applications, using the FaaS capability to run applications that are cost effective and can scale in real time with high availability globally. The pay-per-use approach has shifted IT costs from being predominantly fixed to highly variable, and this, as any bean counter will tell you, is accounting nirvana.

5G and the New Kenyan Economy

In their 2019 Hype Cycle report for Enterprise Networking, Gartner placed 5G services at the Peak of Inflated Expectations. This means that we are at the peak of media and marketing hype about 5G: in this phase, people are over-expectant about what the technology can deliver, coupled with unrealistic projections of it becoming available to the wider population. The report also says that 5G is about 10 years away from showing real-world benefits and being consumed by the mass market. The fact of the matter, however, is that 10 years is not a long time; 5G is coming, and it will change the economy in more significant ways than the current and previous mobile generations did.

The question in many people’s minds, however, is why the world is rushing to 5G when much of the world’s population isn’t even covered by 4G. The other question in some people’s minds is why we need higher speeds when current needs are being met twice over by the existing generation. The answer lies in how this new generation of mobile technology is designed. Unlike previous generations, 5G is designed with three things in mind: offer higher speeds, connect a much larger number of devices to the network, and do this at a much lower latency.
These three focus areas mean that wireless mobile networks will no longer just be high-speed highways to access content online, but will morph into part of our critical supporting infrastructure, providing services such as urban transportation, medical services and e-government, and running smart cities.

In the same way that previous generations of mobile networks created jobs and revolutionized how we live and work (e.g. mobile money, internet and social media use), 5G will do even more. For example, to deliver the low latencies needed by critical services such as remote medical and machine operation procedures, content providers will need to host their content and machine learning code as close as possible to the end users. To achieve this, they will need to host their content not in the cloud, but at the edge. Edge computing is going to experience exponential growth as content providers host content at the edge. This will democratize storage service provision and enable smaller investors and players to invest in edge computing services in their locality. A staff sacco, for example, instead of doing the usual land-buying ritual as a form of investment, could build small green-energy-powered data centers in rural Kenya and lease them out to 5G network operators and content providers.

The ability to connect a significantly larger number of devices to the network is also a strong point of 5G. Unlike previous generations that mostly connected mobile phones, 5G will connect sensors and everyday items such as cars, home electronics, gates, windows and doors, pets, furniture and much more to the Internet. In the same way that mobile devices have created employment for the masses, from those who sell, repair and maintain them to those who sell airtime scratchcards, these newly connected devices and sensors will create new business and job opportunities for citizens in ways we cannot imagine today.

Finally, the higher throughput that 5G will bring will usher in new ways of accessing online content and infotainment. Due to the low latency and high bandwidth of 5G, technologies such as Virtual Reality (VR) and Augmented Reality (AR) will go mainstream: product owners will create VR/AR tours of their products for marketing and sales purposes, and e-learning will change from being presented on a 2D screen into an immersive experience, changing how education is delivered. All these technologies will spur new industries, the same way the initial internet and mobile revolution spurred new jobs and careers for web and mobile app designers, social media influencers and vloggers. The lower latency and higher bandwidth will also enable the remote operation of machinery, and the remote treatment of patients, by experts from halfway round the world, bringing world-class medical care or technical expertise close to where it’s needed in a cost-effective way. New opportunities and business models will emerge from this. If you are amazed by the current mobile revolution, prepare to be one hundred times more amazed by what 5G will usher into the world.

What Next for Airtel after failed JV Attempt with Telkom Kenya?

With Safaricom dominating the mobile services sector with a 65% market share, compared to Airtel Kenya’s 24.6% and Telkom’s 6.7%, the latter two saw it fit to form a Joint Venture (JV) and join forces in trying to erode Safaricom’s market share. They intended to do this by taking advantage of the synergies the JV would bring: Telkom’s history as the incumbent operator and its considerable infrastructure and real estate, Airtel Kenya’s parent company’s experience in running successful mobile networks and services, and both parties’ customer bases.

In this JV, under the name Airtel-Telkom (a missed opportunity to name it AirT&T 🙂 ), it was planned that Airtel would run the mobile business while Telkom focused on digital services to the enterprise. However, this was not to be: in August 2020, Telkom issued a statement saying they were no longer pursuing the JV transaction as they saw it fit to change their strategy. This announcement came some months after the regulator, the Communications Authority of Kenya, gave new conditions for the JV to get a green light. These conditions included that neither party could enter into any other sale, merger or buy-out in the next 5 years. Other conditions included that Telkom relinquish its 900 MHz and 1800 MHz RF real estate back to the government upon the expiry of its license term, because Airtel was now the one to offer mobile services and has more than enough RF spectrum to serve the merged customer base four times over. The two were also expected to honor any existing obligations to the government, such as paying their operating license and spectrum fees.

As you would imagine, these new conditions made it very difficult for the JV to make any business sense. They put Telkom at a disadvantage and made the JV unprofitable. Many saw Safaricom’s hand in these new conditions, but I beg to side with the regulator here because:

  1. With Airtel running the mobile side of things in the JV, Airtel alone has enough spectrum to serve both its customers and Telkom’s. The two would therefore be sitting on a lot of premium RF real estate while lacking the customer numbers to utilize it efficiently. The JV, according to the regulator, would result in inefficient use of RF spectrum. One of the regulator’s mandates is to ensure the efficient use of spectrum as a scarce resource. This is why they asked Telkom to hand back the RF allocated to it in the 900 MHz and 1800 MHz bands once its existing license expired, if it went ahead with the JV.
  2. A JV does not create a new legal entity capable of becoming a licensee of the regulator. Both firms were therefore expected to meet their obligations individually, including license fee payments, the filing of returns to the regulator and other bodies, and Quality of Service measurement.

With the JV now off the table and Telkom already announcing its new strategic direction, what’s next for Airtel?

Why is Airtel Here in the First Place?

The many mergers and acquisitions that led to Airtel Kenya aside, one of the cardinal mistakes that its predecessors made was failing to connect with the ordinary Kenyan. Kencell Communications, the first licensed mobile operator, was the predecessor of today’s Airtel. They entered the market with per-minute billing at 35 shillings a minute; a call lasting one second or 60 seconds cost the same on their network. This was not very subscriber friendly. Enter the second mobile operator, Safaricom, who introduced per-second billing, and this model was an instant hit with the nascent market. With Safaricom becoming the ordinary person’s preferred network due to per-second billing, Kencell decided to focus its energies on the business sector by selling its services to corporates. Unbeknownst to them at the time, the mobile market turned out to be a largely mass-market one, with individual users massively outnumbering corporates as customers. With this early lesson, Safaricom learned to speak the ordinary citizen’s language and managed to connect with them in a way Kencell could only dream of. For example, unlike Airtel’s, Safaricom’s products and marketing were predominantly in Swahili; in fact, all Safaricom products to date have a Swahili name or origin. This failure by Kencell and its subsequent brands and owners to connect with the customer at a personal level is largely responsible for Airtel Kenya’s situation today, as Safaricom’s product names have become verbs and nouns in the common citizen’s daily speech.

I however think that all is not lost for Airtel. If you look at Safaricom’s financial results, the profit and revenue contribution from newer services such as Internet, enterprise connectivity and fiber to the home is on the rise, while that from voice is not growing as fast. With this in mind, Airtel has the opportunity to reinvent itself as more than a mobile operator. It has the tools and resources to transform itself from being viewed simply as a mobile operator into a digital services provider and integrator. There is an increasing drive towards a digital economy, with many businesses going online and depending on connectivity and the cloud. Homes are also getting connected to the Internet, and this is driving digital content consumption. This is the space Airtel needs to play in. The recent launch of Airtel TV is a step in the right direction and should not stop there. Airtel’s dalliance with the African art scene through past sponsorships has also put it in a premier position to be a leading content generator in the country. If Royal Media Services, with their relatively shallower pockets, made Viusasa work, why can’t Airtel?

Another area is the provision of digital services to corporates. Liquid Telecom partnered with Microsoft, and Safaricom with Amazon, to offer hyperscale computing to the market. Google Cloud Platform (GCP) and AliCloud are other potential hyperscale computing players that Airtel can partner with to target the corporate market.

Airtel has an immense opportunity to become a leader in digital services and content if it first stops seeing itself as a mobile operator (and therefore stops competing with Safaricom or Telkom) and moves fast to capture this emerging market, becoming the go-to provider for Artificial Intelligence/Machine Learning, cloud computing and content delivery. The recent investment by Facebook in Reliance Jio in India is indicative of what progressive mobile operators need to do to remain in business. The Facebook investment infuses into the telco a new way of thinking about how value can be delivered to consumers.

What’s attracting Microsoft to Nokia again?

There is a prediction by CCS Insight, a market research company, that Microsoft might buy Nokia Networks in 2021. This is because of the former’s newfound appetite for acquiring telecom gear companies as it repositions itself as a highly vertically integrated operation. Microsoft recently acquired Affirmed Networks and Metaswitch in a drive seen by many as positioning itself as the dominant cloud player. In 2013, Microsoft bought Nokia’s devices division for $7B in a largely failed deal that was seen as Microsoft’s attempt to catch up in the mobile space, where it had lagged in innovation and investment for some years.

With the telco network going to the cloud, it is now possible to host in the cloud the RF and other equipment traditionally found in a base station, and share these resources across the entire network. With this change, telcos can roll out services with lightning speed and lower network costs significantly, as resources are cross-shared on the cloud and can therefore be purchased as-a-service and not as equipment. Telcos will also enjoy a pay-per-use model, which will significantly lower their capex on network roll-out and maintenance. The role of the telco equipment manufacturer is also changing as they move from the sale and maintenance of network equipment to the sale of full-scale telco services from the cloud. This means that manufacturers such as Nokia, Huawei and Ericsson will run massive cloud-based telco networks and lease these as a service to operators. This possibility of running large parts of mobile networks on the cloud is what I believe the market analysts see as attracting Microsoft to Nokia. The reverse is also true: for Nokia to offer telco on the cloud, it will need robust cloud infrastructure and services, which it currently lacks; pairing up with a large cloud player such as Microsoft will give it access to an already existing cloud platform run by Microsoft.

The other factor that I think gives this prediction higher chances of coming to pass is that Over The Top (OTT) content providers’ market reach is being slowed down by telcos, who levy a fee on subscribers for them to access the internet/content. By eliminating telcos from the content supply chain, OTT content providers such as Facebook, Netflix, Google and the cloud platforms will reach more customers who are currently unable to enjoy their services due to data costs. At the moment, in many countries including Kenya, Facebook Inc is already working with telcos to zero-rate access to Facebook and WhatsApp on many data plans. Facebook does not believe you should pay your mobile provider to use their services; the same goes for Google, Netflix and Microsoft, who also, fortunately or unfortunately, have deep pockets and the research and development teams to roll out far superior networks. These players will soon start taking up shareholding in telcos, as is the case with Facebook’s recent $5.7B investment in India’s Reliance Jio. I discussed this developing trend of OTT content providers’ investment in telecommunication services in detail in a previous post here.

The ongoing trade war between China and the US, which saw Huawei network gear and software banned in US networks, could also be a factor. I discussed this trade war in detail here. With no major US-based telco equipment manufacturer (after Nokia absorbed Alcatel-Lucent), Microsoft sees an opportunity to build the image of a US-owned supplier through the acquisition of Nokia, as it will position Microsoft/Nokia as the preferred supplier to US networks. With 5G being heavily dependent on the cloud and edge computing, Nokia’s experience at the edge will enable Microsoft to dominate the 5G space from the cloud to the edge with ease.

Kenya Needs a Well Coordinated Data Centers Investment Policy and Strategy

The growth of online content consumption, fueled by the trinity of cheaper smartphones, social media and 4G, is evident everywhere you look. Your average mama mboga, office workmate or spouse is today a regular consumer and producer of content such as video clips and photos (mostly memes), and posts to social media more often. The popularity of Facebook, Instagram, TikTok, WhatsApp, YouTube, and video and audio streaming services such as Netflix, Showmax and Viusasa says it all.

This therefore means that content providers need to ensure higher levels of service quality by improving their systems to cope with the demand and deliver the expected experience. One example: these days YouTube videos rarely buffer like they did 6 years ago, because YouTube now stores popular videos in Nairobi and not in a data center in Europe or the US.

This drive by the providers to deliver a good user experience means that they have to depend more and more on public cloud infrastructure. This infrastructure is run by cloud providers such as Google, Amazon Web Services (AWS), Microsoft, Alibaba, and many more. These cloud providers, on the other hand, lease data centers (DCs) from private investors such as Africa Data Centres and iColo.
The reason these content providers use the public cloud is that it is designed to be fault tolerant and to always avail services at the expected quality.

Cloud Infrastructure 101

The public cloud is designed in such a way that cloud services are provided as close to the consumers as possible without compromising service levels and availability. To do this, cloud providers have points of presence from which they avail cloud services. These points of presence are region based, so there would be an Asia North region, Asia South, Africa North, Africa South, Europe East etc. Within these regions, they have city regions or zones. For example, an Africa South region can have Cape Town, Durban and Johannesburg zones.
Within these zones they have data centers that are at least 60 miles apart from each other and interconnected with high-speed cables. To ensure high availability, many cloud providers usually have at least three data centers in a zone. So, using the example above, there would be three data centers in the Durban area separated by at least 60 miles, and the same for Cape Town and Johannesburg.
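The zone design described above boils down to a pairwise distance constraint, which can be sketched in a few lines. This is a toy model, not any provider’s actual tooling; the coordinates below are rough illustrative values for a hypothetical ‘Nairobi’ zone, and the 60-mile figure is the separation mentioned above:

```python
from itertools import combinations
from math import radians, sin, cos, asin, sqrt

def miles_between(a, b):
    """Great-circle (haversine) distance in miles between two (lat, lon) points."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 3959 * 2 * asin(sqrt(h))  # Earth radius ~3959 miles

def zone_meets_separation(dc_locations, min_miles=60):
    """True if every pair of data centers in the zone is at least min_miles apart."""
    return all(miles_between(a, b) >= min_miles
               for a, b in combinations(dc_locations, 2))

# Rough illustrative coordinates for a hypothetical 'Nairobi' zone:
nairobi_zone = {
    "Mombasa Road": (-1.32, 36.90),
    "Karen": (-1.32, 36.70),   # only ~14 miles from Mombasa Road
    "Thika": (-1.04, 37.08),
}
print(zone_meets_separation(nairobi_zone.values()))  # → False
```

Swapping Karen out for a site like Mai Mahiu or Thika pushes the pairwise distances up, which is exactly the argument made later in this piece about DC siting in Kenya.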

Is Kenya ready?

With the example above, it means that for Kenya to be attractive to cloud operators, we must invest in DCs in a way that makes the country attractive to providers seeking reliable infrastructure to provide content from. At the moment, when I take stock of our status, I believe there is room for improvement as far as preparing the country to be a destination for cloud providers is concerned. The location of current DCs, and of future planned ones, doesn’t inspire confidence in would-be customers about the reliability of services from these DCs.

The Africa Data Centres (ADC) facility in Nairobi (Mombasa Road) is by far the best run and largest data center in East Africa, closely followed by the iColo data centers in Mombasa and Nairobi (Karen). Safaricom also runs a data center in Thika town. There is ongoing investment by Huawei and a local partner in a data center in Mombasa, and also the government DC in Konza that is currently in makeshift modular structures as a proper one is being set up.
All these are investments in the right direction, but I think we are dragging our feet as far as investing in world-class data center services is concerned. I sometimes imagine what the situation would be if the investors behind all of Kenya’s empty malls had put their money into DCs instead. We would become the regional cloud hub for East and Central Africa by virtue of having a more stable economy and political climate, and far better infrastructure and power supply.

Kenya needs to coordinate and catalyze investment in data centers by private investors, for example by giving tax concessions and incentives to anyone investing in a DC at a predetermined location or region. By this I mean that government policy makers can map out how Kenya can coordinate investment in DCs to ensure that we become attractive to the large cloud players.

This year, Microsoft and Google announced that they intend to be carbon free by the year 2030. By this they mean that their services will run off offices, data centers and networks that rely 100% on renewable energy sources. This of course means that they will stop using coal power and opt for greener sources such as solar, wind and geothermal. With Kenya sitting in a region where all these are abundant, there is no reason why Naivasha should not turn from a flower town on the decline into the DC capital of Kenya. It is closer to the East African region than Mombasa, has abundant geothermal power and a lake to cool the DCs. Instead, Djibouti is eating Naivasha’s lunch as it fast becomes the regional DC go-to city.

The reason I am calling for central coordination on this is that so far, the investments have been driven by factors other than the suitability of DC locations in relation to other nearby DCs, and the investment levels have also been very low. For example, the iColo DC in Karen is too close to the Africa DC on Mombasa Road; this doesn’t meet the minimum distance by which DCs should be separated from each other. A natural disaster or major power outage on Mombasa Road would likely affect Karen, but not Thika town or Mai Mahiu town, where I think the next DC in the ‘Nairobi’ zone should be.

With content consumption expected to grow even faster with the adoption of 5G, cloud computing as we know it will also slowly morph into a hybrid of true cloud and edge computing. In the latter, content is hosted much closer to the user than before. In the Nairobi case, instead of the content being hosted only in a DC on Mombasa Road, it would also be hosted in multiple locations near consumers, such as at mobile base stations or nearby malls that will offer DC space for edge computing services.

Please don’t get me wrong, I am not calling for regulation of the DC space in Kenya per se. I am calling for government incentives akin to the Export Processing Zones (EPZ) concept, but for DCs. So as an investor, I stand to gain tax breaks, for example, if I build my DC at Makindu town to supplement the DCs at Konza. I am however free to choose where I set it up, as long as I comply with existing laws and it makes business sense. Saying that Konza Technopolis will answer my call above is missing the point, as we cannot have all DCs located in one place. They would not be attractive to customers seeking high reliability and availability of services.

Timely Policies and Laws Needed in the Kenyan ICT Space

For some time now, Kenya has continued to enjoy pole position as far as innovation and application of ICTs is concerned. We have been hailed as an example of how, if applied correctly, ICTs can be a catalyst for development. Kenya found itself here not by chance, but thanks to the forward-thinking leadership that put in place progressive policies, guidelines and regulations to spur the growth of ICTs.

At around the turn of the century, Kenya formulated progressive policies, laws, regulations and guidelines that guided the growth and direction of the then nascent ICT sector. This was through the Kenya Communications Act (Cap 411A) of 1998, which underwent subsequent amendments: the Kenya Communications (Amendment) Act of 2009, and the Kenya Information and Communications (Amendment) Act, 2013. These laws were informed by government policy towards ICT. To simplify the relationship between all these documents: policy documents influence or result in laws (acts of parliament), which in turn result in sector guidelines and regulations (handled by the regulator). This therefore means that if we get the policy wrong at the onset, then all subsequent action plans and sector laws will be off target or have undesired outcomes.

With the rapidly changing ICT sector, the task of a country keeping itself current with these changes can be daunting. I would say we did very well at the onset despite several challenges that arose. One of the most common examples of where policy/laws/regulations lagged behind and failed the citizens was in the area of data protection, which allowed a lot of mobile phone fraud (ala the Kamiti/‘mtoto ameumwa na nyoka‘ SMSs) and has led to rampant identity theft and mobile money fraud. Another area facing challenges due to poor or delayed policy, laws and guidelines is the infrastructure front, on matters to do with protection of ICT infrastructure from accidental damage, vandalism and sabotage. Cybersecurity too is becoming a big problem, especially now that many offline business transactions are moving online and to social media.

The above examples show that it is very risky for any country to have lagging policies/laws/regulations on matters ICT. It becomes very difficult to reverse any established ICT-related vice using laws once technology has opened up a wide lead over them. Correcting this often involves not just new laws, but the adoption of newer technologies, whose cost is often passed on to the end users. In summary, lagging laws end up eroding any financial or technical margins we have. This is already happening in the e-commerce space, where the lack of a proper national physical addressing system and delayed data protection and cybersecurity laws have led to a loss of trust in online transactions and higher operating costs for vendors.

There is a saying that innovation often leads regulation. This is true; the question however is by what margin or gap should regulation lag behind the innovation curve? Due to the accelerating pace of technology advancement, this lag needs to become smaller and smaller. This is however not happening in Kenya. The Kenya Information and Communications policy guidelines were recently gazetted after nearly 4 years of review and deliberations. This is too long. The period of time it took to make these amendments is very long in the ICT universe, and a lot has changed, making some of the document’s content obsolete or near obsolete. Several examples follow.

When the gazetting took place, many media pieces focused on the policy proposing that ICT companies can only be considered Kenyan or local if their shareholding includes 30% local ownership. There was hue and cry over this, as many seemed to misinterpret the law to say that foreign-owned ICT companies will not be allowed to operate in Kenya without 30% local ownership. This is far from the truth. What it says is that the government will give preference to Kenyan ICT companies when awarding government tenders, and it then goes ahead to define what a local company is. This therefore means a foreign company can still operate in Kenya and still get government ICT tenders, but can drastically increase its chances of being awarded the tenders if it meets the 30% local ownership threshold. This is different from saying that only locally owned companies can operate in the Kenyan ICT space.

The focus of the policy’s critics should however have been on the fact that it is taking very long to enact or gazette policies and laws on matters ICT in the country, leading to a lag that is costing us our competitive advantage as a country. We stand to lose the gains made so far if we are not nimble enough to change with the times by ensuring that our legal, institutional and regulatory framework development curve closely follows the innovation curve.

As a country, we can also claim leadership in the region and in Africa on our institutional framework setup. We have an extremely vibrant and forward-looking ICT regulator and bodies that are tasked with directing and spurring the uptake and use of ICT in our socioeconomic activities. We must however take care to avoid the pitfalls that seem to befall these institutions, which sometimes come across as being more interested in revenue generation than in carrying out their mandate. The focus on levies by both parastatal/national and county governments for wayleaves means that governments will tend to take the approach which maximizes their revenues, as opposed to an approach geared towards the successful rollout of ICT services. A few months ago, the ICT Authority issued a notice that would have made all fiber plant operators pay a levy to it, in addition to seeking approval for cable laying, on top of the already existing levies by local governments and roads authorities where these cables pass. I say this because great initiatives such as the National Broadband Strategy (2018-2023) might not yield much if the implementation of this strategy is met with hurdles in the name of county levies, such as the case when the Turkana county government demanded KES 93 million in levies from Geonet Technologies for laying fiber in the county. See story here.

Considering the National ICT policy document has been in the works since 2016, the gazetting of the same close to 4 years later is a mistake we can only afford to make once. In the next few years, if we remain rigid in the quick implementation of these very good policies, laws and strategy documents, we risk losing our leadership in matters ICT on the continent. The rapid adoption of existing and emerging technologies that require trust to shift from the offline to the online space for them to succeed (e.g. e-commerce) means that legal and regulatory tools and frameworks need to be properly established. A good example is the hue and cry from citizens during the very noble Huduma number registration process. The process failed because of a lack of trust, caused by the requisite legislation on data protection not being in place given the nature of data that this registration process was collecting. The rise in cybercrime and mobile money fraud too is a result of the failure to be quick in enacting laws and regulations around online transaction authentication and trust management. For example, there is a disconnect in identity verification for mobile money transfer, as the form of identification is offline while the transaction takes place online; this makes authentication of transacting parties a subjective matter left to the vendor/agent serving you, and not tied to a foolproof and objective approach such as online digital ID verification.

AI: A spanner in the works

The nearly four-year delay in the adoption of the new policy coincided with a period of rapid development and maturity of machine learning, cloud hyper-scale computing and data analytics. These three have led to the increasing adoption of Narrow AI systems in many areas of life, with AI now applied in algorithms that impact people’s very lives and health. The use and application of ICTs such as AI now cuts across moral and philosophical areas, opening a Pandora’s box on how this can actually be regulated with the existing policy framework. Forward-looking regulators and governments are now re-examining the ICT regulatory space through an AI lens. This is where we should be as a country as far as our national ICT policy is concerned. There is a drive in South Africa by industry leaders to have the local regulator reconstituted into a more forward-looking body that will help citizens enjoy digital dividends.

Other than AI, the advent of e- and m-commerce apps and online shopping brings in new regulatory challenges too. For example, what can the regulator do to ensure high levels of public trust in online service rating systems, and that they are not abused or gamed? Is the 5-star rating on your Uber driver’s profile genuine? Are the number of followers for that Instagram shop, and its reviews, genuine? The ICT policy, or the resultant laws and guidelines, should already be addressing this in the current market conditions.

It is my hope that the pending legal, institutional and regulatory frameworks and tools at our disposal will be implemented within the desired timelines. These include the National ICT Infrastructure Masterplan (NIIM, 2019-2029), which is a very key document for the future of ICT services roll-out in this country.

Voice is About to Become the New User Interface and a Global Equalizer

In November 2014, Amazon released the first smart speaker with a digital assistant, named Alexa. For those not familiar with what this is, it’s simply a cloud-connected smart speaker that a user can issue voice-based commands to do simple things like play music, seek weather, traffic and news updates, and also control other smart devices in the home. All this is done by initiating a conversation with the smart speaker by uttering a ‘wake’ word before the command, such as ‘Alexa, how is the weather today?’, and Alexa responds with the weather update. Here the word ‘Alexa’ is the wake word, and the minute the speaker hears it, it actively listens for the next words and decodes what you are saying using cloud-based speech recognition systems. As of mid 2019, Amazon estimated that 30% of American and European homes had a smart speaker, up from 22% a year earlier. This is about 100 million Alexa devices in the market.
Not to be left behind, in 2016 Google also released a virtual assistant called Google Assistant, initially available on select smart speakers, but in 2019 it was made available on over 1 billion Android phones in the world (talk about scale!). Other virtual assistant flavors include Apple’s Siri, available on all iPhones, Microsoft’s Cortana in Windows 10 and Samsung’s Bixby.

With these assistants available in smart speakers and phones, a user is able to interact with a computing device such as a phone or personal computer to access information and carry out tasks that would traditionally have required an input device such as a touch screen, mouse or keyboard. For example, instead of unlocking my phone screen and opening Google Maps to check traffic conditions to, say, Galleria Mall, all I need to do now is say to my phone ‘Hey Google, traffic to Galleria Mall?’ and the assistant will answer back with the results, like ‘There is moderate traffic to Galleria Mall; from where you are, it should take you 7 minutes to get there’. I can also initiate a phone call by simply saying ‘Hey Google, call Thomas Sankara’, and the assistant will search for his number in my phone book and initiate the call without me touching the phone. On the appliances and electronics side, I will no longer need to look for the TV remote to change channels; I can instead simply say ‘TV, change channel to BBC News’ and it’s done. This is so good in many ways because:

  1. It’s much faster and involves fewer steps to get the same results, if not better
  2. It is more natural and intuitive than current interfaces that often need some training/skill or even literacy to use
  3. I can do all this while my hands and eyes are occupied doing something else. For example, if I’m driving, I can still use maps and make calls without looking at or touching the phone. As another example, I could ask the TV to change channels while I’m busy preparing a sandwich.

Other than accessing information from the internet as per the above examples, voice-based assistants can also be used to control smart devices and appliances (which explains Samsung’s foray with Bixby), schedule/cancel meetings and open apps on the phone, all by using voice commands.
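The wake-word flow described in this section can be sketched as a tiny dispatch loop. The wake words are real, but the keyword matching below is a hypothetical stand-in for the cloud speech-recognition and intent services the assistants actually use:

```python
# Minimal sketch of the wake-word-then-command flow. A real assistant streams
# audio to a cloud recogniser; here 'utterance' is already text for simplicity.

WAKE_WORDS = ("alexa", "hey google")

def handle_command(text: str) -> str:
    """Toy intent dispatch: map a recognised utterance to an action."""
    text = text.lower()
    if "weather" in text:
        return "Fetching today's weather forecast..."
    if text.startswith("call "):
        return f"Calling {text[5:].title()} from your contacts..."
    if "traffic to" in text:
        place = text.split("traffic to", 1)[1].strip(" ?")
        return f"Checking traffic to {place.title()}..."
    return "Sorry, I didn't understand that."

def assistant(utterance: str):
    """Ignore speech until it starts with a wake word, then dispatch the rest."""
    lowered = utterance.lower()
    for wake in WAKE_WORDS:
        if lowered.startswith(wake):
            return handle_command(utterance[len(wake):].lstrip(" ,"))
    return None  # not addressed to the assistant; keep listening

print(assistant("Hey Google, traffic to Galleria Mall?"))
# → Checking traffic to Galleria Mall...
```

The important design point is the split: the wake-word check runs locally on the device, while everything after it is shipped off for the heavy speech and intent processing.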

Why is this a big deal?
With the recent advances in Artificial Intelligence and Machine Learning, speech recognition systems have become pretty accurate at deciphering words in human speech. With speech being a highly variable input, because everyone has a unique voice and accent and variable surrounding noise, it was initially difficult to get computing systems to understand human speech, but with AI and machine learning advances in the last 5 years, this is now possible. Google Assistant and Alexa can now decipher English speech and accents by Lemaiyan from Narok or Billy Ray Cyrus from Texas with near equal accuracy for both inputs.

The biggest leverage that voice has is that the AI systems powering these digital assistants are now being trained in various languages and dialects. As of mid 2019, Amazon’s Alexa supports seven languages overall: English, French, German, Italian, Japanese, Portuguese (Brazilian), and Spanish. Google Assistant, on the other hand, currently supports sixty languages overall, including Swahili, Telugu, Gujarati, Zulu, Mandarin and many more.
With the addition of more languages currently ongoing, voice-based interaction with the Internet through mobile phones and smart speakers means that people who were previously locked out of the benefits of the Internet because they could not read and write would all of a sudden be able to access the limitless opportunities that being connected presents, in the comfort of their local language. It will soon be possible for everyone in the world to search for information on the internet and interact with mobile phone or computer apps, home appliances and electronics by simply speaking to them in the local language. This will be the most significant step in bridging the digital divide since the liberalization of telecommunications in the 1990s, and it can be leveraged to create a more equal society. The multiplier effect of this is mind-boggling if you think about it. A farmer in Eldoret will be able to seek markets for his produce or even operate a herbicide-spraying drone by issuing voice commands in his local language. A mother in rural Sri Lanka will be able to seek nutritional information for her child by speaking her local language to her phone’s digital assistant, and set reminders for hospital visits or school meetings, without needing to know how to read and write in English. A non-Greek speaker will also be able to participate seamlessly in conversations taking place in Greek by using the assistant to translate the conversations back and forth.

The popularity of voice-based interaction is also growing, with the touch screen slowly taking a backseat as the main user interface to the treasure trove that is the Internet and modern appliances and electronics. The stats below, sampled from developed countries, support the view that the voice-based user interface to technology and the services it provides is on a hockey-stick adoption trajectory (source):

  1. 40% of adults use voice search on a daily basis (Forbes)
  2. 52% of people use voice search while driving (Social Media Today)
  3. 65% of consumers ages 25-49 years old talk to their voice-enabled devices daily (PwC)
  4. On average, more men than women use voice search at least once per month (Social Media Today)
  5. A study conducted by Uberall found that 21% of respondents were using voice search on a weekly basis (Search Engine Watch)
  6. Close to 50% of people are now researching products using voice search (Social Media Today)
  7. The number of voice searches increased by 35x from 2008 to 2016 (Kleiner Perkins)
  8. A HubSpot survey found that 74% of respondents had used voice search within the last month (HubSpot)
  9. Mobile voice search on Google is now translated in over 60 languages (Wikipedia)

With the main mode of interaction with the online world being voice based, voice-based services will also be on the rise. Organizations are today deploying chatbots and voicebots to answer customer queries, take orders and fulfill them. For example, in the USA, it’s now possible to order pizza from Pizza Hut by simply saying ‘Alexa, order Pizza Hut’ and it will provide the menu options. If you instead say ‘Alexa, reorder Pizza Hut’, then it proceeds to re-order what you ordered last time. This improves the efficiency of service delivery, as these bots are available 24/7 at nearly zero marginal cost per additional customer, unlike hiring humans to do the work. These systems are also very well versed in the specific details and operations of the company and know where each bit of information is in the organization. A chatbot does not need to put the customer on hold to confirm something with the sales or finance department; it has access to all this information and can serve the customer in real time.
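The order/reorder behaviour just described is essentially a stateful intent handler. The sketch below invents the menu and replies purely for illustration; a real Alexa skill would be built on Amazon’s skill framework and the vendor’s own ordering API:

```python
# Toy voicebot: 'order <pizza>' places an order, 'reorder' repeats the last one.
class PizzaBot:
    MENU = ("margherita", "pepperoni", "hawaiian")  # invented menu

    def __init__(self):
        self.last_order = None  # a real skill persists this per customer

    def handle(self, command: str) -> str:
        command = command.lower().strip()
        if command.startswith("reorder"):
            if self.last_order is None:
                return "You have no previous order. Say 'order <pizza>' first."
            return f"Re-ordering your usual: {self.last_order}."
        if command.startswith("order "):
            choice = command[6:].strip()
            if choice not in self.MENU:
                return f"We have: {', '.join(self.MENU)}. Which would you like?"
            self.last_order = choice  # remember for next 'reorder'
            return f"Ordering one {choice} pizza."
        return "Say 'order <pizza>' or 'reorder'."

bot = PizzaBot()
print(bot.handle("order pepperoni"))  # Ordering one pepperoni pizza.
print(bot.handle("reorder"))          # Re-ordering your usual: pepperoni.
```

The only real state is the remembered last order, which is what makes ‘reorder’ a one-word transaction and gives the bot its near-zero marginal cost per customer.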

Social media will also move from the current text- and multimedia-based platforms such as Facebook to voice-based personas or avatars. Instead of curating an abstract Facebook wall with posts and status updates, people will curate voice avatars that are continuously trained to learn information about us and even speak on our behalf (in our exact voice, even). For example, a person can train their avatar to respond to questions on social media on their behalf. If my avatar has access to my calendar and I have allowed it to respond to people (or specific people) about my schedule and itinerary for the day, then another avatar/user can ask it where I am or what I will be doing at 3PM today and get an answer. My avatar can also represent me in online meetings, take note of what was discussed and what my takeaways or action points from the meeting are, and share this with me at the end of the day.

The blurring of the line between social media and real life will also happen, as this avatar can take on responsibilities in real life too. For example, instead of the HR manager sending a mail to staff inviting them to a physical meeting to brief them on the new staff medical cover, the manager can instead invite all staff avatars to the meeting and leave me to do more productive activities during the meeting time, a win-win for everyone. The avatars, being AI-based systems, will also be more efficient than a human at recalling and analyzing information, and can be used to carry out repetitive tasks or work on my behalf while I get paid. The avatar’s efficiency and closeness to my offline behavior and character will be a function of how much information I allow it to learn about me. The more I let it learn about me (how I speak, my moods, my social life, my work life, my plans for the day etc.), the closer it will be to resembling me as I am in real life.
Mix this with all the information that is on the Internet and you have yourself a virtual worker who can work on my behalf and also interact with others online while I sleep or go fishing in Murang’a. This is the idea behind Microsoft Cortana: create a digital assistant for the workplace that can learn about you and assist you in your office work, scheduling and reminding you of meetings, looking for information in the company ERP systems, responding to emails, reading reports and taking action, etc.

Despite all these possibilities, the issue of privacy and security is at the forefront as the major roadblock to voice-based user interface adoption. For example, is the smart speaker or Google Assistant on your phone constantly listening to your conversations beyond the wake word? Can hackers eavesdrop on your intimate or personal one-on-one talk with others in the room?
The truth is there will be no escape from voice adoption, as it presents the most natural way for most humans to use and control technology, and it also allows technology to talk back to us with feedback or results in a way we understand. With the coming hyper-connected world and IoT devices, current user interfaces such as touch screens will be unable to let us interact efficiently with technology. There is therefore a need for the developers of these systems to put in place measures that will build trust in these systems and instill confidence that they are not being abused or used to intrude into our private spaces, thoughts and speech.

The other fear is the cybersecurity aspect. There was a story last year where hackers used AI speech generation systems to imitate the voice of a company CEO on the phone and stole a large amount of money (read about it here, or a local version of the same here). This presents a new kind of threat that voice-based systems bring to cyberspace, and it needs to be dealt with in the design and implementation of these systems.

Finally, web-based systems and apps are these days designed with a ‘mobile first’ philosophy; this is about to change into ‘voice first’. Watch this space.