The Wullf of Mombasa Road

The family, many friends, and former colleagues and partners in Kenya's tech industry received the news of the demise of Kai-Uwe Wullf, the founding CEO of Kenya Data Networks (KDN), with shock and sadness. In 2003, KDN became the first private company to provide communication infrastructure in direct competition with the then Kenya Posts and Telecommunications Corporation (KP&TC). At that time, the few upcoming ISPs such as Africa Online, Form-net, SwiftGlobal, Net2000, UUNET, and Wananchi depended on KP&TC copper leased lines to provide internet services. These could either be a digital leased line (Kenstream) at 64Kbps per copper pair or an analog 9.6Kbps dial-up connection that could be turbocharged to 14.4Kbps. The lines were fraught with downtime and poor service delivery on KP&TC's part, and this frustrated the nascent ISP sector.

As one of the first users of the internet for email communication in the early 2000s, the Africa Safari Club in Mombasa was a frustrated customer; frequent downtime and missed bookings meant that the quality of service the club was known for was suffering. During a visit to one of the club's premises, the Sameer Group chairman Mr. Naushad Merali happened to meet the MD of the Africa Safari Club (ASC), and a discussion about the poor state of email services came up. In no time, Naushad and the then ASC MD, Mr. Kai-Uwe Wullf, had struck a plan to establish a private solution to the poor connectivity. Drawing on his previous background in ICT in Germany, Kai, together with Richard Bell, who was already running Merali's ISP SwiftGlobal, formed what would later be known as the Kenya Data Networks Company.

KDN provided digital leased lines via wireless WiMAX technology. It also provided frame relay services that enabled ISPs to deliver connectivity and email to their customers more reliably and at much higher capacity than was previously possible on copper lines. This, as you would imagine, led to an explosion in the use of internet and email services in the country, as the wireless technology made the service available across large areas of the city and the large towns in Kenya.

Starting service in late 2002 and getting a license to officially commence operations in January 2003, KDN was a public data network operator and later obtained a license to operate an internet gateway. This was also historic: in the early days of the internet in Kenya, only the government had the monopoly to connect the country to international voice and data networks via the Longonot earth station. That service was slow and congested, standing at only 7Mbps in 2002, having grown from 32Kbps in 1995 when the first ISP in Kenya was formed. By the time KDN was a year old at the end of 2003, Kenya was doing 64Mbps to the internet, an almost nine-fold increase in bandwidth consumption.

In 1995, the African Regional Center for Computing (ARCC) launched a full internet system with financial support from the British Government's Overseas Development Agency, which paid for an international leased line. ARCC became the first organization in Kenya to offer email addresses under the ARCC domain to customers, who were mostly international NGOs. The first email address in Kenya belonged to the ARCC chairman, Dr. Shem Ochuodho. With the increasing popularity of the service, ARCC leased copper telephone lines and connected analog modems that dialed into its email server to send and receive email. When KP&TC realized that its voice leased lines were being used to send emails and connect to the internet, it declared the use of its lines for email access, or the direct use of international leased lines, illegal, stopping ARCC in its tracks. With time, KP&TC created data communication products out of the copper lines by offering dial-up lines and 'always-on' copper digital leased lines branded as Kenstream. Sadly, KP&TC could not keep up with the demand for email and internet services and was soon struggling to upgrade its exchanges fast enough. The lack of a reliable last mile and a congested international gateway drove the various ISPs to come together and form the Telecommunications Service Providers Association of Kenya (TESPOK) to present a single front in the fight to liberalize the telecommunications sector and allow them to provide some of these services in direct competition with KP&TC. KP&TC was later broken up in 1999 into several bodies: Telkom Kenya, Posta Kenya, and the Communications Commission of Kenya (CCK), the sector regulator.

After much lobbying by the industry players, CCK granted KDN the first public data license, and Kai went on to turn Merali's $4 million KDN investment into $236 million in less than five years. Thanks to Kai's leadership at KDN, Kenya had some of the most reliable internet in the region, and KDN's international gateway at some point carried close to 70% of Kenya's traffic to the internet. KDN's game-changing network, built very early in the country's adoption of the internet, ushered in the technology boom we enjoy today. From its Mombasa Road office, KDN rolled out wireless networks and pioneered metro networks in Kenya by laying the first Nairobi metro fiber network in May 2005. KDN would also be the first entity to lay an underground fiber-optic cable linking Nairobi and the coastal town of Mombasa, thereby availing the first undersea cable internet to Kenya's interior with low latency and high bandwidth.

KDN is the grandfather of today's Liquid Telecom, East Africa's largest data and internet traffic carrier, handling more than half of the region's internet and business communication traffic. Kai's foundational leadership at KDN enabled this country to become an early adopter of the internet, to the great benefit of the socioeconomic well-being of all citizens.

Kai's contribution to the growth of the internet and ICT in general did not end with KDN; he went on to take leadership roles at Google and Nashua, and most recently was an advisor at Volt AI. May he rest in peace.

Serverless Computing is Changing IT Consumption Models

With the rapid adoption of cloud computing by organizations globally, IT workloads that were previously handled by infrastructure on customers' premises have moved to the cloud. The shift from on-premise to cloud was informed by the advantages cloud infrastructure holds over on-premise deployments, with lowered operating costs and hyper-scalability at the top of the list.

As time went on, it became clear that merely shifting existing IT workloads to the cloud was not optimal, and many applications and systems have since been redesigned to take advantage of cloud architectures. Traditionally, applications were designed in a monolithic way that never considered the added efficiencies of deconstructing the various functions and runtimes into separate processes which, if well managed, can lead to a lower TCO.

Runtime as a cost

On-premise, powering a server to run an application was a fixed cost. The server would run 24/7, consuming power while being cooled and managed by someone, even if the application it hosted was rarely used. The cost of running at high and low loads was nearly equal, and scalability was limited by the time it took to deploy new hardware, often measured in weeks or months.
With the advent of cloud, consumers moved their workloads to virtual machines (VMs), which in effect mimicked the physical servers in their premises, and the cost model was similar in that low- and high-traffic periods cost nearly the same per runtime. The advantage was that the time to expand capacity dropped from weeks to a matter of minutes, or seconds if well automated. The per-runtime cost, however, remained mostly unchanged. This made owning and running an application an expensive affair.

Function as a Service (FaaS)

What if cloud providers could charge customers not based on how many servers, and how much computing and storage capacity, they lease, but only when their application runs or is accessed? For example, instead of leasing a VM to host a website that is accessed, say, 10 times a month and paying for the whole lease, why not pay only for the 10 instances the website is accessed and not for the entire month the VM sat idle? This approach is known as serverless computing because you are no longer paying for a VM in which you host the website, but only for when the website is serving pages to users. As the website owner, your concern now shifts from managing the VM (operating system, web server software, databases, AAA, memory, cache, etc.) to managing only the website content. The preceding example is simplistic, but consider a large organization running several servers that host its business applications: removing the task of managing servers frees up time and money to focus on what really matters, i.e. the efficiency of the application. The customer is now running their applications in a serverless environment. Serverless does not mean the total absence of servers; it means the headache of managing the underlying infrastructure and platform is removed from the customer, and a new billing method is adopted that charges for when the application runs as opposed to the power and size of servers.
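To make this concrete, a serverless function is typically just a handler that the provider invokes on demand, billing only for the milliseconds it runs. The sketch below follows the AWS Lambda Python handler convention; the event fields and names used are illustrative assumptions, not taken from any particular deployment.

```python
import json


def handler(event, context):
    """Entry point the FaaS platform invokes once per request.

    The provider spins up (or reuses) an execution environment,
    calls this function, bills for the time it actually runs,
    and may tear the environment down afterwards. No VM or web
    server is managed by the developer.
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }


if __name__ == "__main__":
    # Simulate a single invocation locally; in production the
    # cloud platform constructs the event from an HTTP request.
    print(handler({"queryStringParameters": {"name": "Kenya"}}, None))
```

Everything below the handler (runtime, patching, scaling, load balancing) is the provider's problem, which is precisely the shift in responsibility described above.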

This approach is cost effective in the long run, as organizations can significantly lower their computing costs by paying only for the computing power they actually use (when the application runs) rather than for the CPU, memory, and storage they continuously lease in a VM. This billing approach is becoming very attractive because of the lowered IT costs it promises, provided the system is well designed and optimized for the cloud environment it is hosted in. The rising demand for cloud DevOps engineers and cloud architects is fueled by many organizations' desire to re-architect their applications for the cloud. As many CIOs are realizing the hard way, legacy systems simply ported to the cloud do not derive much benefit from being there, and can sometimes even cost more to run if not well architected. The redesign of systems is a big part of moving to the cloud, and serverless architectures have provided a new and efficient way to run applications there.
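The cost difference can be shown with some back-of-the-envelope arithmetic. The prices below are purely illustrative assumptions (not any provider's actual rates), but the shape of the comparison holds: an always-on VM bills for every hour, while FaaS bills per request and per unit of runtime.

```python
# Illustrative comparison of an always-on VM versus pay-per-use FaaS.
# All rates are made-up assumptions for the sake of the arithmetic.

VM_HOURLY_RATE = 0.05            # $/hour for a small VM, billed 24/7
PRICE_PER_INVOCATION = 0.0000002  # $ charged per request
PRICE_PER_GB_SECOND = 0.0000167   # $ charged per GB-second of runtime


def monthly_vm_cost(hours: float = 730) -> float:
    """An idle VM costs the same as a busy one: every hour is billed."""
    return VM_HOURLY_RATE * hours


def monthly_faas_cost(invocations: int,
                      avg_seconds: float = 0.2,
                      memory_gb: float = 0.128) -> float:
    """Cost scales with actual use: requests times runtime times memory."""
    compute = invocations * avg_seconds * memory_gb * PRICE_PER_GB_SECOND
    requests = invocations * PRICE_PER_INVOCATION
    return compute + requests


# A low-traffic site accessed 10,000 times a month costs a few cents
# on FaaS versus tens of dollars on a VM under these assumed rates:
print(f"VM:   ${monthly_vm_cost():.2f}")
print(f"FaaS: ${monthly_faas_cost(10_000):.4f}")
```

The gap narrows, and can even invert, at sustained high load, which is why the re-architecting work mentioned above matters: serverless pays off for bursty or low-duty-cycle workloads, not automatically for everything.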

Some common serverless computing services include AWS Lambda, Microsoft Azure Functions, and Google's Cloud Run, among others. These services let organizations focus their talent and resources on designing and writing better applications, using the FaaS capability to run workloads cost effectively and scale in real time with high availability globally. The pay-per-use approach has shifted IT costs from being predominantly fixed to highly variable, and this, as any bean counter will tell you, is accounting nirvana.

My Thoughts on The US-China Trade Wars’ Effect on US Tech Dominance

The arrest of Huawei's CFO, Meng Wanzhou, in Vancouver in December 2018 at the request of the US brought to the fore the ongoing trade war between the US and China, instigated mostly by US president Donald Trump. The arrest came after the U.S. Department of Justice accused Meng (who is also the daughter of Huawei CEO Ren Zhengfei) of allowing SkyCom, a Huawei subsidiary, to do business in Iran, violating U.S. sanctions against the country and misleading American financial institutions in the process. These charges attract a jail term of over 30 years in the US.
Hot on the heels of her arrest were concerns that the perceived close ties between Huawei and the Chinese communist government would allow the Chinese state to spy on any country that runs Huawei telecom equipment, especially the upcoming 5G networks. The US alleges that Huawei has 'backdoors' in all its hardware that allow unhindered entry into any network to conduct espionage or even shut down the equipment. The US has therefore banned all US telecom operators from using Huawei equipment in their networks, especially in 5G deployments.

Why 5G and not 4G?
Unlike 4G, which was backward compatible with older 3G and 2G technologies, 5G is the first generation of mobile technology that is not backward compatible. This means that rolling out 5G requires a totally new network, whereas an upgrade from 3G to 4G was mostly done by upgrading the software of several network components and adding a few more. 5G therefore means building an entirely new network from scratch for mobile operators.
Another factor is that 5G is designed to power the next generation of connected devices and enhance the adoption of IoT worldwide. What this means is that in the near future, vehicles, furniture, and factory machinery will all be connected; surgeries, remote robot operations, and many of today's manual activities will be performed by machines connected via the 5G network. In a global economy increasingly dependent on connectivity for its economic and technical efficiencies, the desire to have full control of the 5G network is obvious. Take a scenario where the entire US transport, agriculture, and manufacturing sectors run on 5G network equipment to which the Chinese government has direct access through backdoors created by Huawei for the Chinese state. This is what Trump fears. He is somewhat justified in harbouring these fears, but the question is whether they are valid. I will not delve into the politics of it but will instead look at the possible effects of America's actions towards Huawei and the industry dynamics.

Trade Wars and Cybersecurity
With the escalating trade war between the US and China, Donald Trump last week signed an executive order declaring a national emergency relating to securing the US cybersecurity supply chain. Under the order's provisions, the U.S. government will be able to ban any technologies that could be deemed a national security threat. The order, "Securing the Information and Communications Technology and Supply Chain," opened the door for the US government to classify companies like Huawei as national security threats, ban their technology from the US, and forbid US companies from trading with Huawei unless they have special permission/license from the government.

This order has resulted in companies such as Intel, Qualcomm, and Alphabet (Google's parent company) stopping the supply of components that use American technology to Chinese firms. One of the most notable announcements was Alphabet stopping Huawei from accessing the licensed Android operating system it uses for its smartphones. With Huawei being the largest telecom equipment manufacturer and second-largest smartphone manufacturer in the world (Samsung being the leader and Apple number 3), this ban will have far-reaching effects on Huawei's business plans. However, if the rumors are true, Huawei has been anticipating this day and already has an in-house developed mobile OS that is believed will take over from Android. It has also been stockpiling chips and components that can last it three more months as it seeks alternatives.

In their 2011 book "That Used to be Us: How America Fell Behind in the World It Invented and How We Can Come Back", Thomas Friedman and Michael Mandelbaum list the major problems the US faces today and possible solutions. These problems are: globalization, the revolution in information technology, the nation's chronic deficits, and its pattern of energy consumption. They go on to state that to reclaim its position, the US must approach this revival with 'war-like' conviction, something Trump seems to be doing to the letter.

US Tech Dominance is Waning
Several events have shown that the US is losing its dominance in the technology space and is doing all it can to protect that privilege.
Early in 2018, the US banned ZTE, a Chinese mobile equipment manufacturer, from sourcing electronic parts from US suppliers; the reason for the ban, again, was the flouting of the Iran sanctions by supplying Iran with telecom equipment. The ban resulted in ZTE seeking alternative suppliers and also led to more investment in China's chip manufacturing sector. A month before, Trump had vetoed the proposed takeover of the US-based microchip manufacturer Qualcomm by Broadcom (a US-founded firm domiciled in Singapore) due to the ownership structure. For context, Qualcomm holds several key patents on 3G and 4G, which are among its biggest revenue streams: all 3G and 4G phones purchased globally pay a royalty fee to Qualcomm for using its patents.
With the upcoming massive uptake of 5G and the rollout of a connected global economy, the US feared that Broadcom's takeover of Qualcomm would make it lose control of the 5G technology space. With Huawei having developed its own 5G microchip, and the global market for 5G chipsets in smartphones expected to grow at a compound annual rate of 75 per cent between 2019 and 2024, the US is fearful that cheaper and better Huawei 5G chips will erode Qualcomm's revenues and dominance.

The recent settlement between Apple and Qualcomm inadvertently positioned Huawei as a top contender in the 5G chip market. When Apple agreed to pay Qualcomm the disputed royalties for its use of Qualcomm patents, it also agreed to use Qualcomm chips in all its subsequent 5G phones. Since 2016, as the court battle raged, Apple had been using Intel chips, giving Intel an opportunity to finally ride the mobile wave it nearly missed some years ago. However, the shock announcement by Intel that it was pulling out of 5G chip research and development, hours after Apple and Qualcomm settled their dispute, placed Huawei as a possible major 5G chip supplier. All this time, Huawei has been developing 5G chips for its own internal consumption (used in Huawei equipment only), but with Intel pulling out, Qualcomm's obsession with high royalty fees, and Huawei offering the same technology (if not better) at a much lower cost and devoid of royalties, Huawei could easily start supplying its 5G chips to third parties such as Nokia, Ericsson, Samsung, and others. This prospect of a global economy running on Chinese-supplied 5G chips is what scares Trump. All the 'national security' talk is but an excuse to defend the seemingly drastic measures the US is taking against China as China heads full steam towards tech dominance. On the internet front, the US is also feeling the heat of the world's decreasing dependency on US internet infrastructure to power the global internet. China has effectively managed to create a separate internet for its citizens, and its size cannot be ignored.
For example, WeChat has 1.04 billion active users in China (compared to WhatsApp's 1.5 billion globally), and Weibo, China's version of Twitter, has over 462 million monthly active users (Twitter has 260 million). E-commerce giant Alibaba has been growing at an annual rate of 58%, and despite its revenues being far behind Amazon's ($40B vs $178B), its net profit is comparable to Amazon's. If this growth is sustained (and all signs show it will be), Alibaba will surpass Amazon in sales within five years. China also has the odds stacked in its favour because:

  • China has over 830 million internet users (more than twice the entire US population).
  • China plus its Asian neighbors account for 49% of global internet users, followed by Europe with 16.8%.
  • China's 830 million internet users account for 20% of all users globally, followed by India's 500 million.
  • 98% of Chinese internet users access the web via a mobile device, compared to 73% in the USA.

The above few examples show that China has been successful in creating its own version of the web that is largely independent of western resources.

Russia’s National Internet Plans

With the announcement by Russia that a law is now in place to create what it calls a 'national internet', Russia aims to be in full control of how Russian internet traffic is routed. Among other measures, the law dictates the creation of infrastructure to ensure the smooth operation of the Russian internet should Russian telecom operators be unable to connect to foreign root servers.

This seemingly drastic move was necessitated by the aggressive nature of the U.S. National Cybersecurity Strategy adopted in 2018 under Trump. As background to why Putin signed the bill into law, it is worth noting that the US currently controls the top-level root servers that help all internet users worldwide convert human-friendly domain names into computer-friendly addresses. This effectively means that if the US decided to block certain countries from accessing the root servers, it would have effectively blocked them from accessing much of the internet and the world wide web. The US also has legal rights over all dot-com, dot-org, and dot-net domain names (including this blog) and can take down any such website it wishes, because they are all legally US websites. "Under these conditions, protective measures are necessary for ensuring the long term and stable functioning of the internet in Russia," said Vladimir Putin when signing the bill.

What Next?

These recent events indicate one thing: the world is detaching itself from over-dependence on US technology and infrastructure, and this has sent the US into panic mode. With the future becoming highly connected, technology will be at the forefront of human advancement, and the US fears it might not be the world leader it has been in the past. Like oil today, whoever controls technology in the future will control the world. For Trump to clothe this fear in 'national security' is callous to say the least; this is not about national security but about commercial dominance. The outcomes of such drastic moves cut both ways and will end up harming the US more than helping it. What I am sure of is that China will emerge victorious, if today's statement by the Huawei CEO is anything to go by. In the last eight hours, the US has offered a three-month 'stay of execution' on the Huawei ban because of the number of US citizens who own Huawei phones and operators who are heavily dependent on Huawei gear, to enable them to transition to other phones and network equipment.

Locally, the US-China trade war might have major ramifications, especially if the ban extends to other Chinese telecom gear and handset manufacturers. Tecno, Infinix, and iTel command a 34% market share of all smartphones. All three brands are made by one company, which extends the risk further should that company also face the ban. It is also worth noting that large sections of Kenya's mobile networks run on Huawei gear (including the base stations serving State House). The short-term effect of this ban will be increased operating costs for local mobile operators as they seek alternatives from European suppliers whose equipment and deployment costs are higher. The long-term effect is that these Chinese companies will adapt to not depending on US tech and come up with their own technology and processes to manufacture even cheaper telecom equipment; couple this with Xi Jinping's ambitious Belt and Road Initiative and the US will have forever lost its dominant position on the world stage.

Every Organization Should Have an ICT Policy

An acquaintance of mine told me a story of how he continued to receive a salary from an employer four months after he left the organization, complete with statutory deductions including HELB loan repayments. We have also heard of incidents where a fired employee ends up 'locking' ICT systems by changing passwords or refusing to share them, sometimes leading to loss of business or information. Worse, some organizations have slowly bled to death due to frequent flouting of ICT policies, leading to loss of money and customers. In fact, most fraud incidents today involve the improper use of ICT systems, enabled by the lack of a policy or its poor implementation. Think IFMIS/NYS.

On the surface, these staff-exit examples might seem like a failure by the HR or IT department to ensure the proper exit of an employee or the proper use of ICT resources. But at a deeper level, they point to a more critical and dangerous state of affairs: the lack of, or the failure to adhere to, ICT policy best practices.

What is an ICT Policy Document?

The Oxford English Dictionary defines policy as "A course of action, adopted and pursued by a government, party, ruler, statesman, etc.; any course of action adopted as advantageous or expedient." Adding ICT to it, the definition becomes: an ICT policy is a roadmap with specific actions and best practices towards the adoption, use, and maintenance of ICT resources, and the extraction of value from them at reasonable cost. Every action taken in the organization that uses or impacts ICTs must be guided by this policy.

As you can see above, without an ICT policy there is no roadmap for how and why an organization should adopt ICTs. At a bare minimum, an organization's ICT policy document should include the following:

  1. Scope and objectives of the policy document: This defines why the document exists, its target audience, and what it covers.
  2. Technology adoption roadmap: This gives a clear definition of where the organization is and where it wants to go in the short and long term as far as ICT is concerned. For example, is the organization moving from an in-house data center to the cloud? It must be in the ICT policy. Is the organization trying to change the ICT department from a cost center into a revenue generator? It must be in the ICT policy.
  3. ICT best practices in relation to the organization's objectives: These define the dos and don'ts for the organization as a whole (and not the individual ICT user in that organization). For example, is the organization outsourcing its sensitive data analysis to a third party? This must be specified in the ICT policy. Is the organization allowing personal devices such as phones (BYOD) to connect to the office WiFi? This must be specified in the ICT policy.
  4. Precautions and disciplinary measures: This section details the rights and obligations of ICT users, with punitive or damage-preventive measures for a member of staff's failure to follow the laid-down ICT policies. The severity of the punishment should be commensurate with the risk or exposure the company suffers as a result of the failure to follow the laid-down processes.

Checks and Balances

A policy document is just that: a document. It has to be operationalized through the implementation of systems, processes, and a mindset change to ensure its success. Many organizations have well-written but poorly implemented ICT policies. Poor implementation is often the result of a failure to interpret the policy into well-understood rules and regulations. A policy begets regulations, which in turn beget directives. A directive like "No member of staff shall copy onto a portable disk any document, software, or multimedia that belongs to the organization…" should stem from a regulation that bans the use of portable drives in the work environment. That regulation, in turn, should stem from a policy that states "The organization shall treat all organizational information with utmost care, protecting it from unauthorized access or modification both in storage and in transit".
But what happens if all the above exist and members of staff still carry around USB drives containing the organization's data? This is where checks and balances come in. The CIO can go a step further and disable all USB ports from accepting portable drives, and have the system send an alert to the relevant IT team should anyone attempt to connect a portable drive to a company computing resource.
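As a sketch of what such a technical control might look like on a Linux host, the snippet below checks the kernel's view of attached block devices and flags any marked removable. It is an illustrative sketch, not a complete endpoint-control product; a real deployment would hook the alert into the IT team's monitoring channel and pair it with an actual port-disable policy.

```python
from pathlib import Path


def removable_block_devices(sys_block: Path = Path("/sys/block")) -> list:
    """Return names of block devices the kernel flags as removable
    (typically USB thumb drives). Returns an empty list when
    /sys/block is absent, e.g. on non-Linux systems."""
    found = []
    if not sys_block.is_dir():
        return found
    for dev in sorted(sys_block.iterdir()):
        flag = dev / "removable"
        try:
            # The kernel exposes "1" for removable media, "0" otherwise.
            if flag.read_text().strip() == "1":
                found.append(dev.name)
        except OSError:
            continue  # device vanished or attribute missing; skip it
    return found


if __name__ == "__main__":
    drives = removable_block_devices()
    if drives:
        # In a real deployment this would go to an alerting channel,
        # not stdout.
        print(f"ALERT: removable drive(s) attached: {', '.join(drives)}")
    else:
        print("No removable drives detected.")
```

Run periodically (e.g. from a scheduler), this turns the written directive into an enforced, auditable check rather than a rule that depends on goodwill.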

Future Proofing ICT Policies

A good ICT policy document should be future proof and technology and vendor agnostic; that is, it should desist from naming vendors or particular technologies. Those details belong in the subsequent documents that emanate from the ICT policy document. These include, but are not limited to:

  1. The ICT resources user guide: This is what many confuse for an ICT policy; it is more regulation than policy. It details how the organization's ICT resources are used, with best practices. This is the document that contains regulations such as social media use in the office, BYOD rules, email etiquette, etc. It also specifies the dos and don'ts when using the organization's ICT resources.
  2. The technology adoption plan: This is the short- and long-term plan for how various new technologies will be adopted and integrated into existing systems. It gives solid reasons and timelines, showing the entire ICT use lifecycle. Technology adoption should not be for its own sake or because there is a newer, shinier technology in the market; it should take into consideration the competitive advantage the organization will earn from the adoption, and capex and opex availability.

In Summary

With ICT becoming an integral part of doing business today and of the digital transformation that enables it, it is critical that CIOs are in control of the direction and pace of ICT adoption in the organization. This control cannot happen without a policy in place. A CIO can adopt the best ICT systems with good intentions, but without an ICT policy, these systems can become a conduit for fraud, loss of information assets, and eventually loss of revenue to the organization. If your organization does not have an ICT policy in place, it's time to get one.