In his 2011 book ‘The Third Industrial Revolution: How Lateral Power is Transforming Energy, the Economy, and the World’, Jeremy Rifkin argues that fundamental economic changes in the past have occurred when new transport and communication technologies converged with new energy systems. He gives the example of the steam engine (coal energy) and the railway which, combined with the printing press, led to the first industrial revolution in Europe. He also describes how the internal combustion engine (fossil fuels) and electricity, combined with the modern highway and the telephone/telegraph, led to the second industrial revolution, which according to him peaked in 1988. Since then, the marginal efficiency of the world economy has been flat despite all the technological advancement we have witnessed. His question then is: why is this the case? Why are all these new technologies not leading to a drastic improvement in the marginal efficiency of economies and countries? His answer is that we have been applying information-age technologies and processes to second-industrial-age economic processes. The two are incompatible. Rifkin posits that these new information and communication technologies need to be applied to the emerging renewable energy systems and the emerging shared transportation systems. This, he says, is what will lead to the third industrial revolution. He also asserts that unlike previous revolutions, where efficiencies were derived from centralization, the third industrial revolution will be about decentralization and sharing. Think of the ride-sharing apps of today, and the decentralized ledgers that power cryptocurrencies and blockchains.
The emerging decentralized, renewable energy systems where anyone can generate power via solar panels or wind turbines and feed it back into the grid, coupled with the emerging sharing economy (Uber, Airbnb, Lyft etc.), driverless cars, and the communication technologies of the Internet of Things, are what Rifkin believes will lead to the third industrial revolution in the information age. If the technology developments of the last 10 years are anything to go by, the world is well on its way there. Technologies have emerged that have accelerated three critical areas: Artificial Intelligence (AI), the Internet of Things and big data analytics. These three, when combined, have shown that huge efficiencies are realized when they are applied to economic activities.
Artificial Intelligence (AI) is simply the simulation of human intelligence processes by machines. These processes include learning (the acquisition of information and of rules for using that information), reasoning (using rules to reach approximate or definite conclusions) and self-correction. Particular applications of AI include expert systems, speech recognition and machine vision.
For AI to succeed, machine learning is used to train the AI system by feeding it as much data as possible about what it is being trained on. For example, to train an AI system to identify a domestic cat based on all the attributes that humans use to define a cat (looks, fur, claws, size, sounds, etc.), a lot of information about what a cat is has to be fed into the system. This information could include millions of photos of various cat breeds, taken from different angles in varying light and environments, sound clips of cat sounds, videos of cats, and other non-observable attributes such as the facts that a cat is a pet, an animal and so on. For the AI to identify a photo or video as containing a cat, it must therefore have been trained with as much information about cats as we can get our hands on. As you would imagine, this can take a lot of time and resources. So to hasten the process, the AI system is trained to also train itself. Rather than just analyzing the information it is given, the system can, with time, also search for more information to teach itself what a cat is and is not. For example, if most of the photos of cats used to train it show the cats indoors, the AI system does not need to be told that most cats live indoors; it can learn that by itself through inference. This is what is called machine learning.
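To make "learning from examples" concrete, here is a deliberately simplified sketch. It is not how a real vision system works; the features and numbers are invented for illustration. A tiny nearest-centroid classifier "learns" what a cat is by averaging labelled examples, then labels new samples by which average they sit closest to:

```python
# Toy illustration (not a real vision system): a nearest-centroid
# classifier "learns" from labelled examples. Each sample is a pair
# of made-up numeric attributes: [weight_kg, ear_pointiness].

def centroid(samples):
    """Average each feature across the training samples."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(len(samples[0]))]

def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

# Training data: the more varied the examples we feed in, the better
# the learned "average cat" and "average dog" represent reality.
cats = [[4.0, 0.9], [3.5, 0.8], [4.5, 0.95]]
dogs = [[20.0, 0.3], [25.0, 0.2], [18.0, 0.4]]

cat_centre, dog_centre = centroid(cats), centroid(dogs)

def classify(sample):
    """Label a new sample by its nearest learned centroid."""
    if distance(sample, cat_centre) < distance(sample, dog_centre):
        return "cat"
    return "dog"

print(classify([4.2, 0.85]))   # a cat-like sample -> "cat"
print(classify([22.0, 0.25]))  # a dog-like sample -> "dog"
```

The point of the sketch is that everything the model "knows" is distilled from its training examples; nothing else informs its decisions.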
With machine learning comes the danger of the AI system reinforcing biases introduced early in its training. If the early data used to train the AI was flawed or incomplete, the initial biases or errors risk being reinforced by subsequent machine learning. A good example closer to home is again a pet. In Africa (or specifically where I come from), most dogs are not pets but homestead security agents. Most dogs therefore sleep outside. This is not the case in most developed countries, where most dogs are pets and live indoors. If we used only data from developed countries to train AI on what a dog is or is not, it would therefore rely heavily on biased or incomplete data to define a dog. There is a possibility that your village mongrel (Simba or Bosco) might be classified as a fox by an AI system that wasn't properly trained.
This is the same danger that Africa currently faces. Most of the data used to train AI comes from developed countries, partly because there is relatively little recorded data available on Africa and Africans that can be used to train these systems. The other reason is outright bias by AI developers and trainers, who do not seem to believe that this data is important for training AI systems.
In a recent paper, Joy Buolamwini, a researcher at the M.I.T. Media Lab, showed how some of the biases of the real world can seep into AI systems such as the ones behind facial recognition. In the paper, Buolamwini shows that when the person in the photo being identified is a white man, the facial recognition software is right 99 percent of the time. But the darker the skin, the more errors arise, up to nearly 35 percent for images of darker-skinned women. The skewed accuracy appears to be due to the under-representation of darker skin tones and of women in the training data used to create the face-analysis algorithms. Other examples exist. Amazon recently abandoned its AI-based recruitment system, which consistently favoured men over women because of flaws in the way it was trained. A criminal-justice AI-based sentencing system was also abandoned when it gave more lenient sentences to whites than to blacks for the same crime.
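The mechanism behind such skewed accuracy can be shown with a toy simulation. This is my own invented illustration, not Buolamwini's method: a single numeric feature stands in for whatever a face-analysis model measures, the two "groups" have different feature distributions, and the model is fitted only on the over-represented group:

```python
# Toy illustration of how skewed training data skews accuracy.
# A sample is (feature_value, label); label 1 = positive class.

def fit_threshold(samples):
    """Learn a decision threshold as the midpoint of the class means."""
    pos = [x for x, label in samples if label == 1]
    neg = [x for x, label in samples if label == 0]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def accuracy(samples, threshold):
    """Fraction of samples the threshold rule classifies correctly."""
    correct = sum((x > threshold) == (label == 1) for x, label in samples)
    return correct / len(samples)

# Group A dominates the training data; group B's classes sit lower
# on the feature scale (all values are invented for illustration).
group_a = [(2.2, 1), (1.8, 1), (2.0, 1), (0.1, 0), (-0.2, 0), (0.0, 0)]
group_b = [(0.6, 1), (0.4, 1), (-1.4, 0), (-1.2, 0)]

threshold = fit_threshold(group_a)  # the model never "sees" group B

print(f"group A accuracy: {accuracy(group_a, threshold):.2f}")  # 1.00
print(f"group B accuracy: {accuracy(group_b, threshold):.2f}")  # 0.50
```

The model is perfectly accurate for the group it was trained on and no better than a coin flip for the group it never saw, even though the rule itself contains no explicit bias. The bias lives entirely in the training data.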
There are many other examples of AI systems that are biased because of how, and by whom, they were programmed in their initial phases. Notably, the systems above were abandoned, not corrected. This is because, with machine learning, all subsequent learning is founded on the initial training data that started the learning. If this data is biased, the entire AI system will exhibit that bias in one form or another, and to varying degrees.
What Africa Needs To Do
Africa risks being left behind by the AI revolution. Many AI systems are being designed to solve problems in the developed world, which are mostly not the same problems that affect us here. The designers of these systems do not worry much about their inherent bias because it does not greatly affect the desired outcomes for the particular applications they designed them for.
It would be simplistic to think that we simply need to generate more Africa-centric data and make it available online. This data needs to be generated by Africans themselves and shared from their own point of view. Most ‘African’ data was not generated by Africans, and therein lies the first problem: African data is inherently biased against Africans themselves (for example, online content from western media coverage of Africa depicts a continent always at war and ravaged by disease). Africans need to generate their own data, in their own languages, and embed our culture and practices in this data while at it.
Another critical step is to ensure that Africa has the right talent to aid in AI training within Africa. The training of data scientists and AI programmers to work on African data is the second step. Yes, we might generate all this data, but it will still not be used by AI trainers in developed countries if this data does not significantly aid the learning and decision making of the AI systems. We must be deliberate in using this data for training.
We must also put our house in order as far as good governance, citizenship, and law and order are concerned. As an example, it is estimated that driverless cars will be the norm rather than the exception within the next 10 years. Automakers are training AI systems to enable fully automated driverless cars using the data they have on their road networks. A car is now able to read road signs and comply, understand the various road markings and identify obstacles. This car, driving down a road in the US, is able to achieve the outcomes its designers intended. The same car, brought to Kenya, where road signs have been vandalized or defaced and road markings are often missing, will not be able to navigate safely. Add matatu driver madness into the mix and this car won't last even a day on Kenyan roads. This lack of working systems is a major risk to Africa advancing at the same pace as the rest of the world. We are currently facing the same problem with e-commerce: the lack of a proper physical addressing system has greatly hindered e-commerce development in Africa. The story would have been different had we ensured that the continent has good physical addressing systems.
The final thing is that, as Africans, we need to be vigilant in ensuring that AI systems are not used by governments to suppress civil and human rights. China is currently testing an AI system that monitors citizens' behaviour, spending and communication to calculate a social credit score for every citizen. This score can determine many things, including whether you get a passport to travel outside China, whether you are a worthy borrower and even what types of jobs you can be employed in. Such a system is fraught with the possibility of abuse if not well implemented. In China today, if you are near a stranger who has defaulted on an outstanding loan, your phone will alert you to remind that person to pay up. This, according to many, is an abuse of the capabilities of AI. Such systems can (and will) be deployed by despotic regimes to control citizens' behaviour and modify it as they please. If you know your creditworthiness will be determined by what you post online, it automatically limits what you can say and to whom. This is a threat that we must tackle by ensuring that AI systems are not turned into tools of oppression.
These dangers are so real that I will not stop short of asking for a UN body to regulate new technologies for the good of mankind and prevent them from being used for harm. The UN has been very successful on the telecommunications front with the International Telecommunication Union (ITU), the UN body responsible for many of the positive uses and benefits of telecommunications today. The same framework, I think, should be created for emerging technologies such as AI.