AI, inequity, and our choice and agency
“Predicting the future is really hard, especially ahead of time,” warns Rodney Brooks, former director of MIT’s Computer Science and Artificial Intelligence Laboratory. Public conversation around artificial intelligence (AI) is growing, shaping perceptions and impacting policy. How should we anticipate our interactions with technology? What can we influence?
First, recognise that AI and automation will unfold at different speeds within the same economic system. AI has been called the “electricity for the Fourth Industrial Revolution”. AI, big data, automation and quantum computing could fundamentally alter economic progress. Governments are pushing national strategies. China is aiming for AI superpower status by 2030. The EU, France and Japan are pursuing increased R&D investment while developing ethical and legal frameworks. A US taskforce called for a detailed R&D plan and an AI R&D workforce. A NITI Aayog discussion paper proposes an inclusive vision of “AI for all” for India.
But the shift will not be discrete. Old and new industries, technologies and methods of production will co-exist. This multi-velocity automation will complicate industrial strategy. There is already a push to acquire emerging technologies and corner larger market shares. China’s 2025 strategy identifies 10 sectors (including robotics and semiconductors) in which its homegrown firms want to dominate the domestic market while competing globally. Predictions are difficult. Among the top 10 applications in which China expects robots to hold the most promise are energy and mining, medicine and defence. The list also includes cleaning, filmmaking and companionship!
There are limits to what top-down industrial policy can achieve. Technology is an enabler, not an end in itself. Responsible production and consumption will depend on how AI and automation improve resource efficiency, reduce food losses and increase recycling and reuse of materials. Outcomes are not given. Policy can direct which way innovation leads.
Second, resist the temptation to predict the future of jobs. Commentary about AI and automation is replete with predictions about job losses, including in high-skilled work. Meanwhile, the US already has about 78,000 AI researchers; China about half that number. This is an important indicator of technological development, but not of job losses and gains in specific sectors. A 2017 report by the Confederation of Indian Industry (disclaimer: I was a member of the steering group) identified several drivers that would shape the jobs ecosystem. Among them were lifelong learning systems, shapes and sizes of enterprises, social security systems, and whether technology and innovation would enable inclusive growth.
For productive employment and decent work, we need to look for opportunities for new skills and new sectors. Water, sanitation, waste management and (clean) energy would be important growth areas. Imagine new jobs for those installing rooftop electricity systems, or in decentralised water and sanitation infrastructure, or for those trained in optimising, recycling and reusing critical minerals and materials.
But what about workers’ rights? In an economy enabled by new technologies, one’s personal economic value is likely to be inversely proportional to the standardisation of tasks. The more unique the job, the greater would be one’s value in the workforce. Workers are likely to develop multiple skills spread across multiple jobs. In this evolved “gig economy”, if everyone became their own boss, who would one ask for a raise or for health coverage?
Third, understand the coming pressures on democratic engagement. In 1949, B R Ambedkar warned that inequality in social and economic life endangered political democracy. If AI, automation and other emerging technologies widened economic inequality, how would that affect nominal political equality in democracies?
Take taxation. If robots were more productive than humans, should they be taxed? Many developing countries rely more on indirect taxation due to shallow direct tax bases. If robots increased economic production, and in turn indirect tax revenues, that revenue could potentially be redistributed to those adversely affected by growing automation. But if robots also made it cheaper to deliver essential services (say, public healthcare or clean water), then governments ought to embrace new technologies.
In such a political environment, who would have a say? Robots? Their owners? Or those affected? If AI replicated and magnified social and cultural biases and widened inequalities of opportunity, it would become harder to use democratic processes to mediate differences.
Fourth, reimagine how technology could empower sustainable development. Potential AI applications for energy, water, cities or climate change are significant. Machine learning algorithms can improve climate modelling by assigning weights to models according to their accuracy against observations. AI can support more flexible and autonomous electricity grids to integrate renewables. Wind turbines can increase efficiency when each turbine “learns” about wind speed and direction from other turbines. Sensors and control systems can improve irrigation efficiency in water-stressed regions, or guide farmers in sowing practices for increased yields. AI has helped to significantly improve accuracy in identifying cyclones.
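The weighting idea mentioned above can be sketched in a few lines. This is a hypothetical, simplified illustration (the model names, hindcast values and projections below are invented for the example): each model is scored by its error against historical observations, weighted by inverse squared error, and the weights are used to combine the models’ future projections.

```python
import math

# Hypothetical data: three climate models' hindcasts of an observed
# variable, plus each model's projection for a future period.
observations = [14.1, 14.3, 14.5, 14.8]
model_hindcasts = {
    "model_a": [14.0, 14.2, 14.6, 14.9],
    "model_b": [13.5, 13.8, 14.0, 14.2],
    "model_c": [14.1, 14.4, 14.4, 14.7],
}
model_projections = {"model_a": 15.6, "model_b": 15.0, "model_c": 15.4}

def rmse(pred, obs):
    """Root-mean-square error of a model's hindcast against observations."""
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(pred, obs)) / len(obs))

# Weight each model by inverse squared error, so models that track
# observations more closely count for more, then normalise to sum to 1.
raw = {m: 1.0 / rmse(h, observations) ** 2 for m, h in model_hindcasts.items()}
total = sum(raw.values())
weights = {m: w / total for m, w in raw.items()}

# Skill-weighted combined projection.
weighted_projection = sum(weights[m] * model_projections[m] for m in weights)
```

Here model_b, whose hindcast drifts furthest from the observations, ends up with the smallest weight, so the combined projection leans towards the better-performing models.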
The conversation about AI has to shift from technology to society. We are far from artificial general intelligence. In 2013-14, a supercomputer with 82,944 processors took 40 minutes to compute what 1 per cent of the human brain calculates in a second. There is no pre-AI and post-AI world. Technologies will develop and get adopted at varying speeds, with attendant inequities and opportunities. How we channel their potential will depend on whether our political systems make conscious efforts to give primacy to human agency. The boundary between technology shaping society and society shaping technology is fuzzy, and the causality cuts both ways.
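The back-of-the-envelope arithmetic behind that supercomputer comparison is worth making explicit: 40 minutes of machine time for one second of activity in 1 per cent of the brain implies the brain is several orders of magnitude faster at this task.

```python
# Arithmetic implied by the comparison in the text: 40 minutes of
# supercomputer time simulated 1 second of activity in 1 per cent
# of the brain's network.
machine_seconds = 40 * 60      # 40 minutes, in seconds
brain_fraction = 0.01          # 1 per cent of the brain
brain_seconds = 1              # one second of neural activity

# How many times faster the full brain is than the machine here:
# the machine needs 2,400 s for work the whole brain does in 0.01 s.
ratio = machine_seconds / (brain_fraction * brain_seconds)
print(int(ratio))  # → 240000
```

In other words, on this particular benchmark the brain works out to roughly 240,000 times faster than an 82,944-processor machine.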