Daily News

Workplace digital colonisation

- BLESSING MBALAKA Junior Researcher, Digital Africa Unit, Institute for Pan-African Thought and Conversation at the University of Johannesburg

THE Expanded Public Works Programme (EPWP) was implemented to increase employment opportunities for the many unemployed South Africans living below the poverty line. However, the prevalence of nepotism has undermined the fairness of the process.

The EPWP lottery recruitment drive was developed by the City of Tshwane Metropolitan Municipality to resolve this dilemma of nepotism. The system was designed to randomly select aspiring job seekers from the municipality's database.
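In outline, a lottery draw of this kind could be sketched as follows. This is a minimal illustration only: the applicant records, the qualification flag and the `run_lottery` function are invented here and are not taken from the City of Tshwane's actual system.

```python
import random

# Hypothetical applicant database; names and the qualification
# rule are invented for illustration.
applicants = [
    {"name": "Applicant A", "qualifies": True},
    {"name": "Applicant B", "qualifies": False},
    {"name": "Applicant C", "qualifies": True},
    {"name": "Applicant D", "qualifies": True},
]

def run_lottery(database, vacancies, seed=None):
    """Randomly select qualifying applicants, with no human discretion."""
    pool = [a for a in database if a["qualifies"]]
    rng = random.Random(seed)  # a fixed seed makes the draw reproducible for audit
    return rng.sample(pool, min(vacancies, len(pool)))

selected = run_lottery(applicants, vacancies=2, seed=42)
```

Note that nothing in such a draw is "artificial intelligence" in any meaningful sense: it is simply uniform random sampling from a pre-screened pool, which is precisely the point the article goes on to make.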

The programme looked to reduce human interference by automating the recruitment procedure and improving access to opportunities.

I first came across the initiative when I saw an SABC video titled “Tshwane Municipality uses artificial intelligence to fight nepotism”.

The video did not engage with the AI aspect, and I have struggled to find a sufficient explanation of this AI component. Perhaps the video title was misleading, but it nonetheless inspired this opinion piece.

AI, it has been argued by scholars such as Gebru and Buolamwini, has inherent biases.

These biases are said to be the culmination of biased training data, which is usually curated by groups that are not sufficiently diverse. The world has inherent biases, and the mere act of choosing is a bias.

This could be the conscious or subconscious choice between preferences, or the villainous act of being prejudiced. Or it could be unintentional bias due to a lack of exposure to global diversities.

AI is not exempt from villainous racial histories, the present pseudo-conjecture of the rainbow nation, or the entrenched inequalities which affect numerous facets of our lives, including AI. The data which informs an AI algorithm is influenced by these worldly biases.

Some of these biases could be the culmination of the insufficient evidence and exposure which inform the AI.

This lack of exposure to diverse data sets may mean that the lived experiences of the marginalised are not included in the training data. That lack of diversity can lead to the dissemination of AI which fails to cater sufficiently for diverse applications, or which risks culturally offending people because of its limited training.

Digital colonialism is another argument which can be used to explain why AI can be biased. AI researchers need to be cognisant of the systemic historical divides which shape our reality.

This can be illustrated by the question of ownership. Ownership of the data and the algorithms which are imported could be problematic. One way this is so is through the implementation of Eurocentric criteria in the data sets.

In the hypothetical sense, if the EPWP hiring lottery system had been modelled after Amazon's controversial hiring algorithm, which systematically rejected female applicants for technical roles, one can readily see how that would become problematic for society.

What I am arguing is that, unless sufficiently assessed and regulated, AI could proliferate harmful biases.

Employment, in the general sense, should be based on merit, but if patriarchal selection informs the AI, this could yield highly problematic results for a society which seeks to move on from prejudice. AI should thus not reinforce the very problems which society wishes to circumvent.

Ideally, one could think that reducing human involvement may mitigate greed-induced corruption and nepotism. But it is also important that AI be monitored and measured on its inclusivity, to ensure that the algorithm's data set is not as prejudiced as that of the Amazon hiring system.

The City of Tshwane Metropolitan Municipality's “AI” acts as a random lottery system which works differently from the Amazon hiring algorithm.

However, whether this alternative lottery approach is better still needs to be critically examined. The lottery system randomly awards qualifying candidates temporary employment opportunities.

However, the lottery system isn't an AI, and the video's title is highly misleading. The dilemma of AI hiring nonetheless remains topical, especially considering the encroachment of AI.

Perhaps AI hiring technology is merely regurgitating the pre-existing biases in the hiring process. Selection is a bias, and the process of choosing is based on criteria such as work experience, age, region, qualifications and other essential skills. This means that separating bias from recruitment is not possible.

Merit is usually the common ground that justifies selection, but the legacies of apartheid have made selection more complex.

It is thus important to ensure that the adopted algorithm is well-acclimatised to work towards the state's redistributive prerogative through affirmative action.

It is imperative to be cognisant of the limitations of AI and to ensure that algorithms do not entirely replace human-based selection.

This ensures that human resources professionals can identify biased judgements contrary to the state's affirmative action agenda.

Careful policy measures are imperative to help mitigate these concerns. Perhaps it is time to develop an AI algorithm framework to facilitate AI adoption in South Africa.

This framework can be informed by the Department of Telecommunications and Postal Services' Presidential Commission on the Fourth Industrial Revolution.

This is because of the commission's complementary agenda of ensuring that 4IR technology and AI are best integrated into society through rigorous planning and discussion by academics, civil society and other stakeholders.

It is important that this presidential commission is mindful of issues such as circumventing algorithmic bias, and of the many other issues which could arise from the integration of 4IR technologies into society.

Smart technology is fast becoming an apparatus that can be used to improve processes, but technological adoption tends to come with trade-offs which may lead to social problems.

It is important to ensure that the presidential commission on the Fourth Industrial Revolution explores these potential negative social impacts, so that technology does not fix one problem while creating another.

That said, the European Union (EU) has proposed an AI law to show that it is aware of the problems of AI. A similar approach to AI regulation was first implemented in Singapore in 2017, when AI ethics became a topical area of concern.

In the space of AI research, numerous algorithms have been deemed highly problematic across an array of applications, such as AI judges and self-driving cars which, in some instances, failed to identify pedestrians.

The problems with AI are well documented, and government intervention is imperative to ensure that the encroachment of AI does not create problematic dilemmas due to insufficient regulation.

Therefore, what I would like to raise is that algorithmic transparency and auditing are imperative to ensuring that any adopted AI encompasses programmed guidelines which mitigate biases.

For this to work, the state needs to ensure that the AI algorithms adopted are sufficiently vetted for social applications. This is because it is difficult to critique AI sufficiently if the engine under the hood is not visible for the public to scrutinise.

I am thus proposing that AI, prior to its launch, be vetted for its performance across an array of social functions.
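One simple form such vetting could take is a disparate-impact check on an algorithm's selection outcomes. The sketch below assumes the widely used "four-fifths rule" from employment-testing practice, under which a group is flagged if its selection rate falls below 80% of the highest group's rate; the group names and numbers are invented for illustration.

```python
def selection_rates(outcomes):
    """outcomes maps group -> (number selected, number who applied)."""
    return {g: selected / applied for g, (selected, applied) in outcomes.items()}

def four_fifths_check(outcomes, threshold=0.8):
    """Return, per group, whether its selection rate passes the four-fifths rule."""
    rates = selection_rates(outcomes)
    best = max(rates.values())
    return {g: rate / best >= threshold for g, rate in rates.items()}

# Invented audit data: group_y's rate (20%) is half of group_x's (40%),
# so the check flags it as failing the four-fifths threshold.
outcomes = {"group_x": (40, 100), "group_y": (20, 100)}
result = four_fifths_check(outcomes)
```

An audit of this kind only detects disparities; deciding whether a flagged disparity is unjustified, or is consistent with the state's affirmative-action agenda, still requires the human oversight argued for above.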

For this to happen, multiple stakeholders are needed to help inform an inclusive framework which deals with the limitations of AI. These requirements should be universalised across numerous sectors to ensure that AI adoption is met with the necessary scrutiny.

