Business in Vancouver

AI might not revolutionize workplace management

New tech tools don’t necessarily translate into better teams or organizations

BY GUILLAUME DESJARDINS, THE CONVERSATION

It is no exaggeration to say that the democratization of new forms of artificial intelligence, such as ChatGPT (OpenAI), Gemini/Bard (Google) and Copilot (Microsoft), is a societal revolution of the digital age.

The mainstream use of AI systems is a disruptive force in a number of areas, including university education, the legal system and, of course, the world of work.

These changes are happening at such a bewildering pace that research is struggling to keep up. In just a few months, for example, ChatGPT improved to the point that it now scores in the top 10 per cent of test-takers on the Uniform Bar Exam in the United States. Results like these are even encouraging some U.S. law firms to use AI software to replace part of the work of paralegals, detecting a judge’s preferences in order to personalize and automate pleadings.

However, while the technological advances are remarkable, the promises of AI do not square with what we have learned from more than 40 years of research in organizational psychology. Having worked for many years as an expert in strategic management, I will shed some distinct, but complementary, light on the sometimes dark side of organizations, i.e. behaviours and procedures that are irrational (or even stupid), and look at the impact these have when AI is added to the mix.

Stupid organizations

Have you ever found yourself in a professional situation where your idea was dismissed with the answer, “The rules are the rules,” even though your solution was more creative and/or less costly? Congratulations: you were (or still are) working in a stupid organization, according to science.

Organizational stupidity is inherent, to varying degrees, in all organizations. It rests on the principle that human interactions are, de facto, inefficient, and that processes for controlling work (e.g. company policies) risk making the organization itself stupid unless they are regularly updated.

While some organizations work hard to keep themselves up to date, others, often for lack of time or in search of day-to-day convenience, maintain processes that no longer fit the reality the organization is facing, and so become stupid. Two elements of organizational stupidity can be put forward: functional stupidity and organizational incompetence.

Functional stupidity

Functional stupidity occurs when the behaviour of an organization’s managers imposes a discipline that constrains employees’ relationships, creativity and reflection. In such organizations, managers reject rational reasoning and new ideas and resist change, which has the effect of increasing organizational stupidity.

This results in a situation where employees avoid working as a team and devote their professional resources (e.g. their knowledge and expertise) to personal gain rather than that of the organization. For example, an employee might notice the warning signs of a machine failure in the workplace but decide not to say anything because “it’s not their job,” or because their manager will be more grateful to them for fixing the machine than for preventing it from breaking down in the first place.

In a context of functional stupidity, integrating AI into the workplace would only make this situation worse. Employees, restricted in their relationships with their colleagues and trying to accumulate as many professional resources as possible (e.g. knowledge, expertise), will tend to multiply their requests to AI for information. These requests will often be made without contextualizing the results, or without the expertise required to analyze them.

Take, for example, an organization that suffers from functional stupidity and that would traditionally assign one employee to analyze market trends and then pass this information on to another team to set up advertising campaigns. Integrating AI would then risk encouraging everyone in the organization (whether or not they have the expertise to contextualize the AI’s response) to look for new market trends in order to have the best idea in a meeting in front of the boss.

We already have examples of functional stupidity cropping up in the news: in one trial, a U.S. law firm cited (with help from ChatGPT) six case-law precedents that simply do not exist. Ultimately, this behaviour reduces the efficiency of the organization.

Incompetent organizations

Organizational incompetence lies in the structure of the company. It is the rules (often inappropriate or too strict) that prevent the organization from learning from its environment, its failures or its successes.

Imagine you are given a task to complete at work. You can finish it in an hour, but your deadline is set for the end of the day. You may be tempted to stretch the work out to the limit, because you gain nothing by finishing earlier: no new task to move on to, no reward for working quickly. In doing so, you are demonstrating Parkinson’s law: work expands to fill the time available for its completion.

In other words, your work (and the cognitive load required to do it) will expand to fill the entire prescribed deadline. It is difficult to see how the use of AI would increase efficiency in an organization with a strong tendency towards Parkinson’s law.

The second element of organizational incompetence relevant to the integration of AI into the workplace is the principle of “kakistocracy”: how the individuals who appear least competent to hold managerial positions nevertheless find themselves in those positions.

This situation arises when an organization grants promotions based on employees’ current performance rather than their ability to meet the requirements of the new role. Promotions therefore stop the day an employee is no longer competent in the role they hold. If all promotions in an organization are made this way, the result is a hierarchy of incompetent people. This is known as the Peter principle.

The Peter principle will have even more negative effects in organizations that integrate AI. For example, an employee who masters AI more quickly than their colleagues, writing programming code in record time to solve several time-consuming problems at work, will have an advantage over them. This skill will put them in good standing at performance-appraisal time, and may even lead to a promotion.

Incompetence and inefficiency

However, the employee’s AI expertise will not equip them for the conflict-resolution and leadership challenges that a new management position brings. If the new manager lacks the necessary interpersonal skills (which is often the case), they are likely to suffer from “injelitance” (a combination of incompetence and jealousy) when faced with these new challenges.

This is because when human abilities have to come to the forefront (creative thinking, the emotional dimension of human relationships) and the limits of AI are reached, the new manager will be ineffective. Feeling incompetent, the manager will need more time to make decisions and will tend to find solutions to non-existent problems in order to showcase their technical skills and justify their expertise to the organization. For example, the new manager might decide it is essential to monitor (using AI, naturally) the number of keystrokes per minute made by the employees on their team. This is, of course, in no way an indicator of good performance at work.

In short, it would be wrong to think that a tool as rational as AI, placed in an environment as irrational as an organization, will automatically increase efficiency the way managers hope it will. Above all, before thinking about integrating AI, managers need to make sure their organization is not stupid (in terms of both processes and behaviour).

Guillaume Desjardins is an associate professor of industrial relations at the Université du Québec en Outaouais. This article was originally published on The Conversation.

MONKEYBUSINESSIMAGES/ISTOCK/GETTY IMAGES PLUS | AI may only exaggerate some of the flaws and challenges that currently exist in workplaces
