Kashmir Observer

The Dual Edges Of AI In Scientific Research

- Aditya Sinha. This article was originally published by Khaleej Times.

Artificial intelligence (AI) is significantly transforming scientific research by enhancing computational methodologies and enabling the analysis of large-scale datasets across various disciplines. In the realm of biomedical research, AI technologies like machine learning models are crucial. For example, DeepMind's AlphaFold uses advanced deep learning techniques to predict protein structures with remarkable precision. Its deep neural networks interpret amino-acid sequences to predict protein folding patterns, facilitating rapid insights into biological processes and disease mechanisms, as demonstrated during the Covid-19 pandemic.

In the environmental sciences, AI is applied to improve climate modelling and forecasting. Researchers at Stanford University have developed machine learning models that integrate with traditional climate simulation software to refine predictions of weather patterns and climate events. These models use reinforcement learning and neural networks to analyse historical climate data, enabling more accurate predictions of extreme weather conditions and their potential impacts.

Furthermore, in the field of astronomy, AI algorithms manage and analyse data from astronomical observations to identify celestial bodies and phenomena. In one notable application, researchers from the University of California used AI to process light-curve data from the Kepler Space Telescope. By applying a neural network-based classifier, they identified exoplanets from subtle signals in the telescope's data, showcasing AI's ability to enhance signal detection and pattern recognition in vast datasets.
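The light-curve idea can be shown with a toy example. To be clear, this is not the researchers' neural-network classifier: it is a minimal sketch in which a planet's periodic transit slightly dims a synthetic star's brightness, and a simple assumed dip statistic stands in for the learned classifier. All numbers here are illustrative.

```python
import random

# Toy illustration only: the actual study used a neural-network classifier,
# but a simple depth-of-dip statistic already shows the idea of pulling a
# transit signal out of noisy photometry.
random.seed(1)

def light_curve(has_transit, n=200, depth=0.02, period=50, width=5):
    """Synthetic relative-brightness series with optional periodic dips."""
    curve = []
    for t in range(n):
        flux = 1.0 + random.gauss(0, 0.004)        # photometric noise
        if has_transit and (t % period) < width:   # planet crosses the star
            flux -= depth                          # brightness dips slightly
        curve.append(flux)
    return curve

def transit_score(curve):
    """Mean shortfall of the dimmest 10% of samples vs. the median flux."""
    s = sorted(curve)
    median = s[len(s) // 2]
    dimmest = s[: len(s) // 10]
    return median - sum(dimmest) / len(dimmest)

with_planet = light_curve(True)
without = light_curve(False)
print(transit_score(with_planet), transit_score(without))
```

With these assumed parameters, the curve containing transits scores roughly at the dip depth, while the flat curve's score reflects only noise; the real classifier learns a far subtler version of this separation from thousands of labelled curves.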

A recent working paper published by the National Bureau of Economic Research suggests that integrating AI into hypothesis generation and testing leads to more efficient resource allocation, accelerated research outcomes, and increased economic gains. The paper, "Artificial Intelligence and Scientific Discovery: A Model of Prioritized Search" by Ajay K. Agrawal, John McHale, and Alexander Oettl, delves into the intersection of AI and the innovation process, specifically focusing on hypothesis generation. It introduces a novel model in which the innovation process is a sequential search over a combinatorial design space, with AI used to prioritize which hypotheses to test. This contrasts with traditional approaches in which theory and intuition guide hypothesis generation. The authors employ a discrete survival analysis to assess innovation outcomes such as the probability of innovation, search duration, and expected profit. The model suggests that shifting from conventional methods to AI-based predictions can increase successful innovations, reduce search times, and raise profits.

However, this use of AI in scientific discovery has another side to it. One should read the recent paper published in the journal Nature by Lisa Messeri and M. J. Crockett. The authors suggest that the use of AI in science is creating "illusions of understanding," in which scientists may believe they comprehend more than they actually do. They identify six reasons why AI creates this illusion.

First, AI can analyse complex data sets and generate outputs that may appear insightful and comprehensive to researchers. However, these outputs are based solely on the data and algorithms used, without genuine understanding or contextual judgement. Researchers might believe they grasp the underlying principles or patterns in the data better than they actually do because the AI presents results in a seemingly clear and authoritative manner.

Second, AI tools can perform tasks like data analysis and hypothesis generation quickly and efficiently. This reduction in cognitive load for scientists can lead them to accept conclusions drawn by AI without sufficient scrutiny. The ease and speed with which AI provides answers can discourage deeper investigation into the underlying mechanics or potential inaccuracies of these answers.

Third, many AI models, especially those involving deep learning, are complex and not fully transparent, and are often referred to as "black box" models. Scientists using these models may not fully understand how the algorithms arrive at certain conclusions. This opacity can lead to misplaced trust, where users attribute too much credibility to the AI-generated results without a thorough understanding of the algorithmic processes and potential biases involved.

Fourth, AI tools are often designed to optimise specific types of analysis or data processing tasks. This specialisation can inadvertently lead researchers to focus on questions and methods that are best suited to AI's capabilities, neglecting other potentially valuable approaches. This creates a monoculture of knowing, where the diversity of scientific inquiry is reduced and only the AI-compatible methodologies thrive.

Fifth, AI systems learn from existing datasets and can perpetuate or amplify any biases present in those datasets. This can lead to a situation where new insights are merely reflections of past data, reinforcing existing beliefs and misconceptions without challenging them with new, independent observations. Researchers might then wrongly assume they are gaining new understanding when they are essentially looking at regurgitated versions of old data.

Sixth, the emphasis on the predictive capabilities of AI can overshadow the importance of understanding the causal relationships behind scientific phenomena. Scientists may become more concerned with whether an AI tool can predict outcomes accurately than with whether it helps them understand why those outcomes occur. This shift from explanatory to predictive models can detract from the depth of scientific knowledge.

This transition raises fundamental questions about what it means to "know" something in science. If scientific knowledge becomes predominantly characterised by predictive accuracy rather than explanatory depth, the essence of science as a pursuit of understanding the why and how of phenomena may be diluted. This shift could redefine the goals of science, moving away from comprehensive understanding towards a model where the primary objective is operational effectiveness and technological utility. Such a redefinition risks turning science into a field dominated by technological determinism, where the means (AI tools) start dictating the ends of scientific activity. This scenario compels a philosophical re-evaluation of the values that underpin scientific endeavours and challenges us to think critically about how we define progress and success in the scientific domain.
