Arab News

ChatGPT’s flaws mean it cannot be a universal tool for serious writing

- DR. THEODORE KARASIK

ChatGPT, artificial intelligence and the implications of self-generated content and analysis raise many fundamental questions about the role of an AI processor. ChatGPT models are designed for natural language processing tasks such as text generation and language understanding, but there are important considerations to keep in mind when using this new "toy" to assemble data. AI chatbots like ChatGPT can be used to construct manuscripts, and there has been a rising phenomenon of professional firms using such tools to generate papers, often with disastrous results because of corporate failures regarding quality assurance.

Chatbots, of which ChatGPT is one, are computer software programs trained on extensive "libraries" of internet material to process written or spoken human communications. In essence, they are conversational tools that can perform several functions for humans, depending on what they are designed for, and that theoretically follow human conversational instructions and respond to them in detail. ChatGPT has been extremely popular since its appearance on the market last November, reaching 1 million users in only five days, a new record for this industry. It can generate texts that closely resemble human language, although the content it produces still requires heavy editing.

Outside of document production, ChatGPT has other uses. It can engage in multiple ongoing conversations, understand and respond to natural language inputs, and offer customized and interactive assistance. This makes ChatGPT a promising tool for open education, as it can improve the independence and autonomy of autodidactic learners while being both practical and adaptable. It provides personalized support, direction and feedback, and has the potential to increase motivation and engagement among students.

However, the present generation of chatbots has academic and policy limitations. First, chatbots and AI are not conscious; they can only produce content based on the libraries on which they were trained. Secondly, chatbots can produce factually incorrect answers that may sound credible. Thirdly, the information chatbots use can be outdated, dating from when the AI software was developed, rather than current. And chatbots such as ChatGPT have the potential to respond to harmful instructions because they lack AI judgment. To be sure, the potential misuse of the several AI chatbots being developed by multiple companies is a concern.

Overall, there are questions about the scientific integrity of the content that chatbots produce in their present form. Researchers are noting a real uptick in individuals producing scholarly work with ChatGPT and are finding documents that have been submitted for peer review, often to immediate rejection. The stakeholders of academia are noting the role of ChatGPT, and AI generally, in scientific publications. Scientific institutions are finding that AI tools cannot meet the requirements for authorship because they cannot take responsibility for the submitted work; they can only be used to generate data that a human author submits. AI-generated data has no legal status, and difficulties will arise from the inability to declare conflicts of interest and from managing copyright and license agreements.

ChatGPT receives mixed reviews when applied to the health and medical field. With its ability to generate human-like text based on large amounts of data, it has the potential to support individuals and communities by generating recommendations about their health, though recommendations are not the same as informed decisions. As with any technology, however, there are limitations and challenges to consider when using ChatGPT in public health.

Clinical research shows that ChatGPT can provide information about the various types of community health programs and services available, the populations they serve and the specific health outcomes they aim to achieve. It can also provide information about the eligibility criteria for accessing these programs and services, as well as the costs involved and the insurance coverage available. Here, however, ChatGPT may offer options that are not necessarily medically sound, so its answers should be checked with an actual human being who holds a medical doctorate.

One last point regarding data processing and AI: futures studies, which produce trajectory projections from content fed into a computer processing program, can generate useful outputs. Software programs are able to scan text and process all of its words and numbers to produce a trajectory out to, for example, the year 2050. This type of analysis reviews scientific content and produces very useful reports while involving no AI: a researcher might collect 20 primary-source articles and scan them into a program that takes the data and produces a paper based on its programming. Measured against this type of research methodology, the ChatGPT-AI mix seems to have missed a step.

Overall, ChatGPT and AI suffer from a lack of accuracy, the bias limitations of their data and, in the public arena, limited engagement because there is no direct interaction with human beings. That limits AI chatbots as a functional research tool. For the education and medical fields there are many uses, but they come with drawbacks that need to be addressed in regulatory law. ChatGPT has its particular uses, but it is not a panacea for writing a serious academic or policy paper.

Twitter: @KarasikTheodore
Dr. Theodore Karasik is a senior adviser to Gulf State Analytics in Washington.
