Bangkok Post

New chatbot can do a lot, but can you trust it?

- JAMES HEIN: James Hein is an IT professional with over 30 years' standing. You can contact him at jclhein@gmail.com.

Over the New Year break, I was digging a bit more into artificial intelligence and especially how ChatGPT can be used and how it could affect society.

ChatGPT looks like it will impact the way we do business, program and write. Students will be using it to write assignments. Coders are already asking it to write code for them. Presenters are asking it to write their presentations on subjects they may not be fully familiar with. You can get a test account fairly easily at chat.openai.com.

I started by asking it about quantum gravity, then string theory and then a recipe for lamingtons. Next, I asked it to "generate a new poem based on the famous Raven poem" and away it went, giving what looked to me like a reasonable response, but I'm no poetry expert. Then I asked for a C# code sample for an audio VST3 wrapper, and it generated one. Unsurprisingly, it had no proof for the existence of God, and it gave a similarly equivocal answer on life on other planets. The current version has a knowledge cut-off of 2021, so it would not theorise on the outcome of the 2024 US election. I also asked about when human life started, but you can try that one for yourself.

From these few examples, I was able to make some observations. When it comes to fact-based information, it does much better than the average Google response. On the more difficult life questions, it tends to play both sides of the fence, which in itself is not necessarily a bad thing. I did like the response to my question on the scientific method, to which it responded that "the goal of the scientific method is to arrive at an evidence-based understanding of a phenomenon that can be tested and refined over time". This came after "sharing the results of the investigation with others", something that is often lacking in modern studies.

The system can be used by an undergraduate to write a full paper if the right questions are asked. I tried "write a 1,000-word paper on the theory of leadership". It came back with a paragraph each on four leadership types, with an introduction and summary. With a bit of extra research, formatting and editing, this could be handed in as a first-year university response to an assignment. I asked for references and it generated five of them, both as places to get additional information and as citations to include in such an assignment. ChatGPT can reduce the time needed to prepare a paper and even expand it into a larger document.

The result is that it can potentially allow someone to appear more informed about a subject than they really are. Alternatively, it can provide a way to gain knowledge faster, depending on how much additional effort is put in and whether the material is verified.

Is it AI? Not in the sense that it has any awareness of what it is doing. It's a well-trained rules engine that puts together pieces of information in a logical manner based on the available information and the rules it has been given. We also do that, but then we will potentially look at what we have done and decide if it has ethical implications, if it makes sense in a wider context, how it makes us feel, if we should just delete it, and so on. ChatGPT doesn't have any of this capability as far as I know.

I did try some examples with "With the context of a xxxx" added in front of the same questions, and it did generate different responses. In some cases, the length of the response changed depending on what I replaced xxxx with. I'm not sure whether this indicates some kind of bias or just the depth of the training material it was using, as I chose a controversial subject for my test.

The bottom line is that ChatGPT is one alternative to Google and Wikipedia. It can also get things wrong, depending on context and how the question is phrased. Schools will need to start paying attention to AI-generated material because people can be lazy and take the easiest solution. It will become harder to find out who actually knows and understands things unless people start asking questions to confirm knowledge, and the continued hybrid work environment only makes that more difficult.

Does it really matter? If the job gets done, it works and it is reliable, does it really matter how you get there? Some would argue both sides of that question, so it will come down to who the task is being done for and whether they are happy with the results. I don't think there is objectively a right or wrong answer here. I do expect the capabilities of these engines to improve over time, and I'm not sure where we'll be in 10 years, but using this approach it will still not be human-similar AI. I do wonder what impact it will have on the ability of long-term users to engage in critical thinking, another version of what I call first Google response syndrome.
