The Korea Times

AI in health care: opportunities, threats

By Daniel Shin

Daniel Shin is a venture capitalist and senior luxury fashion executive, overseeing corporate development at MCM, a German luxury brand. He also teaches at Korea University.

The application of AI is a promising example of leveraging technology to improve efficiency, accuracy and compliance in health care. As AI continues to evolve, we can expect to see further innovations in the field.

The most viable example of using AI in health care is the application of natural language processing, or NLP, and machine learning algorithms to improve clinical documentation and coding processes. Traditionally, clinical documentation and coding involve manually transcribing patient encounters into electronic health records and assigning appropriate diagnosis and procedure codes for billing and insurance reimbursement.

This manual process can be very time-consuming, error-prone and subject to variability among health care providers.

To address these challenges, providers are not only investing in digital transformation but also increasingly adopting AI-driven solutions to automate clinical documentation and coding tasks. These solutions use NLP to extract relevant information from unstructured and even handwritten clinical notes, such as patient histories, physical examinations and treatment plans. Then, machine learning algorithms help analyze data to suggest appropriate diagnosis codes, procedure codes and billing information.
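To make the two-step idea above concrete, here is a minimal, purely illustrative sketch in Python: free-text notes are turned into NLP features, and a simple classifier suggests a code for a new note. The sample notes, the ICD-10 labels and the scikit-learn model choice are assumptions for illustration only; real medical coding systems are far more sophisticated and keep a human coder in the loop.

# Purely illustrative sketch: the notes, the ICD-10 labels and the model
# choice below are hypothetical toy examples, not a real coding system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Training input: unstructured clinical notes; labels: codes a human coder assigned.
notes = [
    "Patient reports chest pain radiating to the left arm and shortness of breath.",
    "Follow-up for type 2 diabetes; blood glucose remains elevated.",
    "Productive cough and fever for three days, crackles on auscultation.",
    "Routine visit for essential hypertension, blood pressure 150/95.",
]
codes = ["I20.9", "E11.9", "J18.9", "I10"]  # hypothetical ICD-10 diagnosis codes

# Step 1: TF-IDF turns free text into numeric features (a simple stand-in for NLP).
# Step 2: a classifier learns to map those features to diagnosis codes.
coder = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
coder.fit(notes, codes)

# A new, unseen note gets a suggested code -- to be reviewed by a human coder.
new_note = "Patient presents with fever and a productive cough since Monday."
print("Suggested diagnosis code:", coder.predict([new_note])[0])

In practice, such suggestions would feed a coder's review queue rather than billing directly, which is why the accuracy and oversight points below matter.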

AI algorithms can accurately identify and extract relevant clinical information from unstructured text, reducing errors and inconsistencies in clinical documentation and coding.

Hence, by automating clinical documentation and coding processes, AI-powered solutions can significantly improve accuracy and efficiency. They also help streamline the workflows of various stakeholders.

By automating manual documentation and coding tasks, AI solutions can free up health care workers’ time to focus on patient care activities and reduce administrative burdens.

AI-powered solutions can help providers automate their processes and optimize revenue cycle management by ensuring accurate and timely billing, reducing claim denials and maximizing reimbursement for services rendered.

They also help providers with regulatory compliance by automatically aligning clinical records with coding standards and documentation guidelines.

That is why health care providers are ambitiously putting their dollars into AI solutions.

While there are many advantages to integrating AI into health care practice, it also brings several negative consequences and challenges.

Addressing these challenges requires collaboration among stakeholders, including health care providers, developers, policymakers and, most importantly, patients.

Governments must also establish ethical standards and regulatory frameworks for the responsible use of AI in health care to protect patients. Transparent and accountable AI systems, robust data protection measures and ongoing monitoring and governance are essential to mitigate risks and ensure the safe and effective use of AI in health care.

In particular, a lack of transparency can undermine trust in AI-powered health care solutions among patients and health care professionals. It remains difficult to assign accountability when AI systems make errors and produce unexpected outcomes, and regulators must stay ahead of the systems they oversee.

However, the rapid advancement of AI technology in health care has consistently outpaced the development of regulatory frameworks and ethical guidelines.

There are still many ethical dilemmas around patient consent, data ownership, transparency of algorithms and the responsible deployment of AI in critical clinical decision-making. Clear and comprehensive regulations and ethical guidelines are much needed to ensure the responsible and ethical use of AI in the medical field.

AI algorithms could inadvertently perpetuate or amplify biases present in the data used to train them. This could lead to disparities in health care outcomes based on factors such as race, ethnicity, gender or socio-economic status.

Biased AI systems can exacerbate existing inequalities in health care and lead to unjust treatment of certain patient populations.

AI algorithms, like any technology, are not infallible and can make errors, leading to misdiagnosis or incorrect treatment recommendations. Relying too much on AI-driven diagnosis without human intervention can result in harm to patients and undermine trust in AI-powered health care solutions.

AI systems rely on large amounts of sensitive patient data, raising concerns about privacy breaches, unauthorized access and data misuse. Inadequate data protection measures and cybersecurity vulnerabilities can, therefore, pose a significant risk to patients.

AI systems could be designed to log and audit their own decisions, much like dashcams, to help deal with these issues. Even then, it would remain difficult to understand how AI engines arrive at their decisions. If risks such as bias, misdiagnosis, privacy breaches and a lack of transparency are properly addressed as technology and governance advance, AI could certainly fill the gaps in health care.


