Business World

How AI can spot exam cheats and raise standards

- By Andrew Jack

Every year, the co-ordinators of the Graduate Management Admission Test reject several dozen applicants for cheating in their exams.

The underhand techniques range from stand-ins impersonating candidates nominally taking the test, to the concealment of cameras in coat buttons and eyeglasses. These scan and show questions to remote accomplices, who then supply the correct answer via a concealed earpiece.

“These are very high stakes exams,” says Sangeet Chowfla, chief executive of the Graduate Management Admission Council (GMAC), which administers the multiple-choice tests taken by 250,000 people each year. “There is unfortunately an incentive for people to try to get an unfair advantage.”

Those who set and mark exams are deploying technology to reduce that fraud (which remains a small problem overall), to create far greater efficiencies in preparation and marking, and to help improve teaching and studying.

From traditional paper-based exam and textbook producers such as Pearson, to digital-native companies such as Coursera, online tools and artificial intelligence are being developed to reduce costs and enhance learning.

For years, multiple-choice tests have allowed scanners to score results without human intervention. Now technology is coming directly into the exam hall. Coursera has patented a system to take images of students and verify their identity against scanned documents.

There are plagiarism detectors that can scan essay answers and search the web — or the work of other students — to identify copying. Webcams can monitor exam locations to spot malpractice.
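A common heuristic behind such detectors is word n-gram overlap. The sketch below is illustrative only, not any vendor's actual system: it flags a pair of essays when they share a high proportion of three-word sequences.

```python
# Illustrative n-gram overlap check, a standard plagiarism heuristic:
# two essays sharing many word trigrams likely share a source.

def trigrams(text):
    """Return the set of consecutive three-word sequences in the text."""
    words = text.lower().split()
    return {tuple(words[i:i + 3]) for i in range(len(words) - 2)}

def overlap_ratio(a, b):
    """Fraction of the shorter essay's trigrams that appear in the other."""
    ta, tb = trigrams(a), trigrams(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / min(len(ta), len(tb))

essay1 = "technology is being deployed to reduce fraud in high stakes exams"
essay2 = "technology is being deployed to reduce fraud across many exams"
print(round(overlap_ratio(essay1, essay2), 3))  # prints 0.625
```

Real systems add stemming, fuzzy matching and web-scale indexing on top of this basic idea, but the trigram-set comparison captures why lightly paraphrased copying is still detectable.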

Even when students are working, they provide clues that can be used to clamp down on cheats. They leave electronic “fingerprints” such as keyboard pressure, speed and even writing style.

Emily Glassberg Sands, Coursera’s head of data science, says: “We can validate their keystroke signatures. It’s difficult to prepare for someone hell-bent on cheating, but we are trying every way possible.”
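Keystroke-dynamics verification can be sketched simply. This is an assumption-laden toy, not Coursera's actual system: it compares a session's typing rhythm against an enrolled profile of per-key-pair timings, with an invented threshold.

```python
# Hypothetical keystroke-dynamics check (illustrative, not Coursera's
# system): compare mean inter-key timings for common letter pairs.

def keystroke_distance(profile, session):
    """Average absolute gap in milliseconds across shared key pairs."""
    shared = profile.keys() & session.keys()
    if not shared:
        raise ValueError("no overlapping key pairs to compare")
    return sum(abs(profile[d] - session[d]) for d in shared) / len(shared)

def same_typist(profile, session, threshold_ms=35.0):
    """Flag the session if its typing rhythm drifts past the threshold."""
    return keystroke_distance(profile, session) <= threshold_ms

# Enrolled profile: mean milliseconds between common key pairs.
enrolled = {"th": 92.0, "he": 88.0, "in": 101.0, "er": 95.0}
exam_session = {"th": 96.0, "he": 85.0, "in": 104.0, "er": 99.0}

print(same_typist(enrolled, exam_session))  # close rhythm, prints True
```

Production systems model far richer features (dwell time, flight time, pressure) statistically, but the principle is the same: the rhythm of typing is hard for an impostor to reproduce.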

By randomly allocating a different, individualized selection of questions to each applicant, examiners can avoid the risk of papers being stolen and circulated in advance, or candidates copying each other’s answers while taking a test. Mr. Chowfla says batches of new GMAC questions on each theme are tested quickly on candidates to ensure they are of equivalent difficulty, and older ones are phased out once they have been used a certain number of times.
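That allocation idea can be sketched in a few lines; the function and field names here are illustrative, not GMAC's actual system. Seeding the random generator on the candidate makes each draw individualized yet reproducible for re-checking.

```python
import random

# Illustrative sketch: give each candidate a reproducible, individualized
# draw of questions per theme, so no two papers need be identical.
def allocate_questions(bank, candidate_id, per_theme=2):
    rng = random.Random(candidate_id)  # candidate-seeded: deterministic per person
    return {theme: rng.sample(questions, per_theme)
            for theme, questions in bank.items()}

bank = {
    "algebra": ["A1", "A2", "A3", "A4"],
    "logic": ["L1", "L2", "L3", "L4"],
}
paper = allocate_questions(bank, candidate_id="cand-042")
print(paper)
```

A real item bank would also balance draws by measured difficulty, as the article notes GMAC does when calibrating new questions.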

Online publishers in some subjects, such as mathematics and finance, create unique questions by randomly changing the numerical variables used in exam equations. “It works very well as long as the algorithms have been programmed correctly,” says Isabelle Bajeux-Besnainou, dean of McGill University’s Desautels Faculty of Management.
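In code, that randomization can be as simple as drawing fresh numbers into a question template. This is a hedged sketch: the question wording, parameter ranges and helper name are invented for illustration.

```python
import random

# Illustrative template randomization: each candidate gets the same
# simple-interest question with different numbers, so a copied answer
# is useless to a neighbour.
def make_interest_question(rng):
    p = rng.randrange(1000, 10001, 500)          # principal: 1000..10000
    r = rng.choice([0.03, 0.04, 0.05, 0.06])     # annual rate
    t = rng.randint(2, 8)                        # years
    return {
        "text": (f"A deposit of ${p} earns {r:.0%} simple interest per "
                 f"year. How much interest accrues after {t} years?"),
        "answer": round(p * r * t, 2),           # simple interest: p * r * t
        "params": (p, r, t),
    }

question = make_interest_question(random.Random(7))
print(question["text"])
```

As the McGill dean's caveat suggests, the weak point is the generator itself: a bug in the template produces a wrong "correct" answer for every variant at once, so the arithmetic must be verified against the drawn parameters.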

Technology is also helping those who manually mark more complex answers. When school students across the UK received their A-level exam results last week, the system to generate their grades had been radically upgraded.

While candidates still normally write their answers by hand, each paper is no longer sent by post to a single marker. Instead, it is scanned and segmented, and individual questions are sent to different examiners to read online.

That process improves secure delivery and marking speed, and allows real-time tracking and quality assurance of the grades.

A number of testing systems are exploring ways to compare and assess the quality of marking on more basic tests by cheaper, less experienced graders, allowing the more experienced ones to focus on judging more complex answers that require greater skill.

Educators are also investing in machine learning and natural language processing to evaluate the answers to more complex tests. Tim Bozik, global head of product at Pearson, says: “We’re moving from an assessment of ‘what’ to ‘how’ students answer questions.”

The company is piloting an analysis of visual scans of the steps involved in solving maths problems, so it can study the methods students use and flag up where they go wrong. And it is developing algorithms for essay writing that analyse language structure to assess style, critical thinking and comprehension.

Much of the application of technology is focused less on final exams and the detection of cheating, and more on intermediate tests designed to flag up where students are struggling and to help them improve.

“You ask people to look back on their education and they will talk about being motivated by a teacher,” says Kate Edwards, Pearson’s head of efficacy reporting. “Support can be augmented by machine learning on particular tasks, allowing teachers to spend more time focusing on the areas where they can make the greatest difference.”

Photo: ISABELLE BAJEUX-BESNAINOU, dean of McGill University’s Desautels Faculty of Management (WWW.MCGILL.CA)
