Las Vegas Review-Journal

CYBERLAB STUDIES PLETHORA OF ISSUES


TECHNOLOGY, FROM PAGE 1:

… privacy and has trained more than 75,000 people.

Biometrics, the science of using hard-to-mask physical attributes — like facial characteristics, fingerprints, retinal scans and DNA — is just one specialty. Cylab is also engaged in broader uses for AI, cryptography, network security and an array of other cybersecurity skills.

One of the first times Savvides and his group used his facial-recognition technology for something besides research was just after the 2013 Boston Marathon bombing. His lab took the blurry, low-resolution surveillance image of the suspected bomber released by the FBI and, using AI technology, reconstructed the image and sent it to the bureau.

The next morning, the identity of Dzhokhar Tsarnaev, who was convicted of the bombing, was revealed. Savvides does not know whether his reconstruction helped the FBI, but “we were extremely surprised to see the resemblance to Tsarnaev that was constructed from the very low resolution, pixelated face that even our human brain cannot comprehend,” he said.

Savvides was also happy to demonstrate another gee-whiz technology — long-distance iris scanning. Rather than requiring that an eye be placed directly up to a scanner, the device he helped invent looks like a very large camera lens with a smaller one on top and wings of infrared lights on either side. It can identify people by their irises from as far as 40 feet away.

Like fingerprints, each person’s iris is unique; it stays the same as we age and, unlike fingerprints, cannot be scratched or covered up in any way short of removing the eye altogether. And unlike an iris, fingerprints cannot be captured from a distance.

In a video he made, Savvides showed how it would be possible for police officers to identify the driver of a car they’ve pulled over for a violation by capturing a detailed image of the iris as the driver glances into the side mirror and comparing it to their database of irises.

Then police would know whether the person was driving a stolen car, had a criminal record or was on a terrorist list — and might be dangerous — before walking up to the car.

It could also help speed up endless security lines at airports. Instead of a human agent taking a passport or a driver’s license and running it through a security check, the irises of travelers could be quickly scanned.

Of course, the potential for abuse makes some people wary of the technology. An article in The Atlantic magazine on the concept noted that “identification to a degree comparable to fingerprints at a distance is not something our social habits and political institutions are wired for.”

Savvides gets annoyed with such talk.

“We all want better computer and human relations — we’ve craved it for decades,” he said. “But biometrics and facial recognition are stigmatized by Hollywood.”

Some of the Cylab work is focused on the threats that most affect people in their daily lives: password security.

Lujo Bauer, director of the university’s Cyber Autonomy Research Center within Cylab, said his research showed that to avoid being hacked, a computer user’s passwords had to be not only complex but also long.

“A password that’s long and just slightly complex is stronger than a password that’s very complex but short,” said Bauer, an associate professor of computer science and electrical and computer engineering.
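As a rough back-of-the-envelope illustration of that point (a sketch, not Bauer’s own analysis), the space of passwords an attacker must search grows exponentially with length, so a long password over a small alphabet can outpace a short one over a large alphabet:

```python
import math

def guess_space_bits(alphabet_size: int, length: int) -> float:
    """log2 of the number of possible passwords an attacker would have to
    try in a worst-case brute-force search of this alphabet and length."""
    return length * math.log2(alphabet_size)

# Long but only slightly complex: 16 characters of lowercase letters and digits (36 symbols).
long_simple = guess_space_bits(36, 16)      # ~83 bits

# Short but very complex: 8 characters drawn from all 95 printable ASCII symbols.
short_complex = guess_space_bits(95, 8)     # ~53 bits

print(f"long + simple  : {long_simple:.0f} bits")
print(f"short + complex: {short_complex:.0f} bits")
```

Real guessing attacks use smarter models than pure brute force, but the length advantage holds up in practice.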

Just changing a few words or adding numbers to a password already in use does very little to stop hackers, who can easily try thousands of variations of a password in rapid succession, he said. As everyone has been told repeatedly, the worst thing to do is reuse the same password across different accounts. That may be how many people’s accounts have been hacked.

One way to check if your account has been compromised in a data breach is to go to haveibeenpwned.com.
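The same service also exposes a password-checking endpoint that never sees the password itself. The sketch below assumes the publicly documented range API at api.pwnedpasswords.com, which accepts only the first five characters of a password’s SHA-1 hash; checking whether an email address appears in a breach uses a separate, authenticated API.

```python
import hashlib
import urllib.request

def times_pwned(password: str) -> int:
    """Check a password against the Pwned Passwords range API.
    Only the first 5 hex characters of the SHA-1 hash leave the machine
    (a k-anonymity scheme), never the password itself."""
    digest = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = digest[:5], digest[5:]
    req = urllib.request.Request(
        f"https://api.pwnedpasswords.com/range/{prefix}",
        headers={"User-Agent": "password-check-example"},  # a descriptive user agent is good practice
    )
    with urllib.request.urlopen(req) as resp:
        body = resp.read().decode("utf-8")
    # Each response line looks like "<hash suffix>:<times seen in breaches>"
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # A famously common password; expect a very large count.
    print(times_pwned("password123"))
```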

Other research from Cylab has discovered that, contrary to common assumptions, older people are less likely to be a target of phishing than 18- to 25-year-olds, perhaps because younger people are more likely to take risks, said Jason Hong, a professor at Carnegie Mellon’s Human-Computer Interaction Institute.

Much of cybersecurity is, as Kathleen Carley, a professor at Carnegie Mellon’s School of Computer Science, put it, “employing computer techniques to better understand society and employing our knowledge of society to better understand computer techniques.”

Her work is in social cybersecurity — that is, figuring out how to make social media “a free and open place without undue influence.” It is a subject that has become a major societal issue with the suspected Russian hacking of the 2016 election in the United States.

Most people, she said, do not realize the impact of bots, which are software applications that run automated tasks over the internet. The role of bots is to convince and direct individuals. That could be for relatively innocuous reasons, such as marketing, but increasingly bots are used by organizations or governments to run schemes or sow discord on social media, creating a conversation or amplifying an existing one to make it more virulent and divisive.

The bots exploit how human brains are wired. People hear something repeated over and over, seemingly from many sources, and it soon seems like the truth.

“They are affecting the country’s values and beliefs,” she said.

One example, which Carley and her team discovered in 2015, was a bot used to persuade Syrians and those in the Syrian diaspora to go to a website to donate to charity for Syrians.

“We believe it was actually a money-laundering site for ISIS,” she said.

In that case, the bots, which together made up a botnet (bots connected and controlled as a group), were identified simply through human detection. But AI researchers have now created algorithms to identify bots.

The one Carley and her team created is called bot-hunter.
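The article does not describe bot-hunter’s internals, and the sketch below is not Carley’s model. It is a purely hypothetical scoring heuristic, with invented features and thresholds, meant only to show the general flavor of behavior-based bot detection:

```python
from dataclasses import dataclass

@dataclass
class Account:
    # Hypothetical features; real tools derive many more from platform APIs.
    posts_per_day: float
    account_age_days: int
    followers: int
    following: int
    default_profile_image: bool

def bot_score(a: Account) -> float:
    """Toy heuristic: returns a 0-1 score, where higher means more bot-like.
    Thresholds are illustrative, not tuned on real data."""
    score = 0.0
    if a.posts_per_day > 50:            # humans rarely sustain this posting rate
        score += 0.35
    if a.account_age_days < 30:         # very new accounts are more suspect
        score += 0.2
    if a.following > 0 and a.followers / a.following < 0.1:
        score += 0.25                   # follows many accounts, followed by few
    if a.default_profile_image:
        score += 0.2
    return min(score, 1.0)

suspect = Account(posts_per_day=120, account_age_days=10,
                  followers=15, following=800, default_profile_image=True)
print(bot_score(suspect))  # prints 1.0 for this clearly bot-like profile
```

Production systems train classifiers over many such behavioral and network features rather than relying on hand-set thresholds.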

Perhaps even more insidious are bots that generate and magnify discord on social media, “warping the information environment,” Carley said, and affecting everything from how people vote to what they buy to how they view others in society.

There are relatively few bots compared with real human accounts, but they are so active that their effect is far out of proportion to their size, she said.

One prominent example is the big role bots played in the civil unrest in Ukraine in 2013, which is believed to have been fomented by the Russians, Carley said.

She and others are developing technological fixes, such as using AI to do automatic fact-checking, to identify bots and to identify posts with abusive language. But, she said, “the tools are in their infancy,” and technology alone won’t solve this problem. “Policymakers and the public have to be educated,” she said.

But technology can solve a lot, and that is why graduate students working with Cylab have helped create a digital cybersecurity game for those over 13 years old called picoCTF, with CTF standing for capture the flag.

In its fourth year, the competition attracted more than 27,000 students from around the world this time, usually working in teams. It is played over two weeks and involves increasingly complex challenges — requiring high-tech solutions — to capture the flag. Only participants in the United States are eligible for the top prize — a visit to Carnegie Mellon and $5,000. But the prestige is high.

And while fun, it is also a way to encourage young people to think about a profession they may never have considered before.

“There’s a dramatic shortage of people in cybersecurity,” said Martin Carlisle, a professor and director of academic affairs at Carnegie Mellon who oversees the contest. “And we know the vast majority of students have picked their major by the time they get to college.”

So targeting middle- and high-school students, he said, is a way to get them excited about a career in cybersecurity before they get to college.

This year’s winner? Dos Pueblos High School in Goleta, Calif. It was the only team that solved every challenge, although thousands made significant inroads, Carlisle said.

“This was one of the few competitions I could access as a high school student,” said Carolina Zarate, who entered the contest in Maryland and who is now a graduate student studying information security at Carnegie Mellon.

She helps develop problems for picoCTF and is also part of Carnegie Mellon’s competitive hacking team, which has won four of the last six competitions at the Def Con conference, considered to be the Olympics of hacking competitions.

For Zarate, the interest in cybersecur­ity was always there, but she said she hoped the challenge got more people involved.

“If you think about how many new areas technology is touching — what if someone hacked self-driving cars or bitcoins?” she said. “I want my money and life to be safe.”

PHOTOS BY KRISTIAN THACKER / THE NEW YORK TIMES

The entrance of Carnegie Mellon University’s Cylab Security and Privacy Institute, one of the largest institutions in the world focused on education and research for the next generation of cybersecurity experts.

Marios Savvides, director of Carnegie Mellon University’s Cylab Security and Privacy Institute, uses images found online to demonstrate facial recognition software.

Kathleen Carley of Carnegie Mellon University’s School of Computer Science says her work aims to figure out how to make social media “a free and open place without undue influence.”

Lujo Bauer directs Carnegie Mellon’s Cyber Autonomy Research Center and says the most secure passwords are both long and complex.
