Jamaica Gleaner

19-yr-old applies facial recognition AI to improve fraud detection

- Jordan Micah Bennett/ Contributor Jordan Micah Bennett is inventor of the Supersymmetric Artificial Neural Network and author of ‘Artificial Neural Networks for Kids’. Send feedback to editorial@gleanerjm.com, or jordanmicahbennett@gmail.com.

Artificial Intelligence and the Economy features machine-learning computer models in Jamaica. These models are computer algorithms, or smart apps, that seek to give computers the ability to learn, like children, to do a variety of tasks. Here, we highlight how an author’s work may solve a particular set of real-world tasks or problems. By doing this, we aim to encourage more local research and development in artificial intelligence.

TODAY, WE will highlight machine learning applied to automatic facial recognition and improved fraud detection. This is work being done by Leon Wright, a 19-year-old Jamaican artificial intelligence researcher and programmer from Ctrl-IT Inc. Intriguingly, Wright is one of the brightest, most resourceful and reliable employees at Ctrl-IT Inc, but he is yet to complete his university degree.

Bennett: What is the most significant thing you’ve used machine learning to do at your company?

Wright:

Among a few projects, I worked to build an account-opening application for a local financial institution. It’s an application where a person would open an account at the institution, and the staff would take out a tablet, ask for the person’s identification, then scan the person’s ID. Our algorithm would then grab the TRN from the ID and locate the face on it. If the picture on the identification card is of good enough quality, it is stored as part of a database of face images that we can later compare against the person’s live face.

So, we then compare the image on the ID with a selfie the person takes. The selfie is used to ask our learning algorithm whether it matches the face on the ID. The algorithm would then be able to detect the person the next time he or she came into the institution, through the camera there. In this way, when the person next comes in with his or her identification card, our algorithm would take an image of the person from the institution’s camera, or a selfie, and try to match it against data belonging to users who had already signed up. Thus, we’re able to quickly verify whether the identification card the user brings in indeed belongs to that user, instead of perhaps some impersonator. In this way, we’ve sensibly applied machine learning to build towards a type of fraud prevention when it comes to quickly verifying people’s identities.

We’re still working to roll out more products that offer more ways to prevent fraud. For example, we’ve already composed a video-based application for call centres. This application is equipped with facial-detection algorithms like the one I discussed above, enabling a similar level of security against fraud: a person who calls in would likely not be able to fake his or her identity, given that we would have the correct person’s face on file, and our algorithm would quickly return whether the caller was actually a person in our database, or really, whether that person was who he or she claimed to be. This is an added layer of security or verification, where we would facilitate video calls so that callers could be seen and verified with our learning algorithms.

What type of learning algorithm did you use? For example, did you use a convolutional neural network, or something else? Also, remind us why we don’t need to ‘reinvent the wheel’ when it comes to applying these machine-learning models.

We essentially used a class of learning algorithms called convolutional neural networks (CNNs). Convolutional neural networks are loosely inspired by actual brains. We used a library called TensorFlow, which already has CNNs packaged as models. These models are flexible, and we adjust the TensorFlow CNN representations to our particular needs. With these models, we don’t need to start from scratch: the models, comprising thousands of lines of computer code, are already composed by PhDs in the field through Google, and released in the form of TensorFlow libraries we can utilise with few lines of computer code.

Tell us a little more about the convolutional neural network, such as what it is, what goes on in your application of the CNN, how many layers you used, etc.

CNNs are basically a type of mathematical sequence of operations, called convolutions, that form artificial layers of calculations. CNNs enable us to compose learning algorithms that do well on machine-learning tasks involving images.
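The convolution operation Wright mentions can be sketched in a few lines of Python. This is an illustrative toy, not Ctrl-IT’s actual code: a real CNN stacks many such filters and learns their values during training, rather than using a hand-picked edge filter as below.

```python
# A 2-D convolution, the core operation of a CNN, sketched with NumPy.
import numpy as np

def convolve2d(image, kernel):
    """Slide the kernel over the image (no padding, stride 1) and
    sum the element-wise products at each position."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A tiny image whose right half is bright, and a vertical-edge filter.
image = np.array([[0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9],
                  [0, 0, 9, 9]], dtype=float)
edge_filter = np.array([[-1, 1],
                        [-1, 1]], dtype=float)
response = convolve2d(image, edge_filter)
print(response)  # strongest response in the middle column, where dark meets bright
```

Stacking layers of such filters, interleaved with simple non-linear functions, is what lets a deep CNN build up from edges to facial features.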

The model is moderately large, with 132 layers of computation. CNNs can be trained so that the model learns how to do things like detect faces. We trained the CNN by feeding it labelled images of faces belonging to persons from the financial institution. We employed something called a triplet loss, which enables us to match faces to persons. We ‘query’ the CNN, asking it if it thinks it’s seeing a particular person’s face (like, say, when somebody walks in and we capture their face on camera, and we want to see if he or she is in the database). When the query happens, the CNN outputs an array, or collection of values, that represents each face.
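The triplet loss Wright mentions can be sketched as follows. This is an illustrative toy with made-up two-dimensional vectors, not the output of a real CNN: the idea is to pull an ‘anchor’ face towards a ‘positive’ (another photo of the same person) and push it away from a ‘negative’ (a different person) by at least a margin.

```python
# Triplet loss, sketched with NumPy.
import numpy as np

def triplet_loss(anchor, positive, negative, margin=0.2):
    pos_dist = np.sum((anchor - positive) ** 2)  # squared distance to the same person
    neg_dist = np.sum((anchor - negative) ** 2)  # squared distance to a different person
    # Loss is zero once the positive is closer than the negative by the margin.
    return max(pos_dist - neg_dist + margin, 0.0)

anchor   = np.array([0.10, 0.90])  # toy 2-D "embeddings"; real ones are much longer
positive = np.array([0.12, 0.88])  # another photo of the same person
negative = np.array([0.90, 0.10])  # a different person
print(triplet_loss(anchor, positive, negative))  # 0.0: this triplet is already satisfied
```

Training on many such triplets teaches the network to place photos of the same person close together in embedding space.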

Each collection of values that represents an object, such as a face, is called an embedding in machine learning. Embeddings of persons’ faces are generated by the CNN, and we store those for later use. When a person comes into the financial institution, we take the input picture or selfie and ask the CNN if the person exists in the database. The query happens when the camera image of the person is passed through the CNN’s structure of artificial neurons and synapses. A new embedding is made that represents the face of the person who just walked in. We then compare the new embedding to the prior embeddings generated at sign-up time, calculating the distance between a person’s database record and the camera image taken when he or she walks in. Close distances signify that the camera and database image pair likely belong to the same person, and a decision is made based on a predefined threshold that determines whether the faces match or not.
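The matching step described above can be sketched like this. The names, embeddings and threshold value are illustrative assumptions, not details of the actual system: a fresh embedding is compared against the stored sign-up embeddings, and the closest person is accepted only if the distance falls under the threshold.

```python
# Embedding comparison with a distance threshold, sketched with NumPy.
import numpy as np

def match_face(new_embedding, database, threshold=0.6):
    """Return the best-matching person, or None if nobody is close enough."""
    best_name, best_dist = None, float("inf")
    for name, stored in database.items():
        dist = np.linalg.norm(new_embedding - stored)  # Euclidean distance
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist < threshold else None

# Hypothetical sign-up embeddings (real ones come from the CNN).
database = {
    "leon":  np.array([0.10, 0.90, 0.30]),
    "varij": np.array([0.80, 0.20, 0.50]),
}
selfie_embedding = np.array([0.12, 0.88, 0.31])
print(match_face(selfie_embedding, database))  # "leon": distance well under the threshold
```

The threshold is the tunable part: too loose and impersonators slip through, too strict and genuine customers are rejected.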

PROBLEMS FACED

Any problems with the machine-learning, face-detection model you guys would like to improve?

There are problems with facial recognition. For example, there is an employee here named Varij, who currently wears a big beard. In most of his pictures, he has no beard and his face appears skinnier. So his face looks almost completely different than it does in his pictures, and there was quite a high error rate when it came to trying to match his current face to the face pictures of him we had on file. In this type of problem, the two things we’re trying to match, although pertaining to a single object, may be so different that it causes errors. In the scenario above, the distance for Varij was quite high, and that’s difficult to solve without more representative images of his face in the database. This type of distance algorithm is good enough most of the time for scenarios where data is lacking.

What methods could be used to improve the learning algorithms?

We could work to increase how much data we feed the algorithm. The more data we have, the more opportunities the algorithm gets to train on.

Tell us briefly about some societal impacts of your application.

Our algorithms can help to reduce a lot of fraud and crime.

What types of smart apps or machine learning models do you plan to work on soon?

I plan on continuing my work on facial recognition while improving the accuracy of my current algorithms. I also plan on using natural language processing and sentiment analysis to aid me in building my very own stocks and cryptocurrency platform. Also, I plan on using machine learning in route planning in a logistics application I am conceptualising at this time.

I’m looking forward to collaborating with you on machine-learning projects.

Next week, we will highlight more Jamaican persons applying machine learning.

Leon Wright, a 19-year-old Jamaican artificial intelligence researcher and programmer from Ctrl-IT Inc.
