Pax engineer developing AI system for radar

Aim is for apparatus to recognize and remove interference

The Enterprise - Front Page - By BOB KAPER. The writer is a civilian employee of Atlantic Test Ranges at Naval Air Station Patuxent River.

An engineer at Naval Air Station Patuxent River is developing an artificial intelligence system with the potential to teach itself how to recognize and remove external interference from radar signals.

The AI system is an outgrowth of doctoral research into pulsars and mysterious cosmic signals called fast radio bursts conducted by Atlantic Test Ranges electrical engineer Stephen Itschner.

“I’m hoping it will help us automate a process that’s now very time-consuming because we have to do it all by hand,” said Itschner, who works with ATR’s Advanced Dynamic Aircraft Measurement System group.

If successful, Itschner’s system will be integrated into ADAMS, which provides radar cross-section data from aircraft during flight tests.

“Radar cross-section is just a measure of how big a target looks to a radar,” he said. “It’s more related to electrical size than to actual physical size.”

Radar signals bouncing back from an aircraft can be contaminated with external radio frequency interference, or RFI, he said.

“It’s essentially the same as the static you hear on a radio when there’s lightning nearby,” he said. “It can come from other radar sites, walkie-talkies, military radios, boat radios, even garage door openers.”

When plotted on an X-Y graph, RFI appears as sharp peaks throughout the radar signal, making it hard to tell what represents the true radar return from an aircraft and what is coming from unwanted external sources.

“Radar cross-section post-analysis is very labor intensive,” said Jim Ashley, head of ATR’s Aircraft Signature and Avionics Measurement branch. “We’re hoping Steve’s research will lead to an 80 percent solution — letting the machine do 80 percent of the work before we turn it over to our human analysts.”

Itschner presented his initial results with a limited set of data at a meeting last week of the country’s top radar experts at the National Radar Cross Section Test Facility managed by Holloman Air Force Base, near Alamogordo, N.M.

He told the group his system achieved 80 percent correct RFI classifications with almost no false positives — that is, virtually no misidentification of true radar returns as RFI — when using a “proof-of-concept” set of radar data from a Learjet. He trained the AI system on 90 percent of the Learjet data, then tested it against the remaining 10 percent, which the system had not encountered before.
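The 90/10 split described here is a standard machine learning practice. A minimal sketch of the idea, in pure Python with invented toy data (not Itschner’s actual pipeline or dataset), shuffles the labeled examples and holds a fraction out so the system is tested only on data it never trained on:

```python
import random

def train_test_split(examples, labels, test_frac=0.10, seed=0):
    """Shuffle labeled examples and hold out a fraction for testing."""
    idx = list(range(len(examples)))
    random.Random(seed).shuffle(idx)  # fixed seed for a repeatable split
    n_test = max(1, int(len(idx) * test_frac))
    test_idx, train_idx = idx[:n_test], idx[n_test:]
    train = ([examples[i] for i in train_idx], [labels[i] for i in train_idx])
    test = ([examples[i] for i in test_idx], [labels[i] for i in test_idx])
    return train, test

# Hypothetical toy data: each "sweep" is a short list of samples,
# label 1 = RFI present, 0 = clean return
sweeps = [[float(i)] * 4 for i in range(100)]
labels = [i % 2 for i in range(100)]
(train_x, train_y), (test_x, test_y) = train_test_split(sweeps, labels)
print(len(train_x), len(test_x))  # 90 sweeps to train on, 10 held back
```

Because the held-out 10 percent never influences training, accuracy measured on it estimates how the system will behave on genuinely new radar data.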

“I’ve gotten it to train and test well on one class of target,” he said. “But I haven’t yet looked at whether that type of training will extend to, say, a helicopter or other type of jet.”

Ashley said the ADAMS equipment is being upgraded to handle new, more complex aircraft programs that will require far greater data analysis capability. “It’s simply not going to be practical to continue using people to do all of it,” he said.

NRTF engineers at the conference have come to similar conclusions, Itschner said.

“They independently found they’re going to have the same type of problem for a slightly different application and would need a solution similar to the one we’re working on,” he said. “It gave me a nice warm feeling to know we’re on a promising track.”

The similarities between Itschner’s work with radar and his Ph.D. research in radio astronomy led him to develop the artificial intelligence system, or machine learning, as he calls it. For his Ph.D. he’s working on instruments and signal-processing techniques to identify fast radio bursts, which are very powerful but extremely brief eruptions of energy from deep space.

“They’re very mysterious signals, and no one knows quite what they are,” he said. “They only last for a millisecond, and they’re completely unpredictable.”

Itschner is looking for commonalities among fast radio bursts, radar and RFI in order to develop machine learning systems to analyze them.

He’s come up with a machine learning algorithm — a series of computer instructions — called a convolutional neural network. The network can identify whether a piece of radar data is corrupted with RFI. In his astronomy research, he uses a neural network to determine whether data captured by a radio telescope comes from a fast radio burst.
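The “convolution” in a convolutional neural network is a small numeric filter slid along the signal. A toy sketch in pure Python (the filter, numbers, and threshold are invented for illustration; a real network learns many such filters from data rather than using one chosen by hand) shows why the operation responds strongly to the sharp RFI spikes described earlier:

```python
def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel along the signal."""
    k = len(kernel)
    return [
        sum(s * w for s, w in zip(signal[i:i + k], kernel))
        for i in range(len(signal) - k + 1)
    ]

# Hypothetical radar sweep: a smooth return plus one narrow RFI spike
sweep = [1.0] * 50
sweep[20] += 10.0  # sharp interference peak

# A hand-picked spike-detecting filter; a CNN would *learn* filters like this
kernel = [-1.0, 2.0, -1.0]
response = conv1d(sweep, kernel)

# Flag the sweep as RFI-contaminated if any filter response is large
has_rfi = max(abs(r) for r in response) > 5.0
print(has_rfi)  # True: the filter fires only at the spike
```

On the flat portions of the sweep the filter’s response cancels to zero, so a large response localizes exactly where the interference sits in the signal.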

“People can learn to see the difference without too much training, and convolutional neural networks are really, really good at mimicking human vision performance,” he said.

“To hand-design an algorithm that can see the same differences people can, an engineer traditionally would choose features that would help discriminate between objects — two types of fish, for example. I would say, ‘Let’s look at the length of the fish and the number of fins it has,’” he said. “I’d just try different things and then build a system around that.”

But that traditional approach restricts the algorithm’s discriminating ability, he said. “Its accuracy is limited by the engineer’s imagination.”

So instead of telling his algorithm to look for specific characteristics of real radar return data versus RFI, Itschner lets the convolutional neural network figure them out for itself.

“All you do is give the algorithm a bunch of examples and an answer key that says what class each example really belongs to, and the machine is able to learn the difference on its own,” he said. “Eventually it learns to make correct decisions on new data so that a human doesn’t need to examine it.”
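The “examples plus an answer key” recipe he describes is supervised learning. A minimal sketch using a perceptron — a far simpler learner than a convolutional network, with features and data invented purely for illustration — shows the same loop of guessing, checking against the answer key, and adjusting:

```python
def train_perceptron(examples, labels, epochs=20, lr=0.1):
    """Learn classification weights from labeled examples alone."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # compare the guess against the answer key
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Hypothetical features per example: (peak height, peak width)
data = [(9.0, 1.0), (8.0, 2.0), (1.0, 6.0), (2.0, 7.0)]
answer_key = [1, 1, 0, 0]  # 1 = RFI, 0 = true radar return

w, b = train_perceptron(data, answer_key)

# The trained model now classifies a new, unseen example on its own
new_x = (7.5, 1.5)
pred = 1 if sum(wi * xi for wi, xi in zip(w, new_x)) + b > 0 else 0
print(pred)  # 1: tall, narrow peak is flagged as RFI
```

The convolutional networks in Itschner’s work follow the same principle at much larger scale, learning their internal filters instead of being handed features like peak height and width.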

Itschner’s initial results are encouraging, Ashley said. “The next step is to buy hardware for the higher processing power needed to train the system for a wider range of radar data,” he said. The equipment is expected to arrive at ATR in time to begin running AI training algorithms next month.

“We’re not sure yet if it’s the right way forward,” he said, “but Steve’s work will help us narrow down how best to apply it to ATR.”


PHOTO BY JULIE KELLY

Electrical engineer Stephen Itschner installs a video card in an external GPU case used for faster training of the neural networks he’s developing.

U.S. AIR FORCE PHOTO

A J-UCAS aircraft body sits on a minimally reflective target pylon for radar cross-section testing at the National Radar Cross Section Test Facility. The facility is located on the White Sands Missile Range and managed by the 704th Test Group at Holloman Air Force Base, Alamogordo, N.M.
