Saskatoon StarPhoenix

A hack from outer space?

IT’S POSSIBLE, SCIENTISTS SAY

- Derek Hawkins

You might not remember this, but the alien invasion in the 1990s sci-fi blockbuster Independence Day began not with laser blasts but with a cyberattack. As Jeff Goldblum’s computer nerd character explains in the film, the alien fleet hacks into Earth’s satellites, hijacking their communication systems to co-ordinate their (ultimately unsuccessful) assault on humanity.

To call that scenario far-fetched is an understatement. But a pair of astrophysicists say in a bizarre paper released this month that the possibility of an extraterrestrial hack — one far more sophisticated than the attack in Independence Day — is worth taking seriously. (How seriously to take the paper, which was published in an unconventional, non-peer-reviewed academic archive, is another matter.)

Michael Hippke of the Sonnenberg Observatory in Germany and John Learned of the University of Hawaii warn in their article that an alien message from outer space could contain malicious data designed to wreak havoc on Earth. Such a message would be impossible to “decontaminate with certainty” and could pose an “existential threat,” they argue; therefore humans should use extreme caution.

Scientists, academics and futurists have long debated whether humanity would benefit from contact with extraterrestrial intelligence, or ETI. The Search for Extraterrestrial Intelligence Institute, a research organization that looks for alien life, seeks a peaceful dialogue. Its researchers listen for communication signals from intelligent aliens and send out signals from Earth in hopes that another civilization might pick them up. So far, no one has heard anything that sounds like life.

Hippke and Learned’s paper — which reads more like a thought experiment than serious scholarship — ponders the dangers of receiving these theoretical interstellar missives.

“While it has been argued that sustainable ETI is unlikely to be harmful, we cannot exclude this possibility,” the researchers wrote in the article, which was first reported by Motherboard. “After all, it is cheaper for ETI to send a malicious message to eradicate humans compared to sending battleships.”

The researchers envision several different types of malicious communications. A simple one might contain a threat like “We will make your sun go supernova tomorrow.”

“True or not, it could cause widespread panic,” they wrote, or have a “demoralizing cultural influence.” A longer, more nuanced message could sow confusion and fear, especially if it’s received by amateurs, according to the paper. The spread of such messages could not be easily contained, but they could at least be printed out and examined on paper, and wouldn’t necessarily require a computer to decipher.

But large, complex messages written in code would.

Messages that contain big diagrams, algorithms or equations could come with viruses hidden in them, the researchers say. They couldn’t be printed out and examined manually, so they’d have to be deciphered on a computer, the paper speculates.

The messages could also be compressed in the same way personal computers compress large files for more efficient transfer, and the algorithm needed to decompress them could also be code. Executing those billions of decompression instructions could unleash the malware, according to the paper.

In their most out-there example, Hippke and Learned imagine a sort of extraterrestrial spearphishing, the technique human hackers sometimes use to gain personal information from victims under the guise of a trustworthy source. Russian hackers likely used this technique to gain access to the Democratic National Committee’s computer networks.

As the researcher­s write in their paper, the header of such a message might read: “We are friends. The galactic library is attached. It is in the form of an artificial intelligen­ce which quickly learns your language and will answer your questions. You may execute the code following these instructio­ns ...”

Extraordinary steps could be taken to isolate the artificial intelligence — the researchers even suggest building a computer on the moon to execute the code and rigging it with “remote-controlled fusion bombs” to destroy it in case of an emergency.

This idea is known as an “AI box,” essentially a solitary confinement cell for an artificial intelligence. Experts have long discussed it as a way to contain a potentially dangerous artificial intelligence. Some have argued that a sufficiently advanced computer program could easily manipulate its human guards and find a way out of the “box.”

Hippke and Learned say efforts to imprison an artificial intelligen­ce delivered by extraterre­strials would probably fail.

MARTIN BERNETTI / AFP / GETTY IMAGES: Two astrophysicists say in a paper released this month that the possibility of an extraterrestrial cyberattack is worth taking seriously. The paper, however, reads more like a thought experiment than serious scholarship.
