A hack from outer space?
IT’S POSSIBLE, SCIENTISTS SAY
You might not remember this, but the alien invasion in the 1990s sci-fi blockbuster Independence Day began not with laser blasts but with a cyberattack. As Jeff Goldblum’s computer nerd character explains in the film, the alien fleet hacks into Earth’s satellites, hijacking their communication systems to co-ordinate their (ultimately unsuccessful) assault on humanity.
To call that scenario far-fetched is an understatement. But a pair of astrophysicists say in a bizarre paper released this month that the possibility of an extraterrestrial hack — one far more sophisticated than the attack in Independence Day — is worth taking seriously. (How seriously to take the paper, which was published in an unconventional, non-peer-reviewed academic archive, is another matter.)
Michael Hippke of the Sonnenberg Observatory in Germany and John Learned of the University of Hawaii warn in their article that an alien message from outer space could contain malicious data designed to wreak havoc on Earth. Such a message would be impossible to “decontaminate with certainty” and could pose an “existential threat,” they argue; therefore humans should use extreme caution.
Scientists, academics and futurists have long debated whether humanity would benefit from contact with extraterrestrial intelligence, or ETI. The Search for Extraterrestrial Intelligence (SETI) Institute, a research organization that looks for alien life, seeks a peaceful dialogue. Its researchers listen for communication signals from intelligent aliens and send out signals from Earth in hopes that another civilization might pick them up. So far, no one has heard anything that sounds like life.
Hippke and Learned’s paper — which reads more like a thought experiment than serious scholarship — ponders the dangers of receiving these theoretical interstellar missives.
“While it has been argued that sustainable ETI is unlikely to be harmful, we cannot exclude this possibility,” the researchers wrote in the article, which was first reported by Motherboard. “After all, it is cheaper for ETI to send a malicious message to eradicate humans compared to sending battleships.”
The researchers envision several different types of malicious communications. A simple one might contain a threat like “We will make your sun go supernova tomorrow.”
“True or not, it could cause widespread panic,” they wrote, or have a “demoralizing cultural influence.” A longer, more nuanced message could sow confusion and fear, especially if it’s received by amateurs, according to the paper. The spread of such messages could not be easily contained, but they could at least be printed out and examined on paper, and wouldn’t necessarily require a computer to decipher.
But large, complex messages written in code would.
Messages that contain big diagrams, algorithms or equations could come with viruses hidden in them, the researchers say. They couldn’t be printed out and examined manually, so they’d have to be deciphered on a computer, the paper speculates.
The messages could also be compressed, in the same way personal computers compress large files for more efficient transfer, and the instructions needed to decompress them would themselves be code. Executing those billions of decompression instructions could unleash hidden malware, according to the paper.
In their most out-there example, Hippke and Learned imagine a sort of extraterrestrial spearphishing, the technique human hackers sometimes use to gain personal information from victims under the guise of a trustworthy source. Russian hackers likely used this technique to gain access to the Democratic National Committee’s computer networks.
As the researchers write in their paper, the header of such a message might read: “We are friends. The galactic library is attached. It is in the form of an artificial intelligence which quickly learns your language and will answer your questions. You may execute the code following these instructions ...”
Extraordinary steps could be taken to isolate the artificial intelligence — the researchers even suggest building a computer on the moon to execute the code and rigging it with “remote-controlled fusion bombs” to destroy it in case of an emergency.
This idea is known as an “AI box,” essentially a solitary confinement cell for an artificial intelligence. Experts have long discussed it as a way to contain a potentially dangerous artificial intelligence. Some have argued that a sufficiently advanced computer program could easily manipulate its human guards and find a way out of the “box.”
Hippke and Learned say efforts to imprison an artificial intelligence delivered by extraterrestrials would probably fail.