SIMULATING THE HUMAN BRAIN
If the goal is to create a machine that can show human levels of intelligence, simulating a human brain in real-time would be a good target. How much computational power does the human brain have, and how powerful would a supercomputer need to be to equal a human? It’s a conundrum that has been researched and debated for decades.
Neural networks aren’t new, and neither is the idea of machine intelligence, which has shown up in fiction in various forms over the years, like Terminator’s Skynet. Back in the early
1980s, some thought it would only take a hundred gigaflops, or perhaps a few teraflops, to achieve artificial intelligence. For example, see
Rudy Rucker’s Ware Tetralogy, specifically the first novel Software, written in 1982, where an early ‘shackled’ AI ran at 100 gigaflops. Today, such estimates seem quaint, though incidentally, Rucker provided an updated estimate of 300 petaflops in 2005.
Typically, estimates are given in much broader terms. We have obviously long since passed the teraflops mark, and modern supercomputers run in the petaflops range for FP64, and well into the exaflops range for FP16 AI computations. Ray Kurzweil, hired by Google back in 2012 to work on machine learning and language processing, estimated that we need around 20 petaflops to simulate a human brain. Other estimates run as low as one petaflop or as high as one zettaflop (a million petaflops).
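To put those prefixes in perspective, here is a quick order-of-magnitude sketch. All the flops figures are the published estimates quoted above, not measurements, and the one-exaflop "machine" is a round number chosen purely for scale:

```python
# Order-of-magnitude comparison of the brain-compute estimates quoted in the text.
# The figures are rough published guesses, not measurements; the 1-exaflop
# machine below is an assumed round number used only for scale.

GIGA, PETA, ZETTA = 1e9, 1e15, 1e21

estimates_flops = {
    "Rucker's Software (1982)": 100 * GIGA,   # 100 gigaflops
    "Kurzweil (2012)":           20 * PETA,   # 20 petaflops
    "Rucker's update (2005)":   300 * PETA,   # 300 petaflops
    "low-end estimate":           1 * PETA,   # one petaflop
    "high-end estimate":          1 * ZETTA,  # one zettaflop = a million petaflops
}

machine_flops = 1e18  # a nominal 1-exaflop supercomputer (assumption)

for name, needed in estimates_flops.items():
    brains = machine_flops / needed
    print(f"{name}: {brains:g} real-time brain(s) on a 1-exaflop machine")
```

By Kurzweil's figure, such a machine could in principle run 50 brains in real time, while the zettaflop estimate would demand a thousand such machines for a single brain.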
The difficulty with simulating the human brain is that it does so many different things. We can walk, talk in many languages, recognize images, drive a car, and even write software to create artificial intelligence, or try to. The hardware alone is only half the solution, and we may have already passed the point where we could fully simulate a human brain. Now all we need is the right software, which seems to be taking much longer to arrive.