Decades of science fiction have conditioned us to believe that intelligent AI is inevitable. In recent years, the idea of this “True AI” or “Strong AI” has slowly shifted from the realm of pure fiction toward plausible reality.
And if that is true, then what of the “singularity” – the tipping point after the rise of a True AI, where the intelligent machine becomes capable of self-improvement? It is widely accepted that if we do manage to create True AI, singularity cannot be far behind.
True AI Will Exist Sometime in the 21st Century
Despite massive strides in both computing and AI research in recent decades, experts still cannot predict when True AI will be created. Several surveys conducted among AI researchers since 2010 indicate that most of them expect Strong AI to arrive sometime this century.
2050 is the common cut-off year, with at least 45% saying we will be able to develop human-like AI before the mid-point of this century. Some of the more optimistic claims come from highly respected individuals like Ray Kurzweil, director of engineering at Google.
According to him, True AI (also called artificial general intelligence, or AGI) capable of passing the Turing Test will exist as early as 2030. Kurzweil has a formidable record when it comes to making accurate predictions about technology – 86% of his 147 predictions since 1990 have been on point.
As for singularity, Kurzweil has a more conservative estimate of 2045 as the year when we can expect AI capable of recursive self-improvement. His views are echoed by other prominent AI experts like MIT’s Patrick Winston, but with one important caveat – we would need several landmark breakthroughs in different fields to make this happen.
Arguments Against the Rise of True AI Are Also Formidable
Not everyone is convinced that we are on track to develop AGI within the next thirty years. Two main arguments work in favor of the AI skeptics. One is rooted in engineering, the other in biology.
If you want to build something that resembles human intelligence, capable of computation on par with the human brain, you need a good understanding of both intelligence and the brain. But our understanding of each is still quite limited. In fact, we don’t (and perhaps can’t) have a single unified definition of human intelligence.
And we still don’t know the exact computational power of the brain – estimates vary between 100 and 1,000 petaflops. In comparison, the strongest supercomputers, like MIT’s TX-GAIA, are only now reaching 100 petaflops.
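To put those numbers side by side, here is a quick back-of-the-envelope comparison using the figures quoted above. The brain-compute range is a rough, contested assumption rather than a measurement, and the TX-GAIA figure is an approximate peak.

```python
# Back-of-the-envelope comparison of the figures quoted above.
# The brain numbers are rough, contested estimates – not measurements.
BRAIN_LOW_PFLOPS = 100     # low-end estimate of brain compute, in petaflops
BRAIN_HIGH_PFLOPS = 1000   # high-end estimate, in petaflops
TX_GAIA_PFLOPS = 100       # approximate peak of MIT's TX-GAIA

ratio_low = TX_GAIA_PFLOPS / BRAIN_LOW_PFLOPS    # parity with the low estimate
ratio_high = TX_GAIA_PFLOPS / BRAIN_HIGH_PFLOPS  # a tenth of the high estimate

print(f"vs. low brain estimate:  {ratio_low:.1f}x")   # 1.0x
print(f"vs. high brain estimate: {ratio_high:.1f}x")  # 0.1x
```

In other words, even the best hardware today sits somewhere between matching the most generous estimate and falling an order of magnitude short of the conservative one.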
While that might seem like a huge stride forward (and it is, in many ways), the famous Moore’s law is reaching the end of the road – transistors have shrunk drastically since the 1970s, but we are approaching the physical limits of how many of them we can pack onto a silicon chip.
Right now, AI research is still playing catch-up to the advances in computing hardware. But pretty soon, researchers will need smarter, more energy-efficient chips. And right now, there is no clear roadmap toward an alternative to silicon-based processing hardware.
The Parallels With Quantum Computing Stack Nicely
The next big leap in AI research might have to wait for more advances in quantum computing. True AI research requires massive amounts of computational power. When traditional computers start becoming inadequate, the more powerful quantum computers theorized by Benioff, Feynman, and others as early as the 1980s could provide the answer.
Advances in quantum computing have already been reported by tech giants like IBM and Google, most famously with superconducting qubits. But given the current state of the technology, it could take another 10–20 years or more before the hardware actually catches up with the theory.
This timeline again brings us back to the 2050 cut-off for True AI, with singularity perhaps following within a decade. But right now, with the world thrown into chaos by the COVID-19 pandemic, even 2050 can seem like an optimistic timeline for True AI!