According to Musk, safely designing AI should be a top priority, or things could get ugly for us.
Artificial Intelligence is a common theme in Blade Runner, Ex Machina, I, Robot, The Terminator, The Matrix, and even Wall-E. In these films, intelligent machines ultimately overtake their human creators, posing a direct threat to humanity's survival. The dangers of artificial intelligence have long been a common trope in popular culture.
What was once regarded as a fascinating, looming threat has devolved into an overplayed, cheesy trope. In the face of more urgent and immediate challenges, super-intelligent robots no longer seem so frightening. Moreover, we are still a long way from the next levels of AI, with some influential researchers predicting that we may never get there at all.
Yet the question of whether humans can produce superintelligence is not so easily dismissed. On the other side of the aisle, leading figures are raising valid concerns about the technology. Should we be worried if we are on this path? And, more importantly, what steps can we take to ensure the technology is developed responsibly? Leading advocates of AI's existential danger say it is not only unavoidable but also coming to a town near you soon. As you might imagine, Elon Musk, the meme wizard and tech billionaire, is one of the most outspoken voices warning against the rise of the machines.
Elon Musk is worried about AI's prospects.
Many of Musk's apprehensions sound like the makings of a great science-fiction villain. Nonetheless, figures like the late Stephen Hawking, Ray Kurzweil, and Bill Gates have voiced similar concerns to varying degrees. The Tesla CEO has spoken out about the dangers of artificial intelligence on several occasions. In a 2020 interview with the New York Times, Musk predicted that AI would be vastly smarter than humans in less than five years. But don't get too worked up just yet. This is merely Musk's perspective.
Even if that prediction holds, Musk cautioned, "It isn't to say that everything will fall apart in five years. It simply means that things become dysfunctional or strange," the billionaire explained in the interview.
Musk does, after all, have a complicated relationship with artificial intelligence. He does not believe that AI is inherently bad or that it should be stopped at all costs. In fact, AI plays a significant role in all of his businesses in some way. Musk is also worried about more immediate, realistic problems with AI, such as job losses caused by automation.
He does, however, want the technology to be developed wisely, with the necessary insight and oversight. And if governments will not take action, he will. Over the last ten years, the tech mogul has poured millions of dollars into businesses and technologies that promote the responsible development of intelligent machines. More importantly, he is working on technologies that could give humans an edge in the event of an AI apocalypse.
If humans are to stand a chance against AI, they will have to combine with computers.
At least, that's how Elon Musk sees it. One of the billionaire's most secretive and contentious ventures is a "Fitbit in your skull with tiny wires." The neural-tech start-up, dubbed Neuralink, is developing an electronic brain-computer interface that can be quickly and easily implanted into the human brain. These brain-computer interfaces could be used to augment people's abilities around the world, changing how we interact with technology and treating neurological and mobility disorders.
While this technology isn't new — brain-computer interface systems have been around for decades, and more than 300,000 people now use one — what Neuralink wants to do with it is. The business has a far more ambitious target in mind: AI symbiosis.
Even for Musk, things get a little "science-fictiony" here. Believers in transhumanism, such as futurist Dr. Ian Pearson, argue that this future is probable and may be humanity's next evolutionary step. Neuralink-like technology could be our antidote to AI: it could boost human intelligence and abilities, enabling us to keep pace with super-intelligent machines.
Humans could, like Neo in The Matrix, someday download skills, information, and ideas directly into their minds. In the far future, humans might even offload their consciousness into machines or robotic bodies, effectively rendering us immortal.
Humans, according to Musk, are already cyborgs. Every day, we use computers and smartphones that act as extensions of ourselves; we already have a tertiary, digital layer. So why not increase its bandwidth? Neuralink aspires to be the answer.
Still, Neuralink's team of roughly 100 employees has a long way to go before AI-human hybrids become a reality, and the firm must overcome numerous bureaucratic, legal, and technical roadblocks. Nonetheless, human trials of the technology could start as soon as this year.
OpenAI was established with the aim of responsibly developing artificial intelligence.
One of the most effective ways to keep rogue AI at bay is to build AI responsibly in the first place. This is a fundamental conviction of the OpenAI team. The AI research and development non-profit, established in 2015 by a group of tech entrepreneurs including Musk, is working to develop artificial general intelligence (AGI) that is both safe and beneficial to humanity. In a nutshell, this rival to Google's DeepMind aims to develop AI that is friendly to humans, by designing machine-learning systems that are aligned with human values.
How has the organisation fared against its objectives? That depends on who you ask. Musk resigned from OpenAI's board of directors in 2018, citing a possible future conflict of interest with Tesla's AI development for self-driving cars. He remained a supporter of the organisation, though he would later tweet that he disagreed with some of what OpenAI was attempting to accomplish.
One of the company's more contentious research projects produced an AI that can generate realistic snippets of text. Notably, the team initially decided against releasing the full model because it could easily be used to spread false information around the internet. Nonetheless, most OpenAI research ventures are relatively harmless, and the organisation is not on the verge of producing super-intelligent machines.
Elon Musk has also donated millions to AI research organisations.
In 2015, Elon Musk also became a major donor to the Future of Life Institute (FLI). Similar to OpenAI, the volunteer-run research and outreach group works to counter existential threats to humanity, such as AI. FLI specialises in supporting researchers in a number of AI-related areas, including economics, law, ethics, and policy.
Other notable figures associated with FLI include Nick Bostrom, Stephen Hawking, computer scientists Stuart J. Russell and Francesca Rossi, biologist George Church, cosmologist Saul Perlmutter, and astrophysicist Sandra Faber.
Mars has the potential to save humanity from a possible apocalypse.
Musk founded his aerospace company SpaceX in 2002 with the stated aim of making humans an interplanetary species. Over the years, a business once on the brink of bankruptcy has achieved a string of promising milestones; last year, the rocket maker completed its first-ever crewed flight. Much of the company's innovation, however, is laying the foundations for eventual missions to our big red neighbour.
Musk predicts that mankind will be able to reach Mars in the coming decades, though this is still a long way off. This first step toward interplanetary travel could be critical to our species' survival. According to Musk and those who share his concerns, our species is only one major catastrophe away from extinction. Of all the hazards, environmental to extraterrestrial, one calamity haunts Musk the most: artificial intelligence.
The tech tycoon has made it clear that his ambitious colonisation plans are a top priority. Why? Because Mars could shield us from malicious AI: if AI goes rogue and turns on mankind, he argues, Mars would be the ideal safe haven. Back on Earth, SpaceX is working on a variety of projects, including a planned Mars mission in 2026.
But don't start planning your trip to Mars just yet. Even within the billionaire's own circle, critics such as Jeff Bezos argue that concentrating on Mars rather than addressing more pressing problems on Earth is misguided; Bezos has quipped that, compared to the surface of Mars, the summit of Mt. Everest is a garden paradise. The Mars-bound mission is still beset by logistical and technical obstacles. What's more, if AI is intelligent enough to take over the Earth, what's to stop it from reaching us on Mars? Nonetheless, a second planet could, at least theoretically, give humans a fighting chance in an AI dark age.
Okay, here is the bottom line.
Is artificial intelligence anything to be scared of?
Among entrepreneurs and researchers, AI and its potential are hotly debated subjects. Those on the other side of the aisle find Musk's claims hard to believe, some going so far as to call him a sensationalist. AI has the potential to transform lives all over the world, driving positive disruptive change. Transportation, forestry, smart cities, and business processes could all use AI to save time and money while freeing people from pressing concerns and overwork. We could use AI to improve global healthcare and human health. All of these stand to benefit.
But what if Musk is correct? Another common disaster-movie trope is the character (usually a scientist) who is branded insane by his peers for warning the world of impending doom, only to be proven right later in the plot. Musk has repeatedly thrived in markets that bet against him. However, he is not a prophet, and he has been proved wrong on several occasions. His warnings have also sparked some ground-breaking new ideas. Can artificial intelligence take over your life tomorrow? Almost certainly not. The worst AI can currently do in your personal life is misinterpret a voice command or make an awkward recommendation on a streaming service.
However, regardless of where you stand on this issue, we will likely have to learn from our AI missteps as we make them.
To support their work, Newsmusk allows writers to use primary sources. These include white papers, government data, original reporting, and interviews with industry experts. Where relevant, we also cite original research from other respected publishers.
Source: Interesting Engineering, Neuralink