Shreyasi Bose

Will AI Mend or End Our World?

AI is the future of technology, but it has the power either to transform our world or to ruin it.


Source - Economic Ethics


Source - DeepMind

AlphaZero, a chess-playing artificial intelligence (AI) created by Google's DeepMind, beat Stockfish 8, the reigning world champion program at the time, in December 2017. According to The Guardian, AlphaZero evaluates about 80,000 positions per second; Stockfish evaluates about 70 million.


Nonetheless, AlphaZero won 28 of their 100 matches and drew the other 72, losing none.
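
To see just how lopsided those search speeds are, the arithmetic behind the comparison is worth a glance. A minimal sketch, using the rough estimates cited above rather than official benchmarks:

```python
# Approximate search speeds cited above (per The Guardian).
alphazero_positions_per_second = 80_000
stockfish_positions_per_second = 70_000_000

# Stockfish examines roughly 875 times more positions per second,
# yet AlphaZero won the match without losing a single game.
ratio = stockfish_positions_per_second / alphazero_positions_per_second
print(f"Stockfish searches ~{ratio:.0f}x more positions per second")
```

The point of the comparison: AlphaZero wins not by searching more, but by evaluating positions far more selectively.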


Over the years, human feedback has been used to continuously improve Stockfish's open-source algorithm. According to The New Yorker, coders propose an idea for updating the algorithm, and the new version is then pitted against the old one for thousands of games to see which one wins.
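
As a rough illustration of that vetting process, here is a minimal, hypothetical sketch in Python. The real Stockfish testing framework plays actual chess games and applies careful statistics; this toy version simply simulates outcomes from an assumed strength gap:

```python
import random

def play_game(strength_a: float, strength_b: float) -> str:
    """Toy stand-in for one game between two engine versions.
    The outcome is a crude probabilistic function of the strength gap;
    real testing plays actual chess games instead.
    """
    p_a_win = 0.35 + 0.10 * (strength_a - strength_b)
    p_draw = 0.30
    r = random.random()
    if r < p_a_win:
        return "a"
    if r < p_a_win + p_draw:
        return "draw"
    return "b"

def run_match(strength_a: float, strength_b: float, games: int = 10_000) -> dict:
    """Tally results over many games, the way candidate changes are vetted."""
    tally = {"a": 0, "b": 0, "draw": 0}
    for _ in range(games):
        tally[play_game(strength_a, strength_b)] += 1
    return tally

# A slightly stronger candidate version ("a") should win more than it loses.
print(run_match(strength_a=1.05, strength_b=1.00))
```

Only if the candidate version wins convincingly over thousands of games does the proposed change make it into the engine.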



Google says that AlphaZero's machine learning algorithm received no human feedback beyond the programming of the basic rules of chess. This is a form of deep learning in which programs perform complex tasks without human involvement or supervision. After being taught the fundamentals of the game, AlphaZero was set free to teach itself how to improve.
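
What does "teaching itself" look like in practice? Below is a minimal, hypothetical sketch of the self-play idea in Python. AlphaZero itself combines deep neural networks with tree search; this toy version uses a simple lookup table and the much smaller game of Nim, but the loop is the same in spirit: the program is given only the rules, plays against itself, and learns from nothing but the outcomes:

```python
import random
from collections import defaultdict

# Toy game: 10 stones, take 1-3 per turn, whoever takes the last stone wins.
ACTIONS = (1, 2, 3)
q = defaultdict(float)  # learned value of (stones_left, action)

def choose(stones: int, eps: float) -> int:
    """Pick a move: mostly the best-known one, sometimes a random experiment."""
    legal = [a for a in ACTIONS if a <= stones]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: q[(stones, a)])

def self_play_episode(eps: float = 0.2, lr: float = 0.1) -> None:
    """Play one game against itself and learn from the final result only."""
    stones, moves = 10, []
    while stones > 0:
        action = choose(stones, eps)
        moves.append((stones, action))
        stones -= action
    reward = 1.0  # the player who made the last move won
    for state, action in reversed(moves):
        q[(state, action)] += lr * (reward - q[(state, action)])
        reward = -reward  # the perspective flips at every ply

for _ in range(50_000):
    self_play_episode()

# With no human guidance, the agent rediscovers the winning strategy:
# from 10 stones, take 2, leaving the opponent a losing multiple of 4.
print(max(ACTIONS, key=lambda a: q[(10, a)]))
```

Scale that basic recipe up with deep networks, tree search, and enormous compute, and you get something like AlphaZero's training run.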


So, how quickly was the AI able to build an algorithm capable of defeating one of the world's most sophisticated chess programs?


It took four hours.


People were astounded not just by the pace at which AlphaZero machine-learned its way to chess mastery, but also by what can only be called, for lack of a better term, its imagination. Yuval Noah Harari, author of Sapiens: A Brief History of Humankind and Homo Deus: A Brief History of Tomorrow, writes in The Atlantic that some of AlphaZero's tactics are brilliant.


Everything about AlphaZero is indicative of how fast and how acute the AI revolution is likely to be. Programs like this will essentially be doing the same kind of information processing our brains do except better — far better — with a breadth and depth that no biological system (including the human brain) could ever hope to compete with.


Aside from the debates over consciousness and free will, these systems will certainly be intelligent in some sense. With no biological constraints and a human-like capacity to learn and correct course, the potential for improvement is so vast that it may be difficult to comprehend, let alone anticipate.


We do, however, have some ideas about where we might end up.


But, in order to fully understand what that finish line could look like, we must first understand what artificial intelligence is.



What is artificial intelligence?

Since there is no single, widely agreed-upon definition of AI, it is easy to get lost in the metaphysical and technical weeds when attempting to outline one. However, there are a few key points that researchers agree are important to any description.


According to the Stanford Encyclopedia of Philosophy, many scientists and philosophers have attempted to define AI through the principle of rationality, which can be expressed in either a machine's reasoning or its actions. According to a 2019 European Commission study, AI programming achieves rationality by perceiving the environment, analyzing the knowledge contained within it, and then deciding on the best course of action for a specific goal, potentially altering the environment in the process.
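
That perceive-analyze-act loop is concrete enough to sketch in a few lines of code. Here is a minimal, hypothetical example in Python: a toy thermostat agent that senses a room, reasons toward a goal temperature, and acts on the environment, changing it in the process:

```python
def perceive(room_temp: float) -> float:
    """Sense the environment (here, simply read the temperature)."""
    return room_temp

def decide(observation: float, target: float = 21.0) -> str:
    """Analyze the observation and pick the action that best serves the goal."""
    if observation < target - 0.5:
        return "heat"
    if observation > target + 0.5:
        return "cool"
    return "idle"

def act(room_temp: float, action: str) -> float:
    """Carry out the action, altering the environment in the process."""
    return room_temp + {"heat": 0.5, "cool": -0.5, "idle": 0.0}[action]

temp = 18.0
for step in range(8):
    action = decide(perceive(temp))
    temp = act(temp, action)
    print(f"step {step}: {action}, room is now {temp:.1f} C")
```

A thermostat is about as humble as rational agency gets, but the same loop, scaled up, describes everything from chess engines to self-driving cars.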


The MIT-IBM Watson AI Lab was created in 2017 by experts from IBM and the Massachusetts Institute of Technology (MIT), and it provides valuable insight into how to think about the technology. You may recognize part of the lab's name: Watson was the program that defeated two human champions to win the game show Jeopardy! in 2011. AI, according to the lab, is the ability of “computers and machines to imitate the cognition, learning, problem-solving, and decision-making capacities of the human mind.” This is a broad definition, but it does an outstanding job of capturing the main concept.


Importantly, the lab then differentiates between three forms of AI. "Narrow AI" consists of algorithms that perform specific, complex tasks at a breakneck rate. Narrow AI covers the majority of AI technologies in use today, including voice assistants, translation services, and the chess programs discussed above.


The Watson AI Lab sees AI advancing through two crucial further steps. First comes "broad AI": systems that can learn with greater flexibility. Then comes "artificial general intelligence": “systems capable of complex reasoning and full autonomy.”


This final category would be something akin to the sentient machines of archetypal science fiction.



For the time being, the majority of AI technology is classified as narrow. We can see signs of what the future could bring by looking at the recent successes of that narrow AI, along with its benefits and risks.


What could go right: the advantages of AI


Taking a step back from the slightly esoteric realm of programs like AlphaZero, Stockfish, and Watson, we can see how AI's reach is currently extending into the lives of ordinary people. Every day, millions of people use applications like Siri and Alexa. Chatbots help customers troubleshoot issues, and online translation services are used by foreign-language students and travelers all over the world. When you perform a basic Google search, human-tweaked algorithms carefully curate what you see and don't see.


A trip to the hospital or clinic can also bring you into contact with AI. While the majority of current medical AI applications deal with simple numerical or image-based data, such as blood pressure readings or MRI scans, Harvard University reported in 2019 that the technology is advancing to affect health in far larger ways. Researchers from Seoul National University Hospital and College of Medicine, for example, have created an algorithm capable of detecting anomalies in cell development, including cancers. When its results were compared with those of 18 practicing doctors, the algorithm outperformed 17 of them.


Meteorology is also benefiting from AI. Microsoft and the University of Washington collaborated to create a weather prediction model that uses approximately 7,000 times less computational power to produce forecasts than conventional models. While the predictions were less reliable than the most sophisticated models currently in use, this work represents a significant step forward in reducing the time and energy required to develop weather and climate models, which may someday save lives.


Another industry that would benefit greatly from such weather-predicting AI is agriculture, and those who work in it are equally busy integrating the technology into much of what they do.


According to Forbes, investment in smart agriculture technology will exceed $15 billion by 2025. This artificial intelligence is beginning to transform the field, increasing crop yields and lowering production costs. AI, in conjunction with drones and field-based sensors, is helping to generate entirely new pools of knowledge that the sector has never had access to before, enabling farmers to better assess fertilizer effectiveness, enhance pest control, and track livestock health.



Machine learning is also being used to build systems that imitate human traits such as humor. Wired reported in 2019 on researchers who created an AI capable of making puns. In the near future, you might be able to strike up a conversation with a linguistically astute Siri or Alexa, trading wordplay as you go. Call it "an eye for an AI."


This is all really exciting. But despite the game-changing hope and optimism that AI is ushering in for humanity's future, there are inevitable discussions about the dangers its growth poses.


What could go wrong: the risks of AI


There are several hazards associated with AI. It is important to note that, no matter how bright AI's future may be, the technology can also be used to usher in behaviors that would be perfectly at home in an Orwellian or Huxleyan dystopia.


Significant ethical issues are being raised in virtually every area where AI is used. Critically, any problems AI exhibits in the future will most likely be reflections and extensions of the humans who created it.

Unfortunately, a look at simple, everyday Google searches demonstrates how human input can influence machine learning for better or worse. The Wall Street Journal reported in 2019 that “Google's algorithms are subject to regular tinkering from executives and developers who are trying to produce meaningful search results, while also satisfying a wide range of powerful interests and driving its parent company's more than $30 billion in annual profit.” This raises questions about who controls what billions of search engine users see on a daily basis, and how that influence can shift in response to undisclosed agendas.

Such influence has the potential to fuel terrifyingly powerful propaganda and social engineering.


And, while AI is revolutionizing the medical world in life-saving ways, the advantages it offers are not without significant drawbacks.



Writing in the journal AI & Society in 2020, Maximilian Kiener warns that machine learning is vulnerable to cyber attacks, data mismatching, and the prejudices of its programmers, at the very least. Kiener cites a study in which an AI analyzing scans assigned Black women being screened for breast cancer a lower risk of potential mutations than white women, despite the two groups having comparable risk in practice.


Errors like this have the potential to be fatal, and they may leave particular groups and classes of people unable to benefit from modern medicine.


As AI becomes more integrated with medical technology, it is critical that such risks be disclosed to patients.


Similarly, self-driving cars face a slew of challenging technological and ethical problems. In 2018, a self-driving Uber car struck a pedestrian in Arizona, who later died at the hospital as a result of her injuries. According to NBC News, there was no glitch in the car's AI programming; the system had been programmed to identify pedestrians only at crosswalks, not when jaywalking.
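
To make the failure mode concrete, here is a minimal, hypothetical sketch of how a narrow classification rule creates exactly that kind of blind spot. This is an illustration only, not Uber's actual code, and a real perception stack is vastly more complex:

```python
from dataclasses import dataclass

@dataclass
class Detection:
    kind: str           # e.g. "person", "vehicle", "unknown"
    at_crosswalk: bool  # location context attached to the detection

def is_pedestrian(d: Detection) -> bool:
    # Flawed rule: a person only "counts" as a pedestrian at a crosswalk,
    # so a jaywalker is silently ignored by everything downstream.
    return d.kind == "person" and d.at_crosswalk

jaywalker = Detection(kind="person", at_crosswalk=False)
print(is_pedestrian(jaywalker))  # False -- so the car never plans to brake
```

The bug is not a crash or a glitch; the system did exactly what it was told. The danger lies in what it was never told to look for.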




It may seem to be a minor omission, but once fully incorporated into our infrastructure, AI systems that are similarly "blind" may result in a catastrophic loss of life.


AI has even made its way into the world's conflicts. Militaries competing in this generation's arms race are attempting to perfect automated weapons systems. While this has the potential to do a lot of good in terms of minimizing deaths, the question of how comfortable society is with machine learning deciding who lives and who dies is one we are all grappling with.


In other areas, governments and private security firms have already used facial recognition tools to terrifying effect. For example, China's use of the technology to profile Uyghur citizens within its borders has raised moral concerns for some time.



According to a study published in the journal Nature at the end of 2020, some academics are beginning to push back against those who have published papers on how to develop facial recognition algorithms.

Amazon, one of the main suppliers of facial recognition AI to the US government and private security companies, has come under fire for the technology's role in human rights violations. The MIT Technology Review reported in June 2020 that, in response to public outrage and pressure from the American Civil Liberties Union, the company agreed to halt sales of facial recognition technology to police for a year, following similar announcements by IBM and Microsoft. According to the BBC, Amazon is waiting for Congress to enact new rules governing the use of the technology.


For the time being? There is no regulation governing where and how the technology is used, whether to apprehend accused offenders or undocumented immigrants, or to track where you shop, what you buy, and who you dine with.


How we design and use AI has a direct impact on human psychology and social cohesion, especially in terms of the types of information we are exposed to online.


Back in 2018, the Wall Street Journal examined how YouTube's algorithms suggest high-traffic videos that are more likely to keep users on the site and watching. Whether by design or not, this often leads audiences to consume increasingly extreme content, "even though certain users haven't shown any interest in such content." Given that the internet has proven especially fertile ground for conspiracy theories, concerns about how these algorithms contribute to society's problems and to radicalization may be well-founded.
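
The underlying mechanism is simple to sketch. Below is a minimal, hypothetical ranking step in Python: nothing here is YouTube's actual system, but any recommender that ranks purely on predicted engagement will surface whatever holds attention longest, extreme or not:

```python
from dataclasses import dataclass

@dataclass
class Video:
    title: str
    predicted_watch_minutes: float  # a model's engagement estimate

def recommend(candidates: list[Video], k: int = 2) -> list[Video]:
    """Return the k candidates expected to keep the viewer watching longest."""
    return sorted(candidates, key=lambda v: v.predicted_watch_minutes,
                  reverse=True)[:k]

candidates = [
    Video("balanced explainer", 4.2),
    Video("mildly sensational take", 6.8),
    Video("extreme conspiracy video", 9.5),
]
for v in recommend(candidates):
    print(v.title)  # the most gripping content wins, regardless of accuracy
```

No one has to program such a system to promote extremism; optimizing for watch time alone is enough to drift there.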


The social implications may be far-reaching. Dirk Helbing is an ETH Zurich professor of computational social science whose research uses computer modeling and simulation to study social coordination, conflict, and collective opinion formation. He, economist Bruno S. Frey, and seven other researchers write lucidly in the book Towards Digital Enlightenment: Essays on the Dark and Light Sides of the Digital Revolution about how the relationship between coder and coded is becoming a two-way street.


“‘Persuasive computing’ is becoming more common on some software platforms. In the future, these networks will be able to direct us through entire courses of action, whether for the execution of complicated work processes or for the generation of free content for Internet platforms, from which companies receive billions. The trend is moving away from programming machines and toward programming people.”



Yuval Noah Harari proposes similarly unsettling scenarios. He warns that dystopian visions of malevolent leaders using AI to track citizens' biometrics and psyches are a real possibility, but perhaps not the one we should be most concerned about:

“We are unlikely to face a rebellion of sentient machines in the coming decades,” he writes in The Atlantic, “but we might have to deal with hordes of bots that know how to press our emotional buttons better than our mother does and that use this uncanny ability, at the behest of a human elite, to try to sell us something — be it a car, a politician, or an entire ideology.”

To secure the future, we must make the right choices now.

Finally, it is difficult to list all of the advantages and disadvantages of artificial intelligence, because the technology already touches almost every part of our lives. Everything from what we watch to what we buy, and from how we think to what we know, is shaped by these systems. It is similarly difficult to predict where we will end up. However, the range of possibilities emphasizes how crucial this topic is, and it makes one thing perfectly clear: the decisions we make today determine where we end up tomorrow, which is why it is critical to go slowly rather than "move fast and break things."


The realization of general AI, which appears to be a foregone conclusion at this stage, could end up being humanity's greatest technological achievement — or our demise.


In a 2016 TED Talk in Alberta, Canada, neuroscientist and AI commentator Sam Harris stressed the importance of getting the initial conditions of the achievement right:

“When you’re talking about super intelligent AI that can make changes to itself, it seems to me that we only have one chance to get [it] right. The moment we admit that information processing is the source of intelligence, [...] and we admit that we will improve these systems continuously, and we admit that the horizon of cognition very likely far exceeds what we currently know — then we have to admit that we’re in the process of building some sort of god. Now would be a good time to make sure it’s a god we can live with.”


_________________________________________________________________

To support their work, Newsmusk allows writers to use primary sources. White papers, government data, original reporting, and interviews with industry experts are just a few examples. Where relevant, we also cite original research from other respected publishers.

Source - Interesting Engineering

________________________________________________________________________
