Artificial intelligence (AI), one of the critical innovations in modern technology, is also the subject of an expanding debate. AI has brought to reality a world that formerly existed only in science fiction. A world in which a computer reasons through and solves complex logical problems is now a reality. However, the science fiction world in which computers run amok and act against the will of their human masters is also within reach. AI sits at the center of these two realities, hence the debate about this form of technology. Calderon (2019) offers a benign evaluation of AI, presenting it as the best hope for a world ravaged by cybersecurity threats. Conversely, Yampolskiy (2016) takes a critical look at the destructive capabilities of AI, examining both the inadvertent ways AI can lead to destruction and the ways malicious actors may deliberately design AI to cause death and destruction. It is important to note that each of the two articles takes a partially nuanced position on the capabilities of AI. Based on a careful evaluation and comparison of the two articles, it is evident that Yampolskiy (2016) makes a better argument about the dangers of AI than Calderon (2019) makes for its benefits, although a nuanced view of AI remains the most apposite.
Article Review
Each of the two articles makes an ample argument about the positive or negative aspects of artificial intelligence, respectively. Calderon (2019) limits itself to the issue of cybersecurity and the extent to which cybercriminals have perfected the ability to breach secure systems. As per the article, an element of impunity has developed in the cybercriminal world: no matter how advanced security installations are, criminals find an avenue for exploitation. The author then presents AI as the only way to secure a cyber network, since AI can assess and recognize malware despite efforts to disguise it, and can detect novel kinds of malware through its logical capabilities. Conversely, Yampolskiy (2016) makes a wide evaluation of the many ways that AI can endanger human life and property. For a start, the article evaluates the many ways that AI technology designed for positive objectives can malfunction and cause wanton destruction. The very nature of AI augments its capacity for destruction even as it diminishes human ability to detect and mitigate a malfunction in time. From a different perspective, Yampolskiy (2016) also presents a number of ways that criminals or even terrorists could use AI.
Research Comparison and the Article Making the Better Argument
Based on the nature of the research and the contents of the articles, Yampolskiy (2016) makes a substantially better argument than Calderon (2019). Calderon (2019) limits itself to a single area of AI technology: its ability to detect and stop cyberattacks. However, as reflected in Yampolskiy (2016), AI is just as capable of mounting cyberattacks. The argument that AI can stop cyberattacks fails to take cognizance of the fact that if cybercriminals also use AI technology, then the defensive AI's ability to detect and stop an attack may be neutralized. Further, as argued in Yampolskiy (2016), it is possible for AI designed for a benign purpose, in this case stopping a cyberattack, to malfunction and have a destructive effect. For example, AI designed to protect the privacy of data at all costs may end up destroying the data, depending on how it interprets its core objectives. On this basis, Yampolskiy (2016) makes a better argument against AI than Calderon (2019) makes for it.
Opinion on the Better Argument
Available research and expert commentary seem to agree with Yampolskiy (2016) that artificial intelligence is dangerous, or at the very least harmful, to humans. According to Anderson, Rainie, and Luchsinger (2018), even in the best-case scenario where AI technology works exactly as expected, it will eventually surpass humans in logical ability. Humankind will thus transition from being the intellectually dominant species to being reliant on AI, and such reliance could prove dangerous in the event of an AI malfunction. Similarly, according to Turchin and Denkenberger (2018), AI has the potential to create a bleak future for humanity: much of what is known about AI reflects many dangers, including bioterrorism and late-stage AI halting, and the greater risks relating to AI stem from the fact that several adverse outcomes remain unknown to modern humanity. Finally, according to Sotala (2018), it may be possible for future AI systems to react against humans whom they consider a threat to themselves; a good example would be an AI-based system taking steps to prevent humans from switching it off. Available research thus concurs with Yampolskiy (2016) on the dangers posed by AI.
Conclusion
Based on the research and analysis above, it is evident that the argument against AI is currently winning against the argument for its benefits. Yampolskiy (2016) presents better research and a stronger argument than Calderon (2019). The results of further research, as presented above, also support the contentions outlined in Yampolskiy (2016) about the manifest dangers of AI. However, all five articles analyzed in this paper agree that, under the right parameters, AI presents major benefits for humankind. Therefore, despite the fact that AI is dangerous, it is critical to take a nuanced approach to the concept of AI. After all, one of the greatest inventions of all time, the wheel, has both benefits and drawbacks. Humanity should not do away with good technology because of its propensity for harm when it is possible to mitigate that harm.
References
Anderson, J., Rainie, L., & Luchsinger, A. (2018). Artificial intelligence and the future of humans. Pew Research Center.
Calderon, R. (2019). The benefits of artificial intelligence in cybersecurity. Economic Crime Forensics Capstones, 36. https://digitalcommons.lasalle.edu/ecf_capstones/36
Sotala, K. (2018). Disjunctive scenarios of catastrophic AI risk. In Artificial intelligence safety and security (pp. 315-334). Chapman and Hall/CRC.
Turchin, A., & Denkenberger, D. (2018). Classification of global catastrophic risks connected with artificial intelligence. AI & Society, 1-17.
Yampolskiy, R. V. (2016, March). Taxonomy of pathways to dangerous artificial intelligence. In Workshops at the Thirtieth AAAI Conference on Artificial Intelligence.