Google Duplex, a recent Google project, is now live in the majority of the United States; it speaks to businesses on users' behalf, making restaurant reservations over the phone through the AI-based Google Assistant. According to CEO Sundar Pichai, Google Duplex was created to help individuals make appointments with businesses over the phone without the user having to take part in the call (Visner, 2013). This paper demonstrates that the ethical concern with Google Duplex rests on the fact that a person cannot tell whether they are talking to the technology or to a real person. It also identifies computer warfare, which looms as the next significant conflict in international law, and cyber threats to critical infrastructures that depend heavily on information technology.
Ethical concerns of Google Duplex
The ethical concern about Google Duplex is that the person on the other end of the call does not know whether he or she is talking to a robot or to a human being. The Duplex project is good technology, but its ethics are questionable. In the demonstration of Google Duplex shown in the video, the AI voice proved capable of understanding human speech perfectly and responding in a voice that sounds like a real person's. It was designed to use filler words such as "um" as well as pauses so that it sounds like a human voice, which makes it impossible to tell whether one is talking to Google Duplex or to a real person.
Concern about computers that can mimic human speech
In my opinion, we should be concerned about computers that have the potential to speak like human beings. In such cases, one cannot tell whether one is talking to a computer or to a real person (Gattuso, 2018). We should be granted the right to know whether we are talking to a computer or to a human being so that we can choose whether or not to continue the conversation. We should also be concerned about these computers because of cybersecurity issues: they can be hacked and used to mislead customers.
Other concerns about AI
The other concerns that the rapid advancement of AI raises for federal and international government bodies are computer warfare, which looms as the next significant conflict in international law, and cyber threats to critical infrastructures that depend heavily on information technology. In the case of computer warfare, the technology can be designed to attack other nations. For instance, in 2009, Stuxnet (a computer worm) crawled into the code of a system's programmable logic controllers and sent back images to the machine's operators, deceiving them into believing that everything was operating as expected (Brust, 2012). Thus, advances in AI could be used to take hold of a computer's mechanics and set out to destroy the whole system.
Innovation in AI
I believe that innovation in AI should be subject to greater regulation by state, federal, and international government bodies. States should regulate this innovation so that it meets the required ethical standards, such that a person knows when he or she is talking to a machine. Besides, it should be restricted to the point that it cannot be used to pose cybersecurity threats to individuals and nations (Visner, 2013). Because of the rapid improvements in information technology, AI could be used by terrorists as well as by other countries to attack nations. International government bodies should regulate innovation in AI because cyber operations conducted from outside a country's sovereign territory amount to a willful incursion into another nation's sovereign territory.
References
Brust, R. (2012). Cyberattacks: Computer warfare looms as next big conflict in international law. ABA Journal, 98, 40.
Visner, S. (2013). Cyber security's next agenda. Georgetown Journal of International Affairs, 89-99.
Google Duplex demonstration [Video]. https://youtu.be/Y0otgRmscUk