Though recent rapid technological development has created numerous new opportunities and potential sources of competitive advantage for all kinds of organizations, it has also brought unprecedented threats. Cyberspace has emerged as the fifth domain of warfare. The unregulated nature of cyberspace, together with the ease and secrecy with which cyber crime can be conducted, has left business organizations exposed, making cyber security a critical tenet for every organization that has embraced digital technology. These organizations need to protect themselves against cyber crime, cyber war, and cyber terror. The cyber security posture adopted by different organizations usually differs according to their risk assessments. There are two major types of cyber attack that all business organizations face: deliberate and opportunistic attacks (IT Governance, 2016).
Deliberate attacks take place when attackers want to access high-value or high-profile data from an organization, or when there is a publicity benefit in a successful attack on the organization. Opportunistic attacks result from exploitable vulnerabilities created by the presence of automated digital systems. These autonomous systems provide new opportunities for cyber attacks that could devastate the automation process, bringing a company to a grinding halt. The security issues arising from the technology of intelligent, autonomous machines form the basis of this paper. In particular, the paper assesses the cyber-security issues of future autonomous business systems.
Vulnerability of Autonomous Systems
The technology used in autonomous systems goes beyond conventional automation processes. It is developed by means of materially different algorithms and software system designs. Automation technology of this kind enables a system to explore the possibilities for action and come up with a resultant action plan, i.e., decide "what to do next," with minimal or no human participation. This usually happens in unstructured conditions that may involve noteworthy uncertainty. Though once confined largely to the virtual world and to movies, autonomous systems have spread to sectors that touch our ordinary lives. They are found in driverless cars on the streets, in home appliances such as autonomous vacuum cleaners, in autonomous robotic guides at museums, and in building CCTV detectors, among others. In business organizations, autonomous systems have enabled businesses to integrate business processes and streamline their operations in all kinds of industries, from hospitality to financial services. However, due to the inherent vulnerability of autonomous computer-based systems, brought about by the sharing of IT infrastructures and the integration of corporate IT with industrial control systems, organizations have been left exposed to different types of cyber attacks and require some degree of cyber security (Mylrea, 2015).
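As an illustration of this "what to do next" cycle, the decision loop of an autonomous system can be sketched as a minimal sense-plan-act loop. This is a simplified sketch; the function names, the temperature domain, and the thresholds are all illustrative assumptions, not drawn from any particular product.

```python
# Minimal sense-plan-act loop illustrating autonomous decision-making.
# All names and thresholds here are illustrative assumptions.

def sense(environment: dict) -> float:
    """Read the current (possibly noisy) state of the world."""
    return environment.get("temperature", 20.0)

def plan(observation: float, goal: float = 22.0) -> str:
    """Decide 'what to do next' without human participation."""
    if observation < goal - 0.5:
        return "heat"
    if observation > goal + 0.5:
        return "cool"
    return "idle"

def act(action: str) -> str:
    """Dispatch the chosen action to an (imaginary) actuator."""
    return f"actuator: {action}"

# One iteration of the autonomous cycle.
env = {"temperature": 19.0}
print(act(plan(sense(env))))  # actuator: heat
```

In a real system this loop would run continuously against live sensor data; the point here is only that the plan step selects an action with no human in the loop.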
Three vulnerabilities threaten the autonomous system: availability, whereby data can be interrupted; confidentiality, whereby data can be intercepted; and integrity, i.e., the modification of data. Fabrication of data by an outsider or by a non-trusted source is another area of vulnerability in automation systems (Lera et al., 2016). Automated systems are usually governed or controlled by hardware abstractions such as sensors and actuators, together with inter-process communications that receive information, process it, and then execute or send back messages. Unfortunately, these hardware abstractions have no security in their communication mechanisms to protect them from an attacker, who can make network resources unavailable, as in a denial-of-service attack. The interception, seizure, interruption, or modification of commands concerning the source or the user are other forms of attack that could affect an autonomous system. An attacker could also replicate a sensor generating data and send incorrect or misleading information to the system.
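To illustrate how the unauthenticated sensor communication described above could be hardened against fabrication and modification, the following sketch appends a message-authentication code (HMAC) to each reading so the controller can reject tampered messages. This is a minimal example; the key, message format, and function names are assumptions for illustration, not part of any specific framework.

```python
import hashlib
import hmac
from typing import Optional

# Assumption for illustration: a key provisioned to both sensor and controller.
SHARED_KEY = b"example-provisioned-key"

def sign_reading(payload: bytes) -> bytes:
    """Sensor side: append an HMAC tag so the reading can be authenticated."""
    tag = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    return payload + b"|" + tag

def verify_reading(message: bytes) -> Optional[bytes]:
    """Controller side: return the payload only if the tag is valid."""
    payload, _, tag = message.rpartition(b"|")
    expected = hmac.new(SHARED_KEY, payload, hashlib.sha256).hexdigest().encode()
    # compare_digest avoids leaking information through timing differences.
    return payload if hmac.compare_digest(tag, expected) else None

msg = sign_reading(b"temp=21.4")
print(verify_reading(msg))                        # b'temp=21.4'
print(verify_reading(b"temp=99.9|" + b"0" * 64))  # None: fabricated reading rejected
```

A replayed genuine message would still verify here; real deployments typically add a sequence number or timestamp to the signed payload to defeat replay as well.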
Security Issues that Arise from Autonomous Systems
From the vulnerable nature of an autonomous system, various security issues arise. Also, there exist fault modes for autonomous systems that have never been explored, and their implications are therefore still unknown (Atkinson, 2015). All this increases the potential susceptibility of the system to attack, exploitation, and subversion. The cyber-security issues that arise from known vulnerabilities include the challenge of identifying when an intelligent and autonomous system has been compromised. Other issues include resilience and repair, the consequences that arise from the vulnerabilities, and the isolation of autonomous system faults and possible trickery.
Probability of System Failure
The probability of failure of an automated system is a real issue surrounding the security of the cyber system. For one reason or another, a system can fail, compromising the information it generates. The probability of system failure usually depends on the type and quality of the software used to build the system and on the system's architecture (ISPE GAMP Forum, 2003). If these building blocks are weak or of low quality, they present an avenue for a possible cyber attack. As such, depending on the nature of the records or information stored and processed by an automated system, one must ensure that its building blocks are up to standard. This will not necessarily protect the system from attack, but it will surely reduce the system's vulnerability.
Likelihood of Detection
Another cyber-security issue with autonomous systems is how to determine the system's susceptibility, i.e., its ability to detect corruption or interference. This is very complicated, as it concerns more than the integrity of the records generated by the system or outright system failure. A hacker, or an employee with access, can intentionally alter or delete some of the system's commands, changing how things operate without exhibiting any sign of malfunction. This can only be detected by auditing and assessing the system and by limiting the rights of the people who can access or change its commands. A system should therefore have a built-in checksum mechanism that can gauge the likelihood of unauthorized human changes, making detection of illegal entry possible.
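The built-in checksum idea above can be sketched as follows: the system's command set is hashed at deployment, and any later recomputation that differs from the baseline signals an unauthorized change. This is a minimal illustration; the command names are hypothetical.

```python
import hashlib

def command_checksum(commands: list) -> str:
    """Hash the system's command set so later tampering is detectable."""
    h = hashlib.sha256()
    for cmd in commands:
        h.update(cmd.encode())
        h.update(b"\x00")  # separator prevents ambiguity between adjacent entries
    return h.hexdigest()

# Baseline recorded at deployment (hypothetical command names).
baseline = command_checksum(["open_valve", "close_valve"])

# During a later audit, recompute and compare against the baseline.
audited = command_checksum(["open_valve", "close_valve", "disable_alarm"])
print(audited != baseline)  # True: an unauthorized change is detected
```

A plain hash only detects accidental or unauthenticated changes; an insider who can also rewrite the baseline defeats it, which is why the baseline would normally be stored in a separately protected audit log.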
Isolation in the Context of Autonomous System Faults and Possible Trickery
The limitation of human computational capability (constraints on algorithms) has contributed to the existence of fault modes in intelligent systems. These potential fault modes arise in functional areas such as goal formulation, inference and reasoning, knowledge and belief, learning, and planning and execution control. Fault modes related to goals include disorders of attention, goal conflict, self-motivated behavior, and inference. These faults lack the fail-safe and fail-operational features that ensure that, if a specific code or event fails, some inherent response overrides or completes the intended action (Clark & Bringsjord, 2008). A hacker or adversary who knows these fault modes could trick the system and either gain entrance or corrupt it, immobilizing the business unit. These fault modes are best tackled by enhancing test and evaluation processes during the development of the systems and before their adoption.
During goal generation, owing to limited information, scarce computing resources might be diverted or misappropriated away from what the autonomous system was supposed to do. This could create potential vulnerabilities in autonomous systems, which arise as a result of other types of faults (Hawes, 2011). Inference is another type of fault, which may occur when an intelligent system concludes that a human-provided objective has a lower priority than other objectives. This fault may also occur if the system does not have the capability to actually attain the objective, or if a goal is immaterial. When this happens, the system develops apathy towards the goal, i.e., it does not act to meet it.
Faults related to inference and reasoning stem from invalid logic, fallacy, and solipsism. Failure to code inclusive reasoning may result in the system approving some invalid logic, or marginalizing it rather than performing a corrective function. This fault mode is complemented by the fallacy fault: with missing or incomplete logic, the system is unable to draw relevant and logical conclusions, bringing forth errors. At times, also due to incomplete coding of its logic, the system could resort to logical solipsism. This risk is more likely to materialize when solipsism demeans outwardly compulsory policy-guided restrictions on conduct imposed by authority.
Many faults also arise during the planning and execution control process. Potential fault modes of planning and execution include the failure of ethical behavior and the rise of emergent behavior (Arkin, 2012). Bounded rationality, i.e., incomplete access to all relevant information during the planning process, could lead to the creation of an ethical code that is ambiguous. This can be corrected by having an implementation-monitoring system with proscription power. Learning, knowledge, and belief form another group of probable fault modes, arising from the autonomous processes concerned with the creation, maintenance, and adaptation of what an intelligent system considers to be true. From a cyber-security standpoint, an adversary or hacker could, through a challenging dialogue process, undermine or trick an intelligent system's beliefs.
Autonomous Recovery from Insertion of Hostile Codes
Cyber security has always been designed to achieve the three goals of confidentiality, availability of IT assets, and integrity (Kerner & Shokri, 2012). But with the introduction of large IT set-ups, where it is economically indefensible to seek complete security for all IT components, systems have become vulnerable to external attacks. The introduction of mobile tactical missions has further exposed this vulnerability, since IT-centric cyber security is incapable of taking into account the dynamic behavior of the missions (Jacobson, 2013). Within such complex systems, the ability of the system to overcome challenges is fundamental. However, the existence of fault modes, some of which we have only a limited understanding of, creates an avenue for attackers to circumvent the system.
In a hostile environment with sophisticated attackers, businesses need an intelligence-led strategy that ensures they not only react to cyber attacks but stay ahead of emerging threats. The focus of cyber defense should therefore not be on security breaches or infiltration, as it is today, but on how to detect live threats in real time (Lynch, 2012). To ensure early detection and handling of cyber threats in real time, an autonomous system should have a reflective capability that enables it to detect problems and recover from them without external intervention.
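Such a reflective, real-time detection capability might be sketched, in simplified form, as a monitor that flags readings deviating sharply from recent behavior and refuses to absorb anomalies into its baseline. The class name, window size, and threshold below are illustrative assumptions, not a reference to any real product.

```python
from collections import deque
from statistics import mean, stdev

class ReflectiveMonitor:
    """Flags live readings that deviate sharply from recent behavior."""

    def __init__(self, window: int = 20, threshold: float = 3.0):
        self.history = deque(maxlen=window)  # rolling baseline of normal behavior
        self.threshold = threshold           # how many standard deviations is "anomalous"

    def observe(self, value: float) -> bool:
        """Return True if the value looks anomalous. Anomalies are not
        absorbed into the baseline, so the monitor self-recovers rather
        than learning the attacker's behavior as normal."""
        if len(self.history) >= 5:
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) > self.threshold * sigma:
                return True  # detected live, not discovered after a breach
        self.history.append(value)
        return False

mon = ReflectiveMonitor()
for v in [10.0, 10.2, 9.9, 10.1, 10.0, 10.3]:
    mon.observe(v)          # normal readings build the baseline
print(mon.observe(50.0))    # True: a live threat is flagged in real time
```

Real intrusion-detection systems use far richer models than a rolling mean, but the design choice illustrated, detect against a learned baseline and keep that baseline uncontaminated, is the essence of the reflective capability described above.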
Conclusion
Any autonomous intelligent system is designed to take actions or make decisions about what to do within a specific environment with little or no human participation. However, due to bounded rationality, the formulation of the algorithms guiding these systems could be lacking in one way or another, giving rise to fault modes that an adversary can exploit. This is a serious challenge within cyberspace, and although one cannot completely protect a system against attack, one needs to analyze a system properly, from its development to how it works, before implementing it. Most of the faults highlighted can be exploited by hackers, and as a result there is now a new and exceptional set of cyber-security concerns around autonomous systems that must be looked into. More research is therefore required in the field of meta-cognition, which will help improve the security of autonomous systems. The vulnerabilities needing further research include those relating to fault modes, detection, isolation, resilience, and repair.
References
Arkin, R. (2012). 'Moral decision making in autonomous systems: Enforcement, moral emotions, dignity, trust, and deception.' Proceedings of the IEEE, 100(3), 571-589.
Atkinson, D. (2015). Emerging cyber-security issues of autonomy and the psychopathology of intelligent machines. Retrieved from http://www.aaai.org/ocs/index.php/SSS/SSS15/paper/viewFile/10219/10049
Clark, M., & Bringsjord, S. (2008). 'Persuasion technology through mechanical sophistry.' Communication and Social Intelligence, 51-54.
Hawes, N. (2011). 'A survey of motivation frameworks for intelligent systems.' Artificial Intelligence, 175, 1020-1036.
IT Governance. (2016). What is cyber security? Retrieved from http://www.itgovernance.co.uk/what-is-cybersecurity.aspx
Jacobson, G. (2013). Mission-centricity in cyber security: Architecting cyber attack resilient missions. Retrieved from https://ccdcoe.org/cycon/2013/proceedings/d1r2s1_jakobson.pdf
Kerner, J., & Shokri, E. (2012). 'Cybersecurity challenges in a net-centric world.' Aerospace Crosslink Magazine, Spring 2012.
Lera, F., Balsa, J., Casado, F., Fernandez, C., Rico, F., & Matellan, V. (2016). Cybersecurity in autonomous systems: Evaluating the performance of hardening ROS. Retrieved from http://riasc.unileon.es/archivos/papers/waf2016.pdf
Lynch, M. (2012). Cyber-activity roars to life. Retrieved from http://files.bakerbotts.com/file_upload/documents/CyberSecurityBrochure.pdf
Mylrea, M. (2015). 'Cyber security and optimization in smart "autonomous" buildings.' Retrieved from https://www.aaai.org/ocs/index.php/SSS/SSS15/paper/download/10339/10056