Introduction
Botnets present a serious threat because they can compromise the normal functioning of computers. They are considered particularly dangerous forms of malware because they can interfere with the operations of an entire network of machines. In the modern world, almost every human activity involves information technology, so a malware intrusion affects many spheres of daily life. It has therefore become important for individuals to protect themselves from malware threats in order to avoid disruption of their normal activities. Malware is dangerous because it compromises the normal functioning of a computer, for example by destroying the information stored in it. The problem grows progressively worse as a result of the accelerated development in the field of informatics, and the prevalence of botnets in the modern information technology world has been further precipitated by advances in artificial intelligence.
The term 'botnet' generally refers to a network of bots: malicious software installed on computers that have been compromised due to insufficient security. The malware is dangerous because it can spread its effects to other hosts connected to the same network. All computers connected to a compromised machine can be made to exhibit synchronized behavior that conforms to the characteristics of the botnet. The operations of the botnet are remotely controlled by a botmaster, allowing it to perform various malicious activities on the infected computers (Ring, Landes & Hotho, 2018). Botnet operations are difficult to detect since a botnet can simply be viewed as a tool, and it is used for several criminal purposes that affect the users connected to the same network. Its effects are considered especially harmful because a single botnet can compromise many machines on a network at once, and the malware can continuously manifest new behaviors that are not easily detected. However, there are various ways to detect the presence of the malware in a computer network. To detect botnets, one must understand how they function and the common characteristics they exhibit. Studying how botnets behave therefore enables early detection and helps mitigate their harmful effects.
Research Questions
The research questions for the study include:
What are the most appropriate botnet detection methods?
What network features can be used to recognize the presence of botnets?
What methods are used to classify botnet traffic?
Literature Review
Botnets
As the word suggests, 'botnet' refers to a network composed of bots, which interfere with the normal functioning of the computers on that network. The word 'bot' is derived from 'robot,' implying that bots are smart programs capable of automatic execution (Bilge et al., 2014). Bots can fulfill various functions depending on the intention of the person remotely controlling them; no direct human intervention is required once the program has intruded into a network. Botnets grow automatically by attacking and taking over new hosts and making them conform to the program's purposes. Because of the way a botnet executes its functions, it is often referred to as a 'zombie army,' which is utilized for different purposes, including spamming (Ring, Landes & Hotho, 2018). Botmasters manage their botnets through various control mechanisms, including the commands and protocols used to direct the programs. Botnet architectures can be classified into two categories: centralized and decentralized. In a centralized architecture, the bots connect to a central server, whereas in a decentralized architecture there is no single server and the bots communicate with one another through peer-to-peer (P2P) methods (Ring, Landes & Hotho, 2018). Centralized servers also differ among themselves: some use the Hypertext Transfer Protocol (HTTP) while others use the Internet Relay Chat (IRC) protocol.
Intrusion Detection Methods
There are numerous botnet detection methods that computer security experts can utilize. Because botnets are complex, the detection techniques rely on varying principles, and the existing detection tools use high-level approaches. The detection methods can be divided into two categories: packet-based detection and flow-based detection (Stevanovic & Pedersen, 2015). The first analyzes packet contents, while the second reveals attacks by relying on traffic statistics. The detection methods described in this paper are mainly flow-based.
One detection method identifies attacks on services by isolating anomalous hosts. It analyzes behavior based on flow statistics and helps detect nuisance attackers whose traffic cannot initially be differentiated from genuine, legitimate traffic. Data is collected without any predefined patterns, and the process takes into consideration the specific behaviors often displayed by attackers. The process distinguishes object ports such as TCP/25, TCP/80, and TCP/443, which are utilized by global network services, from decoy ports, which are enabled only in local networks (Zhao et al., 2013). An obvious attacker targets both the decoy and the object ports, while a simple attacker directs traffic specifically at the decoy ports (Adamov, Hahanov & Carlsson, 2014). A third kind, the nuisance attacker, directs its efforts at the object ports. The term 'weird hosts' refers to both simple and obvious attackers because they are easy to detect in a system; the authors therefore create a system aimed at detecting the nuisance attackers.
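The attacker taxonomy above can be sketched as a simple port-based labeling rule. This is an illustrative sketch only: the decoy port numbers and the function name are assumptions, not taken from the cited studies.

```python
# Hypothetical sketch of the attacker taxonomy: object ports carry real
# services, while decoy ports are enabled only in the local network.
OBJECT_PORTS = {25, 80, 443}   # TCP/25, TCP/80, TCP/443 per the description
DECOY_PORTS = {23, 135, 1433}  # illustrative decoy ports, not from the source

def categorize_attacker(targeted_ports):
    """Label a host by which port groups its flows touch."""
    hits_object = bool(targeted_ports & OBJECT_PORTS)
    hits_decoy = bool(targeted_ports & DECOY_PORTS)
    if hits_object and hits_decoy:
        return "obvious"    # targets both decoy and object ports
    if hits_decoy:
        return "simple"     # traffic directed only at decoy ports
    if hits_object:
        return "nuisance"   # blends in by targeting only object ports
    return "unknown"

print(categorize_attacker({23, 80}))   # obvious
print(categorize_attacker({1433}))     # simple
print(categorize_attacker({443}))      # nuisance
```

The nuisance case is the hard one: by construction it touches only ports that legitimate traffic also uses, which is why the flow-statistics comparison described next is needed.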
The detection process begins with the collection and analysis of the traffic flows linked to the object ports, with the obvious attackers providing the flow samples. A weird-host list is generated from the collected information. This step is followed by identifying the obvious attackers and calculating their flow statistics so as to characterize their tendencies (Stevanovic & Pedersen, 2015). Nuisance attackers are then detected using the flow statistics of the obvious attackers: a host is identified as a nuisance attacker when the features of the flows it sends to the samples and object ports resemble those of the obvious attackers (Zhao et al., 2013). Three metrics are taken into account in the process: bytes per packet, packets per flow, and the distribution ratio. The authors report a detection rate of about 90 percent.
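The similarity check at the heart of this method can be sketched as follows. This is a simplified, hypothetical illustration using only two of the three metrics (bytes per packet and packets per flow); the flow records, the relative tolerance, and the omission of the distribution ratio are all assumptions of this sketch, not details from the cited papers.

```python
# Sketch: hosts whose per-flow statistics resemble the profile built from
# obvious attackers are flagged as nuisance attackers. All numbers are
# illustrative.

def host_profile(flows):
    """Compute two per-host metrics: bytes per packet and packets per flow."""
    total_bytes = sum(f["bytes"] for f in flows)
    total_pkts = sum(f["packets"] for f in flows)
    return (total_bytes / total_pkts, total_pkts / len(flows))

def resembles(profile, reference, tolerance=0.25):
    """True if each metric is within `tolerance` (relative) of the reference."""
    return all(abs(p - r) / r <= tolerance for p, r in zip(profile, reference))

# Reference profile built from flows sampled from known obvious attackers.
obvious_ref = host_profile([
    {"bytes": 120, "packets": 3}, {"bytes": 80, "packets": 2},
])  # -> (40.0 bytes/packet, 2.5 packets/flow)

candidate = host_profile([{"bytes": 90, "packets": 2},
                          {"bytes": 120, "packets": 3}])
print("nuisance" if resembles(candidate, obvious_ref) else "benign")  # nuisance
```

The design choice is that nuisance attackers are assumed to share low-level traffic statistics with obvious attackers even though they avoid the decoy ports, so the obvious attackers serve as labeled training data.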
Another detection method is Disclosure, a large-scale detection system designed to identify command and control servers through NetFlow analysis. While other detection methods focus on identifying bot-infected machines, Disclosure aims at detecting the command and control servers, so that taking them down helps dismantle the entire botnet (Adamov, Hahanov & Carlsson, 2014). It detects attacks by distinguishing client access patterns, external reputation scores, flow sizes, and temporal behavior. The system works in two phases: a detection-model generation phase and a detection phase. The first phase processes flows to identify benign servers and command and control servers, leading to the generation of models. The second phase mainly distinguishes benign traffic from botnet communication. Features are extracted to enable data processing, leading to the generation of a final list.
Another detection system is referred to as BotMiner. It operates independently of the botnet's command and control protocol, structure, and infection model (Bilge et al., 2014), and it is not affected by a change of the command and control server address. It helps detect the compromised machines in a monitored network that constitute the botnet and requires no prior knowledge of a specific botnet. The system identifies who is communicating with whom as well as who is doing what, and it exploits the communication patterns found in both centralized and peer-to-peer botnets.
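The "who is communicating with whom" idea can be illustrated with a minimal grouping sketch: hosts that share an identical communication pattern are grouped, and groups above a size threshold are treated as suspected bots. The real BotMiner system clusters much richer communication and activity statistics; the grouping rule, function name, and sample flows below are simplifications assumed for illustration.

```python
from collections import defaultdict

def group_by_pattern(flows, min_group=2):
    """flows: iterable of (src_host, dst_host, dst_port) tuples.

    Returns groups of hosts whose destination sets are identical,
    keeping only groups of at least `min_group` hosts.
    """
    patterns = defaultdict(set)
    for src, dst, port in flows:
        patterns[src].add((dst, port))
    groups = defaultdict(list)
    for host, pattern in patterns.items():
        groups[frozenset(pattern)].append(host)
    return [sorted(hosts) for hosts in groups.values() if len(hosts) >= min_group]

flows = [
    ("10.0.0.5", "c2.example", 6667), ("10.0.0.8", "c2.example", 6667),
    ("10.0.0.9", "mail.example", 25),
]
print(group_by_pattern(flows))  # [['10.0.0.5', '10.0.0.8']]
```

The key property this captures is the synchronized, look-alike behavior of bots: two independent benign users rarely produce identical communication patterns, whereas bots under one botmaster do.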
Network Features
Features refer to particular characteristics of a given set of data and can be derived from network traffic captures. They can be categorized based on the network traffic source or on the computational resources needed to obtain them. The first classification yields flow, packet, and payload features obtained from network connections, packet headers, and packet payloads respectively (Yan et al., 2015). The second classification distinguishes two cases: low-level and high-level features. Low-level features are obtained from raw traffic captures, while high-level features are the results of further processing of those captures. The network traffic features utilized in botnet detection therefore include packet-based features, flow-based features, and payload-based features.
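As a concrete illustration of low-level, flow-based feature extraction, packets sharing a 5-tuple can be aggregated into one flow record carrying simple statistical features. The field names and values below are assumptions made for this sketch, not a standard schema from the cited works.

```python
from collections import defaultdict

def extract_flow_features(packets):
    """packets: dicts with src, dst, sport, dport, proto, size keys.

    Aggregates packets into flows keyed by the 5-tuple and derives
    per-flow statistical features.
    """
    flows = defaultdict(list)
    for pkt in packets:
        key = (pkt["src"], pkt["dst"], pkt["sport"], pkt["dport"], pkt["proto"])
        flows[key].append(pkt["size"])
    return {
        key: {
            "packets": len(sizes),
            "bytes": sum(sizes),
            "mean_pkt_size": sum(sizes) / len(sizes),
        }
        for key, sizes in flows.items()
    }

pkts = [
    {"src": "10.0.0.5", "dst": "1.2.3.4", "sport": 5555, "dport": 80,
     "proto": "TCP", "size": 60},
    {"src": "10.0.0.5", "dst": "1.2.3.4", "sport": 5555, "dport": 80,
     "proto": "TCP", "size": 1500},
]
feats = extract_flow_features(pkts)
# one flow record: 2 packets, 1560 bytes, mean packet size 780.0
```

Such flow records are the low-level input; high-level features (e.g. per-host aggregates over many flows) would be computed from these records in a further processing step.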
Classification Methods
Botnets can be detected through network traffic classification, which is considered among the best methods of detecting botnets in a network system. The classification approach assumes that botnets create distinguishable traffic patterns that can be accurately detected using supervised machine learning algorithms (Stevanovic & Pedersen, 2015). Various detection methods relying on this kind of classification have been proposed. Modern traffic-monitoring detection systems can be divided into those implemented close to the clients' machines and those implemented as far as possible from the hosts, in a higher network tier. The approaches deployed further from the hosts capture important features of botnet operations, including the synchronicity of the behavior of compromised machines, but they face problems in processing high volumes of traffic and in identifying individual compromised hosts. Approaches targeting high network tiers therefore focus on sampled traffic such as Domain Name System (DNS) traffic or NetFlow (Yan et al., 2015), which represents only a part of the total traffic. Detection systems implemented closer to the hosts' machines process smaller volumes of traffic; they can therefore provide the detailed traffic analysis needed to capture the fine-grained patterns of botnet activity.
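The supervised-classification idea above can be sketched with a deliberately minimal model: labeled flow feature vectors train a nearest-centroid classifier that then labels unseen flows. Real systems use richer features and stronger algorithms (e.g. decision trees or SVMs); the feature choice and training data here are hypothetical.

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    return [sum(col) / len(col) for col in zip(*vectors)]

def train(samples):
    """samples: list of (feature_vector, label) pairs -> per-label centroids."""
    by_label = {}
    for vec, label in samples:
        by_label.setdefault(label, []).append(vec)
    return {label: centroid(vecs) for label, vecs in by_label.items()}

def classify(model, vec):
    """Assign the label whose centroid is closest in Euclidean distance."""
    return min(model, key=lambda lbl: math.dist(model[lbl], vec))

# Feature vectors: (mean packet size, flows per minute) -- hypothetical.
training = [
    ([60, 200], "botnet"), ([70, 180], "botnet"),
    ([900, 5], "benign"), ([1100, 8], "benign"),
]
model = train(training)
print(classify(model, [65, 190]))   # botnet
print(classify(model, [1000, 6]))   # benign
```

The sketch reflects the assumption stated above: bot traffic forms a distinguishable cluster (here, small uniform packets at high flow rates) that a supervised model can separate from benign traffic.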
Modern detection methods use varied methods of traffic analysis, giving them a variety of detection capabilities and scopes. Traffic analysis involves observing traffic flows, including User Datagram Protocol (UDP) and Transmission Control Protocol (TCP) traffic (Adamov, Hahanov & Carlsson, 2014). Other approaches focus mainly on the analysis of DNS traffic between a local host and a resolver, and the analysis can also be performed in the upper DNS hierarchies. These analysis methods lead to classification approaches such as those based on Fully Qualified Domain Names, which depend on features extracted from DNS queries (Bilge et al., 2014). There are also approaches that utilize features obtained from the host machines themselves; such detection systems require permission to access the client machines.
Conclusion
Botnets pose a serious threat to the normal functioning of computers because they can be operated remotely across a network of machines. They can be difficult to detect unless computer security experts utilize appropriate detection features, and detecting and eliminating them requires an understanding of their behavior. Studying the behavior and features of botnets promotes an understanding that enables early detection and thus helps mitigate their harmful effects. The research will therefore analyze various botnet detection methods so as to determine which works best, examine the network features that can be used to recognize botnet activities, and explain the various classification approaches used in botnet detection.
References
Adamov, A., Hahanov, V., & Carlsson, A. (2014, September). Discovering new indicators for botnet traffic detection. In Proceedings of IEEE East-West Design & Test Symposium (EWDTS 2014) (pp. 1-5). IEEE.
Bilge, L., Sen, S., Balzarotti, D., Kirda, E., & Kruegel, C. (2014). Exposure: A passive DNS analysis service to detect and report malicious domains. ACM Transactions on Information and System Security (TISSEC), 16(4), 14.
Ring, M., Landes, D., & Hotho, A. (2018). Detection of slow port scans in flow-based network traffic. PLoS ONE, 13(9), e0204507.
Stevanovic, M., & Pedersen, J. M. (2015, June). An analysis of network traffic classification for botnet detection. In 2015 International Conference on Cyber Situational Awareness, Data Analytics and Assessment (CyberSA) (pp. 1-8). IEEE.
Yan, Q., Zheng, Y., Jiang, T., Lou, W., & Hou, Y. T. (2015, April). Peerclean: Unveiling peer-to-peer botnets through dynamic group behavior analysis. In 2015 IEEE Conference on Computer Communications (INFOCOM) (pp. 316-324). IEEE.
Zhao, D., Traore, I., Sayed, B., Lu, W., Saad, S., Ghorbani, A., & Garant, D. (2013). Botnet detection based on traffic behavior analysis and flow intervals. Computers & Security, 39, 2-16.