Simulated User Bots: Real Time Testing of Insider Threat Detection Systems
The insider threat is one of the most serious security problems faced by modern organizations, and high-profile cases demonstrate the severe consequences of successful attacks. The problem has been studied for many years, leading to a number of technologies and products that are widely deployed in government and commercial enterprises. A fundamental question is how well these systems work: how can they be tested, and how computationally expensive is a widely deployed monitoring infrastructure? Measuring real systems that are dynamic in nature, encounter unknown configuration bugs, and are sensitive to the vagaries of human nature and adversarial behavior requires a formal means to continuously test and evaluate deployed detection systems. We present a framework that deploys in situ simulated user bots (SUBs) to emulate the actions of real users. By creating a user account and running a host in the enterprise network, a SUB can be introduced into an enterprise system where it runs at a realistic pace and does not interfere with normal operations. Malicious behavior infused into a SUB should be detected by the insider threat monitoring infrastructure. The SUB framework can be controlled to explore the limits of deployed systems and to test the effectiveness of insider evasion tactics, especially low-and-slow behaviors. We demonstrate our framework by generating user data to test the detection of malicious users, and by producing variable ground truths for intrusion detection testing with several commonly used machine learning techniques.
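The core idea described above — a bot that performs benign user actions at a controlled pace while occasionally injecting labeled malicious actions to produce ground truth — can be sketched as follows. This is a minimal illustration, not the authors' implementation; the class name, action names, and the `malicious_rate` parameter are all hypothetical.

```python
import random

class SimulatedUserBot:
    """Hypothetical sketch of a SUB: performs benign actions and, at a
    configurable (low) rate, injects malicious actions, logging a label
    for each so the run yields a controllable ground truth."""

    def __init__(self, benign_actions, malicious_actions=None,
                 malicious_rate=0.0, seed=None):
        self.benign_actions = benign_actions
        self.malicious_actions = malicious_actions or []
        # A small rate (e.g. 0.02) models "low and slow" evasion tactics.
        self.malicious_rate = malicious_rate
        self.rng = random.Random(seed)
        self.log = []  # (action, label) pairs; stands in for host monitoring data

    def step(self):
        """Perform one action; occasionally inject a malicious one."""
        if self.malicious_actions and self.rng.random() < self.malicious_rate:
            action = self.rng.choice(self.malicious_actions)
            label = "malicious"
        else:
            action = self.rng.choice(self.benign_actions)
            label = "benign"
        self.log.append((action, label))
        return action, label

    def run(self, n_steps):
        """Run the bot for n_steps actions and return the labeled log."""
        for _ in range(n_steps):
            self.step()
        return self.log

# Example run: mostly benign activity with ~2% malicious actions.
bot = SimulatedUserBot(
    benign_actions=["open_email", "browse_web", "edit_document"],
    malicious_actions=["exfiltrate_file"],
    malicious_rate=0.02,
    seed=42,
)
log = bot.run(1000)
n_malicious = sum(1 for _, label in log if label == "malicious")
```

Because the labels are assigned by the bot itself, the resulting log can serve as ground truth for evaluating a detector; raising `malicious_rate` or varying the action mix explores the detection limits the abstract refers to.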
More About This Work
- Academic Units
- Computer Science
- Published Here
- April 3, 2019