Post-Patch Retraining for Host-Based Anomaly Detection
Applying patches, although a disruptive activity, remains a vital part of software maintenance and defense. When a host-based anomaly detection (AD) sensor monitors an application, patching the application requires a corresponding update of the sensor's behavioral model. Otherwise, the sensor may incorrectly classify new behavior as malicious (a false positive) or assert that old, incorrect behavior is normal (a false negative). Although "model drift" is an almost universally acknowledged hazard for AD sensors, relatively little work has been done to understand the process of retraining a "live" AD model, especially in response to legitimate behavioral updates such as vendor patches or repairs produced by a self-healing system. We investigate the feasibility of automatically deriving and applying a "model patch" that describes the changes necessary to update a "reasonable" host-based AD behavioral model (i.e., a model whose structure follows the core design principles of existing host-based anomaly models). We aim to avoid extensive retraining and regeneration of the entire AD model when only parts of it may have changed, a task that seems especially undesirable after the exhaustive testing required to deploy a patch.
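To make the idea of a "model patch" concrete, the following is a minimal sketch, not the report's actual mechanism. It assumes a simple behavioral model consisting of observed system-call n-grams (in the spirit of classic host-based AD sensors) and represents a model patch as the n-grams to add or retire; all names, the n-gram length, and the trace format are hypothetical choices for illustration.

```python
# Illustrative sketch only: the report's model and patch formats are not
# specified here. Assumes a behavioral model that is a set of syscall
# n-grams; a "model patch" is the set of n-grams to add or remove.

from dataclasses import dataclass

N = 3  # n-gram length; an assumption for illustration


def build_model(traces, n=N):
    """Build a behavioral model as the set of observed syscall n-grams."""
    model = set()
    for trace in traces:
        for i in range(len(trace) - n + 1):
            model.add(tuple(trace[i:i + n]))
    return model


@dataclass
class ModelPatch:
    """Changes needed to move a deployed model to post-patch behavior."""
    added: set     # n-grams introduced by the application patch
    removed: set   # n-grams no longer produced after the patch


def derive_model_patch(old_model, new_traces, n=N):
    """Derive a model patch from traces of the patched application."""
    new_model = build_model(new_traces, n)
    return ModelPatch(added=new_model - old_model,
                      removed=old_model - new_model)


def apply_model_patch(model, patch):
    """Apply the patch in place of retraining the entire model."""
    return (model - patch.removed) | patch.added


if __name__ == "__main__":
    # Hypothetical syscall traces collected before and after patching.
    pre_patch_traces = [["open", "read", "write", "close"]]
    post_patch_traces = [["open", "read", "mmap", "write", "close"]]

    deployed = build_model(pre_patch_traces)
    patch = derive_model_patch(deployed, post_patch_traces)
    updated = apply_model_patch(deployed, patch)
    print(f"added {len(patch.added)} n-grams, removed {len(patch.removed)}")
```

The point of the sketch is that only the differing n-grams need to be shipped and applied, rather than regenerating the whole model from fresh training data after every vendor patch or self-healing repair.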
Files
- cucs-035-07.pdf (application/pdf, 94 KB)
More About This Work
- Academic Units: Computer Science
- Publisher: Department of Computer Science, Columbia University
- Series: Columbia University Computer Science Technical Reports, CUCS-035-07
- Published Here: April 27, 2011