The paper “Reinforcement Learning for Intrusion Detection: More Model Longness and Fewer Updates”, co-authored by LASIGE’s integrated member Vinicius V. Cogo, was published in the IEEE Transactions on Network and Service Management (TNSM), a top-ranked journal (Scimago Q1; h5-index 53). This work is co-authored by (and extends a previous collaboration with) Roger dos Santos and Altair Santin, both from PPGIa (PUCPR, BR), and Eduardo Viegas, from PPGIa (PUCPR, BR) and the Secure Systems Research Center (TII, UAE).
Related works can achieve high detection accuracy for network-based intrusion detection using machine learning techniques, but they fail to adequately handle changes in network traffic behavior over time. This article proposes a new intrusion detection model based on a reinforcement learning (RL) approach that aims to support extended periods without model updates. The proposal comprises two strategies. First, it frames the machine learning scheme as a reinforcement learning task for long-term learning, maintaining high reliability and high classification accuracy over time. Second, model updates are performed using a transfer learning technique coupled with a sliding-window mechanism, which significantly decreases the need for computational resources and human intervention. Experiments performed on a new dataset spanning 8 TB of data and four years of real network traffic indicate that current approaches in the literature cannot handle the evolving behavior of network traffic. Nevertheless, even without periodic model updates, the proposed technique achieves accuracy rates similar to those of traditional detection schemes implemented with semi-annual updates. When periodic updates are applied to the proposed model, it reduces false positives by up to 8% and false negatives by up to 34%, with an accuracy variation of only up to 6%, while demanding only seven days of training data and almost five times fewer computational resources than traditional approaches.
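The sliding-window update idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the class and the `fine_tune` method are hypothetical names, assuming a model that can be warm-started (transfer learning) on only the most recent days of traffic instead of being retrained from scratch on the full history.

```python
from collections import deque

class SlidingWindowUpdater:
    """Hypothetical sketch of a sliding-window model update:
    retain only the most recent days of traffic and fine-tune
    the existing model on them (transfer learning) rather than
    refitting a new model on all historical data."""

    def __init__(self, window_days=7):
        # The paper reports needing only seven days of training data.
        self.window = deque(maxlen=window_days)

    def add_day(self, day_samples):
        # Once the window is full, the oldest day is evicted automatically.
        self.window.append(day_samples)

    def update(self, model):
        # Transfer learning step: reuse the current model's parameters
        # and fine-tune on the windowed data (hypothetical API).
        recent = [sample for day in self.window for sample in day]
        model.fine_tune(recent)
        return model
```

The design point is that the training set stays bounded by the window size, which is what keeps the computational cost and the need for human intervention low compared to periodic full retraining.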
The paper is available here.