Following the broader industry trend, ML and AI are becoming inevitable for most technology-oriented organizations. Classifying malware (and malicious behavior more broadly, in the context of Cyber Security) is a major challenge, and ML algorithms ease this burden to a significant extent. The key to a successful ML workflow, however, is identifying where, and to what degree, human interaction is required alongside the ML framework, and how that interaction can be achieved.
Management and decision-makers, such as CISOs, CIOs, and CTOs, need to consider several factors before employing and integrating classification and prediction tools (ML/AI) into the fabric of their Cyber Security function.
When it comes to positioning and improving security posture with ML, the question is not “if”, but “where” and “how”. As the tech wave inundates us with forecasting, prediction, and classification products, we as Cyber Security professionals must be wise in choosing where to draw the line between technology and human expertise for effective alerting, triage, and remediation; in other words, we must enable and plan for an ecosystem in which people, processes, and technologies coexist.
One of the key challenges with ML algorithms is dealing with, and minimizing, “false” or incorrect classifications. You may have come across jargon such as false positives, prediction accuracy, and error rates. Let’s focus on “false positives”. Put simply, a false positive is a misclassification in which the system “sounds the alarm” (raises a flag) when in reality no alert was warranted. To make this concrete, consider a spam email classifier. Incorrectly tagging a legitimate email as spam is a “false positive”; conversely, failing to catch a spam email and letting it pass through without action is a “false negative”. Simple to state, complex in practice.
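The spam example above can be sketched in a few lines of code. The labels below are purely illustrative (not real data): a 1 marks spam and a 0 marks a legitimate email, and we count how often the classifier’s prediction disagrees with the ground truth in each direction.

```python
# Hypothetical ground-truth labels and classifier outputs for six emails.
# 1 = spam, 0 = legitimate; the values are illustrative only.
actual    = [1, 0, 0, 1, 0, 1]
predicted = [1, 1, 0, 0, 0, 1]

# Tally the four possible outcomes of a binary classifier.
true_pos  = sum(a == 1 and p == 1 for a, p in zip(actual, predicted))
false_pos = sum(a == 0 and p == 1 for a, p in zip(actual, predicted))  # legit email flagged as spam
false_neg = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))  # spam that slipped through
true_neg  = sum(a == 0 and p == 0 for a, p in zip(actual, predicted))

print(f"false positives: {false_pos}")  # the "alarm with no fire" case
print(f"false negatives: {false_neg}")  # the "missed threat" case
```

The same bookkeeping applies unchanged to a malware or intrusion detector; only the meaning of the labels differs.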
Addressing Your Challenges with Human Expertise (Ingram Micro Cyber Security Services) Alongside Machine Learning Tools:
Entire divisions and teams of skilled professionals can be dedicated to studying the mathematics of the above scenario, with the goal of tuning a system to perform at an optimum level; in many cases, Data Science and Business Intelligence teams have done exactly that. Furthermore, several fields have witnessed successful implementations of fully automated (Artificial Intelligence aided) solutions that require no human intervention. Cyber Security, however, is not one of them: given the intricacies and high-stakes nature of the domain (low tolerance for false positives, and accuracy figures that must be scrutinized), human expertise still carries the bulk of the load. Hence the growing need for Information Security teams to align and work closely with services-oriented teams and entities focused on risk assessments, governance, compliance models, and regulatory standards. In conclusion, Machine Learning is a tool for addressing some of the challenges facing a vast landscape of technology-driven industries, and the Cyber Security domain in particular.
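The point about scrutinizing accuracy is worth illustrating. Security telemetry is typically very imbalanced, so raw accuracy can look excellent even when the detector is useless. The numbers below are an invented toy scenario, not real measurements: a “classifier” that never raises an alert still scores 99% accuracy, while missing every attack.

```python
# Illustrative only: 1,000 events, of which 10 are truly malicious (1)
# and 990 are benign (0) -- the kind of class imbalance common in security data.
actual = [1] * 10 + [0] * 990

# A degenerate "classifier" that never raises an alert.
predicted = [0] * 1000

# Raw accuracy: fraction of events labeled correctly.
accuracy = sum(a == p for a, p in zip(actual, predicted)) / len(actual)
missed_attacks = sum(a == 1 and p == 0 for a, p in zip(actual, predicted))

print(f"accuracy: {accuracy:.1%}")        # 99.0%, despite detecting nothing
print(f"missed attacks: {missed_attacks}")  # all 10 malicious events slip through
```

This is one reason the domain demands human oversight: a single headline metric can hide exactly the failures that matter most.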