New AI security system cleverly combines machine learning and human intuition

MIT's new platform detected 85% of cyber-attacks in initial testing

MIT researchers have announced that they've developed a new artificial intelligence system capable of successfully detecting 85% of cyber-attacks.

The AI2 platform, produced by MIT's Computer Science and Artificial Intelligence Laboratory (in conjunction with PatternEx, a machine learning startup), has notched up a much better record than previous systems.

The 85% detection rate is roughly three times better than previous benchmarks, and the system also produced far fewer false positives – a fivefold reduction, in fact.

MIT notes that AI2's initial testing ran over a period of three months and involved combing through some 3.6 billion log lines for suspicious activity. Machine learning made the initial detections, which were then put in front of a human security analyst who confirmed whether or not each one was an actual cyber-attack.

AI2 then learned from that feedback, improving its routines for the next round of detection.
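That detect-review-refine cycle can be sketched in a few lines. This is purely illustrative – the function names, the toy "bytes out" anomaly score and the threshold-refitting rule are assumptions, not AI2's actual design – but it shows the shape of the loop: an unsupervised scorer surfaces the most suspicious events, an analyst labels them, and the accumulated feedback tightens the detector for the next round.

```python
# Hypothetical sketch of a human-in-the-loop detection cycle.
# All names and scoring rules here are illustrative, not AI2's real API.

def unsupervised_score(event):
    # Toy anomaly score: events shipping lots of data out look suspicious.
    return event["bytes_out"]

def top_k(events, k):
    # Surface the k highest-scoring events for analyst review.
    return sorted(events, key=unsupervised_score, reverse=True)[:k]

def refit_threshold(labelled):
    # Supervised step: alert threshold becomes the lowest score the
    # analyst actually confirmed as an attack (infinity if none were).
    attack_scores = [unsupervised_score(e) for e, is_attack in labelled if is_attack]
    return min(attack_scores) if attack_scores else float("inf")

def detection_round(events, analyst, k, feedback):
    # One round: flag candidates, collect labels, refit from all feedback.
    for event in top_k(events, k):
        feedback.append((event, analyst(event)))
    return refit_threshold(feedback)
```

Because `feedback` accumulates across calls, each round of `detection_round` refits from everything the analyst has labelled so far – the same "learn from that feedback, improve the next round" pattern the article describes.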

Rapid honing

Essentially, the system combines artificial intelligence smarts with human corrections that feed back into the machine learning process, and AI2 is apparently capable of honing itself very rapidly indeed.

Kalyan Veeramachaneni, the MIT research scientist who developed the system (along with PatternEx's chief data scientist Ignacio Arnaldo), commented: "You can think about the system as a virtual analyst. It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly."

AI2 actually makes use of three different unsupervised-learning methods when it comes to picking out suspected attacks. The more feedback it receives, the more accurate AI2's machine learning-driven analysis becomes in what Veeramachaneni describes as a human-machine interaction that "creates a beautiful, cascading effect".
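The article doesn't say which three unsupervised methods AI2 uses, but the general idea of combining several outlier detectors can be sketched with three generic, stand-in scorers (z-score, interquartile range and nearest-neighbour distance – assumptions, not AI2's actual choices) whose normalised scores are summed to pick the most suspicious points:

```python
# Illustrative ensemble of three generic unsupervised outlier detectors.
# The specific detectors are stand-ins; AI2's real methods are not public here.

def zscore_scores(values):
    # Detector 1: distance from the mean, in standard deviations.
    n = len(values)
    mean = sum(values) / n
    std = (sum((v - mean) ** 2 for v in values) / n) ** 0.5 or 1.0
    return [abs(v - mean) / std for v in values]

def iqr_scores(values):
    # Detector 2: how far each point falls outside the interquartile range.
    s = sorted(values)
    q1, q3 = s[len(s) // 4], s[3 * len(s) // 4]
    iqr = (q3 - q1) or 1.0
    return [max(q1 - v, v - q3, 0) / iqr for v in values]

def knn_scores(values, k=2):
    # Detector 3: mean distance to the k nearest neighbours.
    out = []
    for i, v in enumerate(values):
        dists = sorted(abs(v - w) for j, w in enumerate(values) if j != i)
        out.append(sum(dists[:k]) / k)
    return out

def ensemble_flags(values, top_n=1):
    # Normalise each detector's scores to [0, 1], sum them per point,
    # and flag the top_n highest combined scores as suspected attacks.
    detectors = [zscore_scores(values), iqr_scores(values), knn_scores(values)]
    combined = [sum(scores[i] / (max(scores) or 1) for scores in detectors)
                for i in range(len(values))]
    order = sorted(range(len(values)), key=lambda i: combined[i], reverse=True)
    return order[:top_n]
```

The point of using several detectors is robustness: a point only has to look anomalous to the combined vote, not to any single method, and the analyst feedback then corrects whatever the ensemble gets wrong.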

Nitesh Chawla, a computer science professor at the University of Notre Dame, further noted: "This research has the potential to become a line of defence against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems."

While AI may be a boon to security, in recent times, much of the discussion around the topic has centred on how artificial intelligence could affect employment – with predictions such as AI snaffling the jobs of half the world's population by 2045 obviously sparking much debate.

Via: MIT News
