MIT Develops Machine Learning AI To Detect Cyberattacks
By Jef Cozza / CRM Daily
A new artificial intelligence platform developed by MIT and PatternEx can identify up to 85 percent of cyberattacks, according to a new research paper. Dubbed AI2, the platform is said to be significantly better at predicting cyberattacks than similar systems because it continuously incorporates new input provided by human experts.

“Today’s security systems usually fall into one of two categories: man or machine," Adam Conner-Simon from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) wrote in a post on the MIT News site.

"So-called ‘analyst-driven solutions’ rely on rules created by human experts and therefore miss any attacks that don’t match the rules," he said. "Meanwhile, today’s machine-learning approaches rely on ‘anomaly detection,’ which tends to trigger false positives that both create distrust of the system and end up having to be investigated by humans, anyway.” The MIT and PatternEx platform attempts to merge those two approaches.

An Automated Analyst

AI2 predicts attacks by combing through data and clustering suspicious activity into meaningful patterns using unsupervised machine learning, according to researchers at MIT. It then presents that activity to human analysts, who confirm which events are actual attacks. AI2 incorporates their feedback into its models for the next set of data.
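The unsupervised first stage can be illustrated with a deliberately simplified sketch. Here anomaly scoring is reduced to a z-score on a single hypothetical feature (`bytes_sent`); the actual platform clusters many features with several unsupervised methods, but the overall shape is the same: score every event, then surface only the top outliers for a human analyst to judge.

```python
import statistics

def flag_suspicious(events, top_k=3):
    """Score each event by how far its feature deviates from the mean
    (a crude stand-in for AI2's unsupervised outlier detection) and
    return the top_k highest-scoring events for analyst review."""
    values = [e["bytes_sent"] for e in events]
    mu = statistics.mean(values)
    sigma = statistics.pstdev(values) or 1.0
    scored = [(abs(e["bytes_sent"] - mu) / sigma, e) for e in events]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return [e for _, e in scored[:top_k]]

# Toy log: mostly routine traffic plus two unusually large transfers.
log = [{"id": i, "bytes_sent": 500 + i} for i in range(8)]
log += [{"id": 100, "bytes_sent": 50_000}, {"id": 101, "bytes_sent": 60_000}]

suspects = flag_suspicious(log, top_k=2)
print([e["id"] for e in suspects])  # the two outliers, largest first
```

Only the flagged handful reaches the analyst, which is the point: the machine narrows millions of events down to a shortlist a human can actually inspect.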

“You can think about the system as an automated analyst,” said CSAIL research scientist Kalyan Veeramachaneni, who developed AI2 with Ignacio Arnaldo (pictured above), a chief data scientist at PatternEx and a former CSAIL postdoctoral associate. “It continuously generates new models that it can refine in as little as a few hours, meaning it can improve its detection rates significantly and rapidly.” Veeramachaneni presented a paper about the system at last week’s IEEE International Conference on Big Data Security in New York City.

Machine learning algorithms typically rely on the work of many individuals helping to “teach” them how to identify the relevant data. But the advanced technical nature of threat analysis makes it difficult for anyone who isn’t an expert in data security to contribute. With such experts in high demand and with little time to spare to pore over mountains of data, finding less labor-intensive ways to develop security algorithms has been crucial.

Combining Expert Analysis with Machine Learning

AI2 attempts to combine human input with machine learning through an iterative process. The platform uses multiple autonomous-learning approaches to identify potential attacks, then shows the most likely hits to information security analysts for further analysis. The analysts’ decisions are then fed back into the algorithm, allowing it to refine its decision-making process. Because it is constantly refining its criteria based on human input, the system is able to continually improve its detection methodology. As a result, false positives are kept to a minimum.
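That feedback loop can be sketched in miniature. This toy version collapses the whole supervised-refinement step into a single alert threshold that drifts in response to analyst verdicts; the real system retrains models on the labeled examples, but the iterative human-in-the-loop structure is the same.

```python
def refine(threshold, flagged, analyst_labels, step=0.05):
    """One round of human-in-the-loop refinement: false positives push
    the alert threshold up (fewer, higher-confidence alerts next round);
    confirmed attacks pull it down (cast a wider net)."""
    for _event, is_attack in zip(flagged, analyst_labels):
        if is_attack:
            threshold = max(0.0, threshold - step)
        else:
            threshold = min(1.0, threshold + step)
    return threshold

t = 0.5
# Round 1: the analyst confirms 1 of 3 flagged events as a real attack.
t = refine(t, ["evt_a", "evt_b", "evt_c"], [False, True, False])
print(round(t, 2))  # net drift upward: more false positives than hits
```

Each round of labels nudges the next round's behavior, which is why the researchers describe detection rates improving "significantly and rapidly" as feedback accumulates.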

“This paper brings together the strengths of analyst intuition and machine learning, and ultimately drives down both false positives and false negatives,” Nitesh Chawla, a computer science professor at the University of Notre Dame, said in MIT's blog post. “This research has the potential to become a line of defense against attacks such as fraud, service abuse and account takeover, which are major challenges faced by consumer-facing systems.”

Image Credit: AI2 screenshot and photo of Ignacio Arnaldo, via PatternEx.

© Copyright 2018 NewsFactor Network. All rights reserved. Member of Accuserve Ad Network.