Hackers May Meet Their Match With AI

There are many organizations in the world that simply can't afford to have cybercriminals and hackers interfering with their data. One of them is CERN, the European Organization for Nuclear Research (the acronym comes from its original French name, Conseil Européen pour la Recherche Nucléaire), whose computing grid is far too powerful to leave open to attackers. To keep it safe, CERN has deployed what may be the future of cybersecurity: artificial intelligence.

The use of artificial intelligence in security makes a lot of sense, first and foremost because it gives defenders a chance of keeping pace with constantly evolving malware. To that end, the scientists at CERN have been busy teaching their AI to identify threats on their network and to take appropriate action against them.

This is no easy feat when one considers the resources CERN requires to operate its Large Hadron Collider and the Worldwide LHC Computing Grid. The LHC collects a truly vast amount of data (around 50 petabytes between the beginning of 2017 and June of that year) and shares it across a network of roughly 170 research facilities worldwide, also providing computing resources to those facilities as needed.

This creates a unique cybersecurity challenge: how to keep that computing power and storage capacity available while keeping a global network secure.

As a result, CERN is turning to AI and machine learning so that its security systems can distinguish between typical network activity and traffic of a more malicious nature. While CERN is still testing its new artificial intelligence, there are ways that businesses can leverage similar concepts to help protect their own networks.
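
To picture what that might look like in practice, here is a minimal sketch of the general idea: train a model on what "normal" traffic looks like, then flag whatever doesn't fit. The flow features, numbers, and use of scikit-learn's IsolationForest below are purely our own illustration, not anything CERN has published about its system.

```python
# A rough sketch of anomaly detection on network flow summaries. The feature
# names, numbers, and model choice are hypothetical; the point is simply
# "learn what normal looks like, then flag what deviates from it."
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Pretend flow records: bytes transferred, packet count, distinct ports touched.
normal_traffic = np.column_stack([
    rng.normal(50_000, 10_000, 1_000),   # bytes
    rng.normal(400, 80, 1_000),          # packets
    rng.integers(1, 5, 1_000),           # ports
])

# A handful of suspicious flows: huge transfers touching hundreds of ports,
# the sort of pattern a port scan or data exfiltration might leave behind.
suspicious_traffic = np.column_stack([
    rng.normal(5_000_000, 500_000, 5),
    rng.normal(20_000, 2_000, 5),
    rng.integers(200, 500, 5),
])

# Train only on traffic we believe is normal, then score everything.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

predictions = model.predict(np.vstack([normal_traffic, suspicious_traffic]))
print("flows flagged as anomalous:", int((predictions == -1).sum()))
```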

As of right now, when we say AI, we're not talking about machines with the human-like qualities you'd see in movies today. CERN isn't going to be teaching its security AI the concepts of love and friendship anytime soon. Instead, AI in this sense is a fairly simple tool you probably use every day. Take Google, for example. When you run a Google search, you get results that have been indexed and categorized without the direct involvement of a human operator. Google's computers crawl the Internet and use machine learning, weighing hundreds of different factors, to deliver the search results most relevant to what you asked for.
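
As a rough illustration of what "ranking without a human operator" means, here is a toy example that scores pages with a weighted mix of signals. The signals and weights are made up for this post; Google's real ranking is vastly more sophisticated and proprietary.

```python
# A toy version of machine-scored ranking: every page gets a relevance score
# from a weighted combination of signals, with no human in the loop. The
# signals and weights are invented; Google's real ranking uses hundreds of
# proprietary factors and trained models rather than a hand-written formula.
from dataclasses import dataclass

@dataclass
class Page:
    url: str
    keyword_matches: int   # how often the query terms appear on the page
    inbound_links: int     # a crude popularity signal
    freshness_days: int    # days since the page was last updated

WEIGHTS = {"keyword_matches": 3.0, "inbound_links": 0.5, "freshness_days": -0.1}

def score(page: Page) -> float:
    return (WEIGHTS["keyword_matches"] * page.keyword_matches
            + WEIGHTS["inbound_links"] * page.inbound_links
            + WEIGHTS["freshness_days"] * page.freshness_days)

pages = [
    Page("example.com/a", keyword_matches=8, inbound_links=120, freshness_days=3),
    Page("example.com/b", keyword_matches=15, inbound_links=10, freshness_days=200),
    Page("example.com/c", keyword_matches=5, inbound_links=400, freshness_days=30),
]

# Highest-scoring pages come back first, entirely automatically.
for page in sorted(pages, key=score, reverse=True):
    print(f"{score(page):8.1f}  {page.url}")
```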

The benefit of using this form of AI is that results are delivered incredibly quickly, and a mind-numbing amount of data can be collated and served up in the blink of an eye. If Google employed humans to deliver search results, the system would be hampered by human bias, by the cost of hiring enough people to meet demand, and by how slow it would be to get results on demand.

AI-empowered security could quickly scan a network for flaws, run ongoing penetration tests, and constantly patch vulnerabilities. It could work day and night to improve spam filtering and firewall capabilities. With access to that much security data and the ability to react far faster than a human team, it would be much harder for hackers to overcome. Although we're a long way out from seeing something like this fully implemented, we're already seeing a lot of virtually intelligent systems collating and delivering data, and we can't wait to see more…

Just as long as we don't flip the switch on Skynet.
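
Joking aside, to make the spam-and-firewall point a bit more concrete, here is a minimal sketch of one task an AI-assisted security stack could automate: learning to tell spam from legitimate mail based on past examples. The sample messages and the choice of a simple Naive Bayes classifier are our own illustration, not anything CERN has described.

```python
# A minimal sketch of one job an AI-assisted security stack could automate:
# learning to separate spam from legitimate mail using past examples. The
# messages and the simple Naive Bayes model are our own illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

training_messages = [
    "Claim your free prize now, click this link",
    "Urgent: verify your account password immediately",
    "Meeting moved to 3pm, agenda attached",
    "Quarterly report draft is ready for review",
]
labels = ["spam", "spam", "ham", "ham"]

# Turn each message into word counts, then fit a classifier on the examples.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(training_messages)

classifier = MultinomialNB()
classifier.fit(features, labels)

new_message = ["Click here to claim your account prize"]
print(classifier.predict(vectorizer.transform(new_message))[0])  # likely "spam"
```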

Do you think AI is a viable way to keep business networks secure, or is the technology still too untested to deliver the reliability that modern enterprises need? Leave your thoughts in the comments section below.
