
AI still needs humans: How to harness the power of both

July 18, 2023
3 MIN READ

Monitoring logs and analyzing data is mind-numbing, time-consuming work; it’s a fast path to burnout. AI can take over much of this work, offering organizations a fast and efficient way to monitor their systems for potential threats. According to a Security Intelligence article,1 AI has improved cybersecurity operations in the following ways:

  • Earlier recognition of threats via pattern identification and anomaly detection (see the sketch after this list)
  • Deeper analysis of data to identify potential attack vectors
  • Improved threat intelligence through the ability to mine data sources
  • Elimination of repetitive tasks via automation and orchestration
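
As a concrete illustration of the first bullet, here is a minimal anomaly-detection sketch using scikit-learn’s IsolationForest to flag unusual login events. The features, values, and escalation wording are illustrative assumptions, not a reference to any specific product or dataset.

    import numpy as np
    from sklearn.ensemble import IsolationForest

    # Each row is one login event: [login_hour, failed_attempts, megabytes_transferred]
    # (feature names and values are illustrative assumptions)
    baseline_logins = np.array([
        [9, 0, 12.4],
        [10, 1, 8.1],
        [11, 0, 15.0],
        [14, 0, 9.7],
        [16, 1, 11.2],
    ])

    # Learn what "normal" looks like from historical events.
    model = IsolationForest(contamination=0.1, random_state=42)
    model.fit(baseline_logins)

    new_events = np.array([
        [10, 0, 10.5],   # a routine-looking login
        [3, 12, 940.0],  # 3 a.m., many failed attempts, huge transfer
    ])

    # predict() returns 1 for inliers and -1 for anomalies worth an analyst's time.
    for event, label in zip(new_events, model.predict(new_events)):
        status = "escalate to analyst" if label == -1 else "routine"
        print(event, "->", status)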

AI still lacks full capabilities

Companies that benefit from incorporating AI into their cybersecurity programs still require human intervention, for the same reason that Tesla’s “Full Self-Driving” capability still requires a fully attentive driver: the deep learning capabilities of AI, in which an algorithm mimics the structure and function of the brain, are just not there yet.

From an accuracy perspective, we often forget that AI is as inherently biased as the humans who created it. When a model is fed distorted datasets, especially small ones, its precision suffers greatly.

Even more alarming is when faulty conclusions are drawn within industries where lives and livelihoods are at stake (read: healthcare and finance). One algorithmic error has the potential to produce the false negative that gets your organization breached. Assets and users can be tagged as business-critical or high-risk to aid with risk scoring, but there is little nuance beyond this weighting. 
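
To make the “little nuance” point concrete, here is a toy comparison between a binary business-critical tag and a graded score that also weighs exposure and exploitability. The weights and field names are invented purely for illustration.

    # Toy illustration (invented weights): a binary criticality tag vs. a more
    # graded score that also considers exposure and exploitability.

    def binary_score(is_business_critical: bool) -> float:
        # All critical assets collapse to the same score; everything else to one value.
        return 9.0 if is_business_critical else 3.0

    def graded_score(criticality: float, exposure: float, exploitability: float) -> float:
        # Weighted blend on a 0-10 scale; the weights are arbitrary assumptions.
        return round(0.5 * criticality + 0.3 * exposure + 0.2 * exploitability, 1)

    # Two "business-critical" assets that a binary tag cannot tell apart:
    print(binary_score(True), binary_score(True))          # 9.0 9.0
    print(graded_score(9, 2, 1), graded_score(9, 10, 8))   # internal vs. internet-facing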

A model for AI and human coexistence 

Thankfully, effective models for AI/human coexistence are available. Human-in-the-loop (HITL) reinforcement learning works like a call center: AI is confined to a handful of predetermined actions, and anything beyond them escalates to an analyst.2 AI does the heavy lifting when it comes to monitoring, alerting human analysts when a possible threat is detected; the analysts can then address and resolve issues more efficiently.
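
A minimal sketch of that escalation pattern, assuming a short allow list of predetermined actions and a confidence threshold (both invented for illustration): the model acts alone only when an alert falls inside the allow list and clears the threshold; everything else is routed to a human analyst.

    # Minimal HITL escalation sketch. The allowed actions and confidence
    # threshold are illustrative assumptions, not a specific product's API.
    from dataclasses import dataclass

    ALLOWED_AUTOMATED_ACTIONS = {"quarantine_file", "block_ip", "reset_session"}
    CONFIDENCE_THRESHOLD = 0.95

    @dataclass
    class Detection:
        alert_id: str
        suggested_action: str
        confidence: float

    def handle(detection: Detection) -> str:
        # AI may act alone only on predetermined actions it is highly confident about.
        if (detection.suggested_action in ALLOWED_AUTOMATED_ACTIONS
                and detection.confidence >= CONFIDENCE_THRESHOLD):
            return f"auto-executed {detection.suggested_action} for {detection.alert_id}"
        # Everything else escalates to a human analyst with full context.
        return f"escalated {detection.alert_id} to analyst queue"

    print(handle(Detection("A-101", "block_ip", 0.98)))         # handled automatically
    print(handle(Detection("A-102", "disable_account", 0.97)))  # not on the allow list
    print(handle(Detection("A-103", "block_ip", 0.62)))         # low confidence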

The benefits speak for themselves. MIT researchers found that including HITL in a security system more than tripled detection rates and cut false positives fivefold compared with an unsupervised security system.3

Cybersecurity foundations for AI with a human touch

Follow these recommendations when incorporating AI into your cybersecurity program: 

  1. Build a strong foundation. “If you’re not already doing something, you can’t automate it,” says John Pescatore, Director of Emerging Security Trends at SANS. Ensure the basics of your cybersecurity program are strong before entertaining the idea of AI.
  2. Select use cases wisely. Pick the low-hanging fruit by opting for large, high-quality data sets and readily available subject matter experts who can train the algorithm well. The Capgemini Research Institute recommends the following starting points:4 malware detection, intrusion detection, risk scoring, fraud detection, and user/machine behavior analytics.
  3. Deploy SOAR workflows. Security orchestration, automation, and response (SOAR) tools allow analysts to implement automated workflows with manual decision points where appropriate (see the sketch after this list).
  4. Educate cyber analysts on AI technology. It’s essential for your staff to understand basic machine learning concepts and how your specific AI tool functions before it goes live.
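
To illustrate recommendation 3, the sketch below models a hypothetical SOAR-style playbook: automated enrichment runs on its own, but a disruptive containment step waits on a manual approval gate. The step names and the approval function are assumptions, not any vendor’s API.

    # Hypothetical SOAR-style playbook: automated steps run back to back,
    # but the disruptive containment step waits on a manual approval gate.

    def enrich_alert(alert: dict) -> dict:
        # Automated enrichment: attach reputation data (hard-coded here for the sketch).
        alert["ip_reputation"] = "known-bad"
        return alert

    def analyst_approves(alert: dict) -> bool:
        # Manual decision point. In a real playbook this would open a ticket or
        # pause for an analyst's click; here we simulate the analyst's answer.
        print(f"Approval requested for host isolation on {alert['host']}")
        return True  # pretend the analyst approved

    def isolate_host(alert: dict) -> None:
        print(f"Isolating {alert['host']} (approved by analyst)")

    alert = {"id": "A-200", "host": "srv-db-01", "signal": "beaconing"}
    alert = enrich_alert(alert)                    # automatic
    if alert["ip_reputation"] == "known-bad":
        if analyst_approves(alert):                # manual gate
            isolate_host(alert)                    # automatic after approval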

With the buzz that surrounds new AI technologies like ChatGPT, it’s easy to get carried away with all the good AI could do for cybersecurity. But, like all things in this industry, the smartest course is often the most boring: balanced investments between the sacred triad of people, process, and technology are still critical.  

Sources

1 Gerald Parham, “4 Ways AI Capabilities Transform Security,” Security Intelligence, August 25, 2022.
2 Alessandro Civati, “Human-in-the-loop Model – Why AI Needs Human Intervention,” July 5, 2022.  
3 K. Veeramachaneni, I. Arnaldo, V. Korrapati, et al., “AI²: Training a Big Data Machine to Defend,” 2016 IEEE 2nd International Conference on Big Data Security on Cloud (BigDataSecurity).
4 “Reinventing Cybersecurity with Artificial Intelligence: The new frontier in digital security,” Capgemini Research Institute, 2019.
 
