Cybersecurity is evolving at an unprecedented pace. Traditional methods alone aren’t enough to keep up with today’s sophisticated threats. Now, artificial intelligence (AI) and automation are central to how organizations detect, respond to, and prevent attacks.
We spoke with Craig Prigaro, our Senior Cloud and Automation Engineer, about how these technologies are shaping security operations. From detecting subtle network anomalies to what comes next for AI in cybersecurity, Craig shares his perspective on how AI and automation are helping keep organizations secure.
Q: Before we get started, can you tell us what you do at Versetal?
I’m a Senior Cloud and Automation Engineer here at Versetal, where I take on a range of roles focused on all things “Cloud.” I work closely with our clients who have resources in AWS and GCP, offering consultative and technical support. This includes everything from advising on best practices for deploying services to helping design and optimize DevOps pipelines, and even troubleshooting issues within their cloud environments. I also collaborate with our internal teams to ensure we’re aligned on strategy and solutions. Additionally, I meet with clients to discuss their AI and Automation goals, exploring how Versetal can help drive their initiatives forward.
Q: How have AI and automation evolved in security operations over the past few years?
AI and automation have brought rapid advancements in security operations, especially in areas like Threat Detection, Behavioral Analytics, Vulnerability Management, and Fraud Detection. AI’s ability to analyze vast amounts of data has enabled faster anomaly detection. For instance, AI can detect changes in network traffic or user behavior that might indicate a security threat. Automation, on the other hand, allows us to respond almost instantly—sometimes even resolving incidents without manual intervention. This has drastically reduced response times, freeing up teams to focus on higher-level, strategic tasks.
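To make that anomaly-detection idea concrete, here is a minimal sketch of the kind of baseline comparison Craig describes: flag a traffic sample that deviates sharply from recent history. The numbers, units, and threshold are purely illustrative and not drawn from any specific Versetal tooling.

```python
# Minimal sketch: flag a network-traffic sample that deviates sharply from a
# recent baseline. Values, units, and the threshold are illustrative only.
from statistics import mean, stdev

def is_traffic_anomaly(history, latest, threshold=3.0):
    """Return True if `latest` falls far outside the baseline built from `history`."""
    baseline_mean = mean(history)
    baseline_std = stdev(history)
    if baseline_std == 0:
        return latest != baseline_mean
    return abs(latest - baseline_mean) / baseline_std > threshold

# Hourly outbound traffic for one host, in KB (made-up numbers).
history = [1200, 1350, 1100, 1280, 1420, 1190]
print(is_traffic_anomaly(history, 1310))   # False: within the normal range
print(is_traffic_anomaly(history, 98000))  # True: an exfiltration-sized spike
```

In practice this kind of check runs continuously across many hosts and metrics, which is exactly the scale where automation outpaces manual review.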
Q: What excites you the most about the current and future role of AI in security?
The potential is incredible, especially with advancing processing power. While the concept of AI dates back to the 1950s, we were limited by available technology for a long time. In 2017, the introduction of Transformer-based models unlocked new capabilities, but we’re on the brink of another leap forward with Quantum Computing. It could revolutionize our processing abilities, making AI even more integral to security operations. This technology will be transformative, and I’m excited to see how it unfolds.
Q: Can you give an example of a security process that’s been fundamentally transformed by AI?
Absolutely. While AI has greatly enhanced threat and vulnerability detection, it has also introduced new challenges like deepfakes, where bad actors can replicate voices, images, and videos of individuals within an organization. These can be hard to detect and may lead to serious breaches. Luckily, we now have AI-driven tools designed to identify synthetic media, which has been critical in helping us stay ahead of these newer threats.
Q: Are there specific security threats where AI outshines traditional methods?
Anomaly detection using Behavioral Analytics has seen a major improvement with AI. Traditional methods relied on known patterns or signatures to identify anomalies, and humans simply can't process the huge volumes of data generated in security logs. AI now helps us not only parse that data but also uncover previously unknown patterns that could indicate threats.
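As a rough illustration of finding unknown patterns without known signatures, the sketch below scores log-derived behavioral features with an Isolation Forest from scikit-learn. The features and values are invented for illustration, and a real deployment would train on far richer telemetry.

```python
# Minimal sketch of unsupervised behavioral analytics: an Isolation Forest
# scores activity without any known attack signatures. Features are made up.
import numpy as np
from sklearn.ensemble import IsolationForest

# Each row: [logins_per_hour, distinct_hosts_touched, MB_downloaded]
baseline_activity = np.array([
    [3, 2, 40], [4, 2, 55], [2, 1, 30], [5, 3, 60],
    [3, 2, 45], [4, 2, 50], [2, 1, 35], [3, 3, 52],
])

model = IsolationForest(contamination=0.1, random_state=42)
model.fit(baseline_activity)

new_activity = np.array([
    [3, 2, 48],       # resembles normal behavior
    [40, 25, 5000],   # unusual fan-out and data volume
])
print(model.predict(new_activity))  # 1 = looks normal, -1 = flagged as anomalous
```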
Q: What are the advantages of AI-driven tools for identifying and responding to threats?
SPEED… the speed of response is a huge advantage. AI-driven tools can spot and address vulnerabilities before they become full-blown threats. When a threat is detected, these tools can act quickly to mitigate it, preventing escalation and protecting critical assets. This agility helps keep organizations a step ahead of potential attackers.
Q: How can AI help organizations stay ahead of emerging security threats that are constantly evolving?
AI-driven tools offer a few standout capabilities: Real-Time Threat Intelligence, Adaptive Behavioral Analysis, and Adaptive Learning. These tools can aggregate data from threat feeds and even dark web activity, staying up to date with the latest risks. They can also flag previously unseen anomalies by modeling normal behavior, and they learn continuously from new data, improving accuracy over time.
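As a simplified example of the threat-intelligence side, the sketch below matches connection logs against a set of indicators of compromise (IOCs). The feed contents and log format are invented; a real pipeline would pull continuously from live feeds and enrich the results before alerting.

```python
# Minimal sketch of threat-intelligence matching: compare observed connections
# against indicators of compromise (IOCs). Feed and log contents are invented.

def load_ioc_feed():
    """Stand-in for polling a real threat feed; returns known-bad IPs/domains."""
    return {"203.0.113.50", "badsite.example", "198.51.100.7"}

def match_iocs(connection_logs, iocs):
    """Return log entries whose destination appears in the IOC set."""
    return [entry for entry in connection_logs if entry["dest"] in iocs]

logs = [
    {"src": "10.0.0.12", "dest": "intranet.local"},
    {"src": "10.0.0.31", "dest": "203.0.113.50"},   # matches a known-bad IP
]
for hit in match_iocs(logs, load_ioc_feed()):
    print(f"ALERT: {hit['src']} contacted known-bad destination {hit['dest']}")
```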
Q: How does AI-driven automation help reduce response times during a security incident?
AI-driven automation can identify and alert on potential incidents within moments, which minimizes the likelihood of escalation. In cases where threats are known, AI can automatically close off vulnerable pathways before an incident spreads. This fast response time allows our teams to focus on more complex challenges.
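Here is a minimal, hypothetical sketch of that kind of automated containment: high-confidence matches on known threats are blocked automatically, while ambiguous alerts are escalated to a human analyst. The alert fields, threshold, and actions are placeholders rather than any particular product's API.

```python
# Minimal sketch of automated containment with a human escalation path.
# Alert structure, threshold, and actions are illustrative placeholders.

def block_ip(ip):
    print(f"Blocking {ip} at the perimeter")  # stand-in for a firewall/EDR call

def escalate(alert):
    print(f"Escalating to on-call analyst: {alert}")

def handle_alert(alert, auto_block_threshold=0.9):
    """Contain known threats automatically; hand ambiguous ones to a human."""
    if alert["known_threat"] and alert["confidence"] >= auto_block_threshold:
        block_ip(alert["source_ip"])
    else:
        escalate(alert)

handle_alert({"source_ip": "198.51.100.7", "known_threat": True, "confidence": 0.97})
handle_alert({"source_ip": "10.0.0.5", "known_threat": False, "confidence": 0.55})
```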
Q: What are some potential risks involved in automating security processes?
Automation has enormous benefits, but it’s important to set limits. For example, while it’s helpful for automating low-level tasks like ticket generation, stale backup cleanup, or report creation, there must be safeguards in place. If you automate OS patching, for example, you need an alert system and a rollback plan in case something goes wrong. Thinking through these contingencies from the outset helps prevent problems down the line.
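To illustrate the safeguards Craig mentions, here is a minimal sketch of patch automation paired with an alert and a rollback path. Every function body is a placeholder for real patching, monitoring, and snapshot tooling; the point is the shape of the workflow, not the specific commands.

```python
# Minimal sketch of guarded patch automation: every automated change is paired
# with a health check, an alert, and a rollback path. All bodies are placeholders.

def take_snapshot(host):
    print(f"Snapshotting {host} before patching")

def apply_updates(host):
    print(f"Applying OS updates to {host}")
    return True  # pretend the patch succeeded

def health_check(host):
    print(f"Running post-patch health checks on {host}")
    return True  # e.g. service status, HTTP probe, log scan

def rollback(host):
    print(f"Restoring {host} from the pre-patch snapshot")

def alert(message):
    print(f"ALERT: {message}")  # stand-in for paging or ticketing

def patch_with_rollback(host):
    take_snapshot(host)
    if not apply_updates(host) or not health_check(host):
        alert(f"Patch on {host} failed checks; rolling back")
        rollback(host)
        return False
    return True

patch_with_rollback("app-server-01")
```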
Q: How do you see Versetal handling these risks to maintain robust security?
Similar to manual processes, any automation needs a rollback plan for failures. This includes setting alerts and making sure teams are prepared to respond swiftly. Pairing automation with monitoring is essential so that we can quickly intervene if anything goes off track, keeping systems resilient and adaptable.
Q: Why is human expertise still essential in AI-driven security, even with advanced tools?
While AI can process massive datasets and spot anomalies, it’s not ready to fully automate complex problem-solving. Automation is great for lower-level tasks, but higher-impact situations still require human judgment and expertise to make nuanced decisions.
Q: What are some scenarios where AI might fall short and human intervention becomes necessary?
AI can only act based on its training data, and large language models aren’t yet foolproof. They can make mistakes or “hallucinate” information, especially when facing new types of threats. Humans play a critical role in identifying these errors and ensuring that actions taken are appropriate and accurate.
Q: In your experience, where is human oversight most crucial, even in highly automated environments?
Automation handles repetitive, time-consuming tasks well, but when complex logic or judgment calls are required, automation alone isn’t enough. Decisions with high stakes or involving unknown variables are best managed by humans, who can apply critical thinking and context.
Q: How do you envision the relationship between human security experts and AI evolving in the near future?
I expect that we’ll see more integration of LLMs into day-to-day operations. As research progresses, we’ll likely have smaller, purpose-built models that can handle specific tasks without overwhelming resources. These models will be able to collaborate with each other, supporting human experts in a more streamlined way. However, I don’t see AI replacing the human element anytime soon—humans and AI will continue to work together to address the increasingly complex challenges in cybersecurity.
Craig’s insights highlight a central theme in today’s cybersecurity world: while AI and automation are revolutionizing threat detection and response, there’s still a critical role for human expertise.
The future of security lies in the effective partnership between advanced technology and skilled professionals who can oversee and guide these powerful tools. As we continue to adapt to evolving threats, AI and automation will be indispensable—but only as part of a broader, balanced approach to cybersecurity.