We’ve spoken to numerous CISOs recently who have expressed frustration with just how much of their day is taken up dealing with false alerts and flags from traditional automated vulnerability and pentesting suites. Our human middleware now spends nearly all of its time trying to verify the real underlying risks to the business, working from tooling that isn’t smart enough to give the right answers in the first place. This is an own goal for the ages.
In effect, our human middleware is just labelling data for a dumb system that lacks the intelligence to process the labels in the first place. The CISO’s sector expertise, years of experience, knowledge of likely attacks, and, God help us, intuition and gut feeling, are effectively reduced to correcting a broken process.
So what is the real job that needs to be done?
Let’s start with looking at the jobs that aren’t going to be needed.
If your job is to scan an external environment, and then report back the results of the scan, that’s not an actual job that is going to exist for too much longer. Many people’s jobs and many CPU cycles are just so much wasted effort. We’ve built up an incredible industry around compliance and tick-box testing, which has not moved the needle on actually preventing cybersecurity attacks.
Naturally, most pentesters have long since evolved to add human analysis on top of basic scanning. However, this analysis is increasingly competing with the insight provided by deep research methods using machine learning, which operate not only at massive scale but with profound analytical depth. Deep learning algorithms are proving effective in extremely complex, sector-specific human specialty areas, for example in cancer screening, where the consistency and accuracy of deep learning models have already proven higher than those of highly trained specialist doctors.
All of the cybersecurity risk frameworks then force us to aggregate and summarize our threat data internally and report it throughout the organisation. This is effectively the pooling layer of a Convolutional Neural Network (CNN) implemented inside the human organization. The pooling layer provides the filtering and iteration to take the threat data and extract the essential items we’re going to prioritize in our cyber defence policy. However, unlike the pooling layer in a CNN, the human middleware pooling layer effectively kills our data source, taking the data out of a real-time feed and into a static report. As we add our human middleware layer, we inevitably poison that signal further with human summary and recall bias. We’re also slowing the entire process down - a lot.
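The pooling analogy can be made concrete. The sketch below is purely illustrative, assuming a hypothetical grid of per-asset threat scores: max pooling keeps only the strongest signal in each region and discards everything else, which is precisely the information loss a human summary layer introduces.

```python
# Illustrative sketch only: max pooling over a hypothetical grid of
# per-asset threat scores. Each block is condensed to its peak value;
# all the surrounding detail is discarded, just like a summary report.

def max_pool_2d(grid, size=2):
    """Downsample a 2D grid by taking the max of each size x size block."""
    pooled = []
    for i in range(0, len(grid), size):
        row = []
        for j in range(0, len(grid[0]), size):
            block = [grid[x][y]
                     for x in range(i, min(i + size, len(grid)))
                     for y in range(j, min(j + size, len(grid[0])))]
            row.append(max(block))
        pooled.append(row)
    return pooled

threat_scores = [
    [0.1, 0.9, 0.2, 0.0],
    [0.3, 0.4, 0.8, 0.1],
    [0.0, 0.2, 0.5, 0.7],
    [0.6, 0.1, 0.3, 0.2],
]
print(max_pool_2d(threat_scores))  # [[0.9, 0.8], [0.6, 0.7]]
```

A 4x4 grid of scores collapses to 2x2: the peaks survive, but the real-time detail behind them is gone.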
Problems with Automated Penetration Testing
As we have seen, automated penetration testing lacks the adaptive decision-making that human testers bring to the table. This gap often leads to inaccurate findings, missed vulnerabilities, and a false sense of security. Let’s outline the key perils of the traditional automated pentesting cycle.
The Hidden Perils of Traditional Automated Penetration Testing
1. False Positives and False Negatives
Traditional automated tools rely on predefined vulnerability signatures and static logic. This creates a dangerous imbalance between detection accuracy and contextual understanding.
- False positives waste valuable remediation time by flagging non-existent vulnerabilities.
- False negatives are even riskier, as critical threats may go completely unnoticed.
Without validation, organizations risk acting on misleading data or, worse, ignoring genuine security flaws.
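To make the imbalance tangible, here is a minimal sketch using invented finding IDs: comparing a scanner’s output against manually validated findings yields the false-positive and false-negative sets, and the familiar precision and recall measures of how trustworthy the tool’s output actually is.

```python
# Illustrative sketch with made-up data: how false positives and false
# negatives distort a scanner's picture of real risk.

scanner_findings = {"CVE-A", "CVE-B", "CVE-C", "CVE-D"}  # what the tool flagged
validated_real   = {"CVE-B", "CVE-D", "CVE-E"}           # what manual validation confirmed

false_positives = scanner_findings - validated_real      # flagged, but not real
false_negatives = validated_real - scanner_findings      # real, but never flagged
true_positives  = scanner_findings & validated_real

precision = len(true_positives) / len(scanner_findings)  # how much output is trustworthy
recall    = len(true_positives) / len(validated_real)    # how much real risk was caught

print(sorted(false_positives))  # ['CVE-A', 'CVE-C']
print(sorted(false_negatives))  # ['CVE-E']
print(precision, recall)        # 0.5 0.666...
```

In this toy example, half the scanner’s output is noise, and a third of the real risk is invisible - and without manual validation, neither number is ever known.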
2. Lack of Contextual Intelligence
Automation cannot interpret business logic, application workflows, or unique infrastructure nuances. Many vulnerabilities depend on contextual understanding—such as chained exploits or logic flaws—that machines fail to grasp.
A tool may detect an exposed port but cannot assess whether it leads to a sensitive database or a non-critical service. This lack of context results in poor prioritization and misplaced mitigation efforts.
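One simple way to express the missing context is to weight a raw severity score by the business criticality of the asset it sits on. The asset names and weights below are invented for illustration only.

```python
# Hypothetical sketch: the same raw finding scores very differently once
# asset context is attached. Criticality weights are invented values.

ASSET_CRITICALITY = {"customer-db": 1.0, "staging-web": 0.3, "legacy-printer": 0.05}

def contextual_risk(base_severity, asset):
    """Weight a raw severity score (0-10) by the asset's business criticality."""
    return base_severity * ASSET_CRITICALITY.get(asset, 0.5)  # 0.5 if unknown

# An identical "exposed port, severity 7" finding on three different assets:
for asset in ASSET_CRITICALITY:
    print(asset, contextual_risk(7.0, asset))
```

The scanner sees three identical findings; the business sees one urgent problem, one minor one, and one that barely matters. That mapping is exactly what pure automation cannot supply.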
3. Limited Exploitation Capabilities
While automated scanners can identify vulnerabilities, they often struggle to exploit them reliably or safely. Real attackers think creatively, chaining multiple weaknesses together for lateral movement or privilege escalation—something most automated systems cannot emulate.
Automated tools typically:
- Fail to handle multi-layered authentication mechanisms.
- Struggle with obfuscated or custom application logic.
- Miss cross-system vulnerabilities requiring deep analysis.
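Chaining is naturally modelled as path-finding in a graph of individually minor weaknesses. The sketch below is a toy example with invented nodes and edges, showing how a sequence of low-severity issues can compose into a path to a high-value target.

```python
# Illustrative sketch: chaining individually minor weaknesses into an
# attack path, modelled as reachability in a directed graph.
from collections import deque

# Edge u -> v means "a foothold at u plus the listed weakness yields v".
edges = {
    "internet":       ["web-server"],      # exposed login page
    "web-server":     ["app-user"],        # weak credential policy
    "app-user":       ["internal-share"],  # overly broad file permissions
    "internal-share": ["domain-admin"],    # cached admin credentials
}

def attack_path(start, target):
    """Breadth-first search for a chain of weaknesses from start to target."""
    queue = deque([[start]])
    seen = {start}
    while queue:
        path = queue.popleft()
        if path[-1] == target:
            return path
        for nxt in edges.get(path[-1], []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None  # no chain exists

print(attack_path("internet", "domain-admin"))
```

No single edge here would top a scanner’s severity list, yet the composed path reaches domain admin - the kind of cross-system reasoning a creative attacker performs routinely.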
4. Incomplete Coverage
No automated tool can test what it cannot see. Hidden assets, shadow IT systems, or misconfigured endpoints frequently go unnoticed by scanners. Additionally, web applications using dynamic content, API-driven architectures, or complex authentication flows often elude automated detection altogether.
This limited visibility creates dangerous blind spots, leaving organizations with an incomplete security picture and a misplaced sense of confidence.
5. Compliance Over Security
Many organizations adopt automated penetration testing solely to meet regulatory compliance—checking boxes rather than securing systems. Automated reports may meet compliance documentation requirements but rarely deliver actionable insights.
Compliance-driven automation encourages a “scan and forget” mindset, where vulnerabilities are logged but not meaningfully addressed. Real security requires active analysis, contextual understanding, and continuous validation.
6. Poor Remediation Guidance
Automated reports typically provide technical vulnerability summaries without explaining how to fix them or why they matter. They rarely prioritize risks based on impact, business function, or exploitability.
As a result, IT teams may struggle to interpret the findings or allocate resources effectively—delaying remediation and increasing exposure time.
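A remediation queue becomes far more actionable when findings are ranked by a combination of impact and exploitability rather than listed by raw count. The finding IDs and scores below are hypothetical.

```python
# Hypothetical sketch: ranking findings by impact x exploitability so
# remediation effort goes where real exposure is highest.

findings = [
    {"id": "F1", "impact": 9, "exploitability": 0.2},  # severe, but hard to exploit
    {"id": "F2", "impact": 4, "exploitability": 0.9},  # moderate, trivially exploitable
    {"id": "F3", "impact": 8, "exploitability": 0.7},  # severe and practical
]

def priority(finding):
    """Simple composite score; real models would also weigh business function."""
    return finding["impact"] * finding["exploitability"]

ranked = sorted(findings, key=priority, reverse=True)
print([f["id"] for f in ranked])  # ['F3', 'F2', 'F1']
```

Note that the nominally most severe finding (F1) drops to the bottom once exploitability is considered - the kind of prioritization signal that raw automated reports rarely provide.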
The Real Job to be Done - Human Expertise
What is needed is a hybrid approach where we can combine the unique human benefits with advanced machine learning to provide a new workflow blueprint.
We need professional offensive cybersecurity experts, who know and understand the business value chain and how it may be technically exploitable, to provide the business context and oversight that enables the smart automated agents to work at scale.
Essentially this is a hybrid, multi-disciplinary approach: combining advanced cybersecurity expertise and business expertise with highly specialized knowledge of threats in the relevant industry sector, to add human intelligence and input that drives our Agentic AI process.
This is the only way to compensate for the lack of suitably qualified human resources, not to mention the sheer size of the attack surface and the complexity of the environment.
What about using our human intelligence alone? Why don’t we use our unique human intelligence to perform a Red Team attack simulation, with active knowledge of sector attack methods, and use our human skill and ingenuity to discover zero-day exploits with hand-crafted code customised for the attack target’s variables? For extremely high-value assets this approach would undoubtedly produce optimal results, but for most use cases, the lack of qualified professionals and the sheer size of the attack surface preclude this method.
The most effective approach is a truly hybrid assessment model: the human security functions perform in-depth contextual analysis and assignment of business impact, while empowering the risk models to act intelligently.
A hybrid model provides:
- Improved accuracy through manual validation of automated findings.
- Contextual prioritization of vulnerabilities based on real-world risk.
- Tailored remediation strategies aligned with business objectives.
- Continuous improvement through feedback-driven refinement of automated processes.
- Active distribution of specialist business and cybersecurity knowledge via intelligent agents, rather than leaving it siloed.
The VerifiedThreat Hybrid Model for Continual Smart Assessments
Best Practices to Avoid the Pitfalls of Automated Penetration Testing
- Continual Assessment: Move to a continual assessment model in which testing goals are aligned with specific business and security needs.
- Align Asset Discovery with Business Impact: Ensure all core assets are specified, in scope, and continually monitored.
- Integrate Automation Wisely: Use tools to enhance—not replace—expert analysis, by embedding the business impact value into the system.
- Maintain Comprehensive Asset Visibility: Ensure all systems, APIs, and cloud assets are included in scope, each with its own customisable KPIs and real-time reporting.
- Prioritize Risk Over Quantity: Focus on vulnerabilities with high business impact - this not only cuts down the noise but helps to weed out the false positives.
- Incorporate Continuous Improvement in Defences: Move beyond one-time assessments to demonstrable, ongoing progress in threat defences.
The Ideal Approach: Combining Business Smarts with Intelligent Agents
Relying solely on automated tools creates obvious blind spots. VerifiedThreat combines intelligent agents with human business impact analysis, to offer a much more resilient and proactive approach to overall security. This combined strategy ensures organizations not only prevent attacks but also anticipate and neutralize unknown threats.
- VerifiedThreat Product: See our product pages for a feature summary.
- VerifiedThreat Case Studies: See our case studies here.
- VerifiedThreat Blog: Read in-depth on using VerifiedThreat dynamically here.
- VerifiedThreat Demo: View our demo and arrange a call with our team.
