Within the CONSOLE project, validation was not treated as a final checkpoint, but as a structured process embedded into real operational environments.
Rather than limiting the platform to controlled demonstrations, CONSOLE was configured and tested across three internal pilot environments operating under different infrastructure setups, including cloud-based and fully on-premises deployments. The objective was clear: introduce the platform into real development workflows, execute defined security scenarios, and observe measurable outcomes.
This internal validation phase lays the foundation for the next stage of evaluation, where six external SMEs will assess the platform within their own development environments as part of the External Validation Activities.
Before execution began, a common validation framework was defined to ensure that testing would be consistent across environments while remaining technically relevant for each deployment.
The framework specified which security components of CONSOLE would be activated, how projects should be onboarded, and how testing scenarios should be structured and executed. Instead of generic scans, the validation relied on predefined scenarios aligned with concrete vulnerability categories and typical software development risks.
Each scenario was designed to verify specific capabilities of the platform, including detection accuracy, severity classification, dashboard aggregation, and integration into development workflows. At the same time, the framework allowed each environment to apply the methodology according to its own infrastructure model and technology stack.
This ensured that validation was both comparable across ecosystems and representative of real operational conditions.
Following the configuration phase, the defined validation scenarios were executed within active development environments.
Controlled vulnerabilities and configuration weaknesses were intentionally introduced into real projects to verify the platform’s detection capabilities. These scenarios targeted specific categories of software risks and allowed the pilots to assess how CONSOLE handled issue identification, classification, and aggregation.
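The exact seeded scenarios are not published, but a typical controlled weakness of this kind might look like the following hypothetical snippet, where user input reaches an SQL query unsanitized, precisely the class of issue a static analysis component is expected to flag, alongside the remediated form the scanner should accept after a fix:

```python
import sqlite3

def find_user(conn: sqlite3.Connection, username: str):
    # Seeded weakness (CWE-89, SQL injection): user input is
    # concatenated directly into the query string.
    query = "SELECT id, email FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_fixed(conn: sqlite3.Connection, username: str):
    # Remediated form: a parameterized query, which a scanner
    # should no longer report once the fix is merged.
    return conn.execute(
        "SELECT id, email FROM users WHERE name = ?", (username,)
    ).fetchall()
```

Seeding both the vulnerable and the fixed variant lets a pilot verify detection accuracy in both directions: the issue must be reported where it exists and must disappear once remediated.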
The evaluation did not focus solely on whether vulnerabilities were detected. It also examined how findings were presented through the dashboard, how severity levels were assigned, and how effectively issues could be tracked and managed across user roles.
Across the different infrastructure setups, CONSOLE detected the introduced issues and consolidated results from multiple analysis components into a unified interface. This enabled structured tracking of security findings and project-level visibility that was previously fragmented across separate tools.
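Conceptually, consolidating results from several analysis components means grouping overlapping reports and resolving conflicting severity assessments. The sketch below illustrates that idea only; the class and function names are ours and do not reflect CONSOLE's actual API:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Finding:
    tool: str        # analysis component that produced the result
    rule_id: str     # e.g. a CWE identifier or scanner-specific rule
    location: str    # file and line where the issue was found
    severity: str    # "low" | "medium" | "high" | "critical"

def aggregate(findings: list[Finding]) -> dict[str, list[Finding]]:
    """Group raw results from several scanners by location, so duplicate
    reports of the same underlying issue surface as a single entry."""
    merged: dict[str, list[Finding]] = {}
    for f in findings:
        merged.setdefault(f.location, []).append(f)
    return merged

def worst_severity(group: list[Finding]) -> str:
    # When tools disagree, display the highest reported severity.
    order = ["low", "medium", "high", "critical"]
    return max((f.severity for f in group), key=order.index)
```

The design choice this sketch captures is that deduplication happens before presentation: the dashboard shows one consolidated issue per location rather than one row per tool.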
Over successive scan cycles, teams monitored how findings evolved, adjusted rule configurations to improve signal quality, and observed remediation progress. The validation also assessed integration into existing workflows, including compatibility with development pipelines and operational constraints specific to each environment.
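Monitoring how findings evolve across scan cycles amounts to comparing the finding sets of consecutive scans. A minimal illustrative sketch (our own helper, not part of the platform) of that comparison:

```python
def compare_cycles(previous: set[str], current: set[str]) -> dict[str, set[str]]:
    """Compare finding identifiers from two scan cycles to show
    remediation progress: what was fixed, what is new, what persists."""
    return {
        "resolved": previous - current,    # present before, gone now
        "new": current - previous,         # introduced since last scan
        "persisting": previous & current,  # still open, needs tracking
    }
```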
As a result of this process, the pilots reported clearer visibility into their risk exposure and a more structured approach to managing vulnerabilities within their development lifecycle. This contributed to more informed release decisions and enabled teams to address issues that had previously remained undetected.
The evaluation confirmed that CONSOLE maintains consistent detection, aggregation, and reporting capabilities across heterogeneous environments. It operated within CI/CD-based cloud infrastructures as well as in fully isolated on-premises systems without relying on external services.
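One common pattern for the CI/CD integration mentioned above is a severity gate: a pipeline step that fails the build when findings meet a configured threshold. The following is a generic sketch of that pattern under our own assumed finding format, not CONSOLE's documented interface:

```python
def gate(findings: list[dict], fail_on: str = "high") -> int:
    """Return a nonzero exit code when any finding meets or exceeds the
    configured severity threshold, causing the CI stage to fail."""
    order = ["low", "medium", "high", "critical"]
    threshold = order.index(fail_on)
    blocking = [f for f in findings if order.index(f["severity"]) >= threshold]
    for f in blocking:
        print(f"BLOCKING: {f['rule']} at {f['location']} ({f['severity']})")
    # CI systems treat a nonzero exit code as a failed stage.
    return 1 if blocking else 0
```

Because the gate relies only on exit codes, the same pattern works in cloud pipelines and in fully isolated on-premises runners alike.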
By executing structured validation activities in operational settings, the project demonstrated that security analysis can be integrated into development lifecycles in a controlled and measurable manner.
The result is a platform that has been configured, tested, and evaluated under real conditions, demonstrating its applicability across different software ecosystems and supporting the practical adoption of secure-by-design development practices.