Generative AI in Application Security report from Checkmarx

Posted on Monday, August 12, 2024 by RICHARD HARRIS, Executive Editor

Checkmarx, the cloud-native application security provider, has published its Seven Steps to Safely Use Generative AI in Application Security report, which analyzes key concerns, usage patterns, and buying behaviors relating to the use of AI in enterprise application development. The global study exposed the tension between empowering development and application security (AppSec) teams with the productivity benefits of AI tools and establishing governance to mitigate emerging risks.

"Enterprise CISOs are grappling with the need to understand and manage new risks around generative AI without stifling innovation and becoming roadblocks within their organizations. GenAI can help time-pressured development teams scale to produce more code more quickly, but emerging problems such as AI hallucinations usher in a new era of risk that can be hard to quantify. Checkmarx has successfully foreseen the problems that can arise with AI-generated code and we’re proud to be delivering next-stage solutions within the Checkmarx One platform today," said Sandeep Johri, CEO at Checkmarx.

Highlights of the Generative AI in Application Security report from Checkmarx include these findings, which show the difficulty of establishing and enforcing governance:

  • Only 29% of organizations have established any form of governance
  • 15% of respondents have explicitly prohibited the use of AI tools for code generation within their organizations
  • 99% report that AI code-generation tools are being used regardless of prohibitions
  • 70% say there is no centralized strategy for generative AI, with purchasing decisions made on an ad hoc basis by individual departments
  • 60% are worried about GenAI risks such as AI hallucinations
  • 80% are worried about security threats stemming from developers using AI
     

Only 29% of organizations have established any type of governance on the use of GenAI tools in their organizations

Many CISOs are seeking to build the right level and types of governance to permit their application development teams to use AI coding tools. Given the ease of adoption, flexibility, and utility of these tools, security leaders clearly understand their potential to speed and scale application development in a time-pressured business environment.

However, generative AI is currently unable to follow secure coding practices or to produce truly secure code, which motivates some security teams to consider AI-driven security tools to help manage the proliferation of development teams’ AI-generated code. The Checkmarx study found that:

  • 47% of respondents indicated interest in allowing AI to make unsupervised changes to code
  • 6% said that they wouldn’t trust AI to be involved in security actions within their vendor tools
     

"The responses of these global CISOs expose the reality that developers are using AI for application development even though it can’t reliably create secure code, which means that security teams are being hit with a flood of new, vulnerable code to manage. This illustrates the need for security teams to have their own productivity tools to manage, correlate and help them prioritize vulnerabilities, as Checkmarx One is designed to help them do," said Kobi Tzruya, Chief Product Officer at Checkmarx.

Methodology

In early 2024 Checkmarx commissioned a global research firm to conduct a survey of 900 CISOs and application security professionals in companies in North America, Europe and Asia-Pacific with annual revenue of $750 million or more.

