The outsourcing of our activities, our thoughts, and even our decisions to AI keeps accelerating. There are, of course, many advantages: easier work, fast queries and searches over the largest database humanity has ever built, summaries of books and articles generated in minutes. On the other hand, our growing passivity, or, now that we have such an obliging and efficient servant, the tempting invitation to inaction, can leave us caught in a serious trap, both for independent intellectual action and even for our emotions. We all know people who already have in AI a constant interlocutor, a "friend" closer than humans and family...
A fashionable topic, already dressed in the traditional jargon with which computing gifts us, is "vibe coding", something like coding by feel. It names the very rapid advance in the ability of AI applications to generate computer code. From the seasoned experience of someone who used the likes of Basic, Fortran, Algol, and Cobol, I can say it amounts to the popularization of the "architecture of computational thinking", whose mastery was once considered quite valuable: in the résumés of professionals of old, it was not rare to find, among the languages listed, the artificial ones of computing, included to show proficiency in structured logical thinking and command of a language that could translate it into actions and algorithms for a computer. Well, today AI can take verbal instructions and quickly generate the code to implement them.

Again, there is the other side of the coin: in a recent report, Veracode, a global leader in application security and risk management, analyzed 80 curated coding tasks with known potential for vulnerabilities, completed by more than 100 different LLMs. The result is worrying: in 45% of the cases, the models chose an insecure way to write the code. Advances in syntactic and functional quality have not been matched by progress in security, and the pattern held across successive LLM versions; larger models did no better than smaller ones, which suggests a systemic issue rather than a scaling problem. The numbers Veracode reports are striking: the vulnerabilities identified fall within the OWASP Top 10 (the Open Web Application Security Project's list of the most critical web application security risks). Java was the riskiest language, failing in over 70% of cases; Python, C#, and JavaScript failed between 38% and 45% of the time; and the models left code exposed to cross-site scripting (CWE-80) in 86% of cases and to log injection (CWE-117) in 88%.
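To make the "insecure option" concrete, here is a hypothetical sketch, in Python with sqlite3, of the two completions a model can choose between when asked to "look up a user by name". The function names and the users table are invented for illustration; the report does not publish its test code.

```python
import sqlite3

def find_user_insecure(conn: sqlite3.Connection, name: str):
    # Insecure completion: user input interpolated straight into SQL.
    # A name like "x' OR '1'='1" turns this into a query that returns
    # every row: SQL injection (CWE-89, OWASP A03).
    query = f"SELECT id, name FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_secure(conn: sqlite3.Connection, name: str):
    # Secure completion: a parameterized query, so the driver treats
    # the input strictly as data, never as SQL.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Both versions return the right rows in a quick functional test, which is precisely the trap: nothing in the "vibe" distinguishes them until someone feeds the first one a crafted input.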
Using AI for coding, one gains productivity but loses robustness. Worse: the same AI that generates code for legitimate use can produce malicious code, or attacks that exploit the existing gaps... In other words, attackers today do not even need to be competent ("hackers") in the subject: even those with little technical skill will be able to use AI to exploit vulnerabilities they did not even know existed.
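Take log injection (CWE-117), cited above as one of the weaknesses the models guarded against least. What follows is a minimal sketch, assuming Python's standard logging module, of the one-line sanitization a hurried, AI-completed logger tends to omit; the names are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")
log = logging.getLogger("app")

def log_login_attempt(username: str) -> None:
    # Without this filter, a username containing "\n" lets an attacker
    # forge extra, legitimate-looking log lines (log injection, CWE-117).
    safe = "".join(ch for ch in username if ch.isprintable())
    log.info("login attempt for user=%r", safe)

# A crafted username trying to fake a successful admin login on a new line:
log_login_attempt("bob\n2025-07-30 10:00:00 login OK user='admin'")
```

The fix is trivial once it is named; the danger is that neither the prompt nor the generated code ever names it.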
If programming was once an act of precision and ingenuity, almost an intellectual art, today, in the rush to produce, we risk trading logic and clarity for a far less noble motto: "quick and dirty". It will work until someone finds the weak spots, as happens with kludges in general.
Reference: Veracode, "2025 GenAI Code Security Report": https://www.veracode.com/resources/analyst-reports/2025-genai-code-security-report/
Press release: "AI-Generated Code Poses Major Security Risks in Nearly Half of All Development Tasks, Veracode Research Reveals", Business Wire, Jul 30, 2025: https://www.businesswire.com/news/home/20250730694951/en/AI-Generated-Code-Poses-Major-Security-Risks-in-Nearly-Half-of-All-Development-Tasks-Veracode-Research-Reveals