AI Tools Surge in Developer Workflows but Trust Remains Key Hurdle, Survey Reveals
A new survey conducted in partnership with OpenAI reveals that a record number of developers are now integrating AI into their daily work for learning purposes. Yet traditional online resources remain their go-to for validation, and trust in AI outputs continues to be a major barrier.
The findings, based on responses from thousands of users in February, underscore a rapidly shifting landscape where AI-assisted knowledge is becoming indispensable—but not yet fully reliable in the eyes of those who build software.
“Developers are eager to leverage AI for efficiency, but they aren’t ready to take its answers at face value,” said Dr. Elena Vargas, lead researcher at the organization behind the study. “They use AI to get a head start, then cross-check every result against documentation, forums, or peer-reviewed code.”
The survey found that more than three-quarters of respondents now use AI tools at work for learning new technologies, debugging, or generating code snippets. That figure represents a sharp increase from just a year ago.
However, nearly 60% of developers reported that they still turn to Stack Overflow, official documentation, or other community-driven resources to verify AI-generated information before using it in production.
Background
The poll, designed in collaboration with OpenAI, ran throughout February 2025 and captured responses from a diverse cross-section of the global developer community. It aimed to track evolving attitudes toward AI-assisted knowledge acquisition.

Previous surveys had shown cautious adoption, but the current data marks a tipping point: AI is now a standard part of the developer toolkit, even if it is not yet a trusted one.

The partnership with OpenAI—the company behind many popular large language models—provided unique insight into how developers interact with the very systems they are also helping to improve.
What This Means
The results signal a clear opportunity for AI tool makers: to earn developers' full trust, models must become more transparent about their confidence levels and sources. Until then, human expertise and community validation remain irreplaceable.
For individual developers, the trend means that domain expertise is still highly valued. AI can accelerate learning and prototyping, but deep technical knowledge is required to judge and correct AI outputs effectively.
Companies investing in developer productivity tools should focus on hybrid workflows that combine AI speed with human oversight. The survey suggests that a purely AI-driven development environment is not yet viable—or desirable.
“The barrier isn’t about capability; it’s about trust,” Vargas added. “Once developers feel they can rely on AI as confidently as they do a seasoned colleague, we’ll see a fundamental shift in how code is written and learned.”