The 80-Point Trust Collapse
Why Enterprises Are Racing to Deploy AI They Don't Actually Trust
Why it matters: Organisations are pouring investment into autonomous AI systems whilst acknowledging they cannot responsibly deploy them. This 80 percentage-point gap between ambition and readiness creates compounding risks—operational failures, compliance violations, and wasted capital—that boards and security leaders must address before scaling further.
The Paradox at the Heart of Enterprise AI
Something peculiar is happening in boardrooms across the developed world. Executives are signing off on substantial investments in agentic AI: systems designed to operate autonomously, make decisions, and take actions without human intervention. Yet when asked whether they trust these systems to actually do so, the answer is an overwhelming no.
The numbers are stark. According to research conducted by Harvard Business Review in partnership with Workato and AWS, 86% of organisations plan to increase their investment in agentic AI over the next two years. Yet only 6% fully trust AI agents to autonomously handle core business processes [3].
That 80 percentage-point gap represents one of the most significant market dysfunctions in enterprise technology today. Organisations are not simply being cautious—they are actively investing in capabilities they acknowledge they cannot responsibly use.
Three Studies, One Consistent Finding
The trust gap is not an isolated observation. Three independently conducted research programmes, released between January and December 2025, reveal the same pattern across different populations and methodologies.
The SAS and IDC Data and AI Impact Report examined trustworthiness and business impact across global regions and industries. It found that whilst 92% of Australian and New Zealand organisations have progressed beyond AI experimentation to pursue high-impact use cases, only 40% have implemented trustworthy governance systems to support these deployments [1]. Jonathan Butow, SAS Head of AI and Innovation for Australia and New Zealand, characterised this 52 percentage-point disconnect as “one of the most important issues shaping the market” [1].
Stack Overflow’s 2025 Developer Survey, which gathered responses from over 49,000 developers globally, found that 84% now use AI tools daily, up from 76% in 2024. Yet trust has moved in the opposite direction. Only 33% of developers trust the accuracy of AI-generated output, and a mere 3% report “highly trusting” AI results [2]. More concerning still: 46% of developers actively distrust AI accuracy, a sharp rise from 31% the previous year [2].
The Harvard Business Review study went further, examining specifically how enterprises approach autonomous AI systems. Beyond the headline 6% trust figure, it found that 43% of organisations trust AI agents only with limited or routine operational tasks, whilst 39% restrict them to supervised use cases or non-core processes [3].
Where Infrastructure Falls Short
The trust deficit is not simply a matter of cultural caution or change management. Organisations cite concrete technical and operational barriers that prevent responsible autonomous deployment.
When enterprise leaders were asked what prevents them from trusting AI agents with core processes, four constraints emerged consistently:
Cybersecurity and privacy concerns topped the list at 31%: fear of unauthorised access, data exposure, or the possibility that AI agents could be hijacked or manipulated [3].
Data output quality came second at 23%: concern that agent outputs will be inaccurate, incomplete, or biased [3].
Unready business processes accounted for 22%: existing workflows simply are not structured for automated decision-making [3].
Technology infrastructure limitations tied at 22%: organisations lack scalable, secure, auditable systems to support autonomous AI [3].
The infrastructure picture is particularly sobering. Only 20% of organisations report that their technology infrastructure is fully prepared for agentic AI in critical processes. Just 15% say their data and systems are ready. And only 12% have adequate risk and governance controls in place [3].
Using a composite readiness index, the Harvard Business Review research classified organisations into three groups:
Leaders (27%) with mature infrastructure, governance, and risk controls;
Followers (50%) with mixed readiness;
Laggards (24%) with inadequate foundational capabilities [3].
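To make the idea of a composite readiness index concrete, the sketch below shows one hypothetical way such a classification could work: score each readiness dimension on a 0-to-1 scale, average them, and apply thresholds to label an organisation a Leader, Follower, or Laggard. The dimensions, equal weighting, and cut-offs here are illustrative assumptions only; they are not the methodology used in the Harvard Business Review research.

```python
# Hypothetical illustration of a composite readiness index.
# Dimensions, weights, and thresholds are assumptions for demonstration;
# they are NOT taken from the HBR/Workato/AWS study.

from dataclasses import dataclass


@dataclass
class ReadinessScores:
    infrastructure: float        # 0.0 (unprepared) to 1.0 (fully prepared)
    data_and_systems: float      # data quality and system readiness
    risk_and_governance: float   # maturity of risk and governance controls


def composite_index(scores: ReadinessScores) -> float:
    """Equally weighted average of the three readiness dimensions."""
    return (scores.infrastructure
            + scores.data_and_systems
            + scores.risk_and_governance) / 3


def classify(scores: ReadinessScores) -> str:
    """Bucket an organisation using illustrative cut-offs."""
    index = composite_index(scores)
    if index >= 0.7:
        return "Leader"     # mature infrastructure, governance, risk controls
    if index >= 0.4:
        return "Follower"   # mixed readiness
    return "Laggard"        # inadequate foundational capabilities


if __name__ == "__main__":
    example = ReadinessScores(infrastructure=0.8,
                              data_and_systems=0.6,
                              risk_and_governance=0.75)
    print(classify(example))  # prints "Leader" (index ≈ 0.72)
```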
The Developer Warning Signal
Perhaps the most telling indicator comes from those closest to AI systems in practice. Experienced developers—those with accountability for production systems—show the lowest confidence in AI outputs.
Among senior developers, only 2.6% report “high trust” in AI-generated results, whilst 20% express “high distrust” [2]. This is not scepticism born of unfamiliarity. These are practitioners who use AI tools daily and understand their limitations intimately.
The root cause appears to be code quality. Stack Overflow CEO Prashanth Chandrasekar reported that developer frustration centres on debugging “AI solutions that are almost right, but not quite” [2]. Sixty-six percent of developers now spend more time fixing AI-generated code than they anticipated [2].
This creates a verification burden that organisations may be underestimating. Despite relying on AI daily, 35% of developers turn to Stack Overflow specifically after AI-generated code fails [2]. Human-verified, community-sourced knowledge remains more reliable than AI output for critical validation.
My Take: The gap between developer distrust and executive enthusiasm suggests potential misalignment in organisational decision-making. Technical teams understand current AI limitations; leadership may not. Organisations would be wise to create structured feedback loops between these groups before scaling autonomous deployment.
Sectoral and Regional Variations
Not all organisations face the trust gap equally. The SAS-IDC research identifies meaningful variations across sectors and regions that offer both warnings and blueprints.
Financial services leads responsible AI adoption globally, driven by regulatory oversight, mature risk management frameworks, and established model validation practices [1]. Banking regulations explicitly govern model governance, and customer trust serves as a competitive differentiator. This sector’s relative success demonstrates that governance-first approaches yield both trustworthiness and business value.
Government organisations present a different picture. Only 15.3% operate at the highest level of trustworthy AI practices, compared to the global average of 19.8% [1]. More concerning, 46% of government organisations fall into what researchers term “underutilisation or overreliance quadrants”—meaning many place strong confidence in AI systems that may not yet be trustworthy [1].
Given that government systems affect citizens’ access to services, benefits, and legal rights, this gap poses significant public sector risk.
Regionally, Australia and New Zealand emerge as leaders in pairing high trustworthiness with strong business impact, a rare combination globally [1]. This success reflects investment in governance frameworks, mature analytics cultures, regulatory alignment, and sustained focus on responsible AI practices.
Risks and Constraints
The trust gap creates compounding risks that organisations must acknowledge honestly.
Operational risk emerges when organisations deploy AI systems without mature governance frameworks. When AI failures occur in production—incorrect recommendations, misclassified data, biased decisions—organisations often lack audit trails to explain failures, accountability structures to assign responsibility, rollback capabilities to recover quickly, and governance records to satisfy regulatory inquiry.
Compliance and regulatory risk is intensifying. UK organisations face increasing scrutiny through AI Bill proposals emphasising transparency and accountability, Financial Conduct Authority expectations on AI governance, Data Protection Act compliance requirements for automated decision-making, and emerging UK AI Standards frameworks. Organisations without governance maturity will find themselves increasingly non-compliant with evolving regulations.
Financial risk stems from a capital efficiency problem. Investments in AI capability are not translating into measurable business value because governance constraints prevent deployment at scale. The developers’ experience provides a proxy: verification and debugging overhead, technical debt accumulation, quality assurance resources diverted to AI review, and extended time-to-production for AI-assisted projects. Scaled across an organisation, these hidden costs may exceed anticipated efficiency gains.
Talent risk compounds everything else. Forty-four percent of organisations are prioritising training in agentic AI oversight, and 39% are building governance frameworks [3]. However, many organisations lack AI governance expertise within existing teams, frameworks for AI accountability and decision-making authority, and processes to embed responsible AI practices across functions. The shortage of skilled AI governance professionals means organisations competing for limited talent will face extended timelines and elevated costs.
What to Do Next
The research offers a clear pathway forward for organisations willing to prioritise governance alongside capability.
For enterprise leaders: Establish trustworthy AI governance frameworks before scaling autonomous deployment. Use available benchmarking tools, such as the SAS-IDC Trustworthy AI Index, to assess current maturity. Australia and New Zealand’s success demonstrates that governance-first approaches yield both trustworthiness and business impact [1].
Invest in governance talent and capability now. With 44% of organisations prioritising AI oversight training, competing early for governance expertise is critical [3]. Build cross-functional AI governance teams spanning technology, compliance, and business process ownership.
Conduct infrastructure and risk readiness assessments. Only 20% of organisations report full infrastructure readiness [3]; gap analysis is essential. Evaluate data quality, cybersecurity posture, and governance control maturity before committing to autonomous deployment timelines.
For government organisations: Develop sector-wide AI governance standards to accelerate capability building. Government organisations lag significantly in trustworthy AI practices, and shared frameworks could close the gap more efficiently than individual efforts [1].
Conduct urgent audits of existing AI deployments. With 46% of government organisations falling into overreliance quadrants [1], assessing the trustworthiness of current systems should be an immediate priority.
For boards and security leaders: Bridge the gap between technical teams and executive leadership. Create structured feedback loops that surface developer concerns about AI limitations before they manifest as production failures.
Treat governance investment as enabling, not constraining. The research is clear: organisations with mature governance are better positioned to scale AI confidently and capture value faster than those racing ahead without safeguards.
Disclaimer: This article represents analysis based on publicly available research as of December 2025. Specific organisational circumstances may vary, and readers should conduct their own assessments before making governance or investment decisions.
References
[1] SAS and IDC. “Data and AI Impact Report: The Trust Imperative.” SAS Institute Inc., January 2025. https://www.sas.com/en_us/news/analyst-viewpoints/idc-data-ai-impact-report.html
[2] Stack Overflow. “2025 Developer Survey.” Stack Overflow Inc., July 2025. https://survey.stackoverflow.co/2025/
[3] Harvard Business Review, Workato, and AWS. “The Enterprise AI Trust Gap: How Leaders Are Building Agentic AI Responsibly.” Harvard Business Review, December 2025. https://fortune.com/2025/12/09/harvard-business-review-survey-only-6-percent-companies-trust-ai-agents/





