Singapore – A new global report by SAS and IDC has revealed that business and technology leaders are increasingly confident in generative AI, even as many organisations lack adequate governance and ethical standards.
The study found that those prioritising AI governance, transparency and ethical standards were significantly more likely to achieve stronger returns from their AI projects, while nearly two-thirds of organisations still underinvest in mechanisms to ensure AI reliability.
A notable finding of the research was the perceived trust gap: organisations with minimal investment in AI governance reported generative AI as being substantially more trustworthy than traditional AI models, such as machine learning, even though the latter are more established and technically explainable.
Bryan Harris, chief technology officer at SAS, said that developing trust in AI amid rapid digital transformation is imperative.
“To achieve this, the AI industry must increase the success rate of implementations, humans must critically review AI results, and leadership must empower the workforce with AI.”
The global survey gathered responses from 2,375 professionals across the Asia Pacific, North America, Latin America, Europe, the Middle East, and Africa. Participants included both IT experts and business decision makers, offering a balanced view of technology and operational priorities.
Generative AI and agentic AI were identified as the most trusted forms of artificial intelligence. Nearly half of the respondents expressed complete trust in generative AI, while one-third did so for agentic AI.
In contrast, fewer than one in five reported full trust in traditional AI systems. Respondents also raised key concerns related to data privacy, transparency and the ethical use of AI. Interest in quantum AI is also rising, with almost one-third of decision makers claiming familiarity with the technology, despite most real-world applications still being at an early stage.
The research also highlighted the rapid acceleration of AI adoption, particularly in generative models, which have now overtaken traditional AI in both usage and visibility. However, this expansion has brought new operational and ethical challenges.
While nearly eight in ten organisations claim to fully trust their AI systems, less than half have implemented governance frameworks or safeguards to ensure these systems are truly reliable.
Developing governance and responsible AI policies remains a low strategic priority for many businesses, which could restrict long-term value creation.
“Trying to scale GenAI on weak foundations is like building a skyscraper on quicksand,” commented Derek Yueh, research lead at LinkedIn’s B2B Institute.
“The report unpacks the nuances of trust in AI and the impact of AI, and offers essential guidance for leaders navigating the fast-moving world of technology.”
IDC classified respondents into two groups: trustworthy AI leaders and followers. Organisations in the leader group—those investing heavily in governance, ethical standards and transparency—were considerably more likely to achieve double or higher returns on AI investments compared to their peers.
As AI becomes more embedded in business operations, the quality and governance of underlying data have become increasingly critical. The study identified weak data infrastructure, limited governance mechanisms and a shortage of AI talent as the primary challenges facing organisations.
Nearly half of respondents cited fragmented or inefficient data environments as the biggest barrier to AI success. Additional issues included insufficient governance processes and difficulty accessing relevant data sources, alongside ongoing concerns about privacy, compliance and data quality.
“Change is the only constant and AI is evolving faster than ever, but speed without trust is a risk we can’t afford,” Preeti Shivpuri, partner and national leader of trustworthy AI and data risk at Deloitte, stated.
“As a passionate leader in trustworthy AI, I see the path forward is clear: trust and data governance must drive the race.”
IDC also introduced two new measures — the Trustworthy AI Index and the Impact Index — to assess how governance practices and business value align globally.
Countries such as Ireland, Australia, and New Zealand scored highly on both indices, while others, including China and South Korea, displayed strong adoption but weaker safeguards. These findings underscore that trust alone does not guarantee impact; tangible investment in governance and transparency remains essential to achieving sustainable value from AI.