
"Only 40% of organizations invest in "trustworthy AI," or AI with guardrails. Yet, those investing the least view genAI as 200% more trustworthy than traditional, proven machine learning - despite the latter being more established and having greater reliability and explainability. "Our research shows a contradiction: that forms of AI with humanlike interactivity and social familiarity seem to encourage the greatest trust, regardless of actual reliability or accuracy," said Kathy Lange, research director of the AI and Automation Practice at IDC."
"IDC's study, sponsored by SAS, found that organizations that build governance, ethics, and transparency guardrails are 60% more likely to double AI project ROI - highlighting the cost of ignoring responsible AI practices. The global survey of 2,375 IT professionals and line-of-business leaders found that strategic AI use, not just cost-cutting, drives market share and customer gains. GenAI has rapidly outpaced traditional AI, and as organizations move toward agentic AI, its influence on decision-making - often hidden - will only grow."
Trust in generative AI has surged globally, driven largely by its humanlike responses rather than demonstrated reliability. Only 40% of organizations invest in trustworthy AI guardrails such as governance, ethics, and transparency, yet organizations that implement these guardrails are 60% more likely to double AI project ROI. Many of the organizations that invest the least perceive genAI as substantially more trustworthy than traditional machine learning, despite traditional methods offering greater reliability and explainability. GenAI and emerging agentic AI are rapidly influencing decision-making, and nearly half of companies face a trust gap that reduces ROI and slows progress.
Read at Computerworld