Rising Global Trust in Generative AI Amid Security Concerns and Challenges
A new study from SAS, a global leader in data and AI, examines organizations' growing confidence in generative AI, evaluating how the technology is used, what impact it has, and how much it is trusted. The IDC Data and AI Impact Report, commissioned by SAS, highlights a significant shift in how IT professionals and business leaders perceive generative AI compared with traditional AI methods.
The report reveals a telling paradox: while only 40% of organizations invest in making their AI systems reliable through governance, explainability, and ethical safeguards, those that prioritize trustworthy AI are 60% more likely to double the return on their AI investments. Strikingly, generative AI technologies such as ChatGPT are viewed as 200% more trustworthy than established approaches like traditional machine learning, even though the latter is well recognized for its stability and transparency.
Kathy Lange, research director of AI and Automation at IDC, emphasizes that this points to a disconnect: AI systems that engage users in a more human-like manner inspire greater trust, regardless of their actual reliability or accuracy. As discussion of generative AI intensifies, important questions arise: Is generative AI truly trustworthy, or does it merely create an illusion of trustworthiness? And are leaders implementing the necessary safeguards and governance for this emerging technology?
Global Survey Insights
The global survey gathered responses from 2,375 participants across North America, Latin America, Europe, the Middle East, Africa, and Asia-Pacific, balancing the perspectives of IT professionals and business leaders. Notably, nearly half (48%) of respondents reported having