As AI adoption accelerates, the consequences, intended and not, are becoming harder to ignore. From biased algorithms to opaque decision-making and chatbot misinformation, companies are increasingly exposed to legal, reputational, and ethical risks. And with the rollback of federal regulation, many are navigating this landscape with fewer guardrails. But fewer guardrails doesn't mean fewer consequences; it means the burden of responsibility shifts more squarely onto the businesses deploying these systems. Legal, financial, and reputational risks haven't disappeared; they've just moved upstream.
Over 40 minutes, the panel returned again and again to three themes: data quality, organizational alignment, and cultural readiness. The consensus was clear: AI doesn't create order from chaos. If organizations don't evolve their culture and their standards, AI will accelerate dysfunction, not fix it. Clean data isn't optional anymore. Allen set the tone from the executive perspective, arguing that enterprises must build alignment on high-quality, structured, and standardized data within teams and across workflows, applications, and departments.
Hallucinations have long been considered a problem for generative AI, with chatbots such as ChatGPT, Claude, and Gemini prone to producing "confidently incorrect" answers in response to queries. This can pose a serious problem for users. There are several cases of lawyers, for example, citing non-existent cases as precedent or presenting the wrong conclusions and outcomes from cases that do exist. Those instances surfaced only because they were embarrassingly public, but it's an experience nearly every user will have had at some point.
Every Fortune 500 CEO investing in AI right now faces the same brutal math. They're spending $590-$1,400 per employee annually on AI tools while 95% of their corporate AI initiatives fail to reach production. Meanwhile, employees using personal AI tools succeed at a 40% rate. The disconnect isn't technological; it's operational. Companies are struggling with a crisis in AI measurement.
Since the AI boom kicked off with ChatGPT's debut about three years ago, the technology's breathtaking capabilities have amazed the world. Tech companies have raced to develop better AI systems even as experts warn of the technology's risks, including existential threats like engineered pandemics, large-scale misinformation, and rogue AIs running out of control, and call for safeguards. The U.N.'s adoption of a new governance architecture is the latest and biggest effort to rein in AI.
As foundation models proliferate and enterprises increasingly build applications or code on top of them, it becomes imperative for CIOs and IT leaders to establish and follow a robust multi-level due diligence framework, Shah said. That framework should ensure training data transparency, strong data privacy, and sound security governance policies, and, at the very least, include rigorous checks for geopolitical biases, censorship influence, and potential IP violations.
EPAM is building its DIAL platform to become one of the most advanced enterprise AI orchestration systems in operation. With its recent DIAL 3.0 release, it addresses how to harness AI at scale without sacrificing governance, cost control, or transparency. We spoke with Arseny Gorokh, VP of AI Enablement & Growth at EPAM, about the platform. DIAL may not be the best-known technology out there, but it has some history to build on.
"Microsoft and OpenAI have signed a non-binding memorandum of understanding for the next phase of our partnership," the companies said in a document described as a joint statement, continuing, "Together, we remain focused on delivering the best AI tools for everyone, grounded in our shared commitment to safety."
Its main nonprofit organization will control a new public benefit corporation that will house OpenAI's for-profit operations. The restructuring will make it easier for OpenAI to issue traditional equity to new investors, allowing the startup to raise the massive amount of money needed to pursue its ambitious plans. The OpenAI nonprofit doesn't just get control. It also gets an equity stake in the new business that is worth more than $100 billion, Taylor said.
It's been well established in the first year of Trump's second presidency that AI is a priority for the administration. Even prior to Trump taking office, government generative AI use cases had surged, growing ninefold between 2023 and 2024. In recent months, agencies have cut numerous deals with most leading AI companies under the General Services Administration's Trump-driven OneGov contracting strategy.
The General Assembly is the primary deliberative body of the United Nations and, in effect, of global diplomacy. This year's session will comprise delegations from all 193 UN member states, which all have equal representation on a "one state, one vote" basis. Unlike other UN bodies, such as the Security Council, this means all members have the same power when it comes to voting on resolutions. It is also the only forum where all member states are represented.
For the past five years, much of the enterprise conversation around artificial intelligence (AI) has revolved around access, with application programming interfaces (APIs) from hyperscalers, pre-trained models, and plug-and-play integrations promising productivity gains. This phase made sense. Leaders wanted to move quickly, experimenting with AI without the cost of building models from scratch. "AI-as-a-service" lowered barriers and accelerated adoption.
When you hear "governance," what comes to mind? We often picture limitations or roadblocks. Yet the opposite is true, particularly with artificial intelligence. AI governance is about discovering what your organization can do. Organizations implementing comprehensive AI governance report measurable returns, including $840,000 in operational efficiency gains and 80% improvements in productivity. Building out your AI roadmap requires a mindset shift. These ideas will move you from limitation to acceleration.
In most cases, employees are driving adoption from the bottom up, often without oversight, while governance frameworks are still being defined from the top down. Even when they have enterprise-sanctioned tools, employees often eschew them in favor of newer tools that are better placed to improve their productivity. Unless security leaders understand this reality and uncover and govern this activity, they are exposing the business to significant risk.
A new survey reveals a striking "AI readiness gap" in the modern workplace: those using AI tools the most, including top executives and Gen Z employees, are often the least likely to receive meaningful guidance, training, or even company approval for their use. The findings come from WalkMe, an SAP company, which surveyed over 1,000 U.S. workers for the 2025 edition of its "AI in the Workplace" survey.
Rather than pursuing massive, resource-intensive AI initiatives that take years to deliver, Huss argues for Minimum Viable AI: a pragmatic approach that focuses on getting functional, well-governed AI into production quickly. It's not about building the flashiest model or chasing state-of-the-art benchmarks; it's about delivering something useful, measurable, and adaptable from day one.
In the race to stay ahead, organizations have thrown open the doors to every AI tool under the sun. The result? AI overload. According to the Wharton School, AI spending has skyrocketed by 130% in just the past year, and 72% of companies are planning to invest even more in 2025. Yet, here's the kicker: 80% of organizations report no tangible enterprise-wide impact from their generative AI investments.
Erika Staël von Holstein, 41, has been advising European institutions on science, technology, society, and democracy for two decades. She's the founder and director of Re-Imagine Europe, a think tank focused on depolarization. The Stockholm-born advisor has also promoted Nodes.eu, a European observatory of narratives against disinformation. Von Holstein is a member of the expert council on artificial intelligence convened by the Spanish government. This work brought her to Madrid, where she met with EL PAÍS.
AI is showing up in every corner of the business world - but in high-stakes fields like finance and tax, its real value isn't speed for speed's sake. It's about reducing friction, increasing accuracy and giving overworked teams the tools to focus on what matters most.
Countries must ensure they are not impeding open-source platforms, argues Yann LeCun, who advocates collaborative international regulation of open-source AI.