News | BRG press release

Just 36% of Corporate Leaders Believe AI Policy Will Provide Necessary Guardrails, BRG’s Global AI Regulation Report Finds

June 20, 2024

As debates over how to regulate artificial intelligence heat up, new research reveals concerns over current and future policy effectiveness—and a lack of compliance readiness

In today’s fast-developing artificial intelligence (AI) regulatory landscape, only about one-third of corporate leaders believe current regulations are very effective and that future policy will provide necessary guardrails, according to BRG’s Global AI Regulation Report, released today.

Drawing on survey responses from over 200 corporate leaders and executive-level lawyers in diverse industries around the world—plus in-depth interviews with executives, attorneys, and BRG experts—the new report assesses where AI regulation currently stands, the challenges organizations face in complying, and what key stakeholders see as most important for the development of effective AI policy. The report includes breakdowns of data in key industries (retail and consumer goods, technology, and financial services), regions (North America; Europe, the Middle East and Africa; and Asia–Pacific) and roles (lawyers and executives).

Additionally, the report discusses implications for the US healthcare sector, drawing on findings covered in BRG’s recent AI and the Future of Healthcare report.

Organizations lack confidence in their compliance readiness

In today’s uncertain regulatory landscape—where the misuse of AI creates significant regulatory, litigation and reputational risk—just four in ten respondents are highly confident in their organization’s ability to comply with current regulations and guidance. When it comes to internal safeguards that promote responsible and effective AI development and use, the majority of respondents—particularly those in the retail and consumer goods sector—have yet to implement any.

Lawyers, as well as respondents from North America generally, are particularly skeptical about the efficacy of current and future AI regulation. But uncertainty also breeds opportunity.

“More and more, we’re seeing a gap between what outside counsel recommends and what executives are open to when it comes to AI policies and procedures,” says Amy Worley, a managing director and associate general counsel at BRG. “Good advisers can say yes, there is a lot of regulatory uncertainty, and where there is uncertainty there is also value.”

Future AI policy priorities

Respondents broadly agreed that the three most important future focus areas for AI regulation are data integrity, security and accuracy/reliability. Yet priorities diverge when the survey results are broken out by region and industry. Executives want policy to be more adaptable/flexible and transparent/explainable, while lawyers are most concerned about it being enforceable. Technology and financial services respondents prioritize adaptability/flexibility too, while retail and consumer goods respondents favor strictness.

Respondents across all groups want comprehensiveness, though achieving it may not be so simple.

“Creating broad, comprehensive guidelines may prove more difficult than people imagine,” says Richard Finkelman, a managing director at BRG. “A fault line already exists, for instance, between the US and the EU over AI regulation and ethics—and it’s getting larger, not smaller.”

The report also offers a thorough snapshot of where current AI policy stands, from the EU’s recently passed AI Act to the US’s more decentralized approach to the Association of Southeast Asian Nations’ more business-friendly Guide on AI Governance and Ethics. It further examines mounting issues with AI-generated fake evidence and the risks of noncompliance.

Download the full report.

BRG Experts

Amy Worley

Managing Director & Associate General Counsel

Washington, DC

Richard Finkelman

Managing Director

Washington, DC