
Q&A | Corporate Compliance’s AI Balancing Act: Innovation versus Risk

October 2024

Tom O’Neil is a BRG managing director and leads the firm’s Governance, Risk & Compliance (GRC) practice. He has broad private- and public-sector experience including leadership roles in boardrooms and C-suites of companies in the consumer, financial services, and health sectors.

Amy Worley is a managing director and data protection officer at BRG. She is an expert in global data and data protection regulation, data governance, and data ethics, including the growing field of artificial intelligence (AI) regulation.

Tom and Amy recently discussed challenges and opportunities AI presents for board members, executives, and compliance officers. Below is an excerpt from their conversation.

How can organizations use AI to enhance governance, compliance, and ethical practices that will foster (and not impede) commercial growth?

Amy: We counsel our clients to take a principles-based approach when thinking about AI ethics and governance programs. We approach this much the way we have with GDPR [General Data Protection Regulation], by focusing on transparency, fairness, and accountability. We see that different industries have different tolerances for risk when balancing data protection and privacy regulations with new AI technologies. For example, ecommerce and technology companies typically take more risks, while highly regulated industries like healthcare and finance generally take a less-aggressive approach. Some industries are inherently more risk averse than others, and that’s okay.

Tom: It sounds like the more highly regulated the industry, the more likely they are to stay on the fairway.

Amy: I think that is a fair assessment.

How can organizations ensure C-suite alignment and successful AI implementation across business units and corporate functions?

Amy: When we work with clients, we take a multidisciplinary approach, listening to and collecting feedback from governance and regulatory teams, from the technologists and engineers who really understand the technology, and from commercial leaders who can speak to their markets. We are often retained because it can be difficult for board members and executives to know where to start. Some of our more innovative clients tend to focus on what’s allowed and what’s not. They want to understand the approval process so they can move forward more quickly; I call this the “Brakes on a Ferrari” model.

Tom: This reminds me of a conversation I had several years ago with a very accomplished CEO who said, in essence, if an organization’s compliance and ethics program is operating successfully, it will create guardrails around every turn so that the executive leadership team can accelerate the car safely. Initially, this analogy made me very uncomfortable, but it sounds to me like that’s very much along the lines of the work that you’re doing at this stage of the journey.

Amy: I like the analogy from that CEO because for a compliance and ethics program to add value to the business, it must become part of that organization’s culture.

How can compliance and ethics executives ensure AI systems align with their organizational values to mitigate biases that can cause reputational harm?

Amy: This is something executives need to take seriously. Every industry has examples of AI algorithms that have gone awry. This happens because many AI models “hallucinate”: large language models can generate incorrect responses because they often lack the ability to accurately assess the reliability and currency of the data they take in. That can lead to inaccuracies in their outputs, but it can be accounted for and overcome with rigorous human oversight, careful training of these systems on the right information, and safeguards put in place.

Tom: This has real implications for compliance and ethics departments looking to capitalize on AI’s potential while addressing potential risks across the enterprise.
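
As a purely hypothetical illustration of the safeguards Amy describes, the sketch below shows one possible control: a gate that routes any low-confidence or ungrounded AI output to a human reviewer. The class, threshold, and fields are invented for this example and do not represent BRG’s methodology or any specific system.

```python
# Hypothetical human-oversight safeguard; all names and values are invented.
from dataclasses import dataclass

@dataclass
class DraftAnswer:
    question: str
    answer: str
    confidence: float            # model-reported or independently estimated, 0.0-1.0
    cites_approved_source: bool  # whether the answer is grounded in vetted policy documents

REVIEW_THRESHOLD = 0.85  # illustrative cutoff; a real program would set its own

def needs_human_review(draft: DraftAnswer) -> bool:
    """Route low-confidence or ungrounded outputs to a human reviewer."""
    return draft.confidence < REVIEW_THRESHOLD or not draft.cites_approved_source

drafts = [
    DraftAnswer("Is this gift reportable?", "Yes, under policy 4.2.", 0.93, True),
    DraftAnswer("Can we share this dataset?", "Probably fine.", 0.60, False),
]
for d in drafts:
    route = "human compliance review" if needs_human_review(d) else "release with audit log"
    print(f"{d.question!r} -> {route}")
```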

It isn’t hard to imagine compliance and ethics leaders in the future putting their existing KPIs and metrics into an AI algorithm … to make more informed risk calculations.

Amy Worley

Looking to the future, what are potential applications of AI that can help elevate and transform compliance and ethics functions?

Amy: It isn’t hard to imagine compliance and ethics leaders in the future putting their existing KPIs [key performance indicators] and metrics into an AI algorithm that examines enforcement decisions at scale, helping identify specific factors to make more informed risk calculations. I think mature compliance programs will gather metrics across their programs for use in predictive algorithms. In compliance, we’ve been asked to look at signals for some time. The next step is for our tooling not only to identify those signals but to go further and flag likely future areas of risk.

Tom: Boards of directors and executives would love to get their hands on that information. Compliance and ethics departments are operationally embedded, so the ability to more intelligently resource an organization is really exciting: a potential game changer. The ability to audit and monitor compliance metrics using generative and predictive AI is an entirely new frontier in the GRC world.

Amy: One of the most common questions I’m asked by general counsel and board members is: “Which problem do I solve first? Because we can’t do everything right now.” So having that insight and being able to make more informed risk calculations could go a long way toward helping those leaders know where to focus first.
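
As a purely illustrative sketch of the kind of predictive tooling described in this exchange, the toy model below scores hypothetical business units for risk using a few invented compliance KPIs and ranks where to look first. The features, data, labels, and model choice are all assumptions made for demonstration, not BRG’s methodology or any client’s data.

```python
# Hypothetical sketch only: all KPIs, data, and labels below are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Toy dataset: each row is one business unit's compliance KPIs, e.g.
# [training completion rate, hotline reports per 100 staff, open audit findings].
X = rng.random((200, 3)) * [1.0, 10.0, 20.0]

# Synthetic label: 1 = a later enforcement action or incident, 0 = none.
signal = (1 - X[:, 0]) * 2 + X[:, 1] * 0.1 + X[:, 2] * 0.05
y = (signal + rng.normal(0, 0.3, 200) > 2.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# Rank held-out units by predicted risk so leaders can see where to focus first.
risk = model.predict_proba(X_test)[:, 1]
for i in np.argsort(risk)[::-1][:5]:
    print(f"unit {i}: predicted risk {risk[i]:.2f}")
```

In practice the inputs would be a program’s own audited metrics rather than synthetic data, and any such ranking would inform, not replace, human judgment about where to focus.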


The ability to more intelligently resource an organization is really exciting: a potential game changer. The ability to audit and monitor compliance metrics using generative and predictive AI is an entirely new frontier in the GRC world.

Tom O'Neil

What does a new corporate director need to know to fulfill their oversight responsibilities?

Amy: I don’t think you need to become an expert, but you do need to do enough reading to understand the basic problems that can arise, such as alignment and bias. As a fiduciary, you also need to understand the interplay among IT security, privacy, and AI governance.

Tom: I agree. The board’s role is one of oversight, and it spans the spectrum from the leadership team’s development and execution of the organization’s strategy to its risk identification and mitigation initiatives. AI unquestionably straddles both ends of this spectrum.

For smaller and midsized companies, does it make sense to consolidate enterprise risk management oversight under the audit committee as a cost-effective approach until resources allow for a more specialized committee?

Amy: I think so. The BRG Global AI Regulation Report found a stark divide between CEO enthusiasm and lawyer apprehension toward AI. This reflects a complex reality: AI offers significant commercial opportunities but also presents substantial risks. While businesses must strategically leverage AI, they must also mitigate potential legal issues. The report’s key findings were informed by survey results from more than 200 global corporate leaders and lawyers, as well as insights from in-depth interviews with global executives. Below are some of the report’s insights.

  • AI regulation is still emerging, and there is wide disparity of opinion about its effectiveness
  • Only 40 percent of surveyed respondents expressed high confidence in their organizations’ abilities to comply with emerging regulations and current guidance
  • Fewer than half of all organizations have implemented internal safeguards to promote responsible and effective AI development and use