Mandating Use of AI? Are You Sure?

Most companies are moving from AI-shy to all-in on AI. And that can be great. Who doesn’t want increased productivity and efficiency, not to mention outsourcing dull routine tasks to a coworker who never calls in sick and doesn’t complain?

Some companies encourage people to use AI whenever possible; others are making it a mandate.

My friend Matt Kelly reviewed this in his blog post on Radical Compliance, where he reported that “Consulting firm Accenture has told senior staff that they will need to demonstrate ‘regular adoption’ of AI tools if they want to be promoted; and the firm is tracking individual weekly log-ins to those tools to see whether senior people really are using AI as fully as possible.”

Encouraging people to adopt AI tools after they’ve been properly trained and when they feel it will increase their ability to do the job well is one thing. It’s another to mandate its use without guardrails and good governance in place.

A Cautionary Tale of AI Agents

According to an article in the Financial Times, Amazon has a strict policy that requires its engineers to use AI. In fact, Amazon has a target that 80% of its developers use AI at least once per week. This is closely monitored.

Amazon developers and engineers are also encouraged to create AI agents to help do their jobs. No problem there, right?

Wrong. There was a problem. One of those agents decided unilaterally to delete and recreate an AWS system that customers use to measure the cost of services.

The system was down for 13 hours.

Why did this happen? Well, the AI agent was given the same permissions as its creator, and its creator had permission to shut down those systems.

Protocols required two top-level human engineers to agree before the system could be shut down. They would have been jointly responsible if things went wrong.

But the AI agent? It needed no such sign-off. Its decisions weren’t challenged ahead of time. It wasn’t put on a performance plan and forced to worry about whether it would be able to pay its mortgage. It did what it thought best, without a human in the loop as a control.
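To make that control concrete, here’s a rough sketch in Python of what a dual-approval gate for destructive agent actions might look like. The action names and function are mine, not Amazon’s actual protocol; the point is simply that an agent’s riskiest actions should require the same two human sign-offs the engineers needed.

```python
# Hypothetical sketch of dual control for agent actions. Everything here
# (action names, signature) is illustrative, not a real API.

DESTRUCTIVE_ACTIONS = {"delete_system", "recreate_system", "shut_down"}

def execute_agent_action(action: str, target: str, approvers: set[str]) -> str:
    """Run an agent-requested action only if the human controls are satisfied."""
    if action in DESTRUCTIVE_ACTIONS and len(approvers) < 2:
        # Same rule the humans lived under: two senior engineers must agree.
        raise PermissionError(
            f"'{action}' on '{target}' needs two distinct human approvals; "
            f"got {len(approvers)}"
        )
    return f"executed {action} on {target}"

# Routine reads go through; a deletion without two approvers raises an error.
print(execute_agent_action("read_metrics", "billing-service", set()))
print(execute_agent_action("delete_system", "billing-service", {"alice", "bob"}))
```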

The Reality

I love this story because it is an early example of what we’ll see more and more of if we aren’t careful about how we mandate that employees use AI. Right now, most companies don’t have agents running amok with permissions that could quickly affect their customer relationships or core service offerings. But that is bound to change.

Let’s remember that a recent study by Howdy.com found that 1 in 5 workers say they feel pressured to use AI in situations they’re unsure about, and 1 in 6 say they sometimes pretend to use AI.

How to Fix It

Compliance can’t stop the creation of AI agents, even if we wanted to. And overall, we don’t want to. Heck, I know some compliance officers using AI agents now for some tasks. But we have a special responsibility to ensure that the business is protected from AI use gone wrong.

Training

Many people still fear AI. If they don’t know how to use it but are mandated to do so, it’s easy to foresee bad outcomes.

Training needs to take two forms. First, people need to be trained on the tools available to them. Simply installing Copilot or a GPT isn’t enough. People need to understand the AI’s uses, capabilities, and drawbacks.

The other form of training is responsible use. Responsible use training doesn’t apply to any one system. Instead, it’s based on principles people should take into account no matter what kind of system they are dealing with. Principles like protecting privacy, confidential information, and trade secrets are critical whether you’re dealing with machine learning or generative AI.

Policies and Procedures

A good AI Governance and Responsible Use policy can take you a long way, as can clear procedures for getting AI tools evaluated and approved. People aren’t going to stop using AI, so it’s important for them to understand that the company has made it available to them in a thoughtful way.

Creating agents can be very useful, but implementation needs rigor and testing. Having a good procedure for this is key to ensuring the agent does what the employee thinks it will without unintended consequences.
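What might that rigor look like in practice? Here’s a small, purely illustrative Python sketch: before an agent gets real permissions, its planned actions are checked in a dry run against an approved scope. The action names and the allowlist are made up for the example.

```python
# Illustrative pre-deployment check for a new agent: compare its planned
# actions against an approved scope before it touches anything real.

APPROVED_SCOPE = {"read_metrics", "generate_report"}  # deliberately narrow

def dry_run(planned_actions: list[str]) -> list[str]:
    """Return any planned actions that fall outside the agent's approved scope."""
    return [action for action in planned_actions if action not in APPROVED_SCOPE]

violations = dry_run(["read_metrics", "delete_system"])
if violations:
    print(f"Agent not approved; out-of-scope actions: {violations}")
```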

Monitoring and Auditing

Information Technology, Information Security, and/or Internal Audit need to be monitoring the use of AI, including how agents are being deployed. Monitoring can catch otherwise invisible problems before they surface on their own. A regular, planned schedule for monitoring is a critical control, especially if employees are only using AI because it’s mandated.
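As a sketch of what that monitoring could capture, here’s a minimal, hypothetical audit log writer in Python. The field names are my own suggestion; in a real deployment these entries would feed whatever logging pipeline or SIEM your Information Security team already runs.

```python
# Minimal sketch of an append-only audit trail for agent activity.
# Field names are illustrative; adapt them to your own logging pipeline.

import json
from datetime import datetime, timezone

def log_agent_action(agent_id: str, action: str, target: str,
                     approvers: list[str]) -> None:
    """Append one structured record of what an agent did and who approved it."""
    entry = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "action": action,
        "target": target,
        "approvers": approvers,
    }
    with open("agent_audit.log", "a") as log_file:
        log_file.write(json.dumps(entry) + "\n")

log_agent_action("cost-reporter-01", "generate_report", "billing-service", [])
```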

AI Mandates Need Real Support

Companies that mandate the use of AI must equip employees to use it ethically and responsibly. Without good training, policies, procedures, communications, and monitoring, employees can and will misuse AI and/or create agents that wreak havoc on the business.

Don’t believe me? Give it a year.