
How to Handle Rogue AI in Your Organization


Dr. Ryan Ries here.

This week, I want to share my thoughts on a question that came up during our recent Ask Us Anything session.

How do you handle "rogue AI" in your organization?

First, I say, “Good luck!”

(kidding… kind of)

Here is the hard truth: You probably can't completely control rogue AI use in your organization.

These tools are advancing SO quickly.

The latest updates to these tools are often only available through the model provider's own site, and it can take weeks or months before they reach the cloud providers, which makes it that much more difficult for IT departments to monitor usage, enforce updates, and provide guardrails.

When you're competing with something as convenient as ChatGPT, you're facing a pretty high bar.

It's similar to the early days of public cloud, when developers started using AWS instead of waiting weeks for IT to provision servers.

Here's what makes this situation unique: unlike most enterprise technology, generative AI didn't arrive through traditional enterprise channels.

It burst onto the scene through consumer-facing platforms like ChatGPT and Claude, making it instantly accessible to everyone.

Your employees aren't waiting for IT approval; they're already using these tools for day-to-day work and at home.

So What Can You Actually Do?

Education Over Prohibition

Instead of trying to enforce a complete lockdown (which likely won't work), focus on educating your teams about:

  • When it's safe to use public AI tools
  • What information should never be shared
  • How to verify AI-generated code and content
  • Best practices for security

For example, at Mission, we had the team take an internally developed AI Foundation course so everyone has a basic understanding of what AI is, how it works, and how to use it safely.

We also maintain a list of approved tools, along with systems for employees to request access to them. This helps us avoid situations where people pay for third-party tools without telling anyone (aka Shadow IT).

Provide Better Alternatives

The key is offering options that are equally convenient, just as powerful, more secure, and under your control.

Think chat interfaces hosted in your own AWS environment or enterprise accounts with major providers.

This is why we built Cloudia, our internal chatbot. It runs directly on Bedrock (using the boto3 SDK for Python) and uses Claude specifically.

We built Cloudia as a safe way for employees to use AI to accomplish tasks faster and more efficiently.
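
To make that concrete, here's a minimal sketch of what the core of a Bedrock-backed chatbot call can look like. This isn't Cloudia's actual code; the region, model ID, and prompt are placeholders, and it assumes your AWS credentials and Bedrock model access are already configured.

```python
import boto3

# Bedrock Runtime client in a region where the Claude model is enabled
# (region and model ID are placeholders, not Cloudia's actual configuration)
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

MODEL_ID = "anthropic.claude-3-5-sonnet-20240620-v1:0"

def ask(prompt: str) -> str:
    """Send a single-turn prompt to Claude on Bedrock and return the reply text."""
    response = bedrock.converse(
        modelId=MODEL_ID,
        messages=[{"role": "user", "content": [{"text": prompt}]}],
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    return response["output"]["message"]["content"][0]["text"]

if __name__ == "__main__":
    print(ask("Summarize this week's AWS service updates in three bullet points."))
```

Because the call runs through your own AWS account, it inherits your existing IAM policies and CloudTrail logging, which is exactly the kind of control you give up when employees paste data into a public chat interface.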

Cloudia can do a lot of different things, but some of the team's favorite use cases are:

  • AWS Update Summarization
  • Deciphering Customer Requests
  • Troubleshooting and Debugging
  • Assisting with IaC (infrastructure as code)

Set Clear Guidelines

For public AI tool use, we recommend:

  • Making sure employees are not inputting corporate, confidential, or otherwise non-public information;
  • Always taking the time to verify AI-generated code;
  • Being vigilant about unexpected behaviors or calls (these models aren't perfect, and they can and will hallucinate from time to time);
  • Using common sense about data sensitivity.

We wrote a blog post a while back about creating our own LLM policy internally. Check it out here (especially if you don’t already have an LLM policy in place in your organization). 

You vs. Shadow IT

The reality is you're not just competing with shadow IT.

You're competing with convenience.

The best strategy is to embrace this reality and create a framework that protects your organization's sensitive data, gives employees the tools they need, maintains security and compliance, and is cost-effective.

Just like the cloud computing transition — resistance is futile, but smart adaptation is essential.

For company-specific projects, you're always safer running things in your own AWS ecosystem, where you control the security posture. And if you're committed to OpenAI, consider running it through Azure for added security.
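
If you go the Azure route, here's a rough sketch of calling an Azure-hosted OpenAI deployment instead of the public ChatGPT service. The endpoint, deployment name, and API version below are placeholders for your own Azure OpenAI resource, not a recommendation of specific settings.

```python
import os
from openai import AzureOpenAI

# All values below are placeholders for your own Azure OpenAI resource
client = AzureOpenAI(
    azure_endpoint="https://your-resource-name.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-06-01",
)

response = client.chat.completions.create(
    model="your-gpt-4o-deployment",  # the deployment name you created, not the raw model name
    messages=[
        {"role": "user", "content": "Draft a checklist for reviewing AI-generated Terraform changes."},
    ],
)

print(response.choices[0].message.content)
```

The difference isn't the model; it's that the traffic, keys, and logging stay inside a subscription you administer rather than an employee's personal account.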

What's Your Take?

How is your organization handling the rise of rogue AI? Have you found effective ways to balance security with convenience? Or is this something that’s been keeping you up at night?

Until next time,

Ryan Ries

Now, time for this week's AI-generated image and the prompt I used to create it.

[AI-generated image: a friendly robotic AI cowboy in a dusty western town, squaring off against a muppet called "Shadow IT."]

Prompt: "Create an image of AI in the Wild West. The AI should be a cowboy getting ready to face off with a muppet called "Shadow IT." The setting is a dusty western town."

