If you’re ancient like me, you probably remember Lotus Notes. The leading groupware platform of the last millennium, it not only provided corporate email and pre-Slack communications; it also empowered anyone in the organization to build and publish mini-websites for their colleagues to use.
It didn’t take long for this whole employee empowerment train to go off the rails. Suddenly, Madge in accounting could slap up a site that exposed private corporate data – with the IT organization none the wiser. No testing, no compliance, no oversight at all.
Shadow IT had reared its ugly head. Heretofore, IT had controlled all technology deployments across the organization, holding the reins for any department or individual looking to deploy a software-based asset.
Now, anyone could do so, essentially mounting their own IT effort in the shadows.
Today, AI is facing a similar challenge. Not too long ago, all corporate AI efforts required deep technology expertise, and thus they resided under the control of IT (or some other technology department) that presumably knew what it was doing.
Machine learning, deep learning (for tasks like computer vision), AIOps, and the rest of the AI pantheon were comfortably ensconced in the cubicles of the technologists.
Then along came generative AI (genAI). Suddenly, anyone can work directly with AI. Even Madge can type whatever prompt she wants into ChatGPT or some other tool – including corporate AI tools that can access internal data.
Welcome to your shadow AI nightmare.
The Seven Pitfalls
GenAI opens up new dimensions of shadowness well beyond the shadow IT of old. Here are seven examples.
Pitfall #1: Shadow access to corporate data
One of the most popular uses of genAI in businesses today is exposing corporate data for official use, including searching, summarization, and analysis. Clearly, genAI is a powerful tool for such purposes.
The dark underbelly of such usage is when people use AI to access corporate data in unauthorized ways for unauthorized purposes.
The more powerful the tool, the more dangerous it becomes, as people figure out ways around whatever restrictions the IT organization puts into place.
Pitfall #2: Shadow plagiarism
LLMs that leverage public information sources typically have no regard for copyrights or other intellectual property protections. AI-generated content can easily violate such restrictions.
Your high schoolers are using ChatGPT to write their term papers (sorry, parents!). Not only does such use raise plagiarism and intellectual property concerns, but it defeats the purpose of writing the papers in the first place.
The same is true in the corporate world. Plagiarized content is simply not fit for purpose – no matter what that business purpose might be.
Pitfall #3: Shadow laziness
GenAI makes it dead simple to generate a wide range of content – content that human beings had previously created the old-fashioned way.
On the one hand, genAI is a boon to such content creators. Just think of all the time savings that will result from having AI do your job for you!
On the other hand, it’s simply a way to get around having to do your job – particularly when the AI-generated content doesn’t meet the business requirement.
Copywriting – creating text content for web sites, brochures, etc. – is one of the best examples of this pitfall. Why spend time writing copy when genAI can create similar (but inferior) content?
Pitfall #4: Shadow phone use
The ‘bring your own device’ (BYOD) dilemma has caused no end of headaches for IT departments as they struggle to avoid cybersecurity breaches, exfiltration of corporate data, and all manner of other ills resulting from people bringing personal phones to work.
AI complicates the BYOD problem, as most phones now have their own genAI capabilities – bringing all the various challenges of genAI directly into organizations without passing through firewalls or any other protections.
In other words, personal phones act as a force multiplier for the other pitfalls on this list. What good are corporate controls on genAI if employees can simply email themselves AI-generated content from their phones?
Pitfall #5: Using public data for corporate purposes
GenAI search tools make it dead simple to take Internet search to the next level. People can get detailed answers to increasingly sophisticated questions – answers that can find their way into corporate work products, with no one the wiser.
See those data points in that PowerPoint someone is presenting to management? Did they come from a verified source, or are they a genAI hallucination? How can you tell?
Pitfall #6: Shadow AI agents
GenAI agents are autonomous computer programs that leverage genAI to complete tasks – and the best ones even learn over time.
Agentic AI is all the rage today, even though useful agent technologies are only now coming to market.
This immaturity exposes all manner of risks, as people deploy AI agents for any number of tasks, well off the radar of the IT department.
Just as with personal phones, AI agents are a force multiplier for other AI pitfalls, but unlike phones, the agents are operating on their own somewhere in the corporate network.
And they’re getting smarter as they go, for better or worse.
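To make the risk concrete, here is a minimal, hypothetical sketch of the kind of shadow agent an employee could spin up in an afternoon. Everything in it – the call_llm() stand-in, the folder names, the summarization task – is an illustrative assumption, not any particular vendor’s API.

```python
# Hypothetical sketch: a 'shadow' agent an employee might leave running on a
# laptop or a forgotten VM. call_llm() is a vendor-agnostic stand-in for
# whatever genAI endpoint the employee happens to have access to.
from pathlib import Path

def call_llm(prompt: str) -> str:
    """Placeholder for a real genAI API call (illustrative only)."""
    return f"[LLM response to: {prompt[:60]}...]"

def shadow_agent(inbox: Path, outbox: Path) -> None:
    """Summarize every document dropped into a folder and write the results
    elsewhere -- with no logging, no access controls, and no review."""
    outbox.mkdir(exist_ok=True)
    for doc in inbox.glob("*.txt"):
        text = doc.read_text(errors="ignore")
        summary = call_llm(f"Summarize this internal document:\n{text}")
        (outbox / f"{doc.stem}_summary.txt").write_text(summary)

if __name__ == "__main__":
    shadow_agent(Path("shared_drive_dump"), Path("summaries"))
```

Nothing in this sketch touches a firewall, a ticketing system, or an audit log – which is precisely the point.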
Pitfall #7: Shadow vibe coding
Vibe coding is a new term for leveraging genAI to write code as part of the software development process.
In the right hands, code generation tools can streamline the developer’s work, improving their productivity as well as the developer experience.
In the wrong hands – that is, in the hands of someone who doesn’t know how to address vibe coding’s deficiencies – using genAI to create applications can lead to all manner of issues, including software vulnerabilities, poor quality, and additional technical debt.
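As an illustration – a hypothetical example of my own, not output from any particular tool – here is the kind of database lookup a genAI assistant can happily produce, alongside the parameterized version a knowledgeable reviewer would insist on:

```python
import sqlite3

# The kind of code a genAI assistant may generate when asked to
# 'look up a user by name' -- it works, but it is vulnerable to SQL
# injection because it splices untrusted input into the query text.
def find_user_unsafe(conn: sqlite3.Connection, name: str):
    query = f"SELECT * FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

# What a reviewer should insist on: a parameterized query that keeps
# user input out of the SQL text entirely.
def find_user_safe(conn: sqlite3.Connection, name: str):
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (name,)
    ).fetchall()
```

Without someone in the loop who knows to look for such issues, code like the first function slips quietly into production.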
Does your IT department – or in this case, perhaps your DevOps team – know about all the vibe coding going on in your organization? If not, why not?
Vibe coding is the shadow AI gift that keeps on giving, as AI-generated code might remain in production for years. As vibe coding tools improve, it will become increasingly difficult to identify, let alone fix, all the AI-generated code in the organization.
The Challenge of AI Governance
Addressing the problems of shadow AI falls under the burgeoning area of AI governance. Indeed, new AI governance tools are springing up like weeds, and many of them hope to address the issues in this article.
If you’re shopping for an AI governance tool, however, you need to be careful, as the various offerings on the market differ dramatically from one another.
Many such AI governance tools focus on regulatory compliance. New AI regulations are emerging around the world – and enterprises are struggling to keep up.
AI governance tools unquestionably belong in your compliance toolbelt. However, the shadow AI problem may be only tangentially related to regulatory compliance.
Many of the pitfalls above are more about people doing their jobs improperly than about any kind of formal rule-breaking. It’s essential, therefore, that your AI governance tool is up to the shadow AI challenge.
The Intellyx Take
I began this article by comparing shadow AI to the shadow IT problems of old, which Lotus Notes so aptly represented.
Notes led to exposure of corporate data, poor-quality software, and the occasional regulatory violation – just as genAI does now.
Where the shadow AI and shadow IT patterns differ, however, is in how simple genAI is for anyone to use, regardless of technical skill.
Lotus Notes lowered the technology bar substantially, but genAI lowers it even further. We can also argue that genAI is far more powerful than Notes ever was.
People can leverage this new technology to achieve all manner of business value – and simultaneously, all manner of mischief.
Don’t get caught by surprise by shadow AI – or you and your organization will end up living in this nightmare.
Copyright © Intellyx BV. Intellyx is an industry analysis and advisory firm focused on enterprise digital transformation. Covering every angle of enterprise IT from mainframes to artificial intelligence, our broad focus across technologies allows business executives and IT professionals to connect the dots among disruptive trends. None of the organizations mentioned in this article is an Intellyx customer. No AI was used to write this article. Image credit: Craiyon.