Imagine a home construction site where workers start performing structural tasks based on intuition or quick online searches, ignoring official regulations and not even seeking the builder’s approval. While their intentions might be noble, this approach makes it difficult to maintain quality standards and ensure compliance with building codes.
A similar situation plays out in organizations every day in the form of shadow IT — employees using unapproved tools or services that can compromise security and oversight. According to Gartner, 75% of employees will acquire, modify or create technology outside IT’s visibility by 2027, driven largely by fast cloud adoption and easy access to cloud tools. Separate research by Microsoft and LinkedIn found that 78% of employees bring their own AI tools to work. Shadow IT increases security risk by making it very difficult to enforce consistent security protocols across the IT ecosystem and maintain control over users’ actions.
This article explores what leads employees to use unapproved tools, how AI is making shadow IT easier and more frequent, and how organizations can respond effectively to this serious challenge.
The Many Forms of Shadow IT
Anything in the user space that isn’t vetted and explicitly approved by the IT team can be categorized as shadow IT. The most common example is bring-your-own-device (BYOD) initiatives, which result in employees using their personal devices for work-related tasks. However, there are many other forms of shadow IT, including cloud storage and file sharing services, software-as-a-service (SaaS) applications, messaging and collaboration apps, unapproved programming libraries, and AI assistants. In addition, employees sometimes surreptitiously continue using applications after IT no longer permits them, such as Adobe Flash, which was once widely embraced but eventually recognized as a significant security liability.
Uncovering instances of shadow IT can be quite difficult, especially when employees work remotely. If an organization doesn’t have robust access monitoring in place, scrutinizing departmental expense reports may prove the only real option.
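To make the expense-report approach concrete, here is a minimal Python sketch of one way to flag possible shadow IT spend. It assumes a hypothetical CSV export with "department", "vendor" and "description" columns and a simple allow-list of approved vendors; real expense systems and vendor catalogs will differ, so treat this as illustrative rather than a ready-made tool.

```python
# Minimal sketch: flag expense line items whose vendor is not on the approved list.
# Column names and the allow-list are hypothetical examples, not a real schema.
import csv

APPROVED_VENDORS = {"microsoft", "atlassian", "zoom"}  # example allow-list


def flag_possible_shadow_it(expense_csv_path: str) -> list[dict]:
    """Return expense rows whose vendor does not appear on the approved-vendor list."""
    flagged = []
    with open(expense_csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            vendor = row.get("vendor", "").strip().lower()
            if vendor and vendor not in APPROVED_VENDORS:
                flagged.append(row)
    return flagged


if __name__ == "__main__":
    for row in flag_possible_shadow_it("expenses_q1.csv"):
        print(f"Review: {row['department']} paid {row['vendor']} for {row['description']}")
```

Even a rough pass like this can surface recurring subscriptions to unapproved SaaS services that would otherwise stay invisible to IT.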
AI-Powered Shadow IT
The rise of AI tools is expanding the risks of shadow IT. It’s not just about employees using unapproved solutions — many organizations today encourage employees to use AI tools like ChatGPT, Grok or Microsoft 365 Copilot to boost productivity. For instance, these tools can streamline tasks like drafting PowerPoint presentations or other documents.
However, AI tools can also interact with sensitive company data. Key concerns include whether this data is ingested and stored by the underlying large language model (LLM) or exposed elsewhere. Between March 2023 and March 2024, the amount of corporate data being fed into AI tools surged by 485%, and the share of sensitive data within those inputs nearly tripled. This raises questions such as:
- What happens when you input a document into Microsoft 365 Copilot?
- Is sensitive content that employees paste into ChatGPT or Grok used to train models?
- Could sensitive data leak or resurface elsewhere?
The ease of access to AI tools amplifies the risk of data loss, challenging traditional data loss prevention (DLP) strategies. Organizations need to get ahead of this through a combination of clear policies on AI usage, thorough employee training and advanced DLP tools.
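To illustrate the DLP side of that combination, the sketch below shows a simple pre-submission check that scans text for obviously sensitive patterns before it is pasted into an external AI tool. The pattern names and regular expressions are illustrative assumptions only; commercial DLP products use far richer detection (classification labels, fingerprinting, contextual analysis) than a handful of regexes.

```python
# Minimal sketch of a DLP-style pre-check for text bound for an external AI tool.
# Patterns are illustrative examples, not a complete or production-grade rule set.
import re

SENSITIVE_PATTERNS = {
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "us_ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "api_key_hint": re.compile(r"(?i)\b(api[_-]?key|secret)\b\s*[:=]\s*\S+"),
}


def check_before_submit(text: str) -> list[str]:
    """Return the names of sensitive patterns detected in the text."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(text)]


if __name__ == "__main__":
    draft = "Customer card 4111 1111 1111 1111, api_key = sk-test-123"
    hits = check_before_submit(draft)
    if hits:
        print("Blocked: possible sensitive data detected:", ", ".join(hits))
    else:
        print("No obvious sensitive data found; proceed with caution.")
```

Checks like this are best enforced at the browser, proxy or endpoint layer rather than left to individual users, which is exactly the gap that dedicated DLP tooling is designed to fill.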
Avoiding Shadow IT by Turning “No” into Collaboration
Why do users resort to using unapproved devices or software? One key reason is that when they find a tool they think will help them do their job, they anticipate the IT department will deny their request to use it. Accordingly, instead of defaulting to rejection, IT teams should foster a collaborative culture with the mantra, “Let’s find the solution together.”
No one knows the needs of a job role better than the employees actually doing the job, and no one knows security better than the IT security team, so organizations should take full advantage of both. Security teams should engage with departments to understand their needs and why the currently approved tools fall short. They should also identify departments’ goals and work jointly to find solutions that meet those objectives — while ensuring IT’s requirements for security and manageability are upheld.
Ultimately, security specialists must emphasize that by collaborating as partners to choose tools, IT can provide valuable benefits such as support, streamlined management and enhanced security.
A New Approach to Awareness Training
The adage “users are the weakest link” is popular among security specialists, and it certainly applies to the realm of shadow IT. Employees are on the front lines of an organization’s security, and their buy-in is crucial for the success of any initiative.
Accordingly, organizations need to invest in broad awareness training for all users that includes lessons on how AI models work, their limitations, and the importance of data privacy when interacting with them. They also need to emphasize that AI outputs must always be verified and checked for plausibility. When users understand their role in security and the risks posed by unvetted tools and their output, they’re far more likely to collaborate with the IT team rather than work around it.
Conclusion
Like any other technological advancement, the use of AI and AI-based SaaS solutions creates both opportunities and challenges. These tools help organizations become faster and more flexible, but they also increase security risks, including shadow IT. Effective strategies for mitigating those risks include fostering collaboration between IT teams and business users around the tools they need, providing better security awareness training, and communicating clearly with solution providers. By combining those techniques with the right security solutions, organizations can protect their sensitive data and systems from threats while empowering their teams to innovate responsibly.
About the Author
Dirk Schrader is Resident CISO (EMEA) and VP of Security Research at Netwrix. A 25-year veteran in IT security with certifications as CISSP (ISC²) and CISM (ISACA), he works to advance cyber resilience as a modern approach to tackling cyber threats. As VP of Security Research, Dirk focuses on research for specific industries such as healthcare, energy and finance. As Field CISO EMEA, he ‘speaks the language’ of Netwrix’s customers and prospects to facilitate fit-for-purpose solution delivery. Dirk has published numerous articles on cyber risk management and IT security tactics and operations, and has reported hundreds of unprotected, vulnerable critical medical devices to authorities and health providers around the globe.