Research shows that many employees are using AI ‘in secret’ thanks to unclear or inconsistent guidance from managers on the risks and rewards. Carolina Merlin looks at how HR teams can help.
The adoption of generative AI in the workplace is moving so fast that HR policy development cannot keep up.
Last month, the Financial Times reported that many employers still do not have clear, consistent rules in place regarding AI use, leaving staff unsure when, or whether, using AI is appropriate.
This policy vacuum is driving hidden, unregulated use of AI, in turn increasing the risk of sensitive data being uploaded into unvetted tools.
This is where a lack of clarity around AI use goes beyond an administrative grey area and becomes a serious data security risk.
Why are employees confused?
There is a pervasive fear among employees that openly embracing AI may make them seem replaceable, or less skilled than they are.
As a result, rather than asking permission or seeking clarification from leadership, they often use AI covertly, in ways that fall outside any formal guidance.
According to a recent survey from software company Ivanti, 42% of office workers now use generative AI at work, and roughly a third of those do so secretly.
This behaviour largely stems from mixed signals at work. On one hand, staff hear generalised instructions to make use of AI in their work; on the other, they are warned, on pain of serious penalties, against compliance risks and compromising data.
Employees are left to work out for themselves what is acceptable, and, lacking the confidence or security to raise the question with their leadership teams, many choose secrecy. AI use is driven “underground”, significantly increasing the chances of data leaks or policy breaches.
So, how should HR respond? By reworking job roles to include standardised AI usage skillsets, providing upskilling, and clarifying where and when AI use is appropriate.
As ever, clear communication is the key to success, and transparent conversations about the role of AI in the workplace can dispel fear and reduce clandestine usage.
Security risks of secret AI
Workers are increasingly using AI for tasks such as drafting internal reports, performance feedback and client communications – all of which require access to sensitive data. In effect, employees are uploading that sensitive data to unverified AI tools without the go-ahead from HR.
It is important to remember that AI only knows as much as we tell it: many tools learn from their users, store or re-use submitted data, and rely on third-party servers.
Naturally, this exposes companies to compliance, confidentiality, legal and reputational risks.
HR leaders should ensure that their organisations adopt a trusted, compliant HR tech process, based on due diligence for security and privacy. This process should mandate that only approved tools are used for sensitive work.
As mentioned above, a key driver of this problem is inconsistent communication from leadership – calls to use AI to boost productivity coupled with warnings not to risk compliance breaches.
This contradictory stance leaves employees uncertain whether productivity or data safety should take priority, or whether they can be simultaneously achieved.
HR should ensure that employees are made aware of AI policy. Explicit and consistent messaging about AI use needs to be at the core of a broader strategy.
Communication needs to come from the top, and be backed by policy, training, and enforcement. This way, workers feel confident, directed, and secure.
Global consistency
For multinational organisations, the confusion is magnified. Different jurisdictions have varying and often conflicting rules and regulations around AI, data protection, and liability.
For example, regulatory regimes in the EU, such as the EU AI Act, place strict boundaries on AI accountability, data usage and transparency, while countries like the UK and the US take a much more hands-off approach.
Meanwhile, some APAC countries apply differing regulations, or none at all.
For employees working across borders, the result is a baffling maze of standards, and a high likelihood of unintentional missteps.
Organisations working across borders should map out regulatory differences across all their regions of operation and create a unified internal standard that meets or exceeds the strictest jurisdiction.
That standard should shape tool use, training, data governance, and compliance policies globally.
This way, compliance is not left up to individual employees to navigate. Rather, everyone is clear on what is and is not allowed, and the whole organisation adheres to the same set of rules.
How to build trust
It is clear that AI isn’t going anywhere, and the longer organisations put off developing coherent, wide-reaching, and consistent policies around AI, the more time there is for worker confusion to build and risks to mount.
Integrating AI as a strategic support, with proper planning, resourcing and governance in place, will mean workers feel confident, risk is mitigated and productivity is boosted.
Countering the fear of replacement, which is also fuelling employee anxiety, is important, and can be achieved through communication, training and upskilling programmes, and by working with HR and compliance specialists.
By being proactive in response to AI, organisations demonstrate that AI isn’t a threat to workers or their wellbeing; rather, it is a tool that can bolster the work they are already doing. This builds trust, removes stigma, and reduces the need for employees to hide AI use.
Source – https://www.personneltoday.com/hr/employees-are-hiding-how-they-use-ai-heres-how-hr-can-respond/