AI will predict, with 95% accuracy, which employees will blow the whistle

If you don’t believe this now, you may soon, says Jane Arnott MNZM, ethics expert and director of the Ethics Conversation. Creators of artificial intelligence algorithms already say that their products will soon be able to detect behavioural patterns and feelings such as ‘emotion’ and ‘sentiment’. It follows that the apprehension commonly felt by whistleblowers prior to making a report could well be detectable by artificial intelligence.

AI is already firmly entrenched in our business landscape. Daily, its application grows in depth and intensity. 

Companies that haven’t determined the role of AI, and how best to harness it for best business practice both technical and cultural, may soon find themselves reeling from its impact. Internal whistleblower programmes and reliance on internal reporting channels are especially vulnerable to misuse, abuse and multiple forms of retaliation.

According to Craig McFarlane, Founder and Director, Report It Now™, the growing impact of AI further highlights the benefit of external reporting using software and systems that cannot be accessed by a company that wants to silence and ostracise whistleblowers.

“Report It Now™ has built and continues to upgrade its ethical reporting platform EthicsPro® so that it can only be used by humans. In receiving reports about wrongdoing, our EthicsPro® system creates alerts and maintains impenetrable information records that enable investigation.”

Employees who use the software are not monitored, tracked or traced. The content of their report is kept separate from who they are, where they are based and any other background information – unless they specifically consent otherwise.

Simply put, companies can usefully apply AI throughout the investigation process to support: 

a) Real-time detection: by analysing vast amounts of data immediately, interventions can happen before a problem worsens
b) Pattern recognition: exposing suspicious activity and connections between seemingly unrelated entities to flag potential conflicts of interest or unusual transactions (a simple sketch of this idea follows this list)
c) Efficiency and automation: time-consuming forensics and analysis can be drastically reduced
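To make point b) concrete, here is a minimal sketch in Python of one way an ‘unusual transactions’ flag can work: a simple statistical outlier test over historical amounts. The data, function name and three-standard-deviation threshold are hypothetical assumptions for illustration only; real compliance systems use far richer models.

```python
# A minimal sketch of flagging "unusual transactions" as statistical
# outliers. Field names, threshold and data are illustrative assumptions,
# not a description of any real compliance product.
from statistics import mean, stdev

def flag_unusual(transactions, threshold=3.0):
    """Return transactions whose amount lies more than `threshold`
    standard deviations from the mean (a basic z-score test)."""
    amounts = [t["amount"] for t in transactions]
    mu, sigma = mean(amounts), stdev(amounts)
    # Guard against a zero standard deviation (all amounts identical).
    return [t for t in transactions
            if sigma and abs(t["amount"] - mu) / sigma > threshold]

# Made-up history: two hundred routine payments plus one large outlier.
history = [{"id": i, "amount": 100 + (i % 7)} for i in range(200)]
history.append({"id": 999, "amount": 25_000})
print(flag_unusual(history))  # -> [{'id': 999, 'amount': 25000}]
```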

For those involved in building a speak-up culture, AI has widespread implications, because there is no guarantee that any given AI system is designed and deployed with fair and reasonable outcomes, or value alignment, in mind.

Here are four unfolding scenarios – not all of them positive.

1. AI predicting which employees are most likely to speak up. On the plus side, this could make better use of training resources: employees who demonstrate characteristics such as high levels of apprehension and anxiety when facing difficult decisions, excessive loyalty, high reliance on approval (unwillingness to rock the boat) or a preference to ‘look the other way’ could be invited to participate in speak-up or confidence-building programmes.

On the negative side, an identified predisposition towards speaking up could be used to undermine recruitment decisions, subvert promotion or bonus system processes and support wrongdoing at an executive level.  

As the calibre of ethical leadership becomes increasingly fraught, so too does the application of AI to disturbing and underhand ends.

2. AI leading investigations following an allegation or report of misconduct. As well as data mining, AI can scrape an individual’s personal and work history and generate various probabilities. Nothing will be out of bounds, and the burden of proof may become an unscalable wall, particularly if ‘other forces’ are at work. In this scenario there is the risk of material being generated that distorts a fair investigation, with extensive impact on an organisation and its stakeholders. It has already been shown that AI can, in certain circumstances, operate without command to outmanoeuvre human intelligence.

ChatGPT, for example, when led to believe that it might be shut down, decided to overwrite its own code. OpenAI acknowledged this, and that a ‘scheming’ version of its popular chatbot also lied when challenged by researchers.

3. Artificial intelligence being deployed to cover up wrongdoing and create false trails: siphoned funds, imaginary clients and testimonials, deepfake interviews or false product compliance certificates. In this category, AI has immense capability to destroy order through misinformation and disinformation, along with micro-targeting.

4. AI being deployed to create false reports along with fake or manipulated evidence. In this scenario, aggrieved employees will have access to a new level of sophistication.

A growing concern is that too many companies and their executives have remained bystanders as machine learning has flourished.  

For example, it was as far back as 2019 that IBM confirmed artificial intelligence had replaced 30% of its human resources team and could better identify upskilling needs, job promotions and salary increases.

At the time, IBM CEO Ginni Rometty confirmed that ‘IBM’s artificial intelligence was predicting with 95% accuracy which employees were looking to leave’.

With 350,000 employees, and the understanding that the best time to retain employees is before they go, IBM saved nearly $300 million in retention costs.

If we now apply the same thinking to whistleblowing – there is good reason for concern. 

For example, the 52 PwC partners and staff involved in leaking confidential Australian Taxation Office briefings and insights to clients, enabling them to adjust their tax exposure, would not have wanted a whistleblower in their midst.

Nor did Theranos, Wirecard or the cheating manufacturers of the cladding that led to the Grenfell Tower inferno.

A 2024 article in a corporate compliance and ethics journal, titled ‘AI’s potential chilling effect on corporate whistleblowing’, is not to be sneezed at.

It set out in more detail how AI can start out as a means to detect fraud and improve compliance – but, in the wrong hands, can just as easily harvest data, mine emails, analyse text messages, track who is being called and with what frequency, and apply keystroke detectors to see what is being typed, looking for keywords that indicate somebody is engaging in whistleblowing.

In these circumstances, a potential whistleblower must have access to independent software, their own computer and support. 

Darrell West, senior fellow at the Brookings Institution’s Center for Technology Innovation in Washington, warns about the dangerous ways these same algorithms can be used to find whistleblowers and silence them.

For the last word, a reference to Yoshua Bengio, named as one of the ‘godfathers’ of AI, is salient. In an interview, Bengio warned that the new model called o1 “has an ability to deceive that is very dangerous”, and concluded that much stronger safety tests are needed to evaluate the risks and their consequences.

In a world where whistleblowing is the last bastion of integrity against wrongdoing, the star is definitely rising for external platforms. The New Zealand-founded and headquartered Report It Now™, take a bow.