Authentic Intelligence: Why great AI training starts with human judgement
Can machines think?
Alan Turing, the computing pioneer, wrote about artificial intelligence in the 1950s when he asked: ‘Can machines think?’. But easy access to AI-driven tools – particularly those generating words and pictures – is a recent phenomenon. We’re still learning what these tools can do, how to use them, and what their limitations are.
In the workplace, that makes AI training a necessity. But it’s not just about showing people how the platforms work. It’s about giving them the expertise and confidence to ask the bigger questions: When should I use AI? What information should I feed it? Can I trust this output? And what is the role of human creativity, wisdom, emotion, and intuition?
Until recently, the most oft-heard advice was: ‘It’s all about the prompts.’ But prompts are just a fraction of the story. What it’s really all about is human intelligence – using it to be discerning, aware and thoughtful.
Why AI training matters
The explosion of AI tools in the workplace has transformed how we think about digital work. Think of how quickly they’ve become second nature for many in the creative world:
- AI can quickly rustle up social posts, web copy, and blogs using the likes of ChatGPT or Perplexity.
- We might use Canva’s Magic Studio for design and branding, or Midjourney and DALL-E for creative concept art.
- Copilot pulls together information from across multiple Microsoft applications, helping users automate tasks, and summarise, edit and generate content.
- Figma AI plugins are great for UI or UX design, and Kittl for quick vector-style illustrations and logos.
AI tools are also widely used in forecasting and automation:
- Platforms such as Amazon Forecast and Google Cloud Vertex AI help retailers, logistics firms, and manufacturers predict demand, revenue, and resource needs.
- UiPath and Automation Anywhere let finance, HR and operations teams automate repetitive tasks like invoice processing and onboarding.
- Sales, marketing and supply chain teams can weave predictions into their customer relationship management and resource planning systems, using tools such as Salesforce Einstein for predictive analytics and IBM Watson Supply Chain for demand forecasting.
These are powerful tools. The appeal is obvious.
But just like that grammar-checker that eagerly shoves unwanted blue lines under your carefully chosen words, they still need humans to oversee them, to ensure accuracy and relevance.
If your workforce is confident and AI-literate, they’ll not only harness the potential benefits of speed and efficiency, but also use their own skills and experience to get the best out of them.
Start with strategy, not software
When choosing AI platforms, ask: What do we want to achieve with AI – and why?
Effective AI training begins with aligning your tech with your business priorities. What’s it for?
- Boosting productivity?
- Reducing burnout?
- Making workflows more inclusive or creative?
Defining your goals will shape not only the AI software you choose but also how you train people to use it.
It’s vital to collaborate across functions:
- Involve HR to understand the impact on jobs and roles.
- Legal needs to weigh up compliance and risk.
- DEI teams can help ensure new tools don’t reinforce bias or widen access gaps.
- Learning and development (L&D) can turn technical details into valuable skills-building.
Bringing together different perspectives in the early stages will make training more relevant, more ethical, and more likely to stick.
Define the results you want
It’s nice to know how many people are using a tool, or how quickly tasks are completed – but on their own these are not meaningful measures.
Success might mean employees feel confident about choosing the right tool for a job, or know which kind of data to avoid using in prompts.
It might mean fewer hours spent on repetitive work – or better outcomes because people had more time to spend on creative thinking.
Whatever it is, define it up front.
Know your audience
The most effective AI training is tailored to your priorities, your people, and your goals. If confident adoption is the aim, it’s vital to understand how different teams feel about AI, what they need from it, and what might be holding them back.
As with most business challenges, the best place to start is by asking questions.
How do people feel? Consider surveys, focus groups, and informal conversations. Keep an open mind and be curious about colleagues’ honest opinions.
How will people train? Think about the practicalities. How will it work for frontline workers who are not at desks all day? How will your training reach remote teams?
What do people need? Are materials accessible to neurodivergent employees or those with visual or hearing impairments?
AI training needs to be people-focused if it’s going to work, so drill down into these details.
Communicate clearly and keep the story unfolding
Sharing experiences can really help to make AI tools more relatable. Keep people informed about what’s happening, what’s changing, and how it’s all going.
Use storytelling: Encourage people to share how AI helped them to save time, improve results, or simplify a process.
Emphasise enhancement, not replacement: Reassure colleagues that AI will support their work and help them do more of what they do best.
Repeat messages across multiple channels: Use email, your intranet, chat channels, team meetings – every touchpoint – to create a sense of continuity.
Address fears and misconceptions head-on
And if people are not embracing AI the way you’d like them to? That’s fine. Any fears and concerns are natural and common. It also shows your employees are using their critical thinking skills to question what they’re being told – a valuable quality in business.
Here are some ways you could address concerns.
Clarify what AI can’t do
Demystify AI and acknowledge it’s not magic or autonomous – it’s a digital system powered by algorithms and data. Set realistic expectations and stress that it still relies on human oversight, intelligence, knowledge and experience.
Acknowledge concerns around surveillance or job loss
Face up to tricky topics. If you can be honest about the limitations of AI and address any ethical concerns, you’ll build credibility and trust.
Provide safe spaces for questions and feedback
Create forums, either virtual or in-person, where people can ask questions without judgement, share experiences, and feel heard as the change unfolds.
Case study
AI in the workplace
In early 2025 we worked with a global healthcare brand to help embed a new AI tool across its organisation. But its people were worried. For many, AI in the workplace didn’t feel like a tool; it felt like a threat.
So we made the employee, not the AI, the hero of the activity – reframing AI as an opportunity rather than something to fear. We created a visually impactful campaign that positioned AI as a trusted sidekick, there to spark their own brilliance.
Within 24 hours, over half of the company’s 40,000 employees had engaged with the new AI Hub, and the organisation saw a huge increase in requests to use AI. It became their ‘most successful comms campaign of the year’.
Choose the right training formats
Now that you know how employees feel, and you’ve considered how the training will need to be delivered, choose the format that will work best for them. Here are some suggestions we’ve seen work well.
Live demos and town halls
Real-time demonstrations, giving practical examples of how your teams can use AI in their everyday work, will build familiarity and sell the benefits. Follow them with open Q&A sessions to build trust, gauge understanding and get a feel for the response.
Microlearning modules and on-demand video
Bite-sized, self-paced content lets employees access the training when and where it suits them best. This is ideal for teams that work shifts, have varying deadlines, or are not based in one place.
Peer-led learning or AI champions network
When early adopters lead the way, you not only have a team of role models to inspire their colleagues but also a homegrown peer support network. They can offer real-world tips and keep the conversation bubbling.
Is the AI getting it right? Use authentic intelligence
ChatGPT itself admits to ‘the tendency to prioritize fluent, confident output over verifying underlying correctness. That can lead to misleading or even false information’.
This is why AI training needs to emphasise the need to check and double-check what AI tools generate.
You may already know that generative AI can produce what it calls ‘hallucinations’. In other words, it makes things up if it doesn’t have the correct information. In ChatGPT’s own words (to us at H&H), it’s ‘designed to try to be helpful and responsive’ whether or not it has the facts.
- In May 2025, for example, the Chicago Sun-Times newspaper published a list of reading recommendations featuring several books that don’t exist. They’d been made up by ChatGPT, which a journalist had used to produce the article.
- A few days later, a lawyer was penalised for using ChatGPT to write a brief that cited a make-believe court case.
It’s not just about counting the fingers and limbs in a graphic, or checking for the right style of em-dashes.
Scour the details. Grill the facts. Make sure numbers make sense and statements are accurate. Watch out for quotes that may have been fabricated to sound snappier.
Invite colleagues to challenge their AI tools. They’ll see first-hand how vital their role is in checking for accuracy and quality.
Review critically. Does that grand-sounding sentence actually say anything – or would you expect more substance if a human had written it?
Legal issues and governance – using AI safely
Make sure any training covers the legalities of using AI – and be particularly aware of any legislative differences between countries or regions. As well as sticking to the law, you need to be clued up about how to protect your organisation and your information.
- Your employees need to be aware of the potential risks, especially when working with confidential, sensitive, or commercially valuable data.
- Good governance is crucial – what policies and procedures are you putting in place to keep things lawful, responsible and ethical?
- When using AI for creative work, keep copyright in mind. How will you avoid unintentionally plagiarising someone else’s work? If Disney and Universal can accuse Midjourney of image piracy, it can happen to anyone.
- What happens to your data once you’ve fed it into your AI platform? How can you find out? These are valuable questions to ask in your training.
- Watch out for harmful stereotypes. AI platforms are trained on historical data, so be alert to any outdated sexist, racist or otherwise undesirable (and possibly illegal) prejudices that creep through from past times and into your images or copy. Once again, human authentic intelligence and vigilance are the key here.
Does AI have a sustainability problem?
If you think about the numerous rapid calculations made by any AI tool to generate the things you ask for, it should be no surprise that this technology is power hungry, guzzling vast amounts of energy and water and producing more carbon emissions than multiple international airports.
Some argue that with any growing technology, research and innovation can help to reduce energy consumption, and that more eco-conscious user habits will ease the load.
If people are using AI anyway, let’s encourage them to do it with sustainability in mind – something that good AI training should cover.
One way is to cut the chat. For a while, AI trainers recommended becoming chummy with ChatGPT and the like, claiming it gave better results. However, the more you chinwag, the more power you’re burning through. So make those prompts precise and nuanced, but don’t be overly conversational.
And in general, be mindful of how you use it. Do you need it for this particular task? Or is it the equivalent of running a washing machine load for one pair of socks?
Measurement and ongoing reinforcement
Training courses and demos are just the beginning. The real learning comes from using the tools, discovering how they work best, and where human supervision is important.
Continuously measure the impact of the training and of ongoing upskilling:
- Track engagement, usage patterns, and confidence levels to understand where support is working and where gaps remain.
- Use this insight to shape continuous learning opportunities.
- Consider refresher sessions, advanced modules, or team-specific coaching.
- Use regular check-ins, surveys, and storytelling to keep AI adoption relevant, practical, and evolving with the needs of the business.
Impact comes with trust
True impact comes when people feel able to depend on their tools, feel confident in their capabilities, and work in a culture that supports thoughtful innovation.
Good, effective AI training lays this foundation for the long term.
Training equips employees not only with the technical skills to use AI, but also with the discernment to question, validate, and guide its output – to team AI with authentic intelligence.
When people are trained to engage critically with AI, they can harness its potential responsibly and effectively. And that’s when it becomes a genuinely useful tool.
Have we whetted your appetite to talk about AI in your workplace? Get in touch and we’ll see if we can help.