Part 2: Don’t fall victim to Artificial ‘limited’ Intelligence

10 Dec 2024

While AI tools can generate impressive and informative content, they are not immune to errors. If you spend any time on the internet, you’ll likely have witnessed some tragic and unfortunate AI fails – but have you ever stopped to consider how these errors occur and how they can be avoided?

We have examined some famous AI fails and considered how they occurred, the consequences, and how you can avoid making the same mistakes.

Verification and the New York attorneys

In Part 1 (‘Prompt Power – Transform Your AI Interactions’) we suggested fact-checking any information you receive from AI to avoid possible legal implications and the spread of misinformation. Failing to fact-check could cause reputational damage for your business and scepticism towards future articles you produce. In the worst cases, it can increase your vulnerability to legal consequences, as two New York attorneys found out when they were fined $5,000 for citing fake ChatGPT-generated cases in their legal briefs.

Fact checking AI – some helpful tips:
  • AI is known to ‘hallucinate’. This means AI systems can generate false information or misleading results whilst presenting them as fact. This is what happened to the New York attorneys referenced above. The trial judge noted they had “made a good faith mistake in failing to believe that a piece of technology could be making up cases out of whole cloth”.
  • To use AI better, we suggest asking the AI you are using to:
    • provide its reference websites and sources;
    • check each link to ensure it works and is current;
    • verify each link/source using your own judgement.
  • Another tip is to provide AI at the outset with a list of your own reputable sources which it should use when assisting you. For example, ‘only using content from X news website/media outlet, please explain the current debate on farming subsidies’. Following this step provides additional comfort that the source information is accurate; however, it does not remove the need to separate opinion from fact.
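The link-checking tip above can even be partly automated. As a minimal sketch (not a substitute for reading the sources yourself), the script below pulls the URLs out of an AI reply and checks whether each one still responds; the reply text and example.com addresses are hypothetical placeholders:

```python
import re
import urllib.request

# Matches http(s) URLs in free text. A simplification: unusual URLs
# or surrounding punctuation may need extra handling.
URL_PATTERN = re.compile(r"https?://[^\s)\]>\"']+")

def extract_urls(text: str) -> list[str]:
    """Pull every http(s) link out of an AI response, trimming trailing punctuation."""
    return [u.rstrip(".,;") for u in URL_PATTERN.findall(text)]

def link_is_live(url: str, timeout: float = 5.0) -> bool:
    """Return True if the URL answers with a success or redirect status.

    Note: a live link only proves the page loads. It does not prove the
    page's content actually supports the AI's claim - verify that yourself.
    """
    try:
        request = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(request, timeout=timeout) as response:
            return 200 <= response.status < 400
    except Exception:
        return False

if __name__ == "__main__":
    # Hypothetical AI reply containing placeholder sources.
    reply = (
        "The subsidy debate is summarised at https://example.com/report "
        "and https://example.com/archive"
    )
    for url in extract_urls(reply):
        print(url, "->", "live" if link_is_live(url) else "broken")
```

Even with a check like this, a working link tells you nothing about whether the AI has quoted or interpreted the source correctly, so the final verification step always stays with you.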

Biased AI

In 2015, Amazon, one of the world’s largest tech giants, realised that the algorithm in the AI system it used for hiring employees was biased against women.

An unnamed engineer who helped build the Amazon AI system said “they wanted it to be an engine where if it was given 100 resumes, it would spit out the top five, and we’d hire those”. To do this, Amazon trained the AI on data submitted by applicants over a 10-year period; however, most of those applicants were men. Because the dataset was made up of predominantly male CVs, the AI was consequently trained to favour men over women and began penalising female applications.

This is a prime example of how AI tools can reflect biases in their training data. The lesson is to always be critical of the information being input, and to consistently review outputs from different perspectives. Not doing so could reinforce and perpetuate existing systemic prejudices, discrimination, and unfair treatment. You can use AI to assist you in this by asking it for a critical analysis, or by asking additional questions such as ‘what evidence are you using to ensure this content is accurate?’ or ‘how would you rewrite your statement from a different perspective?’

Knowing your date limit

Not all AI models have up-to-date information. For example, ChatGPT’s knowledge cut-off date can be a year or more in the past. As a result, the information obtained may be incomplete or even factually incorrect. When asked about current geo-political events and their influence on trade, for instance, ChatGPT could not include the results of the recent US elections or the rhetoric about proposed trade tariffs, both of which would be essential to any analysis.

This is a known limitation of large AI language models: they don’t actually have true “knowledge” – they are simply predicting text based on patterns in their training data. To help combat this, you must perform your own research on the information produced, ensuring that the content is still relevant and still holds true.

‘Technophobia’ – Fear, dislike, or avoidance of new technology

A recent phenomenon is the fear of AI replacing us. This is probably no surprise as we’re constantly bombarded with the fast advancements of artificial intelligence which lends itself to the misconception that AI is now ‘better’ than humans.

The 2020 case of the unmanned, AI-controlled camera used to capture football matches at the Caledonian Stadium may help alleviate this fear. With COVID-19 restrictions in place at the time, the match was broadcast directly to ticket holders using an unmanned camera. However, for a significant proportion of the match, the AI’s object-recognition technology repeatedly mistook a referee’s bald head for the ball.

Many of the viewers complained they had missed their team scoring a goal due to the camera continuously swinging to follow the referee instead of the actual game. Some viewers even suggested the club provide the referee with a toupee, or a hat to avoid this problem in the future.

The AI failures highlighted above emphasise the necessity of human oversight. These cases are only a fraction of the countless real-life examples that underscore the critical importance of maintaining a balanced approach to AI, and they illustrate the potential consequences of neglecting that balance.

By prioritising fact-checking when using AI to assist you with your day-to-day tasks, you can harness the benefits of AI while safeguarding yourself and your company against the potential ramifications.

If you want to see more from Martyn Fiddler, please follow us on LinkedIn: Martyn Fiddler
