Part 2: Don’t fall victim to Artificial ‘limited’ Intelligence
10 Dec 2024
While AI tools can generate impressive and informative content, they are not immune to errors. If you spend any time on the internet, you’ll have likely been witness to some tragic and unfortunate AI fails – but have you ever stopped to consider how these errors occur and how they can be avoided?
We have examined some famous AI fails and considered how they occurred, the consequences, and how you can avoid making the same mistakes.
In Part 1 (‘Prompt Power – Transform Your AI Interactions’) we suggested fact-checking any information you receive from AI to avoid possible legal implications and the spread of misinformation. Failing to fact-check could cause reputational damage for your business and scepticism towards future articles you produce. In the worst cases it can expose you to legal consequences, as two New York attorneys discovered when they were fined $5,000 for citing fictitious ChatGPT-generated cases in their legal briefs.
In 2015, Amazon, one of the world’s largest tech giants, realised that the algorithm in the AI system it used for hiring employees was biased against women.
An unnamed engineer who helped build the Amazon AI system said “they wanted it to be an engine where if it was given 100 resumes, it would spit out the top five, and we’d hire those”. To achieve this, Amazon trained the AI on roughly ten years of CVs submitted by applicants; however, because most of those applicants were men, the system learned to favour male candidates and began penalising applications from women.
This is a prime example of how AI tools can reflect the biases in their training data. The lesson is to be critical of the information you feed in, and to regularly review outputs from different perspectives. Failing to do so risks reinforcing and perpetuating existing prejudices, discrimination, and unfair treatment. AI itself can assist with this: ask it for a critical analysis, or pose follow-up questions such as ‘what evidence are you using to ensure this content is accurate?’ or ‘how would you rewrite your statement from a different perspective?’
Not all AI models have up-to-date information. ChatGPT’s knowledge cut-off, for example, can lag current events by a year or more, so the information it returns may be incomplete or even factually incorrect. When asked about current geopolitical events and their influence on trade, ChatGPT could not take account of the recent US election results or the rhetoric around proposed trade tariffs – both essential to any meaningful analysis.
This is a known limitation of large AI language models: they have no true “knowledge” – they simply predict text based on patterns in their training data. To combat this, perform your own research on the output and confirm that the content is still relevant and accurate.
A recent phenomenon is the fear of AI replacing us. This is probably no surprise as we’re constantly bombarded with the fast advancements of artificial intelligence which lends itself to the misconception that AI is now ‘better’ than humans.
The 2020 case of an unmanned, AI-controlled camera used to capture football matches at the Caledonian Stadium may help to alleviate this fear. With COVID-19 restrictions in place at the time, the match was broadcast directly to ticket holders using the automated camera. However, for a significant proportion of the match, the AI’s object-recognition technology repeatedly mistook a referee’s bald head for the ball.
Many viewers complained that they had missed their team scoring a goal because the camera kept swinging to follow the referee instead of the actual game. Some even suggested the club provide the referee with a toupee or a hat to avoid the problem in future.
The AI failures highlighted above emphasise the necessity of human oversight. They are only a fraction of the countless real-life examples that underscore the critical importance of a balanced approach to AI – and illustrate the potential consequences of neglecting it.
By prioritising fact-checking when using AI to assist with your day-to-day tasks, you can harness the benefits of AI while safeguarding yourself and your company against the potential ramifications.
If you want to see more from Martyn Fiddler, please follow us on LinkedIn: Martyn Fiddler