Generative AI (GenAI) tools are impressive, often producing content that seems near-human. They use machine learning to take in large amounts of data and create content based on what you tell them. They're a bit like the word suggestions on your phone when you type a message: they get better as they learn more. But they have their limits. Their outputs are only as good as the datasets they've been trained on. Ask any GenAI chatbot a question or make a request, and what it gives you can be helpful, funny, flawed or just plain made-up.
However, GenAI can give you unreliable or flat-out wrong content if the training data has issues (gaps or inaccuracies) or if your instructions are unclear. GenAI tools create content based on patterns they’ve learned, not through actual understanding. That means the content can be inaccurate, nonsensical, biased or superficial. At times GenAI tools can hallucinate responses. That's why it's always a good idea to check their responses carefully, especially when the answers are important.
While GenAI tools are a groundbreaking development in the realm of artificial intelligence, it’s crucial to approach their use with an understanding of their limitations. They are tools best used with a blend of human judgement and technological assistance, ensuring their outputs are leveraged effectively and responsibly.
Explore our collage of examples below. Click on the plus (+) icons to read the prompt behind the output and the discussion about what wasn’t quite right in what was generated.
This interactive image hotspot provides some examples of AI-generated infographics. Clicking on the hotspots reveals information about the outputs. Hotspots are displayed as plus (+) icons.
Prompt asked: Create a PowerPoint slide about using AI in the library.
Image description: A grey robot holding a stack of books, surrounded by a mind map of random images inside circles, such as cogs, technology and the brain.
What was wrong with the output:
What was helpful:
Main take-away: It’s important to use the right GenAI tool for the output you need.
Prompt asked: Create a data visualisation in a pie chart of all the regions in the world and digital poverty statistics.
Image description: Circular chart that has multicoloured segments on the left and depicts clouds and continents on the right.
What was wrong with the output:
What was helpful:
Looking at and evaluating this output made it clear that a pie chart wouldn’t work well as a data visualisation. The positive outcome was being able to test visualisation ideas before building them.
Main take-away: GenAI tools won’t always recognise their limitations and can provide the wrong format or information as a result.
Prompt asked: Create an image that represents artists.
Image description: Black and white image of four men and three women standing or sitting around a table. They are all focused on painting. In the centre of the table are jars of paintbrushes.
What was wrong with the output:
This image reflects bias in a number of ways:
When GenAI produces an image that confirms or reinforces existing stereotypes or assumptions, the type of bias is often referred to as "confirmation bias". In an AI or machine learning context it is more accurately described as "algorithmic bias" or "data bias".
What was helpful in the output:
Main take-away: You need to be specific with your prompt; the more detail you provide, the better, as this helps create a quality output and mitigates bias.
Prompt asked: Create a pros and cons table of using Generative AI in study.
Image description: Top-down view of a table. In the centre of the table is a diagram divided into 4 sections, and each section contains lots of boxes with random indistinct imagery. Surrounding the diagram are objects such as pens, pencils and cups of coffee.
What was wrong with the output:
Bing Chat gave us two outputs for this prompt: 1. an image of a table and 2. a text-based table.
The problem with the image output:
The problem with the text-based output:
What was helpful:
Bing Chat gave us two outputs for this prompt. The image output was not usable, but the text-based table could be refined or extended.
Prompt asked: Create me an infographic about mis, dis and malinformation in connection to generative AI.
Image description: An infographic that is black, blue and orange, which contains illegible words and a bunch of random images such as clouds, computer screens, cogs and more. It is very cluttered.
What was wrong with the output:
What was helpful:
Main take-away: Pause, reflect and dive deeper into the output. The image is a great reminder to always assess the generated output.
Humans are biased and we are typically unaware of our own bias. We make assumptions based on our own understandings and experiences. We see or react to things through a very individual lens. This kind of thinking is known as cognitive bias. Confirmation bias is when we look for information that confirms our own biases, which can distort the output of the tool.
Human bias impacts how we interact with these tools.
If the data used to train the AI is biased, the generated outputs reflect that bias. This perpetuates stereotypes, misinformation and other inaccuracies.
Read through the types of data bias that GenAI can include in the interactive below. Match each type to its definition by dragging and dropping.
GenAI tools use pre-defined sets of training information, with their predictive technology identifying patterns in this information. The tools then use these predictive patterns when generating content in response to user prompts. ‘Hallucinations’ are AI-generated responses that are fabricated, inaccurate or incorrect. They occur when the GenAI tool becomes erratic as it attempts to respond to prompts for which it has insufficient or flawed information. As machines, GenAI tools do not have the capacity to reason, or to reflect on whether the content they generate makes sense.
There are several reasons for hallucinations. Click on the flip cards below to learn more.
GenAI tools have varying levels of development that impact their effectiveness. A tool's purpose must match what users ask it to do.
Garbage In = Garbage Out. The quality of the training data impacts the quality of the output. Data limitations (for example, insufficient or outdated data) mean the tool generates inaccurate responses.
Clear instructions and limited ambiguity are critical. Prompts need to specify exactly what you need the output to be. The quality of the wording impacts the quality of the output.
Malicious prompts are deliberately designed to confuse the GenAI tool or to corrupt the training data, and can lead to hallucinations.