
AI basics

We delve into the fascinating realm of GenAI and how you might apply it responsibly to your studies at ACU.

Limitations of GenAI

Generative AI (GenAI) tools are impressive, often producing content that seems near-human. They use machine learning to take in a lot of data and create content based on what you tell them. They're a bit like the word suggestions on your phone when you type a message: they get better as they learn more. But they have their limits. Their outputs are only as good as the datasets they've been trained on. Ask any GenAI chatbot a question or make a request – what it gives you can be helpful, funny, flawed or just plain made-up.
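To make the "word suggestions" analogy concrete, here is a minimal Python sketch of a toy next-word predictor. Everything in it (the corpus, the names) is invented for illustration, and real GenAI tools are vastly more sophisticated, but the core idea is the same: count patterns in training data, then replay the most common one.

# A toy next-word suggester (an invented, minimal sketch).
# Real GenAI tools use far larger models and datasets, not word counts.
from collections import Counter, defaultdict

# Made-up "training data": a tiny corpus of text.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# "Learn" the patterns: count which word follows each word.
following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def suggest(word):
    """Suggest the word seen most often after `word` in the training data."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(suggest("the"))  # e.g. 'cat': replaying a pattern, not understanding

Notice that the "model" has no idea what a cat or a mat is. It only replays patterns, which is exactly why its output is only as good as its training data.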

GenAI can give you unreliable or flat-out wrong content if the data has issues (gaps or inaccuracies) or if your instructions are unclear. GenAI tools create content based on patterns they've learned, not through actual understanding. That means the content can be inaccurate, nonsensical, biased or superficial. At times GenAI tools can hallucinate responses. That's why it's always a good idea to check their responses carefully, especially when the answers are important.

While GenAI tools are a groundbreaking development in the realm of artificial intelligence, it’s crucial to approach their use with an understanding of their limitations. They are tools best used with a blend of human judgement and technological assistance, ensuring their outputs are leveraged effectively and responsibly.

Exploring output fails

Explore our collage of examples below. Click on the plus (+) icons to read the prompt behind the output and the discussion about what wasn’t quite right in what was generated.  

Activity overview

This interactive image hotspot provides some examples of AI generated infographics. Clicking on the hotspots reveals information about the outputs. Hotspots are displayed as plus (+) icons.

Hotspot 1: AI in the library

Prompt asked: Create a powerpoint slide about using AI in the library.

Image description: A grey robot holding a stack of books, surrounded by a mindmap of random images inside circles such as cogs, technology and the brain.

What was wrong with the output:

  • The text in the image – at a glance the title and subtitle read well, but closer examination shows obvious spelling and formatting errors.
  • The graphics in the image – stereotyped and one-dimensional representation in the choice of images (the library is depicted as just books and AI as a robot).
  • The output format was also problematic: we asked for a PowerPoint slide and it produced a JPEG file.

What was helpful:

  • It’s a good example of how information can be presented using visual cues in a presentation slide.
  • The short text chunks are a good design benchmark for presenting written information.

Main take-away: It’s important to use the right GenAI tool for the output you need.

Hotspot 2: Data visualisation

Prompt asked: Create a data visualisation in a pie chart of all the regions in the world and digital poverty statistics.

Image description: Circular chart that has multicoloured segments on the left and depicts clouds and continents on the right.

What was wrong with the output:

  • No data provided or visualised.
  • Too much content within the image makes it unreadable.
  • Lack of logical information means the image carries no meaning.
  • The prompt asked for a pie chart, but this type of data (demographic and statistical) would be better visualised in other formats. GenAI tools currently don't have evaluative judgement: they action a prompt without suggesting fit-for-purpose options.

What was helpful:

Evaluating this output made it clear that a pie chart wouldn't work well as a data visualisation. The positive outcome was being able to test visualisation ideas before building them.

Main take-away: GenAI tools won't always recognise their limitations and can provide the wrong format or information as a result.

Hotspot 3: Representation in images

Prompt asked: Create an image that represents artists.

Image description: Black and white image of 4 men and 3 women standing/sitting around a table. They are all focused on painting. In the centre of the table are jars of paint brushes.

What was wrong with the output:

This image reflects bias in a number of ways:

  • Artists are all young with no age diversity
  • Minimal racial diversity in the group
  • Artists seem to have similar body types
  • Assumption that art = painting and sketching. Digital art or other forms of art practice are not considered.

When GenAI produces an image that confirms or reinforces existing stereotypes or assumptions, the type of bias is often referred to as 'confirmation bias'. In an AI or machine learning context it's more accurately called 'algorithmic bias' or 'data bias'.

What was helpful in the output:

  • This output steps away from the 'male artist as hero' stereotype.
  • The overall image quality would work well in a presentation slide deck. It would also function well as a stimulus image for an artist to respond to or extend in their own work.

Main take-away: You need to be specific with your prompt: the more detail the better, as this helps create a quality output that mitigates bias.

Hotspot 4: Pros and cons table

Prompt asked: Create a pros and cons table of using Generative AI in study.

Image description: Top-down view of a table. In the centre of the table is a diagram divided into 4 sections, and each section contains lots of boxes with random indistinct imagery. Surrounding the diagram are objects such as pens, pencils and cups of coffee.

What was wrong with the output:

Bing Chat gave us two outputs for this prompt: 1. an image of a table and 2. a text-based table.

The problem with the image output:

  • A text-based response was required but Bing Chat guessed we also wanted an image because of the chat prompt history.
  • No useful information at all. Words are unreadable. Images have no meaning.
  • It gave us an image of an actual (physical) table scattered with irrelevant objects, rather than a table of information.

The problem with the text-based output:

  • Teacher rather than student perspective in the content.
  • Errors in text with random numbers added.
  • American rather than Australian English used.

What was helpful:

Of the two outputs, the image was not usable but the text-based table could be refined or extended.

Hotspot 5: Infographic about information

Prompt asked: Create me an infographic about mis, dis and malinformation in connection to generative AI.

Image description: An infographic that is black, blue and orange, which contains illegible words and a bunch of random images such as clouds, computer screens, cogs and more. It is very cluttered.

What was wrong with the output:

  • It looks like an infographic from a distance but closer examination shows none of the content makes any sense.
  • Data visualisations are generic images not actual analysis.
  • Infographics balance meaningful text and visualisation; this output has no real text, data is missing, and the images have no meaning.

What was helpful:

  • The design and layout of the infographic are a starting point for creating your own.
  • The design balances text, data and imagery well, even though the content itself is meaningless.
  • The colour palette and visual elements are well designed.

Main take-away: Pause, reflect and dive deeper into the output. The image is a great reminder to always assess the generated output.

Biases

Humans are biased and we are typically unaware of our own bias. We make assumptions based on our own understandings and experiences. We see or react to things through a very individual lens. This kind of thinking is known as cognitive bias. Confirmation bias is when we look for information that confirms our existing beliefs; when we prompt a GenAI tool this way, we can distort its output.

Human bias impacts how we interact with these tools. 

  • GenAI tools are influenced when designers or moderators either knowingly or unknowingly introduce their own cognitive biases as they develop and iterate the tool.
  • Users also inadvertently introduce their biases to GenAI tools through the types of data they enter, including prompts. An example of generating self-confirming output is the prompt 'give me a picture of a parent with a spoon in her hand', which already assumes the parent is a woman.
  • Our biases also impact how we interpret the GenAI information or output. We favour results that align with our beliefs or expectations and disregard outputs that contradict them.

Training data bias

If the data used to train the AI is biased, the generated outputs reflect that bias. This perpetuates stereotypes, misinformation and other inaccuracies. 
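To see how this plays out, here is a small Python sketch using a deliberately skewed, made-up dataset (the professions, pronouns and counts are all our invention). Real training sets are vastly larger, but the principle is the same: a pattern-matching model can only echo what it was shown.

# A toy illustration of training data bias, using an invented, skewed corpus.
from collections import Counter, defaultdict

# Every example in this made-up data pairs 'nurse' with 'she'
# and 'engineer' with 'he'.
training_pairs = [
    ("nurse", "she"), ("nurse", "she"), ("nurse", "she"),
    ("engineer", "he"), ("engineer", "he"), ("engineer", "he"),
]

pronoun_for = defaultdict(Counter)
for profession, pronoun in training_pairs:
    pronoun_for[profession][pronoun] += 1

def describe(profession):
    """'Generate' a pronoun the way a pattern-matcher does: by frequency."""
    return pronoun_for[profession].most_common(1)[0][0]

print(describe("nurse"))     # 'she': a stereotype learned from skewed data
print(describe("engineer"))  # 'he': the model simply replays its inputs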

Read through the types of data bias that GenAI can include in the interactive below. Match each type to its definition by drag and drop.

Hallucinations

GenAI tools use pre-defined sets of training information, with their predictive technology identifying patterns in this information. The tools then use these patterns when generating content in response to user prompts. 'Hallucinations' are AI-generated responses that are fabricated, inaccurate or incorrect. They occur when a GenAI tool becomes erratic as it attempts to respond to prompts for which it has insufficient or flawed information. As machines, GenAI tools do not have the capacity to reason about, or reflect on, whether the content they generate makes sense.
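To see why a pattern-matcher "answers anyway" rather than saying "I don't know", here is a toy Python sketch building on the bigram idea above. It is an invented illustration, not how real GenAI tools are built, but it captures the mechanics: when the model has no data for a prompt, it still produces something that looks like an answer.

# A toy illustration of why a pattern-matcher "answers anyway".
# Invented example; real hallucinations are subtler than this.
import random
from collections import Counter, defaultdict

corpus = "the cat sat on the mat".split()  # made-up training data

following = defaultdict(Counter)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word][next_word] += 1

def generate(start_word, length=4):
    """Keep predicting words, even with no evidence for the start word."""
    words = [start_word]
    for _ in range(length):
        candidates = following.get(words[-1])
        if candidates:
            words.append(candidates.most_common(1)[0][0])
        else:
            # Never seen this word? Pick something plausible-looking at
            # random rather than admitting ignorance: a toy "hallucination".
            words.append(random.choice(corpus))
    return " ".join(words)

# 'dog' never appears in the training data, yet we still get a
# fluent-looking, fabricated continuation.
print(generate("dog"))

Real hallucinations are subtler than this random fallback, but the underlying problem is the same: the tool generates fluent output whether or not reliable information sits behind it.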

What causes hallucinations?

There are several reasons for hallucinations. Click on the flip cards below to learn more.

Tool quality and purpose

GenAI tools are at varying levels of development, which impacts their effectiveness. A tool's purpose must match what users ask it to do.

Training data quality

Garbage in = garbage out: the quality of the training data impacts the quality of the output. Data limitations (for example, insufficient or outdated data) mean the tool generates inaccurate responses.

Prompt wording

Clear instructions and limited ambiguity are critical: prompts need to spell out the specifics of what you need the output to be. The quality of the wording impacts the quality of the output. For example, the vague prompt in Hotspot 3 ('Create an image that represents artists') produced a stereotyped result, while specifying ages, backgrounds and art forms would give the tool far more to work with.

Malicious prompts

Malicious prompts are deliberately designed to confuse the GenAI tool or to corrupt the training data and can lead to hallucinations.