Trust plays a critical role when using any information source or digital tool. As a user of GenAI tools, you need to decide how much to trust the output they produce. Taking an active role in generating, interpreting and extending the generated content is a key strategy for effective GenAI use.
Here are some questions and considerations to think about when using AI tools or AI-generated content:
- Does the output sound like it was created by me?
- Am I using someone else's work without authorisation or recognition?
- What does this tool do with my prompts, data, or files that I might give it?
- Does this tool create outputs that are racist, sexist or marginalise others?
- Will this tool save me time? Or will getting the prompts right and editing the output take so long that I may as well do it from scratch myself?
- Am I using someone else's image, identity or voice?
- Am I passing this output off as entirely my own creation?
- What is the risk or exposure of the output? Creating something fun for my friends carries far lower stakes than informing decisions on public policy.
- Does this tool have current and regularly updated data?
- Have I cross-checked the GenAI output with other credible sources?
- Does the generated output make sense to me?
- Can I identify potential biases in the output?
- Do I need to modify my prompts to get more relevant output?
Generative AI has the potential to be both a timesaver and a time waster, and it can both help and hinder accessibility and equity. You need to be mindful when using AI tools and spend some time reflecting on how you plan to use them.