What Is Visual Generative AI?

Before you start: Please download this Participation Information Sheet, which explains what data is collected by this tool.

The Basics

Visual Generative AI refers to a type of artificial intelligence that creates new images based on prompts—these can include text descriptions, rough sketches, or even other images. Unlike editing or filtering tools, these systems generate entirely new visuals from scratch, which can range from realistic photographs to surreal or stylised artworks.

How It Works

At the core of these systems are machine learning models trained on massive datasets containing millions of images. Tools like Midjourney, DALL·E, and Stable Diffusion learn from these examples—absorbing patterns in shape, colour, composition, and style—to produce outputs that match a given prompt. Though they don’t “see” like humans do, they’re adept at blending ideas and mimicking visual styles.

Types of Models

Most visual generative AI tools rely on diffusion models or GANs (Generative Adversarial Networks). In simple terms, a diffusion model starts with a field of noise and gradually refines it into a coherent image, guided by what the AI has learned during training; a GAN instead learns to generate images by competing against a second network that tries to tell its fakes from real photos. For example, if you ask for “a sunset over a futuristic city,” the AI constructs an image by drawing on relevant visual concepts from its training data.
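
If you are curious what that process looks like in practice, here is a minimal sketch in Python using the open-source diffusers library, which runs a Stable Diffusion model. The checkpoint name, settings, and hardware are illustrative assumptions rather than a description of any particular commercial tool:

    # Minimal text-to-image sketch with the open-source `diffusers` library.
    # The checkpoint name and settings below are examples only.
    import torch
    from diffusers import StableDiffusionPipeline

    # Load a pretrained Stable Diffusion checkpoint (example identifier).
    pipe = StableDiffusionPipeline.from_pretrained(
        "runwayml/stable-diffusion-v1-5",
        torch_dtype=torch.float16,
    ).to("cuda")  # assumes a GPU; use "cpu" (slowly) if none is available

    # The pipeline begins with random noise and denoises it step by step,
    # nudging each step towards the text prompt.
    image = pipe(
        "a sunset over a futuristic city",
        num_inference_steps=30,  # more steps means more gradual refinement
        guidance_scale=7.5,      # how strongly the prompt steers the result
    ).images[0]

    image.save("sunset_city.png")

Every run starts from different random noise, which is why the same prompt produces a different image each time.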

Important Disclaimer

  • Not all AI-generated images are created with malicious intent. Many are used for legitimate artistic, commercial, and educational purposes.

Why Does It Matter?

AI-generated images are becoming increasingly realistic—and with that realism comes influence. These visuals can shape opinions, stir emotions, and even alter how we understand events, people, and social movements. From synthetic political rallies to distorted beauty standards, visual AI has real-world impact.

While these tools open up space for creativity, resistance, and expression, they can also be used to mislead, manipulate, and cause harm—especially when images are taken out of context or designed to deceive.

Understanding the potential impacts—both positive and negative—helps us engage more critically and ethically. From misinformation and deepfakes to creative resistance and therapeutic applications, the effects of AI imagery are wide-ranging.

The more visually literate we become, the more prepared we are to spot manipulation and harness these tools for good.

This section explores the wide range of consequences, both positive and negative, to help you navigate this evolving landscape with greater awareness and confidence:

Where AI Images Are Making an Impact

Misinformation & Political Manipulation

AI-generated images can fabricate events, people, or crowds—altering how we perceive political truth.

These tools have been used to depict fictional protests, digitally insert politicians, or exaggerate public support.

This poses serious risks to trust in media and democracy.

Misinformation: Political & War Images 

Trauma & Conflict Imagery

Synthetic war and disaster images may be used to evoke strong emotions or simulate atrocities that never occurred. While some are intended to raise awareness, others may be used maliciously or without regard for ethical boundaries—potentially triggering distress or spreading misinformation.


Beauty Standards & Fake Bodies

AI tools are now used to create flawless, unrealistic human figures. These bodies—often shared on social media or promotional platforms—can distort our perception of normal appearance and reinforce narrow, exclusionary beauty ideals, especially for younger audiences.


Creative Resistance & Activism

Not all AI use is harmful. Artists and activists are using visual AI to challenge censorship, explore identity, and create work that speaks to marginalised experiences. When used ethically, these tools can amplify voices and foster cultural expression in new ways.

Visual AI for community cohesion and global activism

Your Experience

Interpret

AI-generated images are made using powerful systems trained to recognise and replicate visual patterns. But understanding how they’re created can help us interpret them more critically. This section breaks down the process behind AI imagery—and shows you what to look out for when trying to spot synthetic visuals.

By exploring the tools, techniques, and tell-tale signs of image generation, you’ll gain the skills to read AI content with greater awareness and confidence.

How to ‘Read’ AI Images

AI-generated visuals often look convincing at first glance—but many carry subtle clues that reveal they weren’t made by a human hand. Learning to ‘read’ these images means slowing down, looking closer, and asking the right questions.

This section helps you train your eye to detect inconsistencies in detail, lighting, perspective, and realism. Whether you’re spotting a fake news image or simply trying to understand if an artwork is AI-generated, these cues can help you interpret what’s real, what’s synthetic, and why that matters.

Key cues to include:

  • Inconsistent lighting or multiple shadows

  • Repeated or melted background elements

  • Unusual body parts (e.g., too many fingers or twisted limbs)

  • Implausible object relationships or logic gaps (e.g., floating hands)

  • Surfaces that are overly smooth or textureless

  • Misspelled or warped text in signage

🧩 “Ask Yourself: Does this image seem too perfect, too chaotic, or somehow ‘off’?”

How Are AI Images Made?

AI images don’t just appear—they’re created through a process that starts with a user prompt and ends with an entirely new image. This process combines creative input with machine learning to produce visuals based on patterns in data.

Here’s a simplified breakdown of the steps most platforms follow:

  1. Text/Image Prompt Input – The user describes what they want to see (e.g. “a tiger made of leaves”).

  2. Neural Network Processing – The AI interprets the prompt using training data from millions of images.

  3. Image/Video Generation – A new image is created by blending patterns and features it has learned.
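
To make those three steps concrete, here is a small illustrative sketch of the same flow using a hosted image service (OpenAI's Python library in this example). The model name and parameters are assumptions, and other platforms offer similar calls:

    # Illustrative prompt-to-image call against a hosted API.
    # Assumes the `openai` Python package and an API key in OPENAI_API_KEY.
    from openai import OpenAI

    client = OpenAI()

    # Step 1: the prompt describes what we want to see.
    result = client.images.generate(
        model="dall-e-3",                 # example model name
        prompt="a tiger made of leaves",
        size="1024x1024",
        n=1,
    )

    # Steps 2 and 3 happen inside the service: the model interprets the
    # prompt and returns a newly generated image (here, as a URL).
    print(result.data[0].url)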

Key Platforms for Making AI Images

  • 🎨 Midjourney: Web and Discord-based, artistic and video focus (instructions via prompts and visual references).
    Explore more here.

  • 🤖 ChatGPT 4o Images: Web and App-based (instructions via conversational text).
    Explore more here.

  • 🧠 Leonardo AI: Web and App-based (instructions via prompts and visual references).
    Explore more here.

The Process

  1. Text/Image Prompt Input

    User describes what they want to see, or shares an image showing what they want things to look like

  2. Neural Network Processing

    AI interprets the prompt using training data

  3. Image/Video Generation

    Creates new image/video from learned patterns

Quick Poll

Which platform have you heard of or used before?

Your answers help us understand which tools are most familiar and where more explanation might be helpful!

Create

Introduction

AI image tools give us the power to visualise things that don’t yet exist—alternate futures, powerful emotions, or complex ideas. But with that power comes responsibility. This section invites you to explore AI image generation ethically, using creativity to inspire, include, and inform—rather than mislead.

Whether you’re designing flyers, storytelling through visuals, or advocating for change, this is a space to think about how to create with care.

Ethical Image Creation with AI

Before generating AI visuals, consider the following:

  • What is the goal of the image?

    Is it to educate, persuade, provoke thought, or bring joy?

  • Is the image respectful?

    Avoid using AI to mimic real people without consent, or reproduce harmful stereotypes.

  • Could the image mislead someone?

    If yes, consider clearly labelling it as AI-generated or using design cues to prevent confusion.

  • Can you add context?

    When appropriate, pair images with explanations or captions to help others interpret them responsibly.

💡 Remember: ethical AI creativity means being mindful of the impact your image might have—on individuals, on trust, and on truth.

Examples of Ethical Communication and Activism

Despite the potential harms, AI images can also be a powerful tool for creative activism and resistance. AI-generated visuals can support public engagement—especially when time, budget, or visual materials are limited.

  • Visualising alternative futures and possibilities for social change
  • Creating powerful symbolic imagery for causes with limited resources
  • Developing attention-grabbing visual campaigns for activism
  • Demonstrating potential impacts of climate change or other crises

 

In this example from a vaccine confidence campaign, AI was used to:

  • Design inclusive workshop flyers showing diverse community settings

  • Create visual metaphors (e.g. “Vaccination as a Community Shield”) to explain complex ideas simply

  • Produce culturally relevant, audience-centred visuals that feel familiar and empowering

  • Engage participants in co-design by iterating on AI-generated drafts together

These images helped spark meaningful dialogue and were clearly labelled as AI-assisted. The goal wasn’t to trick the viewer, but to encourage understanding and belonging.

AI Art for Good example

Try It Yourself: Prompt with Purpose

Now it’s your turn to experiment with AI image generation. You don’t need advanced tools—just a good idea and a thoughtful prompt.

Activity Prompt:

Ask ChatGPT (or another image tool) to:

“Generate a visual for a health campaign that builds trust.”

You might want to include:

  • The tone or emotion you want to evoke (e.g. calm, caring, hopeful)

  • The audience you’re speaking to (e.g. parents, young adults, elders)

  • Visual themes (e.g. community, protection, science, family)

  • Colours or symbols that convey safety or care

Example prompt:

“Create an image of a warm community health setting where diverse families are learning about vaccines. The tone should feel welcoming, with natural light and posters about trust and science in the background.”

Detect

Can You Tell the Difference?

Now… have you been paying attention?

This is your chance to test what you’ve learned so far. Each image below has a story—some are real, and some are AI-generated.
Can you tell which is which?

Click an image to reveal the answer and the context.

  • Let’s see how sharp your visual literacy skills really are.

Which of the images below are real, and which are fake?

This is a real photograph

It was taken at the memorial service for Nelson Mandela in 2013.

At the time, there was controversy over whether it was disrespectful to smile and take a selfie at such a solemn state occasion, but the photo itself was fact-checked as real.

This is a fake AI-generated photograph

It was created with Midjourney, and shows a fake but realistic image of Barack Obama taking a selfie with the famous Indian leader Mahatma Gandhi, who died in 1948. President Obama was born in 1961, so the two could never have met.

What Did You Notice?

Whether you got them all right or were surprised by a few, the goal of this activity isn’t perfection—it’s perception.

AI-generated images are designed to look convincing. But if you look closely, you may start to spot:

  • Subtle inconsistencies in lighting, facial expressions, or proportions

  • Implausible scenes or people together in ways that don’t make sense historically

  • Unrealistic textures or “perfect” symmetry that feels uncanny

  • Emotional cues that seem exaggerated or oddly neutral

Let’s Recap

  • In the “Interpret” tab, you explored how AI images are made and how to read them.

  • In “Why?”, you learned why these images matter—and how they can be helpful or harmful.

  • And in “Create”, you discovered how to make images ethically and responsibly.

This Detect challenge brings it all together: seeing isn’t always believing—and the more visually literate you are, the more power you have to question, challenge, and choose what to trust.

Head to the Download tab to grab your Visual AI Literacy Checklist—a practical tool to help you stay sharp and share what you’ve learned with others.

What Can We Do?

So what are the tools and strategies for identifying AI-generated images and promoting visual literacy?

Detection Tools

Visual Inspection

Visual inspection involves looking closely at certain features that often signal AI manipulation or generation.

Common signs include distorted anatomy, inconsistent lighting or reflections, nonsensical text, and repetitive or unnatural patterns. By training your eye to spot these details, you can develop a more critical and informed approach to evaluating digital images.

Look for common AI artifacts:

  • Unusual hand proportions
  • Inconsistent lighting
  • Illogical text or writing
  • Repetitive patterns

Reverse Image Search

  • Reverse image search can help you trace where an image appears online, check for duplicates, or verify its original source.
  • Reverse image search tools analyse visual features—such as size, shape, colour, and patterns—using recognition algorithms. Some advanced tools can even detect faces or objects to improve search accuracy.


Using a reverse image search is simple:

  • You either upload an image file or paste an image URL into the search bar.
  • The tool then scans online databases to return matches, including pages where the image is used, similar visuals, or possible sources.

These tools are accessible via both desktop and mobile devices—and some are already built into platforms you might use every day.
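
If you would like to speed this up, here is a small illustrative sketch that opens a reverse image search in your web browser for a given image URL. The exact web addresses and query formats are assumptions and can change over time, so check each service's own instructions:

    # Open reverse image searches for an image URL in the default browser.
    # The endpoint formats below are assumptions and may change over time.
    import webbrowser
    from urllib.parse import quote

    def reverse_image_search(image_url: str) -> None:
        encoded = quote(image_url, safe="")
        # Assumed URL patterns for Google Lens and TinEye.
        webbrowser.open(f"https://lens.google.com/uploadbyurl?url={encoded}")
        webbrowser.open(f"https://tineye.com/search?url={encoded}")

    reverse_image_search("https://example.com/suspicious-image.jpg")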

Here are some tools you can try:

Context Clues

Investigate where and how the image is being used:

  • Check the date and source of the post or article
  • Look at the account or website sharing the image – is it credible?
  • Read the caption or surrounding text – does it match what’s shown?
  • Compare with known events or headlines from trusted sources
  • Be cautious with viral posts lacking clear attribution

Visual AI Literacy Checklist

Here is a summary of things to watch out for. You can click the Download button to save a copy for later use: