An image of Pope Francis, the leader of the Catholic Church, wearing a large, white puffer jacket has gone viral on social media in the past few days. The 86-year-old pontiff looks stylish, with many people commenting on his fashionable clothes. There is just one problem: the image isn’t real.
The pictures above were generated by the artificial intelligence Midjourney, which produces images based on text prompts, and were posted on Reddit on 24 March by an artist who goes by the name of u/trippy_art_special. The user’s account has since been suspended, but one image (the left-hand one) has spread across Twitter, where it has fooled many.
Should we be worried? Web culture expert Ryan Broderick has called the pope image “the first real mass-level AI misinformation case”. But the issue has actually been brewing for a few weeks, following an update to Midjourney that significantly improved the standard of output. Earlier in March, Midjourney-created images of former US president Donald Trump being arrested similarly went viral. Those images were generated from prompts provided by Eliot Higgins, the founder of Bellingcat, an investigative journalism group.
“I think this is an example of a wider problem of technologies being pushed into our societies without any oversight, regulation or standards,” says Elinor Carmi at City, University of London.
Fears of AI fakery aren’t new. For several years, we have faced the threat of deepfaked images of people’s faces, produced by earlier generations of AI trained on smaller volumes of data, but these have frequently carried telltale signs of fakery, such as non-blinking eyes or blurred ears. Midjourney still struggles with hands, often adding extra fingers, but when hands aren’t the focus of an image, as with the AI pope, people can be fooled.
There is also an issue of scale, says Agnes Venema at the University of Malta. The r/midjourney subreddit where the pope images were posted has examples of other, equally convincing AI-generated images produced by its 143,000 members. They include a series of photographs documenting a fictional earthquake that hit the US and Canada in 2001 that has inspired its own lore. The top-voted comment on the post reads: “People in 2025 are going to have a real difficult time with misinformation. People in 2100 won’t know which parts of history were real…”
“I think the fact that so many people can now access it – in a way, it is more democratic – means that, in a way, the floodgates have opened,” says Venema. “The more realistic it gets and the more people gain access, the more careful we should be and the more risk there is of someone acting on this type of deception.”
Ultimately, the rapid rise of AI means some disruption is inevitable. Carmi says we are being expected to hop on board the AI revolution without fully grasping its impact – meaning we need better media literacy about how easy it is to create and spread fake images. “Most of our society has been left behind, not understanding how these technologies work, for what purposes and what are the consequences of that,” she says.