Since last year, a group of artists have been using an AI image generator called Midjourney to create stills from movies that don’t exist. They call the trend “AI cinema.” We spoke to one of its practitioners, Julie Wieland, and asked her about her technique, which she calls “synthography,” short for synthetic photography.
The origins of “AI cinema” as a still image art form
Last year, image synthesis models such as DALL-E 2, Stable Diffusion, and Midjourney began allowing anyone with a text description (called a “prompt”) to generate a still image in many different styles. The technique has been controversial among some artists, but other artists have embraced the new tools and started working with them.
While anyone can create an AI-generated image with a prompt, it quickly became apparent that some people possessed a special knack for fine-tuning these new AI tools to produce better content. Just like painting or photography, the human creative spark is still needed to consistently produce remarkable results.
Not long after the marvel of single-image generation emerged, some artists started creating multiple AI-generated images with the same theme, rendered in a wide, movie-like aspect ratio. They strung them together to tell a story and posted them on Twitter with the hashtag #aicinema. Due to technological limitations, the images did not (yet) move, but the group of images gave the aesthetic impression that they all came from the same film.
The twist is that these movies don’t exist.
The first tweet we could find tagged #aicinema and featuring the now-familiar set of four movie-like images with a related theme came from Jon Finger on September 28, 2022. Wieland, a graphic designer by day who has been practicing AI cinema for several months, acknowledges Finger’s pioneering role in the art form, along with another artist. “I probably saw it first from John Meta and Jon Finger,” she says.
It’s worth noting that the AI cinema movement in its current still-frame form may be short-lived once text-to-video models like Runway’s Gen-2 become more capable and widespread. But for now, we’ll try to capture the zeitgeist of this brief moment in AI time.
Julie Wieland’s AI Story
To gain more insight into the #aicinema movement, we spoke to Wieland, who lives in Germany and has amassed a significant following on Twitter by posting eye-catching works of art generated by Midjourney. We’ve previously featured her work in an article on Midjourney v5, a recent upgrade to the model that added more realism.
AI art has been a fertile field for Wieland, who feels that Midjourney not only gives her a creative outlet, but also speeds up her professional workflow. This interview was conducted via direct messages on Twitter and her answers have been edited for clarity and length.
Ars: What inspired you to create AI-generated film stills?
Wieland: It started with playing around in DALL-E when I finally got access after being on the waiting list for a few weeks. Honestly, I’m not really into the “painted astronaut dog in space” aesthetic that was very popular in the summer of 2022, so I wanted to test what else is out there in the AI universe. I thought it would be really hard to capture photos and film stills, but I found ways to get good results, and I used them pretty quickly in my day job as a graphic designer for mood boards and pitches.
With Midjourney, I’ve reduced my time spent looking for inspiration on Pinterest and stock sites from two days’ work to maybe 2–4 hours, because I can generate exactly the feeling I need to convey, so clients know how it will “feel.” Onboarding illustrators, photographers, and videographers has never been easier since then.