A filmmaker used the artificial intelligence (AI) image generator Midjourney to create a surreal video edit of a music festival.
Though Midjourney isn’t known for generating video, Rufus Blackwell paired the generative AI tool with motion tracking techniques and other AI tools to create a unique film.
“While AI art and video are exploding right now, AI video generation is lagging in terms of the quality and consistency achieved in still imagery,” Blackwell tells PetaPixel.
“The intricate nature of generating moving images poses far greater challenges than still images. My goal was to fuse the distinctive image quality of Midjourney with the dynamic nature of video, leveraging my VFX toolset.”
Making a Video With Midjourney
Blackwell extracted key frames from the footage he shot of a boutique beach festival in Vietnam.
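The article doesn’t say how Blackwell selected his key frames, but the basic idea is to sample the clip at regular intervals. A minimal sketch in Python (the interval and frame rate here are illustrative assumptions, not details from Blackwell’s workflow):

```python
def key_frame_indices(total_frames, fps, interval_s=2.0):
    """Return the frame indices to extract, one every `interval_s` seconds."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

# A 10-second clip at 25 fps, sampled every 2 seconds:
print(key_frame_indices(250, 25))  # [0, 50, 100, 150, 200]
```

In practice a VFX artist would also grab frames at cuts or wherever the camera angle changes significantly, since each key frame anchors one AI-generated scene.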
“These frames were then fed into Midjourney, prompting it to generate an entirely new scene,” he explains.
“For instance, I might provide an aerial wide shot of the festival at night as a reference and use a prompt like the following: ‘Imagine a futuristic neon-lit alien cityscape, set in a dystopian future, where colossal skyscrapers dominate a post-apocalyptic wasteland.’”
Blackwell says the key strength of AI image generators like Midjourney is their ability to “match the color grading, lighting conditions, and overall energy of the original scene.”
Once he had the new AI image, Blackwell carefully painted it back into the scene, using motion tracking techniques to ensure accurate alignment with the original video.
“Motion tracking captures the movement and parallax of the video sequence, integrating the new scene seamlessly with the correct motion dynamics,” he says.
“The motion tracking can create some weird, warpy artifacts, but the results are generally interesting.”
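Production camera tracking happens in dedicated VFX software, but the core idea — following a patch of the image from one frame to the next — can be sketched in a few lines of NumPy. This toy tracker (an illustration, not Blackwell’s pipeline) finds a template’s translation between two frames by minimizing the sum of squared differences:

```python
import numpy as np

def track_patch(prev, curr, top, left, size=8, search=4):
    """Find where the size x size patch from `prev` at (top, left) moved
    to in `curr`, scanning a +/- `search` pixel window. Returns (dy, dx)."""
    template = prev[top:top + size, left:left + size]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            y, x = top + dy, left + dx
            cand = curr[y:y + size, x:x + size]
            err = np.sum((cand - template) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

# Synthetic check: a bright square shifted 2 px down and 3 px right
prev = np.zeros((32, 32)); prev[10:18, 10:18] = 1.0
curr = np.zeros((32, 32)); curr[12:20, 13:21] = 1.0
print(track_patch(prev, curr, 10, 10))  # (2, 3)
```

Real trackers solve this for dozens of features at once and fit a camera move to them; the “warpy artifacts” Blackwell mentions appear when tracks slip or the scene’s parallax doesn’t match the flat AI plate.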
However, it’s not quite as simple as that. People in the frame add another layer of complexity, as they disrupt the motion tracking.
To remedy this, Blackwell used another AI tool, Runway ML, to remove the people and create a clean tracking plate.
“Afterwards, the characters can be isolated using Runway ML to create an alpha channel for the character, allowing for their placement against the new background,” he explains.
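An alpha channel is just a per-pixel opacity mask, and placing the isolated character over the new background is the standard “over” composite. A sketch in NumPy (illustrative of the operation, not of Runway ML’s internals):

```python
import numpy as np

def composite_over(fg, bg, alpha):
    """Blend foreground over background: alpha=1 keeps fg, alpha=0 keeps bg."""
    a = alpha[..., None]  # broadcast the mask across RGB channels
    return a * fg + (1.0 - a) * bg

fg = np.ones((2, 2, 3)) * 0.8               # isolated character plate
bg = np.zeros((2, 2, 3))                    # new AI-generated background
alpha = np.array([[1.0, 0.5], [0.0, 1.0]])  # 0.5 = semi-transparent edge
out = composite_over(fg, bg, alpha)
print(out[0, 1])  # edge pixel: 0.5 * 0.8 + 0.5 * 0.0 = 0.4 per channel
```

The soft values along the character’s edge are what make the cutout sit naturally against the Midjourney plate instead of looking pasted on.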
“It’s astonishing how quickly AI workflows have become seamlessly integrated into my creative process.”
Blackwell also used Runway ML to artificially swap the clothes of the people in the video, resulting in some groovy outfit changes.
He also used the Vid to Vid tool in Runway ML to achieve a more “motion graphics feel”: he would export a five-second segment and apply an image prompt to it, which generates a version of the video in a similar style.
“The resulting video could then be easily layered on top using blending modes like Add/Screen,” Blackwell says.
“The trick with these is to use an image prompt that creates a highly stylized output, so the overlay adds to the original scene.”
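Add and Screen are simple per-pixel formulas, which is why a mostly dark, stylized layer works well: black pixels leave the original untouched while bright ones lift it. With values normalized to the 0–1 range:

```python
import numpy as np

def blend_add(base, layer):
    """Add mode: sum and clip. Black (0) in the layer changes nothing."""
    return np.clip(base + layer, 0.0, 1.0)

def blend_screen(base, layer):
    """Screen mode: inverted multiply. Brightens without hard clipping."""
    return 1.0 - (1.0 - base) * (1.0 - layer)

base = np.array([0.5, 0.5, 0.5])          # mid-gray original pixel
layer = np.array([0.0, 0.3, 0.9])          # stylized overlay pixel
print(blend_add(base, layer))     # 0.5, 0.8, 1.0 (last channel clips)
print(blend_screen(base, layer))  # 0.5, 0.65, 0.95 (gentler roll-off)
```

Both modes only ever brighten the base, so the stylized Vid to Vid pass reads as an effect layered onto the festival footage rather than a replacement of it.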
Blackwell would like to keep using AI tools to “elevate and enhance” his final video output. The VFX artist, drone pilot, and travel filmmaker is based in Saigon.
Image credits: Photos courtesy of Rufus Blackwell.