‘Image to Video’ generation is still very hit or miss. OpenAI’s Sora demos have given a skewed idea of what can be accomplished with public-facing platforms. For this stage of the process, we ran each image through both Runway Gen-2 and Leonardo.ai. Results were a coin flip between the two (as of April 30, 2024).
Runway theoretically allows for more camera and motion control, but quality depends heavily on which random ‘seed’ is assigned to “provide a stylistic starting point.” If you find one that delivers, save it: you can lock in your preferred seed for future video generations.
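To make the seed-reuse workflow concrete, here is a minimal sketch of how pinning a seed might look if you were driving a generation service programmatically rather than through the web UI. The endpoint, field names, and response shape below are hypothetical placeholders for illustration, not Runway’s actual API; the idea is simply to capture the seed from a run you liked and resend it on later runs.

```python
# Hypothetical sketch: pinning a seed across image-to-video generations.
# The endpoint, payload fields, and response shape are illustrative
# placeholders, not a real service's API.
import os

import requests

API_URL = "https://api.example-video-service.com/v1/image_to_video"  # placeholder
API_KEY = os.environ["VIDEO_API_KEY"]  # placeholder credential


def generate(image_url: str, seed: int | None = None) -> dict:
    """Request a video from a still image; pass the same seed to keep the style."""
    payload = {"image_url": image_url}
    if seed is not None:
        payload["seed"] = seed  # omit to let the service pick a random seed
    resp = requests.post(
        API_URL,
        json=payload,
        headers={"Authorization": f"Bearer {API_KEY}"},
    )
    resp.raise_for_status()
    return resp.json()  # assume the response echoes back the seed it used


# First pass: random seed; note which one the service reports back.
first = generate("https://example.com/still-frame.png")
good_seed = first["seed"]

# Later passes: lock in the seed that delivered a look you liked.
retake = generate("https://example.com/next-frame.png", seed=good_seed)
```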
Leonardo was better at introducing motion into a static frame: people moved their arms and legs more without transforming into entirely new objects or people.
*These platforms and their models update so frequently that this information may only stay relevant for the next 4–6 weeks.