NEW Open Source AI Video (Multi-Consistent Characters + 30 Second Videos + More)

  • There is a new open-source video model known as "Story Diffusion," shown in a sneak preview, which demonstrates remarkable character stability and adherence to real-world physics in short video creation.

  • Story Diffusion surpasses previous video models in maintaining character consistency, such as facial stability and consistent body types, allowing for the creation of believable characters and high-quality videos.

  • Story Diffusion improves upon issues seen in earlier models, where characters underwent unintended transformations or inconsistencies appeared (e.g., extra characters such as dogs appearing from nowhere, or a basketball passing through a metal rim).

  • Story Diffusion maintains consistency across images in terms of face and clothing, and uses a motion prediction model to anticipate movement between images, resulting in more natural animations.
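
  As a toy illustration of the "anticipate movement between images" idea (this is not Story Diffusion's actual predictor; the function name and the linear-interpolation simplification are assumptions), a motion predictor can be thought of as filling in intermediate states between two keyframe embeddings:

  ```python
  import numpy as np

  def interpolate_frames(start_embed, end_embed, n_steps):
      """Toy stand-in for a motion predictor: produce intermediate
      frame embeddings by linear interpolation between two keyframe
      embeddings. A learned predictor would replace this with a
      network that models plausible motion instead of a straight line."""
      ts = np.linspace(0.0, 1.0, n_steps)
      return np.stack([(1 - t) * start_embed + t * end_embed for t in ts])

  # Two hypothetical keyframe embeddings and 5 in-between states.
  a = np.zeros(4)
  b = np.ones(4)
  frames = interpolate_frames(a, b, 5)
  print(frames[2])  # midpoint embedding: [0.5 0.5 0.5 0.5]
  ```

  The real model predicts transitions in a learned semantic space, so motion looks plausible rather than purely linear; the sketch only shows where such a predictor sits in the pipeline.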

  • Examples highlight limitations and challenges still present, such as issues with hand animations and changes in character appearances across frames.

  • The new model outperforms previous ones by maintaining character consistency, even in side-by-side comparisons, and was trained on only eight GPUs, significantly fewer than the estimated number used for other models.

  • The model converts a sequence of images into video by paying attention to consistent characteristics across frames and using a motion prediction model to animate between the frames.

  • The resulting animations are effective for creating anime-style content and full movies, given the tool's potential to handle a variety of scene dynamics.

  • The model has applications for storytelling, enabling the creation of coherent images and videos that capture the essence of real life, enhancing the realism and fluidity of video content.

  • Story Diffusion is considered a significant advancement in video model technology due to its stability and coherence in scene creation, and its open-source nature allows for wide accessibility and experimentation.
