5 thoughts on “Stable Video Diffusion Online Demo Test”

  1. Context length isn't everything. Someone posted test results indicating that performance degrades a lot beyond the first 19K tokens in Claude 2.1, compared to beyond 64K in GPT-4 Turbo.

  2. If Stable Video Diffusion is anything like Stable Diffusion for images, this is going to be revolutionary! It might take some time: when the original image models came out, DALL-E and Craiyon were pretty useless, but in less than a year they went from garbage to absolutely stunning, and people have already made good extensions to make videos out of SD in ComfyUI. Really looking forward to it, though.

  3. Image complexity doesn't seem to matter as long as you have a shallow depth of field, i.e. a clear separation between the foreground person or object and the background, such as a distinct border.

  4. If I install any of the custom nodes needed for this workflow, my VRAM usage jumps from just under 8GB and about 6 minutes to render to over 10.7GB and 23 minutes to render (when it doesn't crash outright, since I'm on a 1080 Ti). When I uninstall those custom nodes (seanlynch's ComfyUI Optical Flow, Nuked's ComfyUI-N-Nodes, and YOUR-WORST-TACO's ComfyUI-TacoNodes), usage goes back down to under 8GB and rendering is three times faster. Anyone know why?

  5. Omg, the advancements in video/animation have been crazy lately 🤯. I think I still prefer having control over the animation with motion LoRAs, but hey, this setup is so easy. I can also imagine it being used in conjunction with other methods.
