LTX-2 THE NEW IMAGE TO VIDEO WITH SOUND AI KING WITH LOW VRAM!
Hey everyone! I've created a 1-click installer for the new IMAGE TO VIDEO WITH SOUND KING, called "LTX-2". It's an absolutely insane open-source model that can generate videos with sound from a simple text prompt, an image, or even a sound file, and it can run on LESS THAN 8GB OF VRAM! This is one of the most fun models I have ever tried!
You can check out the video right here: https://youtu.be/KKdEgpA3rjw
The workflow: https://www.patreon.com/posts/148469815
The installer of course automates the entire install process for ComfyUI and the model downloads!
LOCAL INSTALL!!! RECOMMENDED TO START FROM SCRATCH
Download the LTX-2-ULTRA-COMFYUI-MANAGER_AUTO_INSTALL.bat
Run the bat file
Wait for the install to finish
The webui will launch; load the LTX-2_ULTRA_WORKFLOW.json file and there you go!
DON'T FORGET TO GO UPDATE COMFYUI BY GOING INSIDE THE UPDATE FOLDER AND RUNNING THE COMFYUI UPDATE BAT FILE THEN CHOOSING THE CORRECT LAUNCHER FOR YOUR GPU!!
IF YOU ALREADY HAVE COMFYUI INSTALLED USING A PREVIOUS INSTALLER:
Download the LTX-2-MODELS-NODES_INSTALL.bat and place it in the ComfyUI_windows_portable\ComfyUI folder
Run the bat file
Wait for the install to finish
???
Profit?
IF YOU GET AN ERROR WHEN LAUNCHING COMFYUI AFTER USING THIS INSTALLER SAYING "AssertionError: Torch not compiled with CUDA enabled", it's OK, don't panic.
Just go inside the embedded Python folder, click on the folder path bar, type cmd, and press Enter; this will open a command prompt window. There, type:
python.exe -m pip uninstall torch
python.exe -s -m pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/cu128
--->replace cu128 with your CUDA version (if you don't know your CUDA version, just open a new cmd window and type nvcc --version; at the end it will tell you something like "Cuda compilation tools, release 12.8, V12.8.93". If it shows a different version, like 12.6, just use cu126 instead of cu128)
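The version-to-tag mapping above can be sketched in a few lines of shell. This is just an illustration: the cuda_line variable below is a hard-coded stand-in for the real nvcc output, and the script only prints the suggested install command rather than running it.

```shell
# Sketch: derive the PyTorch wheel tag (e.g. cu128) from nvcc's version line.
# cuda_line is a hard-coded stand-in; on a real machine you would use:
#   cuda_line=$(nvcc --version | grep release)
cuda_line="Cuda compilation tools, release 12.8, V12.8.93"
# Keep only the major/minor digits: "release 12.8" -> "128"
tag="cu$(echo "$cuda_line" | sed 's/.*release \([0-9]*\)\.\([0-9]*\).*/\1\2/')"
# Print (don't run) the matching install command
echo "python.exe -s -m pip install --upgrade torch torchvision torchaudio --index-url https://download.pytorch.org/whl/${tag}"
```

So a machine reporting "release 12.6" would yield cu126 and the matching index URL.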
IF YOU ARE USING RUNPOD:
Create an account if you haven't already: Runpod
Click on Pod (on the left side) then click deploy
Choose a GPU with at least 24 GB of VRAM (a 4090 is best), click here for the ComfyUI template (aitrepreneur/comfyui), then edit the template, choose 100 GB for both the container and volume disk, and deploy On-Demand
Go to My Pods, wait for everything to finish, then click "Connect", then "Connect to HTTP Service port 3000", and click on the Manager to update ComfyUI and restart it
Then go to port 8888, go inside the comfyui folder, drag and drop the LTX-2-AUTO_INSTALL-RUNPOD.sh file on the left side of the UI, then click on the "Terminal" icon on the right side of the UI
Copy and paste these two lines then press enter:
chmod +x LTX-2-AUTO_INSTALL-RUNPOD.sh
./LTX-2-AUTO_INSTALL-RUNPOD.sh
Wait for everything to be installed
Go back to port 3000, click on the Manager, and restart it
Load the LTX-2_ULTRA_WORKFLOW.json file and there you go!
Also use the command:
tail -f comfyui.log
inside the workspace/logs folder to get real-time logs, so you always know when the install is done. It will then ask you to restart: click that restart button, wait for the webui to reboot, click refresh once it's done, and there you go.
IF YOU GET AN OOM ERROR:
Click on the little 3-dots button, go to the environment variables, and enter:
EXTRA_ARGS --reserve-vram 10 --cache-none
then restart the pod
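For context, those EXTRA_ARGS are regular ComfyUI launch flags, so on a local install you can pass them straight to the launcher instead. The line below is a sketch, not the exact launcher command; the path and launch style depend on your own install.

```shell
# Same low-VRAM flags passed to a local ComfyUI launch (path is illustrative).
# --reserve-vram 10 reserves ~10 GB of VRAM for the OS/other software;
# --cache-none disables caching of node outputs (nodes re-run each time),
# trading speed for lower RAM/VRAM use.
python main.py --reserve-vram 10 --cache-none
```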
As always, supporting me on Patreon allows me to keep creating helpful resources like this for the community. Thank you for your support; now go have some fun!
