AnimateDiff Tutorial: Turn Videos into A.I. Animation | IPAdapter x ComfyUI
The first 500 people to use my link will get a 1 month free trial of Skillshare skl.sh/mdmz01241
Transform your videos into anything you can imagine.
ComfyUI: bit.ly/3LM1hbN
ComfyUI Manager: git clone github.com/ltdrdata/ComfyUI-M...
Guide: bit.ly/3ubx4gw
Important: use this base workflow if you're having issues with IPAdapter: bit.ly/3ITvSlQ
Models:
ProtoVision XL: bit.ly/3U8ps9l
DreamShaper XL: bit.ly/3Sa9h8W
CounterfeitXL: bit.ly/3OkptmL
SDXL VAE: bit.ly/4b6NXtv
IPAdapter Plus: bit.ly/3vLiI78
Image Encoder: bit.ly/42aaDoC
Controlnet model: bit.ly/42bmGC2
HotshotXL: bit.ly/3HxJRx0
How to find prompts: • How to Write a Prompt?
➕Positive Prompt: ((masterpiece, best quality)), Origami young man, folding sculpture, wearing green origami shirt, blue origami jeans, white origami shoes, depth of field, detailed, sharp, 8k resolution, very detailed, cinematic lighting, trending on artstation, hyperdetailed
➖Negative Prompt: (bad quality, Worst quality), NSFW, nude, text, watermark, low quality, medium quality, blurry, censored, wrinkles, deformed, mutated
⚙️Setting Files:
bit.ly/3vJNaOZ
🔗 Software & Plugins:
Topaz Video AI: bit.ly/3t04Otl
©️ Credits:
Stock videos from @PexelsPhotos
⏲ Chapters:
0:00 Intro
0:24 Install ComfyUI
1:31 Base Workflow
1:54 Install missing nodes
2:22 Models
4:23 Settings
10:36 Animation outputs
Support me on Patreon:
bit.ly/2MW56A1
🎵 Where I get my Music:
bit.ly/3boTeyv
🎤 My Microphone:
amzn.to/3kuHeki
🔈 Join my Discord server:
bit.ly/3qixniz
Join me!
Instagram: / justmdmz
TikTok: / justmdmz
Twitter: / justmdmz
Facebook: / medmehrez.bss
Website: medmehrez.com/
Who am I?
-----------------------------------------
My name is Mohamed Mehrez and I create videos around visual effects and filmmaking techniques. I currently focus on making tutorials in the areas of digital art, visual effects, and incorporating AI in creative projects.
A few tips:
- "The following node types were not found: IPAdapterApply" error: update everything from the Manager, and use this base workflow instead: bit.ly/3ITvSlQ (ignore the Noise setting shown in the video).
- If you're having issues loading the IPAdapter model, try creating a new "ipadapter" folder under ComfyUI\models\ and place the models there.
- I forgot to mention that for some users, it's necessary to install Git manually first, from: git-scm.com/
- To preview results before processing the whole video: set select_every_nth to 10 or higher.
- If the process is slow or stops halfway due to low VRAM, try rendering at a lower resolution and upscaling afterwards (480p to 1080p).
- For Mac users: there are instructions on how to install ComfyUI on Mac in the installation guide.
- I recommend a minimum of 12GB VRAM.
- Make sure your GPU driver is up to date, and that your GPU is not being taken over by another application.
- If you lack the minimum hardware, try run_cpu.bat instead, or you can run ComfyUI on the cloud: kzhead.info/sun/i7SLiMymqZ-wras/bejne.html
- If you don't see a solution to your error in this comment, please try googling the error text or share more details on Discord: bit.ly/3qixniz
Update 03/06: I encountered some errors, re-installed ComfyUI from scratch, re-followed the process, and it works fine. So if you keep running into node-related errors, please try reinstalling.
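For anyone who'd rather script the two setup-related tips above, here's a minimal sketch in Python, assuming a standard ComfyUI folder layout (the COMFY path is a placeholder for your actual install) and the commonly used ltdrdata/ComfyUI-Manager repository:

```python
# Minimal sketch of the setup tips above -- assumes a standard ComfyUI
# folder layout; adjust COMFY to wherever your install actually lives.
from pathlib import Path
import subprocess

COMFY = Path("ComfyUI")  # hypothetical install location

# Workaround: some users need a dedicated "ipadapter" models folder
(COMFY / "models" / "ipadapter").mkdir(parents=True, exist_ok=True)

# ComfyUI Manager installs like any custom node: git clone into custom_nodes
# (Git itself must be installed first: git-scm.com)
subprocess.run(
    ["git", "clone", "https://github.com/ltdrdata/ComfyUI-Manager.git"],
    cwd=COMFY / "custom_nodes",
    check=True,
)
```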
Thanks, it works for me with select_every_nth: 15 and 480→1080 upscaling, but it is taking too long. My PC config: 20GB RAM, Core i3, and Win 11. Let me know if there is any way to speed it up. I want to create a 20-second video; can I upload the image segment in "Keyframe IPAdapter - Load Image" to speed up the process?
It's taking too long to create videos, so I'm considering generating animated sequence images instead. I'll merge these sequence images using Premiere Pro and create the video myself.
@@bhabasankardagar5810 the process relies heavily on your GPU's VRAM
@@bhabasankardagar5810 that's a good workaround
@MDMZ Can confirm as of 12/03 that following your tutorial steps works perfectly. I was not a ComfyUI user (InvokeAI), but I needed a solution that works with video. I will try to combine it with the new 4-step SDXL Lightning or JuggernautXL Lightning models. Seems a PERFECT fit for good quality vs speed IF it works.
Wow, this is a great tutorial. It's taking its sweet time on my PC LOL, but nonetheless it is actually working! I've seen so many vid-to-vid ComfyUI videos where everyone is jumping from left to right, with no coherency and no explanation of what each model and node does. Thanks for being super clear about those things. You single-handedly just made this whole thing easy!
that's really great to hear. Thank you 🙏
Will it work on an RTX 3060?
@@Aryannnnnn217 well, mine is a 3060, but the 12GB version. Also, I kinda improved a few bits of the workflow and it's actually really good now
@@randy2d mine is also the 12GB version, but I just shifted to ComfyUI. In A1111 my 3060 couldn't do ControlNet and hires fix with SDXL models, so I'm wondering if this workflow will work on my system? Thanks for the reply
@@Aryannnnnn217 I never ran videos at a resolution higher than 960x512, because with the upscaler I use I can just set the size I want to upscale to and then send it to Video Combine to export
Thanks for the video! Most creators forget to show which models they use and where to put them in the ComfyUI folder. This step-by-step video helped a lot.
glad it was helpful
Your tutorials are among the best, and even beginners can become almost like pros by watching your videos 🙌🏻
Happy to hear that!
Incredible, man!! Thanks, I was waiting for this! I prefer ComfyUI to Deforum. You are the best!💪
Glad you like it!
Excellent explanation! Kudos bro... You deserve millions of subscribers!
Thank you so much 😀
4:20 after downloading everything. Thank you for the amazing tutorial!!
Glad it helped!
Thanks for the clarity.
Glad it was helpful!
Such a useful video, thanks heaps for putting this together.
Glad it was helpful!
One idea to improve the video background: try removing the background first, then apply a specific node only for background generation to avoid flickering. If you see flickering on hands, you can create a bounding box that stylizes only the hands and use any hand-detailer tools (a LoRA, or a node).
great tips!
you're the best sir
Best lesson ever. It's a pleasure to listen to you :)
glad you liked the video
It's interesting, but without even paying much attention I can tell it takes a level of involvement comparable to traditional methods. Until so-called generative AI offers simple prompting, nothing changes. You'll end up having to pay experts to use these systems. I see no benefit to anyone, apart perhaps from those owning servers, sifting through endless input codes, searching for some kind of pay-dirt. It's a hard ask. A.I. systems (a fad) will NOT replace traditional techniques.
@@handlenumber707 You didn't just write "you'll end up having to pay experts to use it" and "i see no benefit to anyone" in the same sentence 😅
@@MDMZ Wasn't the whole idea to avoid paying people to do things?
You are amazing ya akhi 😎 as always awesome and creative videos 👍🏻❤️
Bro, it wouldn't work for me. Could you explain it to me on FB? Because I did it the same way and it wouldn't run from the start.
🙏
brilliant
The future of art... downloading the newest hard-to-find files. Thanks for the tut, it was helpful; I don't know how I would have figured out all those steps otherwise.
Think of it as modern day gold mining, nothing more than just another scheme to get people to hand over their ideas for free.
Buckle up, creators! This tutorial featuring ComfyUI IPAdapter + HotshotXL is your ticket to a whole new dimension of video wizardry. Transform your content with the power of A.I., and let the magic unfold! 🌟🤖
Let me get this right. You take a video of someone moving around. Then you upload the video, paying money to use this service. Then you type in some prompts, and you get an animated character back? You can do this for free on your own computer, without the middleman, and without sharing your ideas. It's called motion capture.
very nice and love using AI for animated features. Great sharing
Thank you! Cheers!
thanks MDMZ for your effort :)
Good information thanks 😊
Thank you very, very much!❤️👍👍
You're welcome 😊
Thank you. 감사합니다 (thank you).
Seeing this awesome stuff will eventually wear down my distaste for node-based systems; still clinging to A1111 for now. As an aside, for the same reason I cannot abandon Carrara 3D for Blender 😛 it even uses its lovely shader tree in Octane Render, keeping those awful nodes hidden unless I choose to torture myself with bricks and spaghetti
I was scared of it at first too, just like any other tool, ComfyUI gets easier the more you use it
Thanks so much for the tutorial! Just wondering, how do I keep the character and background consistent and a bit more stable? I've kept similar settings multiple times, but the person and background in the output still change a lot.
you can try playing with the main settings I mentioned. There's no exact formula, so try different combinations; I've tried to explain what each of those settings does in the video
Thanks a lot
excellent🥥
Hey, really nice video, I have watched it like 10 times in the last month. I have a question: is there a way to animate only the character while keeping the background static? Would be really awesome
you can rotoscope the subject out of your video first and run it separately from the background; there may be a way to do it directly in ComfyUI, but I haven't looked into it
Thank you for the video. Is there an SD1.5 alternative to HotshotXL?
yes, mm-Stabilized_high is a good one
Hi! Thanks for the tutorial. One question: which ControlNet model are you using? Depth, OpenPose, etc... thanks.
hi, it's depth, as mentioned in the video 😉
Amazing video. I love it. 😍 Bro, can you please make a video where I can change the character in a video to my own character and transform it? I've actually been looking for something like this for a long time.
great idea, I will look into it
Thanks a lot! You are so kind 🤗. I am very happy that you read my comment and replied. I will stay tuned.@@MDMZ
great tutorial! well done! In the Video Combine node, I don't have any video formats like in your video, only 2 image formats; do you know what that's about?
make sure you update ComfyUI and all the nodes
Nice 🤩😍
Thank u bro
Thank you very much; you explained it really nicely. I'm currently at 50% and curious to see what comes out of it :) PS: Can the same thing be done with plants? For example, modeling how a plant grows and continues to change over time?
have fun! I'm really not sure about your question, sounds like you're talking about generating video from scratch?
This guide is so cool. What do I need to change to get better results for celebs?
a model trained on celebrity pics would probably help, but sometimes using the person's name in the prompt works fine
I have tried that, but I get many artifacts on the face and clothes@@MDMZ
Thank you very much 👏🏼 One question. Is it possible to make a 15-minute video like this? Or is it only suitable for short videos of a few seconds? Thank you in advance
I haven't tried a video that long. I haven't encountered restrictions on video duration, but a 15-min video will surely take very long to process. If it works, why not give it a shot?
15 minutes or 15 sec? 15 minutes will destroy your PC bro😂😂😂
Great tutorial! Thank You! Could you please help me with one thing? My "Video Combine VHS" nodes are missing video formats; only "image/gif" and "image/webp" are available. What did I miss?
are you using the same workflow from this video? in any case, you can export to webp and convert later
the node "animatediff combine" change to "video combine" . there are two nodes look like the same one . but different. try it again.
Man you are awesome, thanks for your time and effort❤❤❤ do you know is it possible to use multiple controlnets in this pipeline? Depth+edge detection? I tried to use multi controlnet node but I got error with ip adapter then😢
theoretically, it should be possible, I haven't tried it myself.
I tried it with multiple ControlNets, but it only works with about 20 frames; when I try to render more frames of video, there is an error
Thank you for your clean and helpful video. I tried to run this on my local machine, but unfortunately I do not have enough VRAM. Do you have any recommendations for a cloud service?
ThinkDiffusion is one of them, but I can't guarantee that all nodes are available on online services
Great tutorial. Can we do sizes like 1920x1080, and how long would that take for, say, 5-10 seconds? Is there any way to have it create an image sequence instead of an mp4, in case it fails partway through?
you can definitely go with other resolutions, time is almost impossible to predict, give it a shot
hi, thanks for the tutorial, it was a great help for a beginner like me. How can I add my own custom SDXL LoRA to the prompts here? Like, where do I connect them? Thanks in advance
This would be a separate tutorial on its own, did you try finding other videos on youtube?
Thx for the video. One question: is it possible to have different animations but with the same character that I'm designing? If I film my little reference videos to animate my character, can I have this character in those filmed sequences for a short film?
the best way to get the same character is to train a model on a set of images of that character
@@MDMZ OK. Do you have, or know of, a good video tutorial for that? Many thanks
First of all, thank you for your efforts on this great information and video. I am a Mac user, on an M2. I get "zsh: killed" and "TypeError: Failed to fetch". Does this mean that the RAM is not enough? What are the minimum computer specs I should have? I would really appreciate your help. Sincerely,
Hi, I can't tell for sure, but I know that it's challenging to make this work on Mac. Have you followed the Mac instructions in the official installation guide? Also, what's the full error text?
great video, I just want to know: how could I train my own model on my own art set (2:30) and use that as the style reference?
yeah you can do that, I don't have a video on training your own model, but there are several tutorials on youtube
Well explained, congratulations. Just one question, I get: "When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph."
check the pinned comment, just shared a fix
Can you share how much time it took and which GPU you used? I used an A6000 for several hrs and suddenly saw "Killed" (maybe my computer went to sleep). I should have chosen 16 versions. Is there any way to save progress at each step? Thank you so much!
what do u mean by 16 versions? Anyway, it's normal for this process to be a little slow; if it stops running, check the cmd window for errors
Thanks for sharing. I have a question I would like to ask. What are the minimum requirements for a graphics card?
I recommend at least 12GB of VRAM
Is it an alternative to WarpFusion? I was going to buy that one; should I use this or WarpFusion?
I find this much more consistent, warpfusion is getting better too
Great!! Thank you! To integrate a self-made LoRA file: is it best to put it between the checkpoint and the positive text prompt, or what would be your suggestion? Thx in advance!
hi, tbh I'm not entirely sure, I will need to look into it
Great video! I got it to work on an M2 Ultra 192GB. Currently trying to integrate AnimateLCM and an LCM LoRA into this workflow for faster generation and enhanced quality. Let me know if you have any advice and I'll stay tuned for more videos :)
that's awesome, I'm really curious, how's the speed on the M2?
Thanks for your great tutorial. Is there any limit on frame rendering? I used your workflow on a 32-second video file at about 30 fps (~1000 PNGs), and I got this error after 1 hour of render time on my 3090 Ti: numpy.core._exceptions._ArrayMemoryError: Unable to allocate 6.43 GiB for an array with shape (976, 1024, 576, 3) and data type float32
probably running out of VRAM, make sure your GPU is not being overused by other apps during the process
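Side note: numpy's _ArrayMemoryError is raised when a system-RAM allocation fails, and the shape in the message (presumably frames × height × width × channels) accounts for the 6.43 GiB exactly. A quick back-of-envelope in Python:

```python
# Why the error asks for 6.43 GiB: the array shape from the message is
# (976, 1024, 576, 3) in float32, i.e. 4 bytes per element.
elements = 976 * 1024 * 576 * 3
gib = elements * 4 / 2**30
print(f"{gib:.2f} GiB")  # ~6.43 GiB for one array holding every frame at once
# Lower resolution or fewer frames (select_every_nth) shrinks this quickly.
```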
Thanks for the video! But why do I have a lot of paper cranes in my background? The original video is clean white. How do you make sure the background is rarely affected?
that shouldn't happen, try reinstalling ComfyUI, could be a software issue
Great tutorial man, but please next time tell us we need to install git from the official page in the first place xD
oh, I didn't realize it was necessary to do that manually. May I know at which step you realized it, and how you found out that you needed to install it? I will pin the solution for everyone else who runs into the same issue, thanks a lot!
@@MDMZ Hey there, I needed to install Git when I first ran cmd and pasted the link
I thought I messed up on the first step, thanks bro
Thank you. No workflow JSON link?
It's in the guide, the second file. He shows it in the video. Maybe watch again ;)
Do you need a video card for this, or can it run on Google Colab? Thank you
for this method, you need a video card; if you don't have a decent one, you can run it on the cloud (for a fee): kzhead.info/sun/i7SLiMymqZ-wras/bejne.html
First comment ❤❤❤
it took about 1-2 hrs on a 4090, but it's beautiful :)
how long was your clip? 10 seconds?
@@matthewgiardino9252 16 sec, 30fps, 1280x1024; I didn't wait for the upscaled version
@@matthewgiardino9252 one more question: does this only work with SDXL? If I want to try SD1.5 models, do I still need to download another VAE, Hotshot, encoder, etc.?
Great question, I believe this is meant to work with SDXL, I might need to experiment with 1.5
In order to apply IPAdapter, can you provide the origami reference image?
When using OpenPose and HED, does that lock you into not changing the style or look of a character? Your way seems more creative-friendly in design.
hmm I'm not sure, I haven't tested that
@@MDMZ *Edit: solved it. I did a git pull on the IPAdapter node for an update, made an "ipadapter" folder under comfyui/models, and it worked. Awesome tutorial. Going back through and following along, but for some reason I have the IPAdapter in the same spot, yet the node is undefined. What would be the workaround for that node to load ip-adapter-plus_sdxl_vit-h.safetensors?
@@Nibot2023 Nice! thanks for sharing how you solved it
It would be great if you mentioned in the video title or the video itself (or better, both) that this workflow is for SDXL.
the guide page has two workflows, for 1.5 and SDXL
I get this error and I don't know how to solve it : 'T2IAdapter' object has no attribute 'compression_ratio'
Go to manager and press Update ComfyUI. Fixed it for me. After that, I got "ModelPatcherAndInjector.patch_model() got an unexpected keyword argument 'patch_weights'", which I fixed by once again going to the manager and pressing "Update all". Now it works and a few warnings I was getting also disappeared 😁
I haven't encountered this one yet
Thanks for the tutorial bro! Yet I can't run ComfyUI from the batch file. It shows an error about an old NVIDIA version and the CUDA driver not being compatible with PyTorch, or something like that. Can you give me a tip to solve this?
make sure your NVIDIA GPU driver is up to date, you might find more help here: discuss.pytorch.org/t/cuda-versioning-and-pytorch-compatibility/189777/9
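To pin down a PyTorch/CUDA mismatch like that, these standard torch checks usually tell you enough; run them with the Python environment ComfyUI itself uses (e.g. the embedded one in the portable build):

```python
# Quick sanity check for PyTorch/CUDA mismatches.
import torch

print(torch.__version__)          # PyTorch build, e.g. "2.1.2+cu121"
print(torch.version.cuda)         # CUDA version this PyTorch was built against
print(torch.cuda.is_available())  # False usually means an outdated/mismatched driver
if torch.cuda.is_available():
    print(torch.cuda.get_device_name(0))  # confirm which GPU PyTorch sees
```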
Awesome video 🤩
I am on Mac and this seems to be PC/Windows only... but interesting to know about its existence. How would you rate this tool for video stylisation/transformation compared to RunwayML video-to-video?
this, in my opinion, is much better than RunwayML; there are ways to run it on Mac, but it's a huge difference when using an NVIDIA GPU
Is there any way to do it with image to video pose instead of prompt to video pose?
Hi, thanks for your awesome video, would you please tell us how to add a LoRA?
I might need to make a separate video on that
I really appreciate your kind help. I just tried to connect a LoRA from another tutorial, and it works well, but the final result has too much flicker, and I don't know which part to adjust to minimize it. Also, text2video with multiple prompts would be a great one, because the other videos on YouTube don't work as well as yours. I just wanna tell you that you really give the best results over the rest. @@MDMZ
@@EtherealEchoesUS can you please share the LoRA tutorial? I'm curious. I already have a tutorial on text2video with multi-prompts 😉
I used this just to connect Lora to your video2video workflow: kzhead.info/sun/lNt7iZqIjIhtbJE/bejne.html - and this is the workflow from you: kzhead.info/sun/dNmgdq98fquMqK8/bejne.html@@MDMZ
how can I fix this? (When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph.)
there's a solution in the pinned comment
@@MDMZ sir, I want to use my GPU for rendering in ComfyUI; GPU usage is only getting to 5-14%, and RAM is at 80%
Thanks for the step-by-step tutorial. I am almost there... Can't figure out the below error tho: Error occurred when executing MiDaS-NormalMapPreprocessor: name 'midas' is not defined
can you share your workflow on discord?
Does this work with 3D-rendered cutscenes too? I want Shadow of Rome to look better
I don't see why not, you can try
KSampler is not running for me; the queue gets stopped before that. Also, I can't see any preview video. How do I fix this???
could be a memory issue, check the pinned comment
I wonder why, or where, is the ControlNet pose? How does this get the tracking pose? Thanks to anybody who answers.
Can we use it on a VPS with a good GPU? Or Google Colab Pro?
yes, check out ThinkDiffusion
What's the most effective way to change the background/scenes?
I also have the same question!
Thanks for the video! Tell me, is my 4070 video card with 12 gigabytes of memory suitable for this configuration? Because with your configuration, the video memory is fully loaded and processing of a 100-frame video stops at 5% progress.
12GB should be alright; try rendering at a lower resolution, or with fewer steps
Great guide. My PC is fairly decent (3060 GPU) and it takes forever to make a video; anything to speed it up? Ty
update your GPU driver, make sure your GPU is not being overtaken by other software
@@MDMZ If we can do it with a 3060, why not a Colab T4!! What about a 3050 Ti with 4GB VRAM?
Nice one. How long did it take you to render? Why is it so slow on my setup with low VRAM, although I have a good GPU (2080 Super)?
if I am not mistaken, the 2080 Super has 8GB of VRAM? That's considered a little low for this; you need at least 12, and it won't be blazing fast even with 24GB
Hi @MDMZ... Great video... I am running into an error tho that states: "RuntimeError: mixed dtype (CPU): expect parameter to have scalar type of Float". Do you know any solution? It would be really helpful. This happens at KSampler. Thanks
I don't think I came across this one yet, I'm adding solved errors to the pinned comment
I remember you mentioned a cloud site I could use to run Comfy since my PC takes forever. What was that site? I couldn't find it.
ThinkDiffusion
Sir, help me, how do I fix this: Error(s) in loading state_dict for ImageProjModel: size mismatch for proj.weight: copying a param with shape torch.Size([8192, 1280]) from checkpoint, the shape in current model is torch.Size([8192, 1024])
Please reply
Hi, please check the pinned comment
I had a similar issue, it was because I didn't select the image_encoder in the Load CLIP Vision node
I would like to ask what the computer configuration requirements are
there's no official minimum requirements from the developers, but I recommend a minimum of 12GB of GPU VRAM
Hi, please, I need your help. I just updated ComfyUI, did "Update all", and I lost Apply IPAdapter in the video reference section, and also the IPAdapter from the keyframe adapter section.
I just found your comment about the update, thanks a lot. Thank you, my friend!
u r welcome, glad it worked
please help with the errors mentioned below: 1070 8GB GPU, and RAM is also 8GB
I have a 3060 12GB; 5 times I got an error that there is not enough memory.
8GB VRAM might be a bit too low for this
Hey @MDMZ, I have installed ComfyUI on my Mac M2 using another tutorial. Then, when I came to go through your tutorial, it seems I don't have access to the Manager tab or the Share tab in that window. I don't suppose you know why that is? Anyway, thanks for sharing either way.
Is the whole tab not showing, or just the Manager button? Try updating ComfyUI; there's another solution for the Manager button disappearing in the pinned comment
@@MDMZ Just the manager button.
I've figured it out!
Hi my friend. I was trying to keep the face similar to the original video, but I haven't managed it yet. Maybe try another IPAdapter?
look up IP-Adapter-FaceID
Maybe a dumb question, but is this at all possible on Mac machines? I have an M3 Pro with 36G of shared RAM and would love to try this out
the installation process is different, technically, it works, but the power of an NVIDIA GPU is unmatched when it comes to AI processing
Enhance ur video quality to 1080p
When loading the graph, the following node types were not found: IPAdapterApply. Nodes that have failed to load will show as red on the graph.
check the pinned comment, just shared a fix
Guess I'm learning ComfyUI now haha... how about Stable Diffusion?
ComfyUI is the future
Yo bro... I'm a dev, please make a video on editing scenes, like how to edit cuts, sound design, colour grading, etc...
bro, I was getting a MiDaS depth map preprocessor error, please help me solve that problem
hi, check the pinned comment
Thanks, this is an awesome tutorial. 😎 But I have a question, how did you make the glass man? I've already tried a lot of options, but I can't get such a polygonal glass person 🥲
the prompts are available for patreon subscribers, for the glass one, you can use "crystal" keyword
Where can I add a LoRA block? Thanks in advance.
I tested your workflow and noticed that it works only with the ProtoVision checkpoint. Can you explain what unique specifics it has? And what other checkpoints work with that workflow?
make sure you use an SDXL checkpoint; I tested it with at least 5 models other than ProtoVision, it shouldn't be an issue
I have only one problem with this workflow: following all your settings, the time from when I hit "Queue Prompt" until I see my first frame is too long. What settings can I tweak to make it faster, without affecting it too much? When I find the look I like, I'll go back to the high settings.
you can try setting select_every_nth to 10 or something higher; this way you'll process fewer frames and get to see what it looks like in a much shorter time
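To put rough numbers on that tip, a tiny sketch (the clip length and frame rate here are hypothetical):

```python
# Rough preview-cost estimate for the select_every_nth tip.
fps, seconds = 30, 32             # hypothetical clip: 32 s at 30 fps
select_every_nth = 10
total = fps * seconds             # 960 frames in the full run
preview = total // select_every_nth
print(total, preview)             # 960 vs 96 -> roughly a 10x faster preview
```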
Hi! Thank you for the incredibly useful tutorial! After following your tutorial, I get a lot of confetti all over the image =) Is there any way to fix it? I think everything is good, and I changed just the prompt: "((masterpiece, best quality)), a mid-30s man with short blond hair, dressed in a casual long-sleeve grey sweater, stack of old colorful cars, beautiful clouds and canyon on the background, in the style of modernist photography, depth of field, detailed, sharp, 8k resolution, very detailed, cinematic lighting, trending on artstation, hyperdetailed"
weird, I can't tell for sure why that's happening, can u share your workflow on discord?
I get this error when it gets to the sampler: Error occurred when executing KSamplerAdvanced: Expected query, key, and value to have the same dtype, but got query.dtype: struct c10::Half, key.dtype: float, and value.dtype: float instead. Any idea what I should do?
hi, you can check the pinned comment
I got this error: "could not be loaded with cv", pointing to the image_encoder. After downloading the encoder recommended on the IPAdapter Plus page, I got it working. The link in the description points to a G model, while IPAdapter Plus uses an H model. Not sure if this is important, but it seemed to be in my case.
thanks for sharing, I will look into it, I'm using the same exact files in the tutorial and it works fine for me
Can you also do a tutorial using the ThinkDiffusion version of AnimateDiff?
it's really the same process, except that you probably won't need to install anything, just start a machine on TD and import the workflow
KSampler stays at 33%. Although I waited for 4.5 hours, it still did not work at 30 steps. I tried 25, and it's the same again; the last time I was able to run it at 9 steps, it also stayed at 33%. Is there a solution? System: Ryzen 5 3600 / GTX 1070 Ti 8GB / 16GB 3200MHz RAM / 500GB SSD
8GB might be a little low for it, but it could also be happening for another reason, did you try setting a lower resolution? maybe 480p
Does it need a dedicated GPU to run ComfyUI IPAdapter + HotshotXL, or can it be used without one?
It sounds painful.
is there a way to use IPAdapter to put a different face on the video?
look up IPAdapter FaceID
Gentlemen, may I know the maximum duration of video we can run through it?
Honestly, I don't have the answer, how long is the video you're trying to process ?