Extract the downloaded file with 7-Zip and run ComfyUI. If you are on Colab and the localtunnel route doesn't work, run ComfyUI with the Colab iframe instead; you should see the UI appear in an iframe. ComfyUI is a strong and easy-to-use graphical interface for Stable Diffusion — unlike the usual Stable Diffusion WebUI, it is node-based, letting you control the model, VAE, and CLIP as separate nodes, and it gives you full freedom and control over the pipeline. The aim of this page is to get you up and running with ComfyUI, running your first generation, and providing some suggestions for the next steps to explore. Good follow-ons include Sytan's SDXL workflow, a very nice example of how to connect the base model with the refiner and include an upscaler; AnimateDiff for ComfyUI, with prompt scheduling and a sliding-window feature that enables you to generate GIFs without a frame-length limit (configured through the AnimateDiff Loader node); and combining T2I-Adapter with ControlNet, which can even be used to adjust the angle of a face in SD 1.5.

Tencent has released T2I-Adapter, a set of composable adapters for text-to-image generation. T2I-Adapters are used the same way as ControlNets in ComfyUI: load them with the ControlNetLoader node. IP-Adapter has been supported in the WebUI and in ComfyUI (via ComfyUI_IPAdapter_plus) since September 2023, and with the SDXL Prompt Styler, generating images with different styles becomes much simpler. As a concrete example, the depth T2I-Adapter can be driven with the prompt "a dog on grass, photo, high quality" and the negative prompt "drawing, anime, low quality, distortion" against a depth map of the input image (the depth map itself can be created in Auto1111 as well).
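To make the "same as ControlNets" point concrete, here is a minimal sketch of queueing such a workflow through ComfyUI's HTTP API. The node class names (ControlNetLoader, ControlNetApply, KSampler, and so on) are ComfyUI's own, but the checkpoint and adapter filenames are placeholders — substitute whatever sits in your models folders.

```python
import json
import urllib.request

# Workflow in ComfyUI's API format: {node_id: {"class_type": ..., "inputs": ...}}.
# Links between nodes are written as ["source_node_id", output_index].
workflow = {
    "1": {"class_type": "CheckpointLoaderSimple",
          "inputs": {"ckpt_name": "v1-5-pruned-emaonly.safetensors"}},  # placeholder
    "2": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "a dog on grass, photo, high quality", "clip": ["1", 1]}},
    "3": {"class_type": "CLIPTextEncode",
          "inputs": {"text": "drawing, anime, low quality, distortion", "clip": ["1", 1]}},
    "4": {"class_type": "ControlNetLoader",  # T2I-Adapters load through this node too
          "inputs": {"control_net_name": "t2iadapter_depth_sd15v2.pth"}},  # placeholder
    "5": {"class_type": "LoadImage", "inputs": {"image": "depth_map.png"}},
    "6": {"class_type": "ControlNetApply",
          "inputs": {"conditioning": ["2", 0], "control_net": ["4", 0],
                     "image": ["5", 0], "strength": 0.8}},
    "7": {"class_type": "EmptyLatentImage",
          "inputs": {"width": 512, "height": 512, "batch_size": 1}},
    "8": {"class_type": "KSampler",
          "inputs": {"model": ["1", 0], "seed": 42, "steps": 20, "cfg": 7.0,
                     "sampler_name": "euler", "scheduler": "normal",
                     "positive": ["6", 0], "negative": ["3", 0],
                     "latent_image": ["7", 0], "denoise": 1.0}},
    "9": {"class_type": "VAEDecode", "inputs": {"samples": ["8", 0], "vae": ["1", 2]}},
    "10": {"class_type": "SaveImage",
           "inputs": {"images": ["9", 0], "filename_prefix": "t2i_adapter"}},
}

req = urllib.request.Request(
    "http://127.0.0.1:8188/prompt",
    data=json.dumps({"prompt": workflow}).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
print(urllib.request.urlopen(req).read().decode())
```

Swapping the adapter for a real ControlNet means changing nothing but the filename in node 4 — which is exactly why the two are interchangeable in practice.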
Back to setup: the extracted folder will be called ComfyUI_windows_portable. Install the ComfyUI dependencies (if you have another Stable Diffusion UI you might be able to reuse them), place your Stable Diffusion checkpoints/models in the ComfyUI/models/checkpoints directory, and launch ComfyUI by running python main.py --force-fp16. SDXL 1.0 has been published on Hugging Face; the base and refiner checkpoints each weigh several gigabytes (the full safetensors download runs around 13 GB), so make sure you have the space. When you first open ComfyUI it may seem simple and empty, but once you load a project you may be overwhelmed by the node system — read existing workflows and try to understand what is going on.

In the ComfyUI SDXL workflow example, the refiner is an integral part of the generation process. T2I-Adapter is a condition-control solution that allows for precise control, supporting multiple input guidance models, and the initial code to make T2I-Adapters work with SDXL — along with training code — has landed in Diffusers. The depth T2I-Adapter and the depth ControlNet are wired up identically in ComfyUI. On the animation side, the new AnimateDiff for ComfyUI supports unlimited context length and outputs GIF/MP4 (there is also a video tutorial on getting AnimateDiff CLI prompt travel up and running), and it is recommended to update comfyui-fizznodes to the latest version for prompt scheduling; a separate video gives an in-depth guide to setting up ControlNet 1.1. A few other useful pieces: the Apply Style Model node can be used to provide further visual guidance to a diffusion model, specifically pertaining to the style of the generated images; the detailer sampler is split into two nodes, DetailedKSampler with denoise and DetailedKSamplerAdvanced with start_at_step; and the WAS Node Suite adds many new nodes for image processing, text processing, and more.
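Here is a hedged sketch of that Diffusers path, using the SDXL canny adapter. The model ids are the public releases at the time of writing and the pipeline class requires a recent diffusers version, so verify both against your install.

```python
import torch
from diffusers import StableDiffusionXLAdapterPipeline, T2IAdapter
from diffusers.utils import load_image

# Load the SDXL canny T2I-Adapter and attach it to the SDXL base pipeline.
adapter = T2IAdapter.from_pretrained(
    "TencentARC/t2i-adapter-canny-sdxl-1.0", torch_dtype=torch.float16)
pipe = StableDiffusionXLAdapterPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    adapter=adapter, torch_dtype=torch.float16).to("cuda")

canny = load_image("canny_edges.png")  # placeholder: a precomputed edge map
image = pipe(
    prompt="a dog on grass, photo, high quality",
    negative_prompt="drawing, anime, low quality, distortion",
    image=canny,
    adapter_conditioning_scale=0.8,  # how strongly the edges steer the layout
).images[0]
image.save("t2i_adapter_sdxl.png")
```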
These files are custom workflows for ComfyUI. ComfyUI is a super powerful, node-based, modular interface for Stable Diffusion: a node system is a way of designing and executing complex stable diffusion pipelines using a visual flowchart. It was created in January 2023 by comfyanonymous, who built the tool to learn how Stable Diffusion works. On Windows, go to the root directory and double-click run_nvidia_gpu.bat. For custom node packs, the installer script will automatically find out which Python build should be used and use it to run install.py; otherwise it will default to the system Python and assume you followed ComfyUI's manual installation steps. You can also load workflows from PNG files the same way: just drag and drop them onto the ComfyUI surface. One common stumbling block is that generation fails simply because the ControlNet models have not been downloaded yet.

For Automatic1111's web UI, the ControlNet extension comes with a preprocessor dropdown (install instructions are in the extension repo). Many users have a habit of ticking "pixel perfect" right after selecting a model, though users are now starting to doubt that this is really optimal. With the arrival of Automatic1111 1.6 — and with IP-Adapters, SDXL ControlNets, and T2I-Adapters now available for it — there are plenty of new opportunities for using ControlNets and sister models on that side as well. A T2I-Adapter is similar to a ControlNet, but it is a lot smaller (~77M parameters and ~300MB file size) because it only inserts weights into the UNet instead of copying and training it; meanwhile a ControlNet works with any model of its specified SD version, so you're not locked into a single base model. The diffusers team has collaborated to bring T2I-Adapter support to Stable Diffusion XL, achieving impressive results in both performance and efficiency.
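The size difference is easier to see in code. Below is a conceptual sketch — not TencentARC's actual architecture — of the adapter idea: a small convolutional tower reads the condition map and produces one feature tensor per UNet resolution level, and those tensors are simply added to the UNet encoder's activations, so the UNet itself is never copied or retrained.

```python
import torch
import torch.nn as nn

class TinyAdapter(nn.Module):
    """Illustrative stand-in for a T2I-Adapter: a few strided conv blocks
    producing multi-scale residuals for the UNet encoder. Channel widths
    mirror SD 1.5's UNet but are otherwise an assumption."""
    def __init__(self, cond_channels: int = 3, widths=(320, 640, 1280, 1280)):
        super().__init__()
        blocks, in_ch = [], cond_channels
        for w in widths:
            blocks.append(nn.Sequential(
                nn.Conv2d(in_ch, w, 3, stride=2, padding=1),
                nn.SiLU(),
                nn.Conv2d(w, w, 3, padding=1),
            ))
            in_ch = w
        self.blocks = nn.ModuleList(blocks)

    def forward(self, cond: torch.Tensor) -> list:
        # cond: (B, 3, H, W) depth / sketch / pose map
        feats, x = [], cond
        for block in self.blocks:
            x = block(x)
            feats.append(x)  # one residual per UNet resolution level
        return feats

adapter = TinyAdapter()
residuals = adapter(torch.randn(1, 3, 512, 512))
print([tuple(r.shape) for r in residuals])
# During sampling, the UNet would roughly do:
#   h = encoder_block_i(h) + residuals[i]
```

A ControlNet, by contrast, clones the entire UNet encoder as its control branch, which is where the large gap in parameter count and file size comes from.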
Here are the step-by-step instructions for installing ComfyUI: Windows users with Nvidia GPUs should download the portable standalone build from the releases page, while Linux users follow the manual installation instructions. Remember to add your models, VAE, LoRAs and so on to the matching folders afterwards, and note that some versions of the ControlNet models have associated YAML files which need to sit alongside the weights. On Colab, the notebook exposes USE_GOOGLE_DRIVE, UPDATE_COMFY_UI, and UPDATE_WAS_NS options (the latter updates the WAS Node Suite, including Pillow); you can re-run the cell with the update options selected, after which the script connects to your ComfyUI instance on Colab and executes the generation. If you want a guided path, there is even a full course, "Advanced Stable Diffusion with ComfyUI and SDXL" on Udemy, covering ComfyUI, SDXL, and Stable Diffusion 1.5.

The key efficiency property: for the T2I-Adapter the model runs once in total, while a ControlNet runs at every sampling step. Building on composability, TencentARC introduced CoAdapter (Composable Adapter) by jointly training T2I-Adapters and an extra fuser; the fuser allows different adapters with various conditions to be aware of each other and synergize to achieve more powerful composability, especially the combination of element-level style with other structural information. Two online demos are available, plus a Gradio demo that makes AnimateDiff easier to use; on the ComfyUI side there is the ComfyUI-AnimateDiff-Evolved extension (by @Kosinkadink) and a Colab notebook (by @camenduru). Style guidance flows through CLIP vision: the unCLIP Conditioning node can be used to provide unCLIP models with additional visual guidance through images encoded by a CLIP vision model, and the style path takes a CLIP_vision_output — the image containing the desired style, encoded by a CLIP vision model. Visual Area Conditioning empowers manual image composition control for fine-tuned outputs, and tiled approaches exist for denoising larger images by splitting them into smaller tiles (more on this below).
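The "runs once" point is worth a sketch. The adapter depends only on the condition image, not on the evolving latent, so its features can be hoisted out of the denoising loop; a ControlNet takes the current noisy latent as input and cannot be. Treat this as hedged pseudocode: the residual kwarg follows diffusers' UNet call convention, but the exact name has changed across diffusers versions (down_block_additional_residuals vs. down_intrablock_additional_residuals), so check your version.

```python
import torch

def sample_with_adapter(unet, scheduler, adapter, cond_image, text_emb, latent):
    # Adapter features: computed ONCE, before the loop -- they never change.
    adapter_feats = adapter(cond_image)
    for t in scheduler.timesteps:
        noise_pred = unet(
            latent, t,
            encoder_hidden_states=text_emb,
            # kwarg name varies by diffusers version; see note above
            down_block_additional_residuals=[f.clone() for f in adapter_feats],
        ).sample
        latent = scheduler.step(noise_pred, t, latent).prev_sample
    return latent

# A ControlNet variant would instead call controlnet(latent, t, ...) INSIDE
# the loop on every iteration, roughly doubling the per-step encoder cost.
```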
T2I-Adapter support and latent previews with TAESD add more quality-of-life features, and the weekly updates have brought faster VAE decoding, early inpaint models, a DAT upscale model, and more T2I-Adapters. ComfyUI picks sensible defaults, but you can force it to do whatever you want by adding options on the command line. (One node-level note: a real HDR effect using the Y channel might be possible, but it requires additional libraries.)

Model placement trips up many newcomers. Files such as the t2iadapter safetensors cannot just be dropped anywhere: ControlNets and most T2I-Adapters go in the ComfyUI/models/controlnet folder, after which the ControlNetLoader node will list them — the exception is the T2I style model, which goes in ComfyUI/models/style_models (the folder with the put_t2i_style_model_here placeholder file). The download links are in the Files tab of the Hugging Face repos; download the models one by one. The ControlNet detect-map will be cropped and re-scaled to fit inside the height and width of the txt2img settings, and by default the notebook will download all models. If you click 'Install Custom Nodes' or 'Install Models' in the Manager, an installer dialog will open and handle placement for you; for a simple custom node you can also just download its Python script file and put it inside the ComfyUI/custom_nodes folder. One known rough edge: using the IP-Adapter node simultaneously with the T2I adapter_style node has been reported to generate only a black, empty image. For scale, a full adapter training run takes about an hour on one V100 GPU. On Colab, you can store ComfyUI on Google Drive instead of the Colab VM so your models and outputs persist.
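A small helper sketch for the placement step; the directory names are ComfyUI's standard layout, but COMFY_ROOT and the downloads folder are assumptions you should adjust.

```python
from pathlib import Path
import shutil

COMFY_ROOT = Path("ComfyUI")  # assumption: path to your ComfyUI checkout

# ComfyUI's standard model folders: the style adapter is the odd one out.
DESTS = {
    "style":   COMFY_ROOT / "models" / "style_models",
    "default": COMFY_ROOT / "models" / "controlnet",
}

for f in Path("downloads").glob("t2iadapter_*"):  # assumption: download dir
    dest = DESTS["style"] if "style" in f.name else DESTS["default"]
    dest.mkdir(parents=True, exist_ok=True)
    shutil.copy2(f, dest / f.name)
    print(f"installed {f.name} -> {dest}")
```

After copying, refresh the browser page (or restart ComfyUI) so the loader nodes pick up the new files.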
ComfyUI has recently been drawing attention for its generation speed with SDXL and its low VRAM consumption (around 6 GB for a 1304x768 generation). To give you an idea of how powerful it is: StabilityAI, the creators of Stable Diffusion, use ComfyUI to test Stable Diffusion internally. It is good for prototyping, and the ecosystem now encompasses QR-code workflows, interpolation (2-step and 3-step), inpainting, IP-Adapter, Motion LoRAs, prompt scheduling, ControlNet, and vid2vid. The ComfyUI-Manager extension provides assistance in installing and managing custom nodes, along with a hub feature and convenience functions for accessing a wide range of information within ComfyUI; it installs automatically and stays on once set up. If you're running on Linux, or a non-admin account on Windows, ensure that ComfyUI/custom_nodes and comfyui_controlnet_aux have write permissions.

As a reminder, T2I-Adapters are used exactly like ControlNets in ComfyUI. Simply save one of the sample images and drag and drop it onto the ComfyUI window with ControlNet Canny (with preprocessor) and the T2I-Adapter Style modules active to load the nodes, modify some prompts, press "Queue Prompt", and wait. By feeding the model a sketch or edge map, the algorithm can understand the outlines of the image it should generate. SargeZT has published the first batch of ControlNets and T2I-Adapters for SDXL, and new preprocessors keep being added to ControlNet. Finally, tiled sampling allows for denoising larger images by splitting them up into smaller tiles: it tries to minimize any seams showing up in the end result by gradually denoising all tiles one step at a time and randomizing tile positions for every step.
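Here is a conceptual sketch of that tiling trick — one denoising step applied tile by tile, with a fresh random offset each step so the tile boundaries never line up twice. The denoise_tile callable stands in for whatever real sampler step you use; everything else is an illustration, not ComfyUI's actual implementation.

```python
import torch

def tiled_denoise_step(latent, denoise_tile, tile=64, step_seed=0):
    """Apply one denoising step over a latent in randomly offset tiles."""
    g = torch.Generator().manual_seed(step_seed)  # new offset every step
    oy = int(torch.randint(0, tile, (1,), generator=g))
    ox = int(torch.randint(0, tile, (1,), generator=g))

    out = latent.clone()
    _, _, h, w = latent.shape
    for y in range(-oy, h, tile):
        for x in range(-ox, w, tile):
            y0, x0 = max(y, 0), max(x, 0)
            y1, x1 = min(y + tile, h), min(x + tile, w)
            if y1 > y0 and x1 > x0:
                # Each tile fits in VRAM even when the full latent would not.
                out[:, :, y0:y1, x0:x1] = denoise_tile(latent[:, :, y0:y1, x0:x1])
    return out

# Toy usage: "denoising" here is just a slight shrink toward zero.
lat = torch.randn(1, 4, 128, 128)
for step in range(20):
    lat = tiled_denoise_step(lat, lambda t: t * 0.95, tile=64, step_seed=step)
```

Because the offsets differ per step, any residual seam from one step lands in the interior of a tile on later steps and gets denoised away.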
A Docker-based install is also available; that method is recommended for individuals with experience with Docker containers who understand the pluses and minuses of a container-based install. See the config file to set the search paths for models — in the standalone Windows build you can find this file in the ComfyUI directory. ComfyUI breaks down a workflow into rearrangeable elements, which is what gives you precise control over the diffusion process without coding anything: just enter your text prompt and see the generated image, or go as deep as you like. The equivalent of "batch size" can be configured in different ways depending on the task. For SDXL, you can assign the first 20 steps to the base model and delegate the remaining steps to the refiner model. T2I-Adapters are faster and more efficient than ControlNets but might give lower quality results. A good place to start if you have no idea how any of this works: all the images in the examples repo contain metadata, which means they can be loaded into ComfyUI with the Load button (or dragged onto the window) to get the full workflow that was used to create the image. For FreeU-style tweaking, b1 and b2 scale the backbone intermediates in the lowest and mid output blocks, while s1 and s2 scale the skip values coming from the input blocks that are concatenated to them. There is also ongoing work to upstream the SDXL adapter code to diffusers once it is more settled. As an aside, Apple's Stable Diffusion port is based on diffusers' work and manages roughly 12 seconds per image at about 2 watts on the Neural Engine, though it lags behind in flexibility (no embeddings, monolithic checkpoints).
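The base/refiner hand-off has a direct equivalent in diffusers (ComfyUI does the same thing with two advanced KSampler nodes and matching end/start step values). A hedged sketch of the ensemble-of-experts pattern — the model ids are the public SDXL 1.0 releases, so verify them against your setup:

```python
import torch
from diffusers import StableDiffusionXLPipeline, StableDiffusionXLImg2ImgPipeline

base = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16).to("cuda")
refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-refiner-1.0",
    text_encoder_2=base.text_encoder_2,  # share components to save VRAM
    vae=base.vae,
    torch_dtype=torch.float16).to("cuda")

prompt = "a dog on grass, photo, high quality"

# Base model handles the first 80% of the schedule and hands over latents...
latents = base(prompt, num_inference_steps=25,
               denoising_end=0.8, output_type="latent").images
# ...the refiner finishes the rest (roughly "20 steps base, 5 refiner").
image = refiner(prompt, num_inference_steps=25,
                denoising_start=0.8, image=latents).images[0]
image.save("sdxl_base_refiner.png")
```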
Why do adapters matter at all? Relying solely on text prompts cannot fully take advantage of the knowledge the model has learned, especially when flexible and accurate control (e.g., over pose, layout, or structure) is needed. Both the ControlNet and T2I-Adapter frameworks are flexible and compact: they train quickly, cost little, add few parameters, and can easily be plugged into existing text-to-image diffusion models without affecting the large base model. Notably, a prompt that is poorly controlled by segmentation or sketch alone can often be handled by combining these controls, which is exactly what T2I-Adapter enables.

Recently a brand new model, T2I-Adapter Style, was released by TencentARC for Stable Diffusion. In ComfyUI it is loaded with the Load Style Model node (place the file in ComfyUI/models/style_models, as noted above); applying it is all or nothing, with no further options, although you can set the strength. Hardware-wise, an NVIDIA-based graphics card with 4 GB or more of VRAM is enough to get started. Launch ComfyUI by running python main.py --force-fp16, use the Fetch Updates menu to retrieve updates, and enjoy — and keep it civil.
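To close the loop, here is what the style branch looks like in the same API format as the first sketch. Node ids are arbitrary, filenames are placeholders, and node "2" refers to the positive-prompt CLIPTextEncode node from that earlier example; the class names (CLIPVisionLoader, CLIPVisionEncode, StyleModelLoader, StyleModelApply) are ComfyUI's.

```python
# Style-model branch: a CLIP vision model encodes the style reference image,
# and StyleModelApply merges that encoding into the positive conditioning.
style_branch = {
    "20": {"class_type": "CLIPVisionLoader",
           "inputs": {"clip_name": "clip_vision_vit_l.safetensors"}},  # placeholder
    "21": {"class_type": "LoadImage", "inputs": {"image": "style_ref.png"}},
    "22": {"class_type": "CLIPVisionEncode",
           "inputs": {"clip_vision": ["20", 0], "image": ["21", 0]}},
    "23": {"class_type": "StyleModelLoader",
           "inputs": {"style_model_name": "t2iadapter_style_sd14v1.pth"}},
    "24": {"class_type": "StyleModelApply",
           "inputs": {"conditioning": ["2", 0],   # positive prompt node
                      "style_model": ["23", 0],
                      "clip_vision_output": ["22", 0]}},
}
# Route node "24" (instead of the plain prompt node) into the KSampler's
# `positive` input, and the sampler pulls style guidance from the reference.
```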