
[pull] master from comfyanonymous:master #106

Open: wants to merge 1,585 commits into master from comfyanonymous:master.

Commits (1,585)
754597c
Clean up some controlnet code.
comfyanonymous Oct 23, 2024
66b0961
Fix ControlLora issue with last commit.
comfyanonymous Oct 23, 2024
af8cf79
support SimpleTuner lycoris lora for SD3 (#5340)
PsychoLogicAu Oct 24, 2024
5281090
Add a model merge node for SD3.5 large.
comfyanonymous Oct 24, 2024
ce759b7
Revert download to .tmp in frontend_management (#5369)
huchenlei Oct 25, 2024
d605677
Make euler_ancestral work on flow models (credit: Ashen).
comfyanonymous Oct 25, 2024
c3ffbae
Make LatentUpscale nodes work on 3d latents.
comfyanonymous Oct 26, 2024
5cbb01b
Basic Genmo Mochi video model support.
comfyanonymous Oct 26, 2024
9ee0a65
float16 inference is a bit broken on mochi.
comfyanonymous Oct 27, 2024
669d9e4
Set default shift on mochi to 6.0
comfyanonymous Oct 28, 2024
c0b0cfa
Update web content to release v1.3.21 (#5351)
huchenlei Oct 28, 2024
c320801
Remove useless line.
comfyanonymous Oct 28, 2024
13b0ff8
Update SD3 code.
comfyanonymous Oct 29, 2024
30c0c81
Add a way to patch blocks in SD3.
comfyanonymous Oct 29, 2024
954683d
SLG first implementation for SD3.5 (#5404)
Dango233 Oct 29, 2024
770ab20
Cleanup SkipLayerGuidanceSD3 node.
comfyanonymous Oct 29, 2024
65a8659
Update web content to release v1.3.26 (#5413)
huchenlei Oct 29, 2024
09fdb2b
Support SD3.5 medium diffusers format weights and loras.
comfyanonymous Oct 30, 2024
daa1565
Fix diffusers flux controlnet regression.
comfyanonymous Oct 30, 2024
f2aaa0a
Rename `ImageCrop` to `Image Crop` (#5424)
KoreTeknology Oct 31, 2024
1af4a47
Bump up mac version for attention upcast bug workaround.
comfyanonymous Oct 31, 2024
1c8286a
Avoid SyntaxWarning in UniPC docstring (#5442)
akx Oct 31, 2024
cc9cf6d
Rename some nodes in Display Name Mappings (nodes.py) (#5439)
KoreTeknology Oct 31, 2024
fabf449
Mochi VAE encoder.
comfyanonymous Nov 1, 2024
ee8abf0
Update folder paths: "clip" -> "text_encoders"
comfyanonymous Nov 2, 2024
6c9dbde
Fix mochi all in one checkpoint t5xxl key names.
comfyanonymous Nov 3, 2024
6966729
Add mochi support to readme.
comfyanonymous Nov 4, 2024
c49025f
Allow POST `/userdata/{file}` endpoint to return full file info (#5446)
huchenlei Nov 4, 2024
69694f4
fix dynamic shape export (#5490)
contentis Nov 4, 2024
8afb97c
Fix unknown VAE being detected as the mochi VAE.
comfyanonymous Nov 5, 2024
5e29e7a
Remove scaled_fp8 key after reading it to silence warning.
comfyanonymous Nov 6, 2024
b49616f
Make VAEDecodeTiled node work with video VAEs.
comfyanonymous Nov 7, 2024
2865f91
Free memory before doing tiled decode.
comfyanonymous Nov 7, 2024
75a818c
Move mochi latent node to: latent/video.
comfyanonymous Nov 8, 2024
dd5b57e
fix for SAG with Kohya HRFix/ Deep Shrink (#5546)
DenOfEquity Nov 8, 2024
6ee066a
Live terminal output (#5396)
pythongosssss Nov 9, 2024
8b90e50
Properly handle and reshape masks when used on 3d latents.
comfyanonymous Nov 9, 2024
9c1ed58
proper fix for sag.
comfyanonymous Nov 10, 2024
bdeb1c1
Fast previews for mochi.
comfyanonymous Nov 10, 2024
8a52810
Fix some custom nodes.
comfyanonymous Nov 11, 2024
2a18e98
Refactor so that zsnr can be set in the sampling_settings.
comfyanonymous Nov 11, 2024
8b275ce
Support auto detecting some zsnr anime checkpoints.
comfyanonymous Nov 11, 2024
2d28b0b
improve: add descriptions for clip loaders (#5576)
ltdrdata Nov 11, 2024
eb476e6
Allow 1D masks for 1D latents.
comfyanonymous Nov 11, 2024
a72d152
fix --cuda-device arg for AMD/HIP devices (#5586)
Bratzmeister Nov 12, 2024
8ebf2d8
Add block replace transformer_options to flux.
comfyanonymous Nov 12, 2024
3748e7e
Fix regression.
comfyanonymous Nov 13, 2024
3b9a6cf
Fix issue with 3d masks.
comfyanonymous Nov 13, 2024
122c9ca
Add advanced model merging node for mochi.
comfyanonymous Nov 14, 2024
5fb59c8
Add a node to block merge auraflow models.
comfyanonymous Nov 15, 2024
4ac401a
Update web content to release v1.3.44 (#5620)
huchenlei Nov 16, 2024
22a1d7c
Fix 3.8 compatibility in user_manager.py (#5645)
huchenlei Nov 17, 2024
41886af
Add transformer options blocks replace patch to mochi.
comfyanonymous Nov 16, 2024
d9f9096
Support block replace patches in auraflow.
comfyanonymous Nov 17, 2024
9a0a5d3
Add a skip layer guidance node that can also skip single layers.
comfyanonymous Nov 18, 2024
9cc90ee
Update UI screenshot in README (#5666)
huchenlei Nov 18, 2024
b699a15
Refactor inpaint/ip2p code.
comfyanonymous Nov 19, 2024
f498d85
Add terminal size fallback (#5623)
yoland68 Nov 19, 2024
156a287
Add boolean to InpaintModelConditioning to disable the noise mask.
comfyanonymous Nov 19, 2024
8986151
Rename add_noise_mask -> noise_mask.
comfyanonymous Nov 19, 2024
22535d0
Skip layer guidance now works on stable audio model.
comfyanonymous Nov 20, 2024
07f6eea
Fix mask issue with attention_xformers.
comfyanonymous Nov 20, 2024
772e620
Update readme.
comfyanonymous Nov 21, 2024
41444b5
Add some new weight patching functionality.
comfyanonymous Nov 21, 2024
8f0009a
Support new flux model variants.
comfyanonymous Nov 21, 2024
2fd9c13
Fix mask issue in some attention functions.
comfyanonymous Nov 22, 2024
5e16f1d
Support Lightricks LTX-Video model.
comfyanonymous Nov 22, 2024
0b734de
Add LTX-Video support to the Readme.
comfyanonymous Nov 22, 2024
5818f6c
Remove print.
comfyanonymous Nov 22, 2024
94323a2
Remove prints.
comfyanonymous Nov 22, 2024
bc6be6c
Some fixes to the lowvram system.
comfyanonymous Nov 22, 2024
e5c3f4b
LTXV lowvram fixes.
comfyanonymous Nov 22, 2024
6e8cdcd
Fix some tiled VAE decoding issues with LTX-Video.
comfyanonymous Nov 22, 2024
839ed33
Some improvements to the lowvram unloading.
comfyanonymous Nov 23, 2024
ab885b3
Skip layer guidance node now works on LTX-Video.
comfyanonymous Nov 23, 2024
7126ecf
set LTX min length to 1 for t2i (#5750)
spacepxl Nov 24, 2024
3d80271
Update README.md (#5707)
40476 Nov 24, 2024
b4526d3
Skip layer guidance now works on hydit model.
comfyanonymous Nov 24, 2024
61196d8
Add option to inference the diffusion model in fp32 and fp64.
comfyanonymous Nov 25, 2024
b7143b7
Flux inpaint model does not work in fp16.
comfyanonymous Nov 26, 2024
15c39ea
Support for the official mochi lora format.
comfyanonymous Nov 26, 2024
4c82741
Support official SD3.5 Controlnets.
comfyanonymous Nov 26, 2024
24dc581
fix multi add makedirs error (#5786)
lyksdu Nov 26, 2024
497db62
Alternative fix for #5767
comfyanonymous Nov 26, 2024
0d4e29f
LTXV model merging node.
comfyanonymous Nov 27, 2024
95d8713
Missing parentheses.
comfyanonymous Nov 27, 2024
b666539
Remove print.
comfyanonymous Nov 28, 2024
20879c7
Remove internal model download endpoint (#5432)
huchenlei Nov 28, 2024
53646e0
Update web content to release v1.4.13 (#5807)
huchenlei Nov 28, 2024
bf2650a
Fast previews for ltxv.
comfyanonymous Nov 28, 2024
26fb2c6
Add a way to disable cropping in the CLIPVisionEncode node.
comfyanonymous Nov 29, 2024
82c5308
Backward compatibility patch for changes in the method signature of `…
ltdrdata Nov 29, 2024
20a560e
How to enable experimental memory efficient attention on ROCm RDNA3.
comfyanonymous Nov 29, 2024
3fc6ebc
Add basic style model "multiply" strength.
comfyanonymous Nov 30, 2024
8e4118c
make dpm_2_ancestral work with rectified flow.
comfyanonymous Dec 1, 2024
2d5b3e0
Remove useless code.
comfyanonymous Dec 2, 2024
79d5cea
Improved memory management. (#5450)
comfyanonymous Dec 2, 2024
0ee322e
ModelPatcher Overhaul and Hook Support (#5583)
Kosinkadink Dec 2, 2024
57e8bf6
Fix case where a memory leak could cause crash.
comfyanonymous Dec 3, 2024
8d4e063
Add union link connection type support (#5806)
huchenlei Dec 3, 2024
cdc3b97
resolve relative paths in YAML configuration for extra model paths (#…
bigcat88 Dec 3, 2024
c1b92b7
Some optimizations to euler a.
comfyanonymous Dec 3, 2024
bf9a90a
Revert "Add union link connection type support (#5806)" (#5889)
huchenlei Dec 3, 2024
452179f
Make ModelPatcher class clone function work with inheritance.
comfyanonymous Dec 3, 2024
f7695b5
Add Create Hook Keyframes Interp. node to simplify creating groups of…
Kosinkadink Dec 4, 2024
4827244
[Developer Experience] Add node typing (#5676)
huchenlei Dec 4, 2024
4e402b1
Reland union type (#5900)
webfiltered Dec 4, 2024
3bed56b
Add another ROCm tip.
comfyanonymous Dec 4, 2024
9a616b8
Add rescaling_scale from STG to SkipLayerGuidanceDiT.
comfyanonymous Dec 5, 2024
1e21f4c
Make timestep ranges more usable on rectified flow models.
comfyanonymous Dec 5, 2024
005d2d3
ltxv: add noise to guidance image to ensure generated motion. (#5937)
michaellightricks Dec 6, 2024
8af9a91
A few improvements to #5937.
comfyanonymous Dec 6, 2024
93477f8
Add code owners (#5873)
huchenlei Dec 7, 2024
fbf68c4
clamp input (#5928)
Haoming02 Dec 7, 2024
ac2f052
Set env vars to disable telemetry in libs used by some custom nodes.
comfyanonymous Dec 7, 2024
6579632
Remove unused imports and variables.
comfyanonymous Dec 8, 2024
e2fafe0
Make CLIP set last layer node work with t5 models.
comfyanonymous Dec 9, 2024
0fd4e6c
Lint unused import (#5973)
huchenlei Dec 9, 2024
23827ca
Add `cond_scale` to `sampler_post_cfg_function` (#5985)
catboxanon Dec 10, 2024
a220d11
Replace pylint with ruff (#5987)
huchenlei Dec 10, 2024
1c8d11e
Support different types of tokenizers.
comfyanonymous Dec 10, 2024
44db978
Fix a few things in text enc code for models with no eos token.
comfyanonymous Dec 11, 2024
7a7efe8
Support loading some checkpoint files with nested dicts.
comfyanonymous Dec 11, 2024
5def9fb
Update CI workflow to remove Windows testing configuration (#6007)
yoland68 Dec 11, 2024
5bea1d2
Add MaHiRo (improved/alternate CFG) (#5975)
yoinked-h Dec 11, 2024
5747bc6
Optimize model library (#5841)
hayden-fr Dec 11, 2024
3dfdddc
Update README (Add new keybinding entries) (#6020)
huchenlei Dec 11, 2024
fd5dfb8
Set initial load devices for te and model to mps device on mac.
comfyanonymous Dec 12, 2024
d9d7f3c
Lint all unused variables (#5989)
huchenlei Dec 12, 2024
d4426dc
Lint and fix undefined names (2/N) (#6029)
huchenlei Dec 12, 2024
60749f3
Lint and fix undefined names (3/N) (#6030)
huchenlei Dec 12, 2024
2cddbf0
Lint and fix undefined names (1/N) (#6028)
huchenlei Dec 12, 2024
6c0377f
Enforce F821 undefined-name (#6032)
huchenlei Dec 13, 2024
563291e
Enforce all pyflake lint rules (#6033)
huchenlei Dec 13, 2024
59d58b1
[Security] Fix potential XSS on /view (#6034)
huchenlei Dec 13, 2024
4e14032
Make pad_to_patch_size function work on multi dim.
comfyanonymous Dec 13, 2024
bdf3937
add load 3d node support (#5564)
jtydhr88 Dec 13, 2024
caf2074
add_model_folder_path: ensure unique paths by removing duplicates (#5…
bigcat88 Dec 13, 2024
558b7d8
fix: prestartup script is not applied due to `extra_model_paths.yaml`…
ltdrdata Dec 13, 2024
e83063b
Support conv3d in PatchEmbed.
comfyanonymous Dec 14, 2024
1b3a650
(fix): added "model_type" to photomaker node (#6047)
bigcat88 Dec 15, 2024
6d1a3f7
Fix case of ExecutionBlocker not handled correctly with INPUT_IS_LIST.
comfyanonymous Dec 15, 2024
cc550d5
use String directly to set bg color for load 3d canvas (#6057)
jtydhr88 Dec 16, 2024
5262901
Update web content to release v1.5.18 (#6075)
huchenlei Dec 16, 2024
0f954f3
Update README.md (#6071)
zlobniyshurik Dec 16, 2024
61b5072
Add support for attention masking in Flux (#5942)
Slickytail Dec 16, 2024
19ee5d9
Don't expand mask when not necessary.
comfyanonymous Dec 16, 2024
bda1482
Basic Hunyuan Video model support.
comfyanonymous Dec 17, 2024
0b25f47
Add some missing imports.
comfyanonymous Dec 17, 2024
39b1fc4
Adjust used dtypes for hunyuan video VAE and diffusion model.
comfyanonymous Dec 17, 2024
f4cdede
Fix regression with ltxv VAE.
comfyanonymous Dec 17, 2024
d6656b0
Support llama hunyuan video text encoder in scaled fp8 format.
comfyanonymous Dec 17, 2024
e4e1bff
Support diffusion-pipe hunyuan video lora format.
comfyanonymous Dec 17, 2024
517669a
add preview 3d node (#6070)
jtydhr88 Dec 17, 2024
cd6f615
Fix tiled vae not working with some shapes.
comfyanonymous Dec 17, 2024
ca457f7
Properly tokenize the template for hunyuan video.
comfyanonymous Dec 17, 2024
a4f59bc
Pick attention implementation based on device in llama code.
comfyanonymous Dec 18, 2024
37e5390
Add: --use-sage-attention to enable SageAttention.
comfyanonymous Dec 18, 2024
79badea
Add ConditioningStableAudio.
comfyanonymous Dec 18, 2024
4c5c4dd
Fix regression in VAE code on old pytorch versions.
comfyanonymous Dec 18, 2024
ff2ff02
Support old diffusion-pipe hunyuan video loras.
comfyanonymous Dec 18, 2024
416ccc9
Update web content to release v1.5.19 (#6105)
huchenlei Dec 18, 2024
0c04a6a
Add .github folder to maintainer owner list (#6027)
huchenlei Dec 18, 2024
cbbf077
Small optimizations.
comfyanonymous Dec 18, 2024
9f4b181
Add fast previews for hunyuan video.
comfyanonymous Dec 18, 2024
c441048
Make VAE Encode tiled node work with video VAE.
comfyanonymous Dec 19, 2024
3ad3248
Fix lowvram bug when using a model multiple times in a row.
comfyanonymous Dec 19, 2024
2dda7c1
More proper fix for the memory issue.
comfyanonymous Dec 19, 2024
3cacd3f
Support preview images embedded in safetensors metadata (#6119)
catboxanon Dec 19, 2024
52c1d93
Fix tiled hunyuan video VAE encode issue.
comfyanonymous Dec 20, 2024
cac68ca
Fix some more video tiled encode issues.
comfyanonymous Dec 20, 2024
418eb70
Support new LTXV VAE.
comfyanonymous Dec 20, 2024
bddb026
Add PixArt model support (#6055)
city96 Dec 20, 2024
d7969cb
Replace print with logging (#6138)
huchenlei Dec 20, 2024
e946667
Some fixes/cleanups to pixart code.
comfyanonymous Dec 20, 2024
b5fe392
Remove some useless code.
comfyanonymous Dec 20, 2024
c86cd58
Remove useless code.
comfyanonymous Dec 20, 2024
da13b6b
Get rid of meshgrid warning.
comfyanonymous Dec 20, 2024
1419dee
Update README.md for Intel GPUs (#6069)
qiacheng Dec 20, 2024
341667c
remove minimum step count for AYS (#6137)
TechnoByteJS Dec 21, 2024
601ff9e
Add that Hunyuan Video and Pixart are supported to readme.
comfyanonymous Dec 21, 2024
57f330c
Relax minimum ratio of weights loaded in memory on nvidia.
comfyanonymous Dec 22, 2024
80f0795
Fix lowvram issue with ltxv vae.
comfyanonymous Dec 23, 2024
f7d83b7
fixed a bug in ldm/pixart/blocks.py (#6158)
zhangp365 Dec 23, 2024
56bc64f
Comment out some useless code.
comfyanonymous Dec 23, 2024
e44d0ac
Make --novram completely offload weights.
comfyanonymous Dec 23, 2024
c6b9c11
Add oneAPI device selector for xpu and some other changes. (#6112)
simonlui Dec 23, 2024
1556468
Add a try except block so if torch version is weird it won't crash.
comfyanonymous Dec 23, 2024
f18ebbd
Use raw dir name to serve static web content (#6107)
huchenlei Dec 23, 2024
bc6dac4
Add temporal tiling to VAE Decode (Tiled) node.
comfyanonymous Dec 24, 2024
26e0ba8
Enable External Event Loop Integration for ComfyUI [refactor] (#6114)
bigcat88 Dec 24, 2024
5388df7
Add temporal tiling to VAE Encode (Tiled) node.
comfyanonymous Dec 24, 2024
73e0498
Prevent black images in VAE Decode (Tiled) node.
comfyanonymous Dec 24, 2024
99a1fb6
Make fast fp8 take a bit less peak memory.
comfyanonymous Dec 24, 2024
1ed75ab
Update nightly pytorch instructions in readme for nvidia.
comfyanonymous Dec 25, 2024
0229228
Clean up the VAE dtypes code.
comfyanonymous Dec 25, 2024
b486885
Disable bfloat16 on older mac.
comfyanonymous Dec 25, 2024
19a64d6
Cleanup some mac related code.
comfyanonymous Dec 25, 2024
ee9547b
Improve temporal VAE Encode (Tiled) math.
comfyanonymous Dec 26, 2024
c4bfdba
Support ascend npu (#5436)
ji-huazhong Dec 27, 2024
160ca08
Use python 3.9 in launch test instead of 3.8
comfyanonymous Dec 27, 2024
ceb50b2
Closer memory estimation for pixart models.
comfyanonymous Dec 27, 2024
4b5bcd8
Closer memory estimation for hunyuan dit model.
comfyanonymous Dec 27, 2024
9cfd185
Add option to log non-error output to stdout (#6243)
webfiltered Dec 27, 2024
d170292
Remove some trailing white space.
comfyanonymous Dec 27, 2024
b504bd6
Add ruff rule for empty line with trailing whitespace.
comfyanonymous Dec 28, 2024
96697c4
serve workflow templates from custom_nodes (#6193)
bezo97 Dec 28, 2024
e1dec3c
Fix formatting.
comfyanonymous Dec 28, 2024
a618f76
Auto reshape 2d to 3d latent for single image generation on video model.
comfyanonymous Dec 29, 2024
82ecb02
Remove duplicate calls to INPUT_TYPES (#6249)
catboxanon Dec 30, 2024
3507870
Add 'sigmas' to transformer_options so that downstream code can know …
Kosinkadink Dec 30, 2024
d9b7cfa
Fix and enforce new lines at the end of files.
comfyanonymous Dec 30, 2024
a90aafa
Add kl_optimal scheduler (#6206)
blepping Dec 30, 2024
b7572b2
Fix and enforce no trailing whitespace.
comfyanonymous Dec 31, 2024
02eef72
fixed "verbose" argument (#6289)
bigcat88 Dec 31, 2024
67758f5
Fix custom node type-hinting examples (#6281)
webfiltered Dec 31, 2024
1c99734
Add missing model_options param (#6296)
Kosinkadink Dec 31, 2024
c0338a4
Fix unknown sampler error handling in calculate_sigmas function (#6280)
blepping Dec 31, 2024
79eea51
Fix and enforce all ruff W rules.
comfyanonymous Jan 1, 2025
0f11d60
Fix temporal tiling for decoder, remove redundant tiles. (#6306)
kvochko Jan 1, 2025
9e9c8a1
Clear cache as often on AMD as Nvidia.
comfyanonymous Jan 2, 2025
a39ea87
Update web content to release v1.6.14 (#6312)
huchenlei Jan 2, 2025
953693b
add fov and mask for load 3d node (#6308)
jtydhr88 Jan 3, 2025
0b9839e
Update web content to release v1.6.15 (#6324)
huchenlei Jan 3, 2025
8f29664
Change defaults in nightly package workflow.
comfyanonymous Jan 3, 2025
45671cd
Update web content to release v1.6.16 (#6335)
huchenlei Jan 3, 2025
caa6476
Update web content to release v1.6.17 (#6337)
huchenlei Jan 3, 2025
d45ebb6
Remove old unused function.
comfyanonymous Jan 4, 2025
5cbf797
Add advanced device option to clip loader nodes.
comfyanonymous Jan 5, 2025
c8a3492
Make the device an optional parameter in the clip loaders.
comfyanonymous Jan 5, 2025
b65b83a
Add update-frontend github action (#6336)
huchenlei Jan 5, 2025
7da85fa
Update CODEOWNERS (#6338)
yoland68 Jan 5, 2025
c496e53
In inner_sample, change "sigmas" to "sampler_sigmas" in transformer_o…
Kosinkadink Jan 6, 2025
916d1e1
Make ancestral samplers more deterministic.
comfyanonymous Jan 6, 2025
eeab420
Update frontend to v1.6.18 (#6368)
huchenlei Jan 6, 2025
d055325
Document get_attr and get_model_object (#6357)
huchenlei Jan 7, 2025
4209edf
Make a few more samplers deterministic.
comfyanonymous Jan 7, 2025
c515bdf
fixed: robust loading `comfy.settings.json` (#6383)
ltdrdata Jan 7, 2025
d0f3752
Properly calculate inner dim for t5 model.
comfyanonymous Jan 7, 2025
2307ff6
Improve some logging messages.
comfyanonymous Jan 9, 2025
ff83865
Cleaner handling of attention mask in ltxv model code.
comfyanonymous Jan 9, 2025
129d890
Add argument to skip the output reshaping in the attention functions.
comfyanonymous Jan 10, 2025
2ff3104
WIP support for Nvidia Cosmos 7B and 14B text to world (video) models.
comfyanonymous Jan 10, 2025
adea2be
Add edm option to ModelSamplingContinuousEDM for Cosmos.
comfyanonymous Jan 11, 2025
9c773a2
Add pyproject.toml (#6386)
huchenlei Jan 11, 2025
ee8a7ab
Fast latent preview for Cosmos.
comfyanonymous Jan 11, 2025
6c9bd11
Hooks Part 2 - TransformerOptionsHook and AdditionalModelsHook (#6377)
Kosinkadink Jan 11, 2025
42086af
Merge ruff.toml into pyproject.toml (#6431)
huchenlei Jan 11, 2025
b9d9bcb
fixed a bug where a relative path was not converted to a full path (#…
bigcat88 Jan 12, 2025
90f349f
Add res_multistep sampler from the cosmos code.
comfyanonymous Jan 12, 2025
This file was deleted.

115 changes: 98 additions & 17 deletions .ci/update_windows/update.py

@@ -1,6 +1,9 @@
 import pygit2
 from datetime import datetime
 import sys
+import os
+import shutil
+import filecmp

 def pull(repo, remote_name='origin', branch='master'):
     for remote in repo.remotes:
@@ -25,41 +28,119 @@ def pull(repo, remote_name='origin', branch='master'):

                 if repo.index.conflicts is not None:
                     for conflict in repo.index.conflicts:
-                        print('Conflicts found in:', conflict[0].path)
+                        print('Conflicts found in:', conflict[0].path) # noqa: T201
                     raise AssertionError('Conflicts, ahhhhh!!')

                 user = repo.default_signature
                 tree = repo.index.write_tree()
-                commit = repo.create_commit('HEAD',
-                                            user,
-                                            user,
-                                            'Merge!',
-                                            tree,
-                                            [repo.head.target, remote_master_id])
+                repo.create_commit('HEAD',
+                                   user,
+                                   user,
+                                   'Merge!',
+                                   tree,
+                                   [repo.head.target, remote_master_id])
                 # We need to do this or git CLI will think we are still merging.
                 repo.state_cleanup()
             else:
                 raise AssertionError('Unknown merge analysis result')

 pygit2.option(pygit2.GIT_OPT_SET_OWNER_VALIDATION, 0)
-repo = pygit2.Repository(str(sys.argv[1]))
+repo_path = str(sys.argv[1])
+repo = pygit2.Repository(repo_path)
 ident = pygit2.Signature('comfyui', 'comfy@ui')
 try:
-    print("stashing current changes")
+    print("stashing current changes") # noqa: T201
     repo.stash(ident)
 except KeyError:
-    print("nothing to stash")
+    print("nothing to stash") # noqa: T201
 backup_branch_name = 'backup_branch_{}'.format(datetime.today().strftime('%Y-%m-%d_%H_%M_%S'))
-print("creating backup branch: {}".format(backup_branch_name))
-repo.branches.local.create(backup_branch_name, repo.head.peel())
+print("creating backup branch: {}".format(backup_branch_name)) # noqa: T201
+try:
+    repo.branches.local.create(backup_branch_name, repo.head.peel())
+except:
+    pass

-print("checking out master branch")
+print("checking out master branch") # noqa: T201
 branch = repo.lookup_branch('master')
-ref = repo.lookup_reference(branch.name)
-repo.checkout(ref)
+if branch is None:
+    ref = repo.lookup_reference('refs/remotes/origin/master')
+    repo.checkout(ref)
+    branch = repo.lookup_branch('master')
+    if branch is None:
+        repo.create_branch('master', repo.get(ref.target))
+else:
+    ref = repo.lookup_reference(branch.name)
+    repo.checkout(ref)

-print("pulling latest changes")
+print("pulling latest changes") # noqa: T201
 pull(repo)

-print("Done!")
+if "--stable" in sys.argv:
+    def latest_tag(repo):
+        versions = []
+        for k in repo.references:
+            try:
+                prefix = "refs/tags/v"
+                if k.startswith(prefix):
+                    version = list(map(int, k[len(prefix):].split(".")))
+                    versions.append((version[0] * 10000000000 + version[1] * 100000 + version[2], k))
+            except:
+                pass
+        versions.sort()
+        if len(versions) > 0:
+            return versions[-1][1]
+        return None
+    latest_tag = latest_tag(repo)
+    if latest_tag is not None:
+        repo.checkout(latest_tag)
+
+print("Done!") # noqa: T201
+
+self_update = True
+if len(sys.argv) > 2:
+    self_update = '--skip_self_update' not in sys.argv
+
+update_py_path = os.path.realpath(__file__)
+repo_update_py_path = os.path.join(repo_path, ".ci/update_windows/update.py")
+
+cur_path = os.path.dirname(update_py_path)
+
+
+req_path = os.path.join(cur_path, "current_requirements.txt")
+repo_req_path = os.path.join(repo_path, "requirements.txt")
+
+
+def files_equal(file1, file2):
+    try:
+        return filecmp.cmp(file1, file2, shallow=False)
+    except:
+        return False
+
+def file_size(f):
+    try:
+        return os.path.getsize(f)
+    except:
+        return 0
+
+
+if self_update and not files_equal(update_py_path, repo_update_py_path) and file_size(repo_update_py_path) > 10:
+    shutil.copy(repo_update_py_path, os.path.join(cur_path, "update_new.py"))
+    exit()
+
+if not os.path.exists(req_path) or not files_equal(repo_req_path, req_path):
+    import subprocess
+    try:
+        subprocess.check_call([sys.executable, '-s', '-m', 'pip', 'install', '-r', repo_req_path])
+        shutil.copy(repo_req_path, req_path)
+    except:
+        pass
+
+
+stable_update_script = os.path.join(repo_path, ".ci/update_windows/update_comfyui_stable.bat")
+stable_update_script_to = os.path.join(cur_path, "update_comfyui_stable.bat")
+
+try:
+    if not file_size(stable_update_script_to) > 10:
+        shutil.copy(stable_update_script, stable_update_script_to)
+except:
+    pass
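The `--stable` path in the update.py diff above picks the newest release tag by packing a `vMAJOR.MINOR.PATCH` tag name into a single sortable integer. A minimal standalone sketch of that selection logic (the ref list below is hypothetical, not from the PR):

```python
def latest_tag(refs):
    # Keep only refs that look like refs/tags/vX.Y.Z and pack each version
    # into one sortable integer, mirroring the weights used in update.py.
    versions = []
    prefix = "refs/tags/v"
    for k in refs:
        try:
            if k.startswith(prefix):
                major, minor, patch = map(int, k[len(prefix):].split("."))
                versions.append((major * 10000000000 + minor * 100000 + patch, k))
        except ValueError:
            pass  # skip tags that are not a plain X.Y.Z triple
    versions.sort()
    return versions[-1][1] if versions else None

# Hypothetical tag list for illustration:
refs = ["refs/tags/v0.3.10", "refs/tags/v0.2.7", "refs/heads/master", "refs/tags/v0.3.9"]
print(latest_tag(refs))  # -> refs/tags/v0.3.10
```

The integer packing (10,000,000,000 / 100,000 weights) sorts versions correctly as long as minor and patch stay below 100,000, which is why `v0.3.10` beats `v0.3.9` even though it sorts later as a string.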
8 changes: 7 additions & 1 deletion .ci/update_windows/update_comfyui.bat

@@ -1,2 +1,8 @@
+@echo off
 ..\python_embeded\python.exe .\update.py ..\ComfyUI\
-pause
+if exist update_new.py (
+  move /y update_new.py update.py
+  echo Running updater again since it got updated.
+  ..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update
+)
+if "%~1"=="" pause
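The batch file above completes a two-stage self-update: update.py cannot safely overwrite itself while running, so when it detects that its own copy differs from the one in the repo it stages the new version as `update_new.py` and exits, and the wrapper swaps it in and re-runs with `--skip_self_update`. The trigger is a byte-for-byte `filecmp` check; a minimal sketch of that detection step (all paths here are hypothetical temp files, not the real install layout):

```python
import filecmp
import os
import shutil
import tempfile

def files_equal(file1, file2):
    # Byte-for-byte comparison; a missing file simply compares unequal.
    try:
        return filecmp.cmp(file1, file2, shallow=False)
    except OSError:
        return False

# Hypothetical "running updater" vs. "updater inside the repo":
tmp = tempfile.mkdtemp()
running = os.path.join(tmp, "update.py")
in_repo = os.path.join(tmp, "repo_update.py")
with open(running, "w") as f:
    f.write("print('old updater')\n")
with open(in_repo, "w") as f:
    f.write("print('new updater')\n")

if not files_equal(running, in_repo):
    # Stage the new copy for the wrapper script to swap in on the next run.
    shutil.copy(in_repo, os.path.join(tmp, "update_new.py"))

print(os.path.exists(os.path.join(tmp, "update_new.py")))  # -> True
```

Using `shallow=False` forces a content comparison rather than trusting `os.stat` metadata, so the swap only happens when the updater actually changed.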
3 changes: 0 additions & 3 deletions .ci/update_windows/update_comfyui_and_python_dependencies.bat

This file was deleted.

8 changes: 8 additions & 0 deletions .ci/update_windows/update_comfyui_stable.bat

@@ -0,0 +1,8 @@
+@echo off
+..\python_embeded\python.exe .\update.py ..\ComfyUI\ --stable
+if exist update_new.py (
+  move /y update_new.py update.py
+  echo Running updater again since it got updated.
+  ..\python_embeded\python.exe .\update.py ..\ComfyUI\ --skip_self_update --stable
+)
+if "%~1"=="" pause

This file was deleted.

2 changes: 1 addition & 1 deletion .ci/windows_base_files/README_VERY_IMPORTANT.txt

@@ -14,7 +14,7 @@ run_cpu.bat

 IF YOU GET A RED ERROR IN THE UI MAKE SURE YOU HAVE A MODEL/CHECKPOINT IN: ComfyUI\models\checkpoints

-You can download the stable diffusion 1.5 one from: https://huggingface.co/runwayml/stable-diffusion-v1-5/blob/main/v1-5-pruned-emaonly.ckpt
+You can download the stable diffusion 1.5 one from: https://huggingface.co/Comfy-Org/stable-diffusion-v1-5-archive/blob/main/v1-5-pruned-emaonly-fp16.safetensors


 RECOMMENDED WAY TO UPDATE:
@@ -1,2 +1,2 @@
-.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --use-pytorch-cross-attention
+.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --fast
 pause
2 changes: 2 additions & 0 deletions .gitattributes

@@ -0,0 +1,2 @@
+/web/assets/** linguist-generated
+/web/** linguist-vendored
48 changes: 48 additions & 0 deletions .github/ISSUE_TEMPLATE/bug-report.yml

@@ -0,0 +1,48 @@
+name: Bug Report
+description: "Something is broken inside of ComfyUI. (Do not use this if you're just having issues and need help, or if the issue relates to a custom node)"
+labels: ["Potential Bug"]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Before submitting a **Bug Report**, please ensure the following:
+
+        - **1:** You are running the latest version of ComfyUI.
+        - **2:** You have looked at the existing bug reports and made sure this isn't already reported.
+        - **3:** You confirmed that the bug is not caused by a custom node. You can disable all custom nodes by passing the
+          `--disable-all-custom-nodes` command line argument.
+        - **4:** This is an actual bug in ComfyUI, not just a support question. A bug is when you can specify exact
+          steps to replicate what went wrong and others will be able to repeat your steps and see the same issue happen.
+
+        If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+  - type: textarea
+    attributes:
+      label: Expected Behavior
+      description: "What you expected to happen."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Actual Behavior
+      description: "What actually happened. Please include a screenshot of the issue if possible."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Steps to Reproduce
+      description: "Describe how to reproduce the issue. Please be sure to attach a workflow JSON or PNG, ideally one that doesn't require custom nodes to test. If the bug only happens when certain custom nodes are used, most likely that custom node is what has the bug rather than ComfyUI, in which case it should be reported to the node's author."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Debug Logs
+      description: "Please copy the output from your terminal logs here."
+      render: powershell
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Other
+      description: "Any other additional information you think might be helpful."
+    validations:
+      required: false
11 changes: 11 additions & 0 deletions .github/ISSUE_TEMPLATE/config.yml

@@ -0,0 +1,11 @@
+blank_issues_enabled: true
+contact_links:
+  - name: ComfyUI Frontend Issues
+    url: https://github.com/Comfy-Org/ComfyUI_frontend/issues
+    about: Issues related to the ComfyUI frontend (display issues, user interaction bugs), please go to the frontend repo to file the issue
+  - name: ComfyUI Matrix Space
+    url: https://app.element.io/#/room/%23comfyui_space%3Amatrix.org
+    about: The ComfyUI Matrix Space is available for support and general discussion related to ComfyUI (Matrix is like Discord but open source).
+  - name: Comfy Org Discord
+    url: https://discord.gg/comfyorg
+    about: The Comfy Org Discord is available for support and general discussion related to ComfyUI.
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/feature-request.yml

@@ -0,0 +1,32 @@
+name: Feature Request
+description: "You have an idea for something new you would like to see added to ComfyUI's core."
+labels: [ "Feature" ]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Before submitting a **Feature Request**, please ensure the following:
+
+        **1:** You are running the latest version of ComfyUI.
+        **2:** You have looked to make sure there is not already a feature that does what you need, and there is not already a Feature Request listed for the same idea.
+        **3:** This is something that makes sense to add to ComfyUI Core, and wouldn't make more sense as a custom node.
+
+        If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+  - type: textarea
+    attributes:
+      label: Feature Idea
+      description: "Describe the feature you want to see."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Existing Solutions
+      description: "Please search through available custom nodes / extensions to see if there are existing custom solutions for this. If so, please link the options you found here as a reference."
+    validations:
+      required: false
+  - type: textarea
+    attributes:
+      label: Other
+      description: "Any other additional information you think might be helpful."
+    validations:
+      required: false
32 changes: 32 additions & 0 deletions .github/ISSUE_TEMPLATE/user-support.yml

@@ -0,0 +1,32 @@
+name: User Support
+description: "Use this if you need help with something, or you're experiencing an issue."
+labels: [ "User Support" ]
+body:
+  - type: markdown
+    attributes:
+      value: |
+        Before submitting a **User Report** issue, please ensure the following:
+
+        **1:** You are running the latest version of ComfyUI.
+        **2:** You have made an effort to find public answers to your question before asking here. In other words, you googled it first, and scrolled through recent help topics.
+
+        If unsure, ask on the [ComfyUI Matrix Space](https://app.element.io/#/room/%23comfyui_space%3Amatrix.org) or the [Comfy Org Discord](https://discord.gg/comfyorg) first.
+  - type: textarea
+    attributes:
+      label: Your question
+      description: "Post your question here. Please be as detailed as possible."
+    validations:
+      required: true
+  - type: textarea
+    attributes:
+      label: Logs
+      description: "If your question relates to an issue you're experiencing, please go to `Server` -> `Logs` -> potentially set `View Type` to `Debug` as well, then copypaste all the text into here."
+      render: powershell
+    validations:
+      required: false
+  - type: textarea
+    attributes:
+      label: Other
+      description: "Any other additional information you think might be helpful."
+    validations:
+      required: false
53 changes: 53 additions & 0 deletions .github/workflows/pullrequest-ci-run.yml

@@ -0,0 +1,53 @@
+# This is the GitHub Workflow that drives full-GPU-enabled tests of pull requests to ComfyUI, when the 'Run-CI-Test' label is added
+# Results are reported as checkmarks on the commits, as well as onto https://ci.comfy.org/
+name: Pull Request CI Workflow Runs
+on:
+  pull_request_target:
+    types: [labeled]
+
+jobs:
+  pr-test-stable:
+    if: ${{ github.event.label.name == 'Run-CI-Test' }}
+    strategy:
+      fail-fast: false
+      matrix:
+        os: [macos, linux, windows]
+        python_version: ["3.9", "3.10", "3.11", "3.12"]
+        cuda_version: ["12.1"]
+        torch_version: ["stable"]
+        include:
+          - os: macos
+            runner_label: [self-hosted, macOS]
+            flags: "--use-pytorch-cross-attention"
+          - os: linux
+            runner_label: [self-hosted, Linux]
+            flags: ""
+          - os: windows
+            runner_label: [self-hosted, Windows]
+            flags: ""
+    runs-on: ${{ matrix.runner_label }}
+    steps:
+      - name: Test Workflows
+        uses: comfy-org/comfy-action@main
+        with:
+          os: ${{ matrix.os }}
+          python_version: ${{ matrix.python_version }}
+          torch_version: ${{ matrix.torch_version }}
+          google_credentials: ${{ secrets.GCS_SERVICE_ACCOUNT_JSON }}
+          comfyui_flags: ${{ matrix.flags }}
+          use_prior_commit: 'true'
+  comment:
+    if: ${{ github.event.label.name == 'Run-CI-Test' }}
+    runs-on: ubuntu-latest
+    permissions:
+      pull-requests: write
+    steps:
+      - uses: actions/github-script@v6
+        with:
+          script: |
+            github.rest.issues.createComment({
+              issue_number: context.issue.number,
+              owner: context.repo.owner,
+              repo: context.repo.repo,
+              body: '(Automated Bot Message) CI Tests are running, you can view the results at https://ci.comfy.org/?branch=${{ github.event.pull_request.number }}%2Fmerge'
+            })