
Merging the latest master changes from the original repo #3

Draft
wants to merge 695 commits into master

Conversation

@ghost ghost commented Jul 17, 2024

Merging the latest master changes from the original repo

@ghost (Author) commented Jul 17, 2024

@fksato
Why can't I add reviewers in this PR?

@ghost ghost self-assigned this Jul 23, 2024
@ghost ghost marked this pull request as draft July 23, 2024 09:18
comfyanonymous and others added 27 commits November 8, 2024 08:33
now works with arbitrary downscale factors
* Add /logs/raw and /logs/subscribe for getting logs on frontend
Hijacks stderr/stdout to send all output data to the client on flush

* Use existing send sync method

* Fix get_logs should return string

* Fix bug

* pass no server

* fix tests

* Fix output flush on linux
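The logging commits above describe hijacking stderr/stdout so that all output is forwarded to the client on flush. A minimal sketch of that pattern is below; the class and attribute names (`LogInterceptor`, `subscribers`) are illustrative assumptions, not ComfyUI's actual API:

```python
import io


class LogInterceptor(io.TextIOBase):
    """Wrap a stream (e.g. sys.stdout): writes pass through to the
    original stream and are buffered, then forwarded to subscriber
    callbacks on flush."""

    def __init__(self, original):
        self.original = original
        self.pending = ""
        self.subscribers = []  # callables that receive log text

    def write(self, text):
        self.original.write(text)
        self.pending += text
        return len(text)

    def flush(self):
        self.original.flush()
        if self.pending:
            for callback in self.subscribers:
                callback(self.pending)  # e.g. send to a websocket client
            self.pending = ""
```

Installing it would look like `sys.stdout = LogInterceptor(sys.stdout)`, after which every flush delivers the buffered output to all subscribers.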
* fix --cuda-device arg for AMD/HIP devices

The HIP backend ignores CUDA_VISIBLE_DEVICES and uses HIP_VISIBLE_DEVICES instead. Setting either variable has no side effect on the other backend, so both can safely be set in all cases.

* deleted accidental if
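The fix described above amounts to setting both environment variables whenever a device index is requested. A minimal sketch (the helper name `set_visible_device` is illustrative, not the actual function in the PR):

```python
import os


def set_visible_device(device_id: str) -> None:
    # The HIP backend ignores CUDA_VISIBLE_DEVICES and reads
    # HIP_VISIBLE_DEVICES instead. Each variable is a no-op on the
    # other backend, so setting both is always safe.
    os.environ["CUDA_VISIBLE_DEVICES"] = device_id
    os.environ["HIP_VISIBLE_DEVICES"] = device_id
```

Note these variables must be set before the GPU runtime is initialized to take effect.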
* Update web content to release v1.3.44

* nit
This one should work for skipping the single layers of models like Flux
and Auraflow.

If you want to see how these models work and how many double/single layers
they have, see the "ModelMerge*" nodes for the specific model.
* Update UI ScreenShot in README

* Remove legacy UI screenshot file

* nit

* nit
catboxanon and others added 30 commits January 14, 2025 19:05
Useful for models utilizing ztSNR. See: https://arxiv.org/abs/2409.15997
If this breaks something for you, open an issue.
* Use `torch.special.expm1`

This function provides greater precision than `exp(x) - 1` for small values of `x`.

Found with TorchFix https://github.com/pytorch-labs/torchfix/

* Use non-alias
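The precision gain from `expm1` is easy to demonstrate: for tiny `x`, `exp(x)` rounds to exactly 1.0, so `exp(x) - 1` cancels to zero, while `expm1` keeps the small result. The sketch below uses Python's `math.expm1` to illustrate the same principle as `torch.special.expm1`:

```python
import math

x = 1e-20
naive = math.exp(x) - 1.0    # exp(1e-20) rounds to 1.0 in float64, so this is 0.0
precise = math.expm1(x)      # computes exp(x) - 1 directly, avoiding cancellation
```

Here `naive` loses all significant digits while `precise` is accurate to machine precision, which is why TorchFix flags `exp(x) - 1` in favor of `expm1`.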