How much learning capacity does Pixart have? #64
-
Hey, since PixArt uses a very low parameter count, how much information is it able to store? Stable Diffusion can store lots of different concepts in a single checkpoint, without one bleeding into another, if trained right. Is PixArt able to do the same? I'm not too sure how PixArt handles this, so any insight would be great.
-
IMO, the restriction is more likely to come from the dataset. As long as your dataset contains enough concepts, you will get what you want; model size may have less impact on concept learning. That's why Stable Diffusion XL extends its internal dataset to millions of images, for both concept learning and image-quality improvement. PixArt has a parameter count similar to SD2.1, so I don't think this will be an issue. BTW, a larger model will definitely increase capacity, while a smaller model like PixArt accelerates training. Therefore, after concept learning, the Transformer-based PixArt can easily be scaled up to a larger model size. We consider this training process more reasonable.
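To make the "similar parameter count" and "easily scaled up" points concrete, here is a rough back-of-envelope sketch of how a Transformer backbone's parameter count grows with width and depth. The 28-block / 1152-hidden configuration is an assumption based on the DiT-XL layout PixArt-α builds on, not a count read off the actual checkpoint, and the formula ignores embeddings, cross-attention, and adaLN parameters:

```python
def transformer_param_estimate(depth: int, hidden: int, mlp_ratio: int = 4) -> int:
    """Rough per-block parameter estimate for a plain Transformer:
    attention projections (q, k, v, out) plus a 2-layer MLP.
    Embeddings, norms, and conditioning layers are ignored."""
    attn = 4 * hidden * hidden              # q, k, v, output projections
    mlp = 2 * mlp_ratio * hidden * hidden   # up- and down-projection
    return depth * (attn + mlp)

# Assumed DiT-XL-like configuration: 28 blocks, hidden size 1152
base = transformer_param_estimate(28, 1152)

# Doubling the width roughly quadruples the block parameters,
# which is the cheap scaling knob the reply alludes to.
scaled = transformer_param_estimate(28, 2304)

print(f"{base / 1e6:.0f}M -> {scaled / 1e6:.0f}M")
```

The quadratic dependence on hidden size is why "scale up after concept learning" is attractive: most capacity lives in the per-block projection matrices, which can be widened without changing the overall architecture.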