I generated images for 50 different strings. About 20% of them contain half sentences or random strokes. I set bias to 1 and thickness to 10 (the default). Is there a limit on word count, or some other variable affecting the output image?
This is a known problem. Sometimes the model goes haywire after sampling a certain number of points, and it also seems to struggle with rare letters and words. It may be somewhat undertrained, or the dataset it was trained on may contain corrupted examples; there could also be a bug in the data preparation code. In any case, there is no way to flexibly control the output and prevent these failure cases.
You can try different checkpoints or experiment with the bias parameter. Surprisingly, I observed noticeably more failures with bias=1 than with smaller values.
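For context on what bias does: in Graves-style handwriting models, the bias sharpens the output distribution by scaling the mixture-weight logits and shrinking the standard deviations before sampling. Below is a minimal sketch of that adjustment; the function name `apply_bias` and the raw-output names `pi_hat`/`sigma_hat` are illustrative, not this repository's actual API.

```python
import math

def apply_bias(pi_hat, sigma_hat, bias):
    """Sharpen mixture weights and shrink std devs (Graves-style bias).

    pi_hat:    raw (pre-softmax) mixture weight logits
    sigma_hat: raw log-std-dev outputs
    bias:      0.0 keeps the model's natural variability; larger values
               concentrate sampling near the mode (cleaner, less varied strokes)
    """
    # Weights: softmax over logits scaled by (1 + bias), computed stably
    scaled = [p * (1.0 + bias) for p in pi_hat]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    pi = [e / total for e in exps]
    # Std devs: subtract bias before exponentiating, so higher bias => smaller sigma
    sigma = [math.exp(s - bias) for s in sigma_hat]
    return pi, sigma
```

A high bias forces the sampler toward the most likely stroke at every step, which usually looks neater but can also lock the model into a bad mode; that may be why bias=1 failed more often here than smaller values.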
As for the word count, there is a hardcoded attribute num_steps set to 1500 on this line:
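That hardcoded `num_steps` caps how many pen points the sampler will emit, so any text that needs more points than the budget allows gets cut off mid-stroke, which would show up as half sentences. A toy illustration of the truncation effect (the per-character point budget is an assumed number for demonstration; the real model decides when to stop via its attention window):

```python
def sample_strokes(text, num_steps=1500, points_per_char=40):
    """Illustrative only: emit at most num_steps pen points for `text`.

    points_per_char=40 is a made-up average, not measured from this model.
    Returns (points_emitted, was_truncated).
    """
    needed = len(text) * points_per_char
    emitted = min(needed, num_steps)   # hard cap: remaining points are dropped
    return emitted, needed > num_steps
```

Under this assumption, a string longer than roughly `num_steps / points_per_char` characters would be truncated; raising `num_steps` should let longer strings render fully, at the cost of more sampling time.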