
Features Planned #6

Closed
15 of 16 tasks
Neet-Nestor opened this issue May 15, 2024 · 6 comments
Labels
enhancement New feature or request

Comments

@Neet-Nestor
Collaborator

Neet-Nestor commented May 15, 2024

  • Display Model Information: The name of the AI model being used should always be displayed in the user interface, as changing the model variant is a key feature.
  • Show Performance Stats: Provide a section in the UI to display performance statistics related to the response generation process.
  • Revise User and Bot Avatars: Replace the user and bot avatars with more meaningful representations, such as the MLC logo plus model name for the bot, and default to the user's text for the user.
  • Provide Guidance for Shortcuts: Add a section on the left side of the UI to show guidance on important shortcut settings.
  • Rearrange Settings:
    • Move the setting for changing sampling parameters (e.g., temperature) to a more prominent location.
    • If the model name is displayed separately, the model selection setting may be less important.
  • Clarify Terminology: Rename "mask" to "system prompt template."
  • Streamline Long Chat Experience:
    • Optimize the multi-round chat experience to avoid excessive processing time and unintended behavior due to context memorization.
    • Consider removing the contextual prompt feature initially and focus on providing a streamlined, slick UI for multi-round chat.
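For the performance-stats item, a minimal sketch of how per-response numbers could be turned into a line for the UI (the `GenerationStats` shape and field names here are hypothetical, not web-llm's actual API; WebLLM exposes its own runtime stats that would feed this):

```typescript
// Hypothetical per-response metrics; WebLLM reports similar numbers,
// but these field names are assumptions for illustration.
interface GenerationStats {
  prefillTokens: number;  // tokens processed before generation started
  decodeTokens: number;   // tokens generated in the response
  prefillTimeMs: number;
  decodeTimeMs: number;
}

// Format the stats into a short line for the performance section.
function formatStats(s: GenerationStats): string {
  const prefillTps = (s.prefillTokens / (s.prefillTimeMs / 1000)).toFixed(1);
  const decodeTps = (s.decodeTokens / (s.decodeTimeMs / 1000)).toFixed(1);
  return `prefill: ${prefillTps} tok/s, decode: ${decodeTps} tok/s`;
}
```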

Ensure the following features are available and easily accessible:

  • Changing settings (e.g., temperature) and system prompts
  • Selecting AI models (with eager loading to reduce bootstrap time)
  • Providing a simple set of validated prompt libraries
  • Ability to stop generation and reset the AI model (in case of bugs)
  • Ability to share the current session settings (system prompt, model, and other settings) via a shareable URL
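For the shareable-URL item, one possible sketch: serialize the session settings into the URL hash and parse them back on load (the setting names are illustrative, not the app's actual schema):

```typescript
// Illustrative session-settings shape; not the app's real schema.
interface SessionSettings {
  model: string;
  temperature: number;
  systemPrompt: string;
}

// Encode settings into a shareable URL's hash fragment.
function toShareUrl(base: string, s: SessionSettings): string {
  const params = new URLSearchParams({
    model: s.model,
    temperature: String(s.temperature),
    system: s.systemPrompt,
  });
  return `${base}#${params.toString()}`;
}

// Recover settings from a shared URL; null if anything is missing.
function fromShareUrl(url: string): SessionSettings | null {
  const hash = url.split("#")[1];
  if (!hash) return null;
  const p = new URLSearchParams(hash);
  const model = p.get("model");
  const temperature = p.get("temperature");
  const system = p.get("system");
  if (model === null || temperature === null || system === null) return null;
  return { model, temperature: Number(temperature), systemPrompt: system };
}
```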
@Neet-Nestor Neet-Nestor self-assigned this May 15, 2024
@flatsiedatsie

Perhaps it would be an idea to offer a toggle between 'normal' and 'advanced' mode in settings somewhere? Then, in normal mode the advanced settings could be hidden (e.g. by toggling a CSS class on the body).

In my own project I kept wanting to "keep things simple", yet at the same time offer more advanced features to people who are more familiar with LLMs. That toggle is how I solved it (somewhat).

The cool thing about WebLLM is that it makes LLMs extremely accessible. All you need is a web browser. To me it made sense to build on that strength by keeping the initial UI experience very basic, so as not to overwhelm users at first.

Of course, just my 2 cents, your mileage may vary, etc. I'll definitely be keeping my eye on this project, it looks (also literally) very nice.
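The normal/advanced toggle suggested above could be sketched as a pure helper mirroring what `document.body.classList.toggle("advanced")` would do in the browser, so advanced-only settings can be hidden with a CSS rule like `body:not(.advanced) .advanced-only { display: none; }` (the class names are assumptions):

```typescript
// Toggle the hypothetical "advanced" class in a space-separated class
// attribute, mimicking document.body.classList.toggle("advanced").
function toggleAdvanced(classAttr: string): string {
  const classes = classAttr.split(/\s+/).filter(Boolean);
  const i = classes.indexOf("advanced");
  if (i >= 0) {
    classes.splice(i, 1); // already in advanced mode: switch back to normal
  } else {
    classes.push("advanced"); // switch to advanced mode
  }
  return classes.join(" ");
}
```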

@Neet-Nestor
Collaborator Author

Neet-Nestor commented May 17, 2024

> Perhaps it would be an idea to offer a toggle between 'normal' and 'advanced' mode in settings somewhere? Then, in normal mode the advanced settings could be hidden (e.g. by toggling a CSS class on the body).
>
> In my own project I kept wanting to "keep things simple", yet at the same time offer more advanced features to people who are more familiar with LLMs. That toggle is how I solved it (somewhat).
>
> The cool thing about WebLLM is that it makes LLMs extremely accessible. All you need is a web browser. To me it made sense to build on that strength by keeping the initial UI experience very basic, so as not to overwhelm users at first.
>
> Of course, just my 2 cents, your mileage may vary, etc. I'll definitely be keeping my eye on this project, it looks (also literally) very nice.

Thanks for your input here! This project was just born a week ago, so I'm still working on optimizing the UI layout and UX. Making the overall product accessible and simple to use is definitely our end goal, so your suggestion here is invaluable and definitely aligns with my thoughts.

I'll update you after more iterations, and please always feel free to share your frank opinions. Sincerely appreciated.

@flatsiedatsie

flatsiedatsie commented May 17, 2024

Actually, I only later realized you're integrating WebLLM into a pre-built UI project. Smart.

But that makes my comment mostly moot, except for the very personal opinion that I'm so eager to see projects that bring this technology to a wider audience. Most non-techy people around me have never even tried anything with AI yet. It's partially why I'm building my project: to have something my friends could try safely, with a low barrier to entry.

I haven't been able to actually use your project yet unfortunately. I'm assuming it's just not ready yet. It's stuck on 'Typing...'.

> Clarify Terminology: Rename "mask" to "system prompt template."

I was actually thinking that 'mask' was a nice metaphor, and that I might steal that idea :-)

@Neet-Nestor Neet-Nestor added the enhancement New feature or request label May 20, 2024
@mikestaub

Any plans to add RAG capabilities like https://github.com/addyosmani/chatty ?

@Neet-Nestor
Collaborator Author

> Any plans to add RAG capabilities like https://github.com/addyosmani/chatty ?

That's such a cool project. I will have a look to see if we can do something similar.

@flatsiedatsie

Just want to say: keep up the great work Neet-Nestor!

*golfclap
