0.28.0 (#148)
* fix issue where saving a new scene would save into a "new scenario" directory instead of a relevantly named directory

* implement function to fork new scene file from specific message

* dynamic choice generation

* dynamic choice generation progress

* prompt tweaks

* disable choice generation by default
prompt tweaks

* prompt tweaks for assisted RAG tasks

* allow analyze_text_and_extract_context to include character context

* more prompt tweaks for RAG assist during conversation generation

* open director settings from dynamic action dialog

* adjust wording

* remove player choice message if the trigger message is removed (or regenerated)

* fix issue with dialogue cleanup where narration over multiple lines would end up being marked incorrectly

* dynamic action generation custom instructions
dynamic action generation narration for sensory actions

* fix actions when acting as another character

* 0.28.0

* conversation agent: split out generation settings, add actor instructions extension, add actor instruction offset slider

* prompt tweaks

* fix ai message regenerate if generated from choice

* cruft

* layered history implementation through summarizer
summarization tweaks

* show layered history in ux

* layered history fixes and tweaks
conversation actor instruction fixes

* more summarization fixes

* fix missing actor instructions

* prompt tweaks

* prompt tweaks

* force lower case when checking sensory type

* agent modal polish
implement find-natural-scene-termination summarizer action
some summarization tweaks

* integrate find_natural_scene_termination with layered history

* collect all denouements at once

* relock

* fix some issues with screenplay type formatting in conversation agent

* cleanup

* revert layered history summarization to use max_process_tokens instead of using AI to find scene termination, as that process falls apart in layer 1 and higher, where every item is a scene in itself.

* implement ai assisted digging through layered history to answer queries

* dig_layered_history tweaks and improvements

* prompt tweaks

* adjust budget

* adjust budget for RAG context

* layered_history disabled by default

* prompt tweaks to reinforcement updates

* prompt tweaks

* dig layered history - response without function call to be treated as answer

* clarify style keywords to avoid bleeding into the prompt as subject matter

* fix issue with cover image updates

* fix missing dialogue from context history

* fix issue where new scenes wouldn't load

* fix crash with layered summarization

* more context history fixes

* fix assured dialogue message in context history

* prompt tweaks

* tweaks to layered history generation

* prompt tweaks

* conversation agent can dig layered history for extra context

* some fixes to dig layered history

* scene fork adjust layered history

* layered history status indication

* allow configuration of message styles and colors

* fix issue where layered history generate would get stuck on layer 0

* dig layered history default to false

* prompt tweaks

* context investigation messages

* tweaks to context investigation

* context investigation polish of UX and allow specifying trigger

* prompt tweaks

* allow hiding of ci and director messages

* wire ci shortcut buttons

* prompt tweaks

* prompt tweaks

* carry on analysis when digging layered history

* improve quality of generate choices by anchoring to last line in the scene

* update hint message

* prompt tweaks

* change default value for max_process_tokens

* docs

* dig layered history only if there are layers

* always enforce num choices limit

* relock

* typos

* prompt tweaks

* docs for forking a scene

* prompt tweaks

* world editor rubber banding fixes follow up

* layered history cleanup fixes

* gracefully handle malformed dig() call

* handle malformed answer() call

* only generate choices if last content isn't player message

* include more context in autocomplete prompts

* prompt tweaks

* typo

* fix issue where inactive characters could not be deleted

* more character delete bugs

* dig layered history fixes

* discard empty context investigations

* fix issue with autocomplete no longer working in world editor

* prompt tweaks

* support single quotes

* prompt tweaks

* fix issue with context investigation if final message was narrator text

* Include the query in the context investigation message

* context investigations should note when historic events occurred

* instructions on how to use internal notes

* time_diff returns an empty string if no time is supplied

* prompt tweaks

* fix date calculations for historic entries

* change default values

* prompt tweaks

* fix history regenerate continuing through page reload

* reorganize websocket tasks

* allow cancelling of history regenerate

* Capitalize first letter of summarization

* include base layer in context investigations

* prompt tweaks

* fix issue where context investigations would expand too much of the history at once

* attempt to determine character knowledge during context investigation

* prompt tweaks

* prompt tweaks

* fix missing timestamps

* more context during layer history digging

* fix issue with act-as not being able to select past the first npc if a scene had more than one active npc in it

* docs

* error handling for malformed answer call

* timestamp calculation fixes and summarization improvements

* lock message manipulation while the ux is busy

* prompt tweaks

* toggling 'log debug messages' will log all messages to console even if no filter is specified

* layered history generation cancellable from ux

* prevent loading scene while another scene is currently loading

* improvements to choice generation prompt and error handling

* prompt tweaks

* prompt tweaks

* prompt tweaks

* fix issue with successive scene load not working

* correctly display timestamps and generated layers during history regen

* summarization improvements

* clean up context investigation prompt

* prompt tweaks

* increase response token size for dig_layered_history

* define missing presets

* missing preset

* prompt tweaks

* fix simulation suite

* attach punkt download to backend start, not frontend start

* dig layered history fixes

* prompt tweaks

* fix summarize_and_pin

* more fixes for time calculations

* relock

* prompt tweaks

* remove dupe entry from layered history

* bash version of update script

* prompt tweaks

* layered history defaults to enabled

* default decreased to 0.3 chance

* fix multi character natural flow selection with clients that don't support LLM coercion

* fix simulation suite call to change a character

* typo

* remove deprecated test

* use python3

* add missing 4o models

* add proper configs for 4o models

* prompt tweaks

* update reinforcement prompt ignores context investigations

* scene.snapshot formatting and dig_layered_history ignores reinforcements

* use end date instead of start date

* Reword 'Moments ago' to 'Recently' as it is more forgiving and applicable to longer time ranges

* fix time calculation issues during summarization of new entries

* no need for scoping

* don't display as range if start and end of entry are identical

* prompt tweaks
vegu-ai-tools authored Nov 24, 2024
1 parent bb1cf69 commit 8025601
Showing 108 changed files with 5,695 additions and 2,481 deletions.
2 changes: 1 addition & 1 deletion docs/dev/index.md
@@ -1,3 +1,3 @@
# Coning soon
# Coming soon

Developer documentation is coming soon. Stay tuned!
Binary file added docs/img/0.28.0/conversation-general-settings.png
Binary file added docs/img/0.28.0/director-general-settings.png
Binary file added docs/img/0.28.0/quick-action-generated-text.png
Binary file added docs/img/0.28.0/quick-action.png
Binary file added docs/img/0.28.0/summarizer-general-settings.png
80 changes: 58 additions & 22 deletions docs/user-guide/agents/conversation/settings.md
@@ -1,6 +1,8 @@
# Settings

![Conversation agent settings](/talemate/img/0.26.0/conversation-agent-settings.png)
## General

![Conversation agent general settings](/talemate/img/0.28.0/conversation-general-settings.png)

!!! note "Inference parameters"
Inference parameters are NOT configured through any individual agent.
@@ -11,9 +13,37 @@

The text-generation client to use for conversation generation.

##### Generation settings
##### Auto Break Repetition

If checked and talemate detects a repetitive response (based on a threshold), it will automatically re-generate the response with increased randomness parameters.

##### Natural Flow

When there are multiple characters in the scene, this will help the AI to keep the conversation flowing naturally, making sure turns are somewhat evenly distributed, and also checking that the most relevant character gets the next turn, based on the context.

##### Max. Auto turns

Maximum turns the AI gets in succession, before the player gets a turn no matter what.

##### Max. Idle turns

The maximum number of turns a character can go without speaking before the AI will force them to speak.

##### Long Term Memory

If checked, relevant information will be injected into the context, selected by relevancy through the [Memory Agent](/talemate/user-guide/agents/memory).

##### Context Retrieval Method

What method to use for long term memory selection

- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.
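The trade-off between the three methods can be sketched as a simple dispatch. This is purely illustrative — the method keys, function name, and stubbed LLM calls below are assumptions, not Talemate's actual API:

```python
def build_memory_queries(method: str, recent_messages: list[str]) -> list[str]:
    """Return retrieval queries for long term memory, per retrieval method."""
    if method == "recent-context":
        # Fastest: the last 3 scene messages are used verbatim as queries.
        return recent_messages[-3:]
    if method == "ai-queries":
        # Slower: the LLM writes queries about the scene (stubbed as a template).
        return [f"What context is relevant to: {m}" for m in recent_messages[-3:]]
    if method == "ai-qa":
        # Slowest: the LLM compiles question/answer pairs (stubbed likewise).
        return [f"Q&A about: {m}" for m in recent_messages[-1:]]
    raise ValueError(f"unknown retrieval method: {method}")
```

Whichever method is selected, the resulting queries are what the memory agent matches against its store.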

Checkbox that exposes further settings to configure the conversation agent generation.
## Generation

![Conversation agent generation settings](/talemate/img/0.28.0/conversation-generation-settings.png)

##### Format

@@ -30,39 +60,45 @@ Visually this will make no difference to what you see, it may however affect how

The maximum length of the generated dialogue. (tokens)

##### Instructions

Extra instructions for the generation. This should be short and generic as it will be applied for all characters.

##### Jiggle

The amount of randomness to apply to the generation. This can help to avoid repetitive responses.

##### Auto Break Repetition
##### Task Instructions

If checked and talemate detects a repetitive response (based on a threshold), it will automatically re-generate the response with increased randomness parameters.
Extra instructions for the generation. This should be short and generic as it will be applied for all characters. This will be appended to the existing task instructions in the conversation prompt BEFORE the conversation history.

##### Natural Flow
##### Actor Instructions

When there are multiple characters in the scene, this will help the AI to keep the conversation flowing naturally, making sure turns are somewhat evenly distributed, and also checking that the most relevant character gets the next turn, based on the context.
General, broad instructions for ALL actors in the scene. This will be appended to the existing actor instructions in the conversation prompt AFTER the conversation history.

##### Max. Auto turns
##### Actor Instructions Offset

Maximum turns the AI gets in succession, before the player gets a turn no matter what.
If > 0, this will offset the instructions for the actor (both broad and character-specific) into the history by that many turns. Some LLMs struggle to generate coherent continuations if the scene is interrupted by instructions right before the AI is asked to generate dialogue. This allows you to shift the instructions backwards.
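As a rough illustration of what such an offset might do (the function name and the flat message-list representation are assumptions, not Talemate internals):

```python
def apply_instruction_offset(history: list[str], instructions: str, offset: int) -> list[str]:
    """Insert actor instructions `offset` turns back into the message history,
    so they do not sit directly before the generation point."""
    messages = list(history)
    # Clamp so the insertion never falls before the start of the history.
    position = max(0, len(messages) - offset)
    messages.insert(position, instructions)
    return messages
```

With an offset of 0 the instructions land right before generation; larger offsets push them earlier into the conversation.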

##### Max. Idle turns
## Context Investigation

The maximum number of turns a character can go without speaking before the AI will force them to speak.
A new :material-flask: experimental feature introduced in `0.28.0` alongside the [layered history summarization](/talemate/user-guide/agents/summarizer/settings#layered-history).

##### Long Term Memory
If enabled, the AI will investigate the history for relevant information to include in the conversation prompt. Investigation works by digging through the various layers of the history, and extracting relevant information based on the final message in the scene.

If checked, relevant information will be injected into the context, selected by relevancy through the [Memory Agent](/talemate/user-guide/agents/memory).
This can be **very slow** depending on how many layers are enabled and generated. It can lead to a great improvement in the quality of the generated dialogue, but it is currently still a mixed bag. A strong LLM is almost a hard requirement for it to produce anything useful. 22B+ models are recommended.

##### Context Retrieval Method
![Conversation agent context investigation settings](/talemate/img/0.28.0/conversation-context-investigation-settings.png)

What method to use for long term memory selection
!!! note "Tips"
- This is experimental and results WILL vary in quality.
- Requires a strong LLM. 22B+ models are recommended.
- Good, clean summarization of the history is a hard requirement for this to work well. Regenerate your history if it's messy. (World Editor -> History -> Regenerate)

- `Context queries based on recent context` - will take the last 3 messages in the scene and select relevant context from them. This is the fastest method, but may not always be the most relevant.
- `Context queries generated by AI` - will generate a set of context queries based on the current scene and select relevant context from them. This is slower, but may be more relevant.
- `AI compiled questions and answers` - will use the AI to generate a set of questions and answers based on the current scene and select relevant context from them. This is the slowest, and not necessarily better than the other methods.
##### Enable context investigation

Enable or disable the context investigation feature.

##### Trigger

Allows you to specify when the context investigation should be triggered.

- Agent decides - the AI will decide when to trigger the context investigation based on the scene.
- Only when a question is asked - the AI will only trigger the context investigation when a question is asked.
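A minimal sketch of how such a trigger gate could work — the trigger keys and the stubbed "agent decides" vote are illustrative assumptions, not Talemate's actual code:

```python
def should_investigate(trigger: str, last_message: str, agent_vote: bool) -> bool:
    """Decide whether to run a context investigation for the latest message."""
    if trigger == "question":
        # Only investigate when the final message asks a question.
        return last_message.rstrip().endswith("?")
    if trigger == "agent":
        # The AI's own judgement (produced elsewhere) decides.
        return agent_vote
    return False
```

Restricting the trigger to questions is the cheaper option, since investigations dig through every history layer.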

36 changes: 34 additions & 2 deletions docs/user-guide/agents/director/settings.md
@@ -1,6 +1,8 @@
# Settings

![Director agent settings](/talemate/img/0.26.0/director-agent-settings.png)
## General

![Director agent settings](/talemate/img/0.28.0/director-general-settings.png)

##### Direct

@@ -31,4 +33,34 @@ When an actor is given a direction, how is it to be injected into the context

If `Direction` is selected, the actor will be given the direction as a direct instruction, by the director.

If `Inner Monologue` is selected, the actor will be given the direction as a thought.

## Dynamic Actions

Dynamic actions are introduced in `0.28.0` and allow the director to generate a set of clickable choices for the player to choose from.

![Director agent dynamic actions settings](/talemate/img/0.28.0/director-dynamic-actions-settings.png)

##### Enable Dynamic Actions

If enabled, the director will generate a set of clickable choices for the player to choose from.

##### Chance

The chance that the director will generate a set of dynamic actions when it's the player's turn.

This ranges from `0` to `1`. `0` means the director will never generate dynamic actions, `1` means the director will always generate dynamic actions.
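In other words, the setting acts as a simple probability gate on the player's turn — roughly like this hypothetical sketch:

```python
import random

def should_generate_actions(chance: float, rng: random.Random) -> bool:
    """Roll once per player turn; rng.random() is uniform in [0.0, 1.0),
    so chance=0 never fires and chance=1 always fires."""
    return rng.random() < chance
```

A `chance` of `0.3` (the default mentioned in the commit log) would therefore produce actions on roughly 3 out of 10 player turns.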

##### Number of Actions

The number of actions to generate.

##### Never auto progress on action selection

If this is checked and you pick an action, the scene will NOT automatically pass the turn to the next actor.

##### Instructions

Allows you to provide extra instructions to the director on how to generate the dynamic actions.

For example you could provide a list of actions to choose from, or a list of actions to avoid. Or specify that you always want a certain action to be included.
41 changes: 39 additions & 2 deletions docs/user-guide/agents/summarizer/settings.md
@@ -1,6 +1,10 @@
# Settings

![Summarizer agent settings](/talemate/img/0.26.0/summarizer-agent-settings.png)
## General

General summarization settings.

![Summarizer agent general settings](/talemate/img/0.28.0/summarizer-general-settings.png)

##### Summarize to long term memory archive

@@ -21,4 +25,37 @@ The method used to summarize the scene dialogue.

###### Use preceding summaries to strengthen context

Help the AI summarize by including the last few summaries as additional context. Some models may incorporate this context into the new summary directly, so if you find yourself with a bunch of similar history entries, try setting this to 0.

## Layered History

Settings for the layered history summarization.

Talemate `0.28.0` introduces a new feature called layered history summarization. This feature allows the AI to summarize the scene dialogue in layers, with each layer providing a different level of detail.

Not only does this allow us to keep more context in the history, albeit with earlier layers containing less detail, but it also allows us to do history investigations to extract relevant information from the history during conversation and narration prompts.

Right now this is considered an experimental feature, and whether or not it's feasible in the long term will depend on how well it works in practice.
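Conceptually, a layer promotes its entries to the next layer once it grows past a token threshold. The sketch below uses word counts to stand in for tokens and a stand-in `summarize` callable; none of these names are Talemate's actual API:

```python
def promote_layers(layers: list[list[str]], threshold: int, summarize) -> None:
    """When a layer's (approximate) token count crosses the threshold,
    summarize its entries into the next layer and clear it."""
    for index, layer in enumerate(layers):
        tokens = sum(len(entry.split()) for entry in layer)
        if tokens >= threshold:
            if index + 1 == len(layers):
                layers.append([])  # create the next layer on demand
            layers[index + 1].append(summarize(layer))
            layer.clear()
```

Each successive layer thus holds progressively coarser summaries of the layer beneath it.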

![Summarizer agent layered history settings](/talemate/img/0.28.0/summarizer-layered-history-settings.png)

##### Enable layered history

Allows you to enable or disable the layered history summarization.

!!! note "Enabling this on big scenes"
If you enable this on a big established scene, the next time the summarization agent runs, it will take a while to process the entire history and generate the layers.

##### Token threshold

The number of tokens in the layer that will trigger the summarization process to the next layer.

##### Maximum number of layers

The maximum number of layers that can be created. Raising this limit past 3 is likely to have diminishing returns. We have observed that usually by layer 3 you are down to single sentences for individual events, making it difficult to summarize further in a meaningful way.

##### Maximum tokens to process

Smaller LLMs may struggle with accurately summarizing long texts. This setting will split the text into chunks and summarize each chunk separately, then stitch them together in the next layer. If you're using a strong LLM (70B+), you can try setting this to be the same as the threshold.

Setting this higher than the token threshold does nothing.
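The chunk-and-stitch behavior can be sketched as follows, approximating tokens with whitespace-split words and taking the summarizer as a stand-in callable (both assumptions for illustration):

```python
def summarize_in_chunks(text: str, max_process_tokens: int, summarize) -> str:
    """Split text into chunks of at most max_process_tokens (approximated
    by words), summarize each chunk separately, then stitch the results."""
    words = text.split()
    chunks = [
        " ".join(words[i : i + max_process_tokens])
        for i in range(0, len(words), max_process_tokens)
    ]
    return " ".join(summarize(chunk) for chunk in chunks)
```

With `max_process_tokens` equal to the threshold, a strong LLM summarizes each layer's worth of text in a single pass.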
2 changes: 1 addition & 1 deletion docs/user-guide/agents/voice/elevenlabs.md
@@ -4,4 +4,4 @@ If you have not configured the ElevenLabs TTS API, the voice agent will show tha

![ElevenLabs API key missing](/talemate/img/0.26.0/voice-agent-missing-api-key.png)

See the [ElevenLabs API setup](/apis/elevenlabs.md) for instructions on how to set up the API key.
See the [ElevenLabs API setup](/talemate/user-guide/apis/elevenlabs/) for instructions on how to set up the API key.
10 changes: 10 additions & 0 deletions docs/user-guide/interacting.md
@@ -34,6 +34,16 @@ Version `0.26` introduces a new `act-as` feature, which allows you to act as ano

![Dialogue input - act as narrator](/talemate/img/0.26.0/interacting-input-act-as-narrator.png)

### Quick action

If you start a message with the `@` character you can have the AI generate the response based on what action you are taking. This is useful if you want to quickly generate a response without having to type out the full action and narration yourself.

![Quick action](/talemate/img/0.28.0/quick-action.png)

![Quick action generated text](/talemate/img/0.28.0/quick-action-generated-text.png)

This functionality was added in version `0.28.0`.
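Conceptually, the `@` prefix just routes the message down a different path before generation — a hypothetical sketch, not Talemate's actual parsing:

```python
def parse_player_input(text: str) -> tuple[str, str]:
    """Classify a player message: '@'-prefixed input is a quick action
    for the AI to expand; anything else is literal action/dialogue."""
    if text.startswith("@"):
        return ("quick_action", text[1:].strip())
    return ("dialogue", text)
```

A quick-action message is then handed to the AI to generate the full action and narration on the player's behalf.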

### Autocomplete

When typing out your action / dialogue, you can hit the `ctrl+enter` key combination to generate an autocompletion of your current text.
8 changes: 7 additions & 1 deletion docs/user-guide/saving.md
@@ -28,4 +28,10 @@ Some scenes start out with a locked save file. This is so that this particular s


!!! info
    Alternatively you can also unlock the save file through the [Scene editor](/talemate/user-guide/world-editor/scene/settings) found in **:material-earth-box: World Editor** :material-arrow-right: **:material-script: Scene** :material-arrow-right: **:material-cogs: Settings**.

## Forking a copy of a scene

You can create a new copy of a scene from any message in the scene by clicking the :material-source-fork: **Fork** button underneath the message.

All progress after the target message will be removed and a new scene will be created with the previous messages.
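Conceptually, forking truncates the message list at the chosen message — a minimal sketch with an illustrative function name:

```python
def fork_scene(messages: list[str], fork_index: int) -> list[str]:
    """Create a forked copy of a scene: keep everything up to and
    including the target message; drop all progress after it."""
    return messages[: fork_index + 1]
```

The original scene is left untouched; the returned copy becomes the new scene file.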
