diff --git a/stable/.buildinfo b/stable/.buildinfo
index 9296fe8b..497b3dcf 100644
--- a/stable/.buildinfo
+++ b/stable/.buildinfo
@@ -1,4 +1,4 @@
# Sphinx build info version 1
# This file hashes the configuration used when building these files. When it is not found, a full rebuild will be done.
-config: 0ed8f9e64e0f14b1eea91837a98334a7
+config: da86be50487b2a1819c228ce4d7c1d9c
tags: 645f666f9bcd5a90fca523b33c5a78b7
diff --git a/stable/.doctrees/commands/run.doctree b/stable/.doctrees/commands/run.doctree
index ef6e2d79..03efa2d2 100644
Binary files a/stable/.doctrees/commands/run.doctree and b/stable/.doctrees/commands/run.doctree differ
diff --git a/stable/.doctrees/environment.pickle b/stable/.doctrees/environment.pickle
index 9623c927..2273b8ff 100644
Binary files a/stable/.doctrees/environment.pickle and b/stable/.doctrees/environment.pickle differ
diff --git a/stable/.doctrees/methoddocs/application.doctree b/stable/.doctrees/methoddocs/application.doctree
index de8e98a8..7bc823b7 100644
Binary files a/stable/.doctrees/methoddocs/application.doctree and b/stable/.doctrees/methoddocs/application.doctree differ
diff --git a/stable/.doctrees/methoddocs/exceptions.doctree b/stable/.doctrees/methoddocs/exceptions.doctree
index b08c4d9d..9bef7cb7 100644
Binary files a/stable/.doctrees/methoddocs/exceptions.doctree and b/stable/.doctrees/methoddocs/exceptions.doctree differ
diff --git a/stable/.doctrees/methoddocs/middlewares.doctree b/stable/.doctrees/methoddocs/middlewares.doctree
index d6fce9ba..5ea03a6e 100644
Binary files a/stable/.doctrees/methoddocs/middlewares.doctree and b/stable/.doctrees/methoddocs/middlewares.doctree differ
diff --git a/stable/.doctrees/methoddocs/runner.doctree b/stable/.doctrees/methoddocs/runner.doctree
index 9426dce1..be40d9b7 100644
Binary files a/stable/.doctrees/methoddocs/runner.doctree and b/stable/.doctrees/methoddocs/runner.doctree differ
diff --git a/stable/.doctrees/methoddocs/subscriptions.doctree b/stable/.doctrees/methoddocs/subscriptions.doctree
index 3cbd8bb3..3bac47e1 100644
Binary files a/stable/.doctrees/methoddocs/subscriptions.doctree and b/stable/.doctrees/methoddocs/subscriptions.doctree differ
diff --git a/stable/.doctrees/methoddocs/utils.doctree b/stable/.doctrees/methoddocs/utils.doctree
index 562bd632..9ed5802b 100644
Binary files a/stable/.doctrees/methoddocs/utils.doctree and b/stable/.doctrees/methoddocs/utils.doctree differ
diff --git a/stable/.doctrees/userguides/development.doctree b/stable/.doctrees/userguides/development.doctree
index 2900b813..dd0974a2 100644
Binary files a/stable/.doctrees/userguides/development.doctree and b/stable/.doctrees/userguides/development.doctree differ
diff --git a/stable/_sources/userguides/development.md.txt b/stable/_sources/userguides/development.md.txt
index 6d3a5305..2ae7a8f7 100644
--- a/stable/_sources/userguides/development.md.txt
+++ b/stable/_sources/userguides/development.md.txt
@@ -61,19 +61,64 @@ Any errors you raise during this function will get captured by the client, and r
## Startup and Shutdown
-If you have heavier resources you want to load during startup, or otherwise perform some data collection prior to starting the bot, you can add a startup function like so:
+### Worker Events
+
+If you have heavier resources you want to load during startup, or want to initialize things like database connections, you can add a worker startup function like so:
```py
-@app.on_startup()
+@app.on_worker_startup()
def handle_on_worker_startup(state):
+ # Connect to DB, set initial state, etc
+ ...
+
+@app.on_worker_shutdown()
+def handle_on_worker_shutdown(state):
+ # cleanup resources, close connections cleanly, etc
...
```
This function comes with a parameter `state` that you can use for storing the results of your startup computation or resources that you have provisioned.
-It's import to note that this is useful for ensuring that your workers (of which there can be multiple) have the resources necessary to properly handle any updates you want to make in your handler functions, such as connecting to the Telegram API, an SQL or NoSQL database connection, or something else.
-The `state` variable is also useful as this gets made available to each handler method so other stateful quantities can be maintained for other uses.
-TODO: Add more information about `state`
+It's important to note that this is useful for ensuring that your workers (of which there can be multiple) have the resources necessary to properly handle any updates you want to make in your handler functions, such as a connection to the Telegram API, an SQL or NoSQL database, or something else. **This function will run on every worker process**.
+
+*New in 0.2.0*: These events moved from `on_startup()` and `on_shutdown()` for clarity.
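+
+For example, here is a minimal sketch of this pattern (an illustration, not part of the original guide), assuming a hypothetical local SQLite file and that `state` accepts arbitrary attribute assignment:
+
+```py
+import sqlite3
+
+@app.on_worker_startup()
+def handle_on_worker_startup(state):
+    # Hypothetical example: open a per-worker SQLite connection and keep it on `state`
+    state.db = sqlite3.connect("bot-data.db")
+
+@app.on_worker_shutdown()
+def handle_on_worker_shutdown(state):
+    # Close the connection that was opened at worker startup
+    state.db.close()
+```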
+
+#### Worker State
+
+The `state` variable is also useful because it can be made available to each handler method, letting you maintain other stateful values between handler calls. Each distributed worker has its own instance of state.
+
+To access the state from a handler, you must annotate `context` as a dependency like so:
+
+```py
+from typing import Annotated
+from taskiq import Context, TaskiqDepends
+
+@app.on_(chain.blocks)
+def block_handler(block, context: Annotated[Context, TaskiqDepends()]):
+ # Access state via context.state
+ ...
+```
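+
+Continuing the hypothetical SQLite sketch above, the handler body could then read the connection stored at worker startup via `context.state.db`.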
+
+### Application Events
+
+You can also add application startup and shutdown handlers that will be **executed once upon every application startup**. This may be useful for things like processing historical events since the application was last shut down, or performing other one-time actions at startup.
+
+```py
+@app.on_startup()
+def handle_on_startup(startup_state):
+ # Process missed events, etc
+ # process_history(start_block=startup_state.last_block_seen)
+ # ...or startup_state.last_block_processed
+ ...
+
+
+@app.on_shutdown()
+def handle_on_shutdown():
+ # Record final state, etc
+ ...
+```
+
+*Changed in 0.2.0*: The behavior of the `@app.on_startup()` decorator and its handler signature have changed. It is now executed only once upon application startup, and worker events have moved to `@app.on_worker_startup()`.
## Running your Application
@@ -101,6 +146,34 @@ If you configure your application to use a signer, and that signer signs anythin
Always test your applications thoroughly before deploying.
```
+### Distributed Execution
+
+Using only the `silverback run ...` command in a default configuration executes everything in one process, and the job queue is completely in-memory with shared state. In some high-volume environments, you may want to deploy your Silverback application in a distributed configuration, using multiple processes to handle messages at a higher rate.
+
+The primary components are the client and workers. The client handles Silverback events (blocks and contract event logs) and creates jobs for the workers to process asynchronously.
+
+For this to work, you must configure a [TaskIQ broker](https://taskiq-python.github.io/guide/architecture-overview.html#broker) capable of distributed processing. For instance, with [`taskiq_redis`](https://github.com/taskiq-python/taskiq-redis) you could do something like this for the client:
+
+```bash
+export SILVERBACK_BROKER_CLASS="taskiq_redis:ListQueueBroker"
+export SILVERBACK_BROKER_URI="redis://127.0.0.1:6379"
+
+silverback run "example:app" \
+ --network :mainnet:alchemy \
+ --runner "silverback.runner:WebsocketRunner"
+```
+
+And then start the worker process with 2 worker subprocesses:
+
+```bash
+export SILVERBACK_BROKER_CLASS="taskiq_redis:ListQueueBroker"
+export SILVERBACK_BROKER_URI="redis://127.0.0.1:6379"
+
+silverback worker -w 2 "example:app"
+```
+
+This will run one client and 2 workers, and all queue data will go through Redis.
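+
+If you don't already have a Redis instance available, one quick way to start one locally (assuming Docker is installed) is:
+
+```bash
+docker run -d -p 6379:6379 redis
+```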
+
## Testing your Application
TODO: Add backtesting mode w/ `silverback test`
diff --git a/stable/commands/run.html b/stable/commands/run.html
index 9d4c2930..577511db 100644
--- a/stable/commands/run.html
+++ b/stable/commands/run.html
(The remaining hunks regenerate the built Sphinx HTML pages (commands/run.html, the methoddocs pages, userguides/development.html, and the index/search pages); their rendered text mirrors the source changes above.)