From 045501f29e3ae687ea732745b87d036abeec3046 Mon Sep 17 00:00:00 2001
From: David Gardner
Date: Wed, 23 Oct 2024 15:59:24 -0700
Subject: [PATCH] Tech writer feedback

---
 docs/source/basics/cpu_only_mode.md | 36 +++++++-------
 docs/source/getting_started.md      | 74 ++++++++++++++++++++---------
 2 files changed, 70 insertions(+), 40 deletions(-)

diff --git a/docs/source/basics/cpu_only_mode.md b/docs/source/basics/cpu_only_mode.md
index 1330febab1..3c52eadf0d 100644
--- a/docs/source/basics/cpu_only_mode.md
+++ b/docs/source/basics/cpu_only_mode.md
@@ -16,13 +16,13 @@ limitations under the License.
-->

# Morpheus CPU-Only Mode
-By default Morpheus is designed to take advantage of the GPU for accelerated processing. However, there are cases where it may be necessary to run Morpheus on a system without access to a GPU. To address this need, Morpheus provides a CPU only execution mode. Many stages within Morpheus require a GPU to run while others can operate in both GPU and CPU execution mode. Attempting to add a GPU only stage to a pipeline that is configured to operate in CPU only mode will result in an error.
+By default, Morpheus is designed to take advantage of the GPU for accelerated processing. However, there are cases where it may be necessary to run Morpheus on a system without access to a GPU. To address this need, Morpheus provides a CPU-only execution mode. Many stages within Morpheus require a GPU to run, while others can operate in both GPU and CPU execution mode. Attempting to add a GPU-only stage to a pipeline that is configured to operate in CPU-only mode will result in an error.

## Execution Modes
-By default Morpheus will run in GPU execution mode. Users have the choice of specifying the execution mode with either the Python API or from the command line.
+By default, Morpheus will run in GPU execution mode. Users have the choice of specifying the execution mode with either the Python API or from the command line.

### Python API
-Execution modes are defined in the `morpheus.config.ExecutionMode` enumeration, which is then specified in the `execution_mode` attribute of the `morpheus.config.Config` object. The following example demonstrates how to set the execution mode of a pipeline to CPU only:
+Execution modes are defined in the `morpheus.config.ExecutionMode` enumeration, which is then specified in the `execution_mode` attribute of the `morpheus.config.Config` object. The following example demonstrates how to set the execution mode of a pipeline to CPU-only:

```python
from morpheus.config import Config
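For reference, the configuration described in the Python API section boils down to the following minimal sketch; it assumes nothing beyond the `Config` object, its `execution_mode` attribute, and the `ExecutionMode` enumeration named in the paragraph above:

```python
from morpheus.config import Config
from morpheus.config import ExecutionMode

config = Config()
# Select CPU-only execution; the default is ExecutionMode.GPU.
config.execution_mode = ExecutionMode.CPU
```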
@@ -47,12 +47,12 @@ morpheus run --use_cpu_only pipeline-other --help
```

#### Example
-The following is a simple command line example of a pipeline that can execute in CPU only mode. To begin ensure that you have fetched the examples dataset by running the following command from the root of the Morpheus repository:
+The following is a simple command line example of a pipeline that can execute in CPU-only mode. To begin, ensure that you have fetched the examples dataset by running the following command from the root of the Morpheus repository:
```bash
./scripts/fetch_data.py fetch examples
```

-Then to run the pipeline run the following command:
+Then, launch the pipeline with the following command:
```bash
morpheus --log_level=INFO \
  run --use_cpu_only pipeline-other \
@@ -64,18 +64,18 @@ morpheus --log_level=INFO \
```

## Designing Stages for CPU Execution
-It is up to the author of each stage to decide which execution modes are supported. Options are: CPU, GPU or both. As mentioned previously the default execution mode is GPU, authors of stages which require a GPU do not need to make any changes to their stage definitions.
+It is up to the author of each stage to decide which execution modes are supported. Options are: CPU, GPU, or both. As mentioned previously, the default execution mode is GPU; authors of stages which require a GPU do not need to make any changes to their stage definitions.

### DataFrames and Tensors
-With the selection of the execution mode implies selection of DataFrame and tensor types. In GPU mode Morpheus will use [cuDF](https://docs.rapids.ai/api/cudf/stable/) DataFrames and tensors are represented as [CuPy](https://cupy.dev/) `ndarray` objects. In CPU mode Morpheus will use [pandas](https://pandas.pydata.org/) DataFrames and [NumPy](https://numpy.org/) `ndarray` objects.
+The selection of the execution mode implies selection of DataFrame and tensor types. In GPU mode, Morpheus will use [cuDF](https://docs.rapids.ai/api/cudf/stable/) DataFrames, and tensors are represented as [CuPy](https://cupy.dev/) `ndarray` objects. In CPU mode, Morpheus will use [pandas](https://pandas.pydata.org/) DataFrames and [NumPy](https://numpy.org/) `ndarray` objects.

|Mode|DataFrame|Tensor|
| -- | ------- | ---- |
|GPU|[cuDF](https://docs.rapids.ai/api/cudf/stable/)|[CuPy](https://cupy.dev/)|
|CPU|[pandas](https://pandas.pydata.org/)|[NumPy](https://numpy.org/)|

-### Stages defined with `@stage` and `@source` decorators
-Both the `@stage` and `@source` decorators have an optional `execution_modes` parameter that accepts a tuple of `morpheus.config.ExecutionMode` values which is used to specify the supported executions mode of the stage.
+### Stages Defined with `@stage` and `@source` Decorators
+Both the `@stage` and `@source` decorators have an optional `execution_modes` parameter that accepts a tuple of `morpheus.config.ExecutionMode` values, which is used to specify the supported execution modes of the stage.

#### CPU-only Source & Stage Examples
```python
import logging
@@ -115,10 +115,10 @@ if __name__ == "__main__":
    main()
```

-#### CPU & GPU Source & Stage Examples
-Supporting both CPU and GPU execution modes requires writing code that can handle both types of DataFrames and `ndarray` objects. In many cases code designed to work with pandas will work with cuDF, and code designed to work with NumPy will work with CuPy without requiring any changes to the code. In some cases however, the API may differ slightly and there is a need to know the payload type, care must be taken not to directly import `cudf` or any other package requiring a GPU when running in CPU mode on a system without a GPU. Morpheus provides some helper methods to assist with this in the {py:mod}`~morpheus.utils.type_utils` module, such as {py:func}`~morpheus.utils.type_utils.is_cudf_type`, {py:func}`~morpheus.utils.type_utils.get_df_class`, and {py:func}`~morpheus.utils.type_utils.get_array_pkg`.
+#### CPU and GPU Source and Stage Examples
+Supporting both CPU and GPU execution modes requires writing code that can handle both types of DataFrames and `ndarray` objects. In many cases, code designed to work with pandas will work with cuDF, and code designed to work with NumPy will work with CuPy, without requiring any changes to the code. However, in some cases, the API may differ slightly and there is a need to know the payload type. Care must be taken not to directly import `cudf` or any other package requiring a GPU when running in CPU mode on a system without a GPU. Morpheus provides some helper methods to assist with this in the {py:mod}`~morpheus.utils.type_utils` module, such as {py:func}`~morpheus.utils.type_utils.is_cudf_type`, {py:func}`~morpheus.utils.type_utils.get_df_class`, and {py:func}`~morpheus.utils.type_utils.get_array_pkg`.
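For illustration, a hypothetical mode-agnostic helper built on these utilities might look like the following sketch. The `log_scale_column` function and its assumption of a numeric column are for illustration only; `is_cudf_type` is the helper described above:

```python
import numpy as np

from morpheus.utils.type_utils import is_cudf_type


def log_scale_column(df, column: str):
    """Apply log1p to a numeric column of either a pandas or cuDF DataFrame."""
    if is_cudf_type(df):
        # Import CuPy only when the payload is cuDF, so this module stays
        # importable in CPU mode on systems without a GPU.
        import cupy as xp
    else:
        xp = np

    # .values is a NumPy array for pandas and a CuPy array for cuDF.
    df[column] = xp.log1p(df[column].values)
    return df
```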
-With a few simple modifications the previous example now supports both CPU and GPU execution modes. The `get_df_class` function is used to determine the DataFrame type to use, and we added a command line flag to switch between the two execution modes.
+With a few simple modifications, the previous example now supports both CPU and GPU execution modes. The `get_df_class` function is used to determine the DataFrame type to use, and we added a command line flag to switch between the two execution modes.

```python
import logging
@@ -177,15 +177,15 @@ if __name__ == "__main__":
    main()
```

-### Source & Stages Classes
-Similar to the `@source` and `@stage` decorators, class based sources and stages can also be defined to advertise which execution modes they support. The base class for all source and stage classes `StageBase` defines a `supported_execution_modes` method for this purpose which can be overridden in a derived class. The method in the base class is defined as:
+### Source and Stage Classes
+Similar to the `@source` and `@stage` decorators, class-based sources and stages can also be defined to advertise which execution modes they support. The base class for all source and stage classes, `StageBase`, defines a `supported_execution_modes` method for this purpose, which can be overridden in a derived class. The method in the base class is defined as:

```python
def supported_execution_modes(self) -> tuple[ExecutionMode]:
    return (ExecutionMode.GPU, )
```

-Stage authors are free to inspect constructor arguments of the stage to determine which execution modes are supported. However for many stages the supported execution modes do not change based upon the constructor arguments. In these cases the {py:class}`~morpheus.pipeline.execution_mode_mixins.GpuAndCpuMixin` and {py:class}`~morpheus.pipeline.execution_mode_mixins.CpuOnlyMixin` mixins can be used to simplify the implementation.
+Stage authors are free to inspect constructor arguments of the stage to determine which execution modes are supported. However, for many stages the supported execution modes do not change based upon the constructor arguments. In these cases, the {py:class}`~morpheus.pipeline.execution_mode_mixins.GpuAndCpuMixin` and {py:class}`~morpheus.pipeline.execution_mode_mixins.CpuOnlyMixin` mixins can be used to simplify the implementation.

Example class definition:
```python
@@ -201,13 +201,13 @@ class PassThruStage(PassThruTypeMixin, GpuAndCpuMixin, SinglePortStage):
```

#### GpuAndCpuMixin
-In the previous decorators example we discussed utilizing various helper methods available in the {py:mod}`~morpheus.utils.type_utils` module to assist in writing code which is able to operate in both CPU and GPU execution modes. To simplify this further the `GpuAndCpuMixin` mixin adds these helper methods to the class. At time of writing they are:
+In the previous decorators example, we discussed utilizing various helper methods available in the {py:mod}`~morpheus.utils.type_utils` module to assist in writing code that is able to operate in both CPU and GPU execution modes. To simplify this further, the `GpuAndCpuMixin` mixin adds these helper methods to the class. At the time of this writing, they are:
- `df_type_str` - Returns either `"cudf"` or `"pandas"`.
- `get_df_pkg` - Returns either the `cudf` or `pandas` module.
- `get_df_class` - Returns either the `cudf.DataFrame` or `pandas.DataFrame` class.
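As a sketch of how these mixin helpers are typically used inside a stage, consider the hypothetical `on_data` method below. The method body, the `summary` frame, and the use of `MessageMeta.count` are illustrative assumptions rather than repository code; only the `get_df_class` helper comes from the list above:

```python
import logging

from morpheus.messages import MessageMeta

logger = logging.getLogger(__name__)


# Hypothetical method on a stage class that inherits from GpuAndCpuMixin:
def on_data(self, message: MessageMeta) -> MessageMeta:
    # get_df_class() returns cudf.DataFrame in GPU mode and pandas.DataFrame
    # in CPU mode, so neither package needs to be imported here directly.
    df_class = self.get_df_class()
    summary = df_class({"rows": [message.count]})  # assumes MessageMeta.count
    logger.debug("Payload summary:\n%s", summary)
    return message
```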
-### Stages with C++ implementations
-C++ stages have the ability to interact with cuDF DataFrames via the [libcudf](https://docs.rapids.ai/api/libcudf/stable/) library, however no such C++ library exists for pandas DataFrames. As a result, any stages which contain both a Python and a C++ implementation, the Python implementation will be used in CPU mode, and the C++ implementation will be used in GPU mode. For these stages, the Python implementation is then free to assume DataFrames are of type `pandas.DataFrame` and tensors are of type `numpy.ndarray`.
+### Stages with C++ Implementations
+C++ stages have the ability to interact with cuDF DataFrames via the [libcudf](https://docs.rapids.ai/api/libcudf/stable/) library; however, no such C++ library exists for pandas DataFrames. As a result, for any stage which contains both a Python and a C++ implementation, the Python implementation will be used in CPU mode, and the C++ implementation will be used in GPU mode. For these stages, the Python implementation is then free to assume DataFrames are of type `pandas.DataFrame` and tensors are of type `numpy.ndarray`.

A stage which contains only a C++ implementation will not be able to run in CPU mode.

diff --git a/docs/source/getting_started.md b/docs/source/getting_started.md
index 55fe26f802..78c9ae7c4d 100644
--- a/docs/source/getting_started.md
+++ b/docs/source/getting_started.md
@@ -36,24 +36,24 @@ More advanced users, or those who are interested in using the latest pre-release

> **Note about Docker:**
>
-> The Morpheus documentation and examples assume that the [Manage Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user) post install step has been performed allowing Docker commands to be executed by a non-root user. This is not strictly necessary so long as the current user has `sudo` privileges to execute Docker commands.
+> The Morpheus documentation and examples assume that the [Manage Docker as a non-root user](https://docs.docker.com/engine/install/linux-postinstall/#manage-docker-as-a-non-root-user) post-installation step has been performed, allowing Docker commands to be executed by a non-root user. This is not strictly necessary as long as the current user has `sudo` privileges to execute Docker commands.

-## Using pre-built Docker containers
+## Using Pre-Built Docker Containers

### Pull the Morpheus Image
1. Go to [https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/containers/morpheus/tags](https://catalog.ngc.nvidia.com/orgs/nvidia/teams/morpheus/containers/morpheus/tags)
-1. Choose a version
+1. Choose a version.
1. Download the selected version, for example for `24.10`:
    ```bash
    docker pull nvcr.io/nvidia/morpheus/morpheus:24.10-runtime
    ```
-1. Optional, many of the examples require NVIDIA Triton Inference Server to be running with the included models. To download the Morpheus Triton Server Models container (ensure that the version number matches that of the Morpheus container you downloaded in the previous step):
+1. Optional: Many of the examples require NVIDIA Triton Inference Server to be running with the included models. To download the Morpheus Triton Server Models container, ensure that the version number matches that of the Morpheus container you downloaded in the previous step, then run:
    ```bash
    docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
    ```
> **Note about Morpheus versions:**
>
-> Morpheus uses Calendar Versioning ([CalVer](https://calver.org/)). For each Morpheus release there will be an image tagged in the form of `YY.MM-runtime` this tag will always refer to the latest point release for that version. In addition to this there will also be at least one point release version tagged in the form of `vYY.MM.00-runtime` this will be the initial point release for that version (ex. `v24.10.00-runtime`). In the event of a major bug, we may release additional point releases (ex. `v24.10.01-runtime`, `v24.10.02-runtime` etc...), and the `YY.MM-runtime` tag will be updated to reference that point release.
+> Morpheus uses Calendar Versioning ([CalVer](https://calver.org/)). For each Morpheus release there will be an image tagged in the form of `YY.MM-runtime`. This tag will always refer to the latest point release for that version. In addition, there will also be at least one point release version tagged in the form of `vYY.MM.00-runtime`. This will be the initial point release for that version (e.g., `v24.10.00-runtime`). In the event of a major bug, we may release additional point releases (e.g., `v24.10.01-runtime`, `v24.10.02-runtime`, etc.), and the `YY.MM-runtime` tag will be updated to reference that point release.
>
> Users who want to ensure they are running with the latest bug fixes should use a release image tag (`YY.MM-runtime`). Users who need to deploy a specific version into production should use a point release image tag (`vYY.MM.00-runtime`).

@@ -67,10 +67,10 @@ docker run --rm -ti --runtime=nvidia --gpus=all --net=host -v /var/run/docker.so
Note about some of the flags above:
| Flag | Description |
| ---- | ----------- |
-| `--runtime=nvidia` | Choose the NVIDIA docker runtime, this enables access to the GPU inside the container. This flag isn't needed if the `nvidia` runtime is already set as the default runtime for Docker. |
+| `--runtime=nvidia` | Choose the NVIDIA docker runtime. This enables access to the GPU inside the container. This flag isn't needed if the `nvidia` runtime is already set as the default runtime for Docker. |
| `--gpus=all` | Specify which GPUs the container has access to. Alternately, a specific GPU could be chosen with `--gpus=` |
-| `--net=host` | Most of the Morpheus pipelines utilize [NVIDIA Triton Inference Server](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver), which will be running in another container. For simplicity we will give the container access to the host system's network, production deployments may opt for an explicit network configuration. |
-| `-v /var/run/docker.sock:/var/run/docker.sock` | Enables access to the Docker socket file from within the running container, this allows launching other Docker containers from within the Morpheus container. This flag is required for launching Triton with access to the included Morpheus models, users with their own models can omit this. |
+| `--net=host` | Most of the Morpheus pipelines utilize [NVIDIA Triton Inference Server](https://catalog.ngc.nvidia.com/orgs/nvidia/containers/tritonserver), which will be running in another container. For simplicity, we will give the container access to the host system's network; production deployments may opt for an explicit network configuration. |
+| `-v /var/run/docker.sock:/var/run/docker.sock` | Enables access to the Docker socket file from within the running container. This allows launching other Docker containers from within the Morpheus container. This flag is required for launching Triton with access to the included Morpheus models. Users with their own models can omit this. |
Once launched, users wishing to launch Triton using the included Morpheus models will need to install the Docker tools in the Morpheus container by running:
```bash
@@ -81,7 +81,7 @@ Skip ahead to the [Acquiring the Morpheus Models Container](#acquiring-the-morph

## Using Morpheus Conda Packages
The Morpheus stages are available as libraries that are hosted on the [`nvidia`](https://anaconda.org/nvidia) Conda channel. The Morpheus Conda packages are:
-[`morpheus-core`](https://anaconda.org/nvidia/morpheus-core), [`morpheus-dfp`](https://anaconda.org/nvidia/morpheus-dfp) and [`morpheus-llm`](https://anaconda.org/nvidia/morpheus-llm)
+[`morpheus-core`](https://anaconda.org/nvidia/morpheus-core), [`morpheus-dfp`](https://anaconda.org/nvidia/morpheus-dfp) and [`morpheus-llm`](https://anaconda.org/nvidia/morpheus-llm).

For details on these libraries and how to use them, refer to the [Morpheus Conda Packages](./conda_packages.md) guide.

@@ -139,7 +139,7 @@ To assist in building the Morpheus container, several scripts have been provided
./docker/build_container_release.sh
```

-By default this will create an image named `nvcr.io/nvidia/morpheus/morpheus:${MORPHEUS_VERSION}-runtime` where `$MORPHEUS_VERSION` is replaced by the output of `git describe --tags --abbrev=0`. You can specify a different Docker image name and tag by passing the script the `DOCKER_IMAGE_NAME`, and `DOCKER_IMAGE_TAG` environment variables respectively.
+By default, this will create an image named `nvcr.io/nvidia/morpheus/morpheus:${MORPHEUS_VERSION}-runtime` where `$MORPHEUS_VERSION` is replaced by the output of `git describe --tags --abbrev=0`. You can specify a different Docker image name and tag by passing the script the `DOCKER_IMAGE_NAME` and `DOCKER_IMAGE_TAG` environment variables, respectively.

To run the built "release" container, use the following:

```bash
./docker/run_container_release.sh
```

-The `./docker/run_container_release.sh` script accepts the same `DOCKER_IMAGE_NAME`, and `DOCKER_IMAGE_TAG` environment variables that the `./docker/build_container_release.sh` script does. For example, to run version `v24.10.00` use the following:
+The `./docker/run_container_release.sh` script accepts the same `DOCKER_IMAGE_NAME` and `DOCKER_IMAGE_TAG` environment variables that the `./docker/build_container_release.sh` script does. For example, to run version `v24.10.00` use the following:

```bash
DOCKER_IMAGE_TAG="v24.10.00-runtime" ./docker/run_container_release.sh
```

## Acquiring the Morpheus Models Container

-Many of the validation tests and example workflows require a Triton server to function. For simplicity Morpheus provides a pre-built models container which contains both Triton and the Morpheus models. Users using a release version of Morpheus can download the corresponding Triton models container from NGC with the following command:
+Many of the validation tests and example workflows require a Triton server to function. For simplicity, Morpheus provides a pre-built models container, which contains both Triton and the Morpheus models. Users running a release version of Morpheus can download the corresponding Triton models container from NGC with the following command:

```bash
docker pull nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10
```
-Users working with an unreleased development version of Morpheus can build the Triton models container from the Morpheus repository. To build the Triton models container, from the root of the Morpheus repository run the following command:
+Users working with an unreleased development version of Morpheus can build the Triton models container from the Morpheus repository. To build the Triton models container, run the following command from the root of the Morpheus repository:

```bash
models/docker/build_container.sh
```

## Launching Triton Server

-In a new terminal use the following command to launch a Docker container for Triton loading all of the included pre-trained models:
+In a new terminal, use the following command to launch a Docker container for Triton loading all of the included pre-trained models:
```bash
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 \
  tritonserver --model-repository=/models/triton-model-repo \
    --exit-on-error=false \
    --log-info=true \
    --strict-readiness=false \
    --disable-auto-complete-config
```

-This will launch Triton using the default network ports (8000 for HTTP, 8001 for GRPC, and 8002 for metrics), loading all of the examples models in the Morpheus repo.
+This will launch Triton using the default network ports (8000 for HTTP, 8001 for gRPC, and 8002 for metrics), loading all of the example models in the Morpheus repo.
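To confirm that the server came up, Triton's standard KServe health endpoint can be polled from Python. This snippet is illustrative and assumes Triton is reachable on the default HTTP port 8000 of the local host:

```python
import urllib.error
import urllib.request

# Triton exposes a standard readiness endpoint on its HTTP port.
READY_URL = "http://localhost:8000/v2/health/ready"

try:
    with urllib.request.urlopen(READY_URL, timeout=5) as resp:
        # Triton returns HTTP 200 once the server is ready to serve requests.
        print("Triton ready:", resp.status == 200)
except urllib.error.URLError as err:
    print("Triton not ready yet:", err)
```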
-Note: The above command is useful for testing out Morpheus, however it does load several models into GPU memory, which at time of writing consumes roughly 2GB of GPU memory. Production users should consider only loading the specific models they plan on using with the `--model-control-mode=explicit` and `--load-model` flags. For example to launch Triton only loading the `abp-nvsmi-xgb` model:
+Note: The above command is useful for testing out Morpheus; however, it does load several models into GPU memory, which at the time of this writing consumes roughly 2GB of GPU memory. Production users should consider only loading the specific models they plan on using with the `--model-control-mode=explicit` and `--load-model` flags. For example, to launch Triton only loading the `abp-nvsmi-xgb` model:
```bash
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
  nvcr.io/nvidia/morpheus/morpheus-tritonserver-models:24.10 \
@@ -193,7 +193,7 @@ docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
  --load-model abp-nvsmi-xgb
```

-Alternately, for users who have checked out the Morpheus git repository, launching the Triton server container directly mounting the models from the repository is an option. This approach is most useful for users training their own models. From the root of the Morpheus repo, use the following command to launch a Docker container for Triton loading all of the included pre-trained models:
+Alternately, users who have checked out the Morpheus git repository can launch the Triton server container directly, mounting the models from the repository. This approach is most useful for users training their own models. From the root of the Morpheus repo, use the following command to launch a Docker container for Triton loading all of the included pre-trained models:
```bash
docker run --rm -ti --gpus=all -p8000:8000 -p8001:8001 -p8002:8002 \
  -v $PWD/models:/models \
@@ -215,7 +215,7 @@ For full example pipelines using both the Python API and command line interface,

### Morpheus Python Interface

-The Morpheus Python interface allows users to configure their pipelines using a Python script file. This is ideal for users who are working in a Jupyter Notebook, and users who need complex initialization logic. Documentation on using both the Morpheus Python & C++ APIs can be found in the [Morpheus Developer Guide](./developer_guide/guides.md).
+The Morpheus Python interface allows users to configure their pipelines using a Python script file. This is ideal for users who are working in a Jupyter Notebook, and users who need complex initialization logic. Documentation on using both the Morpheus Python and C++ APIs can be found in the [Morpheus Developer Guide](./developer_guide/guides.md).
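As a taste of the Python interface, the sketch below mirrors the simple CPU-capable pipeline shown earlier on the command line (file source → deserialize → serialize → file sink). Treat it as illustrative rather than a verbatim repository example; the stage classes and import paths are assumed from the standard Morpheus API:

```python
from morpheus.config import Config
from morpheus.pipeline.linear_pipeline import LinearPipeline
from morpheus.stages.input.file_source_stage import FileSourceStage
from morpheus.stages.output.write_to_file_stage import WriteToFileStage
from morpheus.stages.postprocess.serialize_stage import SerializeStage
from morpheus.stages.preprocess.deserialize_stage import DeserializeStage

config = Config()

# Linear pipeline equivalent of:
#   morpheus run pipeline-other from-file ... deserialize serialize to-file ...
pipeline = LinearPipeline(config)
pipeline.set_source(FileSourceStage(config, filename="examples/data/email_with_addresses.jsonl"))
pipeline.add_stage(DeserializeStage(config))
pipeline.add_stage(SerializeStage(config))
pipeline.add_stage(WriteToFileStage(config, filename=".tmp/output.jsonl", overwrite=True))
pipeline.run()
```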
### Morpheus Command Line Interface (CLI)

@@ -243,7 +243,7 @@ Commands:
  tools     Run a utility tool
```

-Each command in the CLI has its own help information. Use `morpheus [command] [...sub-command] --help` to get instructions for each command and sub command. For example:
+Each command in the CLI has its own help information. Use `morpheus [command] [...sub-command] --help` to get instructions for each command and sub-command. For example:

```bash
$ morpheus run pipeline-nlp inf-triton --help
@@ -295,7 +295,7 @@ Added stage: . Accepted input types: (,)

-This indicates that the `to-file` stage cannot accept the input type of `morpheus.messages.ControlMessage`. This is because the `to-file` stage has no idea how to write that class to a file; it only knows how to write messages of type `morpheus.messages.message_meta.MessageMeta`. To ensure you have a valid pipeline, examine at the `Accepted input types: (<class 'morpheus.messages.message_meta.MessageMeta'>,)` portion of the error message. This indicates you need a stage that converts from the output type of the `deserialize` stage, `morpheus.messages.ControlMessage`, to `morpheus.messages.message_meta.MessageMeta`, which is exactly what the `serialize` stage does.
+This indicates that the `to-file` stage cannot accept the input type of `morpheus.messages.ControlMessage`. This is because the `to-file` stage does not know how to write that class to a file; it only knows how to write messages of type `morpheus.messages.message_meta.MessageMeta`. To ensure you have a valid pipeline, examine the `Accepted input types: (<class 'morpheus.messages.message_meta.MessageMeta'>,)` portion of the error message. This indicates you need a stage that converts from the output type of the `deserialize` stage, `morpheus.messages.ControlMessage`, to `morpheus.messages.message_meta.MessageMeta`, which is exactly what the `serialize` stage does.

#### Pipeline Stages

@@ -375,7 +375,37 @@ Commands:
  trigger     Buffer data until the previous stage has completed.
  validate    Validate pipeline output for testing.
```
-Note: The available commands for different types of pipelines are not the same. This means that the same stage, when used in different pipelines, may have different options. Check the CLI help for the most up-to-date information during development.
+
+And for the AE pipeline:
+
+```
+$ morpheus run pipeline-ae --help
+Usage: morpheus run pipeline-ae [OPTIONS] COMMAND1 [ARGS]... [COMMAND2 [ARGS]...]...
+
+
+Commands:
+  add-class        Add detected classifications to each message.
+  add-scores       Add probability scores to each message.
+  buffer           (Deprecated) Buffer results.
+  delay            (Deprecated) Delay results for a certain duration.
+  filter           Filter message by a classification threshold.
+  from-azure       Source stage is used to load Azure Active Directory messages.
+  from-cloudtrail  Load messages from a CloudTrail directory.
+  from-duo         Source stage is used to load Duo Authentication messages.
+  inf-pytorch      Perform inference with PyTorch.
+  inf-triton       Perform inference with Triton Inference Server.
+  monitor          Display throughput numbers at a specific point in the pipeline.
+  preprocess       Prepare Autoencoder input DataFrames for inference.
+  serialize        Includes & excludes columns from messages.
+  timeseries       Perform time series anomaly detection and add prediction.
+  to-file          Write all messages to a file.
+  to-kafka         Write all messages to a Kafka cluster.
+  train-ae         Train an Autoencoder model on incoming data.
+  trigger          Buffer data until the previous stage has completed.
+  validate         Validate pipeline output for testing.
+```
+> **Note**: The available commands for different types of pipelines are not the same. This means that the same stage may have different options when used in different pipelines. Check the CLI help for the most up-to-date information during development.

## Next Steps
* [Morpheus Examples](./examples.md) - Example pipelines using both the Python API and command line interface