This is the documentation of the configuration settings which can be overridden using a custom YAML file. All defaults are defined in ../modern-data-platform-stack/generator-config/vars/config.yml.
There are some overall settings which will control the behaviour for all or a group of services. These are listed in the table below.
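A custom YAML file only needs to contain the settings you want to change; everything not listed keeps the default from the file referenced above. A minimal sketch, assuming the custom file is the config.yml in the platform home that the generator reads (the values are placeholders):

```yaml
# custom config.yml - only overrides, all other settings keep their defaults
use_timezone: 'Europe/Zurich'
private_docker_repository_name: 'my-registry'   # placeholder registry name
```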
Config | Default | Since | Description |
---|---|---|---|
use_timezone | | 1.5.0 | The timezone to use for the whole stack. By default it is empty, so the timezone of the Docker engine is not changed and it runs as Etc/UTC . If you want to set it to another timezone, specify a Unix timezone string, such as Europe/Zurich or America/New_York . An overview of the valid timezones can be found here: https://en.wikipedia.org/wiki/List_of_tz_database_time_zones |
logging_driver | json-file | 1.17.0 | Docker logging driver to use; currently the following ones are supported: json-file , fluentd , loki , syslog or splunk . |
loggin_fluentd_address | fluentd:24224 | 1.17.0 | The socket address to connect to the Fluentd daemon, only applicable if logging_driver is set to fluentd . |
logging_syslog_address | udp://syslog:1111 | 1.17.0 | The address of a syslog server, only applicable if logging_driver is set to syslog . |
logging_splunk_url | http://splunk:8000 | 1.17.0 | Path to the Splunk Enterprise instance, self-service Splunk Cloud instance, or Splunk Cloud managed cluster, only applicable if logging_driver is set to splunk . |
logging_splunk_token | | 1.17.0 | Splunk HTTP Event Collector (HEC) token used to authenticate at the Splunk HTTP Event Collector, only applicable if logging_driver is set to splunk . |
private_docker_repository_name | trivadis | 1.5.0 | Docker images not available on the public Docker Hub will be retrieved from this private repository. By default it points to trivadis and you have to login first, before you can use the generated stack, if you have selected a private image. Use this config to point to your own private Docker registry if needed. |
uid | 1000 | 1.9.0 | The UID to use when using the "user" property in a service to override the user inside the container. |
env | ${PLATYS_ENV} | 1.16.0 | Optional environment identifier of this platys instance. By default it is taken from the environment variable (which can be specified in the .env file), but it can be changed to a hardcoded value. Allowed values (taken from DataHub): dev , test , qa , uat , ei , pre , non_prod , prod , corp |
data_centers | dc1,dc2 | 1.14.0 | A comma-separated list of data-center names, used if the property data_center_to_use has a value != 0. |
data_center_to_use | 0 | 1.14.0 | The data-center to use, if multiple DCs should be simulated for a Kafka setup. |
jmx_monitoring_with_prometheus_enable | false | 1.17.0 | Enable JMX monitoring over Prometheus for Kafka and other parts of the Kafka stack (Kafka Connect, ksqlDB, Schema Registry, ...). If enabled, then Prometheus and Grafana are automatically enabled as well. |
copy_cookbook_data_folder | true | 1.14.0 | Copy all the data folders of the various cookbook recipes into the data-transfer/cookbook-data folder. |
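As an illustration of the overall settings above, the logging and monitoring behaviour of the whole stack could be overridden like this (a sketch; the Fluentd address reuses the documented default, and the key names are taken verbatim from the table above):

```yaml
# route all container logs to Fluentd instead of the default json-file driver
logging_driver: 'fluentd'
loggin_fluentd_address: 'fluentd:24224'   # key name as documented above

# run the whole stack in a specific timezone
use_timezone: 'Europe/Zurich'

# turn on JMX monitoring (also enables Prometheus and Grafana automatically)
jmx_monitoring_with_prometheus_enable: true
```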
There are also settings which control the use of external services, i.e. services which are not provisioned as part of the stack but run outside of it. These are listed in the table below.
Config | Default | Since | Description |
---|---|---|---|
KAFKA_enable | false | 1.9.0 | Use an external Kafka service, such as Confluent Cloud. Specify the cluster through the KAFKA_bootstrap_servers property. |
KAFKA_bootstrap_servers | | 1.9.0 | A comma-separated list of host and port pairs that address the external Kafka brokers. |
KAFKA_security_protocol | | 1.17.0 | The security protocol to use to connect to the external Kafka cluster. Either PLAINTEXT , SASL_PLAINTEXT , SASL_SSL or SSL . |
KAFKA_sasl_mechanism | | 1.17.0 | The SASL mechanism to use to connect to the external Kafka cluster. |
KAFKA_login_module | | 1.17.0 | The login module to use to connect to the external Kafka cluster. |
KAFKA_sasl_username | | 1.17.0 | Username to use to authenticate against the external Kafka cluster. Can be left empty and defined using the PLATYS_EXTERNAL_KAFKA_USERNAME environment variable (e.g. in .env ). |
KAFKA_sasl_password | | 1.17.0 | Password to use to authenticate against the external Kafka cluster. Can be left empty and defined using the PLATYS_EXTERNAL_KAFKA_PASSWORD environment variable (e.g. in .env ). |
SCHEMA_REGISTRY_enable | false | 1.9.0 | Use an external schema registry. |
SCHEMA_REGISTRY_url | | 1.9.0 | The URL of the external schema registry. |
S3_enable | false | 1.9.0 | Use an external S3 service, such as the AWS S3 cloud service or an on-premises S3 appliance. You have to configure two environment variables, PLATYS_AWS_ACCESS_KEY with the access key and PLATYS_AWS_SECRET_ACCESS_KEY with the access secret. This can be done on the docker host or in the .env file in the platform home (the same folder where the docker-compose.yml is located). |
S3_endpoint | s3.amazonaws.com | 1.9.0 | The endpoint address of the external S3 service. |
S3_path_style_access | false | 1.9.0 | Use path-style access if set to true , otherwise the default of virtual hosted-style access is used. |
ADLS_enable | false | 1.15.0 | Use an external Azure Data Lake Storage Gen2 service. You have to configure the environment variable PLATYS_AZURE_ADLS_ACCESS_KEY with the access key. This can be done on the docker host or in the .env file in the platform home (the same folder where the docker-compose.yml is located). |
ADLS_storage_account | | 1.15.0 | The name of the storage account for the ADLS service. |
DATAHUB_enable | false | 1.16.0 | Use an external DataHub service. Specify the DataHub GMS service through the DATAHUB_gms_url property. |
DATAHUB_gms_url | | 1.16.0 | The web URL of the external DataHub GMS service instance to connect to. |
OLLAMA_enable | false | 1.17.0 | Use an external Ollama service. Specify the Ollama base URL through the OLLAMA_url property. |
OLLAMA_url | http://${PUBLIC_IP}:11434 | 1.17.0 | The base URL of the Ollama service (in the format http://<host>:<port> ). |
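A sketch for connecting to an external Kafka cluster such as Confluent Cloud, assuming the external service settings are nested under an external key (as suggested by the external.DATAHUB_enable notation used further below); endpoint and credentials are placeholders:

```yaml
external:
  KAFKA_enable: true
  KAFKA_bootstrap_servers: 'pkc-xxxxx.europe-west1.gcp.confluent.cloud:9092'  # placeholder
  KAFKA_security_protocol: 'SASL_SSL'
  KAFKA_sasl_mechanism: 'PLAIN'
  KAFKA_sasl_username: ''   # or set PLATYS_EXTERNAL_KAFKA_USERNAME in .env
  KAFKA_sasl_password: ''   # or set PLATYS_EXTERNAL_KAFKA_PASSWORD in .env

  SCHEMA_REGISTRY_enable: true
  SCHEMA_REGISTRY_url: 'https://psrc-xxxxx.europe-west1.gcp.confluent.cloud'  # placeholder
```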
The configuration settings for enabling/disabling a given service are named XXXXX_enable, where XXXXX is the name of the service (it used to be named XXXXX_enabled in version 1.0.0).
For each service there might be some other settings, such as controlling the number of nodes to start the service with, whether the service should map a data volume into the container or controlling some other proprietary configuration properties of the service itself.
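Putting this together, enabling a service and tuning its per-service settings follows the same pattern for every service; a sketch using some of the Kafka settings documented below:

```yaml
# enable the service ...
KAFKA_enable: true

# ... and tune its service-specific settings
KAFKA_nodes: 3
KAFKA_volume_map_data: true
KAFKA_auto_create_topics_enable: false
```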
Config | Default | Since | Description |
---|---|---|---|
Apache Zookeeper | |||
ZOOKEEPER_enable |
false |
1.0.0 | Apache Zookeeper is a coordination service used by Apache Kafka and Apache Atlas services. It is automatically enabled if using either of the two. |
ZOOKEEPER_nodes |
1 |
1.0.0 | number of Zookeeper nodes |
ZOOKEEPER_navigator_enable |
false |
1.1.0 | Zookeeper Navigator is a UI for managing and viewing zookeeper cluster. |
Apache Kafka | |||
KAFKA_enable |
false |
1.0.0 | Use Confluent Kafka |
KAFKA_edition |
community |
1.2.0 | The Kafka edition to use, one of community or enterprise |
KAFKA_use_kraft_mode |
false |
1.13.0 | use Zookeeper-Less setup of Kafka 2.8.0 (Confluent 6.2) available as a preview. |
KAFKA_volume_map_data |
false |
1.0.0 | Volume map data folder into the Kafka broker |
KAFKA_use_standard_port_for_external_interface |
true |
1.14.0 | Should the standard ports 9092 - 9095 be used for the external interface or for the private interface (docker host)? |
KAFKA_nodes |
3 |
1.0.0 | number of Kafka Broker nodes to use |
KAFKA_internal_replication_factor |
3 |
1.6.0 | the replication factor to use for the Kafka internal topics |
KAFKA_cluster_id |
y4vRIwfDT0SkZ65tD7Ey2A |
1.17.0 | the unique identifier for the Kafka cluster; replace the default value with a unique base64 UUID generated using docker run confluentinc/cp-kafka kafka-storage random-uuid . |
KAFKA_delete_topic_enable |
true |
1.0.0 | allow deletion of Kafka topics |
KAFKA_auto_create_topics_enable |
false |
1.0.0 | allow automatic creation of Kafka topics |
KAFKA_message_timestamp_type |
CreateTime |
1.8.0 | Define whether the timestamp in the message is message create time or log append time. The value should be either CreateTime or LogAppendTime . |
KAFKA_log_dirs |
`` | 1.17.0 | A comma-separated list of the directories where the log data is stored. |
KAFKA_log_segment_bytes |
1073741824 (1 GB) |
1.8.0 | The maximum size of a single log file. |
KAFKA_log_retention_ms |
`` | 1.8.0 | The number of milliseconds to keep a log file before deleting it (in milliseconds), If not set, the value in log.retention.hours is used. If set to -1 , no time limit is applied. |
KAFKA_log_retention_hours |
168 (7 days) |
1.8.0 | The number of hours to keep a log file before deleting it (in hours), tertiary to log.retention.ms property |
KAFKA_log_retention_bytes |
-1 (not used) |
1.8.0 | The maximum size of the log before deleting it. |
KAFKA_compression_type |
producer |
1.8.0 | Specify the final compression type for a given topic. This configuration accepts the standard compression codecs (gzip , snappy , lz4 , zstd ). It additionally accepts uncompressed which is equivalent to no compression; and producer which means retain the original compression codec set by the producer. |
KAFKA_min_insync_replicas |
1 |
1.8.0 | When a producer sets acks to "all" (or "-1"), min.insync.replicas specifies the minimum number of replicas that must acknowledge a write for the write to be considered successful. If this minimum cannot be met, then the producer will raise an exception (either NotEnoughReplicas or NotEnoughReplicasAfterAppend). |
KAFKA_replica_selector_class |
1.8.0 | The fully qualified class name that implements ReplicaSelector. This is used by the broker to find the preferred read replica. By default, an implementation that returns the leader. | |
KAFKA_confluent_log_placement_constraints |
{} |
1.8.0 | This configuration is a JSON object that controls the set of brokers (replicas) which will always be allowed to join the ISR, and the set of brokers (observers) which are not allowed to join the ISR. Only enabled if KAFKA_edition is set to enterprise . Find more information here: https://docs.confluent.io/current/installation/configuration/broker-configs.html. |
KAFKA_confluent_tier_feature |
false |
1.8.0 | enables Tiered Storage for a broker. Setting this to true allows a broker to utilize Tiered Storage |
KAFKA_confluent_cluster_link_enable |
false |
1.17.0 | Enable cluster linking feature. |
KAFKA_confluent_tier_enable |
false |
1.8.0 | Allow tiering for topic(s). This enables tiering and fetching of data to and from the configured remote storage. Setting this to true causes all non-compacted topics to use tiered storage. |
KAFKA_confluent_tier_backend |
S3 |
1.8.0 | refers to the cloud storage service to which a broker will connect, either S3 for Amazon S3 or GCS for Google Cloud Storage. |
KAFKA_confluent_tier_s3_bucket |
kafka-logs |
1.8.0 | the S3 bucket name used for writing and reading tiered data |
KAFKA_confluent_tier_s3_prefix |
`` | 1.8.0 | This prefix will be added to tiered storage objects stored in S3. |
KAFKA_confluent_tier_s3_region |
us-east-1 |
1.8.0 | the S3 region used for writing and reading tiered data |
KAFKA_confluent_tier_s3_aws_endpoint_override |
`` | 1.8.0 | override the endpoint of the S3 storage, if an on-premises storage such as Minio is used. |
KAFKA_confluent_tier_s3_force_path_style_access |
false |
1.13.0 | Configures the client to use path-style access for all requests. This flag is not enabled by default. The default behavior is to detect which access style to use based on the configured endpoint and the bucket being accessed. Setting this flag will result in path-style access being forced for all requests. |
KAFKA_confluent_tier_local_hotset_bytes |
-1 |
1.8.0 | When tiering is enabled, this configuration controls the maximum size a partition (which consists of log segments) can grow to on broker-local storage before we will discard old log segments to free up space. Log segments retained on broker-local storage are referred to as the "hotset". Segments discarded from local store could continue to exist in tiered storage and remain available for fetches depending on retention configurations. By default there is no size limit, only a time limit. Since this limit is enforced at the partition level, multiply it by the number of partitions to compute the topic hotset in bytes. |
KAFKA_confluent_tier_local_hotset_ms |
86400000 (1 day) |
1.8.0 | When tiering is enabled, this configuration controls the maximum time we will retain a log segment on broker-local storage before we will discard it to free up space. Segments discarded from local store could continue to exist in tiered storage and remain available for fetches depending on retention configurations. If set to -1, no time limit is applied. |
KAFKA_confluent_tier_archiver_num_threads |
2 |
1.8.0 | The size of the thread pool used for tiering data to remote storage. This thread pool is also used to garbage collect data in tiered storage that has been deleted. |
KAFKA_confluent_tier_fetcher_num_threads |
4 |
1.8.0 | The size of the thread pool used by the TierFetcher. Roughly corresponds to number of concurrent fetch requests that can be served from tiered storage. |
KAFKA_confluent_tier_topic_delete_check_interval_ms |
10800000 (3 hours) |
1.8.0 | Frequency at which tiered objects cleanup is run for deleted topics. |
KAFKA_confluent_tier_metadata_replication_factor |
1 |
1.8.0 | The replication factor for the tier metadata topic (set higher to ensure availability). |
KAFKA_log4j_root_level |
INFO |
1.14.0 | Change the default log4j logging levels of Kafka broker. |
KAFKA_log4j_loggers |
`` | 1.14.0 | Add new logging levels for a Confluent Platform component, i.e. to override the log levels for Kafka controller and request loggers use kafka.controller=TRACE,kafka.request.logger=WARN . |
KAFKA_tools_log4j_level |
INFO |
1.14.0 | Change the default log4j logging levels of the Kafka tools. |
KAFKA_additional_jars |
`` | 1.17.0 | A comma-separated list of custom JAR's to copy to the server. Specify only the name without the .jar extension. |
KAFKA_security_protocol |
PLAINTEXT |
1.17.0 | Default Protocol used to communicate with brokers. Either PLAINTEXT , SASL_PLAINTEXT (SSL or SASL_SSL not supported yet). Use the KAFKA_<listener>_security_protocol to overwrite the protocol for a given listener. |
KAFKA_<listener>_security_protocol |
`` | 1.17.0 | Overwrite the protocol used to communicate with brokers for a given listener. Replace <listener> with controller , broker , local , dockerhost or external . Either PLAINTEXT , SASL_PLAINTEXT (SSL or SASL_SSL not supported yet). |
KAFKA_sasl_mechanism |
PLAIN |
1.17.0 | Default SASL mechanism used for client connections. Either PLAIN , SCRAM-SHA-256 or SCRAM-SHA-512 (OAUTHBEARER not supported yet). Use the KAFKA_<listener>_sasl_mechanism to overwrite the SASL mechanism for a given listener. |
KAFKA_<listener>_sasl_mechanism |
`` | 1.17.0 | Overwrite the SASL mechanism used to communicate with brokers for a given listener. Replace <listener> with controller , broker , local , dockerhost or external . Either PLAIN , SCRAM-SHA-256 or SCRAM-SHA-512 (OAUTHBEARER not supported yet). |
KAFKA_admin_user |
admin |
1.17.0 | The username of the Kafka admin user. |
KAFKA_admin_password |
admin-secret |
1.17.0 | The password of the Kafka admin user. |
Kafka Init (currently only works with an unsecured cluster) | |||
KAFKA_INIT_enable |
false |
1.17.0 | Use Kafka Init container to automatically create Kafka Topics upon startups |
KAFKA_INIT_topics |
`` | 1.17.0 | A comma separated list of topics to create, using the following syntax [topic name]:partitions=[partitions]:replicas=[replicas]:[key]=[val]:[key]=[val] or an index-based shorthand [topic name]:[partitions]:[replicas] . |
Kafka CLI | |||
KAFKA_CLI_enable |
false |
1.16.0 | Generate Kafka CLI service |
Schema Registry | |||
SCHEMA_REGISTRY_enable |
false |
1.14.0 | Generate Confluent Schema Registry service |
SCHEMA_REGISTRY_flavour |
confluent |
1.14.0 | Which schema registry to use, either confluent or apicurio |
SCHEMA_REGISTRY_nodes |
false |
1.14.0 | number of Confluent Schema Registry nodes |
- Confluent Schema Registry | |||
CONFLUENT_SCHEMA_REGISTRY_enable |
false |
1.14.0 | Generate Confluent Schema Registry service - Deprecated!! will just set the SCHEMA_REGISTRY_enable and the SCHEMA_REGISTRY_flavour to confluent . |
CONFLUENT_SCHEMA_REGISTRY_use_zookeeper_election |
false |
1.14.0 | use Zookeeper for election of "master" Schema Registry node |
CONFLUENT_SCHEMA_REGISTRY_replication_factor |
1 |
1.14.0 | replication factor to use for the _schemas topic |
CONFLUENT_SCHEMA_REGISTRY_leader_eligibility |
true |
1.14.0 | if true , this node can participate in primary election. In a multi-colocated setup, turn this off for clusters in the secondary data center. |
CONFLUENT_SCHEMA_REGISTRY_mode_mutability |
true |
1.14.0 | if true the mode of this Schema Registry node can be changed. |
CONFLUENT_SCHEMA_REGISTRY_schema_compatibility_level |
backward |
1.14.0 | The schema compatibility type. One of none , backward , backward_transitive , forward , forward_transitive , full or full_transitive . |
CONFLUENT_SCHEMA_REGISTRY_log4j_root_loglevel |
info |
1.14.0 | Change the rootLogger loglevel of the Schema Registry. |
CONFLUENT_SCHEMA_REGISTRY_debug |
false |
1.14.0 | Boolean indicating whether extra debugging information is generated in some error response entities. |
- Apicurio Registry | |||
APICURIO_SCHEMA_REGISTRY_storage |
kafkasql |
1.14.0 | The storage type to use, either mem for In-Memory, sql for Postgresql, 'mssql' for SQL Server or kafkasql for Kafka based storage. |
APICURIO_SCHEMA_REGISTRY_sql_storage_database |
apicuriodb |
1.14.0 | The database to use if storage is sql or mssql . |
APICURIO_SCHEMA_REGISTRY_sql_storage_user |
apicurio |
1.14.0 | The user to use if storage is sql or mssql . |
APICURIO_SCHEMA_REGISTRY_sql_storage_password |
abc123! |
1.14.0 | The password to use if storage is sql or mssql . |
APICURIO_auth_enabled |
false |
1.14.0 | Enable authentication using Keycloak? If set to true then KEYCLOAK_enable will be enabled automatically. |
APICURIO_auth_anonymous_read_access_enabled |
false |
1.14.0 | Allow anonymous users (REST API calls with no authentication credentials provided) to make read-only calls to the REST API. |
APICURIO_auth_import_default_users |
false |
1.14.0 | If set to true , imports the following pre-defined users sr-view , sr-dev and sr-admin into the registry realm. |
APICURIO_basic_auth_enabled |
false |
1.14.0 | Enable basic authentication? To set it to true , APICURIO_auth_enable has to be enabled as well and Keycloak configured accordingly. Basic authentication is just the authentication API layer, the authentication will still happen over keycloak. |
APICURIO_eventsourcing_enabled |
false |
1.14.0 | Enable Event Sourcing, i.e. enable the Schema Registry to send events when changes are made to the registry. |
APICURIO_eventsourcing_transport |
kafka |
1.14.0 | The Protocol to use for transporting the events. Either kafka or http are supported. The events are formatted using the CNCF Cloud Events specification. |
APICURIO_eventsourcing_kafka_topic |
registry-events |
1.14.0 | The Kafka topic to use, if APICURIO_eventsourcing_transport is set to kafka . |
APICURIO_eventsourcing_http_endpoint |
registry-events |
1.14.0 | The HTTP endpoint to push the data to, if APICURIO_eventsourcing_transport is set to http . |
Apache Kafka Connect | |||
KAFKA_CONNECT_enable |
false |
1.2.0 | Generate Kafka Connect service |
KAFKA_CONNECT_nodes |
1 |
1.13.0 | number of Kafka Connect nodes |
KAFKA_CONNECT_connectors |
1.6.0 | A comma separated list of components to be installed from Confluent Hub. Specify identifier of the form owner/component:version for the component in Confluent Hub. | |
KAFKA_CONNECT_config_providers |
1.9.0 | A comma-separated list of names for ConfigProviders. Allows to use variables in connector configurations that are dynamically resolved when the connector is (re)started. Can be used for secrets or any other configuration information which should be resolved dynamically at runtime. | |
KAFKA_CONNECT_config_providers_classes |
1.9.0 | The Java class names for the providers listed in the KAFKA_CONNECT_config_providers property. |
|
KAFKA_CONNECT_map_settings_file |
false |
1.9.0 | Map the settings.properties file placed in the $DATAPLATFORM_HOME/conf.override/kafka-connect/ folder into the container. Use it when enabling the FileConfigProvider trough the KAFKA_CONNECT_config_providers and KAFKA_CONNECT_config_providers_classes properties. |
ksqlDB | |||
KSQLDB_enable |
false |
1.2.0 | ksqlDB is streaming SQL on Kafka. If you enable it, then SCHEMA_REGISTRY_enable will be automatically set to true . |
KSQLDB_edition |
oss |
1.15.0 | the edition to use, either oss for the Open Source or cp for the Confluent Platform. |
KSQLDB_nodes |
2 |
1.2.0 | number of ksqlDB nodes |
KSQLDB_suppress_enabled |
false |
1.9.0 | Enable the Suppress functionality which have been added with ksqldb 0.13.0 |
KSQLDB_suppress_buffer_size_bytes |
`` | 1.9.0 | Bound the number of bytes that the buffer can use for suppression. Negative size means the buffer will be unbounded. If the maximum capacity is exceeded, the query will be terminated. |
KSQLDB_query_pull_table_scan_enabled |
false |
1.12.0 | Config to control whether table scans are permitted when executing pull queries. Works with ksqlDB > 0.17.0. |
KSQLDB_response_http_headers_config |
`` | 1.12.0 | Use to select which HTTP headers are returned in the HTTP response for Confluent Platform components. Specify multiple values in a comma-separated string using the format [action][header name]:[header value] where [action] is one of the following: set , add , setDate , or addDate . |
KSQLDB_queries_file |
`` | 1.9.0 | A file that specifies a predefined set of queries for the ksqlDB cluster. |
KSQLDB_use_embedded_connect |
false |
1.13.0 | Enable embedded kafka connect. Place connector jars into ./plugins/kafka-connect or install them from Confluent Hub using the KSQLDB_connect_connectors property. Important: Will only be effective, if KAFKA_CONNECT_enable is set to false . |
KSQLDB_connect_connectors |
`` | 1.13.0 | A comma separated list of components to be installed from Confluent Hub. Specify identifier of the form owner/component:version for the component in Confluent Hub. |
KSQLDB_persistence_default_format_key |
KAFKA |
1.15.0 | Sets the default value for the KEY_FORMAT property if one is not supplied explicitly in CREATE TABLE or CREATE STREAM statements. |
KSQLDB_persistence_default_format_value |
`` | 1.15.0 | Sets the default value for the VALUE_FORMAT property if one is not supplied explicitly in CREATE TABLE or CREATE STREAM statements. |
KSQLDB_log_topic |
ksql_processing_log |
1.16.0 | the name of the processing log Kafka topic. |
Materialize | |||
MATERIALIZE_enable |
false |
1.12.0 | Enable Materialize streaming database for real-time applications. |
HStreamDB | |||
HSTREAMDB_enable |
false |
1.17.0 | Enable HStreamDB streaming database for real-time applications. |
Benthos | |||
BENTHOS_enable |
false |
1.16.0 | Generate Benthos service |
BENTHOS_SERVER_enable |
false |
1.16.0 | Generate Benthos Server with an interactive Bloblang editor. |
RisingWave | |||
RISINGWAVE_enable |
false |
1.17.0 | Generate RisingWave service |
RISINGWAVE_edition |
cluster |
1.16.0 | Specify the RisingWave edition to start, either playground , standalone or cluster . |
Confluent Replicator | |||
KAFKA_REPLICATOR_enable |
false |
1.6.0 | Enable Confluent Replicator (part of Confluent Enterprise Platform). |
Kafka Mirror Maker 2 | |||
KAFKA_MM2_enable |
false |
1.14.0 | Enable Kafka Mirror Maker 2. |
Confluent REST Proxy | |||
KAFKA_RESTPROXY_enable |
false |
1.2.0 | Generate Confluent REST Proxy service |
Confluent MQTT Proxy | |||
KAFKA_MQTTPROXY_enable |
false |
1.2.0 | Generate Confluent MQTT Proxy service |
KAFKA_MQTTPROXY_topic_regex_list |
`` | 1.8.0 | A comma-separated list of pairs of the form <kafka topic>:<regex> that is used to map MQTT topics to Kafka topics. |
Zilla | |||
ZILLA_enable |
false |
1.15.0 | Generate Zilla service |
Lenses Box | |||
LENSES_BOX_enable |
false |
1.14.0 | Generate Lenses Box (Development) service. |
LENSES_BOX_license |
false |
1.14.0 | Set the end-user-license string you have gotten from http://lenses.io by email. |
kcat (used to be kafkacat) | |||
KCAT_enable |
false |
1.13.0 | Generate kcat CLI service |
kaskade | |||
KASKADE_enable |
false |
1.16.0 | Generate kaskade CLI service |
kafkactl | |||
KAFKACTL_enable |
false |
1.15.0 | Generate kafkactl CLI service |
jikkou | |||
JIKKOU_enable |
false |
1.14.0 | Generate Jikkou service |
JIKKOU_exclude_resources_regexp |
`` | 1.14.0 | Use the exclude option to specify regex patterns for excluding resources when running the Jikkou tool. |
JIKKOU_include_resources_regexp |
`` | 1.14.0 | Use the include option to specify regex patterns for including resources when running the Jikkou tool. |
JIKKOU_set_labels |
`` | 1.14.0 | A comma separated list of key=value pairs, one for each label to set. |
JIKKOU_set_values |
`` | 1.14.0 | A comma separated list of key=value pairs, one for each value to set. |
JIKKOU_kafka_brokers_wait_for_enabled |
true |
1.16.0 | Wait for kafka brokers to be available. |
JIKKOU_kafka_brokers_wait_for_min_available_enabled |
true |
1.17.0 | Wait for the total number of Kafka brokers (value of KAFKA_broker_nodes ) to be available. |
JIKKOU_kafka_brokers_wait_for_retry_backoff_ms |
10000 |
1.17.0 | The amount of time to wait before verifying that brokers are available. |
JIKKOU_kafka_brokers_wait_for_timeout_ms |
120000 |
1.16.0 | Wait until brokers are available or this timeout is reached. |
JIKKOU_validation_default_topic_name_regex |
[a-zA-Z0-9\\._\\-]+ |
1.17.0 | The regex to use to validate the topic name against. |
JIKKOU_validation_default_topic_min_num_partitions |
1 |
1.17.0 | The minimum number of partitions to use when creating a topic. |
JIKKOU_validation_default_topic_min_replication_factor |
1 |
1.17.0 | The minimum number of replicas to use when creating a topic. |
Schema Registry UI | |||
SCHEMA_REGISTRY_UI_enable |
false |
1.2.0 | Generate Landoop Schema-Registry UI service |
SCHEMA_REGISTRY_UI_use_public_ip |
true |
1.10.0 | If true use PUBLIC_IP, if false use DOCKER_HOST_IP for the IP address of the schema registry API. |
SCHEMA_REGISTRY_UI_map_resolv_conf |
true |
1.13.0 | If true the conf/resolv.conf is mapped into the container to avoid the panic: runtime error: slice bounds out of range when running Docker on an actual Ubuntu host. |
Kafka Topics UI | |||
KAFKA_TOPICS_UI_enable |
false |
1.4.0 | Generate Landoop Kafka Topics UI service |
KAFKA_TOPICS_UI_map_resolv_conf |
true |
1.13.0 | If true the conf/resolv.conf is mapped into the container to avoid the panic: runtime error: slice bounds out of range when running Docker on an actual Ubuntu host. |
Kafka Connect UI | |||
KAFKA_CONNECT_UI_enable |
false |
1.2.0 | Generate Landoop Connect UI service |
KAFKA_CONNECT_UI_map_resolv_conf |
true |
1.13.0 | If true the conf/resolv.conf is mapped into the container to avoid the panic: runtime error: slice bounds out of range when running Docker on an actual Ubuntu host. |
Cluster Manager for Apache Kafka (CMAK) | |||
CMAK_enable |
false |
1.3.0 | Generate CMAK (Cluster Manager for Apache Kafka) service (used to be Kafka Manager) |
CMAK_auth_enabled |
false |
1.7.0 | if set to true then the manager will be secured with basic authentication |
CMAK_username |
admin |
1.7.0 | the username to use for basic auth |
CMAK_password |
abc123! |
1.7.0 | the password to use for basic auth |
Kafdrop | |||
KAFDROP_enable |
false |
1.2.0 | Kafdrop is a Kafka UI service, which can be used to administer and manage a Kafka cluster. |
Kafka Admin | |||
KADMIN_enable |
false |
1.2.0 | Generate KAdmin service |
Kafka GUI for Apache Kafka (AKHQ) | |||
AKHQ_enable |
false |
1.3.0 | Generate AKHQ (used to be KafkaHQ) service |
AKHQ_topic_page_size |
25 |
1.16.0 | number of topics per page. |
AKHQ_topic_data_size |
50 |
1.16.0 | max record per page when showing topic data. |
AKHQ_topic_data_poll_timeout |
1000 |
1.16.0 | The time, in milliseconds, spent waiting in poll if data is not available in the buffer. |
AKHQ_topic_data_kafka_max_message_length |
1000000 |
1.16.0 | Max message length allowed to send to UI when retrieving a list of records (in bytes) |
AKHQ_default_view |
HIDE_INTERNAL |
1.16.0 | Configure the default topic list view, one of ALL , HIDE_INTERNAL , HIDE_INTERNAL_STREAM , HIDE_STREAM . |
AKHQ_sort |
OLDEST |
1.16.0 | Configure the default sort order for topic data, one of OLDEST , NEWEST . |
AKHQ_show_consumer_groups |
true |
1.16.0 | Should AKHQ display the consumer groups column on a topic? |
AKHQ_show_all_consumer_groups |
true |
1.16.0 | Should AKHQ expand the consumer group list instead of just showing one? |
AKHQ_show_last_record |
true |
1.16.0 | Should AKHQ display the last record timestamp sent to a topic? |
AKHQ_read_only_mode |
false |
1.16.0 | Setup AKHQ to read-only? |
AKHQ_auth_enable |
false |
1.17.0 | Enable authentication? |
AKHQ_auth_type |
basic |
1.17.0 | Which authentication type should be used. Currently only basic is supported by the generator. |
AKHQ_username |
admin |
1.17.0 | The username to configure for basic authentication. |
AKHQ_password |
admin |
1.17.0 | The password to configure for basic authentication. |
Kafka UI | |||
KAFKA_UI_enable |
false |
1.11.0 | Generate Kafka UI service |
EFAK (previously Kafka Eagle) | |||
EFAK_enable |
false |
1.13.0 | Generate EFAK (Kafka Eagle) service |
Kowl | |||
KOWL_enable |
false |
1.13.0 | Generate Kowl service |
Redpanda Console | |||
REDPANDA_CONSOLE_enable |
false |
1.16.0 | Generate Redpanda Console (new home for Kowl) service |
REDPANDA_CONSOLE_edition |
oss |
1.16.0 | The edition of Redpanda Console, either oss or enterprise . |
Kouncil | |||
KOUNCIL_enable |
false |
1.14.0 | Generate Kouncil service |
Kafka Magic | |||
KAFKA_MAGIC_enable |
false |
1.14.0 | Generate Kafka Magic service |
Kafka WebView | |||
KAFKA_WEBVIEW_enable |
false |
1.15.0 | Generate Kafka WebView service |
kpow | |||
KPOW_enable |
false |
1.16.0 | Generate kpow service |
KPOW_edition |
cc |
1.16.0 | The edition to use, either ce (community edition) or se (standard edition) or ee (enterprise edition) |
KPOW_use_external_license_info |
false |
1.16.0 | if true , then the license information should be placed in a file called ./license/kpow/kpow-license.env , otherwise the config settings KPOW_licenseXXX should be used. |
KPOW_license_id |
`` | 1.16.0 | The kpow license id you retrieved with your license. You can get a trial license from https://kpow.io/get-started/#individual. |
KPOW_license_code |
`` | 1.16.0 | The kpow license code you retrieved with your license. |
KPOW_licensee |
`` | 1.16.0 | The kpow license you retrieved with your license. |
KPOW_license_signature |
`` | 1.16.0 | The kpow license signature you retrieved with your license. |
KPOW_license_expiry |
`` | 1.16.0 | The kpow license expiry you retrieved with your license. |
Conduktor Platform | |||
CONDUKTOR_PLATFORM_enable |
false |
1.16.0 | Generate Conduktor Platform service |
CONDUKTOR_PLATFORM_license_key |
`` | 1.16.0 | the Conduktor license key for the Enterprise plan. Leave empty for the free plan. |
CONDUKTOR_PLATFORM_organisation_name |
default |
1.16.0 | The name of the organisation. |
CONDUKTOR_PLATFORM_admin_email |
[email protected] |
1.16.0 | Admin user name (either a username or an email address). |
CONDUKTOR_PLATFORM_admin_psw |
abc123! |
1.16.0 | Admin user password. |
CONDUKTOR_PLATFORM_use_external_postgres |
false |
1.16.0 | Use external Postgresql database for the Conduktor metadata. |
CONDUKTOR_PLATFORM_postgres_host |
postgresql |
1.16.0 | Hostname of the Postgresql database, applicable if CONDUKTOR_PLATFORM_use_external_postgres is set to true . |
CONDUKTOR_PLATFORM_postgres_port |
5432 |
1.16.0 | Port of the Postgresql database, applicable if CONDUKTOR_PLATFORM_use_external_postgres is set to true . |
CONDUKTOR_PLATFORM_postgres_db |
postgres |
1.16.0 | Database name of the Postgresql database, applicable if CONDUKTOR_PLATFORM_use_external_postgres is set to true . |
CONDUKTOR_PLATFORM_postgres_username |
postgres |
1.16.0 | Username of the Postgresql database, applicable if CONDUKTOR_PLATFORM_use_external_postgres is set to true . |
CONDUKTOR_PLATFORM_postgres_password |
abc123! |
1.16.0 | Password of the Postgresql database, applicable if CONDUKTOR_PLATFORM_use_external_postgres is set to true . |
Kadeck | |||
KADECK_enable |
false |
1.17.0 | Generate Kadeck service |
KADECK_edition |
free |
1.17.0 | The Kadeck version to use, either free or enterprise . |
KADECK_free_email_address |
free |
1.17.0 | The email address to use for the free edition. |
KADECK_ee_team_id |
free |
1.17.0 | The team id to use for the enterprise edition. |
KADECK_ee_secret |
free |
1.17.0 | The secret to use for the enterprise edition. |
Kafkistry | |||
KAFKISTRY_enable |
false |
1.16.0 | Generate Kafkistry service |
KAFKISTRY_users_passwords | admin\|abc123!\|Admy | | |
KAFKISTRY_owner_groups | Test_Group\|admin | 1.17.0 | |
KLAW | |||
KLAW_enable |
false |
1.17.0 | Generate KLAW service |
Kafka Connector Board | |||
KAFKA_CONNECTOR_BOARD_enable |
false |
1.16.0 | Generate Kafka Connector Board service |
Streams Explorer | |||
STREAMS_EXPLORER_enable |
false |
1.11.0 | Generate Streams Explorer service |
Kafka Lag Exporter | |||
KAFKA_LAG_EXPORTER_enable |
false |
1.12.0 | Generate Kafka Lag Exporter service |
Remora | |||
REMORA_enable |
false |
1.14.0 | Generate Remora service |
Burrow | |||
BURROW_enable |
false |
1.14.0 | Generate Burrow service |
BURROW_UI_enable |
false |
1.14.0 | Generate Burrow UI service |
BURROW_DASHBOARD_enable |
false |
1.14.0 | Generate Burrow Dashboard service |
Confluent Control Center | |||
KAFKA_CCC_enable |
false |
1.1.0 | Generate Confluent Control Center service |
Debezium Server | |||
DEBEZIUM_SERVER_enable |
false |
1.13.0 | Generate Debezium Server service |
DEBEZIUM_SERVER_volume_map_data |
false |
1.13.0 | Volume map data folder into the Debezium Server service |
Debezium UI | |||
DEBEZIUM_UI_enable |
false |
1.12.0 | Generate Debezium UI service |
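The Kafka-related settings above can be combined; the sketch below enables a broker cluster with Kafka Connect, ksqlDB and AKHQ and pre-creates topics using the KAFKA_INIT_topics syntax documented above (topic names and the connector version are placeholders):

```yaml
KAFKA_enable: true
KAFKA_nodes: 3
KAFKA_internal_replication_factor: 3

# pre-create topics: [name]:partitions=[p]:replicas=[r]:[key]=[val] or the shorthand [name]:[p]:[r]
KAFKA_INIT_enable: true
KAFKA_INIT_topics: 'orders:partitions=8:replicas=3:cleanup.policy=compact,audit:3:3'

KAFKA_CONNECT_enable: true
KAFKA_CONNECT_nodes: 1
KAFKA_CONNECT_connectors: 'confluentinc/kafka-connect-jdbc:10.7.4'   # placeholder owner/component:version

KSQLDB_enable: true        # also enables the Schema Registry automatically
KSQLDB_nodes: 2

AKHQ_enable: true
AKHQ_auth_enable: true
AKHQ_username: 'admin'
AKHQ_password: 'admin'     # placeholder, change for anything non-local
```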
Config | Default | Since | Description |
---|---|---|---|
Apache Hadoop | |||
HADOOP_enable |
false |
1.0.0 | Generate Hadoop services |
HADOOP_datanodes |
2 |
1.0.0 | number of Hadoop Datanodes |
Apache Spark | |||
SPARK_enable |
false |
1.0.0 | Generate Spark services |
SPARK_base_version |
2.4 |
1.15.0 | Which base version of Spark to use, one of 2.4 or 3.1 or 3.2 . Replaces SPARK_major_version from 1.15.0 . |
SPARK_catalog |
in-memory |
1.2.0 | the catalog to use for Spark, either use in-memory or hive . |
SPARK_workers |
2 |
1.0.0 | number of Spark Worker nodes |
SPARK_master_opts |
1.10.0 | Configuration properties that apply only to the master in the form "-Dx=y" (default: none). See here list of possible options. | |
SPARK_worker_cores |
1.10.0 | Total number of cores to allow Spark applications to use on the machine (default: all available cores). | |
SPARK_worker_memory |
1.10.0 | Total amount of memory to allow Spark applications to use on the machine, e.g. 1000m, 2g (default: total memory minus 1 GiB); note that each application's individual memory is configured using its spark.executor.memory property. | |
SPARK_worker_opts |
-Dspark.worker.cleanup.enabled=true |
1.10.0 | Configuration properties that apply only to the worker in the form "-Dx=y" (default: none). See here list of possible options. |
SPARK_jars_repositories |
1.6.0 | Comma-separated list of additional remote repositories to search for the maven coordinates given with --packages or spark.jars.packages (spark.jars.repositories runtime environment setting) |
|
SPARK_jars |
1.8.0 | Comma-separated list of jars to include on the driver and executor classpath. Globs are allowed. | |
SPARK_jars_packages |
1.5.2 | Comma-separated list of Maven coordinates of jars to include on the driver and executor classpath (will be added to the spark.jars.packages runtime environment setting). The transitive dependencies are downloaded automatically. |
|
SPARK_install_jars_packages |
1.16.0 | Comma-separated list of Maven coordinates of jars to install into /spark/jars when starting Spark. The Maven dependencies are downloaded without transitive dependencies. You have to manually assure, that transitive dependencies are included. |
|
SPARK_jars_excludes |
1.8.0 | Comma-separated list of groupId:artifactId , to exclude while resolving the dependencies provided in spark.jars.packages to avoid dependency conflicts. |
|
SPARK_jars_ivySettings |
1.5.2 | Path to an Ivy settings file to customize resolution of jars specified using spark.jars.packages instead of the built-in defaults, such as maven central (spark.jars.ivysettings runtime environment setting) |
|
SPARK_driver_extraJavaOptions |
1.5.2 | A string of extra JVM options to pass to the driver spark.driver.extraJavaOptions runtime environment setting |
|
SPARK_executor_extraJavaOptions |
1.5.2 | A string of extra JVM options to pass to the executor spark.executor.extraJavaOptions runtime environment setting |
|
SPARK_sql_warehouse_dir |
'' | 1.16.0 | A string specifying the default Spark SQL Hive Warehouse location (spark.sql.warehouse.dir runtime environment setting). If left empty, then it defaults to hdfs://namenode:9000/user/hive/warehouse if using HDFS, s3a://admin-bucket/hive/warehouse if using S3 and file:///hive/warehouse if none of the two. |
SPARK_cores_max |
1.10.0 | the maximum amount of CPU cores to request for the application from across the cluster (not from each machine). If not set, the default will be spark.deploy.defaultCores to be set through SPARK_MASTER_OPTS . |
|
SPARK_executor_memory |
1.10.0 | Amount of memory to use per executor process, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g). If not set, the default will be 1g . |
|
SPARK_table_format_type |
'' | 1.16.0 | The table format to enable in Spark, either 'delta', 'iceberg' or 'hudi' |
SPARK_datahub_agent_enable |
'false' | 1.17.0 | Enable the DataHub agent to integrate Spark with DataHub. For it to take effect, either DATAHUB_enable or external.DATAHUB_enable has to be set to true . |
Apache Spark History Server | |||
SPARK_HISTORYSERVER_enable |
false |
1.0.0 | Generate Spark History Server |
Apache Spark Thrift Server | |||
SPARK_THRIFTSERVER_enable |
false |
1.0.0 | Generate Spark Thrift Server |
SPARK_THRIFTSERVER_cores_max |
1.16.0 | the maximum amount of CPU cores to request for the application from across the cluster (not from each machine). If not set, the default will be spark.deploy.defaultCores to be set through SPARK_MASTER_OPTS . |
|
SPARK_THRIFTSERVER_executor_memory |
1.16.0 | Amount of memory to use per executor process, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g). If not set, the default will be 1g . |
|
Apache Livy | |||
LIVY_enable |
false |
1.1.0 | Generate Spark Livy Server |
Apache Flink | |||
FLINK_enable |
false |
1.13.0 | Generate Flink services |
FLINK_taskmanagers |
1 |
1.13.0 | number of Flink TaskManager nodes |
FLINK_SQL_CLI_enable |
false |
1.13.0 | Generate Flink SQL CLI service |
Nussknacker | |||
NUSSKNACKER_enable |
false |
1.16.0 | Generate Nussknacker services |
NUSSKNACKER_scenario_type |
streaming |
1.16.0 | The processing mode (how a scenario deployed on an engine interacts with the outside world), one of streaming , streaming-lite-embedded , request-response-embedded . |
Apache Tika | |||
TIKA_enable |
false |
1.13.0 | Generate Apache Tika Server |
TIKA_edition |
minimal |
1.13.0 | either minimal which contains only Apache Tika and its core dependencies or full , which also includes dependencies for the GDAL and Tesseract OCR parsers |
Apache Hive | |||
HIVE_enable |
false |
1.0.0 | Generate Hive service |
Apache Hue | |||
HUE_enable |
false |
1.0.0 | Generate Hue UI service |
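A hedged sketch of a Spark setup using the settings above (the Maven coordinates are a placeholder; transitive dependencies of spark.jars.packages are resolved automatically):

```yaml
SPARK_enable: true
SPARK_base_version: '3.2'
SPARK_workers: 2
SPARK_catalog: 'hive'              # use the Hive catalog instead of the in-memory one
SPARK_table_format_type: 'delta'   # enable Delta Lake support

# extra libraries for driver and executor classpath (placeholder coordinates)
SPARK_jars_packages: 'org.apache.spark:spark-avro_2.12:3.2.1'

SPARK_HISTORYSERVER_enable: true
```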
Config | Default | Since | Description |
---|---|---|---|
Apache Avro Tools | |||
AVRO_TOOLS_enable |
false |
1.14.0 | Generate Avro Tools CLI service. |
Apache Parquet Tools | |||
PARQUET_TOOLS_enable |
false |
1.16.0 | Generate Parquet Tools CLI service. |
Apache Atlas | |||
ATLAS_enable |
false |
1.0.0 | Generate Atlas service |
ATLAS_provision_atlas_sample_data |
false |
1.2.0 | Provision Apache Atlas sample data? |
ATLAS_provision_amundsen_sample_data |
false |
1.2.0 | Provision Amundsen sample types? |
ATLAS_hive_hook_enable |
false |
1.17.0 | Enable the Hive Hook in hive-server ? Only relevant if HIVE_enable is set to true . The JARs are no longer provided and need to be downloaded manually and provided in ./plugins/hive-server/apache-atlas-hive-hook/hook . |
DataHub | |||
DATAHUB_enable |
false |
1.4.0 | Generate DataHub service |
DATAHUB_volume_map_data |
false |
1.4.0 | should the data of the databases be mapped to the outside of the container. |
DATAHUB_mae_consumer_standalone |
false |
1.13.0 | enable standalone MAE consumer? If set to false , the MAE will be started as part of the GMS service. |
DATAHUB_mce_consumer_standalone |
false |
1.13.0 | enable standalone MCE consumer? If set to false , the MCE will be started as part of the GMS service. |
DATAHUB_ui_ingestion_enabled |
true |
1.16.0 | enable ingestion over DataHub UI. |
DATAHUB_analytics_enabled |
false |
1.16.0 | enable DataHub client-side analytics. |
DATAHUB_use_kibana |
false |
1.14.0 | enable Kibana service on the Elasticsearch database of DataHub. |
DATAHUB_auth_policies_enabled |
true |
1.14.0 | enable the Policies feature? |
DATAHUB_telemetry_enabled |
false |
1.16.0 | enable sending of Telemetry to the DataHub project? |
DATAHUB_precreate_topics |
true |
1.16.0 | pre-create Kafka Topics on startup? |
DATAHUB_graph_service_impl |
neo4j |
1.14.0 | The Graph database to be used as the backend, either one of neo4j or elasticsearch . |
DATAHUB_graph_service_diff_mode_enabled |
true |
1.16.0 | enable diff mode for graph writes? |
DATAHUB_search_service_impl |
elasticsearch |
1.17.0 | The Search database to be used as the backend, either one of elasticsearch or opensearch . |
DATAHUB_provision_sample_data |
false |
1.14.0 | Should sample data be provisioned? |
DATAHUB_secret |
abc123!abc123! |
1.16.0 | Allows for changing the datahub secret. |
DATAHUB_map_user_props |
false |
1.16.0 | Map the user.props file from ./secret/datahub into the datahub-frontend-react container for changing the datahub user password or for adding new users. |
Amundsen | |||
AMUNDSEN_enable |
false |
1.0.0 | Generate Amundsen service |
AMUNDSEN_metastore |
amundsen |
1.2.0 | the Amundsen backend to use, either amundsen or atlas . |
Dataverse | |||
DATAVERSE_enable |
false |
1.17.0 | Generate Dataverse service |
DATAVERSE_mail_host |
maildev |
1.17.0 | A hostname (w/o port!) where to reach a Mail Server on port 25. |
DATAVERSE_s3_bucket |
`` | 1.17.0 | The bucket to use for dataverse in S3. Only has an effect if also either Minio is enabled (MINIO_enable set to true ) or an external S3 is configured. |
DATAVERSE_default_storage |
`` | 1.17.0 | Storage service to use by default. |
DATAVERSE_download_redirect |
false |
1.17.0 | Enable direct download or proxy through dataverse. |
DATAVERSE_upload_redirect |
false |
1.17.0 | Enable direct upload of files added to a dataset to the S3 store. |
DATAVERSE_url_expiration_minutes |
60 |
1.17.0 | If direct uploads/downloads: time until links expire. |
DATAVERSE_direct_upload_limit |
`` | 1.17.0 | Maximum size of direct upload files that can be ingested. Defaults to no limit. |
DATAVERSE_minimal_part_size |
`` | 1.17.0 | Multipart direct uploads will occur for files larger than this. |
DATAVERSE_volume_map_data |
false |
1.17.0 | Map the data folder into the container. |
CKAN | |||
CKAN_enable |
false |
1.17.0 | Generate CKAN service |
CKAN_use_dev_edition |
false |
1.17.0 | Enable the developer edition? |
CKAN_volume_map_storage |
false |
1.17.0 | Map the storage folder into the container. |
CKAN_sysadmin_password |
abc123abc123 |
1.17.0 | the sysadmin password. |
CKAN_postgres_db |
ckan |
1.17.0 | The PostgreSQL database to use. |
CKAN_postgres_user |
ckan |
1.17.0 | The user to connect to PostgreSQL. |
CKAN_postgres_password |
abc123! |
1.17.0 | The password of the user to connect to PostgreSQL. |
CKAN_datastore_readonly_user |
ckanro |
1.17.0 | The readonly user to connect to PostgreSQL. |
CKAN_datastore_readonly_password |
ckan |
1.17.0 | The password of the readonly user to connect to PostgreSQL. |
CKAN_use_s3_store |
false |
1.17.0 | Enable the use of S3 storage, either enable local Minio or external S3 support for it to be active. |
CKAN_s3_store_bucket_name |
ckan |
1.17.0 | The name of the S3 bucket to use when S3 storage is enabled. |
Marquez | |||
MARQUEZ_enable |
false |
1.5.0 | Generate Marquez service, an open source metadata service for the collection, aggregation, and visualization of a data ecosystem’s metadata. |
MARQUEZ_volume_map_data |
false |
1.5.0 | should the data of the databases be mapped to the outside of the container. |
MARQUEZ_provision_marquez_sample_data |
false |
1.5.0 | Provision Marquez sample data. |
Apache Ranger | |||
RANGER_enable |
false |
1.5.0 | Generate Apache Ranger service. Ranger is a framework to enable, monitor and manage comprehensive data security across the Hadoop platform. |
RANGER_postgresql_volume_map_data |
false |
1.8.0 | Volume map the data folder into the container. |
Open Policy Agent (OPA) | |||
OPA_enable |
false |
1.17.0 | Generate Open Policy Agent service. |
OPA_log_level |
INFO |
1.17.0 | The log level to use. |
Styra Enterprise Open Policy Agent | |||
STYRA_EOPA_enable |
false |
1.17.0 | Generate Styra Enterprise OPA service. |
STYRA_EOPA_license_key |
`` | 1.17.0 | The license key for Styra Enterpise OPA. |
STYRA_EOPA_log_level |
INFO |
1.17.0 | The log level to use. |
OpenLDAP | |||
OPENLDAP_enable |
false |
1.16.0 | Generate OpenLDAP service |
OPENLDAP_volume_map_data |
false |
1.17.0 | Volume map the data folder into the container. |
OPENLDAP_volume_map_config |
false |
1.17.0 | Volume map the config folder into the container. |
phpLDAPadmin | |||
PHP_LDAP_ADMIN_enable |
false |
1.16.0 | Generate phpLDAPadmin service |
PHP_LDAP_ADMIN_ldap_hosts |
openldap |
1.16.0 | the LDAP host to connect to |
LDAP User Manager | |||
LDAP_USER_MANAGER_enable |
false |
1.16.0 | Generate LDAP User Manager service |
LDAP_USER_MANAGER_ldap_host |
openldap |
1.16.0 | the LDAP host to connect to |
Vault | |||
VAULT_enable |
false |
1.5.0 | Generate Vault service |
VAULT_use_dev_mode |
false |
1.14.0 | Start Vault in development mode? Dev server should only be used for experimentation with Vault features. |
VAULT_dev_mode_token |
abc123! |
1.17.0 | The value of the vault token to use if in development mode (VAULT_use_dev_mode is set to true ). |
VAULT_volume_map_data |
false |
1.5.0 | Volume map data folder into the Vault service |
Keycloak | |||
KEYCLOAK_enable |
false |
1.14.0 | Generate Keycloak service |
KEYCLOAK_import_realm_enable |
true |
1.17.0 | Import realm files (with extension .json ) available in /opt/keycloak/data/import automatically. |
KEYCLOAK_db_vendor |
h2 |
1.14.0 | The database to use, currently postgres , h2 , mssql or mysql are supported by platys. |
KEYCLOAK_loglevel |
info |
1.14.0 | Sets the log level for Keycloak. |
KEYCLOAK_features |
`` | 1.17.0 | A comma-separated list of features to enable (some supported features, and all preview features are disabled by default). See https://www.keycloak.org/server/features. |
Curity Identity Server | |||
CURITY_enable |
false |
1.16.0 | Generate Curity service |
CURITY_logging_level |
INFO |
1.16.0 | Logging level |
CURITY_password |
abc123! |
1.16.0 | The password of the admin user |
CURITY_volume_map_license_file |
false |
1.16.0 | Map the license.json file with the Curity license into the container. The file has to be placed unter ./licenses/curity/ . A license can also be specified/uploaded through the Admin UI. |
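As an example of combining the governance and security services above (a sketch; the secret values are placeholders you should change):

```yaml
DATAHUB_enable: true
DATAHUB_graph_service_impl: 'elasticsearch'   # use Elasticsearch instead of Neo4j as graph backend
DATAHUB_volume_map_data: true

KEYCLOAK_enable: true
KEYCLOAK_db_vendor: 'postgres'

VAULT_enable: true
VAULT_use_dev_mode: true            # experimentation only, not for real secrets
VAULT_dev_mode_token: 'abc123!'     # placeholder token
```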
Config | Default | Since | Description |
---|---|---|---|
StreamSets DataCollector | |||
STREAMSETS_enable |
false |
1.0.0 | Generate StreamSets service |
STREAMSETS_volume_map_data |
false |
1.1.0 | Volume map StreamSets data folder |
STREAMSETS_volume_map_logs |
false |
1.9.0 | Volume map StreamSets logs folder |
STREAMSETS_volume_map_security_policy |
false |
1.8.0 | Volume map StreamSets sdc-security.policy file |
STREAMSETS_activate_https |
false |
1.2.0 | Activate HTTPS for StreamSets |
STREAMSETS_additional_port_mappings |
0 |
1.2.0 | Number of additional port mappings to add to the StreamSets service. This is helpful when working with the HTTP Server component of SDC, so that these services can be reached from outside without having to manually change the docker-compose.yml. |
STREAMSETS_stage_libs |
`` | 1.11.0 | A comma separated list of additional StreamSets Stage Libs to be installed. The names can be found here. |
STREAMSETS_enterprise_stage_libs |
`` | 1.11.0 | A comma separated list of additional StreamSets Enterprise Stage Libs to be installed. The names can be found here. |
STREAMSETS_jdbc_jars |
`` | 1.14.0 | A comma separated list of JDBC Jars to be installed, made available in ./plugins/streamsets/libs-extras/streamsets-datacollector-jdbc-lib/ . |
STREAMSETS_install_pipelines |
false |
1.11.0 | Automatically install StreamSets pipelines placed inside ./scripts/streamsets/pipelines ? |
STREAMSETS_http_authentication |
`` | 1.9.0 | The authentication for the HTTP endpoint of the data collector, one of 'none', 'basic', 'digest', 'form' or 'aster' |
STREAMSETS_sdc_id |
`` | 1.13.0 | Set the sdc.id file to the value provided here. The SDC ID a.k.a. product id is important for the activation of StreamSets, as the Activation Code is linked to the product id. |
STREAMSETS_use_external_conf_file |
false |
1.15.0 | Indicates if an external runtime configuration properties file should be used. If set to true , then a file named configuration.properties should be defined in the folder ./custom-conf/streamsets . |
StreamSets DataCollector Edge | |||
STREAMSETS_EDGE_enable |
false |
1.2.0 | Generate StreamSets Edge service |
StreamSets Transformer | |||
STREAMSETS_TRANSFORMER_enable |
false |
1.2.0 | Generate StreamSets Transformer service |
StreamSets DataOps Platform | |||
STREAMSETS_DATAOPS_enable |
false |
1.14.0 | Generate StreamSets Cloud Self-Managed service |
STREAMSETS_DATAOPS_deployment_sch_url |
https://eu01.hub.streamsets.com |
1.14.0 | The StreamSets cloud deployment URL, retrieved from the StreamSets DataOps portal. |
STREAMSETS_DATAOPS_deployment_id |
`` | 1.14.0 | The deployment id, retrieved from the StreamSets DataOps portal. Alternatively you can use the environment variable STREAMSETS_DATAOPS_DEPLOYMENT_ID to specify the value. |
STREAMSETS_DATAOPS_deployment_token |
`` | 1.14.0 | The deployment token, retrieved from the StreamSets DataOps portal. Alternatively you can use the environment variable STREAMSETS_DATAOPS_DEPLOYMENT_TOKEN to specify the value. |
Apache NiFi | |||
NIFI_enable |
false |
1.11.0 | Generate Apache NiFi service |
NIFI_major_version |
false |
1.17.0 | Which major version of NiFI to use, one of 1 or 2 . The exact version can then be specified using either the NIFI_1_version or NIFI_2_version setting. |
NIFI_run_secure |
false |
1.15.0 | Enable HTTPs with single user authentication |
NIFI_use_custom_certs |
false |
1.16.0 | Use custom certificates (keystore and truststore)? if yes, place them in ./custom-conf/nifi/security . |
NIFI_keystore_password |
`` | 1.16.0 | The password for the keystore, if NIFI_use_custom_certs is set to true . |
NIFI_key_password |
`` | 1.16.0 | The password for the certificate in the keystore, if NIFI_use_custom_certs is set to true . If left empty then the password for the keystore is used. |
NIFI_truststore_password |
`` | 1.16.0 | The password for the truststore, if NIFI_use_custom_certs is set to true . |
NIFI_inital_admin_identitiy |
`` | 1.16.0 | The initial admin user is granted access to the UI and given the ability to create additional users, groups, and policies. The value of this property could be a certificate DN , LDAP identity (DN or username), or a Kerberos principal. |
NIFI_username |
nifi |
1.15.0 | The username to be used when Nifi runs in secure mode (NIFI_run_secure is set to true). If NIFI_username is set to empty, then a random username and password will be generated on startup and shown in the log. |
NIFI_password |
1234567890AB |
1.15.0 | The password to be used when Nifi runs in secure mode (NIFI_run_secure is set to true). Must be 12 characters minimum, otherwise NiFi will generate a random username and password. |
NIFI_nodes |
1 |
1.15.0 | The number of NiFi nodes to generate. Can be connected to a cluster by setting NIFI_create_cluster to true . |
NIFI_create_cluster |
false |
1.15.0 | Should the NiFi nodes be connected to a cluster. |
NIFI_election_max_wait |
1 min |
1.15.0 | Specifies the amount of time to wait before electing a Flow as the "correct" Flow, when starting a cluster. |
NIFI_volume_map_custom_config |
false |
1.16.0 | Volume map custom config (./custom-conf/nifi/nifi.properties ) into the Apache NiFi service |
NIFI_volume_map_data |
false |
1.9.0 | Volume map various data folder into the Apache NiFi service |
NIFI_volume_map_logs |
false |
1.9.0 | Volume map logs folder into the Apache NiFi service |
NIFI_jvm_heap_init |
`` | 1.9.0 | the JVM Memory initial heap size (use values acceptable to the JVM Xmx and Xms parameters such as 1g or 512m) |
NIFI_jvm_heap_max |
`` | 1.9.0 | the JVM Memory maximum heap size (use values acceptable to the JVM Xmx and Xms parameters such as 1g or 512m) |
NIFI_python_enabled |
false |
1.16.0 | Use a NiFi image with a Python installation? |
NIFI_python_provide_requirements_file |
false |
1.16.0 | Copy the file ./custom-conf/nifi/requirements.txt file with additional python dependencies into the container? |
NIFI_python_version |
3.10 |
1.16.0 | The Python version to use. Currently 3.10 is the only one supported. |
Apache NiFi Registry | |||
NIFI_REGISTRY_enable |
false |
1.9.0 | Generate Apache NiFi Registry service |
NIFI_REGISTRY_major_version |
false |
1.17.0 | Which major version of NiFI Registry to use, one of 1 or 2 . The exact version can then be specified using either the NIFI_REGISTRY_1_version or NIFI_REGISTRY_2_version setting. |
NIFI_REGISTRY_log_level |
INFO |
1.17.0 | The NiFi Registry Log level, one of TRACE , DEBUG , INFO , WARN , ERROR . |
NIFI_REGISTRY_run_secure |
false |
1.16.0 | Run NiFi Registry secure with two-ways TLS? |
NIFI_REGISTRY_volume_map_data |
false |
1.9.0 | Volume map database folder into the Apache NiFi Registry service |
NIFI_REGISTRY_volume_map_logs |
false |
1.17.0 | Volume map logs folder into the Apache NiFi Registry service |
NIFI_REGISTRY_volume_map_flow_storage |
false |
1.16.0 | Volume map the flow storage folder (for file or git storage)? |
NIFI_REGISTRY_flow_storage_folder_on_dockerhost |
./container-volume/nifi-registry/flow-storage |
1.16.0 | The folder on the docker host which is used as the local (on docker host) folder for the flow storage. |
NIFI_REGISTRY_flow_provider |
file |
1.16.0 | The flow persistence provider, either file or git . |
NIFI_REGISTRY_git_remote |
`` | 1.16.0 | Automatically push to the specified remote, e.g. origin . This property is optional and if not specified, commits will remain in the local repository unless a push is performed manually. |
NIFI_REGISTRY_git_user |
`` | 1.16.0 | The user used to push to the Git repository. |
NIFI_REGISTRY_git_password |
`` | 1.16.0 | The password of the user used to push to the Git repository. Only effective if NIFI_REGISTRY_git_user is used as well. |
NIFI_REGISTRY_git_use_ssh_auth |
false |
1.17.0 | Use SSH keys to authenticate against the Git repository. |
NIFI_REGISTRY_git_repo |
`` | 1.16.0 | The Git remote repository URI. |
NIFI_REGISTRY_bundle_provider |
file |
1.16.0 | The bundle persistence provider, either file or s3 . |
NIFI_REGISTRY_s3_bucket_name |
`` | 1.16.0 | The bucket name if the s3 bundle persistence provider is selected. |
NIFI_REGISTRY_s3_key_prefix |
`` | 1.16.0 | The key prefix that will be added to all S3 keys, if the s3 bundle persistence provider is selected. |
Apache NiFi Toolkit | |||
NIFI_TOOLKIT_enable |
false |
1.15.0 | Generate Apache NiFi Toolkit service |
NIFI_TOOLKIT_major_version |
false |
1.17.0 | Which major version of NiFi Toolkit to use, one of 1 or 2 . The exact version can then be specified using either the NIFI_TOOLKIT_1_version or NIFI_TOOLKIT_2_version setting. |
MonitoFi | |||
MONITOFI_enable |
false |
1.17.0 | Generate MonitoFi service (you also have to enable InfluxDB and Grafana) |
MONITOFI_sleep_interval |
60 |
1.17.0 | The refresh interval |
Apache StreamPipes | |||
STREAMPIPES_enable |
false |
1.14.0 | Generate Apache StreamPipes service |
Cribl Stream | |||
CRIBL_STREAM_enable |
false |
1.17.0 | Generate Cribl Stream service |
CRIBL_STREAM_workers |
1 |
1.17.0 | Number of Cribl Stream workers, if started in distributed mode |
CRIBL_STREAM_volume_map_data |
false |
1.17.0 | Volume map Cribl data folder? |
Cribl Edge | |||
CRIBL_EDGE_enable |
false |
1.17.0 | Generate Cribl Edge service |
CRIBL_EDGE_nodes |
1 |
1.17.0 | Number of Cribl Edge nodes, if started in distributed mode |
CRIBL_EDGE_managed |
false |
1.17.0 | Do we want Cribl Edge to be managed by a Cribl Master node. For that you have to also enable Cribl Stream. |
CRIBL_EDGE_fleet |
default_fleet |
1.17.0 | Name of the Fleet the edge node is assigned to. |
Conduit | |||
CONDUIT_enable |
false |
1.15.0 | Generate Conduit service |
FluentD | |||
FLUENTD_enable |
false |
1.17.0 | Generate FluentD service |
FLUENTD_conf_file_name |
`` | 1.17.0 | Name of a custom config file to map into the FluentD container. |
FLUENTD_s3_bucket_name |
docker-log |
1.17.0 | Name of the S3 bucket to use if Minio is enabled. |
FLUENTD_s3_bucket_file_type |
json |
1.17.0 | The file type to use for the log objects written to the S3 bucket, only applicable if Minio is enabled. |
FileBeat | |||
FILEBEAT_enable |
false |
1.17.0 | Generate FileBeat service |
FILEBEAT_conf_file_name |
`` | 1.17.0 | Name of a custom config file to map into the FileBeat container. |
Apache Sqoop | |||
SQOOP_enable |
false |
1.3.0 | Generate Apache Sqoop service |
Node-RED | |||
NODERED_enable |
false |
1.1.0 | Generate Node-RED service |
NODERED_volume_map_data |
false |
1.1.0 | Volume map data folder into the Node-RED service |
Streamsheets | |||
STREAMSHEETS_enable |
false |
1.6.0 | Generate Streamsheets service |
Spring Cloud Data Flow | |||
SPRING_DATAFLOW_enable |
false |
1.10.0 | Generate Spring Cloud Data Flow service |
Airbyte | |||
AIRBYTE_enable |
false |
1.15.0 | Generate Airbyte service |
AIRBYTE_volume_map_data |
false |
1.15.0 | Volume Map the data folder of the airbyte-db service. |
AIRBYTE_database_user |
airbyte |
1.15.0 | The database user to use for the airbyte-db service. |
AIRBYTE_database_password |
abc123! |
1.15.0 | The password to use for the airbyte-db service. |
AIRBYTE_database_db |
airbyte |
1.15.0 | The name of the database to use for the airbyte-db service. |
AIRBYTE_database_host |
airbyte-db |
1.15.0 | The host name to use for the airbyte-db service. |
AIRBYTE_database_port |
5432 |
1.15.0 | The port to use for the airbyte-db service. |
AIRBYTE_configs_database_minimum_flyway_migration_version |
0.40.23.002 |
1.15.0 | The minimum configs database version for the flyway migration. |
AIRBYTE_jobs_database_minimum_flyway_migration_version |
0.40.26.001 |
1.15.0 | The minimum jobs database version for the flyway migration. |
AIRBYTE_log_level |
INFO |
1.15.0 | The log level for airbyte. |
Apache Airflow | |||
AIRFLOW_enable |
false |
1.3.0 | Generate Airflow services (Web, Scheduler & Postgresql DB) |
AIRFLOW_executor |
local |
1.12.0 | The Airflow executor to use, either local , sequential or celery |
AIRFLOW_workers |
1 |
1.16.0 | number of Airflow Worker nodes, if AIRFLOW_executor is set to celery . |
AIRFLOW_admin_username |
airflow |
1.16.0 | Username of the user with admin role |
AIRFLOW_admin_password |
airflow |
1.16.0 | Password of the user with admin role |
AIRFLOW_fernet_key |
`` | 1.17.0 | The fernet key to use to encrypt passwords in the connection configuration and the variable configuration. It guarantees that a password encrypted using it cannot be manipulated or read without the key. Fernet is an implementation of symmetric (also known as “secret key”) authenticated cryptography. |
AIRFLOW_secret_key |
abc123! |
1.16.0 | The secret key to use for the webserver. |
AIRFLOW_volume_map_logs |
false |
1.16.0 | Volume map logs folder into the Airflow service |
AIRFLOW_volume_map_docker_daemon |
false |
1.17.0 | Volume map docker daemon folder into the Airflow service. If enabled, the DOCKER_GROUP environment variable, referenced from the group_add section, has to be set to the id of the docker group on the docker host, e.g. `DOCKER_GROUP=$(getent group docker \| cut -d: -f3)` . |
AIRFLOW_provision_examples |
false |
1.3.0 | Provision Airflow examples? |
AIRFLOW_additional_requirements |
`` | 1.16.0 | Additional PIP requirements to add when starting the Airflow container. |
AIRFLOW_auth_backends |
airflow.api.auth.backend.session |
1.17.0 | Comma separated list of auth backends to authenticate users against airflow and the REST API. |
AIRFLOW_dag_dir_list_interval |
300 |
1.16.0 | How often (in seconds) to scan the DAGs directory for new files. Defaults to 5 minutes. |
AIRFLOW_dags_paused_at_creation |
true |
1.16.0 | Are DAGs paused by default at creation? |
AIRFLOW_expose_config |
true |
1.16.0 | Expose the configuration file in the web server. Set to non-sensitive-only to show all values except those that have security implications. True shows all values. False hides the configuration completely. |
AIRFLOW_python_version |
3.8 |
1.16.0 | The python version to use inside the Airflow containers. |
AIRFLOW_use_slim |
false |
1.16.0 | Use the slim docker image. |
AIRFLOW_variables |
`` | 1.17.0 | A comma-separated list of VARIABLE_NAME=variable-value , which will be added as AIRFLOW_VAR_{VARIABLE_NAME} environment variables to docker-compose.yml (see the example at the end of this table). |
Pentaho Webspoon |
PENTHAO_enable |
false |
1.6.0 | Generate Pentaho Webspoon service |
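For the Airflow block above, a custom override usually only lists the settings that differ from the defaults. The following sketch shows what such a fragment of a custom config YAML could look like, assuming the flat `key: value` layout of the default config.yml; the worker count, credentials, extra requirement and variable values are purely illustrative:

```yaml
AIRFLOW_enable: true
# run with the Celery executor and two worker containers
AIRFLOW_executor: celery
AIRFLOW_workers: 2
AIRFLOW_admin_username: airflow
AIRFLOW_admin_password: airflow
AIRFLOW_provision_examples: false
# extra PIP requirements installed when the container starts (illustrative)
AIRFLOW_additional_requirements: 'apache-airflow-providers-apache-kafka'
# rendered as AIRFLOW_VAR_STAGE / AIRFLOW_VAR_DATA_BUCKET in docker-compose.yml
AIRFLOW_variables: 'STAGE=dev,DATA_BUCKET=demo-bucket'
```

Settings not listed in the override keep the defaults documented above.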
Config | Default | Since | Description |
---|---|---|---|
dbt | |||
DBT_enable |
false |
1.15.0 | Generate dbt CLI service |
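Since dbt is controlled by a single flag, enabling it in a custom config YAML is a one-line override (same flat `key: value` layout as above):

```yaml
DBT_enable: true
```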
Config | Default | Since | Description |
---|---|---|---|
Apache Zeppelin | |||
ZEPPELIN_enable |
false |
1.0.0 | Generate Apache Zeppelin service |
ZEPPELIN_volume_map_data |
false |
1.0.0 | Volume map notebooks folder into Zeppelin service |
ZEPPELIN_admin_username |
admin |
1.10.0 | Username of the user with admin role |
ZEPPELIN_admin_password |
changeme |
1.10.0 | Password of the user with admin role |
ZEPPELIN_user_username |
zeppelin |
1.10.0 | Username of the user with user role |
ZEPPELIN_user_password |
changeme |
1.10.0 | Password of the user with user role |
ZEPPELIN_spark_cores_max |
`` | 1.10.0 | the maximum amount of CPU cores to request for the application from across the cluster (not from each machine). If not set, the default is spark.deploy.defaultCores , which can be set through SPARK_MASTER_OPTS . |
ZEPPELIN_spark_executor_memory |
`` | 1.10.0 | Amount of memory to use per executor process, in the same format as JVM memory strings with a size unit suffix ("k", "m", "g" or "t") (e.g. 512m, 2g). If not set, the default is 1g . |
ZEPPELIN_notebook_dir |
notebook |
1.15.0 | Directory where the notebooks are saved. If the default is used, then the tutorial notebooks will be shown. |
ZEPPELIN_notebook_cron_enable |
`` | 1.10.0 | Enable the cron scheduler on each notebook. |
ZEPPELIN_spark_submit_options |
`` | 1.12.0 | Add spark-submit options in addition to the ones which are already set by default. |
ZEPPELIN_use_local_spark |
`` | 1.15.0 | Use the local Spark environment if set to true , by setting the SPARK_MASTER setting to local[*] . |
Jupyter | |||
JUPYTER_enable |
false |
1.1.0 | Global Jupyter flag, if set to false, it will overwrite all of the specific JUPYTER_xxxx_enable flags |
JUPYTER_volume_map_data |
false |
1.1.0 | Volume map data folder into the Jupyter service |
JUPYTER_edition |
minimal |
1.2.0 | Which Jupyter edition to use, one of minimal , r , scipy , tensorflow , datascience , all_spark |
JUPYTER_python_packages |
`` | 1.13.0 | Python packages to install, as a space separated list of packages: <package-1> <package-2> |
JUPYTER_spark_jars_packages |
`` | 1.16.0 | Comma-separated list of Maven coordinates of jars to include on the driver and executor classpaths (will be added to the spark.jars.packages runtime environment setting). See the example at the end of this table. |
JUPYTER_tokenless |
false |
1.16.0 | Disable token authentication, if set to true . |
JUPYTER_token |
`` | 1.16.0 | Token used for authenticating first-time connections to the server. If not set, then a new random token is generated which can be found in the log of the container. |
JUPYTER_notebook_args |
`` | 1.16.0 | Custom options to be added to the jupyter command. |
JUPYTER_startup_cmd |
lab |
1.16.0 | the command to be used when starting the container, either lab , jupyter , nbclassic or server . Using server will only start the backend with no frontend. |
JUPYTER_download_jars |
com.amazonaws:aws-java-sdk-bundle:1.11.375 ... |
1.17.0 | Additional JARs to download when starting Jupyter. |
JUPYTER_volume_map_docker_daemon |
false |
1.17.0 | Volume map docker daemon folder into the Jupyter service. |
JupyterHub | |||
JUPYTERHUB_enable |
false |
1.16.0 | Enable JupyterHub service. |
JUPYTERHUB_python_packages |
`` | 1.16.0 | Python packages to install, as a space separated list of packages: <package-1> <package-2> |
JUPYTERHUB_authenticator_class |
jupyterhub.auth.DummyAuthenticator |
1.16.0 | The Authenticator class to use for JupyterHub. |
JUPYTERHUB_use_custom_userlist |
false |
1.16.0 | Use a custom userlist file from custom-conf/jupyterhub for configuring users for JupyterHub. If set to false a userlist file with just the user jupyterhub is used. |
JUPYTERHUB_global_password |
abc123! |
1.16.0 | The global password to use for all the users added through the userlist file. |
JUPYTERHUB_global_password |
false |
1.16.0 | Use Postgresql database to store the data of JupyterHub. |
JUPYTERHUB_postgres_host |
postgresql |
1.16.0 | Hostname of the Postgresql database. |
JUPYTERHUB_postgres_db |
postgres |
1.16.0 | Database name of the Postgresql database. |
JUPYTERHUB_postgres_username |
postgres |
1.16.0 | Username of the Postgresql database. |
JUPYTERHUB_postgres_password |
abc123! |
1.16.0 | Password of the Postgresql database. |
Anaconda | |||
ANACONDA_enable |
false |
1.16.0 | Generate Anaconda service |
ANACONDA_volume_map_notebooks |
false |
1.16.0 | Volume map notebooks folder into the Anaconda service. |
RStudio | |||
RSTUDIO_enable |
false |
1.13.0 | Generate RStudio service |
RSTUDIO_run_as_root |
false |
1.13.0 | Run RStudio with root permissions? |
RSTUDIO_password |
rstudio |
1.13.0 | The password of the rstudio user |
RSTUDIO_disable_auth |
false |
1.13.0 | Run RStudio without authentication? |
Shiny Server | |||
SHINY_SERVER_enable |
false |
1.13.0 | Generate Shiny Server service |
SHINY_SERVER_edition |
base |
1.13.0 | Select the Shiny Server edition: base or verse |
SHINY_SERVER_volume_map_apps |
false |
1.13.0 | Volume map apps folder into the Shiny Server service. |
Dataiku Data Science Studio | |||
DATAIKU_enable |
false |
1.11.0 | Generate Dataiku service |
DATAIKU_volume_map_data |
false |
1.11.0 | Volume map data folder into the Dataiku service. |
MLflow Server | |||
MLFLOW_SERVER_enable |
false |
1.13.0 | Generate MLflow server service |
MLFLOW_SERVER_volume_map_data |
false |
1.13.0 | Volume map data folder into the MLflow server service |
MLFLOW_SERVER_backend |
file |
1.13.0 | The backend store to use, one of file , postgresql or mysql |
MLFLOW_SERVER_db_user |
mlflow |
1.13.0 | the db user to use, if MLFLOW_SERVER_backend is set to postgresql or mysql |
MLFLOW_SERVER_db_password |
mlflow |
1.13.0 | the db password to use, if MLFLOW_SERVER_backend is set to postgresql or mysql |
MLFOW_SERVER_artifact_root |
/mlruns |
1.13.0 | the location of the artifact store. |
Optuna | |||
OPTUNA_enable |
false |
1.13.0 | Generate Optuna service |
OPTUNA_DASHBOARD_enable |
false |
1.13.0 | Generate Optuna Dashboard service |
MindsDB | |||
MINDSDB_enable |
false |
1.17.0 | Generate MindsDB service |
Ollama | |||
OLLAMA_enable |
false |
1.17.0 | Generate Ollama service |
OLLAMA_gpu_enabled |
true |
1.17.0 | Enable GPU for Ollama? |
OLLAMA_volume_map_data |
false |
1.17.0 | Volume map the data folder into the Ollama service. |
OLLAMA_llms |
llama2 |
1.17.0 | A comma-separated list of models to load: any Ollama model name, with or without a tag (such as llama2 , mistral , phi , ...), or gpt-4 , gpt-3.5 or claudev2 . |
OLLAMA_debug |
false |
1.17.0 | Enable debugging mode? |
LocalAI | |||
LOCAL_AI_enable |
false |
1.17.0 | Generate LocalAI service |
LOCAL_AI_gpu_enabled |
false |
1.17.0 | Enable GPU for Local AI? |
LOCAL_AI_volume_map_data |
false |
1.17.0 | Volume map the data folder into the LocalAI service. |
Ollama WebUI | |||
OLLAMA_WEBUI_enable |
false |
1.17.0 | Generate Ollama WebUI service |
OLLAMA_WEBUI_volume_map_data |
false |
1.17.0 | Volume map data folder into Ollama WebUI service |
OLLAMA_WEBUI_secret_key |
false |
1.17.0 | The secret key to use for the Ollama WebUI service. |
Alpaca WebUI | |||
ALPACA_WEBUI_enable |
false |
1.17.0 | Generate Alpaca WebUI service |
Anything LLM | |||
ANYTHING_LLM_enable |
false |
1.17.0 | Generate Anything LLM service |
ANYTHING_LLM_volume_map_dotenv |
false |
1.17.0 | Map the .env file from custom-conf/anything-llm folder into the container. Can be used to overwrite default settings. |
ANYTHING_LLM_volume_map_data |
false |
1.17.0 | Map the data folder into the container. |
big-AGI | |||
BIG_AGI_enable |
false |
1.17.0 | Generate big-AGI service |
AutoGen Studio | |||
AUTOGEN_STUDIO_enable |
false |
1.17.0 | Generate AutoGen Studio service |
AUTOGEN_STUDIO_workers |
1 |
1.17.0 | Specify the number of concurrent worker processes or threads to be used by the AutoGen tooling for parallel execution |
AUTOGEN_STUDIO_openai_api_key |
false |
1.17.0 | The Open AI API Key |
Flowise |
FLOWISE_enable |
false |
1.17.0 | Generate Flowise service |
FLOWISE_volume_map_data |
false |
1.17.0 | Volume map the data folder into flowise. |
FLOWISE_username |
`` | 1.17.0 | Name of the user for Flowise service, if empty then it runs without authentication. |
FLOWISE_password |
`` | 1.17.0 | Password of the user for Flowise service, if empty then it runs without authentication. |
FLOWISE_debug |
false |
1.17.0 | Print logs onto the console? |
FLOWISE_log_level |
info |
1.17.0 | Log Level, one of error , warn , info , verbose , debug . |
FLOWISE_disable_telemetry |
true |
1.17.0 | Disable telemetry? |
FLOWISE_database_type |
sqlite |
1.17.0 | The database to store the Flowise data, either sqlite , mysql or postgres . |
FLOWISE_database_username |
`` | 1.17.0 | The username for the database, only valid if FLOWISE_database_type is not sqlite . |
FLOWISE_database_password |
`` | 1.17.0 | The password for the database, only valid if FLOWISE_database_type is not sqlite . |
FLOWISE_database_name |
`` | 1.17.0 | The database name, only valid if FLOWISE_database_type is not sqlite . |
FLOWISE_langchain_tracing_v2 |
false |
1.17.0 | Turn LangSmith tracing on? |
FLOWISE_langchain_endpoint |
`` | 1.17.0 | LangSmith endpoint |
FLOWISE_langchain_api_key |
`` | 1.17.0 | LangSmith API Key |
FLOWISE_langchain_project |
`` | 1.17.0 | Project to trace on LangSmith |
LiteLLM | |||
LITELLM_enable |
false |
1.17.0 | Generate LiteLLM service |
Drools KIE Server | |||
KIE_SERVER_enable |
false |
1.13.0 | Generate Drools KIE Server service |
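As a concrete illustration of the Jupyter settings above, a hypothetical override fragment for a custom config YAML could look as follows (flat `key: value` layout as in the default config.yml); the edition, package list, Maven coordinate and token are examples only:

```yaml
JUPYTER_enable: true
JUPYTER_edition: all_spark
# space-separated list of additional Python packages
JUPYTER_python_packages: 'pandas scikit-learn'
# comma-separated Maven coordinates, added to spark.jars.packages
JUPYTER_spark_jars_packages: 'org.apache.spark:spark-avro_2.12:3.3.2'
# keep token authentication, but with a fixed token instead of a random one
JUPYTER_tokenless: false
JUPYTER_token: 'platys'
```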
Config | Default | Since | Description |
---|---|---|---|
OpenTelemetry Collector | |||
OTEL_COLLECTOR_enable |
false |
1.14.0 | Generate OTEL Collector service |
OTEL_COLLECTOR_use_custom_conf |
false |
1.14.0 | Use a otel-collector-config.yaml config file placed into custom-conf/otel-collector folder for configuring OTEL Collector? |
Zipkin | |||
ZIPKIN_enable |
false |
1.13.0 | Generate Zipkin service |
ZIPKIN_storage_type |
mem |
1.13.0 | Selects the storage type to use for Zipkin storage, one of mem , cassandra3 , mysql or elasticsearch . |
ZIPKIN_collect_kafka |
false |
1.13.0 | Collect traces from Kafka |
ZIPKIN_debug |
false |
1.13.0 | Enable debug logging |
Jaeger | |||
JAEGER_enable |
false |
1.14.0 | Generate Jaeger service |
JAEGER_zipkin_port |
false |
1.14.0 | Exposes Zipkin compatible REST API on this port. |
Pitchfork | |||
PITCHFORK_enable |
false |
1.14.0 | Generate Pitchfork service. |
PITCHFORK_server_port |
9413 |
1.14.0 | HTTP port where Pitchfork is listening. |
PITCHFORK_use_logging |
false |
1.14.0 | Enable the logging of spans for troubleshooting. |
PITCHFORK_use_zipkin_http |
false |
1.14.0 | If enabled Pitchfork will forward spans to an HTTP Zipkin server. |
PITCHFORK_use_haystack_kafka |
false |
1.14.0 | If enabled Pitchfork will forward spans to a Kafka broker. |
PITCHFORK_haystack_kafka_topic |
proto-spans |
1.14.0 | The name of the Kafka topic where the spans will be submitted to. |
Promtail | |||
PROMTAIL_enable |
false |
1.14.0 | Generate Promtail service |
Loki | |||
LOKI_enable |
false |
1.14.0 | Generate Loki service |
Tempo | |||
TEMPO_enable |
false |
1.14.0 | Generate Tempo service |
TEMPO_volume_map_data |
false |
1.14.0 | Volume Map Tempo data folder |
TEMPO_with_tempo_query |
false |
1.14.0 | Grafana 7.4.x is not able to query Tempo directly and requires the tempo-query component as an intermediary. Use this property to switch it on. |
TEMPO_use_custom_conf |
false |
1.14.0 | Use a tempo.yaml config file placed into custom-conf/tempo folder for configuring Tempo? |
Grafana | |||
GRAFANA_enable |
false |
1.0.0 | Generate Grafana service |
GRAFANA_feature_toggles |
`` | 1.14.0 | Comma-separated list of preview features to enable, such as tempoSearch . Find available features here. |
GRAFANA_install_plugins |
false |
1.0.0 | Comma-separated list of the plugins to have installed upon first start (see the example at the end of this table). |
GRAFANA_auth_anonymous_enabled |
false |
1.17.0 | Allow anonymous authentication. |
Hawtio | |||
HAWTIO_enable |
false |
1.6.0 | Generate Hawtio service |
Spring Boot Admin | |||
SPRING_BOOT_ADMIN_enable |
false |
1.6.0 | Generate Spring Boot Admin service |
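The tracing and dashboarding services above are typically enabled together. A hypothetical override fragment for a custom config YAML, assuming the flat `key: value` layout of the default config.yml (the feature toggle and plugin names are only examples):

```yaml
# log and trace backends
LOKI_enable: true
PROMTAIL_enable: true
TEMPO_enable: true
# Grafana as the UI on top of them
GRAFANA_enable: true
GRAFANA_feature_toggles: 'tempoSearch'
GRAFANA_install_plugins: 'grafana-clock-panel'
```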
Config | Default | Since | Description |
---|---|---|---|
Metabase | |||
METABASE_enable |
false |
1.14.0 | Generate Metabase service |
METABASE_db_type |
h2 |
1.14.0 | The database type to use, either h2 or postgres . |
METABASE_postgres_dbname |
metabasedb |
1.14.0 | the name of the database, if METABASE_db_type is set to postgres . |
METABASE_postgres_user |
metabasedb |
1.14.0 | the name of the database user, if METABASE_db_type is set to postgres . |
METABASE_postgres_password |
abc123! |
1.14.0 | the password of the database user, if METABASE_db_type is set to postgres . |
METABASE_mysql_dbname |
metabasedb |
1.17.0 | the name of the database, if METABASE_db_type is set to mysql . |
METABASE_mysql_user |
metabasedb |
1.17.0 | the name of the database user, if METABASE_db_type is set to mysql . |
METABASE_mysql_password |
abc123! |
1.17.0 | the password of the database user, if METABASE_db_type is set to mysql . |
METABASE_volume_map_data |
false |
1.14.0 | Volume Map Metabase data folder? |
METABASE_query_caching_enabled |
false |
1.17.0 | Enabling caching will save the results of queries that take a long time to run. |
Superset | |||
SUPERSET_enable |
false |
1.4.0 | Generate Superset service |
SUPERSET_provision_example |
false |
1.11.0 | Provision Superset examples? |
Redash | |||
REDASH_enable |
false |
1.7.0 | Generate Redash visualization service |
Smashing | |||
SMASHING_enable |
false |
1.8.0 | Generate Smashing dashboard service |
SMASHING_volume_map_dashboards |
false |
1.8.0 | Map dashboards folder into the container to add your own dashboards. |
SMASHING_volume_map_jobs |
false |
1.8.0 | Map jobs folder into the container to add your own jobs. |
SMASHING_volume_map_widgets |
false |
1.8.0 | Map widgets folder into the container to add widgets. |
SMASHING_install_gems |
false |
1.8.0 | A list of additional gem names to install. |
SMASHING_install_widgets |
false |
1.8.0 | A list of gist IDs of additional widgets to install. |
Tipboard | |||
TIPBOARD_enable |
false |
1.8.0 | Generate Tipboard dashboard service |
TIPBOARD_volume_map_dashboards |
false |
1.8.0 | Map dashboards folder into the container to add your own dashboards. Place the xxxx.yaml layout files in the ./scripts/tipboard folder and it will be mapped to the right place. |
TIPBOARD_api_key |
e2c3275d0e1a4bc0da360dd225d74a43 |
1.8.0 | the API key to use when sending data to the tipboard tiles. |
TIPBOARD_project_name |
sample |
1.8.0 | name of the project. |
TIPBOARD_port |
7272 |
1.8.0 | port where tipboard binds to |
TIPBOARD_redis_host |
redis-1 |
1.8.0 | the redis server tipboard is connecting to |
TIPBOARD_redis_port |
6379 |
1.8.0 | the redis port tipboard is connecting to |
TIPBOARD_redis_password |
`` | 1.8.0 | the password for connecting to redis, no authentication to redis if empty |
TIPBOARD_redis_db |
4 | 1.8.0 | the redis database to use |
TIPBOARD_flipboard_interval |
0 | 1.8.0 | the interval in seconds to flip(rotate) multiple dashboards periodically |
TIPBOARD_flipboard_sequence |
'' | 1.8.0 | set the dashboards you want to flip(rotate), if not all should be included. Specify a list of dashboard names: 'my_first_dashboard', 'my_third_dashboard' |
Chartboard | |||
CHARTBOARD_enable |
false |
1.8.0 | Generate Chartboard dashboard service |
CHARTBOARD_volume_map_dashboards |
false |
1.8.0 | Map dashboards folder into the container to add your own dashboards. Place the xxxx.yaml layout files in the ./scripts/chartboard folder and it will be mapped to the right place. |
Retool | |||
RETOOL_enable |
false |
1.15.0 | Generate Retool low-code platform service |
ToolJet | |||
TOOLJET_enable |
false |
1.17.0 | Generate ToolJet low-code platform service |
Streamlit | |||
STREAMLIT_enable |
false |
1.17.0 | Generate Streamlit application(s). |
STREAMLIT_image |
python |
1.17.0 | The base docker image to use to install Streamlit into upon start. |
STREAMLIT_artefacts_folder |
./scripts/streamlit/apps |
1.17.0 | The folder on the docker host, where the Streamlit application files are hosted. |
STREAMLIT_apps |
hello-world/hello-world.py |
1.17.0 | The name of the Streamlit application python file. By default the hello-world application is provisioned. Can be a single app or a comma-separated list of multiple applications, each started as a separate container. |
STREAMLIT_apps_description |
Streamlit Hello World App |
1.17.0 | The description of the Streamlit application. By default the hello-world application is provisioned. Can be a single string description or a comma-separated list of descriptions; must match the number of STREAMLIT_apps (see the example at the end of this table). |
STREAMLIT_requirements_files |
`` | 1.17.0 | The filename(s) of the requirements file to be used (please note that the script should be inside the folder mounted using STREAMLIT_artefacts_folder ). Can be a single file or a comma-separated list of files, must match the number of STREAMLIT_apps . |
STREAMLIT_python_packages |
`` | 1.17.0 | Python packages to install, as a space separated list of packages: <package-1> <package-2> . |
STREAMLIT_environment |
`` | 1.17.0 | A comma-separated list of key-value pairs to setup as environment variable in each of the streamlit applications: <env_var1>=<value>,<env_var2>=<value> . |
Baserow | |||
BASEROW_enable |
false |
1.16.0 | Generate Baserow service |
BASEROW_volume_map_data |
false |
1.16.0 | Volume map data folder into the Baserow service |
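The Streamlit properties above accept comma-separated lists whose entries have to line up one-to-one. A hypothetical override fragment with two applications follows; the second app, its description and the requirements files are invented purely for illustration:

```yaml
STREAMLIT_enable: true
# two apps, each started as its own container
STREAMLIT_apps: 'hello-world/hello-world.py,sales/dashboard.py'
STREAMLIT_apps_description: 'Streamlit Hello World App,Sales Dashboard'
# one requirements file per app, relative to STREAMLIT_artefacts_folder
STREAMLIT_requirements_files: 'hello-world/requirements.txt,sales/requirements.txt'
# environment variables passed to every Streamlit container
STREAMLIT_environment: 'STAGE=dev,API_URL=http://localhost:8080'
```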
Config | Default | Since | Description |
---|---|---|---|
Memcached | |||
MEMCACHED_enable |
false |
1.7.0 | Generate Memcached service |
Redis | |||
REDIS_enable |
false |
1.0.0 | Generate Redis service |
REDIS_replicasets |
0 |
1.9.0 | The number of replica instances to generate. |
REDIS_volume_map_data |
false |
1.9.0 | Volume map data folder into the Redis service |
REDIS_allow_empty_password |
true |
1.9.0 | Allow access to the database without a password? |
REDIS_password |
`` | 1.9.0 | Set the Redis server password, only used if REDIS_allow_empty_password is set to false . Alternative to setting it via REDIS_password_file . |
REDIS_password_file |
`` | 1.17.0 | The name of a password file in ./security/redis/ which holds the Redis server password. Only used if REDIS_allow_empty_password is set to false . Alternative to setting it via REDIS_password . |
REDIS_acl_file |
`` | 1.17.0 | The name of an ACL file available in ./security/redis/ which holds the Redis access control lists, controlling which commands can be executed and which keys can be accessed. |
REDIS_disable_commands |
`` | 1.9.0 | Comma-separated list of Redis commands to disable. Defaults to empty. |
REDIS_aof_enable |
true |
1.17.0 | Use the Append Only File persistence. If set to false will disable it. |
REDIS_rdb_enable |
false |
1.17.0 | Use the RDB persistence? If set to false , it is disabled, and REDIS_rdb_save_policy will have no effect. |
REDIS_rdb_save_policy |
`` | 1.17.0 | The data synchronization strategy, a space-delimited list of key-value pairs in the form nof-seconds#nof-changes . For example, 60#1000 30#2000 will dump the data to disk every 60 seconds if at least 1000 keys have changed and every 30 seconds if at least 2000 keys have changed (see the example at the end of this table). |
REDIS_io_threads |
`` | 1.17.0 | The number of I/O threads to use in Redis, to divert from the normally single-threaded operation mode of Redis. Available since Redis 6.0. By default multi-threading is disabled, set this parameter to anything > 1 to enable it. |
REDIS_io_threads_do_reads_enable |
false |
1.17.0 | Usually threading reads does not help much. Set this to true to enable threading reads. |
REDIS_overrides_config_file |
`` | 1.17.0 | The name of an overrides file available in ./custom-conf/redis/ containing only the settings you wish to override. Can be combined with the other specific settings available above but be careful to not override a value set through the specific configs. |
REDIS_log_level |
notice |
1.17.0 | Specify the server verbosity level, one of nothing (no logging), warning (only very important, critical messages are logged), notice (moderately verbose, what you want for production), verbose (many rarely useful info, less verbose than debug) and debug . |
Redis Stack | |||
REDIS_STACK_enable |
false |
1.17.0 | Generate Redis Stack service |
REDIS_STACK_volume_map_data |
false |
1.17.0 | Volume map data folder into the Redis Stack service |
REDIS_STACK_password |
`` | 1.17.0 | Set the Redis server password, only used if REDIS_allow_empty_password is set to false . Alternative to setting it via REDIS_password_file . |
Redis Exporter | |||
REDIS_EXPORTER_enable |
false |
1.17.0 | Generate Redis (Metrics) Exporter service |
REDIS_EXPORTER_password_file |
`` | 1.17.0 | The name of the password file to use, to get the password for the redis database. It's not the same as the one used for Redis itself. |
REDIS_EXPORTER_check_keys |
`` | 1.17.0 | Comma separated list of key patterns to export value and length/size, eg: db3=user_count will export key user_count from db 3 . db defaults to 0 if omitted. |
REDIS_EXPORTER_check_single_keys |
`` | 1.17.0 | Comma separated list of keys to export value and length/size, eg: db3=user_count will export key user_count from db 3 . db defaults to 0 if omitted. |
REDIS_EXPORTER_count_keys |
`` | 1.17.0 | Comma separated list of patterns to count, eg: db3=sessions:* will count all keys with prefix sessions: from db 3 . db defaults to 0 if omitted. |
REDIS_EXPORTER_redis_only_metrics |
false |
1.17.0 | Whether to also export go runtime metrics. |
REDIS_EXPORTER_incl_config_metrics |
false |
1.17.0 | Whether to include all config settings as metrics. |
REDIS_EXPORTER_incl_system_metrics |
false |
1.17.0 | Whether to include system metrics like total_system_memory_bytes . |
REDIS_EXPORTER_redact_config_metrics |
false |
1.17.0 | Whether to redact config settings that include potentially sensitive information like password. |
REDIS_EXPORTER_ping_on_connect |
false |
1.17.0 | Whether to ping the redis instance after connecting and record the duration as a metric. |
REDIS_EXPORTER_export_client_list |
false |
1.17.0 | Whether to include the client's port when exporting the client list. |
REDIS_EXPORTER_check_key_groups |
`` | 1.17.0 | Comma separated list of LUA regexes for classifying keys into groups. The regexes are applied in specified order to individual keys, and the group name is generated by concatenating all capture groups of the first regex that matches a key. A key will be tracked under the unclassified group if none of the specified regexes matches it. |
REDIS_EXPORTER_max_distinct_key_groups |
0 |
1.17.0 | Maximum number of distinct key groups that can be tracked independently per Redis database. If exceeded, only key groups with the highest memory consumption within the limit will be tracked separately, all remaining key groups will be tracked under a single overflow key group. |
Redis Insight | |||
REDIS_INSIGHT_enable |
false |
1.9.0 | Generate Redis Insight service |
Redis Commander | |||
REDIS_COMMANDER_enable |
false |
1.9.0 | Generate Redis Commander service |
Apache Cassandra | |||
CASSANDRA_enable |
false |
1.0.0 | Generate Cassandra service |
CASSANDRA_major_version |
3 |
1.13.0 | Which major version of Cassandra to use, one of 3 or 4 . The exact version can then be specified using either the CASSANDRA_3_version or CASSANDRA_4_version setting. |
CASSANDRA_volume_map_data |
false |
1.15.0 | Volume map data folder into the Cassandra service |
CASSANDRA_nodes |
1 |
1.15.0 | number of Cassandra nodes to form a Cassandra cluster. Support for up to 5 nodes. |
CASSANDRA_local_jmx |
yes |
1.16.0 | set it to no to open JMX port for remote access while requiring authentication. |
CASSANDRA_username |
cassandra |
1.16.0 | Cassandra user name. |
CASSANDRA_password |
cassandra |
1.16.0 | Cassandra user password. |
Cassandra Web | |||
CASSANDRA_WEB_enable |
false |
1.16.0 | Generate Cassandra Web service. |
Reaper | |||
REAPER_enable |
false |
1.16.0 | Generate Reaper service. |
MongoDB | |||
MONGO_enable |
false |
1.2.0 | Generate MongoDB service |
MONGO_nodes |
1 |
1.3.0 | number of MongoDB nodes in replicaset |
MONGO_volume_map_data |
false |
1.17.0 | volume map the data folder into the MongoDB instance. |
MONGO_volume_map_log |
false |
1.17.0 | volume map the log folder into the MongoDB instance. |
MONGO_root_username |
`` | 1.17.0 | The username of the root user, if undefined or empty, then authentication is disabled. |
MONGO_root_password |
`` | 1.17.0 | The password of the root user, if undefined or empty, then authentication is disabled. |
MONGO_init_database |
`` | 1.17.0 | This variable allows you to specify the name of a database to be used for creation scripts in the ./init/mongo/ folder. |
Mongo Express | |||
MONGO_EXPRESS_enable |
false |
1.16.0 | Generate Mongo Express service |
MONGO_EXPRESS_editor_theme |
default |
1.16.0 | editor color theme, one of default , ambient , dracula , ... see here for more: https://codemirror.net/5/demo/theme.html#default |
Apache Solr | |||
SOLR_enable |
false |
1.0.0 | Generate Solr service |
Elasticsearch | |||
ELASTICSEARCH_enable |
false |
1.0.0 | Generate Elasticsearch service |
ELASTICSEARCH_major_version |
8 |
1.16.0 | Which major version of Elasticsearch to use, one of 7 or 8 . The exact version can then be specified using either the ELASTICSEARCH_7_version or ELASTICSEARCH_8_version setting. |
ELASTICSEARCH_edition |
oss |
1.16.0 | The Elasticsearch edition to use, either oss or elastic . |
Kibana | |||
KIBANA_enable |
false |
1.0.0 | Generate Kibana service (use ELASTICSEARCH_edition to specify if the open source or Elastic package should be used). |
Dejavu | |||
DEJAVU_enable |
false |
1.4.0 | Enable Dejavu Elasticsearch UI |
Cerebro | |||
CEREBRO_enable |
false |
1.4.0 | Generate Cerebro Elasticsearch UI service |
ElasticHQ | |||
ELASTICHQ_enable |
false |
1.4.0 | Generate ElasticHQ Elasticsearch UI service |
ElasticVue | |||
ELASTICVUE_enable |
false |
1.15.0 | Generate ElasticVue Elasticsearch UI service |
OpenSearch | |||
OPENSEARCH_enable |
false |
1.15.0 | Generate OpenSearch service |
OPENSEARCH_nodes |
1 |
1.15.0 | The number of nodes to generate |
OPENSEARCH_volume_map_data |
false |
1.15.0 | Volume map the data folder? |
OpenSearch Dashboards | |||
OPENSEARCH_DASHBOARDS_enable |
false |
1.0.0 | Generate OpenSearch Dashboards service |
Splunk | |||
SPLUNK_enable |
false |
1.17.0 | Generate Splunk service |
JanusGraph | |||
JANUSGRAPH_enable |
false |
1.16.0 | Generate JanusGraph service |
JANUSGRAPH_props_template |
berkeleyje |
1.16.0 | JanusGraph properties file template to use, one of berkeleyje , berkeleyje-es , berkeleyje-lucene , cql-es , cql , inmemory . |
Gremlin Console | |||
GREMLIN_CONSOLE_enable |
false |
1.16.0 | Generate Gremlin Console service |
GREMLIN_CONSOLE_remote_host |
janusgraph |
1.16.0 | The hostname of an external Gremlin Server instance to connect to from gremlin console. |
Invana Engine | |||
INVANA_ENGINE_enable |
false |
1.16.0 | Generate Invana Engine service |
Invana Studio | |||
INVANA_STUDIO_enable |
false |
1.16.0 | Generate Invana Studio service |
Neo4J | |||
NEO4J_enable |
false |
1.0.0 | Generate Neo4J service |
NEO4J_major_version |
5 |
1.17.0 | The Neo4j major version to use, either 4 or 5 |
NEO4J_volume_map_data |
false |
1.0.0 | Volume map data folder into the Neo4J service |
NEO4J_volume_map_logs |
false |
1.0.0 | Volume map logs folder into the Neo4J service |
NEO4J_dbms_logs_debug_level |
INFO |
1.15.0 | Debug log level threshold. One of INFO , DEBUG , WARN , ERROR or NONE . |
NEO4J_dbms_memory_pagecache_size |
`` | 1.15.0 | The amount of memory to use for mapping the store files, in bytes (or kilobytes with the k suffix, megabytes with m and gigabytes with g ). If left empty, Neo4J uses 512M . |
NEO4J_dbms_memory_heap_max_size |
`` | 1.15.0 | Maximum heap size. By default it is calculated based on available system resources. If left empty, Neo4J uses 512M . |
NEO4J_plugins |
`` | 1.15.0 | a comma-separated list of NEO4J plugins to install. See here for all supported plugins and the keys to use here (i.e. apoc or streams ). |
NEO4J_dbms_security_procedures_unrestricted |
`` | 1.17.0 | a comma-separated list of procedures and user-defined functions that are allowed full access to the database. The list may contain both fully-qualified procedure names, and partial names with the wildcard *. Note that this enables these procedures to bypass security. Use with caution. |
NEO4J_extension_script |
`` | 1.15.0 | pointing to a location in a folder you need to mount (i.e. in ./init/neo4j ). You can use this script to perform an additional initialization or configuration of the environment, for example, loading credentials or dynamically setting neo4j.conf settings, etc. |
NEO4J_source_enabled |
true |
1.15.0 | Enable the Source plugin? You also have to either add streams to the NEO4J_plugins setting or manually download the plugin to the ./plugins/neo4j/ folder. |
NEO4J_topic_name |
neo4j |
1.15.0 | The topic name to route the change messages to. |
NEO4J_streams_source_topic_nodes_neo4j |
`` | 1.15.0 | The pattern of nodes whose change events are routed to the neo4j topic by the Neo4j Streams source plugin (maps to the streams.source.topic.nodes.neo4j setting). |
NEO4J_kafka_acks |
1 |
1.15.0 | The Kafka acks setting to use when writing to the Kafka topic. |
NEO4J_kafka_transactional_id |
`` | 1.15.0 | The Kafka transactional.id setting to use when writing to the Kafka topic. |
NEO4J_admin_password |
abc123abc123 |
1.16.0 | The Password of the admin user. A password must be at least 8 characters. |
NEO4J_server_config_strict_validation_enabled |
true |
1.17.0 | Strict configuration validation will prevent the database from starting up if unknown configuration options are specified in the neo4j settings namespace (such as dbms., cypher., etc) or if settings are declared multiple times. Only applicable if NEO4J_major_version is set to 5 . |
Quine | |||
QUINE_enable |
false |
1.15.0 | Generate Quine service |
Memgraph | |||
MEMGRAPH_enable |
false |
1.16.0 | Generate Memgraph service |
MEMGRAPH_edition |
platform |
1.16.0 | The "edition" to use, either platform for the Memgraph-Platform or db for just the Memgraph database (use MEMGRAPH_with_mage to specify if you want support for the mage graph library). |
MEMGRAPH_with_mage |
false |
1.16.0 | Use the docker image with the mage graph algorithms included? |
MEMGRAPH_volume_map_data |
false |
1.16.0 | Volume map data folder into the Memgraph service |
MEMGRAPH_volume_map_log |
false |
1.16.0 | Volume map log folder into the Memgraph service |
MEMGRAPH_volume_map_custom_conf |
false |
1.16.0 | Volume map custom-conf folder into the Memgraph service. Place a memgraph.conf file into custom-conf/memgraph to configure the Memgraph instance. |
ArcadeDB | |||
ARCADEDB_enable |
false |
1.16.0 | Generate ArcadeDB service |
ARCADEDB_volume_map_data |
false |
1.16.0 | Volume map log folder into the ArcadeDB service? |
ARCADEDB_root_password |
abc123abc123 |
1.16.0 | The password of the root user (must be at least 8 characters long) |
ARCADEDB_provision_sample_data |
false |
1.16.0 | Should sample data be provisioned with ArcadeDB? |
Dgraph | |||
DGRAPH_enable |
false |
1.11.0 | Generate Dgraph service |
Stardog | |||
STARDOG_enable |
false |
1.7.0 | Generate Stardog service |
STARDOG_volume_map_data |
false |
1.7.0 | Volume map data folder into the Stardog service |
STARDOG_STUDIO_enable |
false |
1.7.0 | Generate Stardog Studio UI service |
GraphDB | |||
GRAPHDB_enable |
false |
1.11.0 | Generate GraphDB service |
GRAPHDB_edition |
free |
1.11.0 | the GraphDB edition to use, either free , se or ee . |
GRAPHDB_volume_map_data |
false |
1.11.0 | Volume map data folder into the GraphDB service |
GRAPHDB_heap_size |
2G |
1.11.0 | GraphDB heap size |
GRAPHDB_workbench_import_dir |
/opt/graphdb/examples |
1.11.0 | the location of the file import folder |
InfluxData InfluxDB | |||
INFLUXDB_enable |
false |
1.1.0 | Generate InfluxDB service |
INFLUXDB_volume_map_data |
false |
1.1.0 | Volume map data folder into the InfluxDB service |
InfluxData Telegraf | |||
INFLUXDB_TELEGRAF_enable |
false |
1.2.0 | Generate Telegraf service |
InfluxData Chronograf | |||
INFLUXDB_CHRONOGRAF_enable |
false |
1.2.0 | Generate Chronograf service |
INFLUXDB_CHRONOGRAF_volume_map_data |
false |
1.1.0 | Volume map data folder into the Chronograf service |
InfluxData Kapacitor | |||
INFLUXDB_KAPACITOR_enable |
false |
1.1.0 | Generate Kapacitor service |
INFLUXDB_KAPACITOR_volume_map_data |
false |
1.2.0 | Volume map data folder into the Kapacitor service |
InfluxData InfluxDB v2 | |||
INFLUXDB2_enable |
false |
1.1.0 | Generate InfluxDB 2.0 service |
INFLUXDB2_volume_map_data |
false |
1.1.0 | Volume map data folder into the InfluxDB 2.0 service |
INFLUXDB2_volume_map_config |
false |
1.14.0 | Volume map config folder into the InfluxDB service |
INFLUXDB2_username |
influx |
1.14.0 | Name of the initial super user for InfluxDB2. |
INFLUXDB2_password |
abc123abc123! |
1.14.0 | Password of the initial super user for InfluxDB2. |
INFLUXDB2_org |
platys |
1.14.0 | Name of the system's initial organization. |
INFLUXDB2_bucket |
demo-bucket |
1.14.0 | Name of the system's initial bucket. |
INFLUXDB2_admin_token |
`` | 1.14.0 | The authentication token to associate with the system's initial super-user. If not set, a token will be auto-generated by the system. |
QuestDB | |||
QUESTDB_enable |
false |
1.12.0 | Generate QuestDB service |
QUESTDB_volume_map_data |
false |
1.12.0 | Volume map data folder into the QuestDB service |
Kudu | |||
KUDU_enable |
false |
1.8.0 | Generate Kudu service |
Chroma | |||
CHROMA_enable |
false |
1.17.0 | Generate Chroma service |
CHROMA_volume_map_data |
false |
1.17.0 | Volume map data folder into service. |
CHROMA_auth_token |
`` | 1.17.0 | To enable authentication, set the auth token. |
Qdrant | |||
QDRANT_enable |
false |
1.17.0 | Generate Qdrant service |
QDRANT_volume_map_data |
false |
1.17.0 | Volume map data folder into service. |
Weaviate | |||
WEAVIATE_enable |
false |
1.17.0 | Generate Weaviate service |
WEAVIATE_volume_map_data |
false |
1.17.0 | Volume map data folder into service. |
Milvus | |||
MILVUS_enable |
false |
1.17.0 | Generate Milvus service |
MILVUS_volume_map_data |
false |
1.17.0 | Volume map data folder into service. |
Attu | |||
ATTU_enable |
false |
1.17.0 | Generate Attu service |
Vector Admin | |||
VECTOR_ADMIN_enable |
false |
1.17.0 | Generate Vector Admin service |
VECTOR_ADMIN_postgresql_database |
`` | 1.18.0 | the PostgreSQL database to use for connecting to the PostgreSQL. If not set, the value of the POSTGRESQL_database property is used. |
VECTOR_ADMIN_postgresql_user |
`` | 1.18.0 | the PostgreSQL user to use for connecting to the PostgreSQL database. If not set, the value of the POSTGRESQL_user property is used. |
VECTOR_ADMIN_postgresql_password |
`` | 1.18.0 | the PostgreSQL password to use for connecting to the PostgreSQL database. If not set, the value of the POSTGRESQL_password property is used. |
Hazelcast | |||
HAZELCAST_enable |
false |
1.13.0 | Generate Hazelcast service |
HAZELCAST_nodes |
1 |
1.13.0 | How many Hazelcast nodes should be generated |
HAZELCAST_volume_map_custom_config |
`` | 1.14.0 | Should a custom config file (hazelcast.yaml ) be mapped into the container. |
HAZELCAST_use_jet |
false |
1.14.0 | Should the Jet service be enabled. |
Hazelcast MC (Management Center) | |||
HAZELCAST_MC_enable |
false |
1.13.0 | Generate Hazelcast MC (Management Center) service |
Apache Ignite | |||
IGNITE_enable |
false |
1.13.0 | Generate Ignite service |
IGNITE_servers |
1 |
1.13.0 | How many Ignite servers should be generated |
IGNITE_option_libs |
`` | 1.13.0 | A list of modules that will be enabled for the node. |
Prometheus | |||
PROMETHEUS_enable |
false |
1.1.0 | Generate Prometheus service |
PROMETHEUS_volume_map_data |
false |
1.1.0 | Volume map data folder into the Prometheus service |
PROMETHEUS_volume_map_custom_config |
false |
1.17.0 | Volume map custom config (./custom-conf/prometheus/prometheus-config/prometheus.yml ) into the Prometheus service. |
Prometheus Pushgateway | |||
PROMETHEUS_PUSHGATEWAY_enable |
false |
1.17.0 | Generate Prometheus Pushgateway service |
Prometheus Node Exporter | |||
PROMETHEUS_NODEEXPORTER_enable |
false |
1.17.0 | Generate Prometheus Node Exporter service |
Prometheus Alertmanager | |||
PROMETHEUS_ALERTMANAGER_enable |
false |
1.17.0 | Generate Prometheus Alertmanager service |
Tile38 | |||
TILE38_enable |
false |
1.0.0 | Generate Tile38 service |
Etcd | |||
ETCD_enable |
false |
1.17.0 | Generate etcd service |
Etcd E3W | |||
ETCD_E3W_enable |
false |
1.17.0 | Generate etcd E3W UI service |
yugabyteDB | |||
YUGABYTE_enable |
false |
1.5.0 | Generate yugabyteDB service |
SingleStore | |||
SINGLE_STORE_enable |
false |
1.17.0 | Generate SingleStore service |
SINGLE_STORE_license |
`` | 1.17.0 | The license to use. You can sign up for a free SingleStore license. |
Oracle RDBMS Enterprise Edition (EE) | |||
ORACLE_EE_enable |
false |
1.13.0 | Generate Oracle service. To use this service, you need to have access to the private images located in the Docker Hub organisation specified in the private_docker_repository_name global property (defaults to trivadis ) and be logged in when starting the platform. |
ORACLE_EE_volume_map_data |
false |
1.13.0 | Volume map data folder into the Oracle service |
ORACLE_EE_password |
EAo4KsTfRR |
1.13.0 | The password to use for the SYS and SYSTEM user. |
ORACLE_EE_container_enable |
false |
1.13.0 | Enable a pluggable container database. |
Oracle RDBMS Express Edition (XE) | |||
ORACLE_XE_enable |
false |
1.13.0 | Generate Oracle XE service. |
ORACLE_XE_edition |
regular |
1.13.0 | The image flavour to use, one of regular , slim or full |
ORACLE_XE_use_faststart |
false |
1.16.0 | Use an expanded and ready-to-go database inside the image? Trades image size on disk for faster startup time. |
ORACLE_XE_volume_map_data |
false |
1.13.0 | Volume map data folder into the Oracle service |
ORACLE_XE_database |
`` | 1.13.0 | Set this variable to a non-empty string to create a new pluggable database with the name specified in this variable. |
ORACLE_XE_password |
EAo4KsTfRR |
1.13.0 | The password to use for the SYS and SYSTEM user. |
ORACLE_XE_random_password |
`` | 1.13.0 | Set this variable to a non-empty value, like yes, to generate a random initial password for the SYS and SYSTEM users. The generated password will be printed to stdout (ORACLE PASSWORD FOR SYS AND SYSTEM: ... ). |
ORACLE_XE_app_user |
`` | 1.13.0 | Set this parameter to a non-empty string to create a new database schema user with the name specified in this variable. The user will be created in the default XEPDB1 pluggable database. If ORACLE_XE_database has been specified, the user will also be created in that pluggable database. |
ORACLE_XE_app_user_password |
`` | 1.13.0 | Set this variable to a non-empty string to define a password for the database schema user specified by ORACLE_XE_app_user . |
Oracle Database Free | |||
ORACLE_FREE_enable |
false |
1.16.0 | Generate Oracle Database Free service. |
ORACLE_FREE_edition |
regular |
1.16.0 | The image flavour to use, one of regular , slim or full |
ORACLE_FREE_use_faststart |
false |
1.16.0 | Use an expanded and ready-to-go database inside the image? Trades image size on disk for faster startup time. |
ORACLE_FREE_volume_map_data |
false |
1.16.0 | Volume map data folder into the Oracle service |
ORACLE_FREE_database |
`` | 1.16.0 | Set this variable to a non-empty string to create a new pluggable database with the name specified in this variable. |
ORACLE_FREE_password |
EAo4KsTfRR |
1.16.0 | The password to use for the SYS and SYSTEM user. |
ORACLE_FREE_random_password |
`` | 1.16.0 | Set this variable to a non-empty value, like yes, to generate a random initial password for the SYS and SYSTEM users. The generated password will be printed to stdout (ORACLE PASSWORD FOR SYS AND SYSTEM: ... ). |
ORACLE_FREE_app_user |
`` | 1.16.0 | Set this parameter to a non-empty string to create a new database schema user with the name specified in this variable. The user will be created in the default FREEPDB1 pluggable database. If ORACLE_FREE_database has been specified, the user will also be created in that pluggable database. |
ORACLE_FREE_app_user_password |
`` | 1.16.0 | Set this variable to a non-empty string to define a password for the database schema user specified by ORACLE_FREE_app_user . |
Oracle SQLcl | |||
ORACLE_SQLCL_enable |
false |
1.15.0 | Generate Oracle SQLcl service |
Oracle REST Data Service | |||
ORACLE_REST_DATA_SERVICE_enable |
false |
1.5.0 | Generate Oracle REST Data Service (ORDS) |
MySQL | |||
MYSQL_enable |
false |
1.0.0 | Generate MySQL service |
MYSQL_database |
sample |
1.16.0 | the name of the MYSQL database |
MYSQL_user |
sample |
1.16.0 | the name of the MYSQL user |
MYSQL_password |
sample |
1.16.0 | the MYSQL user password |
MariaDB | |||
MARIADB_enable |
false |
1.17.0 | Generate MariaDB service |
MARIADB_volume_map_data |
false |
1.17.0 | Volume map data folder into the MariaDB service |
MARIADB_database |
sample |
1.17.0 | the name of the MariaDB database |
MARIADB_user |
sample |
1.17.0 | the name of the MariaDB user |
MARIADB_password |
sample |
1.17.0 | the MariaDB user password |
SQL Server | |||
SQLSERVER_enable |
false |
1.0.0 | Generate SQL Server service |
SQLSERVER_provision_adventure_works |
false |
1.15.0 | Provision the Adventureworks sample database with SQL Server? |
SQLSERVER_provision_adventure_works_edition |
oltp |
1.15.0 | Which edition of the Adventureworks sample database to provision, either oltp , datawarehouse or light . |
PostgreSQL | |||
POSTGRESQL_enable |
false |
1.0.0 | Generate PostgreSQL service |
POSTGRESQL_volume_map_data |
false |
1.0.0 | Volume map data folder into the Postgresql service |
POSTGRESQL_database |
postgres |
1.8.0 | Name of the Postgresql database |
POSTGRESQL_user |
demo |
1.8.0 | Name of the Postgresql user |
POSTGRESQL_password |
abc123! |
1.8.0 | Password for the Postgresql user |
POSTGRESQL_multiple_databases |
demodb |
1.13.0 | A comma-separated list of Postgresql databases to create |
POSTGRESQL_multiple_users |
demo |
1.13.0 | A comma-separated list of Postgresql users to create. Must be of same size as POSTGRESQL_multiple_databases . |
POSTGRESQL_multiple_passwords |
abc123! |
1.13.0 | A comma-separated list of Postgresql passwords for the users to create. Must be of same size as POSTGRESQL_multiple_databases . |
POSTGRESQL_multiple_addl_roles |
`` | 1.16.0 | A comma-separated list of Postgresql roles to assign to the users. Must be of the same size as POSTGRESQL_multiple_databases . You can specify multiple roles for each user: role1 role2,role1 role2 . |
POSTGRESQL_wal_level |
`` | 1.15.0 | Configure Postgresql wal_level configuration setting. Either replica , minimal , logical . Use logical to enable CDC with Debezium. |
PostgREST | |||
POSTGREST_enable |
false |
1.11.0 | Generate PostgREST service |
pgAdmin | |||
PGADMIN_enable |
false |
1.13.0 | Generate pgAdmin service |
Adminer | |||
ADMINER_enable |
false |
1.0.0 | Generate Adminer RDBMS Admin UI service |
Cloudbeaver | |||
CLOUDBEAVER_enable |
false |
1.6.0 | Generate Cloudbeaver RDBMS Admin UI service |
CLOUDBEAVER_volume_map_workspace |
false |
1.12.0 | Volume map workspace folder into the Cloudbeaver service |
SQLPad | |||
SQLPAD_enable |
false |
1.11.0 | Generate SQLPad UI service |
SQL Chat | |||
SQLCHAT_enable |
false |
1.16.0 | Generate SQL Chat UI service |
SQLCHAT_api_key |
`` | 1.16.0 | OpenAI API key. |
SQLCHAT_api_endpoint |
https://api.openai.com |
1.16.0 | OpenAI API endpoint. |
SQLCHAT_database_less |
false |
1.16.0 | Set to true to start SQL Chat in database-less mode. |
NocoDB | |||
NOCODB_enable |
false |
1.15.0 | Generate NocoDB UI service |
NOCODB_volume_map_data |
false |
1.15.0 | Volume map data folder into the NocoDB service |
Quix | |||
QUIX_enable |
false |
1.6.0 | Generate Quix Database Notebook service |
Axon Server | |||
AXON_enable |
false |
1.0.0 | Generate Axon Server service |
EventStore | |||
EVENTSTORE_enable |
false |
1.12.0 | Generate EventStore service |
MinIO Object Storage | |||
MINIO_enable |
false |
1.0.0 | Generate Minio service |
MINIO_volume_map_data |
false |
1.1.0 | Volume map data folder into the Minio service |
MINIO_access_key |
V42FCGRVMK24JJ8DHUYG |
1.9.0 | The access key to be used for MinIO. |
MINIO_secret_key |
bKhWxVF3kQoLY9kFmt91l+tDrEoZjqnWXzY9Eza |
1.9.0 | The secret key to be used for MinIO. |
MINIO_buckets |
`` | 1.16.0 | A comma separated list of buckets to create upon start. |
MINIO_browser_enable |
true |
1.10.0 | To disable web browser access, set this value to false . |
MINIO_audit_webhook_enable |
false |
1.17.0 | Enable MinIO audit webhook service? |
MINIO_audit_webhook_endpoint |
`` | 1.17.0 | The HTTP endpoint of the audit webhook service. |
MINIO_audit_webhook_auth_token |
`` | 1.17.0 | An authentication token of the appropriate type for the audit webhook service endpoint. Omit for endpoints which do not require authentication. |
MINIO_audit_kafka_enable |
false |
1.17.0 | Configure MinIO to publish audit logs to a Kafka broker. |
MINIO_audit_kafka_topic |
minio-audit-log |
1.17.0 | The name of the Kafka topic where MinIO audit log events should be published to. |
MINIO_notify_snyc_enable |
false |
1.17.0 | Enables synchronous bucket notifications. |
MINIO_notify_webhook_enable |
false |
1.17.0 | Enable bucket notification notifications webhook service? |
MINIO_notify_webhook_endpoint |
`` | 1.17.0 | The HTTP endpoint of the notify webhook service. |
MINIO_notify_webhook_auth_token |
`` | 1.17.0 | An authentication token of the appropriate type for the bucket notification webhook service endpoint. Omit for endpoints which do not require authentication. |
MINIO_notify_kafka_enable |
false |
1.17.0 | Configure MinIO to publish bucket notifications to a Kafka broker. |
MINIO_notify_kafka_topic |
minio-notify-log |
1.17.0 | The name of the Kafka topic where MinIO bucket notification events should be published to. |
MINIO_notify_mqtt_enable |
false |
1.17.0 | Configure MinIO to publish bucket notifications to a MQTT broker. |
MINIO_notify_mqtt_broker_endpoint |
tcp://mosquitto-1:1883 |
1.17.0 | The name of the MQTT server/broker endpoint where MinIO bucket notification events should be published to. MinIO supports TCP, TLS or Websocket connections. |
MINIO_notify_mqtt_topic |
minio-notify-log |
1.17.0 | The name of the MQTT topic where MinIO bucket notification events should be published to. |
MINIO_notify_mqtt_username |
`` | 1.17.0 | The MQTT username with which MinIO authenticates to the MQTT server/broker. |
MINIO_notify_mqtt_password |
`` | 1.17.0 | The MQTT password with which MinIO authenticates to the MQTT server/broker. |
MINIO_notify_redis_enable |
false |
1.17.0 | Configure MinIO to publish bucket notifications to a Redis database. |
MINIO_notify_redis_endpoint |
redis-1:6379/0 |
1.17.0 | The name of the Redis endpoint where MinIO bucket notification events should be published to. |
MINIO_notify_redis_key |
minio-key |
1.17.0 | Specify the Redis key to use for storing and updating events. Redis auto-creates the key if it does not exist. |
MINIO_notify_redis_format |
namespace |
1.17.0 | Specify the format of event data written to the Redis service endpoint. Either namespace or access . |
MINIO_notify_redis_password |
`` | 1.17.0 | The password with which MinIO authenticates to the Redis database. |
MINIO_lambda_webhook_enable |
false |
1.17.0 | Enable HTTP webhook endpoint for triggering an Object Lambda webhook endpoint? |
MINIO_lambda_webhook_endpoint |
`` | 1.17.0 | The HTTP endpoint of the Object Lambda webhook endpoint for a handler function. |
MINIO_labmda_webhook_auth_token |
`` | 1.17.0 | An authentication token of the appropriate type for the lambda webhook service endpoint. Omit for endpoints which do not require authentication. |
MinIO Console | |||
MINIO_CONSOLE_enable |
false |
1.11.0 | Generate Minio Console service |
Adminio UI | |||
ADMINIO_UI_enable |
false |
1.11.0 | Generate Adminio UI service |
Minio Web | |||
MINIO_WEB_enable |
false |
1.17.0 | Generate Minio Web service |
MINIO_WEB_s3_bucket_name |
`` | 1.17.0 | The Minio bucket name to serve the artefacts. |
MINIO_WEB_s3_prefix |
`` | 1.17.0 | S3 prefix to add when querying for a file in the minio bucket, e.g. myapp/ (make sure you add the / at the end). |
MINIO_WEB_default_html |
index.html |
1.17.0 | Default file to serve initially, e.g. homepage of the web app. |
MINIO_WEB_favicon |
`` | 1.17.0 | Name of a favicon file to use instead of the default one provided. The file has to be available in the ./custom-conf/minio-web/ folder of the platform. |
MINIO_WEB_md_template |
`` | 1.17.0 | Name of a markdown template file to use instead of the default one provided. Renders any markdown resources as HTML with the template. Template MUST have a placeholder {{ .Content }}. The file has to be available in the ./custom-conf/minio-web/ folder of the platform. |
MinIO KES | |||
MINIO_KES_enable |
false |
1.17.0 | Generate Minio KES service |
Iceberg REST Catalog | |||
ICEBERG_REST_CATALOG_enable |
false |
1.16.0 | Generate Iceberg REST Catalog service |
ICEBERG_REST_CATALOG_type |
jdbc |
1.16.0 | The catalog implementation to use, either jdbc , hive or nessie . |
Filestash | |||
FILESTASH_enable |
false |
1.11.0 | Generate Filestash service |
FILESTASH_set_default_config |
true |
1.17.0 | Set default configuration with the admin password set to abc123! with s3 , local , ftp and sftp storage backends enabled. |
S3 Manager | |||
S3MANGER_enable |
false |
1.11.0 | Generate S3Manager service |
AWS CLI | |||
AWSCLI_enable |
false |
1.0.0 | Generate AWSCLI service |
Azure CLI | |||
AZURECLI_enable |
false |
1.15.0 | Generate Azure CLI service |
Azure Storage Explorer | |||
AZURE_STORAGE_EXPLORER_enable |
false |
1.15.0 | Generate Azure Storage Explorer service |
LakeFS | |||
LAKEFS_enable |
false |
1.12.0 | Generate LakeFS service |
LAKEFS_blockstore_type |
s3 |
1.13.0 | Block adapter to use, one of [local , s3 , gs , azure , mem ]. This controls where the underlying data will be stored. |
LAKEFS_database_type |
postgresql |
1.16.0 | Key-Value database to use for storing the metadata of LakeFS, one of [local , postgresql ,dynamodb ]. |
LAKEFS_logging_level |
INFO |
1.13.0 | Logging level to output, one of [DEBUG , INFO , WARN , ERROR , NONE ]. |
ProjectNessie | |||
NESSIE_enable |
false |
1.16.0 | Generate Nessie service |
NESSIE_store_type |
in-memory |
1.16.0 | Storage adapter to use for Nessie, one of [in-memory, postgresql, mongodb]. This controls where the underlying data will be stored. |
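As an illustration only, the snippet below sketches how some of the object-storage and catalog settings above could be overridden in a custom config YAML file. It assumes the flat key/value layout of the default config.yml; all values are purely illustrative.

```yaml
# Sketch of a custom config override (flat key/value layout assumed)
ICEBERG_REST_CATALOG_enable: true
ICEBERG_REST_CATALOG_type: nessie     # back the REST catalog with Nessie

NESSIE_enable: true
NESSIE_store_type: postgresql         # persist Nessie metadata in PostgreSQL

LAKEFS_enable: true
LAKEFS_blockstore_type: s3            # store LakeFS data on S3-compatible storage
LAKEFS_logging_level: WARN
```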
Config | Default | Since | Description |
---|---|---|---|
Trino | |||
TRINO_enable |
false |
1.11.0 | Enable Trino service. |
TRINO_install |
single |
1.11.0 | Install a `single` Trino node or a Trino `cluster`. |
TRINO_workers |
3 |
1.6.0 | The number of Trino workers to set up for a Trino cluster, if TRINO_install is set to cluster. |
TRINO_edition |
starburstdata |
1.11.0 | The Trino edition to use, either oss or starburstdata. |
TRINO_auth_enabled |
false |
1.16.0 | Enable password file authentication? A default password file is provided with 3 users (admin , userA and userB ). You can specify a custom password file in custom-conf/trino/password.db if TRINO_auth_use_custom_password_file is set to true . |
TRINO_auth_use_custom_password_file |
false |
1.16.0 | Use a custom password file? The password file has to be specified in custom-conf/trino/password.db . |
TRINO_auth_use_custom_certs |
false |
1.16.0 | Use custom self-signed certificates? The keystore has to be created in custom-conf/trino/certs. |
TRINO_auth_with_groups |
false |
1.16.0 | Enable mapping user names onto groups for easier access control and resource group management? Use file custom-conf/trino/security/group.txt for specifying the groups. |
TRINO_access_control_enabled |
false |
1.16.0 | Enable file-based access control where access to data and operations is defined by rules declared in manually-configured JSON files? Use file custom-conf/trino/security/rules.json to specify authorization rules for the whole cluster. |
TRINO_hive_storage_format |
ORC |
1.16.0 | The default file format used when creating new tables. |
TRINO_hive_compression_codec |
GZIP |
1.16.0 | The compression codec to use when writing files. Possible values are NONE , SNAPPY , LZ4 , ZSTD , or GZIP . |
TRINO_hive_views_enabled |
false |
1.16.0 | Support Hive Views defined in HiveQL. |
TRINO_hive_run_as_invoker |
false |
1.16.0 | Set to true to run the Hive Views in invoker security mode, if false Hive Views are run in definer security mode. |
TRINO_hive_legacy_translation |
false |
1.16.0 | Set to true to enable legacy behaviour which interprets any HiveQL query that defines a view as if it is written in SQL. It does not do any translation, but instead relies on the fact that HiveQL is very similar to SQL. |
TRINO_kafka_table_names |
`` | 1.11.0 | Comma-separated list of all tables (Kafka topics) provided by the Kafka catalog. Only effective if KAFKA_enable is set to true. |
TRINO_kafka_default_schema |
`` | 1.16.0 | Default schema name for "Kafka tables". Only effective if KAFKA_enable is set to true. |
TRINO_event_listeners |
`` | 1.17.0 | A comma-separated list of event listener plugins to register. There should be a directory with the same name in plugins/trino/<TRINO_event_listener> holding the implementation and a configuration file <TRINO_event_listener>.properties in conf/trino . |
TRINO_postgresql_database |
`` | 1.16.0 | the PostgreSQL database to use for connecting to PostgreSQL. If not set, the value of the POSTGRESQL_database property is used. |
TRINO_postgresql_user |
`` | 1.16.0 | the PostgreSQL user to use for connecting to the PostgreSQL database. If not set, the value of the POSTGRESQL_user property is used. |
TRINO_postgresql_password |
`` | 1.16.0 | the PostgreSQL password to use for connecting to the PostgreSQL database. If not set, the value of the POSTGRESQL_password property is used. |
TRINO_oracle_user |
`` | 1.14.0 | the Oracle user to use for connecting to an Oracle database. |
TRINO_oracle_password |
`` | 1.14.0 | the Oracle password to use for connecting to an Oracle database. |
TRINO_sqlserver_database |
`` | 1.16.0 | the SQLServer database to use for connecting to an SQLServer database. |
TRINO_sqlserver_user |
`` | 1.16.0 | the SQLServer user to use for connecting to an SQLServer database. |
TRINO_sqlserver_password |
`` | 1.16.0 | the SQLServer password to use for connecting to an SQLServer database. |
TRINO_redis_table_names |
`` | 1.17.0 | list of all tables provided by the Redis catalog |
TRINO_redis_stack_table_names |
`` | 1.17.0 | list of all tables provided by the Redis Stack catalog |
TRINO_with_tpch_catalog |
false |
1.16.0 | provide the TPCH catalog. |
TRINO_with_tpcds_catalog |
false |
1.16.0 | provide the TPCDS catalog. |
TRINO_with_memory_catalog |
false |
1.16.0 | provide the Memory catalog. |
TRINO_starburstdata_use_license |
false |
1.16.0 | if true, the starburstdata.license file will be mapped into the container. Only usable if TRINO_edition is set to starburstdata . This enables the additional security features, more connectors, a cost-based query optimizer and much more. |
TRINO_additional_catalogs |
`` | 1.16.0 | Provide a comma-separated list of additional catalog file names to be registered with Trino/Starburstdata. Only provide the name of the catalog, without the .properties extension, and place the file into ./custom-conf/trino/catalog. |
TRINO_additional_plugins |
`` | 1.17.0 | Provide a comma-separated list of additional plugins (folder name of the plugin) to be registered with Trino/Starburstdata. Place the plugin into ./plugins/trino/connector. A plugin can be a connector or a custom UDF. |
TRINO_CLI_enable |
true |
1.11.0 | Enable Trino CLI service. Enabled by default, if TRINO_enable is set to true . |
Presto | |||
PRESTO_enable |
false |
1.2.0 | Enable Presto service. |
PRESTO_install |
single |
1.6.0 | Install a single presto node or a presto cluster . |
PRESTO_workers |
3 |
1.6.0 | the number of presto workers to setup for a Presto cluster, if PRESTO_install is set to cluster . |
PRESTO_edition |
ahana |
1.11.0 | The Presto edition to use, either prestodb or ahana . |
PRESTO_CLI_enable |
true |
1.6.0 | Enable Presto CLI service. Enabled by default, if PRESTO_enable is set to true . |
Dremio | |||
DREMIO_enable |
false |
1.2.0 | Enable Dremio service. |
Apache Drill | |||
DRILL_enable |
false |
1.4.0 | Enable Apache Drill service. |
Hasura | |||
HASURA_enable |
false |
1.11.0 | Enable Hasura GraphQL server service. |
HASURA_log_level |
info |
1.16.0 | Set the logging level. Options: debug , info , warn or error . |
HASURA_admin_secret |
`` | 1.16.0 | Admin secret key, required to access the Hasura instance. This is mandatory when you use webhook or JWT. |
HASURA_pro_key |
`` | 1.16.0 | The pro key to enable Hasura EE. |
GraphQL Mesh | |||
GRAPHQL_MESH_enable |
false |
1.11.0 | Enable GraphQL Mesh service. |
Directus | |||
DIRECTUS_enable |
false |
1.16.0 | Enable Directus service. |
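As a hedged example, the following sketch shows how a Trino cluster with password-file authentication could be configured by overriding the settings above; it assumes the flat key/value layout of config.yml and the values are illustrative.

```yaml
# Sketch: Trino cluster with password-file authentication (illustrative values)
TRINO_enable: true
TRINO_install: cluster
TRINO_workers: 3
TRINO_edition: oss
TRINO_auth_enabled: true
TRINO_auth_use_custom_password_file: true   # expects custom-conf/trino/password.db
TRINO_with_tpch_catalog: true               # handy for smoke-testing queries
```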
Config | Default | Since | Description |
---|---|---|---|
Apache Druid | |||
DRUID_enable |
false |
1.4.0 | Generate Druid service |
DRUID_edition |
oss-sandbox |
1.4.0 | Generate single-server sandbox (oss-sandbox ) or cluster (oss-cluster ). Currently only oss-sandbox is supported. |
DRUID_volume_map_data |
false |
1.4.0 | Volume map data folder into the Druid service (currently has no impact). |
Apache Pinot | |||
PINOT_enable |
false |
1.12.0 | Generate Pinot service |
PINOT_servers |
1 |
1.12.0 | The number of Pinot server nodes to start in the cluster. |
PINOT_volume_map_data |
false |
1.12.0 | Volume map data folder into the Pinot service (currently has no impact). |
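A minimal sketch of overriding the analytics-store settings above (flat key/value layout of config.yml assumed, values illustrative):

```yaml
# Sketch: Druid sandbox plus a small Pinot cluster
DRUID_enable: true
DRUID_edition: oss-sandbox
PINOT_enable: true
PINOT_servers: 2          # number of Pinot server nodes
```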
Config | Default | Since | Description |
---|---|---|---|
Mosquitto (MQTT) | |||
MOSQUITTO_enable |
false |
1.0.0 | Generate Mosquitto service |
MOSQUITTO_nodes |
1 |
1.1.0 | number of Mosquitto nodes |
MOSQUITTO_volume_map_data |
false |
1.1.0 | Volume map data folder into the Mosquitto broker |
MQTT HiveMQ 3 | |||
HIVEMQ3_enable |
false |
1.1.0 | Generate HiveMQ 3.x service |
MQTT HiveMQ 4 | |||
HIVEMQ4_enable |
false |
1.1.0 | Generate HiveMQ 4.x service |
MQTT EMQX | |||
EMQX_enable |
false |
1.12.0 | Generate EMQX service |
EMQX_edition |
oss |
1.12.0 | Generate EMQX service, either oss for the open-source or enterprise for the enterprise edition. |
MQTT UI | |||
MQTT_UI_enable |
false |
1.0.0 | Generate MQTT UI service |
Cedalo Management Center | |||
CEDALO_MANAGEMENT_CENTER_enable |
false |
1.9.0 | Generate the Cedalo Management Center for Mosquitto MQTT broker. |
CEDALO_MANAGEMENT_CENTER_username |
cedalo |
1.16.0 | The name of the cedalo default user. |
CEDALO_MANAGEMENT_CENTER_password |
abc123! |
1.16.0 | The password of the cedalo default user. |
Thingsboard | |||
THINGSBOARD_enable |
false |
1.11.0 | Generate Thingsboard service |
THINGSBOARD_volume_map_data |
false |
1.11.0 | Volume map data folder into the Thingsboard service |
THINGSBOARD_volume_map_log |
false |
1.11.0 | Volume map log folder into the Thingsboard service |
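A minimal sketch of an MQTT-focused override (flat key/value layout of config.yml assumed, values illustrative):

```yaml
# Sketch: Mosquitto broker with MQTT UI and the Cedalo Management Center
MOSQUITTO_enable: true
MOSQUITTO_nodes: 1
MQTT_UI_enable: true
CEDALO_MANAGEMENT_CENTER_enable: true
CEDALO_MANAGEMENT_CENTER_username: cedalo
CEDALO_MANAGEMENT_CENTER_password: 'abc123!'
```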
Config | Default | Since | Description |
---|---|---|---|
ActiveMQ | |||
ACTIVEMQ_enable |
false |
1.0.0 | Generate ActiveMQ service |
ACTIVEMQ_edition |
classic |
1.17.0 | The ActiveMQ component to use, either classic or artemis . |
ACTIVEMQ_user |
activemq |
1.17.0 | The ActiveMQ user. |
ACTIVEMQ_password |
abc123! |
1.17.0 | The ActiveMQ password to use for the user. |
ACTIVEMQ_volume_map_data |
false |
1.6.0 | Volume map data folder into the ActiveMQ broker |
RabbitMQ | |||
RABBITMQ_enable |
false |
1.6.0 | Generate RabbitMQ service |
RABBITMQ_volume_map_data |
false |
1.6.0 | Volume map data folder into the RabbitMQ broker |
RABBITMQ_volume_map_logs |
false |
1.6.0 | Volume map logs folder into the RabbitMQ broker |
Solace PubSub+ | |||
SOLACE_PUBSUB_enable |
false |
1.17.0 | Generate Solace PubSub+ service |
SOLACE_PUBSUB_volume_map_data |
false |
1.17.0 | Volume map data folder into the Solace PubSub+ broker |
SOLACE_PUBSUB_username |
admin |
1.17.0 | Username of the admin user |
SOLACE_PUBSUB_password |
abc123! |
1.17.0 | Password of the admin user |
Solace Kafka Proxy | |||
SOLACE_KAFKA_PROXY_enable |
false |
1.17.0 | Generate Solace Kafka Proxy service |
SOLACE_KAFKA_PROXY_vpn_name |
default |
1.17.0 | the Message VPN of the Solace broker to connect to |
SOLACE_KAFKA_PROXY_separators |
_. |
1.17.0 | if the Kafka topic contains a "level separator", this will convert it into a Solace topic level separator / . Can take multiple characters, e.g.: _. will convert either underscore or period to a slash. |
Pure FTPd | |||
PURE_FTPD_enable |
false |
1.17.0 | Generate Pure FTPd service |
PURE_FTPD_volume_map_data |
false |
1.17.0 | Volume map the local data folder into the Pure FTPd service |
PURE_FTPD_volume_map_data_transfer |
false |
1.17.0 | Volume map the local data-transfer folder into the Pure FTPd service |
PURE_FTPD_username |
ftp |
1.17.0 | The username of the user on the Pure FTPd Server |
PURE_FTPD_password |
abc123! |
1.17.0 | The password of the user on the Pure FTPd Server |
PURE_FTPD_home |
/home/ftp-data |
1.17.0 | The home directory on the Pure FTPd Server |
SFTP | |||
SFTP_enable |
false |
1.16.0 | Generate SFTP service |
SFTP_volume_map_data |
false |
1.17.0 | Volume map the local data folder into the SFTP service |
SFTP_volume_map_data_transfer |
false |
1.17.0 | Volume map the local data-transfer folder into the SFTP service |
SFTP_username |
ftp |
1.16.0 | The username of the user on the SFTP Server |
SFTP_password |
abc123! |
1.16.0 | The password of the user on the SFTP Server |
SFTP_home |
ftp-data |
1.16.0 | The home directory on the SFTP Server |
FileZilla | |||
FILEZILLA_enable |
false |
1.16.0 | Generate FileZilla service |
MailDev | |||
MAILDEV_enable |
false |
1.17.0 | Generate MailDev service |
MAILDEV_smtp_port |
25 |
1.17.0 | Internal SMTP port to use. |
MAILDEV_web_disable |
false |
1.17.0 | Disable the use of the web interface. Useful for unit testing. |
Mailpit | |||
MAILPIT_enable |
false |
1.17.0 | Generate Mailpit service |
MAILPIT_smtp_port |
25 |
1.17.0 | Internal SMTP port to use. |
MAILPIT_volume_map_data |
false |
1.17.0 | Map the data folder into the container. |
MailHog | |||
MAILHOG_enable |
false |
1.17.0 | Generate MailHog service |
MAILHOG_smtp_port |
25 |
1.17.0 | Internal SMTP port to use. |
MAILHOG_storage_type |
memory |
1.17.0 | Set the message storage type to use, either memory or maildir or mongodb . |
MAILHOG_volume_map_data |
false |
1.17.0 | Map the data folder into the container. You also have to set the MAILHOG_storage_type to maildir for it to be effective. |
MAILHOG_mongo_uri |
mongo-1:27017 |
1.17.0 | The host and port for the MongoDB message storage, if MAILHOG_storage_type is set to mongodb |
MAILHOG_mongo_db |
mailhog |
1.17.0 | The database name for the MongoDB message storage, if MAILHOG_storage_type is set to mongodb |
MAILHOG_mongo_collection |
messages |
1.17.0 | The collection name for the MongoDB message storage, if MAILHOG_storage_type is set to mongodb |
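As a sketch only (flat key/value layout of config.yml assumed, values illustrative), a messaging and mail setup based on the settings above could look like this:

```yaml
# Sketch: ActiveMQ Artemis, an SFTP endpoint and MailHog backed by MongoDB
ACTIVEMQ_enable: true
ACTIVEMQ_edition: artemis
SFTP_enable: true
SFTP_volume_map_data_transfer: true
MAILHOG_enable: true
MAILHOG_storage_type: mongodb
MAILHOG_mongo_uri: mongo-1:27017
MAILHOG_mongo_db: mailhog
```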
Config | Default | Since | Description |
---|---|---|---|
Camunda BPM Platform | |||
CAMUNDA_BPM_PLATFORM_enable |
false |
1.14.0 | Generate Camunda BPM Platform service |
Camunda Optimize | |||
CAMUNDA_OPTIMIZE_enable |
false |
1.14.0 | Generate Camunda Optimize service |
Camunda Zeebe | |||
CAMUNDA_ZEEBE_enable |
false |
1.12.0 | Generate Camunda Zeebe service |
CAMUNDA_ZEEBE_volume_map_data |
false |
1.12.0 | Map the data folder into the container. |
Camunda Operate | |||
CAMUNDA_OPERATE_enable |
false |
1.12.0 | Generate Camunda Operate service |
Camunda ZeeQS - Zeebe Query Service | |||
CAMUNDA_ZEEQS_enable |
false |
1.12.0 | Generate Camunda ZeeQS service |
Softproject X4 Server | |||
X4_SERVER_enable |
false |
1.17.0 | Generate X4 Server service |
X4_SERVER_db_type |
h2 |
1.17.0 | The database type to use, either `h2`, `postgres` or `mssql`. |
IOEvent Cockpit | |||
IOEVENT_COCKPIT_enable |
false |
1.17.0 | Generate IOEvent Cockpit service |
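A minimal sketch of a workflow-oriented override (flat key/value layout of config.yml assumed, values illustrative):

```yaml
# Sketch: Camunda Zeebe with Operate and ZeeQS
CAMUNDA_ZEEBE_enable: true
CAMUNDA_ZEEBE_volume_map_data: true
CAMUNDA_OPERATE_enable: true
CAMUNDA_ZEEQS_enable: true
```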
Config | Default | Since | Description |
---|---|---|---|
Tyk API Gateway | |||
TYK_enable |
false |
1.16.0 | Generate Tyk API Gateway service |
TYK_edition |
oss |
1.16.0 | The edition of Tyk to use, either oss for Open Source or pro for Professional edition. |
TYK_secret |
abc123! |
1.16.0 | The secret for the gateway. |
TYK_PUMP_enable |
false |
1.16.0 | Generate Tyk Pump service |
TYK_PUMP_backend_type |
mongo |
1.16.0 | Backend to use for Tyk Pump, either mongo , postgres or kafka |
Kong API Gateway | |||
KONG_enable |
false |
1.16.0 | Generate Kong API Gateway service |
KONG_nodes |
1 |
1.16.0 | Number of Kong nodes to enable. |
KONG_use_declarative_config |
false |
1.16.0 | Map declarative configuration into the /kong folder. If enabled, place config file(s) into custom-conf/kong/ . |
KONG_use_db |
false |
1.16.0 | Should a database be used as a backend or should it run db-less? |
KONG_db_type |
postgres |
1.16.0 | The database backend to use for Kong, either `postgres` for the PostgreSQL database or `cassandra` for Apache Cassandra (deprecated by Kong and will no longer be available in Kong 4.x). |
KONG_log_level |
info |
1.16.0 | The log level to use for Kong service. |
KONG_volume_map_working |
false |
1.16.0 | Volume map working directory folder into the Kong service. |
KONG_volume_map_data |
false |
1.16.0 | Volume map data folder into the Kong service. |
KONG_license_data |
`` | 1.16.0 | The enterprise license, if you want to run the enterprise subscription. |
KONG_plugins |
`` | 1.16.0 | A list of custom plugins to enable. |
Kong Deck | |||
`KONG_DECK_enable` |
`false` |
1.16.0 | Generate Kong Deck service |
Konga | |||
`KONGA_enable` |
`false` |
1.16.0 | Generate Konga Admin UI service |
`KONGA_volume_map_data` |
`false` |
1.16.0 | Volume map folder into Konga service? |
Kong Admin UI | |||
`KONG_ADMIN_UI_enable` |
`false` |
1.16.0 | Generate Kong Admin UI service |
KongMap | |||
`KONG_MAP_enable` |
`false` |
1.16.0 | Generate KongMap service |
Swagger Editor | |||
`SWAGGER_EDITOR_enable` |
`false` |
1.6.0 | Generate Swagger Editor service |
Swagger UI | |||
`SWAGGER_UI_enable` |
`false` |
1.6.0 | Generate Swagger UI service |
AsyncAPI Studio | |||
`ASYNCAPI_STUDIO_enable` |
`false` |
1.17.0 | Generate AsyncAPI Studio service |
Postman | |||
`POSTMAN_enable` |
`false` |
1.11.0 | Generate Postman service |
Pact Broker | |||
`PACT_BROKER_enable` |
`false` |
1.17.0 | Generate Pact Broker service |
Microcks | |||
`MICROCKS_enable` |
`false` |
1.11.0 | Generate Microcks service |
MockServer | |||
`MOCK_SERVER_enable` |
`false` |
1.15.0 | Generate MockServer service |
`MOCK_SERVER_log_level` |
`DEBUG` |
1.15.0 | The minimum level of logs to record in the event log and to output to system out. |
`MOCK_SERVER_persist_expectations` |
`false` |
1.15.0 | Enable the persisting of expectations as JSON, which is updated whenever the expectation state is updated (i.e. add, clear, expires, etc.). |
`MOCK_SERVER_persisted_expecations_path` |
`` | 1.15.0 | The file path used to save persisted expectations as JSON, which is updated whenever the expectation state is updated (i.e. add, clear, expires, etc.). |
`MOCK_SERVER_initialization_json_path` |
`` | 1.15.0 | The name of the JSON file (stored in ./scripts/mockserver/) used to initialize expectations in MockServer at startup; if set, MockServer will load this file and initialise expectations for each item in the file when it starts. |
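The sketch below illustrates one possible combination of the API gateway and mocking settings above; it assumes the flat key/value layout of config.yml, and the MockServer file name is a hypothetical placeholder.

```yaml
# Sketch: Kong with a PostgreSQL backend, Konga UI and a pre-initialised MockServer
KONG_enable: true
KONG_use_db: true
KONG_db_type: postgres
KONGA_enable: true
MOCK_SERVER_enable: true
MOCK_SERVER_initialization_json_path: expectations.json   # hypothetical file in ./scripts/mockserver/
```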
Config | Default | Since | Description |
---|---|---|---|
Code-Server | |||
CODE_SERVER_enable |
false |
1.3.0 | Generate Code Server (VS Code IDE) service |
CODE_SERVER_volume_map_platform_root |
false |
1.3.0 | Map the platform root into the editor, so files can be edited which are part of the platform. |
Taiga | |||
TAIGA_enable |
false |
1.17.0 | Generate Taiga project management service |
TAIGA_volume_map_db_data |
false |
1.17.0 | Volume map db-data folder into the Taiga service. |
Taskcafé | |||
TASKCAFE_enable |
false |
1.17.0 | Generate Taskcafé Kanban board service |
TASKCAFE_volume_map_db_data |
false |
1.17.0 | Volume map db-data folder into the Taskcafé service. |
Focalboard | |||
FOCALBOARD_enable |
false |
1.17.0 | Generate Focalboard service |
FOCALBOARD_volume_map_db_data |
false |
1.17.0 | Volume map db-data folder into the Focalboard service. |
Excalidraw | |||
EXCALIDRAW_enable |
false |
1.13.0 | Generate Excalidraw service |
Firefox Browser | |||
FIREFOX_enable |
false |
1.13.0 | Generate Firefox Browser service |
FIREFOX_use_port_80 |
false |
1.13.0 | Run Firefox on Port 80 if true (in that case markdown-viewer will run on port 8000 ). |
File Browser | |||
FILE_BROWSER_enable |
false |
1.11.0 | Generate Filebrowser service |
WeTTY | |||
WETTY_enable |
false |
1.9.0 | Generate WeTTY service (Terminal over HTTP) |
Raneto | |||
RANETO_enable |
false |
1.17.0 | Generate a Raneto knowledge platform service. |
RANETO_volume_map_config |
false |
1.17.0 | Volume map the config folder of the container. |
Markdown Madness | |||
MARKDOWN_MADNESS_enable |
false |
1.17.0 | Generate a Markdown Madness service. |
MARKDOWN_MADNESS_volume_map_docs |
false |
1.17.0 | Volume map the docs folder of the container. |
MARKDOWN_MADNESS_volume_map_config_file |
false |
1.17.0 | Volume map the ./custom-conf/markdown-madness.yml file into the container to configure the behaviour of Markdown Madness. |
Markdown Viewer | |||
MARKDOWN_VIEWER_enable |
true |
1.9.0 | Generate a web page with the details on the data platform. |
MARKDOWN_VIEWER_use_port_80 |
true |
1.10.0 | Use Port 80 for the markdown viewer? If set to false , port 8008 is used. |
MARKDOWN_VIEWER_use_public_ip |
true |
1.10.0 | When rendering markdown pages, use the public IP address for links to services. If set to false , the docker host IP is used instead. |
MARKDOWN_VIEWER_edition |
markdown-madness |
1.17.0 | The markdown "engine" to use, either markdown-web or markdown-madness . |
MARKDOWN_VIEWER_services_list_version |
2 |
1.17.0 | The version of the Services list to render. Either 1 (original) or 2 (with ports). |
log4brains | |||
LOG4BRAINS_enable |
true |
1.11.0 | Generate log4brains service. |
LOG4BRAINS_repository_name |
trivadis |
1.11.0 | The repository name part of the log4brains docker image which should be used |
LOG4BRAINS_image_name |
log4brains |
1.11.0 | The image name part of the log4brains docker image which should be used |
LOG4BRAINS_adr_source_dir |
true |
1.11.0 | The folder holding the ADR sources. |
LOG4BRAINS_command |
preview |
1.11.0 | The log4brains command to run |
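A minimal sketch of a developer-tooling override (flat key/value layout of config.yml assumed, values illustrative):

```yaml
# Sketch: Code-Server with the platform root mapped in, plus the Markdown Madness viewer
CODE_SERVER_enable: true
CODE_SERVER_volume_map_platform_root: true
MARKDOWN_VIEWER_enable: true
MARKDOWN_VIEWER_edition: markdown-madness
MARKDOWN_VIEWER_services_list_version: 2
```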
Config | Default | Since | Description |
---|---|---|---|
Python | |||
PYTHON_enable |
false |
1.2.0 | Generate Python 3 container |
PYTHON_image |
python |
1.14.0 | The docker image to use |
PYTHON_artefacts_folder |
`` | 1.14.0 | Path to the folder on the host machine that will be mapped to /tmp in the container and from where the Python script file and requirements file will be read. |
PYTHON_script_file |
`` | 1.14.0 | The filename of the Python script to be executed (please note that the script should be in the folder mounted using PYTHON_artefacts_folder). |
PYTHON_requirements_file |
`` | 1.14.0 | The filename of the requirements file to be used (please note that the file should be in the folder mounted using PYTHON_artefacts_folder). |
PYTHON_python_packages |
`` | 1.16.0 | Python packages to install, as a space separated list of packages: <package-1> <package-2> |
Nuclio FaaS | |||
NUCLIO_enable |
false |
1.13.0 | Enable Nuclio service. |
NUCLIO_map_tmp_folder |
false |
1.17.0 | Map local /tmp folder into container? |
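As a sketch only, the Python settings above could be combined as follows; folder, script and package names are hypothetical placeholders and the flat key/value layout of config.yml is assumed.

```yaml
# Sketch: run a Python script from a mapped artefacts folder (names are placeholders)
PYTHON_enable: true
PYTHON_artefacts_folder: ./scripts/python        # hypothetical host folder
PYTHON_script_file: my-job.py                    # hypothetical script inside the artefacts folder
PYTHON_requirements_file: requirements.txt
PYTHON_python_packages: 'requests kafka-python'  # space-separated list of extra packages
```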
Config | Default | Since | Description |
---|---|---|---|
Portainer | |||
PORTAINER_enable |
false |
1.0.0 | Generate Portainer Container UI service |
CetusGuard | |||
CETUSGUARD_enable |
false |
1.17.0 | Generate CetusGuard Docker-Socket-Proxy service |
CETUSGUARD_docker_daemon_socket |
unix:///var/run/docker.sock |
1.17.0 | Docker daemon socket to connect to. Defaults to the local unix socket which is mapped into the cetusguard container. |
CETUSGUARD_port |
2375 |
1.17.0 | The port the cetusguard service binds to. |
CETUSGUARD_no_builtin_rules |
false |
1.17.0 | Do not load the built-in rules (which allow a few common harmless endpoints, /_ping , /version and /info ). |
CETUSGUARD_rules |
false |
1.17.0 | A comma-delimited list of rules. Can either be a single line or, using the CETUSGUARD_rules: > syntax, be stretched over multiple lines. |
CETUSGUARD_rules_file |
`` | 1.17.0 | The name of the rules file. |
CETUSGUARD_log_level |
6 |
1.17.0 | The minimum entry level to log, a value from 0 to 7 . |
cAdvisor | |||
CADVISOR_enable |
false |
1.2.0 | Generate CAdvisor Container UI service |
Docker Registry | |||
DOCKER_REGISTRY_enable |
false |
1.17.0 | Generate Docker Registry service |
DOCKER_REGISTRY_volume_map_data |
false |
1.17.0 | Volume Map the data folder of the docker-registry service. |
DOCKER_REGISTRY_volume_map_custom_config |
false |
1.17.0 | Volume Map a custom config.yml file from ./custom-conf/docker-registry into the docker-registry service. |
Watchtower | |||
WATCHTOWER_enable |
false |
1.11.0 | Generates the watchtower service, a container-based solution for automating Docker container base image updates. |
WATCHTOWER_poll_interval |
300 |
1.11.0 | Poll interval (in seconds). This value controls how frequently watchtower will poll for new images. |
WATCHTOWER_schedule |
`` | 1.17.0 | Cron expression in 6 fields (rather than the traditional 5) which defines when and how often to check for new images (e.g. 0 0 4 * * * ). Either WATCHTOWER_poll_interval or a schedule expression can be defined, but not both. |
WATCHTOWER_watch_containers |
`` | 1.17.0 | Containers to watch, a list of container names separated by a space (kafka-1 kafka-2 kafka-3 ). |
WATCHTOWER_cleanup_enable |
true |
1.17.0 | Remove old images after updating to prevent accumulation of orphaned images on the system. |
WATCHTOWER_no_restart_enable |
false |
1.17.0 | Do not restart containers after updating? By default the containers are restarted. Can be useful if containers are started by an external system. |
WATCHTOWER_rolling_restart_enable |
false |
1.17.0 | Restart one image at time instead of stopping and starting all at once. Useful in conjunction with lifecycle hooks to implement zero-downtime deploy. |
WATCHTOWER_debug_enable |
false |
1.17.0 | Enable debug mode with verbose logging? |
WATCHTOWER_trace_enable |
false |
1.17.0 | Enable trace mode with verbose logging? |
WATCHTOWER_monitor_only_enable |
false |
1.17.0 | Monitor only but do not update the containers? |
WATCHTOWER_label_enable |
false |
1.17.0 | Update only containers that have the com.centurylinklabs.watchtower.enable label set to true. You can set that label either through the XXXX_watchtower_enable config setting (where a service supports it) or by using the docker-compose.override.yml file. |
WATCHTOWER_scope |
`` | 1.17.0 | Update containers that have a com.centurylinklabs.watchtower.scope label set with the same value as the given argument. You can set that label either through the XXXX_watchtower_scope config setting (where a service supports it) or by using the docker-compose.override.yml file. See here for an example of this label. |
WATCHTOWER_http_api_update_enable |
false |
1.17.0 | Run Watchtower in HTTP API mode only allowing image updates to be triggered by an HTTP request? See HTTP API. |
WATCHTOWER_http_api_token |
`` | 1.17.0 | Sets the authentication token to HTTP API requests. |
WATCHTOWER_http_api_period_polls_enable |
false |
1.17.0 | Keep running periodic updates if the HTTP API mode is enabled? Otherwise the HTTP API would prevent periodic polls. |
WATCHTOWER_http_api_metrics_enable |
false |
1.17.0 | Enables a metrics endpoint, exposing prometheus metrics via HTTP. See Metrics. |
WATCHTOWER_timeout |
10 |
1.17.0 | Timeout (in seconds) before the container is forcefully stopped. |
WATCHTOWER_map_config_json |
false |
1.15.0 | Map the config.json file from $HOME/.docker/ folder into the watchtower container? |
S3FS | |||
S3FS_enable |
false |
1.17.0 | Generate S3FS service. You also have to enable either minio or external s3 . |
S3FS_bucket_name |
false |
1.17.0 | The bucket name to use on S3. |
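A minimal sketch of a container-management override (flat key/value layout of config.yml assumed, values illustrative); note that a schedule and a poll interval are mutually exclusive for Watchtower.

```yaml
# Sketch: Watchtower on a cron schedule, watching selected containers only
WATCHTOWER_enable: true
WATCHTOWER_schedule: '0 0 4 * * *'                      # 6-field cron expression
WATCHTOWER_watch_containers: 'kafka-1 kafka-2 kafka-3'  # space-separated container names
WATCHTOWER_cleanup_enable: true
```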
Config | Default | Since | Description |
---|---|---|---|
HAPI FHIR Server | |||
HAPI_FHIR_enable |
false |
1.17.0 | Generate HAPI FHIR server service |
Blaze FHIR Server | |||
BLAZE_FHIR_enable |
false |
1.17.0 | Generate Blaze FHIR server service |
LinuxForHealth FHIR Server | |||
LFH_FHIR_enable |
false |
1.17.0 | Generate LinuxForHealth FHIR server service |
LFH_FHIR_user_password |
change-password |
1.17.0 | The password for the fhiruser user. |
LFH_FHIR_admin_password |
change-password |
1.17.0 | The password for the fhiradmin user. |
Miracum FHIR Gateway | |||
FHIR_GATEWAY_enable |
false |
1.17.0 | Generate Miracum FHIR Gateway service |
FHIR_GATEWAY_fhir_server_enabled |
false |
1.17.0 | Whether to send received resources to a downstream FHIR server. |
FHIR_GATEWAY_fhir_server_url |
http://hapi-server:8080/fhir |
1.17.0 | URL of the FHIR server to send data to. |
FHIR_GATEWAY_fhir_server_username |
`` | 1.17.0 | HTTP basic auth username of the FHIR server to send data to. |
FHIR_GATEWAY_fhir_server_password |
`` | 1.17.0 | HTTP basic auth password of the FHIR server to send data to. |
FHIR_GATEWAY_postgresql_enabled |
false |
1.17.0 | Persist any received FHIR resource in a PostgreSQL database? |
FHIR_GATEWAY_kafka_enabled |
false |
1.17.0 | Enable reading FHIR resources from, and writing them back to, a Kafka cluster. |
FHIR_GATEWAY_pseudonymizer_enabled |
false |
1.17.0 | Whether pseudonymization should be enabled. |
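A minimal sketch of a FHIR-oriented override (flat key/value layout of config.yml assumed, values illustrative):

```yaml
# Sketch: HAPI FHIR server with the Miracum FHIR Gateway forwarding to it
HAPI_FHIR_enable: true
FHIR_GATEWAY_enable: true
FHIR_GATEWAY_fhir_server_enabled: true
FHIR_GATEWAY_fhir_server_url: http://hapi-server:8080/fhir
FHIR_GATEWAY_kafka_enabled: false
```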