This module is part of a project to simplify provisioning Hazelcast (single or multi-node) on AWS using Terraform. You may also wish to consider using the Kubernetes Provider.
Terraform module that provisions a Hazelcast container in ECS.
This module creates the following resources, which are required for a Hazelcast container to be up and running in an ECS cluster:
- ECS cluster.
- ECS task definition and service for the provided version of Hazelcast.
- IAM roles required for EC2 to execute the task.
This approach creates an ECS task for Hazelcast and runs/manages that task on EC2 instances of the provided instance type. It allows creating a single-node or multi-node cluster based on the inputs to the module. More details here.
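For orientation, the sketch below shows the kind of resources involved in running Hazelcast as an ECS task on EC2. It is a simplified, hypothetical illustration; the resource names, image tag, and port settings are assumptions for the sketch, not the module's actual source.

```hcl
# Illustrative only: a trimmed-down picture of an ECS-on-EC2 Hazelcast setup.
resource "aws_ecs_cluster" "hazelcast" {
  name = "hazelcast"
}

resource "aws_ecs_task_definition" "hazelcast" {
  family = "hazelcast"
  container_definitions = jsonencode([{
    name      = "hazelcast"
    image     = "hazelcast/hazelcast:3.12.7" # version would come from the hazelcast_version input
    cpu       = 512                          # maps to hazelcast_container_cpu
    memory    = 2048                         # maps to hazelcast_container_memory
    essential = true
    portMappings = [{ containerPort = 5701, hostPort = 5701 }] # Hazelcast default member port
  }])
}

resource "aws_ecs_service" "hazelcast" {
  name            = "hazelcast"
  cluster         = aws_ecs_cluster.hazelcast.id
  task_definition = aws_ecs_task_definition.hazelcast.arn
  desired_count   = 1     # maps to hazelcast_members_count
  launch_type     = "EC2" # tasks run on the EC2 instances the module provisions
}
```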
- Configure AWS credentials. Refer to this for help.
- Make sure your AWS user has the permissions required to create all the resources in the diagram.
- Install Terraform from here.
Note: Change the inputs to match your requirements.
For a single-member deployment:
module "hazelcast_cluster" {
source = "path-to-the-module"
region = "ap-southeast-1"
name = "hazelcast"
hazelcast_version = "3.12.7"
hazelcast_container_cpu = 512
hazelcast_container_memory = 2048
instance_type = "t3.small"
security_group_id = "security-group-id"
subnet_id = "subnet-id"
hazelcast_discovery_tag_key = "Purpose"
tags = {
Purpose = "Orders"
Environment = "Development"
CreatedBy = "Terraform"
}
}
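The module expects an existing security group and subnet. As a hedged sketch (the VPC ID, CIDR range, and resource names below are assumptions), a security group for Hazelcast typically opens port 5701, the default Hazelcast member port, within the VPC:

```hcl
# Hypothetical security group for Hazelcast member and client traffic;
# adjust the VPC ID and CIDR range to your environment.
resource "aws_security_group" "hazelcast" {
  name   = "hazelcast-members"
  vpc_id = "vpc-xxxxxxxx" # assumption: your VPC ID

  ingress {
    description = "Hazelcast member-to-member and client traffic"
    from_port   = 5701
    to_port     = 5710 # Hazelcast auto-increments the port when several members share a host
    protocol    = "tcp"
    cidr_blocks = ["10.0.0.0/16"] # assumption: your VPC CIDR
  }

  egress {
    from_port   = 0
    to_port     = 0
    protocol    = "-1"
    cidr_blocks = ["0.0.0.0/0"]
  }
}
```

Its ID is what you would pass as `security_group_id` above.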
For a multi-member deployment:
Note: Set `hazelcast_members_count` for the Hazelcast member count and `instance_count` for the EC2 instance count. This gives the flexibility of, for example, running three Hazelcast members on two EC2 instances.
module "hazelcast_cluster" {
source = "path-to-the-module"
region = "ap-southeast-1"
name = "hazelcast"
hazelcast_version = "3.12.7"
hazelcast_container_cpu = 512
hazelcast_container_memory = 2048
hazelcast_members_count = 2
instance_type = "t3.small"
security_group_id = "security-group-id"
subnet_id = "subnet-id"
instance_count = 2
hazelcast_discovery_tag_key = "Purpose"
tags = {
Purpose = "Orders"
Environment = "Development"
CreatedBy = "Terraform"
}
}
Try out the module functionality with an example defined here.
- Switch to the `examples` directory: `cd examples`
- Initialize Terraform to download the required plugins: `terraform init`
- Run `terraform plan` to see all the resources that are going to be created.
- Run `terraform apply` to create those resources.
- Install a Hazelcast client and test the connection using the instance public IP address.
- Make sure to destroy the resources once you are done exploring: `terraform destroy`
Name | Description | Type | Default | Required |
---|---|---|---|---|
name | The name of the deployment | string | n/a | yes |
tags | Tags for the created resources | map | n/a | yes |
region | AWS Region | string | n/a | yes |
ami_id | ECS Optimised AWS EC2 AMI ID | string | latest ECS Optimised AMI | no |
hazelcast_version | Hazelcast version to deploy | string | latest | yes |
hazelcast_container_cpu | Hazelcast container CPU units | string | n/a | yes |
hazelcast_container_memory | Hazelcast container memory | string | n/a | yes |
hazelcast_discovery_tag_key | Hazelcast AWS Discovery Tag Key | string | n/a | yes |
hazelcast_members_count | Hazelcast members / tasks count | number | 1 | no |
instance_type | EC2 instance type to launch for ECS | string | n/a | yes |
instance_count | EC2 instance count | number | 1 | no |
security_group_id | EC2 Security Group ID | string | n/a | yes |
subnet_id | EC2 Subnet ID | string | n/a | yes |
Notes:
- The AMI provided must be an ECS Optimised AMI with Docker and the ECS agent installed (a sketch for looking up the latest one is shown below).
- `hazelcast_discovery_tag_key` will be used to configure auto-discovery with the Hazelcast AWS plugin.
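If you prefer to look up or pin the AMI yourself rather than relying on the module's default, one common approach is to read the recommended image ID from the public SSM parameter and pass it in as `ami_id`. This is a sketch, assuming you want the Amazon Linux 2 ECS-optimised image:

```hcl
# Look up the latest Amazon Linux 2 ECS-optimised AMI published by AWS.
data "aws_ssm_parameter" "ecs_ami" {
  name = "/aws/service/ecs/optimized-ami/amazon-linux-2/recommended/image_id"
}

# Then, in the module block shown earlier:
#   ami_id = data.aws_ssm_parameter.ecs_ami.value
```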
Name | Description |
---|---|
ecs_cluster_arn | ARN of the ECS Cluster |
ecs_cluster_name | Name of the ECS Cluster |
instance_public_ip | Public IP of the ECS EC2 instance(s) |
instance_private_ip | Private IP of the ECS EC2 instance(s) |
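If you call the module from a root configuration, you can surface these outputs yourself. A minimal sketch, assuming the module label `hazelcast_cluster` used in the examples above:

```hcl
output "hazelcast_public_ip" {
  description = "Public IP(s) of the EC2 instance(s) running Hazelcast"
  value       = module.hazelcast_cluster.instance_public_ip
}

output "hazelcast_cluster_arn" {
  description = "ARN of the ECS cluster created by the module"
  value       = module.hazelcast_cluster.ecs_cluster_arn
}
```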
- Make sure you have installed Java and Docker.
- Set the `access-key` and `secret-key` values in `tests/hazelcast-java-client/src/main/resources/hazelcast-client.yaml` to your AWS access keys.
The test setup is automated in setup.sh. It does the following things:
- Deploys the 'multi-node' example to AWS with Terraform.
- Verifies that the cluster has formed successfully.
- Builds the Docker image for the Java client.
- Runs the built Docker image.
- Verifies members discovery.
- Verifies client connection to the cluster.
Teardown of the test environment is automated in teardown.sh. It does the following:
- Destroys the 'multi-node' example.
- Removes Docker images and containers.
Note: Don't forget to tear down the cluster to avoid incurring charges.
We appreciate your help!
Open an issue or submit a pull request for an enhancement. Browse through the current open issues.