Migrate a monolith to microservices
Monolithic applications contain all of the components they need to function in a single, indivisible unit. They can be easier to develop and test because all of the application code lives in one place, and communication between components packaged in the same application is easier to secure.
However, issues arise as monolithic applications grow larger and more complex, more developers join the project, or certain components need to scale individually. Moving to a microservices architecture can help solve these issues.
When you are ready to transition to a microservices architecture, Consul and Nomad provide the functionality to help you deploy, connect, secure, monitor, and scale your application.
In this tutorial, you will clone the code repository to your local workstation and learn about the cloud infrastructure required to complete the scenarios in this collection.
Collection overview
This collection is composed of six different tutorials:
- a general overview, provided by this tutorial, that helps you navigate the code used in the collection;
- Set up the cluster guides you through the setup of a Consul and Nomad cluster; the infrastructure it creates is a prerequisite for the remaining tutorials;
- four scenario tutorials, each showing the deployment of HashiCups, a demo application, on the Nomad and Consul cluster at different levels of integration:
- Deploy HashiCups demonstrates how to convert a Docker Compose configuration, used to deploy a monolithic application locally, into a Nomad job configuration file, or jobspec, that deploys the same application as a monolith onto the Nomad cluster. This scenario does not integrate Consul into the deployment. A minimal jobspec sketch follows this list.
- Integrate service discovery demonstrates how to convert a Nomad job configuration for a monolithic application into an application deployed using Consul service discovery. The tutorial covers two scenarios: in the first, the application is deployed on a single Nomad node; in the second, it is deployed across multiple Nomad nodes to take advantage of Nomad's scheduling capabilities.
- Integrate service mesh and API gateway demonstrates how to set up your Consul and Nomad cluster to use Consul service mesh. The tutorial includes configuration for Consul API gateway and Consul intentions that are required for application security.
- Scale a service demonstrates how to use Nomad Autoscaler to automatically scale part of the HashiCups application in response to a spike in traffic.
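To give you a sense of what a jobspec looks like before you reach the scenario tutorials, the following is a minimal, hypothetical sketch of a Nomad job that runs a single Docker container. The job name, image, and port are placeholders and are not taken from the HashiCups jobspecs in this repository.

job "example-monolith" {
  datacenters = ["dc1"]
  type        = "service"

  group "app" {
    network {
      port "http" {
        static = 8080
      }
    }

    task "app" {
      driver = "docker"

      config {
        image = "example/monolith:1.0"
        ports = ["http"]
      }

      resources {
        cpu    = 200
        memory = 256
      }
    }
  }
}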
The architectural diagrams below provide you with a visual representation of the configurations you will learn about and use in the different steps of the collection.
The cluster consists of three server nodes, three private client nodes, and one publicly accessible client node. Each node runs the Consul agent and Nomad agent. The agents run in either server or client mode depending on the role of the node.
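To illustrate the difference between the two modes, the simplified fragments below show the kind of settings that put an agent into server or client mode. These are placeholders, not the cluster's actual configuration; the agent configuration files used by this collection live in the shared/conf directory reviewed later in this tutorial.

# Consul agent, server mode (simplified)
server           = true
bootstrap_expect = 3

# Nomad agent, server mode (simplified)
server {
  enabled          = true
  bootstrap_expect = 3
}

# Nomad agent, client mode (simplified)
client {
  enabled = true
}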
Note
With the exception of the prerequisite tutorial that sets up the Nomad and Consul cluster, none of the other tutorials are mandatory. They are intended to show the progression of deployment maturity and the different integrations available between Consul and Nomad. You can decide which deployment best suits your current scenario and learn how to perform it without having to follow the other scenario tutorials.
Review the code repository
The infrastructure creation flow consists of three steps:
- Create the Amazon Machine Image (AMI) with Packer.
- Provision the infrastructure with Terraform.
- Set up access to the CLI and UI for both Consul and Nomad.
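At a high level, the first two steps map to a single Packer build and a standard Terraform workflow, sketched below for orientation. The exact commands and the variables file you pass to them are covered in the next tutorial, so treat this as an outline rather than commands to run now.

$ packer build -var-file=variables.hcl image.pkr.hcl
$ terraform init
$ terraform apply -var-file=variables.hcl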
Clone the hashicorp-education/learn-consul-nomad-vm code repository to your local workstation.
$ git clone https://github.com/hashicorp-education/learn-consul-nomad-vm.git
Change to the directory of the local repository.
$ cd learn-consul-nomad-vm
The aws directory
View the structure of the aws directory. It contains the configuration files for creating the AMI and cluster infrastructure.
$ tree aws
aws
├── aws-ec2-control_plane.tf
├── aws-ec2-data_plane.tf
├── aws_base.tf
├── consul_configuration.tf
├── image.pkr.hcl
├── outputs.tf
├── providers.tf
├── secrets.tf
├── variables.hcl.example
└── variables.tf

1 directory, 10 files
- The aws-ec2-control_plane.tf file contains configuration for creating the servers, while aws-ec2-data_plane.tf contains configuration for creating the clients. Both are structured similarly.
aws-ec2-control_plane.tf
resource "aws_instance" "server" { depends_on = [module.vpc] count = var.server_count ami = var.ami instance_type = var.server_instance_type key_name = aws_key_pair.vm_ssh_key-pair.key_name associate_public_ip_address = true vpc_security_group_ids = [ aws_security_group.consul_nomad_ui_ingress.id, aws_security_group.ssh_ingress.id, aws_security_group.allow_all_internal.id ] subnet_id = module.vpc.public_subnets[0] # instance tags # ConsulAutoJoin is necessary for nodes to automatically join the cluster tags = { Name = "${local.name}-server-${count.index}", ConsulJoinTag = "auto-join-${random_string.suffix.result}", NomadType = "server" } # ... user_data = templatefile("${path.module}/../shared/data-scripts/user-data-server.sh", { domain = var.domain, datacenter = var.datacenter, server_count = "${var.server_count}", consul_node_name = "consul-server-${count.index}", cloud_env = "aws", retry_join = local.retry_join_consul, consul_encryption_key = random_id.consul_gossip_key.b64_std, consul_management_token = random_uuid.consul_mgmt_token.result, nomad_node_name = "nomad-server-${count.index}", nomad_encryption_key = random_id.nomad_gossip_key.b64_std, nomad_management_token = random_uuid.nomad_mgmt_token.result, ca_certificate = base64gzip("${tls_self_signed_cert.datacenter_ca.cert_pem}"), agent_certificate = base64gzip("${tls_locally_signed_cert.server_cert[count.index].cert_pem}"), agent_key = base64gzip("${tls_private_key.server_key[count.index].private_key_pem}") }) # ... # Waits for cloud-init to complete. Needed for ACL creation. provisioner "remote-exec" { inline = [ "echo 'Waiting for user data script to finish'", "cloud-init status --wait > /dev/null" ] } iam_instance_profile = aws_iam_instance_profile.instance_profile.name # ...}
- The aws_base.tf file contains configuration for creating the Virtual Private Cloud (VPC), security groups, and IAM configurations. This file defines the ingress ports for Consul, Nomad, and the HashiCups application.
aws_base.tf
# ...resource "aws_security_group" "consul_nomad_ui_ingress" { name = "${local.name}-ui-ingress" vpc_id = module.vpc.vpc_id # Nomad UI ingress { from_port = 4646 to_port = 4646 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } # Consul UI ingress { from_port = 8443 to_port = 8443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } egress { from_port = 0 to_port = 0 protocol = "-1" cidr_blocks = ["0.0.0.0/0"] }}# ...resource "aws_security_group" "clients_ingress" { name = "${local.name}-clients-ingress" vpc_id = module.vpc.vpc_id # ... # Add application ingress rules here # These rules are applied only to the client nodes # HTTP ingress ingress { from_port = 80 to_port = 80 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } # HTTPS ingress ingress { from_port = 443 to_port = 443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] } # HTTPS ingress ingress { from_port = 8443 to_port = 8443 protocol = "tcp" cidr_blocks = ["0.0.0.0/0"] }}
- The image.pkr.hcl file contains the configuration to create an AMI using an Ubuntu 22.04 base image. Packer copies the shared directory from the root of the code repository to the machine image and runs the shared/scripts/setup.sh script.
image.pkr.hcl
# ...data "amazon-ami" "hashistack" { filters = { architecture = "x86_64" "block-device-mapping.volume-type" = "gp2" name = "ubuntu/images/hvm-ssd/ubuntu-jammy-22.04-amd64-server-*" root-device-type = "ebs" virtualization-type = "hvm" } most_recent = true owners = ["099720109477"] region = var.region}# ...build { # ... provisioner "shell" { inline = ["sudo mkdir -p /ops/shared", "sudo chmod 777 -R /ops"] } provisioner "file" { destination = "/ops" source = "../shared" } provisioner "shell" { environment_vars = ["INSTALL_NVIDIA_DOCKER=false", "CLOUD_ENV=aws"] script = "../shared/scripts/setup.sh" }}
- The secrets.tf file contains configuration for creating gossip encryption keys, TLS certificates for the server and client nodes, and ACL policies and tokens for both Consul and Nomad.
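The abridged, illustrative fragment below shows the general pattern this kind of file follows with the random and tls Terraform providers: a 32-byte gossip key, a UUID management token, and a self-signed CA that signs the agent certificates. The exact arguments in the repository's secrets.tf may differ.

# Gossip encryption key (32 random bytes, base64-encoded)
resource "random_id" "consul_gossip_key" {
  byte_length = 32
}

# Consul management token
resource "random_uuid" "consul_mgmt_token" {}

# Self-signed CA used to sign the server and client agent certificates
resource "tls_private_key" "datacenter_ca" {
  algorithm = "RSA"
}

resource "tls_self_signed_cert" "datacenter_ca" {
  private_key_pem       = tls_private_key.datacenter_ca.private_key_pem
  is_ca_certificate     = true
  validity_period_hours = 8760

  subject {
    common_name = "Consul and Nomad datacenter CA"
  }

  allowed_uses = [
    "cert_signing",
    "digital_signature",
    "key_encipherment",
  ]
}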
- The variables.hcl.example file is the configuration template used by Packer when building the AMI and by Terraform when provisioning the infrastructure. During cluster creation, a copy of this file is made and then updated with the AWS region and the AMI ID after Packer builds the image. It also contains configurable variables for the cluster and their default values.
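The fragment below illustrates the general shape of that copy after the region and AMI ID are filled in. The values shown are examples only; refer to variables.hcl.example in the repository for the authoritative variable names and defaults.

# Written before and after the Packer build
region = "us-east-1"                  # your chosen AWS region
ami    = "ami-0123456789abcdef0"      # AMI ID produced by the Packer build

# ... additional cluster variables and their default values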
- The variables.tf file defines the variables used by Terraform, including resource naming, node types and counts, and the Consul auto-join settings, along with additional cluster configuration.
variables.tf
# Random suffix for Auto-join and resource naming
resource "random_string" "suffix" {
  length  = 4
  special = false
  upper   = false
}

# Prefix for resource names
variable "prefix" {
  description = "The prefix used for all resources in this plan"
  default     = "learn-consul-nomad-vms"
}

# Random prefix for resource names
locals {
  name = "${var.prefix}-${random_string.suffix.result}"
}

# Random Auto-Join for Consul servers
# Nomad servers will use Consul to join the cluster
locals {
  retry_join_consul = "provider=aws tag_key=ConsulJoinTag tag_value=auto-join-${random_string.suffix.result}"
}

# ...
The shared directory
Next, view the structure of the shared directory. It contains the configuration files for creating the server and client nodes, the Nomad job specification files for HashiCups, and additional scripts.
$ tree shared
shared
├── conf
│   ├── agent-config-consul_client.hcl
│   ├── agent-config-consul_server.hcl
│   ├── agent-config-consul_server_tokens.hcl
│   ├── agent-config-consul_server_tokens_bootstrap.hcl
│   ├── agent-config-consul_template.hcl
│   ├── agent-config-nomad_client.hcl
│   ├── agent-config-nomad_server.hcl
│   ├── agent-config-vault.hcl
│   ├── systemd-service-config-resolved.conf
│   └── systemd-service-consul_template.service
├── data-scripts
│   ├── user-data-client.sh
│   └── user-data-server.sh
├── jobs
│   ├── 01.hashicups.nomad.hcl
│   ├── 02.hashicups.nomad.hcl
│   ├── 03.hashicups.nomad.hcl
│   ├── 04.api-gateway.config.sh
│   ├── 04.api-gateway.nomad.hcl
│   ├── 04.hashicups.nomad.hcl
│   ├── 04.intentions.consul.sh
│   ├── 05.autoscaler.config.sh
│   ├── 05.autoscaler.nomad.hcl
│   ├── 05.hashicups.nomad.hcl
│   └── 05.load-test.sh
└── scripts
    ├── setup.sh
    └── unset_env_variables.sh

5 directories, 25 files
- The shared/conf directory contains the agent configuration files for the Consul and Nomad server and client nodes. It also contains systemd configurations for setting up Consul as the DNS resolver.
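The usual pattern for pointing the operating system at Consul for DNS is a systemd-resolved drop-in that forwards queries for the consul domain to the agent's DNS interface on port 8600. The snippet below is a generic illustration of that pattern, not the exact contents of systemd-service-config-resolved.conf.

[Resolve]
# Forward *.consul lookups to the local Consul agent
DNS=127.0.0.1:8600
Domains=~consul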
- The shared/data-scripts/user-data-server.sh and shared/data-scripts/user-data-client.sh scripts are run by Terraform during the provisioning process for the server and client nodes, respectively, once the virtual machine's initial setup is complete. The scripts configure and start the Consul and Nomad agents by retrieving certificates, exporting environment variables, and starting the agent services. The user-data-server.sh script additionally bootstraps the Nomad ACL system.
user-data-server.sh
# ...

# Copy template into Consul configuration directory
sudo cp $CONFIG_DIR/agent-config-consul_server.hcl $CONSUL_CONFIG_DIR/consul.hcl

set -x

# Populate the file with values from the variables
sudo sed -i "s/_CONSUL_DATACENTER/$CONSUL_DATACENTER/g" $CONSUL_CONFIG_DIR/consul.hcl
sudo sed -i "s/_CONSUL_DOMAIN/$CONSUL_DOMAIN/g" $CONSUL_CONFIG_DIR/consul.hcl
sudo sed -i "s/_CONSUL_NODE_NAME/$CONSUL_NODE_NAME/g" $CONSUL_CONFIG_DIR/consul.hcl
sudo sed -i "s/_CONSUL_SERVER_COUNT/$CONSUL_SERVER_COUNT/g" $CONSUL_CONFIG_DIR/consul.hcl
sudo sed -i "s/_CONSUL_BIND_ADDR/$CONSUL_BIND_ADDR/g" $CONSUL_CONFIG_DIR/consul.hcl
sudo sed -i "s/_CONSUL_RETRY_JOIN/$CONSUL_RETRY_JOIN/g" $CONSUL_CONFIG_DIR/consul.hcl
sudo sed -i "s#_CONSUL_ENCRYPTION_KEY#$CONSUL_ENCRYPTION_KEY#g" $CONSUL_CONFIG_DIR/consul.hcl

# ...

# Start Consul
echo "Start Consul"
sudo systemctl enable consul.service
sudo systemctl start consul.service

# ...

# Create Nomad server token to interact with Consul
OUTPUT=$(CONSUL_HTTP_TOKEN=$CONSUL_MANAGEMENT_TOKEN consul acl token create -description="Nomad server auto-join token for $CONSUL_NODE_NAME" --format json -templated-policy="builtin/nomad-server")
CONSUL_AGENT_TOKEN=$(echo "$OUTPUT" | jq -r ".SecretID")

# Copy template into Nomad configuration directory
sudo cp $CONFIG_DIR/agent-config-nomad_server.hcl $NOMAD_CONFIG_DIR/nomad.hcl

# Populate the file with values from the variables
sudo sed -i "s/_NOMAD_DATACENTER/$NOMAD_DATACENTER/g" $NOMAD_CONFIG_DIR/nomad.hcl
sudo sed -i "s/_NOMAD_DOMAIN/$NOMAD_DOMAIN/g" $NOMAD_CONFIG_DIR/nomad.hcl
sudo sed -i "s/_NOMAD_NODE_NAME/$NOMAD_NODE_NAME/g" $NOMAD_CONFIG_DIR/nomad.hcl
sudo sed -i "s/_NOMAD_SERVER_COUNT/$NOMAD_SERVER_COUNT/g" $NOMAD_CONFIG_DIR/nomad.hcl
sudo sed -i "s#_NOMAD_ENCRYPTION_KEY#$NOMAD_ENCRYPTION_KEY#g" $NOMAD_CONFIG_DIR/nomad.hcl
sudo sed -i "s/_CONSUL_IP_ADDRESS/$CONSUL_PUBLIC_BIND_ADDR/g" $NOMAD_CONFIG_DIR/nomad.hcl
sudo sed -i "s/_CONSUL_AGENT_TOKEN/$CONSUL_AGENT_TOKEN/g" $NOMAD_CONFIG_DIR/nomad.hcl

echo "Start Nomad"

sudo systemctl enable nomad.service
sudo systemctl start nomad.service

# ...
- The shared/jobs folder contains all of the HashiCups jobspecs and any associated script files for additional components like the API gateway and the Nomad Autoscaler. The other tutorials in this collection explain each of them.
- The shared/scripts/setup.sh file is the script run by Packer during the image creation process. This script installs the Docker and Java dependencies as well as the Consul and Nomad binaries.
setup.sh
# ...

# Docker
distro=$(lsb_release -si | tr '[:upper:]' '[:lower:]')
sudo apt-get install -y apt-transport-https ca-certificates gnupg2

curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/${distro} $(lsb_release -cs) stable"

sudo apt-get update
sudo apt-get install -y docker-ce

# Java
sudo add-apt-repository -y ppa:openjdk-r/ppa
sudo apt-get update

sudo apt-get install -y openjdk-8-jdk
JAVA_HOME=$(readlink -f /usr/bin/java | sed "s:bin/java::")

# Install HashiCorp Apt Repository
wget -O- https://apt.releases.hashicorp.com/gpg | sudo gpg --dearmor -o /usr/share/keyrings/hashicorp-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/hashicorp-archive-keyring.gpg] https://apt.releases.hashicorp.com $(lsb_release -cs) main" | sudo tee /etc/apt/sources.list.d/hashicorp.list

# Install HashiStack Packages
sudo apt-get update && sudo apt-get -y install \
  consul=$CONSULVERSION* \
  nomad=$NOMADVERSION* \
  vault=$VAULTVERSION* \
  consul-template=$CONSULTEMPLATEVERSION*

# ...
- The shared/scripts/unset_env_variables.sh script unsets local environment variables in your CLI before you destroy the infrastructure with Terraform.
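The variables in question are the standard Consul and Nomad CLI environment variables that you export while working with the cluster. As a rough illustration only (the repository script is authoritative), unsetting them looks like this:

$ unset CONSUL_HTTP_ADDR CONSUL_HTTP_TOKEN CONSUL_CACERT
$ unset NOMAD_ADDR NOMAD_TOKEN NOMAD_CACERT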
Next steps
In this tutorial, you became familiar with the code repository and the infrastructure setup process for the cluster.
In the next tutorial, you will create the cluster running Consul and Nomad and set up access to each of their command line and user interfaces.