Set up a Nomad cluster on GCP
This tutorial guides you through deploying a Nomad cluster with access control lists (ACLs) enabled on GCP. Consider checking out the cluster setup overview first, as it covers the contents of the code repository used in this tutorial.
Prerequisites
For this tutorial, you need:
- Packer 1.9.4 or later
- Terraform 1.2.0 or later
- Nomad 1.7.7 or later
- gcloud CLI 474.0.0 or later
- A Google Cloud account configured for use with Terraform
Note
This tutorial creates GCP resources that may not qualify as part of the GCP free tier. Be sure to follow the Cleanup process at the end of this tutorial so you don't incur any additional unnecessary charges.
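If your Google Cloud account isn't yet configured for Terraform, one minimal approach is to authenticate the gcloud CLI and create application default credentials, which Terraform's Google provider can use automatically. This is only a sketch of that flow; your organization may rely on service account keys or another authentication method instead.

$ gcloud auth login
$ gcloud auth application-default login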
Clone the code repository
The cluster setup code repository contains configuration files for creating a Nomad cluster on GCP. It uses Consul for the initial setup of the Nomad servers and clients and enables ACLs for both Consul and Nomad.
Clone the code repository.
$ git clone https://github.com/hashicorp-education/learn-nomad-cluster-setup
Navigate to the cloned repository folder.
$ cd learn-nomad-cluster-setup
Navigate to the gcp folder.

$ cd gcp
Create the Nomad cluster
There are two main steps to creating the cluster: building a Google Compute Engine image with Packer and provisioning the cluster infrastructure with Terraform. Both Packer and Terraform require that you configure variables before you run commands. The variables.hcl.example file contains the configuration you need for this tutorial.
Update the variables file for Packer
Rename variables.hcl.example to variables.hcl, and open it in your text editor.
$ mv variables.hcl.example variables.hcl
Configure the gcloud tool for use with Terraform, and use the values from authenticating to GCP to update the project, region, and zone variables. In this example, those values are hc-3ff63253e6a54756b207e4d4727, us-east1, and us-east1-b. The remaining variables are for Terraform, and you update them after building the VM image.
gcp/variables.hcl
# Packer variables (all are required)
project = "hc-3ff63253e6a54756b207e4d4727"
region  = "us-east1"
zone    = "us-east1-b"
# ...
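If you don't have these values handy, you can read them back from your gcloud configuration before editing the file. This quick check assumes you set the project, region, and zone properties when you configured gcloud:

$ gcloud config get-value project
$ gcloud config get-value compute/region
$ gcloud config get-value compute/zone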
Build the GCE image
Initialize Packer to download the required plugins.
Tip
packer init returns no output when it finishes successfully.
$ packer init image.pkr.hcl
Then, build the image and provide the variables file with the -var-file flag.
Tip
Packer prints a Warning: Undefined variable message notifying you that some variables were set in variables.hcl but not used. This is only a warning, and the build will still complete successfully.
$ packer build -var-file=variables.hcl image.pkr.hcl
# ...
Build 'googlecompute.hashistack' finished after 4 minutes 31 seconds.

==> Wait completed after 4 minutes 31 seconds

==> Builds finished. The artifacts of successful builds are:
--> googlecompute.hashistack: A disk image was created: hashistack-20221121163551
Packer outputs the specific disk image ID once it finishes building the image. In this example, the value is hashistack-20221121163551.
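Optionally, confirm that the image now exists in your project with the gcloud CLI, using the image name from this example:

$ gcloud compute images list --filter="name=hashistack-20221121163551"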
Update the variables file for Terraform
Open variables.hcl in your text editor and update the machine_image variable with the value output from the Packer build. In this example, the value is hashistack-20221121163551.
gcp/variables.hcl
# ...
machine_image = "hashistack-20221121163551"

# These variables will default to the values shown
# and do not need to be updated unless you want to
# change them

# allowlist_ip         = "0.0.0.0/0"
# name                 = "nomad"
# server_instance_type = "e2-medium"
# server_count         = "3"
# client_instance_type = "e2-medium"
# client_count         = "3"
The remaining variables in variables.hcl are optional; an illustrative override example follows this list.

- allowlist_ip is a CIDR range specifying which IP addresses are allowed to access the Consul and Nomad UIs on ports 8500 and 4646 as well as SSH on port 22. The default value of 0.0.0.0/0 allows traffic from everywhere.
- name is a prefix for naming the GCP resources.
- server_instance_type and client_instance_type are the virtual machine instance types for the cluster server and client nodes, respectively.
- server_count and client_count are the number of nodes to create for the servers and clients, respectively.
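As an illustration of overriding the optional variables, the snippet below restricts UI and SSH access to a single address and changes the resource name prefix. The IP address is a placeholder from the 203.0.113.0/24 documentation range; substitute your own.

gcp/variables.hcl

# Illustrative overrides (placeholder IP; replace with your own address)
allowlist_ip = "203.0.113.7/32"
name         = "nomad-tutorial"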
Deploy the Nomad cluster
Initialize Terraform to download required plugins and set up the workspace.
$ terraform init

Initializing the backend...
# ...
Initializing provider plugins...
# ...
Terraform has been successfully initialized!
# ...
Run the Terraform deployment and provide the variables file with the -var-file flag. Respond yes to the prompt to confirm the operation. The provisioning takes several minutes. The Consul and Nomad web interfaces are available upon completion.
$ terraform apply -var-file=variables.hcl

# ...

Plan: 11 to add, 0 to change, 0 to destroy.

# ...

Do you want to perform these actions?
  Terraform will perform the actions described above.
  Only 'yes' will be accepted to approve.

  Enter a value: yes

# ...

Apply complete! Resources: 11 added, 0 changed, 0 destroyed.

Outputs:

IP_Addresses = <<EOT

Client public IPs: 52.91.50.99, 18.212.78.29, 3.93.189.88

Server public IPs: 107.21.138.240, 54.224.82.187, 3.87.112.200

The Consul UI can be accessed at http://107.21.138.240:8500/ui
with the bootstrap token: dbd4d67b-4629-975c-e9a8-ff1a38ed1520

EOT
consul_bootstrap_token_secret = "dbd4d67b-4629-975c-e9a8-ff1a38ed1520"
lb_address_consul_nomad = "http://107.21.138.240"
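If you need any of these values again later, Terraform can re-print them from state at any time. For example, to retrieve the load balancer address used throughout the rest of this tutorial:

$ terraform output -raw lb_address_consul_nomad
http://107.21.138.240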
Verify the services are in a healthy state. Navigate to the Consul UI in your web browser with the URL in the Terraform output.
Click on the Log in button and use the bootstrap token secret consul_bootstrap_token_secret from the Terraform output to log in.
Click on the Nodes page from the sidebar navigation. There are six healthy nodes, including three Consul servers and three Consul clients created with Terraform.
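You can run the same check from the command line with the Consul CLI pointed at the load balancer address. This sketch assumes a local consul binary and reuses the Terraform outputs shown above:

$ export CONSUL_HTTP_ADDR=$(terraform output -raw lb_address_consul_nomad):8500
$ export CONSUL_HTTP_TOKEN=$(terraform output -raw consul_bootstrap_token_secret)
$ consul members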
Set up access to Nomad
Run the post-setup.sh script.
Note
It may take some time for the setup scripts to complete and for the Nomad user token to become available in the Consul KV store. If the post-setup.sh script doesn't work the first time, wait a couple of minutes and try again.
$ ./post-setup.sh

The Nomad user token has been saved locally to nomad.token and deleted from the Consul KV store.

Set the following environment variables to access your Nomad cluster with the user token created during setup:

export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646
export NOMAD_TOKEN=$(cat nomad.token)

The Nomad UI can be accessed at http://107.21.138.240:4646/ui
with the bootstrap token: 22444f72-c222-bd26-6c2c-584fb9e5b698
Apply the export commands from the output.
$ export NOMAD_ADDR=$(terraform output -raw lb_address_consul_nomad):4646 && \
    export NOMAD_TOKEN=$(cat nomad.token)
Finally, verify connectivity to the cluster with nomad node status.
$ nomad node status

ID        Node Pool  DC   Name              Class   Drain  Eligibility  Status
06320436  default    dc1  ip-172-31-18-200  <none>  false  eligible     ready
6f5076b1  default    dc1  ip-172-31-16-246  <none>  false  eligible     ready
5fc1e22c  default    dc1  ip-172-31-17-43   <none>  false  eligible     ready
Navigate to the Nomad UI in your web browser with the URL in the post-setup.sh script output. Click on Sign In in the top right corner and log in with the bootstrap token saved in the NOMAD_TOKEN environment variable. Set the Secret ID to the token's value and click Sign in with secret.
Click on the Clients page from the sidebar navigation to explore the UI.
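For a CLI alternative, with NOMAD_ADDR and NOMAD_TOKEN still exported, the nomad server members command lists the three servers and indicates which one is the current leader:

$ nomad server members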
Cleanup
Destroy infrastructure
Use terraform destroy to remove the provisioned infrastructure. Respond yes to the prompt to confirm removal.
$ terraform destroy -var-file=variables.hcl

# ...

google_compute_instance.server[1]: Destruction complete after 51s
google_compute_instance.client[2]: Destruction complete after 51s
google_compute_instance.server[2]: Destruction complete after 51s
google_compute_instance.client[1]: Destruction complete after 51s
google_compute_instance.server[0]: Destruction complete after 51s
google_compute_instance.client[0]: Destruction complete after 51s
google_compute_network.hashistack: Destruction complete after 52s

Destroy complete! Resources: 11 destroyed.
Delete the GCE image
Your GCP account still has the virtual machine image, which you may be charged for. Delete the image by running the gcloud compute images delete command. In this example, the image name is hashistack-20221121163551.
$ gcloud compute images delete hashistack-20221121163551

The following images will be deleted:
 - [hashistack-20221121163551]

Do you want to continue (Y/n)?  y

Deleted [https://www.googleapis.com/compute/v1/projects/hc-3ff63253e6a54756b207e4d4727/global/images/hashistack-20221121163551].
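To confirm that nothing remains, you can list any images whose names start with the hashistack prefix; the command should return an empty result:

$ gcloud compute images list --filter="name~^hashistack"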
Next steps
In this tutorial you created a Nomad cluster on GCP with Consul and ACLs enabled. From here, you may want to:
- Run a job with a Nomad spec file or with Nomad Pack
- Test out native service discovery in Nomad
For more information, check out the following resources.
- Learn more about managing your Nomad cluster
- Read more about the ACL stanza and using ACLs