Create Swarm Service Locally
~> docker swarm init
Swarm initialized: current node (sj85xz6gexk9kkso65fptppjv) is now a manager.
To add a worker to this swarm, run the following command:
docker swarm join --token SWMTKN-1-03w64skgdjczcl7dcmqsilyb8eaqvvhpkhqnmc0pvykstru5xa-8qo1i3qc7hw789y7appol2kni 192.168.65.2:2377
To add a manager to this swarm, run 'docker swarm join-token manager' and follow the instructions.
What we have now is a single-node swarm with all the built-in functionality available out of the box.
What just happened? (docker swarm init)
Lots of PKI and security automation
Root Signing Certificate created for our Swarm
Certificate is issued for first Manager node
Join tokens are created
Raft Consensus database created to store root CA, configs and secrets
Encrypted by default on disk (1.13+)
No need for another key/value system to hold orchestration/secrets
Replicates the Raft log amongst Managers via mutual TLS on the "control plane"
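The security primitives listed above can be inspected directly. As a sketch (run on the manager node of the swarm created above, with a working Docker daemon), a few read-only commands make the PKI visible:

```shell
# Print the swarm's root CA certificate created by `docker swarm init`
docker swarm ca

# Re-print the join tokens at any time (add --rotate to invalidate old ones)
docker swarm join-token worker
docker swarm join-token manager

# Confirm this node's swarm state and whether it is a manager
docker info --format '{{.Swarm.LocalNodeState}} / manager: {{.Swarm.ControlAvailable}}'
```

Join tokens are the only secret a new node needs; everything else (TLS certs, cert rotation) is negotiated automatically on join.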
Let's explore
~> docker node ls
ID HOSTNAME STATUS AVAILABILITY MANAGER STATUS
sj85xz6gexk9kkso65fptppjv * moby Ready Active Leader
Let's create a service
~> docker service create alpine ping 8.8.8.8
mxtjxv23zdmn7xtduhuci9ckz
~> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mxtjxv23zdmn confident_mclean replicated 1/1 alpine:latest
~> docker service ps mxtjxv23zdmn # this shows us the service's tasks (its containers)
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
05ecxpe99oif confident_mclean.1 alpine:latest moby Running Running about a minute ago
~> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
dba6938ae565 alpine:latest "ping 8.8.8.8" 2 minutes ago Up 2 minutes confident_mclean.1.05ecxpe99oifzy3pmh94s3csa
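Since we didn't pass a name, Swarm generated one (confident_mclean). As a sketch (the name "pinger" here is just illustrative, not from the walkthrough), you can name the service explicitly and then reference it by that name:

```shell
# Give the service an explicit name instead of an auto-generated one
# ("pinger" is a hypothetical name for illustration)
docker service create --name pinger alpine ping 8.8.8.8

# Later commands can then use the name rather than the service ID
docker service ps pinger
```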
Let's scale our service
~> docker service update mxtjxv23zdmn --replicas 3
mxtjxv23zdmn
~> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mxtjxv23zdmn confident_mclean replicated 3/3 alpine:latest
docker service ps mxtjxv23zdmn
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
05ecxpe99oif confident_mclean.1 alpine:latest moby Running Running 6 minutes ago
zkivho70qhy9 confident_mclean.2 alpine:latest moby Running Running 2 minutes ago
ud0isohnx5vy confident_mclean.3 alpine:latest moby Running Running 2 minutes ago
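Besides `docker service update --replicas`, there is a shorthand for this. A sketch, assuming the service ID from this walkthrough:

```shell
# Equivalent shorthand for --replicas: set the service to 3 tasks
docker service scale mxtjxv23zdmn=3

# Several services can be scaled in a single command, e.g.:
# docker service scale svc1=3 svc2=5
```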
We now have 3 tasks running.
Let's remove/kill one of the running containers and see what happens
~> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
2288215be2ae alpine:latest "ping 8.8.8.8" 5 minutes ago Up 5 minutes confident_mclean.3.ud0isohnx5vy50eomjjhad9yo
b6f1bf35be1e alpine:latest "ping 8.8.8.8" 5 minutes ago Up 5 minutes confident_mclean.2.zkivho70qhy9t4mg6d7kf94ue
dba6938ae565 alpine:latest "ping 8.8.8.8" 9 minutes ago Up 9 minutes confident_mclean.1.05ecxpe99oifzy3pmh94s3csa
~> docker container rm -f 2288215be2ae
2288215be2ae
~> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mxtjxv23zdmn confident_mclean replicated 2/3 alpine:latest
Run it again a few seconds later, and Swarm has already replaced the failed task:
~> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
mxtjxv23zdmn confident_mclean replicated 3/3 alpine:latest
~> docker service ps mxtjxv23zdmn
ID NAME IMAGE NODE DESIRED STATE CURRENT STATE ERROR PORTS
ccfz80x50ztb confident_mclean.1 alpine:latest moby Running Running 41 seconds ago
05ecxpe99oif \_ confident_mclean.1 alpine:latest moby Shutdown Failed 48 seconds ago "task: non-zero exit (137)"
zkivho70qhy9 confident_mclean.2 alpine:latest moby Running Running 8 minutes ago
sohismj9iukc confident_mclean.3 alpine:latest moby Running Running 8 minutes ago
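The crash history can be filtered out, and a failed task can be inspected for its exit reason. A sketch, using the service and failed-task IDs from the output above:

```shell
# Show only tasks that should currently be running, hiding crash history
docker service ps --filter "desired-state=running" mxtjxv23zdmn

# Ask the manager why a specific task failed
# (05ecxpe99oif is the failed task from the listing above)
docker inspect --format '{{.Status.Err}}' 05ecxpe99oif
```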
Swarm restarted the killed container, and docker service ps keeps a history of crashes in its output.
Now let's remove our service
~> docker service rm confident_mclean
confident_mclean
~> docker service ls
ID NAME MODE REPLICAS IMAGE PORTS
~> docker container ls
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
~> docker container ls -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
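Removing the service removes its containers, but the node is still a swarm manager. To tear the swarm itself down (an extra cleanup step, not shown in the transcript above):

```shell
# Leave the swarm; --force is required because this node is a manager
docker swarm leave --force
```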