Node Availability
Each node has one of three admin-controlled availability states.
These only affect whether existing tasks keep running on the node and whether new tasks can be scheduled there:
active: Runs existing tasks, available for new tasks
pause: Runs existing tasks, not available for new tasks
(good for troubleshooting)
drain: Reschedules existing tasks, not available for new tasks
(good for maintenance)
Availability also applies during service updates and task recovery: rescheduled tasks follow the same rules
Some internet examples suggest "draining managers", but that is not realistic for most clusters
Use node labels and placement constraints to keep service tasks off managers instead
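A minimal sketch of the label approach (node names and the `role=app` label are illustrative): label the worker nodes, then constrain the service so its tasks never land on managers.

```shell
# Label the nodes that should run application tasks
# (hypothetical node names).
docker node update --label-add role=app worker1
docker node update --label-add role=app worker2

# Constrain the service to labeled nodes, keeping managers free of app tasks.
docker service create --name webapp1 \
  --constraint 'node.labels.role == app' \
  --replicas 4 nginx
```

The built-in `--constraint 'node.role == worker'` achieves the same effect without custom labels; custom labels give finer-grained control when workers are not interchangeable.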
Examples
Prevent node2 from starting new containers
docker node update --availability pause node2
Stop containers on node3 and assign their tasks to other nodes
docker node update --availability drain node3
Mark node3 as available again
docker node update --availability=active node3
Note: re-activating a node does not rebalance existing containers across nodes
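Since re-activating a node does not move running tasks, a redistribution has to be triggered manually. A forced update restarts the service's tasks without changing its spec, letting the scheduler reconsider placement across all currently active nodes (service name is illustrative):

```shell
# Restart all tasks of webapp1; the scheduler re-evaluates placement
# and may schedule tasks onto the newly re-activated node.
docker service update --force webapp1
```

This restarts containers, so expect a brief rolling disruption governed by the service's update settings.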
Hands On
[node1] ~> docker service create --name webapp1 --replicas 4 nginx
vcc4y9ootqso9y33ibc3p49p0
overall progress: 4 out of 4 tasks
1/4: running [==================================================>]
2/4: running [==================================================>]
3/4: running [==================================================>]
4/4: running [==================================================>]
verify: Service converged

[node1] ~> docker node update --availability=pause manager2
manager2
##### VISUALIZER IS EXACTLY SAME AS ABOVE
[node1] ~> docker service update --replicas=8 webapp1
webapp1
overall progress: 8 out of 8 tasks
1/8: running [==================================================>]
2/8: running [==================================================>]
3/8: running [==================================================>]
4/8: running [==================================================>]
5/8: running [==================================================>]
6/8: running [==================================================>]
7/8: running [==================================================>]
8/8: running [==================================================>]
verify: Service converged

[node1] ~> docker node update --availability=active manager2
manager2
##### VISUALIZER IS EXACTLY SAME AS ABOVE
##### SWARM by default DOES NOT rebalance cluster when nodes become active again
[node1] ~> docker node update --availability=drain worker1
worker1

[node1] ~> docker node ls
ID                           HOSTNAME   STATUS   AVAILABILITY   MANAGER STATUS   ENGINE VERSION
th2nb6uc5j61iqsz67q1oztaj *  manager1   Ready    Active         Leader           18.09.1
rru8z0gj9riq61jp3fwiyiesq    manager2   Ready    Active                          18.09.1
tsojlv10aqlvncyr58ptg1q88    worker1    Ready    Drain                           18.09.1
[node1] ~> docker node update --availability=active worker1
worker1
##### VISUALIZER IS EXACTLY SAME AS ABOVE
##### SWARM by default DOES NOT rebalance cluster when nodes become active again