Hi, this is the default behaviour of an EKS node group. I am using Terraform, which mostly solves the issue, but I have to specify:

`lifecycle { create_before_destroy = true }`
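
For context, here is a minimal sketch of where that `lifecycle` block lives. The resource names, sizes, and the use of a name prefix are illustrative placeholders, not the exact module code:

```hcl
resource "aws_eks_node_group" "workers" {
  cluster_name  = aws_eks_cluster.this.name
  node_role_arn = aws_iam_role.node.arn
  subnet_ids    = var.subnet_ids

  # Using a name prefix instead of a fixed name lets Terraform bring up the
  # replacement group before tearing down the old one.
  node_group_name_prefix = "workers-"

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 4
  }

  lifecycle {
    create_before_destroy = true
  }
}
```

With `create_before_destroy`, a change that forces replacement (for example, a new AMI release version) creates the new node group first, so workloads can be rescheduled before the old nodes disappear.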

I have written a Terraform module that handles updates like this cleanly.

Regarding using multiple node groups, ideally one per Availability Zone: this is mostly about the cluster autoscaler. The cluster autoscaler does not support Auto Scaling Groups that span multiple Availability Zones; instead, you should use one Auto Scaling Group per Availability Zone and enable the `--balance-similar-node-groups` feature. If you do use a single Auto Scaling Group spanning multiple Availability Zones, you will find that the AZ rebalancing feature terminates nodes unexpectedly, without draining them first.
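
A hedged sketch of the one-group-per-AZ pattern in Terraform. The variable shape, subnet IDs, and resource names are assumptions for illustration; the key point is that each node group gets exactly one subnet, pinning it to a single Availability Zone:

```hcl
# Hypothetical map of AZ name -> subnet ID, e.g.
# { "eu-west-1a" = "subnet-aaa", "eu-west-1b" = "subnet-bbb" }
variable "az_subnets" {
  type = map(string)
}

resource "aws_eks_node_group" "per_az" {
  for_each = var.az_subnets

  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "workers-${each.key}"
  node_role_arn   = aws_iam_role.node.arn

  # A single subnet pins this group's Auto Scaling Group to one AZ,
  # which is what the cluster autoscaler expects.
  subnet_ids = [each.value]

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 3
  }
}
```

You would then run the cluster autoscaler with `--balance-similar-node-groups=true` so it keeps the per-AZ groups at similar sizes itself, instead of relying on the ASG rebalancing behaviour.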

Written by

Lead Software/Infrastructure/DevOps Engineer
