Hi, this is the default behaviour of EKS node groups. I am using Terraform, which mostly solves the issue, but I have to specify:
`lifecycle { create_before_destroy = true }`
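For context, a minimal sketch of how that lifecycle block sits inside a node group resource (resource and variable names here are illustrative, not from my actual module):

```hcl
resource "aws_eks_node_group" "workers" {
  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "workers" # illustrative name
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = var.subnet_ids
  instance_types  = ["t3.medium"]

  scaling_config {
    desired_size = 2
    min_size     = 1
    max_size     = 4
  }

  # Create the replacement node group before destroying the old one,
  # so pods can be rescheduled instead of the whole group disappearing.
  lifecycle {
    create_before_destroy = true
  }
}
```

Note that with `create_before_destroy` the old and new groups exist briefly at the same time, so the name must not collide; using `node_group_name_prefix` instead of a fixed `node_group_name` is one way to handle that.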
I have written a Terraform module that handles node group updates gracefully.
Regarding multiple node groups, running one per Availability Zone is related more to Cluster Autoscaler than to EKS itself. Cluster Autoscaler does not support Auto Scaling Groups that span multiple Availability Zones; instead you should create one Auto Scaling Group per Availability Zone and enable the `--balance-similar-node-groups` feature. If you do use a single Auto Scaling Group spanning multiple Availability Zones, you will find that AWS's rebalancing feature unexpectedly terminates nodes without draining them.
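To illustrate the one-group-per-AZ pattern, here is a hedged sketch using `for_each` over a map of AZ to subnet (the `az_subnets` variable and resource names are assumptions for the example):

```hcl
variable "az_subnets" {
  # e.g. { "eu-west-1a" = "subnet-aaa", "eu-west-1b" = "subnet-bbb" }
  type = map(string)
}

resource "aws_eks_node_group" "per_az" {
  for_each = var.az_subnets

  cluster_name    = aws_eks_cluster.this.name
  node_group_name = "workers-${each.key}"
  node_role_arn   = aws_iam_role.node.arn
  subnet_ids      = [each.value] # one subnet => the group stays in a single AZ

  scaling_config {
    desired_size = 1
    min_size     = 1
    max_size     = 3
  }
}
```

Then pass `--balance-similar-node-groups=true` to the cluster-autoscaler container so it keeps the per-AZ groups at similar sizes.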