
Some background: My company runs Dataiku on AWS EC2, with all compute carried out by a non-plugin EKS cluster. That cluster still runs on vanilla EC2 autoscaling (it was provisioned in the early days of AWS EKS), and its performance is horrible for the kind of usage patterns data scientists in Dataiku have. Seriously, I cannot stress enough how useless traditional autoscaling is for Kubernetes. As a sidenote: using the Dataiku EKS plugin is not an option for our company.

Realistically, that brings us to two options: AWS EKS node groups (with the Kubernetes cluster autoscaler configured), or EKS on Fargate for serverless pods, a feature recently made available by AWS. Of course, not having to worry about autoscalers working, knowing a job will always be able to run, and being able to use AWS tags per Kubernetes namespace (for internal billing purposes!) would be extremely powerful. If I get Fargate to work for Dataiku, I see no reason to even investigate EKS node groups.

In a small-scale test on our company infra, I configured Fargate in EKS and successfully got Dataiku to trigger creation of a pod in a namespace that is backed not by EC2 but by Fargate. However, I get an HTTP timeout when the Fargate pod tries to communicate back with Dataiku. I'm going to play around with this a bit on my own AWS environment and will let you guys know what I discover.

AWS FARGATE STARTUP TIME (WINDOWS)

We are containerizing a legacy application that requires Windows and would like to use Fargate now that it supports Windows containers. I created a POC using our application image and the base Windows Server 2019 Core image from Microsoft. In both cases the Fargate task took over 7 minutes to start.

For comparison, here is a task lifecycle timing breakdown: pressing the 'Run Task' button to the task entering the PROVISIONING stage took 1 second; PROVISIONING to PENDING took 15 seconds; PENDING to RUNNING took 3 seconds. So that container launched to a fully running state in under 20 seconds. In the long run, expect this to get even better, until cold-start times are something you don't even have to ask about.

In the Regions where Amazon ECS supports AWS Fargate, the classic Amazon ECS first-run wizard guides you through the process of getting started with Amazon ECS using the Fargate launch type. The wizard gives you the option of creating a cluster and launching a sample web application. Setting up AWS Fargate with the first-run wizard is doubly beneficial because it walks us through the creation of any additional AWS components our environment needs. In Fargate's first-run wizard, we get started building the AWS Fargate deployment from the ground up, starting with containers and working our way up to the cluster level.
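For anyone wanting to reproduce the namespace-backed-by-Fargate setup, a minimal sketch of an eksctl Fargate profile is below. The cluster name, region, namespace, and tag keys are all placeholders, not taken from the thread; note that the tags here land on the Fargate profile, which is what makes per-namespace cost attribution possible.

```yaml
# Hypothetical eksctl ClusterConfig fragment: pods scheduled into the
# "dataiku-jobs" namespace run on Fargate instead of EC2 nodes.
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: dataiku-cluster        # placeholder cluster name
  region: eu-west-1            # placeholder region
fargateProfiles:
  - name: fp-dataiku-jobs
    selectors:
      - namespace: dataiku-jobs    # any pod in this namespace goes to Fargate
    tags:
      team: data-science           # example tags for internal billing
      cost-center: "1234"
```

Applied with `eksctl create fargateprofile -f config.yaml` (or as part of cluster creation), this is the piece that maps a Kubernetes namespace to serverless Fargate capacity.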
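On the HTTP timeout back to Dataiku: before digging into security groups and Fargate pod networking, it can help to check raw TCP reachability from a debug pod in the Fargate-backed namespace. A minimal sketch (the host and port are placeholders for wherever the Dataiku backend listens):

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout.

    Run this from inside a pod in the Fargate-backed namespace; if it
    returns False for the Dataiku backend, the problem is network-level
    (security groups, routing), not HTTP-level.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Placeholder values -- substitute the real Dataiku backend host and port.
print(can_reach("dataiku.internal.example", 11200, timeout=2.0))
```

If this returns False while the same check succeeds from an EC2-backed pod, that points at the Fargate pods' security group or subnet routing rather than Dataiku itself.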
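The PROVISIONING/PENDING/RUNNING timings discussed in the thread don't have to be eyeballed from the console: the ECS `DescribeTasks` API returns per-task timestamps (`createdAt`, `pullStartedAt`, `pullStoppedAt`, `startedAt`). A small sketch for turning those into stage durations; the sample timestamps are made up, and in practice the `task` dict would come from a boto3 `describe_tasks` response:

```python
from datetime import datetime, timezone

def transition_durations(task: dict) -> dict:
    """Compute per-stage durations (seconds) from ECS task timestamps.

    Expects a dict shaped like one entry of DescribeTasks' 'tasks' list,
    with datetime values for createdAt, pullStartedAt, pullStoppedAt,
    and startedAt.
    """
    return {
        # createdAt -> pullStartedAt: roughly the PROVISIONING + PENDING window
        "pre_pull": (task["pullStartedAt"] - task["createdAt"]).total_seconds(),
        # image pull is what dominates Windows cold starts (large base layers)
        "image_pull": (task["pullStoppedAt"] - task["pullStartedAt"]).total_seconds(),
        # container start once the image is on the host
        "container_start": (task["startedAt"] - task["pullStoppedAt"]).total_seconds(),
        "total": (task["startedAt"] - task["createdAt"]).total_seconds(),
    }

# Made-up example task, mimicking the ~20-second launch described above.
sample = {
    "createdAt":     datetime(2021, 11, 1, 12, 0, 0,  tzinfo=timezone.utc),
    "pullStartedAt": datetime(2021, 11, 1, 12, 0, 16, tzinfo=timezone.utc),
    "pullStoppedAt": datetime(2021, 11, 1, 12, 0, 18, tzinfo=timezone.utc),
    "startedAt":     datetime(2021, 11, 1, 12, 0, 19, tzinfo=timezone.utc),
}
print(transition_durations(sample))
```

Comparing the `image_pull` figure between the Windows 2019 Core POC and a small Linux image should show where those 7 minutes actually go.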
