When you use MongoDB (or any other database) as a transient data store, you will mainly be limited by the network performance of your containers. The link between the services and the database often gets saturated by TCP traffic long before the database itself becomes the bottleneck.
At my previous company we had a support contract with MongoDB, and we benchmarked several setups for the same workload. The workload fit in the container's memory, so the working set never needed to spill to disk:
- Run it against MongoDB Atlas
- Run it against another Fargate container with MongoDB installed
- Run it locally on our machines
- Run it against a sidecar container via the loopback interface
After benchmarking on machines of the same size, we concluded that the underlying hardware was not the problem; the network was the limiting factor in this case.
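The comparison itself does not need to be sophisticated. Here is a minimal sketch of such a harness in Python, assuming pymongo is installed; the connection strings are placeholders for the setups under test:

```python
# Minimal latency benchmark sketch using pymongo.
# The URIs below are placeholders; swap in your own connection strings.
import time

from pymongo import MongoClient

ENDPOINTS = {
    "atlas":   "mongodb+srv://cluster0.example.mongodb.net",
    "fargate": "mongodb://mongo-peer.internal:27017",
    "sidecar": "mongodb://localhost:27017",
}

def bench(uri: str, n: int = 5_000) -> float:
    coll = MongoClient(uri, serverSelectionTimeoutMS=5000)["bench"]["docs"]
    coll.drop()
    start = time.perf_counter()
    for i in range(n):
        # One round trip per insert, so network latency dominates the result.
        coll.insert_one({"i": i})
    return time.perf_counter() - start

for name, uri in ENDPOINTS.items():
    print(f"{name}: {bench(uri):.2f}s")
```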
Enter Fargate Sidecars
To get the same performance as a local setup, we used a sidecar container in our task definition. An example task definition:
{ "containerDefinitions": [ { "command": ["mongod", "--nojournal"], "essential": true, "image": "mongo:latest", "name": "mongo", "ulimits": [ { name = "nofile" softLimit = 65535 hardLimit = 65535 } ], "portMappings": [ { "containerPort": 27017 } ] } { "essential": true, "image": "httpd:2.4", "name": "sample-fargate-app", "portMappings": [ { "containerPort": 80, "hostPort": 80, "protocol": "tcp" } ] } ], "cpu": "256", "executionRoleArn": "arn:aws:iam::012345678910:role/ecsTaskExecutionRole", "family": "fargate-task-definition", "memory": "512", "networkMode": "awsvpc", "runtimePlatform": { "operatingSystemFamily": "LINUX" }, "requiresCompatibilities": [ "FARGATE" ] }
If you only set cpu and memory at the task definition level (and not on the container level), the containers share the task's CPU and memory automatically. We run mongod with --nojournal because the data is transient anyway and it is somewhat faster that way (note that this option was removed in MongoDB 6.1, so pin an older image tag if you rely on it). We also have to set the nofile ulimit so the container does not run out of file descriptors and crash under intensive workloads.
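If you manage task definitions from code rather than the console, the same JSON can be registered with boto3. A minimal sketch, assuming the JSON above is saved as task-definition.json (a placeholder name) and AWS credentials are configured:

```python
# Register the task definition above with ECS via boto3.
import json

import boto3

ecs = boto3.client("ecs")

with open("task-definition.json") as f:
    task_def = json.load(f)

# The JSON keys map directly onto register_task_definition keyword arguments.
response = ecs.register_task_definition(**task_def)
print(response["taskDefinition"]["taskDefinitionArn"])
```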
In your code, you copy the required data from your persistent MongoDB cluster into the transient database, run your workload against the loopback interface (extremely fast, with no network overhead), and copy the results back to the persistent cluster when you are done.
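A minimal sketch of that flow with pymongo; the URIs, database, and collection names are placeholders:

```python
# Sketch of the copy -> compute -> copy-back flow against the sidecar.
from pymongo import MongoClient

persistent = MongoClient("mongodb+srv://cluster0.example.mongodb.net")
transient = MongoClient("mongodb://localhost:27017")  # the sidecar, via loopback

src = persistent["app"]["events"]
scratch = transient["app"]["events"]

# 1. Copy the working set into the transient sidecar.
scratch.insert_many(src.find({}), ordered=False)

# 2. Run the heavy workload against loopback: no network overhead.
scratch.update_many({"status": "new"}, {"$set": {"processed": True}})

# 3. Copy the results back to the persistent cluster.
out = list(scratch.find({"processed": True}))
if out:
    persistent["app"]["results"].insert_many(out)
```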
You might run into Docker Hub rate limits with this approach, because every task start pulls the MongoDB image from Docker Hub.
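One mitigation, sketched below with boto3, is an ECR pull-through cache so the image is served from your own registry after the first fetch; the repository prefix and secret ARN are placeholders:

```python
# Sketch: create an ECR pull-through cache rule for Docker Hub, so Fargate
# pulls the mongo image from ECR instead of Docker Hub on every task start.
# The repository prefix and the Secrets Manager ARN are placeholders.
import boto3

ecr = boto3.client("ecr")

ecr.create_pull_through_cache_rule(
    ecrRepositoryPrefix="docker-hub",
    upstreamRegistryUrl="registry-1.docker.io",
    # Docker Hub requires credentials, stored as a Secrets Manager secret.
    credentialArn="arn:aws:secretsmanager:us-east-1:012345678910:secret:ecr-pullthroughcache/docker-hub",
)

# The task definition then references the cached image, e.g.:
# 012345678910.dkr.ecr.us-east-1.amazonaws.com/docker-hub/library/mongo:latest
```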
I wrote an article about those rate limits as well; you can find it here.