Run a PHP application on AWS Fargate
Following the trend of serverless, and all that hype (or not?), I was looking through the AWS services on offer and stumbled upon AWS Fargate, a service that lets you run containerized applications on either Amazon Elastic Container Service (ECS) or Amazon Elastic Kubernetes Service (EKS).

For the tooling (development and deployment) of our PHP application I'd like to stick to widely adopted tools. Of course, you're not bound to this tooling. If you want to use an external database hosted somewhere other than on Amazon, feel free to do so. The same applies to logging: if you've got a working Graylog instance up and running, you're free to use that. These things are not mandatory for AWS Fargate; I just decided to use them for convenience reasons.

If you're already wondering about the pricing of this setup, let's leave it with what advocates usually say: it depends! 😉 To get a general feeling, have a look at the costs section at the end of this article. A lot of the (further) costs depend on usage and dimensioning.

For this project/experiment I use Laravel 7. Of course, you can use any framework you'd like to run this application. Just make sure you adjust certain paths for building the Docker images.

Grab the code: all files can be found in this GitHub repository. Feel free to open issues and PRs if you spot anything wrong.

The final directory layout:

Let's get started! To provide the complete application to AWS Fargate, we'll split it into three containers: nginx, php-fpm and nodejs. For the database we don't create a separate container, as we need a stateful solution. We'll rely on a multi-stage build as well as on Alpine-based images to keep the image sizes extremely small. You can find some comparison and further reading on Alpine images in this blog post or on the Alpine Docker page. Here is an example just to demonstrate the basic usage of our multi-stage build. You can find the full Dockerfile in the repository.
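A minimal sketch of such a multi-stage build (the stage names, base images and paths here are assumptions for illustration; the real Dockerfile lives in the repository):

```dockerfile
# Stage 1: install PHP dependencies with Composer in a throwaway stage
FROM composer:2 AS vendor
WORKDIR /app
COPY composer.json composer.lock ./
RUN composer install --no-dev --no-scripts --prefer-dist

# Stage 2: the slim Alpine-based runtime image; only the built
# artifacts from the first stage are copied over
FROM php:7.4-fpm-alpine AS php-fpm
WORKDIR /var/www/html
COPY --from=vendor /app/vendor ./vendor
COPY . .
```

The build and development dependencies stay in the first stage, so the final image only carries what the running application needs.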
Because we only keep the files needed by the containers, the image sizes stay relatively small. For the php image, the size mainly depends on the modules needed, as we need to include the build and development files for creating the extensions. This might increase our image size. For my setup I ended up with these sizes:

Although included in the project and supported by AWS Fargate, Docker Compose is not going to be used in our AWS deployment. For the orchestration of the containers we are going to use Amazon's ECS task definition, in which you can link containers. I will come to that a bit later when talking about the task definition.

You can use the docker-compose file for local testing of the containers. In there you can see how we addressed the different stages of the Docker image via build targets, although we have only one Dockerfile. Through the `depends_on` item the containers are linked. Do not use `links`. Using links in your docker-compose.yml is considered deprecated and is going to be removed more or less soon. Either use `depends_on` or create a user-defined network.

For local testing you just need to run `docker-compose up`. Afterwards you'll find the three images, linked, up and running.

The push to the repository is part of our deployment pipeline, which is our next topic. For deployment we are going to use GitHub Actions to create the Docker images, push them to a Docker registry (ECR in our case), and create a deployment task for ECS to pick up the images and spin up a new service that runs our three containers on ECS. Most of you are already familiar with GitHub Actions, so we are going to step right into our workflow file. To make the workflow work you need to register secrets in GitHub for your AWS key, AWS secret key and AWS region. Furthermore you need to adjust the names (3 in total) according to the repositories you have created in AWS. For that, we quickly switch to our ECR console and create three repositories. Tag immutability is supported since mid-2019 and prevents tags from being overwritten.
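A sketch of how such a compose file can address the stages of a single Dockerfile via `target` and link the containers with `depends_on` (service and stage names are assumptions; check the repository for the real file):

```yaml
version: "3.7"
services:
  nginx:
    build:
      context: .
      target: nginx        # stage name inside the single Dockerfile
    ports:
      - "80:80"
    depends_on:            # preferred over the deprecated `links`
      - php-fpm
  php-fpm:
    build:
      context: .
      target: php-fpm
  nodejs:
    build:
      context: .
      target: nodejs
```

Services on the same Compose network can already reach each other by service name, which is why `links` is unnecessary here.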
As we're tagging the images with `latest` and want this tag to reflect the latest changes, we set tag immutability to Mutable. Image scanning helps in identifying software vulnerabilities in your container images. We are going to use it here; you can certainly disable it, too, if you don't like it for whatever reason.

After you have created the secrets in your GitHub repository, set up the repositories on AWS ECR and adjusted the repository names in your workflow, let's quickly have a look at each of the workflow steps:

In ECS, the basic unit of a deployment is a task, a logical construct that models one or more containers. This means that the ECS APIs operate on tasks rather than individual containers. In ECS you can't run a container: rather, you run a task, which, in turn, runs your container(s). A task contains one or more containers.

In our workflow, steps 7, 8 and 9 are responsible for adjusting the `task-definition.json` file. This file can be compared to the docker-compose.yml or any other orchestration file you use to connect your Docker containers. One special thing here is the `task-definition` input appearing in each of steps 7, 8 and 9. In steps 8 and 9 it takes the task definition rendered by the previous step and exchanges the `image` field inside; step 7 uses the `task-definition.json` file as it is. This is needed to iteratively insert the images from our Docker registry and finally push the complete task definition in step 10.

Moving on to the task-definition.json file itself. There is a whole documentation section on this file on the AWS docs pages. I'll continue with the parts that are relevant for us here. Let's go through the file, but starting at the end: all containers have `essential` set to `true`. This means that if one container fails or stops for any reason, all other containers that are part of the task are stopped. Next is all the configuration about AWS Fargate and all the connected services we are going to use. AWS Fargate cannot be configured directly, as it is more of an underlying technology for running serverless applications on AWS.
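The chaining of the render and deploy steps can be sketched like this (the step ids, service and cluster names are assumptions; the actions are the official `aws-actions` ones):

```yaml
# Step 7: render the nginx image into the raw task-definition.json
- name: "Render task definition: nginx"
  id: render-nginx
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: nginx
    image: ${{ steps.build-nginx.outputs.image }}

# Step 8: take the output of step 7 and exchange the php-fpm image
# (step 9 repeats the same pattern for nodejs)
- name: "Render task definition: php-fpm"
  id: render-php-fpm
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: ${{ steps.render-nginx.outputs.task-definition }}
    container-name: php-fpm
    image: ${{ steps.build-php-fpm.outputs.image }}

# Step 10: push the fully rendered task definition to ECS
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.render-php-fpm.outputs.task-definition }}
    service: my-service    # must match the service name in the cluster
    cluster: my-cluster
    wait-for-service-stability: true
```

Each render step outputs a new temporary task definition file, which is why the next step consumes `steps.<previous>.outputs.task-definition` instead of the original file.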
In the next sections we are going to step into each part that needs configuration to get the whole Laravel application running.

Talking about security on AWS could fill an entire series of posts. I'll keep this to a minimum that I personally think is a reasonable way to operate an application.

Separate user and group for your application
In your Identity and Access Management (IAM), create a new user that is allowed to run all tasks around your application. Assign and manage permissions via a group: make the user a member of this group and then assign permissions to the group instead of directly to the user.

Roles for ECR and ECS
By default the newly created user has no rights to execute any operation on ECR or ECS. In our case, this user needs to be able to push images to ECR and to run services on ECS that execute our task and spin up the containers. For that we need to attach the following policies to our group:

Surely we would need to tighten the permissions a bit later on. I'll get to that in a separate post.

Execution role
From the AWS docs: "The Amazon ECS container agent, and the Fargate agent for your Fargate tasks, make calls to the Amazon ECS API on your behalf. The agent requires an IAM role for the service to know that the agent belongs to you. This IAM role is referred to as a task execution IAM role." So then, create a new role and attach the following policies: The name of this role needs to match the value in our workflow.

Head over to your AWS Management Console, open Services, type ECS and click on Elastic Container Service. On the left side menu click on Clusters and hit the Create Cluster button.

Create service: Step 1
Launch type: Fargate
Task definition: this is prepopulated with the name in our task definition file.
Service name: this should match the service name from our workflow file.
Number of tasks: leave that at 1 for now. We will not run more than one instance of this service.
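Back to the execution role for a moment: besides the attached policies, the role needs a trust relationship that lets ECS tasks assume it. The standard trust policy for this (straight from AWS conventions) looks like:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "ecs-tasks.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

If you create the role through the console's "Elastic Container Service Task" use case, this trust policy is set up for you automatically.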
Create service: Step 2
Cluster VPC: make sure you select the correct Virtual Private Cloud (VPC) that was created together with your cluster. It should be selected automatically.
Subnets: select the subnets that are unselected, normally two.
Load balancers: we will go with None for the moment and come back later to add a load balancer to our service.
Service discovery: disable service discovery, as we won't use Amazon Route 53 for this project. If needed you can add this later on, of course.
Auto-scaling: skip that. We won't use it.

Review all your settings and hit Create service.

Managing secrets and/or environment variables on AWS can be done with either AWS Secrets Manager or AWS Systems Manager Parameter Store. I decided to go with the parameter store for one main reason: it is (almost) free of charge. For those who want to read more about the differences and the pros and cons of both solutions, have a look at this blog post for a comparison. Whether it is a configuration value or an actual secret: both are going to be stored in the parameter store and injected into the container via our task definition.

Store your secrets as type SecureString instead of type String. "A SecureString parameter is any sensitive data that needs to be stored and referenced in a secure manner. If you have data that you don't want users to alter or reference in plain text, such as passwords or license keys, create those parameters using the SecureString datatype." Although I used the normal String type, you can do better for the above-mentioned reasons.

To quickly check if you can reach your site, navigate to your cluster and check the task that is currently running. There you will find your public IP address, which directly points to port 80 of your app. When you open that page you should be welcomed by the Laravel landing page of our fresh install.

Note: We'll handle neither HTTPS nor Amazon Route 53 here. After adding the load balancer you can point your domain via CNAME to the ELB.
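Inside the task definition, a parameter is injected through the `secrets` array of a container definition, while plain configuration goes into `environment`. A fragment could look like this (region, account id and parameter paths are placeholders):

```json
{
  "name": "php-fpm",
  "environment": [
    { "name": "APP_ENV", "value": "production" }
  ],
  "secrets": [
    {
      "name": "DB_PASSWORD",
      "valueFrom": "arn:aws:ssm:eu-central-1:123456789012:parameter/myapp/DB_PASSWORD"
    }
  ]
}
```

The execution role must be allowed to read these parameters, which is exactly why the SSM read policy is attached to it.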
The ELB is going to direct all traffic to port 80 of our application, where our nginx is listening. A load balancer can be created in the EC2 service. On the left, select Load Balancers and hit the button Create Load Balancer. Next select Application Load Balancer.

In the first step, additionally add the availability zones for your different subnets. Next would be step 2, which we are going to skip, as it is only about HTTPS. We create a new security group with only one rule that allows HTTP traffic coming from anywhere (0.0.0.0/0) to reach our instance via TCP on port 80. Here we create a new target group with target type ip, which is required for Fargate's awsvpc network mode. Our load balancer routes requests to the targets in this target group using the protocol and port that you specify. The Register Targets step can be skipped, and in step 6 please check all the settings again. If everything looks good, you can save the configuration of the ELB.

Now, when you push changes to your GitHub repository, the deploy workflow starts, builds the images, pushes them to the ECR Docker registry and creates the task that is picked up by the service to run your application.

So, what's the price you have to pay for this setup? As I mentioned earlier, this is hard to say, as it depends on usage, dimensioning and the region. Here is a nice overview of how costs vary across the different AWS regions. To make it short, the top 5: Use the AWS pricing calculator to get a proper pricing. If you're in the first phase of your project, you are probably eligible for the AWS Free Usage Tier. This will surely give you a lot of room to play around and test.

Here is a list of the usage for the services I created and played around with for this post. As you can see, the most important cost factor for now is the storage for our Docker registry on ECR. So keeping our images small basically saves us money.

Just a quick rundown of improvements and further ideas:
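For reference, the same load balancer setup could be sketched with the AWS CLI; all names, IDs and ARNs below are placeholders you would replace with your own:

```shell
# Target group with target type "ip" -- required for Fargate tasks (awsvpc)
aws elbv2 create-target-group \
  --name myapp-tg --protocol HTTP --port 80 \
  --vpc-id vpc-0123456789abcdef0 --target-type ip

# The load balancer itself, spanning the subnets of our VPC
aws elbv2 create-load-balancer \
  --name myapp-alb \
  --subnets subnet-aaaa1111 subnet-bbbb2222 \
  --security-groups sg-0123456789abcdef0

# Listener forwarding HTTP on port 80 to the target group
aws elbv2 create-listener \
  --load-balancer-arn <alb-arn> --protocol HTTP --port 80 \
  --default-actions Type=forward,TargetGroupArn=<tg-arn>
```

This is just the CLI equivalent of the console walkthrough above; the console remains the easier route for a first setup.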
Provide HTTPS access; refine the groups and policies in IAM to tighten access and strengthen security; see the AWS SDK for Laravel to make handling AWS in Laravel easier… and probably many, many more things :-)

Deployment: GitHub, including GitHub Actions
Infrastructure: Elastic Container Service (ECS) to run our containers on, Docker containers hosted on Amazon Elastic Container Registry (ECR)
Database: we'll be using Amazon RDS
Logging: Amazon CloudWatch

1. Checkout: check out your repository from GitHub
2. Configure AWS credentials: configure AWS credential environment variables for use in other GitHub Actions
3. Login to Amazon ECR: log into Amazon ECR with the local Docker client
4. Build, tag, push image: nginx: builds the Docker image for nginx and pushes it to Amazon ECR
5. Build, tag, push image: php-fpm: builds the Docker image for php-fpm and pushes it to Amazon ECR
6. Build, tag, push image: nodejs: builds the Docker image for nodejs and pushes it to Amazon ECR
7. Render task definition: nginx: renders the final repository URL, including the name of the repository, into the task-definition.json file. We'll come to that in the next topic.
8. Render task definition: php-fpm: renders the final repository URL, including the name of the repository, into the task-definition.json file.
9. Render task definition: nodejs: renders the final repository URL, including the name of the repository, into the task-definition.json file.
10. Deploy Amazon ECS task definition: a task definition is required to run Docker containers in Amazon ECS. You define your containers, their hardware resources, inter-container connections as well as host connections, where to send logs to, and much more. See the next section about this topic.

But first let's check two important settings: the cluster name and the service name. The cluster name is the one you are going to choose in Amazon ECS; the service name is the name of the service that picks up the task and deploys it into the cluster.
Logout of Amazon ECR: log out from Amazon ECR and erase any credentials connected with it

requiresCompatibilities: this needs to be set to FARGATE, otherwise ECS won't recognize it properly.

networkMode: this is set to awsvpc, so every task that is launched from that task definition gets its own elastic network interface (ENI) and a primary private IP address. That makes it possible to call services and applications as if they were running on one system (not in distributed containers); nginx, for example, can reach php-fpm via localhost. If we orchestrated with Docker Compose, we would normally address a container by its service name instead.

cpu: the cpu value can be expressed in CPU units or vCPUs in a task definition, but is converted to an integer number of CPU units when the task definition is registered.

memory: the memory value can be expressed in MiB or GB in a task definition, but is converted to an integer number of MiB when the task definition is registered.

Both values, cpu and memory, can be defined for each container separately or for the complete task. In this sample application, I defined the values for the complete task. This runs just fine. And remember: you can change (scale) this as you need; this is just a point to start from.

executionRoleArn: this is connected to permissions on AWS. I'll come to this in the section about configuring AWS Fargate and its roles.

family: this is the name for our task and can be freely chosen. So you can deploy multiple task definitions if you like and do balancing and such. It is just a common name for a set of containers, in our case here for three.

containerDefinitions: here comes the fun part… the containers.
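The awsvpc localhost behaviour shows up directly in the nginx vhost configuration. A sketch of the FastCGI block (the document root path is an assumption based on a default Laravel layout):

```nginx
location ~ \.php$ {
    include fastcgi_params;
    fastcgi_param SCRIPT_FILENAME /var/www/html/public$fastcgi_script_name;
    # awsvpc network mode: all containers of the task share one ENI,
    # so php-fpm is reachable via localhost
    fastcgi_pass 127.0.0.1:9000;
    # under Docker Compose this would be the service name instead:
    # fastcgi_pass php-fpm:9000;
}
```

Keeping both variants in mind is useful when you test locally with Compose but deploy with the ECS task definition.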
I am going to summarize the important things you'll encounter inside the container definitions:

nginx: opens port 80 to the host to be accessible from the outside
php-fpm: opens port 9000 only for inter-container communication, so it is accessible from nginx; secrets and environment variables (keys, settings for debugging, the env to be used) are properly set
nodejs: nothing special

Policies to attach:
AmazonEC2ContainerRegistryFullAccess
AmazonECS_FullAccess
AmazonECSTaskExecutionRolePolicy
AmazonSSMReadOnlyAccess: we are going to need that to read our environment variables from the AWS Systems Manager Parameter Store

Parameter store: free of charge, with a limit of 10,000 parameters per account
Secrets manager: $0.40 per secret stored, plus an additional $0.05 per 10,000 API calls

For the load balancer, make sure you:
have a listener for HTTP on port 80
have the correct Virtual Private Cloud (VPC) selected

N. Virginia
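Putting the port summary above into task-definition.json terms, the containerDefinitions fragment could look like this (a sketch with assumed names; the image values are rendered in by the workflow):

```json
"containerDefinitions": [
  {
    "name": "nginx",
    "image": "<rendered by the workflow>",
    "essential": true,
    "portMappings": [{ "containerPort": 80, "protocol": "tcp" }]
  },
  {
    "name": "php-fpm",
    "image": "<rendered by the workflow>",
    "essential": true,
    "portMappings": [{ "containerPort": 9000, "protocol": "tcp" }]
  }
]
```

With awsvpc networking there is no host port mapping to configure; the container port is the port exposed on the task's ENI.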