Setting Up a Scalable Magento AWS Cluster [Technical Overview]

Find out how to build an infrastructure with a scalable Magento AWS cluster that can meet any traffic spike head-on.

In Magento infrastructure, we often need to increase resources to cope with incoming customer traffic, such as on Black Friday or during other promotions and sales. With a dedicated or managed server, we can’t simply scale it up on demand; instead, we have to provision a server with enough capacity for that kind of load in advance.

But as a website grows in popularity, even that often stops being enough. In this case, an autoscaling AWS cloud cluster, with Magento configured to work with it, comes in handy.

Here’s a simplified infrastructure diagram:

Internals of the Infrastructure

Golden Image

First of all, we need to create a base image with all the applications. To do that, create an instance and install Nginx, PHP with extensions, and any additional applications, for instance, for logging or monitoring. We take the Nginx config shipped with Magento and use Amazon Linux 2 for the base image.

You could instead start from a clean Amazon Linux 2 image and move the application installation into user data, although that will increase the start time of instances in the autoscaling group.

After configuring the instance, we create an AMI from it, which we will use in the autoscaling group later.
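For example, the golden image can be created with the AWS CLI; the instance ID and image name below are placeholders:

# Create an AMI from the configured instance (instance ID and name are placeholders)
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "magento-golden-image-v1" \
    --description "Amazon Linux 2 with Nginx, PHP, and monitoring agents"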

IAM Permissions

Then we need to create an IAM role for SSM and S3 access. SSM makes it easy to open an instance shell right from the AWS web console, while S3 is used to store configuration files.

For example, we could add AmazonS3FullAccess and AmazonSSMManagedInstanceCore.
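As a sketch, assuming the role is named MagentoInstanceRole (a placeholder), the managed policies can be attached like this:

# Attach the managed policies to the instance role (role name is a placeholder)
aws iam attach-role-policy --role-name MagentoInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam attach-role-policy --role-name MagentoInstanceRole \
    --policy-arn arn:aws:iam::aws:policy/AmazonSSMManagedInstanceCore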

Amazon EFS

Reading and writing the media, report, log, and similar shared folders becomes possible after creating an Elastic File System (EFS).

We will add the auto-mount of this storage to user data:

#!/bin/bash

mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-f2ab1caa.efs.eu-central-1.amazonaws.com:/ /var/www/shared/

Here, fs-f2ab1caa.efs.eu-central-1.amazonaws.com:/ is the DNS name of the created file system, and /var/www/shared/ is the folder where it will be mounted on the instances.

Security Groups

To make communication between services possible and efficient, we need to create security groups (SG):

ProdSG (SG for frontend and backend instances)
      TCP 80: allow from ProdInternalLB
      TCP 80: allow from ProdVarnish
      TCP 80: allow from ProdLB

ProdLB (external load balancer)
      TCP 80 and 443: allow from anywhere

ProdRedisSG (Redis)
      TCP 6379: allow from ProdSG

ProdElasticSG (Elasticsearch SG)
      TCP 80: allow from ProdSG

ProdDBSG (Aurora RDS)
      TCP 3306: allow from ProdSG

ProdVarnish (Varnish cache instance)
      TCP 80: allow from ProdSG
      TCP 80: allow from ProdLB

ProdEFS (access to EFS)
      TCP 2049: allow from ProdSG

ProdInternalLB (internal load balancer)
      TCP 80: allow from ProdVarnish
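As an illustration, a single ingress rule (here, opening Redis to ProdSG) can be added with the AWS CLI; both group IDs are placeholders:

# Allow ProdSG instances to reach Redis on port 6379 (group IDs are placeholders)
aws ec2 authorize-security-group-ingress \
    --group-id sg-0redis00000000000 \
    --protocol tcp --port 6379 \
    --source-group sg-0prod000000000000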

Amazon Elasticsearch

It’s a managed service that makes it easy to run Elasticsearch in the AWS Cloud. In most cases, we use it for Magento catalog search to let users quickly find what they need.
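Pointing Magento at the Elasticsearch domain takes a few CLI calls; a sketch, where the endpoint is a placeholder and the engine value depends on your Magento version:

# Configure Magento to use the Elasticsearch domain (endpoint is a placeholder)
bin/magento config:set catalog/search/engine elasticsearch7
bin/magento config:set catalog/search/elasticsearch7_server_hostname vpc-prod-es.eu-central-1.es.amazonaws.com
bin/magento config:set catalog/search/elasticsearch7_server_port 80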

Aurora RDS

We create the master database and an optional replica with the SG created earlier. The database endpoint, user, and password are configured in Magento’s env.php.
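A minimal sketch of wiring env.php to the cluster endpoint with the Magento CLI, where the endpoint and credentials are placeholders:

# Write database credentials into app/etc/env.php (values are placeholders)
bin/magento setup:config:set \
    --db-host=prod-cluster.cluster-xxxx.eu-central-1.rds.amazonaws.com \
    --db-name=magento --db-user=magento --db-password='secret'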

ElastiCache (Redis)

Create one or two Redis instances for the Magento cache and sessions; how many you need depends entirely on load, as Redis instances have network throughput limits.
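A sketch of pointing the cache and sessions at separate endpoints (both hostnames are placeholders):

# Cache on one Redis endpoint, sessions on another (hostnames are placeholders)
bin/magento setup:config:set --cache-backend=redis \
    --cache-backend-redis-server=prod-cache.xxxx.cache.amazonaws.com
bin/magento setup:config:set --session-save=redis \
    --session-save-redis-host=prod-sessions.xxxx.cache.amazonaws.com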

Launch configuration

Next, we need to create launch configurations for the Frontend ASG and Backend ASG. We choose the IAM role created earlier, the ProdSG security group, an EBS volume (we usually choose 100 GB), and add the EFS mount to user data:

#!/bin/bash

mount -t nfs -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2,noresvport fs-f2ab1caa.efs.eu-central-1.amazonaws.com:/ /var/www/shared/

We frequently choose the c5.xlarge or c5.2xlarge instance types.
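For reference, the same launch configuration can be created from the CLI; the AMI ID, role, group ID, and file path are placeholders:

# Create the launch configuration from the golden image (IDs and names are placeholders)
aws autoscaling create-launch-configuration \
    --launch-configuration-name prod-frontend-lc \
    --image-id ami-0123456789abcdef0 \
    --instance-type c5.xlarge \
    --iam-instance-profile MagentoInstanceRole \
    --security-groups sg-0prod000000000000 \
    --user-data file://userdata.sh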

Target groups

Next, we need to create three target groups (TG): Public, Admin, and Varnish.

The Admin and Public TGs are created for port 80 without a target instance (the autoscaling groups will register instances automatically), while the Varnish TG is created for port 80 with the Varnish instance as its target.
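As an example, the Public TG could be created like this (the VPC ID is a placeholder):

# Create the Public target group on port 80 (VPC ID is a placeholder)
aws elbv2 create-target-group \
    --name prod-public-tg \
    --protocol HTTP --port 80 \
    --vpc-id vpc-0123456789abcdef0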

Autoscaling group

After that, we need to create two separate ASGs for Public and Admin.

In each ASG, choose the already created ProdSG, launch configuration, and TG. It’s also possible to configure the number of instances here: several for Public, while one instance for Admin will be enough.
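A sketch of the Public ASG, assuming the launch configuration and TG names above; the ARN and subnet IDs are placeholders:

# Create the Public ASG and attach it to its target group (ARN and subnets are placeholders)
aws autoscaling create-auto-scaling-group \
    --auto-scaling-group-name prod-public-asg \
    --launch-configuration-name prod-frontend-lc \
    --min-size 2 --max-size 6 --desired-capacity 2 \
    --target-group-arns arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/prod-public-tg/abcdef0123456789 \
    --vpc-zone-identifier "subnet-0aaa0000,subnet-0bbb0000"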

Load Balancers

Here we need to create two Application Load Balancers (ALB): one internet-facing and one internal. We point the internal ALB at the Public TG, and the internet-facing one at the Varnish TG and the Admin TG; for the Admin TG we need to configure a listener rule based on a path or a subdomain.
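For example, a path-based listener rule that sends admin traffic to the Admin TG could look like this (both ARNs are placeholders):

# Forward /admin* requests on the internet-facing ALB to the Admin TG (ARNs are placeholders)
aws elbv2 create-rule \
    --listener-arn arn:aws:elasticloadbalancing:eu-central-1:123456789012:listener/app/prod-lb/0123/4567 \
    --priority 10 \
    --conditions Field=path-pattern,Values='/admin*' \
    --actions Type=forward,TargetGroupArn=arn:aws:elasticloadbalancing:eu-central-1:123456789012:targetgroup/prod-admin-tg/89ab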

Varnish Instance

It’s just an Amazon Linux 2 instance with Varnish and Nginx inside, while the Varnish configuration should be generated from Magento.

Nginx is needed to proxy requests to the internal ALB, as Varnish can’t use a domain name as a backend: it resolves the hostname only once when the VCL is loaded, which breaks when the ALB’s IP addresses change.
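Magento can export the VCL itself; a sketch, assuming Varnish 6 (the export version and paths depend on your Magento and Varnish versions):

# Export the Varnish 6 VCL from Magento and install it (paths may differ)
bin/magento varnish:vcl:generate --export-version=6 > /etc/varnish/default.vcl
systemctl restart varnish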

After we create that environment, it’s possible to deploy Magento manually to all Public and Admin instances. It will work as expected, but not with autoscaling: instances launched by the ASG won’t receive the code automatically. To resolve this and eliminate the manual work, we need to configure the AWS CodeDeploy service. It works with a Magento artifact (the Magento code after composer install, setup:di:compile, and setup:static-content:deploy) stored in an S3 bucket.

To generate static content, we need information about the themes and scopes in config.php. The artifact can be generated in Jenkins, Bitbucket Pipelines, AWS CodePipeline, or another similar CI tool.
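A minimal artifact build script might look like this; the application name and bucket are placeholders:

# Build the Magento artifact and push it to S3 for CodeDeploy (names are placeholders)
composer install --no-dev
bin/magento setup:di:compile
bin/magento setup:static-content:deploy -f
aws deploy push \
    --application-name prod \
    --s3-location s3://prod-artifacts/magento.zip \
    --source .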

AWS CodeDeploy

Finally, we need to create an application (named prod, for example) with two deployment groups: one for the Admin ASG and one for the Public ASG.

To start, create an IAM role with access to the S3 bucket and the CodeDeploy service role policy. Choose this IAM role in the deployment group configuration, select Amazon EC2 Auto Scaling groups in the environment configuration, and pick the prod ASG. Also, uncheck ‘Enable load balancing’ and save the changes.
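Once the deployment groups exist, a deployment can be triggered from the CLI; the bucket and key are placeholders:

# Deploy the artifact from S3 to the Public deployment group (bucket/key are placeholders)
aws deploy create-deployment \
    --application-name prod \
    --deployment-group-name prod-public \
    --s3-location bucket=prod-artifacts,key=magento.zip,bundleType=zip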

Conclusion

We hope that you’ll find this post interesting and useful. If you want to dive deeper into how it works in the real world, check out our latest case study on creating an AWS infrastructure for a clothing store to help them prepare well for Black Friday promotions. We’ve fully described the operational flow and the list of services and solutions we used to let the client handle 10,000 simultaneous users during sales.

We sincerely thank our DevOps Engineer Alexey for preparing this post.
