EC2 Elastic Load Balancer and Auto Scaling

Elastic Load Balancing automatically distributes incoming traffic across multiple targets, such as EC2 instances, containers, and IP addresses, in one or more Availability Zones (AZs). It monitors the health of its registered targets and routes traffic only to healthy targets. The load balancer also scales out and scales in as incoming traffic increases or decreases over time.

Elastic Load Balancing supports four types of load balancers: Application, Network, Gateway, and Classic. Select the type that best fits your requirements.

Application Load Balancer

Application Load Balancer operates at layer 7, the request level. Based on the content of the request, it routes traffic to targets such as EC2 instances, containers, IP addresses, and Lambda functions.

Key points about Application Load Balancer:

• The Application Load Balancer is ideal for advanced load balancing of HTTP and HTTPS traffic. It is particularly useful for load balancing requests in modern application architectures, including microservices and container-based applications.
• The Application Load Balancer operates at layer 7, the request level. It routes traffic to targets (EC2 instances, containers, IP addresses, and Lambda functions) based on the content of the request, as sketched below.
• The Application Load Balancer simplifies and improves application security by ensuring that the latest SSL/TLS ciphers and protocols are used at all times.
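
To make the request-level routing concrete, here is a minimal boto3 sketch that creates an Application Load Balancer, a target group, and a path-based listener rule. The names, subnet IDs, security group IDs, and VPC ID (my-alb, subnet-..., sg-..., vpc-...) are placeholders, not values from this article.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Create an internet-facing Application Load Balancer (layer 7).
alb = elbv2.create_load_balancer(
    Name="my-alb",                                    # placeholder name
    Subnets=["subnet-aaaa1111", "subnet-bbbb2222"],   # one subnet per AZ
    SecurityGroups=["sg-0123456789abcdef0"],
    Scheme="internet-facing",
    Type="application",
)
alb_arn = alb["LoadBalancers"][0]["LoadBalancerArn"]

# Target group of EC2 instances; the load balancer health-checks /health
# and routes requests only to targets that pass the check.
tg = elbv2.create_target_group(
    Name="web-targets",
    Protocol="HTTP",
    Port=80,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
    HealthCheckPath="/health",
)
tg_arn = tg["TargetGroups"][0]["TargetGroupArn"]

# HTTP listener with a default forward action.
listener = elbv2.create_listener(
    LoadBalancerArn=alb_arn,
    Protocol="HTTP",
    Port=80,
    DefaultActions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)

# Content-based routing: requests whose path matches /api/* are forwarded
# by this rule; in a real setup it would point at a separate target group.
elbv2.create_rule(
    ListenerArn=listener["Listeners"][0]["ListenerArn"],
    Priority=10,
    Conditions=[{"Field": "path-pattern", "Values": ["/api/*"]}],
    Actions=[{"Type": "forward", "TargetGroupArn": tg_arn}],
)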

Network Load Balancer

The Network Load Balancer operates at layer 4 of the OSI model and can handle millions of requests per second.

Key points about Network Load Balancer:

• The Network Load Balancer operates at layer 4, the connection level. Based on IP protocol data, it routes connections to targets (EC2 instances, microservices, and containers) within a VPC.
• The Network Load Balancer is ideal for load balancing both TCP and UDP traffic.
• The Network Load Balancer can handle millions of requests per second while maintaining ultra-low latency.
• The Network Load Balancer is optimized to handle sudden and volatile traffic patterns while using a single static IP address per AZ (see the sketch below).
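
As a rough sketch (again using boto3 and placeholder IDs), a Network Load Balancer is created the same way as an Application Load Balancer but with Type="network", a TCP listener, and, optionally, one Elastic IP allocation per AZ to get the static-IP behavior mentioned above.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")

# Network Load Balancer (layer 4) with one static Elastic IP per AZ.
# Subnet and allocation IDs are placeholders.
nlb = elbv2.create_load_balancer(
    Name="my-nlb",
    Type="network",
    Scheme="internet-facing",
    SubnetMappings=[
        {"SubnetId": "subnet-aaaa1111", "AllocationId": "eipalloc-11111111"},
        {"SubnetId": "subnet-bbbb2222", "AllocationId": "eipalloc-22222222"},
    ],
)
nlb_arn = nlb["LoadBalancers"][0]["LoadBalancerArn"]

# TCP target group; UDP and TCP_UDP are also supported protocols.
tg = elbv2.create_target_group(
    Name="tcp-targets",
    Protocol="TCP",
    Port=443,
    VpcId="vpc-0123456789abcdef0",
    TargetType="instance",
)

# Listener that forwards TCP connections on port 443 to the target group.
elbv2.create_listener(
    LoadBalancerArn=nlb_arn,
    Protocol="TCP",
    Port=443,
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)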

Gateway Load Balancer

Gateway Load Balancers help you deploy, scale, and manage systems such as firewalls, intrusion detection and prevention systems, and deep packet inspection systems. A Gateway Load Balancer operates at the network layer of the OSI model. It listens for all IP packets across all ports and forwards traffic to the target group specified in the listener rule.

Gateway Load Balancers use Gateway Load Balancer endpoints to exchange traffic across VPC boundaries securely. A Gateway Load Balancer endpoint is a VPC endpoint that provides private connectivity between virtual appliances in the service provider VPC and application servers in the service consumer VPC.
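
A hedged sketch of that wiring in boto3: the load balancer is created with Type="gateway" in the service provider (appliance) VPC, and the consumer VPC reaches it through a Gateway Load Balancer endpoint. All names, service names, and IDs below are placeholders.

import boto3

elbv2 = boto3.client("elbv2", region_name="us-east-1")
ec2 = boto3.client("ec2", region_name="us-east-1")

# Gateway Load Balancer in the service provider (appliance) VPC.
gwlb = elbv2.create_load_balancer(
    Name="my-gwlb",
    Type="gateway",
    Subnets=["subnet-provider1111"],   # placeholder subnet in the provider VPC
)

# Gateway Load Balancer target groups use the GENEVE protocol on port 6081.
tg = elbv2.create_target_group(
    Name="appliance-targets",
    Protocol="GENEVE",
    Port=6081,
    VpcId="vpc-provider000000",
    TargetType="instance",
)

# The listener has no protocol or port of its own: it receives all IP
# packets on all ports and forwards them to the appliance target group.
elbv2.create_listener(
    LoadBalancerArn=gwlb["LoadBalancers"][0]["LoadBalancerArn"],
    DefaultActions=[{"Type": "forward",
                     "TargetGroupArn": tg["TargetGroups"][0]["TargetGroupArn"]}],
)

# In the service consumer VPC, a Gateway Load Balancer endpoint provides
# private connectivity to the appliances. The ServiceName would come from
# an endpoint service created over the GWLB (placeholder shown here).
ec2.create_vpc_endpoint(
    VpcEndpointType="GatewayLoadBalancer",
    ServiceName="com.amazonaws.vpce.us-east-1.vpce-svc-0123456789abcdef0",
    VpcId="vpc-consumer000000",
    SubnetIds=["subnet-consumer2222"],
)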

Classic Load Balancer

Classic Load Balancer offers basic load balancing across multiple EC2 instances and operates at both the request and connection levels. It is intended for applications that were built within the EC2-Classic network.

Key points about Classic Load Balancer:

• The Classic Load Balancer provides basic load balancing across multiple EC2 instances and operates at both the request and connection levels.
• The Classic Load Balancer is intended for applications that were built within the EC2-Classic network (see the sketch below).
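
For completeness, here is a minimal sketch of a Classic Load Balancer using the older elb API; the load balancer name, zones, and instance ID are placeholders.

import boto3

elb = boto3.client("elb", region_name="us-east-1")

# Classic Load Balancer: listeners map a frontend port to an instance port.
elb.create_load_balancer(
    LoadBalancerName="my-classic-lb",
    AvailabilityZones=["us-east-1a", "us-east-1b"],
    Listeners=[{
        "Protocol": "HTTP",
        "LoadBalancerPort": 80,
        "InstanceProtocol": "HTTP",
        "InstancePort": 80,
    }],
)

# Registration is per EC2 instance rather than per target group.
elb.register_instances_with_load_balancer(
    LoadBalancerName="my-classic-lb",
    Instances=[{"InstanceId": "i-0123456789abcdef0"}],
)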

AWS Auto Scaling

AWS Auto Scaling monitors your applications and automatically adjusts capacity to maintain steady, predictable performance at the lowest possible cost. Using AWS Auto Scaling, you can quickly set up scaling for multiple services through a simple, powerful user interface. You can use it with Amazon EC2 instances, Spot Fleets, Amazon ECS tasks, Amazon DynamoDB tables and indexes, and Amazon Aurora Replicas. With AWS Auto Scaling, your applications get the right resources at the right time.

There is no additional cost for AWS Auto Scaling. You pay only for Amazon CloudWatch monitoring fees and the AWS resources needed to run your applications.
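
As an illustration (not this article's own setup), here is a boto3 sketch that creates an Auto Scaling group, registers it with a load balancer target group, and adds a target-tracking scaling policy. The launch template name, subnet IDs, and target group ARN are placeholders.

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Auto Scaling group spanning two AZ subnets, registered with an ALB/NLB
# target group so new instances start receiving traffic automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
    TargetGroupARNs=["arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                     "targetgroup/web-targets/0123456789abcdef"],
)

# Target-tracking policy: keep average CPU around 50%; the CloudWatch
# alarms that drive scaling in and out are created and managed for you.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="cpu-target-50",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)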

Horizontal Scaling

A "horizontally scalable" system increases its resource capacity by adding more nodes or machines. Horizontal scaling is generally preferred over vertical scaling because it increases the fault tolerance of the overall design and improves performance by distributing the workload across multiple machines and executing it in parallel.

Because a horizontally scalable system grows its resource pool by adding machines, if one machine goes down, another machine can take over the workload of the failed machine. This is what increases the degree of fault tolerance of the overall system.
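
In AWS terms, scaling out simply means adding instances to the pool. A small sketch, assuming the placeholder Auto Scaling group from the earlier example:

import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Horizontal scaling: add machines to the pool rather than resizing one.
# Auto Scaling launches the extra instances and replaces failed ones.
autoscaling.set_desired_capacity(
    AutoScalingGroupName="web-asg",   # placeholder group name
    DesiredCapacity=4,                # scale out from 2 to 4 instances
    HonorCooldown=False,
)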

Vertical Scaling

A "vertically scalable" system is one that is constrained by the resources of a single machine, such as CPU, RAM, and storage, which limits its overall performance. Scaling such a system vertically means adding more of those resources to the same machine. However, because no additional machine or node is added, vertical scaling does not improve the fault tolerance of the overall design.
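
By contrast, vertical scaling on EC2 typically means stopping an instance and changing it to a larger instance type. A hedged sketch with a placeholder instance ID and target type:

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"   # placeholder

# Vertical scaling: the instance must be stopped before its type can change.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move to a larger instance type (more vCPU and RAM); it is still a single
# node, so the fault tolerance of the design does not improve.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.2xlarge"},
)

ec2.start_instances(InstanceIds=[instance_id])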
