Welcome.
In this video, we'll explore the various
Route 53 routing policies and how they
help optimize traffic management for your
applications.
Routing policies in Route 53 define how
the service responds to DNS queries.
It's important to note that the term
routing in this context is different from
traffic routing in a load balancer.
Here, routing refers specifically to how
DNS queries are answered, not how
traffic is routed.
Route 53 supports several types of routing
policies to address different use cases.
Simple routing is used for single resources
performing a specific function, for
example, a web server hosting a website.
Weighted routing distributes traffic to
multiple resources based on assigned
weights, allowing
fine-grained control over traffic
distribution.
Failover routing can be used for disaster
recovery scenarios. It configures
active-passive failover, ensuring traffic
is routed to a secondary resource if the
primary one fails.
Latency-based routing routes traffic to
the region or resource that provides the
lowest latency to the user, ensuring
better performance.
Geolocation routing serves users based on
their geographic location, routing traffic
based on the physical location of users.
Geoproximity routing routes traffic based
on the location of resources and users,
with the option to shift traffic between
regions.
Multi-value answer routing can be used for
improved availability by returning
multiple IP addresses. It returns up to 8
healthy records selected at random,
improving availability by balancing
responses.
Understanding these policies allows you to
configure Route 53 for efficient and
reliable DNS
query responses.
Let's take a look at the Route 53 simple
routing policy.
This is the most basic routing policy in
Route 53.
It's designed to route traffic to a single
resource, such as a web server hosting
your
website.
With simple routing, you typically use an
A record to resolve the resource without
applying any
advanced rules or configurations.
It's important to note that this policy
does not include advanced features like
weighted or latency-based
routing.
Instead, it provides a straightforward way
to configure standard DNS records for
simple
use cases.
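In practice, a record like this is created through Route 53's ChangeResourceRecordSets API. The sketch below only builds the ChangeBatch payload to show the record's shape; the hosted zone ID and the API call itself are omitted, and the helper name and IP address are illustrative.

```python
def simple_a_record(name, ips, ttl=300):
    """Build the ChangeBatch payload for a simple-routing A record,
    as passed to Route 53's ChangeResourceRecordSets API.
    The helper name and IP values are illustrative."""
    return {
        "Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "A",
                "TTL": ttl,
                "ResourceRecords": [{"Value": ip} for ip in ips],
            },
        }]
    }

batch = simple_a_record("example.com", ["93.184.216.34"])
```

Note that no routing-policy field appears in the record set at all; the absence of weights, latency regions, or locations is what makes it a simple-routing record.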
This diagram illustrates how the simple
routing policy works in Amazon Route 53.
In this example, we have a client trying
to access a domain example.com.
Using Route 53's simple routing policy,
the DNS query resolves
to a single IP address.
This type of routing is straightforward
and ideal for simple use cases, where
traffic
is directed to a single resource, such as
a web server or an application.
There are no additional routing rules or
conditions applied.
The simple routing policy is a reliable
way to configure standard DNS records when
advanced routing features like weighted or
latency-based routing are not required.
Let's discuss some important details about
the simple routing policy in Amazon Route
53.
First, you can specify multiple values
within the same DNS record.
If multiple values are returned, the
client selects one randomly, which helps
distribute traffic
across those values.
When using an alias record with the simple
routing policy, you only need to specify a
single AWS resource, simplifying the
configuration process.
However, it's important to note that
health checks are not performed with this
routing policy.
This means Route 53 does not monitor the
availability of the resources associated
with the record.
The simple routing policy is ideal for
basic use cases without the need for
advanced health
monitoring or traffic distribution logic.
This diagram illustrates how Amazon Route
53's simple routing policy works when
multiple
values are specified.
In this example, the client sends a DNS
query to resolve the domain example.com.
Route 53 responds with multiple IP
addresses, such as 93.184.216.34,
93.184.216.35, and 93.184.216.36.
The client then randomly selects one of
these IP addresses to connect to.
This approach is simple and helps
distribute traffic across multiple
endpoints without applying any
advanced routing logic.
However, it's important to note that no
health checks are performed in this
configuration, meaning
Route 53 does not verify whether the IP
addresses are functional before
responding.
This makes the simple routing policy ideal
for basic use cases where traffic
distribution is randomized
and health monitoring is not required.
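The client-side behavior described above can be sketched in a few lines. This is a simplified model, not Route 53 code; the IP addresses are illustrative.

```python
import random

def resolve_simple(record_values):
    """Simulate a simple-routing response with multiple values:
    Route 53 returns every value, and the client picks one at
    random. No health checks filter the list first."""
    return random.choice(record_values)

ips = ["93.184.216.34", "93.184.216.35", "93.184.216.36"]
chosen = resolve_simple(ips)
```

Because the selection happens on the client, traffic spreads roughly evenly over time, but an unreachable address is just as likely to be chosen as a working one.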
Let's explore the weighted routing policy
in Amazon Route 53.
This policy allows you to associate
multiple resources with a single domain,
such as example.com
or a subdomain like Acme.example.com.
What makes this policy powerful is that it
gives you control over how much traffic is
routed to each resource.
By assigning specific weights, you can
distribute traffic in the proportions that
best suit your needs
making it ideal for scenarios like load
balancing or gradual feature rollouts.
This is achieved by assigning a relative
weight to each resource.
For example, you can control what
percentage of DNS queries go to specific
resources
by adjusting their assigned weights.
It's important to note that the total
weight doesn't need to equal 100, making
it flexible
for various traffic distribution
scenarios.
This routing policy is ideal for use cases
like traffic splitting during application
testing or
gradual feature rollouts.
This diagram demonstrates how the weighted
routing policy works in Amazon Route 53.
In this example, we have 3 EC2 instances,
each assigned a specific weight.
The weights determine how traffic is
distributed among these instances.
The first EC2 instance has a weight of
70%, meaning it receives the majority
of the traffic.
The second instance is assigned a weight of
20%, receiving a smaller portion of the
traffic. Finally, the third instance is
given a weight of 10%, receiving the least
traffic. When a client makes a request to
Route 53 for the domain, Route 53 uses
these weights to distribute traffic in the
specified proportions.
This policy is ideal for scenarios like
load balancing, testing new deployments,
or
gradually rolling out updates, as it
allows precise control over traffic
distribution.
Let's explore some key points about the
weighted routing policy in Amazon Route
53.
To stop sending traffic to a specific
resource, you can assign it a weight of 0.
Interestingly, if all records are assigned
a weight of 0, traffic is distributed
equally among all associated resources.
For DNS configuration, all records in the
weighted set must have the same name and
type.
Additionally, each record can be
associated with a health check to ensure
only healthy resources
receive traffic.
Typical use cases for weighted routing
include load balancing traffic across
AWS regions and A/B testing to evaluate
new application versions by gradually
routing traffic to them.
This policy provides great flexibility for
distributing and managing traffic in
dynamic
environments.
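The selection behavior described above, including the weight-of-0 cases, can be sketched as follows. This is an illustrative model of how weighted answers are chosen, not Route 53's implementation; the instance names are hypothetical.

```python
import random

def weighted_pick(records):
    """Choose a record in proportion to its weight, as weighted
    routing does. A weight of 0 takes a record out of rotation,
    unless every weight is 0, in which case traffic is split
    equally. Instance names are illustrative."""
    names = [name for name, _ in records]
    weights = [w for _, w in records]
    if sum(weights) == 0:
        # All-zero weights: Route 53 distributes traffic equally.
        return random.choice(names)
    return random.choices(names, weights=weights)[0]

records = [("instance-a", 70), ("instance-b", 20), ("instance-c", 10)]
```

Because 70 + 20 + 10 happens to equal 100 here, the weights read as percentages, but any positive totals work: weights of 7, 2, and 1 would produce the same split.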
Let's take a closer look at the latency
routing policy in Amazon Route 53.
The primary purpose of this routing policy
is to minimize round trip time and ensure
the
best latency for users by directing their
requests to the most efficient AWS region.
This policy is especially useful when your
application is hosted in multiple AWS
regions.
By automatically routing users to the
closest resource with the lowest latency,
it
significantly enhances overall performance
and user experience.
The latency routing policy is an ideal
solution for improving responsiveness in
global
applications where performance is a
critical factor.
Let's take a closer look at how the
latency routing policy works in Amazon
Route 53.
First, you need to create latency records
for your resources across multiple AWS
regions.
These records define how Route 53 handles
queries for your domain.
When a DNS query is received for your
domain or subdomain such as example.com
or Acme.example.com, Route 53 identifies
the AWS
regions where latency records exist.
It then determines which region offers the
lowest latency for the user.
Once the optimal region is identified,
Route 53 selects the corresponding latency
record for that region and responds with
the resource's value, such as the IP
address of a web server.
Additionally, latency records can be
associated with health checks, ensuring
only
healthy resources handle traffic.
The policy also supports failover
capability, adding an extra layer of
reliability
for your application.
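The steps above reduce to picking the healthy region with the lowest measured latency. Here is a minimal sketch of that selection; the latency figures are illustrative, not real measurements.

```python
def lowest_latency_region(latencies_ms, healthy=None):
    """Pick the region with the lowest measured latency for a
    user, skipping regions whose health check fails. Latency
    values are illustrative."""
    if healthy is None:
        healthy = set(latencies_ms)  # no health checks: all eligible
    candidates = {r: ms for r, ms in latencies_ms.items() if r in healthy}
    return min(candidates, key=candidates.get)

latencies = {"us-east-1": 40, "ap-southeast-1": 210}
```

The optional health filter mirrors the failover capability mentioned above: if the lowest-latency region's resource is unhealthy, the next-best region answers instead.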
This diagram demonstrates how Amazon Route
53's latency-based routing policy
operates to optimize user performance.
In this example, we have 2 Application
Load Balancers (ALBs), one hosted in the
US East 1 region and another in the AP
Southeast 1 region. When a user sends a
DNS query, Route 53 evaluates the latency
between the user and the AWS regions
hosting these ALBs.
For example, users in North America are
directed to the ALB in US East 1
as it provides the lowest latency for
their location.
Meanwhile, users in Asia are routed to the
ALB in AP Southeast 1, ensuring
the fastest response times for their
region.
This policy ensures that users connect to
the AWS region with the best performance
by minimizing
round trip latency.
This improves the overall user experience
and makes latency-based routing an ideal
choice for
applications hosted across multiple AWS
regions.
Let's explore the failover routing policy
in Amazon Route 53.
This policy allows you to route traffic
based on the health of your resources.
If the primary resource is healthy,
traffic is routed to it as expected.
However, if the primary becomes unhealthy,
the policy automatically redirects traffic
to a backup
resource.
Failover routing is ideal for active-
passive failover setups, where you need to
ensure high availability and reliability
by seamlessly switching traffic to a
backup resource during failures.
This diagram illustrates the failover
routing policy, specifically an active
passive
setup in Amazon Route 53.
When a client sends a DNS query, Route 53
checks the health of the primary resource
such as the primary EC2 instance.
This health check is mandatory to monitor
the availability of the resource.
If the primary EC2 instance is healthy,
Route 53 routes the traffic to it as
expected.
However, if the health check detects that
the primary resource is unavailable or
unhealthy, the policy automatically
redirects traffic to the secondary
resource, which acts as the disaster
recovery or backup instance.
This ensures high availability by
seamlessly shifting traffic to the backup
resource when the primary
instance fails, making the failover
routing policy ideal for disaster recovery
and critical applications.
By leveraging health checks and active
passive failover, this policy provides a
reliable
and automated solution for managing
application downtime.
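The active-passive decision above is simple enough to state in one line of code. This sketch models the DNS answer only; the health-check result is passed in rather than measured, and the IP addresses are illustrative.

```python
def failover_answer(primary_ip, secondary_ip, primary_healthy):
    """Active-passive failover: answer with the primary resource
    while its mandatory health check passes; otherwise fall back
    to the secondary (disaster recovery) resource."""
    return primary_ip if primary_healthy else secondary_ip

# Healthy primary: traffic stays on the primary instance.
answer = failover_answer("10.0.0.1", "10.0.0.2", primary_healthy=True)
```

In a real setup the `primary_healthy` input is the outcome of the mandatory Route 53 health check, which is why failover records cannot be configured without one.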
Let's start by understanding the
geolocation routing policy in Amazon Route
53.
One key difference between the geolocation
routing policy and the latency-based
routing policy is that geolocation routing
is based on the user's physical location,
not performance.
This policy enables Route 53 to respond to
DNS queries based on where the user is
located geographically.
When configuring the policy, you can
specify user locations by continent,
country, or
even a US state.
In cases where there is overlap, the most
precise location takes precedence.
To ensure all users are handled, it's also
important to create a default record,
which acts
as a fallback for unmatched locations.
Now, let's look at the features and use
cases of the geolocation routing policy in
Amazon
Route 53.
This policy can be associated with health
checks, ensuring that traffic is only
routed to
healthy resources.
There are 2 primary use cases for the
geolocation routing policy.
Content localization.
You can use this policy to deliver
localized content or present your website
in the user's
language, enhancing the user experience.
Content restriction.
It's also useful for restricting content
distribution to specific geographic
locations, ensuring compliance with
distribution rights.
These capabilities make the geolocation
routing policy ideal for tailoring content
delivery to meet
both user preferences and regulatory
requirements.
This diagram illustrates how the
geolocation routing policy in Amazon Route
53 operates.
In this example, the DNS routing is based
on the geographic location of the user
making
the request.
Let's break it down.
A user from California is routed to the IP
address 98.12.23.65, which is specifically
configured for that region. Similarly, a
user from Ohio is directed to another IP
address, 23.34.54.72.
For users who are not located in any
specified region, Route 53 uses the
default
record.
For example, if a request comes from a
location without a matching geolocation
rule, traffic
is routed to 54.24.36.78.
The geolocation routing policy ensures
that content is tailored to the user's
location, improving performance and
enabling features such as localized
content delivery or regulatory compliance.
Additionally, this policy can be
associated with health checks, ensuring
that only healthy
resources receive traffic.
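The precedence rule described earlier (the most precise location wins, with a default record as fallback) can be sketched as a lookup. The record keys and IP addresses mirror the diagram and are illustrative; this is a simplified model, not Route 53's matching code.

```python
def geolocation_answer(records, continent, country, state=None):
    """Resolve a geolocation record set: the most specific match
    wins (state, then country, then continent), and the default
    record is the fallback for unmatched locations."""
    for level, value in (("state", state), ("country", country),
                         ("continent", continent)):
        if value is not None and (level, value) in records:
            return records[(level, value)]
    return records["default"]

records = {
    ("state", "CA"): "98.12.23.65",
    ("state", "OH"): "23.34.54.72",
    "default": "54.24.36.78",
}
```

A request from California matches the state-level record even if country- or continent-level records also exist, while a request from Europe falls through to the default.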
Let's start with an overview of the
geoproximity routing policy in Amazon
Route 53. This policy routes traffic based
on the geographic location of both users
and resources.
It gives you the flexibility to prioritize
specific resources by shifting traffic
towards them.
A key feature of the geoproximity routing
policy is the ability to adjust traffic
flow by expanding or shrinking the
geographic region assigned to a resource.
This allows you to manage traffic
distribution dynamically, ensuring optimal
resource utilization.
Now, let's talk about configuring the
geoproximity routing policy in Amazon
Route 53.
To adjust traffic flow, you can use bias
values.
To expand a region and increase traffic to
a resource, set a bias value between 1
and 99.
To shrink a region and reduce traffic to a
resource, set a bias value between -1
and -99.
This policy supports both AWS resources
and non-AWS resources.
For AWS resources, you specify the AWS
region.
For non-AWS resources, you use the
latitude and longitude of the resource.
It's important to note that to use this
feature, you must configure it through
Route 53 Traffic Flow.
This diagram illustrates the geoproximity
routing policy in Amazon Route 53.
In this example, we have two AWS regions:
US West 1 with a bias of 0, and US East 1
with a higher bias value of 50.
The bias value in US East 1 expands its
geographic region, directing more users to
connect to resources in that region, even
if they are closer to US West 1.
Geoproximity routing allows traffic to be
dynamically shifted based on these bias
values. For instance, a positive bias,
like the 50 in US East 1, increases the
size of the region, drawing more traffic.
A neutral or lower bias, like the 0 in US
West 1, keeps traffic within its
default boundaries.
This feature is especially useful for
scenarios where one region has greater
capacity or needs to
handle a larger share of the traffic.
Geoproximity routing also works with
non-AWS resources by specifying their
latitude and longitude.
To configure this policy, you must use
Route 53 Traffic Flow, which provides an
easy way to manage traffic distribution
across multiple locations.
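One way to picture the bias effect from the diagram is as a bias-adjusted distance comparison. AWS does not publish the exact formula, so the scaling below is purely an assumed illustration of how a positive bias draws traffic toward a region; the distances and region names are hypothetical.

```python
def geoproximity_pick(distances_km, biases):
    """Pick the region with the smallest bias-adjusted distance.
    Scaling by (100 - bias) / 100 is a simplified model of how a
    positive bias attracts traffic; AWS does not publish the real
    formula, so treat this only as an illustration."""
    def adjusted(region):
        return distances_km[region] * (100 - biases[region]) / 100
    return min(distances_km, key=adjusted)

# A user geographically closer to US West 1, with US East 1 biased +50.
distances = {"us-west-1": 500, "us-east-1": 900}   # illustrative km
biases = {"us-west-1": 0, "us-east-1": 50}
```

With the +50 bias, US East 1's effective distance shrinks below US West 1's, so the farther region wins the query, matching the behavior shown in the diagram; with both biases at 0, the nearer region wins.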
Let's explore the multi-value answer
routing policy in Amazon Route 53.
This policy is designed to route traffic
to multiple resources.
A key feature of multi-value answer
routing is its ability to integrate with
health checks. Only healthy resources are
included in the responses to DNS queries,
ensuring better reliability and
performance.
Let's understand how the multi-value
answer routing policy works.
For each DNS query, Route 53 can return up
to 8 healthy records from the
available resources.
This ensures that only healthy resources
are included in the response, improving
reliability
and performance.
However, it's important to note that the
multi-value answer routing policy is not a
substitute for an Elastic Load Balancer.
While it provides basic traffic
distribution, ELBs offer more advanced
load balancing capabilities.
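The behavior above (filter out unhealthy records, then return up to 8 of the survivors at random) can be sketched as follows. The IP addresses and the health map are illustrative stand-ins for Route 53 health checks.

```python
import random

def multivalue_answer(records, health, max_records=8):
    """Answer a DNS query with up to eight healthy records chosen
    at random, as multi-value answer routing does. The health map
    stands in for Route 53 health checks."""
    healthy = [r for r in records if health.get(r, False)]
    return random.sample(healthy, min(max_records, len(healthy)))

ips = [f"10.0.0.{i}" for i in range(1, 11)]        # ten records
health = {ip: ip != "10.0.0.3" for ip in ips}      # one unhealthy
```

Unlike simple routing with multiple values, the unhealthy address never reaches the client, which is where the availability improvement comes from.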
Here is a quick summary of choosing a
routing policy in Route 53.
The simple routing policy is used for
single resources performing a specific
function, e.g., a web server hosting a
website.
The failover routing policy configures
active-passive failover, ensuring traffic
is routed to a secondary resource if the
primary one fails.
The geolocation routing policy routes
traffic based on the physical location of
users.
The geoproximity routing policy routes
traffic based on the location of resources
and users, with the option to shift
traffic between regions.
The latency routing policy routes traffic
to the region or resource that provides
the lowest latency to the user, ensuring
better performance.
The multi-value answer routing policy
returns up to 8 healthy records selected
at random, improving availability by
balancing responses.
The weighted routing policy distributes
traffic to multiple resources based on
assigned weights, allowing fine-grained
control over traffic distribution.
These routing policies allow you to
customize how DNS queries are answered,
ensuring
optimal performance, availability, and
efficiency based on your application's
needs.