AWS Storage Services
Amazon S3 is an object storage service that offers industry-leading scalability, data availability, security, and performance. This means customers of all sizes and industries can use it to store and protect any amount of data for a wide range of use cases, such as websites, mobile applications, backup and restore, archive, and big data analytics. Because S3 is object-based storage, it cannot be used for file sharing between instances.
Amazon Simple Storage Service is storage for the Internet. To upload data to S3, you first create an S3 bucket in one of the AWS Regions. Amazon S3 default encryption provides a way to set the default encryption behavior for an S3 bucket. Encryption for an S3 bucket is an additional feature that the user needs to enable.
Amazon S3 is unique in that bucket names share a global namespace, yet the buckets themselves are regional: you specify an AWS Region when you create your Amazon S3 bucket. You pay depending on the storage class you choose for your data.
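As a sketch of the workflow above, the AWS CLI commands below create a bucket in a chosen Region and then enable default encryption on it; the bucket name and Region are placeholder values, not from the original text:

```shell
# Create a bucket in a specific Region (bucket name and Region are example values).
# Note: the --create-bucket-configuration flag is omitted for us-east-1.
aws s3api create-bucket \
    --bucket my-example-bucket-12345 \
    --region eu-west-1 \
    --create-bucket-configuration LocationConstraint=eu-west-1

# Enable default encryption so new objects are encrypted with SSE-S3 (AES-256).
aws s3api put-bucket-encryption \
    --bucket my-example-bucket-12345 \
    --server-side-encryption-configuration \
    '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'
```

Because bucket names are globally unique, the `create-bucket` call fails if any AWS account already owns that name, even in another Region.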
Amazon EFS provides a simple, scalable, fully managed elastic NFS file system for use with AWS services and on-premises resources. It is a file storage service for use with Amazon EC2. It provides a file system interface, file system access semantics, and concurrently accessible storage for up to thousands of Amazon EC2 instances. It is built to scale on-demand to petabytes of storage without disrupting applications. It can automatically grow and shrink as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
Amazon EFS is a regional service storing data within and across multiple Availability Zones (AZs) for high availability and durability. Amazon EC2 instances can access your file system across AZs, Regions, and VPCs, while on-premises servers can access it using AWS Direct Connect or AWS VPN. Amazon EFS is designed to provide massively parallel shared access to thousands of Amazon EC2 instances, enabling your applications to achieve high levels of aggregate throughput and IOPS with consistent low latencies.
To access EFS file systems from on-premises servers, you must have an AWS Direct Connect or AWS VPN connection between your on-premises data center and your Amazon VPC. You mount an EFS file system on your on-premises Linux server using the standard Linux mount command.
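As an illustration of the standard mount command mentioned above, a typical NFSv4.1 mount of an EFS file system from a Linux server might look like the following; the file system ID, Region, and mount point are placeholder values:

```shell
# Create a mount point and mount the EFS file system over NFSv4.1.
# fs-12345678 and us-east-1 are example values; substitute your own
# file system's DNS name (reachable via Direct Connect or VPN from on-premises).
sudo mkdir -p /mnt/efs
sudo mount -t nfs4 \
    -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.us-east-1.amazonaws.com:/ /mnt/efs
```

The mount options shown (large read/write sizes, hard mount, retry settings) follow AWS's general recommendations for EFS over NFSv4.1.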
The service is designed to be highly scalable, highly available, and highly durable. Amazon EFS file systems store data and metadata across multiple Availability Zones in an AWS Region, and a file system can be mounted on instances across multiple Availability Zones.
Amazon EFS is well suited to support a broad spectrum of use cases from home directories to business-critical applications. Customers can use EFS to lift-and-shift existing enterprise applications to the AWS Cloud. Other use cases include big data analytics, web serving and content management, application development and testing, media and entertainment workflows, database backups, and container storage.
Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA). EFS IA provides price/performance that's cost-optimized for files not accessed every day. By simply enabling EFS Lifecycle Management on your file system, files not accessed according to the lifecycle policy you choose will be automatically and transparently moved into EFS IA.
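A minimal sketch of enabling EFS Lifecycle Management from the AWS CLI follows; the file system ID is a placeholder, and the 30-day policy is just one of the available transition windows:

```shell
# Automatically move files not accessed for 30 days into the EFS IA storage class.
# fs-12345678 is an example file system ID.
aws efs put-lifecycle-configuration \
    --file-system-id fs-12345678 \
    --lifecycle-policies '[{"TransitionToIA":"AFTER_30_DAYS"}]'
```

Once the policy is in place, the transition is transparent: applications keep using the same file paths regardless of which storage class a file currently resides in.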
Amazon EFS Overview:
• The EFS One Zone storage class stores data in a single AWS Availability Zone. Data stored in this storage class may be lost if that Availability Zone is destroyed.
• EFS is a file system service and not an object storage service.
• Amazon EFS cannot be used as a boot volume for Amazon EC2 instances. For boot volumes, Amazon Elastic Block Storage (Amazon EBS) volumes are used.
• The Infrequent Access storage class is cost-optimized for files accessed less frequently. Data stored on EFS IA costs less than Standard, but you pay a fee each time you read from or write to a file.
• EC2 instances can access files on an EFS file system across many Availability Zones, Regions, and VPCs.
How EFS works:
EFS High Availability
Amazon Elastic Block Store (EBS) is an easy-to-use, high-performance block storage service designed for use with Amazon EC2 for both throughput and transaction-intensive workloads at any scale. EBS can be used with a broad range of workloads such as enterprise applications, containerized applications, big data analytics, and many others. EBS volumes are designed for mission-critical systems; they can be replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. You can attach an available EBS volume to one instance that is in the same Availability Zone as the volume.
EBS volumes cannot be accessed simultaneously by multiple EC2 instances: an EBS volume can only be mounted to one EC2 instance at a time.
Amazon EBS volumes are not encrypted by default. You can configure your AWS account to enforce the encryption of new EBS volumes and snapshot copies that you create. Encryption (at rest and in transit) is an optional feature for EBS and has to be enabled by the user.
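The account-level enforcement described above can be turned on per Region with the AWS CLI; the Region below is a placeholder:

```shell
# Opt the account (in this Region) into encrypting all new EBS volumes
# and snapshot copies by default.
aws ec2 enable-ebs-encryption-by-default --region us-east-1

# Verify the current setting; returns {"EbsEncryptionByDefault": true} once enabled.
aws ec2 get-ebs-encryption-by-default --region us-east-1
```

Note that enabling this setting affects only volumes and snapshot copies created afterward; existing unencrypted volumes are unchanged.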
Amazon Elastic Block Store (Amazon EBS) provides persistent block storage volumes for use with Amazon EC2 instances. Each Amazon EBS volume is automatically replicated within its Availability Zone to protect you from component failure, offering high availability and durability. With Amazon EBS, you can scale your usage up or down within minutes, paying a low price for only what you provision. The fundamental charges for EBS volumes are the volume type (based on performance), the storage volume in GB per month provisioned, the number of IOPS provisioned per month, the storage consumed by snapshots, and outbound data transfer.
An EBS snapshot is a point-in-time copy of your Amazon EBS volume. EBS snapshots are one of the components of an AMI, but EBS snapshots alone cannot be used to deploy the same EC2 instances across different Availability Zones (AZs).
• EBS volumes can only be mounted with Amazon EC2.
• EBS volume can be attached to a single instance in the same Availability Zone whereas EFS file system can be mounted on instances across multiple Availability Zones.
• Amazon EBS Snapshots are a point in time copy of your block data. For the first snapshot of a volume, Amazon EBS saves a full copy of your data to Amazon S3. EBS Snapshots are stored incrementally, which means you are billed only for the changed blocks stored.
• When using EBS direct APIs for Snapshots, additional EC2 data transfer charges will apply only when you use external or cross-region data transfers.
• Snapshot storage is based on the amount of space your data consumes in Amazon S3. Because Amazon EBS does not save empty blocks, it is likely that the snapshot size will be considerably less than your volume size. Copying EBS snapshots is charged for the data transferred across regions. After the snapshot is copied, standard EBS snapshot charges apply for storage in the destination region.
• Data transfer-in is always free, including for EBS volumes.
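As a sketch of the snapshot workflow covered in the points above, the AWS CLI commands below take a point-in-time snapshot and copy it to another Region; the volume ID, snapshot ID, and Regions are placeholder values:

```shell
# Take a point-in-time, incremental snapshot of a volume (ID is an example value).
aws ec2 create-snapshot \
    --volume-id vol-0abc1234567890def \
    --description "Nightly backup"

# Copy the snapshot to another Region; cross-region data transfer charges apply,
# then standard snapshot storage charges accrue in the destination Region.
aws ec2 copy-snapshot \
    --source-region us-east-1 \
    --source-snapshot-id snap-0abc1234567890def \
    --region eu-west-1 \
    --description "DR copy"
```

A copied snapshot can then be used to create volumes, or registered into an AMI, in the destination Region.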
An instance store provides temporary block-level storage for your EC2 instance. Instance storage is located on disks that are physically attached to the host computer. An instance store is a good option when you need storage with very low latency, but you don't need the data to persist when the instance terminates. An Instance Store is ideal for the temporary storage of information that frequently changes, such as buffers, caches, scratch data, and other temporary content, or for data that is replicated across a fleet of instances, such as a load-balanced pool of web servers. As Instance Store volumes are tied to an EC2 instance, they are also single AZ entities.
EC2 Instance Store Overview:
SK Singh is the founder, and a software, cloud, and data engineer. He has been involved in the software industry for around 25 years. He holds a bachelor's degree in computer science and engineering from India and a master's degree in software engineering from the Pennsylvania State University. SK has worked in a wide range of software engineering roles on projects for government, private, start-up, and large public companies. He holds many professional certifications, including AWS, Hadoop, Kafka, Oracle, Unix, Java, and Java-related frameworks.