When I speak to customers who are planning to migrate massive amounts of on-premises data to AWS, they tell me that they want to simplify their storage management, reduce their costs, and make their data more accessible so that it can be used for analytics, machine learning training, genomics, and other use cases. Customers are already using Network Attached Storage (NAS) on premises, and are looking for a cloud-based upgrade that offers similar capabilities, including point-in-time snapshots, data clones, and user management.
AWS customers such as Amdocs, Vela Games, and Astera Labs have been running their mission-critical and performance-intensive NAS workloads, such as databases, game development and streaming, and semiconductor chip design, on Amazon FSx for OpenZFS. They’ve been using the existing SSD storage class on FSx to provide the predictable, high performance these workloads need. However, many other customers whose large data sets are stored on HDD-based or hybrid SSD/HDD-based NAS storage on premises find it cost-prohibitive to move those data sets to all-SSD storage. Additionally, these customers are finding it increasingly challenging and expensive to manage provisioned on-premises storage for unpredictable data sets and to avoid running out of space. They are also keeping their NAS data around for longer because it could have future value for building their next model, investment strategy, or product, but that means spending more time and effort monitoring access patterns and moving data between hot and cold storage media to optimize costs.
FSx Intelligent-Tiering
Taking all of this into account, I am happy to be able to tell you about the new Amazon FSx Intelligent-Tiering storage class, available today for use with Amazon FSx for OpenZFS file systems. The new storage class is priced 85% lower than the existing SSD storage class and 20% lower than traditional HDD-based deployments on premises, and brings full elasticity and intelligent tiering to NAS data sets.
Your data moves between three storage tiers (Frequent Access, Infrequent Access, and Archive) with no effort on your part, so you get automatic cost savings with no upfront costs or commitments. Here’s how the tiers work:
Frequent Access – Data that has been accessed within the last 30 days is stored in this tier.
Infrequent Access – Data that has not been accessed for 30 to 90 days is stored in this tier, at a 44% cost reduction from Frequent Access (see the quick cost arithmetic after this list).
Archive – Data that has not been accessed for 90 or more days is stored in this tier, at a 65% cost reduction from Infrequent Access.
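To see how those two discounts compound, here’s a quick back-of-the-envelope calculation that uses only the relative percentages above; the actual per-GB rates are on the Amazon FSx for OpenZFS Pricing page.

# Relative per GB-month cost of each tier, with Frequent Access taken as 1.00
# (illustrative only; see the pricing page for actual rates)
echo "scale=3; 1 - 0.44" | bc                  # Infrequent Access: .56
echo "scale=3; (1 - 0.44) * (1 - 0.65)" | bc   # Archive: .196, about a fifth of the Frequent Access rate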
Regardless of the storage tier, your data is stored across multiple AWS Availability Zones (AZs) for redundancy and availability, and can be retrieved instantly in milliseconds.
There’s no need to manage or pre-provision storage, making this storage class a great fit for use cases such as genomics, financial data analytics, seismic imagery analysis, and machine learning, where storage requirements can change dramatically over the course of days or weeks.
Along with the potential for cost savings, you get high performance: up to 400K IOPS and 20 GB/second of throughput for each OpenZFS file system, with a time-to-first-byte of tens of milliseconds for all data, regardless of storage class. You can also configure an SSD-based read cache (64 GiB to 512 TiB) to reduce the time-to-first-byte by 10x to 100x for cached data.
Creating a File System
I can create a file system using the AWS Management Console, CLI, API, or AWS CloudFormation. From the Console, I click Create file system to get started:
I choose Amazon FSx for OpenZFS and click Next:
Then I enter a name (jeff_fsx_openzfs_1) for my file system and select the Intelligent-Tiering storage class. I choose the desired Throughput capacity, and I select one of the three sizing mode options for the read cache, click Next, and confirm my choices in order to create my file system:
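If you’d rather script those steps, they map to a single create-file-system CLI call. Here’s a minimal sketch: the subnet, security group, throughput, and deployment type values are placeholders, and I’m assuming the INTELLIGENT_TIERING storage type and ReadCacheConfiguration option names from the current FSx API, so check aws fsx create-file-system help for the exact spellings and supported values in your Region.

# Sketch of the console steps above; IDs, sizes, and the deployment type are placeholders.
# There is no --storage-capacity to provision: Intelligent-Tiering storage is fully elastic.
aws fsx create-file-system \
    --file-system-type OPENZFS \
    --storage-type INTELLIGENT_TIERING \
    --subnet-ids subnet-11111111 subnet-22222222 \
    --security-group-ids sg-0123456789abcdef0 \
    --tags Key=Name,Value=jeff_fsx_openzfs_1 \
    --open-zfs-configuration 'DeploymentType=MULTI_AZ_1,PreferredSubnetId=subnet-11111111,ThroughputCapacity=1280,ReadCacheConfiguration={SizingMode=PROPORTIONAL_TO_THROUGHPUT_CAPACITY}'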
It is ready within minutes, and I can NFS mount it to my EC2 instance:
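The mount itself is plain NFS against the file system’s DNS name and root volume path. A sketch (the DNS name below is a placeholder; the console shows the exact attach command for your file system):

# Mount the file system's root volume on an EC2 instance
sudo mkdir -p /mnt/fsx
sudo mount -t nfs -o nfsvers=4.1 fs-0123456789abcdef0.fsx.us-east-1.amazonaws.com:/fsx/ /mnt/fsx
df -h /mnt/fsx    # confirm the mount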
After I run a representative workload for a while I can look at the metrics and review the performance of my file system:
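The same numbers are also available programmatically from CloudWatch, which is handy for dashboards and alarms. For example, here’s a sketch that pulls read throughput for the past hour from the AWS/FSx namespace (the file system ID is a placeholder):

# Total bytes read in 5-minute buckets over the past hour
aws cloudwatch get-metric-statistics \
    --namespace AWS/FSx \
    --metric-name DataReadBytes \
    --dimensions Name=FileSystemId,Value=fs-0123456789abcdef0 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Sum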
It appears that I have plenty of throughput, but my read cache may be larger than needed. I created it in Automatically Provisioned mode, which allocated 3200 GiB of cache. I can change that (and save some money) with a couple of clicks:
I can also change the throughput capacity as needed:
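Both of those adjustments can also be scripted as update-file-system calls. Another hedged sketch, assuming the same ReadCacheConfiguration option names as above (the file system ID and sizes are placeholders):

# Switch the read cache to a user-specified size
aws fsx update-file-system \
    --file-system-id fs-0123456789abcdef0 \
    --open-zfs-configuration 'ReadCacheConfiguration={SizingMode=USER_PROVISIONED,SizeGiB=1024}'

# Change the throughput capacity (MB/s)
aws fsx update-file-system \
    --file-system-id fs-0123456789abcdef0 \
    --open-zfs-configuration ThroughputCapacity=2560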
Amazon FSx NAS Features and Attributes
Let’s take a quick look at some of the features that make FSx for OpenZFS and the FSx Intelligent-Tiering storage class a great fit for your NAS-level storage needs:
Built-in Backups – Amazon FSx automatically makes a daily backup of each file system during a specified backup window and retains them for a specified retention period. The backups are file-system consistent, highly durable, and incremental. You can also create backups on your own and retain them for as long as needed.
Point-In-Time Snapshots – You can create a read-only image of an OpenZFS volume at any time. The snapshots are stored within the file system and consume storage; they can be used to restore a volume, restore individual files and folders, or to create a new volume as either a clone or a full copy (see the CLI sketch after this list).
Replication – You can replicate a point-in-time view of an OpenZFS volume to another volume across file systems, AWS Regions, and AWS accounts. FSx uses ZFS send/receive technology behind the scenes to perform this replication and automatically establishes and maintains network connectivity between file systems to handle interruptions and resume data transfer as needed.
Data Compression – You can enable ZSTD or LZ4 compression on your OpenZFS volumes to reduce storage cost and speed up data transfer.
User and Volume Quotas – You can limit the amount of storage consumed by an individual volume or user.
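To give a flavor of how a few of these features look from the CLI, here’s one more hedged sketch; the volume ID and names are placeholders, and the option names reflect the create-snapshot and create-volume commands as I understand them.

# Take a point-in-time snapshot of an existing OpenZFS volume
aws fsx create-snapshot \
    --name nightly-snap \
    --volume-id fsvol-0123456789abcdef0

# Create a child volume with ZSTD compression and a 500 GiB volume quota
aws fsx create-volume \
    --volume-type OPENZFS \
    --name analytics-scratch \
    --open-zfs-configuration 'ParentVolumeId=fsvol-0123456789abcdef0,DataCompressionType=ZSTD,StorageCapacityQuotaGiB=500'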
Things to Know
Here are a couple of things to keep in mind before we wrap up:
Regions – This new storage class is available in the US East (Ohio, N. Virginia), US West (Oregon), Asia Pacific (Mumbai, Singapore, Sydney, Tokyo), Canada (Central), and Europe (Frankfurt, Ireland) AWS Regions.
Pricing – Pricing is based on the amount of primary storage consumed (GB/Month) and read cache provisioned (GB/Month). See the Amazon FSx for OpenZFS Pricing page for more information.
— Jeff;