Amazon MemoryDB FAQs

General

Amazon MemoryDB is a Redis OSS-compatible, durable, in-memory database service that delivers ultra-fast performance. MemoryDB enables you to achieve microsecond read latency, single-digit millisecond write latency, high throughput, and Multi-AZ durability for modern applications, like those built with microservices architectures. These applications require low latency, high scalability, and use Redis OSS’ flexible data structures and APIs to make development agile and easy. MemoryDB stores your entire dataset in memory and leverages a distributed transactional log to provide both in-memory speed and data durability, consistency, and recoverability. You can use MemoryDB as a fully managed, primary database, enabling you to build high-performance applications without having to separately manage a cache, durable database, or the required underlying infrastructure.

You can get started by creating a new MemoryDB cluster using the AWS Management Console, Command Line Interface (CLI), or Software Development Kit (SDK). To create a MemoryDB cluster in the console, sign in and navigate to Amazon MemoryDB. From there, select “Get Started” then “Create new cluster.” For more detailed steps, and how to get started with the CLI, please see the MemoryDB documentation.
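As an illustrative sketch (not official sample code), creating a cluster with the AWS SDK for Python (boto3) might look like the following; the cluster name, node type, ACL, and subnet group are placeholders, and the exact parameters for your environment may differ:

```python
import boto3

# Create a MemoryDB client in the Region where the cluster should live.
memorydb = boto3.client("memorydb", region_name="us-east-1")

# Create a small cluster; ClusterName, NodeType, and ACLName are required.
# All names below are placeholders for illustration only.
response = memorydb.create_cluster(
    ClusterName="my-memorydb-cluster",
    NodeType="db.r6g.large",
    ACLName="open-access",        # default ACL; use your own ACL in production
    NumShards=1,
    NumReplicasPerShard=1,
    SubnetGroupName="my-subnet-group",
)

print(response["Cluster"]["Status"])  # typically "creating" until the cluster is ready
```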

Yes, MemoryDB maintains compatibility with Redis OSS and supports the same set of data types, parameters, and commands that you are familiar with. This means that your application code, clients, and tools you already use today with Redis OSS can be used with MemoryDB. MemoryDB supports all Redis OSS data types such as strings, lists, sets, hashes, sorted sets, hyperloglogs, bitmaps, and streams. Also, MemoryDB supports the 200+ Redis OSS commands with the exception of Redis OSS admin commands, because MemoryDB manages your cluster for you.
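To illustrate, here is a brief sketch using the open-source redis-py client to exercise a few of the standard data types; the endpoint shown is a placeholder, and MemoryDB clusters use TLS by default, hence ssl=True (see also the connection example under "Data ingestion and query"):

```python
from redis.cluster import RedisCluster

# Placeholder endpoint; substitute your cluster's configuration endpoint.
r = RedisCluster(
    host="clustercfg.my-memorydb-cluster.xxxxxx.memorydb.us-east-1.amazonaws.com",
    port=6379,
    ssl=True,
    decode_responses=True,
)

# Standard Redis OSS commands work unchanged against MemoryDB.
r.set("user:1:name", "Alice")                                             # string
r.hset("user:1:profile", mapping={"plan": "pro", "region": "us-east-1"})  # hash
r.zadd("leaderboard", {"alice": 120, "bob": 95})                          # sorted set
r.xadd("events", {"type": "login", "user": "alice"})                      # stream

print(r.get("user:1:name"))
print(r.zrange("leaderboard", 0, -1, withscores=True))
```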

For information on the versions of Redis OSS supported in MemoryDB, please visit the MemoryDB documentation.

A MemoryDB cluster is a collection of one or more nodes serving a single dataset. A MemoryDB dataset is partitioned into shards, and each shard has a primary node and up to 5 optional replica nodes. A primary node serves read and write requests, while a replica only serves read requests. A primary node can failover to a replica node, promoting that replica to the new primary node for that shard. For more information, visit the MemoryDB documentation.
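As a rough sketch, you can inspect this shard and node structure with boto3; ShowShardDetails asks MemoryDB to include each shard and its nodes in the response. The field names follow my reading of the DescribeClusters API, so verify them against the current SDK documentation:

```python
import boto3

memorydb = boto3.client("memorydb")

# Describe one cluster and include per-shard detail in the response.
cluster = memorydb.describe_clusters(
    ClusterName="my-memorydb-cluster",   # placeholder name
    ShowShardDetails=True,
)["Clusters"][0]

for shard in cluster["Shards"]:
    print(shard["Name"], shard["NumberOfNodes"])
    for node in shard["Nodes"]:
        # Each shard has one primary plus up to 5 replicas spread across AZs.
        print("  ", node["Name"], node["AvailabilityZone"])
```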

MemoryDB is a durable, in-memory database for workloads that require an ultra-fast, Redis OSS-compatible primary database. You should consider using MemoryDB if your workload requires a durable database that provides ultra-fast performance (microsecond read and single-digit millisecond write latency). MemoryDB may also be a good fit for your use case if you want to build an application using Redis OSS data structures and APIs with a primary, durable database. Finally, you should consider using MemoryDB to simplify your application architecture and lower costs by replacing the combination of a separate cache and durable database with a single service that provides both performance and durability.

ElastiCache is a service that is commonly used to cache data from other databases and data stores using Redis OSS. You should consider ElastiCache for caching workloads where you want to accelerate data access with your existing primary database or data store (microsecond read and write performance). You should also consider ElastiCache for use cases where you want to use the Redis OSS data structures and APIs to access data stored in a primary database or data store.

Please refer to the service level agreement (SLA).

For current limits and quotas, see the MemoryDB documentation.

Performance and durability

MemoryDB’s throughput and latency vary based on the node type, payload size, and number of client connections. MemoryDB delivers microsecond read latency, single-digit millisecond write latency, and read-after-write consistency on a shard’s primary node. MemoryDB can support up to 390K read and 100K write requests per second and up to 1.3 GB/s read and 100 MB/s write throughput per node (based on internal testing with read-only and write-only workloads). A MemoryDB cluster shards data across one or more nodes, enabling you to add more shards or replicas to your cluster to increase aggregate throughput.

MemoryDB stores your entire data set in memory and uses a distributed Multi-AZ transactional log to provide data durability, consistency, and recoverability. By storing data across multiple AZs, MemoryDB has fast database recovery and restart. By also storing the data in-memory, MemoryDB can deliver ultra-fast performance and high throughput.

MemoryDB leverages a distributed transactional log to durably store data. By storing data across multiple AZs, MemoryDB has fast database recovery and restart. Also, MemoryDB offers eventual consistency for replica nodes and consistent reads on primary nodes.

Redis OSS includes an optional append-only file (AOF) feature, which persists data in a file on a primary node’s disk for durability. However, because AOF stores data locally on primary nodes in a single Availability Zone, there is a risk of data loss. Also, in the event of a node failure, there is a risk of consistency issues with replicas.

Yes, MemoryDB supports high availability. You can create a MemoryDB cluster with Multi-AZ availability with up to 5 replicas in different AZs. When a failure occurs on a primary node, MemoryDB will automatically failover and promote one of the replicas to serve as the new primary and direct write traffic to it. Additionally, MemoryDB utilizes a distributed transactional log to ensure the data on replicas is kept up-to-date, even in the event of a primary node failure. Failover typically happens in under 20 seconds for unplanned outages and typically under 200 milliseconds for planned outages.

MemoryDB uses a distributed transactional log to durably store data written to your database. The transactional log is used for database recovery, restart, and failover, and to provide eventual consistency between primaries and replicas.

Redis OSS allows writes and strongly consistent reads on the primary node of each shard and eventually consistent reads from read replicas. These consistency properties are not guaranteed if a primary node fails, as writes can become lost during a failover and thus violate the consistency model.

The consistency model of MemoryDB is similar to Redis OSS. However, in MemoryDB, data is not lost across failovers, allowing clients to read their writes from primaries regardless of node failures. Only data that is successfully persisted in the multi-AZ transaction log is visible. Replica nodes are still eventually consistent, with lag metrics published to Amazon CloudWatch.

With MemoryDB version 7 compatible with Redis OSS, we introduced enhanced IO multiplexing, which delivers additional improvements to throughput and latency at scale. Enhanced IO multiplexing is ideal for throughput-bound workloads with multiple client connections, and its benefits scale with the level of workload concurrency. For example, when using an r6g.4xlarge node and running 5,200 concurrent clients, you can achieve up to 46% higher throughput (read and write operations per second) and up to 21% lower P99 latency, compared with MemoryDB version 6 compatible with Redis OSS. For these types of workloads, a node's network IO processing can become a limiting factor in the ability to scale. With MemoryDB version 7 compatible with Redis OSS, each dedicated network IO thread pipelines commands from multiple clients into the MemoryDB engine, taking advantage of Redis OSS' ability to efficiently process commands in batches.

For more information see the documentation.

Data ingestion and query

To write data to and read data from your MemoryDB cluster, you connect to your cluster using one of the supported Redis OSS clients. For a list of supported Redis OSS clients, please see the Redis OSS documentation. For instructions on how to connect to your MemoryDB cluster using a Redis OSS client, see the MemoryDB documentation.
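As a minimal sketch with redis-py, connecting to a MemoryDB cluster endpoint over TLS and authenticating as an ACL user might look like this; the endpoint, user name, and password are placeholders:

```python
from redis.cluster import RedisCluster

# Placeholder values; use your cluster's configuration endpoint and an ACL user you created.
cluster_endpoint = "clustercfg.my-memorydb-cluster.xxxxxx.memorydb.us-east-1.amazonaws.com"

r = RedisCluster(
    host=cluster_endpoint,
    port=6379,
    ssl=True,                     # MemoryDB encrypts in-transit traffic by default
    username="app-user",          # ACL user
    password="a-strong-password",
    decode_responses=True,
)

r.set("greeting", "hello from MemoryDB")
print(r.get("greeting"))
```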

Hardware, scaling and maintenance

You can create a MemoryDB cluster with up to 500 nodes. This gives a maximum in-memory storage capacity of ~100 TB, assuming 250 primary nodes, each with one replica for high availability (500 nodes total).

Yes, you can resize your MemoryDB cluster horizontally and vertically. You can scale your cluster horizontally by adding or removing nodes: you can add shards to spread your dataset across more shards, and you can add replica nodes to each shard to increase availability and read throughput. You can also remove shards and replicas to scale in your cluster. Additionally, you can scale your cluster vertically by changing your node type, which changes the memory and CPU resources per node. During horizontal and vertical resizing operations, your cluster stays online and continues to serve read and write requests.
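For example, resizing with boto3 might look like the following sketch; the UpdateCluster parameters shown (ShardConfiguration, ReplicaConfiguration, NodeType) reflect my reading of the API, so confirm them against the current documentation before relying on them:

```python
import boto3

memorydb = boto3.client("memorydb")

# Scale out: increase the number of shards (horizontal scaling).
# The cluster stays online during the update; wait for one update to
# finish before starting the next.
memorydb.update_cluster(
    ClusterName="my-memorydb-cluster",        # placeholder name
    ShardConfiguration={"ShardCount": 4},
)

# Add read replicas per shard for more read throughput and availability.
memorydb.update_cluster(
    ClusterName="my-memorydb-cluster",
    ReplicaConfiguration={"ReplicaCount": 2},
)

# Scale up: move to a larger node type (vertical scaling).
memorydb.update_cluster(
    ClusterName="my-memorydb-cluster",
    NodeType="db.r6g.2xlarge",
)
```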

MemoryDB makes maintenance and updates easy for your cluster, and provides two different processes for cluster maintenance. First, for some mandatory updates, MemoryDB automatically patches your cluster during maintenance windows which you specify. Second, for some updates, MemoryDB utilizes service updates, which you can apply at any time or schedule for a future maintenance window. Some service updates are automatically scheduled in a maintenance window after a certain date. Cluster updates help strengthen security, reliability, and operational performance of your clusters, and your cluster continues to stay online and serve read and write requests. For more information on cluster maintenance, see the MemoryDB documentation.
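To apply a pending service update on demand, a sketch along these lines should work; it assumes the DescribeServiceUpdates and BatchUpdateCluster operations as described in the MemoryDB API reference, with placeholder names:

```python
import boto3

memorydb = boto3.client("memorydb")

# List service updates that apply to a cluster.
updates = memorydb.describe_service_updates(
    ClusterNames=["my-memorydb-cluster"],     # placeholder cluster name
)["ServiceUpdates"]

for update in updates:
    print(update["ServiceUpdateName"], update["Status"])

# Apply a specific service update now instead of waiting for the maintenance window.
if updates:
    memorydb.batch_update_cluster(
        ClusterNames=["my-memorydb-cluster"],
        ServiceUpdate={"ServiceUpdateNameToApply": updates[0]["ServiceUpdateName"]},
    )
```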

Backup and restore

Yes, you can create snapshots to back up the data and metadata of your MemoryDB cluster. You can manually create a snapshot, or you can use MemoryDB’s automated snapshot scheduler to take a new snapshot each day at a time you specify. You can choose to retain your snapshots for up to 35 days after they are created. Snapshots are stored in Amazon S3, which is designed for 99.999999999% (11 9's) durability. Also, you can choose to take a final snapshot of your cluster when you delete the cluster. Furthermore, you can export MemoryDB snapshots from the service to your own Amazon S3 bucket. For more information on snapshots, see the MemoryDB documentation.
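As a sketch, taking a manual snapshot and exporting a copy to your own S3 bucket with boto3 might look like this; all names are placeholders, and the CopySnapshot export behavior via TargetBucket is my reading of the API reference, so verify it in the documentation:

```python
import boto3

memorydb = boto3.client("memorydb")

# Take a manual snapshot of the cluster.
memorydb.create_snapshot(
    ClusterName="my-memorydb-cluster",
    SnapshotName="my-cluster-backup-2024-01-01",
)

# Export a copy of the snapshot to an S3 bucket you own
# (the bucket must grant MemoryDB the required permissions).
memorydb.copy_snapshot(
    SourceSnapshotName="my-cluster-backup-2024-01-01",
    TargetSnapshotName="my-cluster-backup-2024-01-01-export",
    TargetBucket="my-snapshot-bucket",
)
```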

Yes, you can restore your MemoryDB cluster from a snapshot when creating a new MemoryDB cluster.

Yes, you can restore your MemoryDB cluster from a Redis OSS RDB file. You can specify the RDB file to restore from when creating a new MemoryDB cluster.

Yes, you can migrate data from ElastiCache to MemoryDB. First, create a snapshot of your ElastiCache cluster and export it to your S3 bucket. Next, create a new MemoryDB cluster and specify the backup to restore from. MemoryDB will create a new cluster with the data and Redis OSS metadata from the snapshot. For more information on migrating data from ElastiCache to MemoryDB, see the MemoryDB documentation.
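As an illustrative sketch, restoring into a new cluster with boto3 uses CreateCluster with a snapshot reference: SnapshotName points at a MemoryDB snapshot, while SnapshotArns can point at an RDB file or exported ElastiCache snapshot in S3. The parameter names are taken from my reading of the API reference, and all resource names below are placeholders:

```python
import boto3

memorydb = boto3.client("memorydb")

# Restore a new cluster from an existing MemoryDB snapshot...
memorydb.create_cluster(
    ClusterName="restored-cluster",
    NodeType="db.r6g.large",
    ACLName="open-access",
    SnapshotName="my-cluster-backup-2024-01-01",
)

# ...or seed a new cluster from an RDB file or exported ElastiCache snapshot in S3.
memorydb.create_cluster(
    ClusterName="migrated-cluster",
    NodeType="db.r6g.large",
    ACLName="open-access",
    SnapshotArns=["arn:aws:s3:::my-snapshot-bucket/my-elasticache-backup.rdb"],
)
```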

Metrics

Yes, MemoryDB offers operational and performance metrics for your cluster. MemoryDB has over 30 CloudWatch metrics, and you can view these metrics in the MemoryDB console. For more information on CloudWatch metrics and MemoryDB, see the MemoryDB documentation.
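As a sketch, you can also pull these metrics programmatically from CloudWatch; the metric name and dimension below are assumptions based on how MemoryDB publishes metrics, so check the MemoryDB documentation for the exact names:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

# Assumed metric name and dimension; verify against the MemoryDB documentation.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/MemoryDB",
    MetricName="DatabaseMemoryUsagePercentage",
    Dimensions=[{"Name": "ClusterName", "Value": "my-memorydb-cluster"}],
    StartTime=now - timedelta(hours=1),
    EndTime=now,
    Period=300,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], round(point["Average"], 2))
```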

Security and compliance

Yes, MemoryDB supports encryption of your data both at rest and in transit. For encryption at rest, you can use an AWS Key Management Service (KMS) customer managed key or a MemoryDB-provided key. With Graviton2 instances for your MemoryDB cluster, your data is also encrypted in memory using always-on 256-bit DRAM encryption.
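For illustration, here is a sketch of creating a cluster with a customer managed KMS key for at-rest encryption and TLS for in-transit encryption; the key ARN and other names are placeholders:

```python
import boto3

memorydb = boto3.client("memorydb")

memorydb.create_cluster(
    ClusterName="encrypted-cluster",          # placeholder
    NodeType="db.r6g.large",
    ACLName="open-access",
    TLSEnabled=True,                          # in-transit encryption (on by default)
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/REPLACE-WITH-YOUR-KEY-ID",
)
```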

MemoryDB uses Redis OSS Access Control Lists (ACLs) to control both authentication and authorization for your cluster. ACLs enable you to define different permissions for different users in the same cluster. An ACL is a collection of one or more users. Each user has a password and access string, which is used to authorize access to Redis OSS commands and data. To learn more about ACLs in MemoryDB, see the MemoryDB documentation.
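A sketch of creating an ACL user with a restricted access string and grouping it into an ACL follows; the user name, password, and access string are placeholders, and the access string uses the standard Redis OSS ACL syntax:

```python
import boto3

memorydb = boto3.client("memorydb")

# Create a user that can only read and write keys prefixed with "app:".
memorydb.create_user(
    UserName="app-user",
    AuthenticationMode={"Type": "password", "Passwords": ["a-strong-password"]},
    AccessString="on ~app:* +@read +@write",
)

# Group the user into an ACL, then reference the ACL when creating or updating a cluster.
memorydb.create_acl(
    ACLName="app-acl",
    UserNames=["app-user"],
)
```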

Yes, all MemoryDB clusters must be launched in a VPC.

We will continue to support more compliance certifications. See here for the latest compliance readiness information.

Yes. To receive a history of all Amazon MemoryDB API calls made on your account, you simply turn on CloudTrail in the AWS Management Console. For more information, visit the CloudTrail home page.

Cost optimization

Data tiering for Amazon MemoryDB is a price-performance option for MemoryDB that automatically moves less-frequently accessed data from memory to high-performance, locally attached solid-state drives (SSDs). Data tiering increases capacity, simplifies cluster management, and improves total cost of ownership (TCO) for MemoryDB.

You should use data tiering when you need an easier and more cost-effective way to scale data capacity for your MemoryDB clusters without sacrificing your applications’ availability. Data tiering is ideal for workloads that access up to 20% of their data regularly, and for applications that can tolerate additional latency the first time a less-frequently accessed item is needed. Using data tiering with R6gd nodes that have nearly 5x more total capacity (memory + SSD) can help you achieve over 60% storage cost savings when running at maximum utilization, compared to R6g nodes (memory only). Assuming 500-byte String values, you can typically expect an additional 450µs latency for read requests to data stored on SSD compared to read requests to data in memory.

Data tiering works by utilizing SSD storage in cluster nodes when available memory capacity is exhausted. When using cluster nodes that have SSD storage, data tiering is automatically enabled and MemoryDB manages data placement, transparently moving items between memory and disk using a least-recently used (LRU) policy. When memory is fully consumed, MemoryDB automatically detects which items were least-recently used and moves their values to disk, optimizing cost. When an application needs to retrieve an item from disk, MemoryDB transparently moves its value to memory before serving the request, with minimal impact to performance.

To get started, create a new MemoryDB cluster using memory-optimized instances with ARM-based AWS Graviton2 processors and NVMe SSDs (R6gd). You can then migrate data from an existing cluster by importing a snapshot.
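A sketch of that with boto3 might look as follows; the DataTiering flag and R6gd node type follow my reading of the CreateCluster API, and the cluster and snapshot names are placeholders:

```python
import boto3

memorydb = boto3.client("memorydb")

# Create a data tiering cluster on R6gd nodes (memory + local NVMe SSD),
# restoring data from a snapshot of an existing cluster.
memorydb.create_cluster(
    ClusterName="tiered-cluster",            # placeholder
    NodeType="db.r6gd.xlarge",               # R6gd nodes are required for data tiering
    ACLName="open-access",
    DataTiering=True,
    SnapshotName="my-cluster-backup-2024-01-01",
)
```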

Pricing for R6gd nodes with data tiering is based on the instance-hours consumed. You also pay for data written when using R6gd nodes, as with other MemoryDB node types. For more details, see the MemoryDB pricing page.

MemoryDB reserved nodes offer size flexibility within a node family and AWS Region. This means that the discounted reserved node rate is applied automatically to usage of all sizes in the same node family. For example, if you purchase an r6g.xlarge reserved node and need to scale to a larger r6g.2xlarge node, your reserved node discounted rate is automatically applied to 50% of the usage of the r6g.2xlarge node in the same AWS Region. Size flexibility reduces the time you need to spend managing your reserved nodes, and since you're no longer tied to a specific database node size, you can get the most out of your discount even if your capacity needs change.

MemoryDB reserved node pricing is based on node type, term duration (one- or three-year), payment option (No Upfront, Partial Upfront, All Upfront), and AWS Region. Please note that reserved node prices don't cover Data Written or Snapshot Storage costs. For more details, see the MemoryDB pricing page.

MemoryDB offers reserved nodes for the memory optimized R6g, R7g, and R6gd (with data tiering) nodes.