AWS Database Blog

Build a multi-Region session store with Amazon ElastiCache for Valkey Global Datastore

As companies expand globally, they must architect highly available and fault-tolerant systems across multiple AWS Regions. At that scale, designing a caching solution that spans a multi-Region infrastructure becomes a key challenge. In this post, we dive deep into how to use Amazon ElastiCache for Valkey, a fully managed in-memory data store compatible with Valkey and Redis OSS, and its Global Datastore feature to build a multi-Region session store.
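As a minimal sketch of the pattern the post covers, the snippet below treats a Global Datastore as a session store: session writes go to the primary Region's cluster endpoint, and reads are served from a Region-local secondary. The endpoint hostnames, key naming, and TTL are illustrative assumptions rather than values from the post; a Redis-compatible Python client works here because Valkey speaks the same protocol.

    # Sketch of a multi-Region session store on a Global Datastore.
    # Writes target the primary Region; reads use a local secondary Region.
    # Endpoint hostnames are placeholders, not real cluster addresses.
    import json
    import uuid

    import redis  # protocol-compatible with Valkey

    SESSION_TTL_SECONDS = 1800  # 30-minute session window (assumption)

    primary = redis.Redis(
        host="sessions.primary.use1.cache.amazonaws.com", port=6379, ssl=True
    )
    local_replica = redis.Redis(
        host="sessions.secondary.euw1.cache.amazonaws.com", port=6379, ssl=True
    )

    def create_session(user_id: str) -> str:
        """Write a new session to the primary Region with a TTL."""
        session_id = str(uuid.uuid4())
        primary.setex(
            f"session:{session_id}", SESSION_TTL_SECONDS, json.dumps({"user_id": user_id})
        )
        return session_id

    def get_session(session_id: str) -> dict | None:
        """Read the session from the Region-local replica for low latency."""
        raw = local_replica.get(f"session:{session_id}")
        return json.loads(raw) if raw else None

Reading from the local secondary keeps latency low for users in that Region, while the Global Datastore replicates writes from the primary asynchronously.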

Automate Amazon RDS for PostgreSQL major or minor version upgrade using AWS Systems Manager and Amazon EC2

In this post, we guide you through setting up automation for pre-upgrade checks and upgrading a fleet of Amazon RDS for PostgreSQL instances. In this solution, we use AWS Systems Manager to automate the Amazon RDS upgrade job.
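The post's Systems Manager automation orchestrates the fleet-wide job; as a hedged illustration of the underlying RDS calls such a runbook typically wraps, the sketch below lists valid upgrade targets for an engine version and then requests the version change. The instance identifier and version numbers are placeholders.

    # Illustrative core of an RDS upgrade step that an SSM Automation runbook might wrap.
    import boto3

    rds = boto3.client("rds")

    def valid_upgrade_targets(engine: str, current_version: str) -> list[str]:
        """Pre-upgrade check: which engine versions can this version move to?"""
        resp = rds.describe_db_engine_versions(Engine=engine, EngineVersion=current_version)
        targets = resp["DBEngineVersions"][0]["ValidUpgradeTarget"]
        return [t["EngineVersion"] for t in targets]

    def upgrade_instance(instance_id: str, target_version: str) -> None:
        """Request the upgrade; it runs in the next maintenance window unless applied immediately."""
        rds.modify_db_instance(
            DBInstanceIdentifier=instance_id,
            EngineVersion=target_version,
            AllowMajorVersionUpgrade=True,
            ApplyImmediately=False,
        )

    # Placeholder identifiers and versions
    print(valid_upgrade_targets("postgres", "14.15"))
    upgrade_instance("pg-instance-1", "16.8")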

Supercharging vector search performance and relevance with pgvector 0.8.0 on Amazon Aurora PostgreSQL

In this post, we explore how pgvector 0.8.0 on Aurora PostgreSQL-Compatible delivers up to 9x faster query processing and up to 100x more relevant search results, addressing key challenges that enterprise AI applications face when implementing vector search at scale.
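As a minimal sketch of a filtered vector query on Aurora PostgreSQL, assuming the iterative index scan setting introduced in pgvector 0.8.0 and a hypothetical documents table, the snippet below runs a nearest-neighbor search through psycopg2. The connection details, schema, and query embedding are placeholders.

    # Hypothetical filtered vector search with pgvector on Aurora PostgreSQL.
    import psycopg2

    conn = psycopg2.connect(
        "host=my-aurora-cluster.cluster-xyz.us-east-1.rds.amazonaws.com "
        "dbname=appdb user=app sslmode=require"
    )

    query_embedding = "[0.12, -0.03, 0.88]"  # in practice, the embedding model's output

    with conn, conn.cursor() as cur:
        # pgvector 0.8.0 iterative index scans keep scanning the HNSW index until
        # enough rows survive the WHERE filter, improving relevance of filtered results.
        cur.execute("SET hnsw.iterative_scan = 'relaxed_order'")
        cur.execute(
            """
            SELECT id, title
            FROM documents
            WHERE category = %s
            ORDER BY embedding <=> %s::vector
            LIMIT 10
            """,
            ("faq", query_embedding),
        )
        print(cur.fetchall())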

Explore the new openCypher custom functions and subquery support in Amazon Neptune

In this post, we describe some of the openCypher features that have been released as part of the 1.4.2.0 engine update to Amazon Neptune. Neptune provides developers with the choice of building their graph applications using three open graph query languages: openCypher, Apache TinkerPop Gremlin, and the World Wide Web Consortium’s (W3C) SPARQL 1.1. You can use the guide at the end of this post to try out the new features described here.
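As a hedged illustration, the snippet below sends an openCypher query containing a CALL subquery to Neptune through the boto3 neptunedata client. The person/knows graph schema and the cluster endpoint are assumptions for illustration, and the subquery syntax depends on the 1.4.2.0 engine release described in the post.

    # Hypothetical CALL subquery run against Neptune via the boto3 neptunedata client.
    import boto3

    neptune = boto3.client(
        "neptunedata",
        endpoint_url="https://my-neptune-cluster.cluster-xyz.us-east-1.neptune.amazonaws.com:8182",
    )

    query = """
    MATCH (p:person)
    CALL {
        WITH p
        MATCH (p)-[:knows]->(f:person)
        RETURN count(f) AS friends
    }
    RETURN p.name AS name, friends
    ORDER BY friends DESC
    LIMIT 5
    """

    response = neptune.execute_open_cypher_query(openCypherQuery=query)
    print(response["results"])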

Connect Amazon Bedrock Agents with Amazon Aurora PostgreSQL using Amazon RDS Data API

In this post, we describe a solution to integrate generative AI applications with relational databases like Amazon Aurora PostgreSQL-Compatible Edition using RDS Data API (Data API) for simplified database interactions, Amazon Bedrock for AI model access, Amazon Bedrock Agents for task automation, and Amazon Bedrock Knowledge Bases for contextual information retrieval.
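To make the database side of that integration concrete, here is a minimal sketch of a Lambda handler, of the kind that might back a Bedrock agent action group, calling Aurora PostgreSQL through RDS Data API with boto3. The cluster and secret ARNs, database name, SQL, and event shape are assumptions, not the post's actual implementation.

    # Sketch of an action group Lambda that queries Aurora via RDS Data API.
    import boto3

    rds_data = boto3.client("rds-data")

    CLUSTER_ARN = "arn:aws:rds:us-east-1:111122223333:cluster:my-aurora-cluster"
    SECRET_ARN = "arn:aws:secretsmanager:us-east-1:111122223333:secret:my-db-secret"

    def lambda_handler(event, context):
        """Look up an order for the agent; no database driver or VPC connectivity required."""
        order_id = event["parameters"][0]["value"]  # shape depends on the agent's action schema
        resp = rds_data.execute_statement(
            resourceArn=CLUSTER_ARN,
            secretArn=SECRET_ARN,
            database="sales",
            sql="SELECT status, total FROM orders WHERE order_id = :order_id",
            parameters=[{"name": "order_id", "value": {"stringValue": order_id}}],
        )
        return {"records": resp["records"]}

Because Data API is an HTTPS endpoint, the Lambda function needs only IAM permissions and the Secrets Manager secret, which is what simplifies the agent-to-database path.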

Run SQL Server post-migration activities using Cloud Migration Factory on AWS

In this post, we show you essential post-migration tasks to perform after migrating your SQL Server database to Amazon EC2, such as validating database status, configuring performance settings, and running consistency checks. We also explore how Cloud Migration Factory on AWS (CMF) can automate these tasks, providing efficiency, scalability, and better visibility to simplify and expedite your migration process.
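As a rough sketch of the kinds of checks involved, assuming pyodbc connectivity to the migrated instance and a placeholder database name, the snippet below validates database status from sys.databases and runs a DBCC consistency check; in the post, CMF drives such steps as automated scripts rather than ad hoc code.

    # Illustrative post-migration checks against SQL Server on EC2.
    # Connection string and database name are placeholders.
    import pyodbc

    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};SERVER=ec2-sql-host;DATABASE=master;"
        "Trusted_Connection=yes;",
        autocommit=True,
    )
    cur = conn.cursor()

    # Validate database status
    cur.execute("SELECT name, state_desc FROM sys.databases")
    for name, state in cur.fetchall():
        print(f"{name}: {state}")

    # Consistency check on a migrated database (placeholder name)
    cur.execute("DBCC CHECKDB ('SalesDB') WITH NO_INFOMSGS")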

Achieve up to 1.7 times higher write throughput and 1.38 times better price performance with Amazon Aurora PostgreSQL on AWS Graviton4-based R8g instances

In this post, we demonstrate how upgrading to Graviton4-based R8g instances running Aurora PostgreSQL-Compatible 17.4 on the Aurora I/O-Optimized cluster configuration can deliver significant price-performance gains: up to 1.7 times higher write throughput, up to 1.38 times better price performance, and up to 46% lower commit latency on r8g.16xlarge instances (38% lower on r8g.2xlarge instances) compared to Graviton2-based R6g instances.

How Amazon maintains accurate totals at scale with Amazon DynamoDB

Amazon’s Finance Technologies Tax team (FinTech Tax) manages mission-critical services for tax computation, deduction, remittance, and reporting across global jurisdictions. These services process billions of transactions annually across multiple international marketplaces. In this post, we show how the team implemented tiered tax withholding using Amazon DynamoDB transactions and conditional writes.
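As a simplified sketch of the pattern, not the team's actual schema, the snippet below records a withholding item and updates a running total in one DynamoDB transaction, using a conditional write to reject duplicate transaction IDs. Table and attribute names are hypothetical.

    # Sketch: keep an accurate running total with a DynamoDB transaction.
    import boto3

    dynamodb = boto3.client("dynamodb")

    def record_withholding(account_id: str, txn_id: str, amount_cents: int) -> None:
        dynamodb.transact_write_items(
            TransactItems=[
                {
                    "Put": {
                        "TableName": "TaxWithholdings",
                        "Item": {
                            "account_id": {"S": account_id},
                            "txn_id": {"S": txn_id},
                            "amount_cents": {"N": str(amount_cents)},
                        },
                        # Conditional write: fail the whole transaction if this txn was already recorded
                        "ConditionExpression": "attribute_not_exists(txn_id)",
                    }
                },
                {
                    "Update": {
                        "TableName": "AccountTotals",
                        "Key": {"account_id": {"S": account_id}},
                        "UpdateExpression": "ADD total_withheld_cents :amt",
                        "ExpressionAttributeValues": {":amt": {"N": str(amount_cents)}},
                    }
                },
            ]
        )

Because both writes succeed or fail together, a retried or duplicate request cannot double-count the amount in the running total.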