Ten years ago, we announced the general availability of the Amazon Aurora database, which combined the speed and availability of high-end commercial databases with the simplicity and cost-effectiveness of open source databases.
As Jeff described in his launch blog post: “With storage replicated both within and across Availability Zones, along with an update model driven by quorum writes, Amazon Aurora is designed to deliver high performance and 99.99% availability while easily scaling to 64 TiB of storage.”
When we started developing Aurora ten years ago, we made a fundamental architectural decision that would forever change the database landscape: separating compute from storage. This new approach has allowed Aurora to deliver the performance and availability of commercial databases at one-tenth the cost.
This is one of the reasons hundreds of thousands of AWS customers choose Aurora as their relational database.
Today, we’re excited to invite you to join us at a live event on August 21, 2025, to celebrate a decade of Aurora innovation.
A quick look back at the past
Throughout Aurora’s development, we focused on four core innovation themes: security as our highest priority, scalability to meet growing workloads, predictable pricing for better cost management, and multi-Region support for global applications. Let me walk through a few key milestones along Aurora’s journey.
We previewed Aurora at re:Invent 2014 and made it generally available in July 2015. At launch, we introduced Aurora as a “new cost-effective MySQL-compatible database engine.”
In June 2016, we introduced reader endpoints and cross-Region read replicas, followed by AWS Lambda integration and the ability to load tables directly from Amazon S3 in October. In June 2017, we added database cloning and export-to-Amazon S3 capabilities, and full PostgreSQL compatibility in October of the same year.
The journey continued with the preview of Aurora Serverless in November 2017, which became generally available in August 2018. We introduced blue/green deployments to simplify database updates and optimized reads instances to improve query performance.
In 2023, we added vector capabilities with pgvector for similarity search in Aurora PostgreSQL, and Aurora I/O-Optimized, providing up to 40 percent cost savings for I/O-intensive workloads. We launched Aurora zero-ETL integration with Amazon Redshift, enabling near real-time analytics and machine learning (ML) on petabytes of transactional data from Aurora, removing the need to build and maintain complex data pipelines. This year, we added Aurora MySQL zero-ETL integration with Amazon SageMaker, enabling near real-time access to your data in the SageMaker lakehouse architecture to run a broad range of analytics.
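Because pgvector exposes similarity search through plain SQL, you can drive it from any PostgreSQL client. Here is a minimal sketch in Python using psycopg; the endpoint, credentials, table name, and the tiny 3-dimensional embeddings are illustrative assumptions (real embedding models emit hundreds or thousands of dimensions):

```python
# A minimal sketch of pgvector similarity search on Aurora PostgreSQL.
# Endpoint, credentials, and embedding values below are placeholders.
import psycopg  # pip install "psycopg[binary]"

conn = psycopg.connect(
    host="my-cluster.cluster-xxxx.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    dbname="appdb", user="app_user", password="...",
)
with conn, conn.cursor() as cur:
    cur.execute("CREATE EXTENSION IF NOT EXISTS vector;")
    cur.execute("""
        CREATE TABLE IF NOT EXISTS documents (
            id bigserial PRIMARY KEY,
            content text,
            embedding vector(3)  -- match your embedding model's dimension
        );
    """)
    cur.execute(
        "INSERT INTO documents (content, embedding) VALUES (%s, %s::vector)",
        ("hello aurora", "[0.1, 0.2, 0.3]"),
    )
    # Nearest-neighbor search: <-> is pgvector's Euclidean distance operator.
    cur.execute(
        "SELECT content FROM documents ORDER BY embedding <-> %s::vector LIMIT 5",
        ("[0.1, 0.2, 0.25]",),
    )
    print(cur.fetchall())
```

For larger tables you would typically add an index (pgvector supports IVFFlat and HNSW) so the search doesn’t scan every row.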
In 2024, we made it effortless, with just one click, to choose Aurora PostgreSQL as the vector store for Amazon Bedrock Knowledge Bases, and we launched Aurora PostgreSQL Limitless Database for horizontal scaling.
To simplify scaling for customers, in September 2020 we also increased the maximum storage size to 128 TiB, enabling many applications to run within a single instance. Last month, we further simplified scaling by doubling the maximum storage to 256 TiB, with no upfront provisioning and pay-as-you-go pricing. This allows even more customers to run their growing workloads without the complexity of managing multiple instances, while keeping costs in check.
Most recently, at re:Invent 2024, we previewed Amazon Aurora DSQL, which became generally available in May 2025. Aurora DSQL is our latest innovation in distributed SQL databases, offering active-active high availability and multi-Region strong consistency. It is the fastest serverless distributed SQL database for always-available applications, scaling effortlessly to meet any demand with zero infrastructure management.
Aurora DSQL builds on our original architectural principle of separating storage and compute, with independent scaling of reads, writes, compute, and storage. It provides 99.99% single-Region and 99.999% multi-Region availability, with strong consistency across all Regional endpoints.
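Aurora DSQL is PostgreSQL-compatible, so you can talk to it with standard PostgreSQL tooling. The sketch below is a minimal, non-authoritative example in Python with psycopg; the cluster endpoint and role name are placeholders, and the short-lived IAM authentication token is assumed to be generated out of band (for example with the AWS SDK or CLI):

```python
# A minimal sketch, not an official example: connecting to Aurora DSQL with a
# standard PostgreSQL driver. Endpoint, role, and token below are assumptions.
import psycopg  # pip install "psycopg[binary]"

CLUSTER_ENDPOINT = "your-cluster.dsql.us-east-1.on.aws"  # hypothetical endpoint
AUTH_TOKEN = "..."  # short-lived IAM authentication token, generated out of band

conn = psycopg.connect(
    host=CLUSTER_ENDPOINT,
    user="admin",          # assumed role name; use whatever role you configured
    password=AUTH_TOKEN,   # DSQL authenticates with IAM tokens, not passwords
    dbname="postgres",
    sslmode="require",     # connections are encrypted in transit
    autocommit=True,       # keep the DDL and DML below in separate transactions
)
with conn.cursor() as cur:
    cur.execute("CREATE TABLE IF NOT EXISTS greetings (id int PRIMARY KEY, msg text)")
    cur.execute("INSERT INTO greetings VALUES (1, 'hello dsql')")
    # With strong consistency across Regional endpoints, this read returns the
    # row just written, regardless of which endpoint served the write.
    cur.execute("SELECT msg FROM greetings WHERE id = 1")
    print(cur.fetchone())
conn.close()
```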
And in June, we launched Model Context Protocol (MCP) servers, so you can integrate your AI agents with your data sources and data services.
Celebrating 10 years of innovation
Join us for the livestream event on August 21. You will learn directly from the architects who pioneered the separation of compute and storage in cloud databases, with technical insights into Aurora architecture and scaling. You will also get a look into the future of database technology as Aurora engineers share their vision and discuss the complex challenges they are working to solve for customers.
The event also features practical demonstrations showing how to implement key features. You will see how to build AI-powered applications using pgvector, understand cost optimization with the new Aurora DSQL pricing, and learn how to achieve multi-Region strong consistency for global applications.
The interactive format includes Q&A with Aurora experts, so you can get answers to your specific technical questions. You can also receive AWS credits to test the new Aurora capabilities.
If you are interested in AI, you will particularly benefit from the sessions on MCP servers, LangChain agents, and Strands Agents integration with Aurora DSQL, showing how to securely integrate AI capabilities with your Aurora databases.
Whether you are operating mission-critical workloads or building new applications, these sessions will help you understand how to use the latest Aurora features.
Sign up today to secure your spot and be part of this celebration of database innovation.
Here’s to the next decade of Aurora innovation!
– seb