A common adage for any developer choosing a database technology is “just use Postgres”. While this advice remains well-intentioned, it deserves a fresh look in 2025. Thanks to a wave of recent innovations and acquisitions, PostgreSQL providers now resemble JavaScript frameworks of the 2010s - there’s seemingly a new one to consider every six months.
Just choosing PostgreSQL isn’t enough anymore - there is an entirely new decision tree to follow before you can launch a system. 🌳
Postgres uses an elephant as its mascot. This means I have a good excuse to reminisce on my March 2025 trip to Thailand 🇹🇭
Postgres Renaissance 🎨
The saying “just use Postgres” is deliberately simplistic. It is intended to reduce analysis paralysis at a critical point in a system’s lifetime. Postgres is the “boring” choice because it has been battle-tested in production for decades by thousands of companies. Being open-source and extensible, it has a thriving community. There is very little to gain from choosing a less-proven technology, one with restrictive licensing, or a patchwork of solutions each with its own cost model. Postgres closes the fewest doors when a project’s uncertainty is at its highest.
Recent Postgres versions and extensions (e.g. `pgvector`) have introduced and significantly improved support for vector/JSON data, which makes it even harder to justify starting with a specialised store. Extensions like `pg_duckdb` support column-oriented, analytical workloads inside a row-oriented, transactional system. Many articles exist on the versatility of Postgres. Some even go as far as stating you can host a full REST API with PostgREST (though I haven’t seen production examples of this).
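As a minimal sketch of how far core Postgres now stretches, a single table can hold relational, JSON and vector data side by side (assuming the `pgvector` extension is installed; the `documents` table, its columns, and the tiny 3-dimensional embeddings are illustrative — real embeddings are typically hundreds of dimensions):

```sql
-- Hypothetical schema mixing relational, JSONB and vector columns.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE documents (
    id        bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    title     text NOT NULL,
    metadata  jsonb,         -- semi-structured data, queryable with @> and ->>
    embedding vector(3)      -- fixed-size embedding for similarity search
);

-- JSON containment and vector similarity in a single query:
SELECT title
FROM documents
WHERE metadata @> '{"lang": "en"}'
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'::vector  -- L2 distance operator
LIMIT 5;
```

The point is not that this replaces a dedicated document or vector store at every scale, but that one system covers all three shapes of data until you have evidence you need more.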
In recent times, database technologies derived from Postgres have exploded in number and seen rapid success. In the last few months alone:
- 🤑 Databricks acquired Neon, a serverless Postgres provider.
- 🤑 Snowflake acquired Crunchy Data, a hosted Postgres provider.
- 🎉 AWS Aurora DSQL was released.
- 🎉 PlanetScale expanded beyond its MySQL foundation to offer a Postgres alternative.
Below is a non-exhaustive list of recent technologies derived from Postgres, and their offerings.
Notable systems
| Technology | Description |
|---|---|
| Neon | Serverless, fully-managed instances with autoscaling and branching. |
| Nile | Serverless, fully-managed instances for multi-tenant applications (e.g. B2B SaaS). |
| Supabase | Open-source Firebase alternative - either managed or self-hosted. |
| TigerData (TimescaleDB) | Optimized for time-series and real-time analytics - either managed or self-hosted. |
| CockroachDB | Distributed SQL with Postgres compatibility - either managed or self-hosted. |
| PlanetScale Postgres | Managed instances, emphasizing performance and scalability based on NVMe SSDs. |
| Prisma | Serverless, fully-managed instances with a TypeScript Object Relational Mapper (ORM). |
| YugabyteDB | Distributed SQL with Postgres compatibility - either managed or self-hosted. Zero-downtime upgrades. |
| Crunchy Data | Fully-compatible Postgres instances for Enterprise - either managed or self-hosted. |
| Heroku | Managed instances integrated into the Heroku platform (PaaS). |
| AWS RDS / Aurora | Managed instances integrated into the AWS platform - provisioned or serverless. |
| Google Cloud SQL / AlloyDB | Managed instances integrated into the Google Cloud platform - provisioned or serverless. |
| Azure Database | Managed instances integrated into the Azure platform - provisioned or serverless. |
Image credit: TigerData
Choosing Postgres over other relational databases doesn’t always go to plan. Uber famously switched to MySQL after facing slowness during writes, favouring the way MySQL handles index updates. Other systems have faced issues such as table bloat and transaction ID wraparound due to Postgres’s MVCC model, which avoids locks in favour of versioning. This is a good opportunity to preach the lessons of Designing Data-Intensive Applications: without understanding the underlying storage and retrieval technology these databases rely on, you are likely to face surprises in production.
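MVCC’s side effects are at least observable from within Postgres itself. As a quick sketch using the built-in statistics views, you can spot tables accumulating dead row versions (bloat candidates) before they become a production surprise:

```sql
-- Tables with the most dead row versions, from the built-in
-- pg_stat_user_tables view; high n_dead_tup relative to n_live_tup
-- suggests autovacuum is not keeping up.
SELECT relname,
       n_live_tup,
       n_dead_tup,
       last_autovacuum
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC
LIMIT 10;
```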
Recent Trends 🔨
In this section, we go over the main innovations that have emerged in recent years, which have caused the explosion in providers, each with their own unique offerings. As Postgres is open-source, it is possible to cherry-pick features and compromise on some fundamental aspects in order to unlock new behaviours.
Compute <> Storage Divide ➗
Postgres pre-dates the age of cloud and distributed systems. As such, it is monolithic (combining storage and compute on the same server) and process-oriented (rather than thread-oriented). To thrive in a cloud-native world, providers have decoupled the storage and compute layers, allowing each to scale independently. This change enables serverless systems like Neon and AWS Aurora Serverless. Using durable object storage like S3 means lightweight VMs can run the query engine while holding minimal state, allowing horizontal scalability. Different systems embrace this to different extents. For example, Aurora Serverless v2 incurs a 15-second cold start when scaling up from zero.
Besides scalability, the other by-products of standalone storage are database branching and fast backups due to copy-on-write semantics. These are game-changing features that a traditional Postgres system cannot provide.
Relaxing Isolation for Speed 🏎️
Postgres is flexible, supporting the four standard transaction isolation levels at session or transaction level (though in practice `Read Uncommitted` behaves like `Read Committed`). By constraining the isolation level to `Repeatable Read` (implemented with something like Snapshot Isolation) rather than `Serializable`, far less consensus is required before transactions can commit or return data.

In most applications, the key-set of data being read is far greater than the key-set of data being written, and reads outnumber writes by an order of magnitude. This realisation, along with constraining the isolation level to `Repeatable Read`, is core to the way AWS DSQL scales. By constraining one dimension, new technologies are unlocking scale that a traditional setup could not match.
New Data Primitives 📊️
Because Postgres is a general-purpose, extensible database, some providers impose an opinionated abstraction on top of its core. For example, TigerData introduces `hypertables` to model time-series data, which would otherwise require manual partitioning and regular pruning via extensions like `pg_partman` and `pg_cron`.
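As a sketch of what that abstraction buys you (the `metrics` table and its columns are illustrative), TimescaleDB turns a regular table into a time-partitioned hypertable with one call:

```sql
-- TimescaleDB: a regular table becomes a time-partitioned hypertable.
CREATE TABLE metrics (
    time   timestamptz NOT NULL,
    device text        NOT NULL,
    value  double precision
);

-- Chunks are created and pruned automatically;
-- no pg_partman/pg_cron plumbing required.
SELECT create_hypertable('metrics', 'time');

-- Optional: drop data older than 90 days via a retention policy.
SELECT add_retention_policy('metrics', INTERVAL '90 days');
```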
AI companies make use of vector stores (through `pgvector` and HNSW indexes) to build RAG (Retrieval-Augmented Generation) pipelines. The recent Databricks and Snowflake acquisitions show that engineers increasingly value an “all-in-one” platform rather than a standalone OLAP solution.
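A typical retrieval step in such a pipeline looks roughly like this, assuming a hypothetical `documents` table with an `embedding` vector column (names and dimensions are illustrative, `$1` is the query embedding bound as a parameter):

```sql
-- HNSW index for approximate nearest-neighbour search
-- (available in pgvector 0.5.0 and later).
CREATE INDEX ON documents
USING hnsw (embedding vector_cosine_ops);

-- Retrieve the 5 most similar chunks to a query embedding,
-- e.g. to build context for an LLM prompt.
SELECT id, title
FROM documents
ORDER BY embedding <=> $1   -- cosine distance operator
LIMIT 5;
```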
Decision Points ⚖️
This section raises a set of questions I would consider when evaluating a Postgres provider, to head off future pain points. They are not ordered by priority.
Query Patterns ❓
- What is the expected ratio of reads to writes - is a single instance sufficient? (Does the provider offer a range of instance sizes? Are there read replicas, connection poolers or sharding solutions?)
- What periods of inactivity am I expecting? (Serverless is more economically-viable for spiky workloads. In practice, when baseline traffic varies by at least 50% from its peak.)
- What levels of transaction isolation will I need? (This rules out a few distributed offerings.)
- Does the provider have global endpoints / multi-region support? (Local benchmarks are no good if your database is on the other side of the world).
Compatibility 💘
- Do we need runtime compatibility, wire-protocol compatibility, or something in between? (How many Postgres features will be supported?).
- What service limitations exist? (e.g., Limits on database object counts, concurrent connections, foreign keys, transaction size or timings).
- What extensions are available? (e.g., Cloud platforms often expose only a subset).
- Are system tables accessible, or can I easily monitor diagnostic information? (e.g., `pg_stat_*` views).
- What integrations exist with existing services? (e.g., Cloud providers offer archival or “zero-ETL” features).
Control 💪
- Does a `superuser` exist? (Some platforms do not provide access).
- How portable is my data? (Can the underlying backups and data files be exported externally?).
- How much control do we need over the timeline for adopting major versions? (Vendors often enforce a support window).
- What methods are supported for upgrades? (e.g., In-place, logical replication, zero-downtime. Providers like YugabyteDB claim to have zero-downtime upgrades. Others need DIY solutions).
- What is the pricing model? (Is billing broken down by storage, compute and I/O?).
Security & Risk 🎲
- How much trust do you place in the company in the long term? (Cloud providers may sunset services but have better longevity. Newer providers may withdraw their free tier, or be acquired and change pricing).
- What encryption and security models exist? (Is my infrastructure pooled or isolated? Can I rotate the encryption keys used?).
- Is there support for multi-tenancy or Row-Level Security (RLS)? (Nile aims to address this for B2B SaaS firms).
- What SLAs exist and is downtime compensated? (e.g., During scaling events / maintenance windows).
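Stock Postgres already supports per-tenant isolation via Row-Level Security; the question is whether a given provider exposes it cleanly. As a sketch (the `invoices` schema and the `app.tenant_id` setting are illustrative):

```sql
-- Per-tenant isolation with Row-Level Security (illustrative schema).
CREATE TABLE invoices (
    tenant_id uuid NOT NULL,
    id        bigint GENERATED ALWAYS AS IDENTITY,
    amount    numeric
);

ALTER TABLE invoices ENABLE ROW LEVEL SECURITY;

-- Each connection identifies its tenant, e.g. via
-- SET app.tenant_id = '<uuid>'; the policy then filters every query.
CREATE POLICY tenant_isolation ON invoices
    USING (tenant_id = current_setting('app.tenant_id')::uuid);
```

Providers like Nile bake this tenant model into the platform rather than leaving the policy plumbing to you.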
Starting Point 📍
Having raised the important questions, it is clear that there is a lot to think about upfront. In the absence of any other information (and to avoid being overwhelmed!), I would adopt these principles.
- Stay close to the roots: Maintain runtime compatibility with standard PostgreSQL for as long as possible. This makes local testing and feedback loops much quicker, and means faster adoption of new versions.
- Benchmark 📐: Measure against your own workloads - do not rely solely on online benchmarks.
- Avoid sprawl 🐙: Default to Postgres for new use cases before reaching for new platforms or databases.
- Know your options 🧠: Periodically evaluate which migration paths exist, their upgrade timelines, tenancy models and trends.
Once a solution is adopted at scale, migration to a new setup is always possible, but it takes significant effort (just ask Figma). There is no way to know the future, but with these principles in mind you will do better than most at adopting the right flavour of Postgres for your use case.
Summary 🧵
In this post, we revisited the advice to “just use Postgres”, refreshed for 2025 in the broader context of acquisitions and recent product launches. The advice still holds, but there is nuance in answering the question of “what next?”. The decision points above form a practical checklist for evaluating Postgres providers.
It is an exciting time to be a developer working with Postgres, as the technology continues to evolve almost 40 years after its inception at Berkeley.
In future posts, I may dive into some of the idiosyncrasies of Postgres, or into systems like AWS’ DSQL. In the meantime, there are some great creators in the space who make it far easier to keep up to date.
Get in touch 📧
As I say in my About page, I would love to hear from you. If you got to the end of this post and have anything to share, please get in touch on LinkedIn or Twitter.