Coming soon · Product architecture preview

Awide PolarDB Disaggregated Compute & Storage

Cloud-native PostgreSQL compatibility with a modern shared-storage architecture.

One write node, elastic read-only compute, one logical data foundation and a path to real-time analytics without ETL.

  • Scale compute and storage independently
  • Add read capacity without cloning the whole database
  • Keep the familiar PostgreSQL operational model
The scaling problem

Classic PostgreSQL scaling makes every standby
carry the full database

Traditional PostgreSQL keeps compute and storage tightly coupled: the primary and every standby maintain their own full local copy of the data, which makes read scaling slower to provision, more expensive at scale and sensitive to replication lag.

Before

Compute-storage integration

  • Every standby needs a full local copy of the data.
  • Large databases make standby creation and rejoin slower.
  • Replication lag can grow between primary and standby nodes.
  • Sharding adds operational complexity, distribution skew and SQL compatibility trade-offs.
After

Compute-storage separation

  • Compute is separated from persistent storage.
  • Read-only nodes use the same physical storage foundation.
  • Compute and storage can scale on different lifecycles.
  • The cluster keeps a familiar primary/replica model for PostgreSQL teams.
Why Awide PolarDB

A PostgreSQL-compatible path to cloud elasticity

Awide PolarDB is designed for teams that want PostgreSQL application compatibility, but need the infrastructure economics and elasticity of a disaggregated cloud-native database.

PostgreSQL compatibility first

Keep applications, utilities and extension workflows close to the PostgreSQL ecosystem while moving the storage model forward.

One data foundation

Reduce the need for full data copies on every compute node. RO nodes can be added faster and with less storage overhead.

Elastic read and analytics compute

Scale read-only nodes and parallel query workers around the same shared storage layer as workload demand changes.

Architecture

Disaggregated compute, shared storage, familiar PostgreSQL operations

The compute node keeps local buffers and temporary data, while persistent data and WAL live in shared storage. A single RW node accepts writes; RO nodes read from the same data layer and receive WAL metadata to understand what changed.

  • The RW node writes WAL and data to the shared storage layer.
  • RO nodes can serve reads without maintaining separate full database copies.
  • LogIndex-style WAL metadata tracks changed pages and LSNs (see the sketch below).
  • A shared filesystem and a high-throughput storage path support coordinated access from compute nodes.
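
As a rough illustration of the LogIndex idea, the sketch below assumes a structure that maps each page to the LSNs of the WAL records that touched it, so an RO node can replay only the relevant records over the shared-storage page on demand. All names here (LogIndex, PageID, record, lsns_to_replay) are illustrative, not the product's actual API.

```python
from collections import defaultdict
from dataclasses import dataclass, field

PageID = tuple[str, int]          # (relation file, block number) -- illustrative key

@dataclass
class LogIndex:
    """Illustrative LogIndex: maps each page to the LSNs of WAL records that changed it."""
    by_page: dict[PageID, list[int]] = field(default_factory=lambda: defaultdict(list))

    def record(self, page: PageID, lsn: int) -> None:
        # RW side: register "the WAL record at this LSN touched this page".
        self.by_page[page].append(lsn)

    def lsns_to_replay(self, page: PageID, page_lsn: int, upto_lsn: int) -> list[int]:
        # RO side: only records newer than the on-storage page version, up to the
        # replica's consistent LSN, need to be applied on demand.
        return [lsn for lsn in self.by_page[page] if page_lsn < lsn <= upto_lsn]

# RW node: a change to block 7 of 'orders' at LSN 1042 is indexed, not shipped as a full page.
index = LogIndex()
index.record(("orders", 7), 1042)

# RO node: the shared-storage page is at LSN 990 and metadata has arrived up to LSN 1100,
# so only the record at 1042 is replayed into the local buffer.
print(index.lsns_to_replay(("orders", 7), page_lsn=990, upto_lsn=1100))   # [1042]
```
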
Compute

Database engine nodes for writes, read scaling and parallel analytical execution.

Storage

Shared persistent storage designed for throughput, durability and independent capacity scaling.

WAL path

Metadata-first replication sends PageID/LSN information to replicas; data is fetched from shared storage.

Filesystem

A PolarFS-style layer provides coordinated file access, POSIX-like APIs and direct I/O patterns.

Read/write splitting and consistency

Scale reads without giving up correctness semantics

Read/write splitting should not mean that users unpredictably see stale data. Awide PolarDB's target architecture combines a proxy layer, LSN-aware routing and CSN-based visibility to support session and global consistency patterns.

01

Application traffic

Client connections enter through a proxy layer that can separate read and write traffic.

02

RW node

Writes go to the primary RW node, which advances WAL and transaction visibility state.

03

RO nodes

Reads are balanced across read-only nodes that are sufficiently caught up to serve the request.

04

Consistency guard

LSN/CSN tracking helps enforce "read your writes" semantics and stronger, serializability-oriented consistency levels.
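
A minimal sketch of steps 01–04, assuming the proxy tracks the commit LSN of each session's last write and routes a read to an RO node only once that node's replay LSN has caught up, falling back to the RW node otherwise. The Node and Session names are illustrative, not a real proxy API.

```python
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    replay_lsn: int        # how far this RO node has replayed (RW uses its flush LSN)

class Session:
    """Illustrative proxy-side session state for 'read your writes' routing."""
    def __init__(self, rw: Node, ros: list[Node]):
        self.rw, self.ros = rw, ros
        self.last_write_lsn = 0              # commit LSN of the session's latest write

    def route_write(self, commit_lsn: int) -> Node:
        # Step 02: all writes go to the single RW node; remember the commit LSN.
        self.last_write_lsn = commit_lsn
        return self.rw

    def route_read(self) -> Node:
        # Steps 03/04: prefer an RO node that has replayed past our last write;
        # otherwise fall back to the RW node so the session never sees stale data.
        fresh = [n for n in self.ros if n.replay_lsn >= self.last_write_lsn]
        return fresh[0] if fresh else self.rw

rw = Node("rw", replay_lsn=2_000)
session = Session(rw, [Node("ro1", 1_500), Node("ro2", 1_990)])
session.route_write(commit_lsn=1_960)
print(session.route_read().name)    # "ro2": caught up past LSN 1_960; ro1 is not
```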

Performance engineering

Built around the PostgreSQL bottlenecks that show up at scale

The architecture targets known pressure points in large PostgreSQL deployments: connection-heavy workloads, WAL commit-rate bottlenecks, dirty buffer flushing, replica freshness and large read-only scale-out.

CSN

Commit Sequence Number assigns a cluster-wide commit order and helps reduce snapshot bottlenecks under high connection counts.
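
A toy sketch of the CSN idea, assuming each committed transaction receives a commit sequence number and a snapshot is just a single CSN: a row version is visible if its transaction committed at or before the snapshot's CSN. The dictionary and function below are illustrative only.

```python
# Illustrative CSN visibility check: a snapshot is one number, so taking it does not
# require scanning the list of in-progress transactions under high connection counts.
csn_of_xid = {101: 7, 102: 9}       # committed transactions -> their commit sequence number

def visible(xid: int, snapshot_csn: int) -> bool:
    csn = csn_of_xid.get(xid)       # None means the transaction has not committed yet
    return csn is not None and csn <= snapshot_csn

snapshot_csn = 8                    # snapshot taken between the two commits above
print(visible(101, snapshot_csn))   # True:  committed at CSN 7 <= 8
print(visible(102, snapshot_csn))   # False: committed at CSN 9, after the snapshot
print(visible(103, snapshot_csn))   # False: still in progress
```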

WAL Pipeline

Dedicated worker stages can separate WAL advance, write, flush and notify phases, reducing commit-path serialization.
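
A rough sketch of the pipelining idea, assuming the commit path is split into stages (advance, write, flush, notify) handed between worker threads through queues so commits are not serialized behind one another; the stage names and dict payload are illustrative, not the engine's internals.

```python
import queue, threading

advance_q, write_q, flush_q, notify_q = (queue.Queue() for _ in range(4))
STOP = object()

def stage(inbox, outbox, work):
    # Generic pipeline stage: apply one phase of the commit path, pass the record on.
    while (item := inbox.get()) is not STOP:
        outbox.put(work(item))
    outbox.put(STOP)

stages = [
    threading.Thread(target=stage, args=(advance_q, write_q, lambda r: r | {"reserved": True})),  # advance: reserve WAL space
    threading.Thread(target=stage, args=(write_q, flush_q, lambda r: r | {"written": True})),     # write: copy into the WAL file
    threading.Thread(target=stage, args=(flush_q, notify_q, lambda r: r | {"flushed": True})),    # flush: make the record durable
]
for t in stages:
    t.start()

for lsn in (101, 102, 103):                   # three commits enter the pipeline back to back
    advance_q.put({"lsn": lsn})
advance_q.put(STOP)

while (done := notify_q.get()) is not STOP:   # notify: wake the committing backend
    print("commit durable at LSN", done["lsn"])
for t in stages:
    t.join()
```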

Parallel bgwriter

Dirty buffers can be flushed to shared storage in parallel instead of concentrating work on a single path.
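
A small sketch of the parallel-flush idea, assuming the set of dirty buffers is split across several workers that write to shared storage concurrently; flush_to_shared_storage is a stand-in, not a real interface.

```python
# Illustrative parallel background writer: dirty buffers are spread across several
# flush workers instead of funnelling through a single writer path.
from concurrent.futures import ThreadPoolExecutor

def flush_to_shared_storage(buffer_id: int) -> int:
    # Stand-in for writing one dirty buffer to the shared storage layer.
    return buffer_id

dirty_buffers = list(range(32))
workers = 4

with ThreadPoolExecutor(max_workers=workers) as pool:
    # The pool flushes buffers concurrently across the worker threads.
    flushed = list(pool.map(flush_to_shared_storage, dirty_buffers))

print(f"flushed {len(flushed)} buffers with {workers} workers")
```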

Shared Server

A built-in connection pooling model can reduce process overhead for workloads with many concurrent active sessions.
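
A toy sketch of the pooling idea, assuming many client sessions borrow a backend from a small shared set only for the duration of a query; SharedServerPool is an illustrative name, not the feature's actual interface.

```python
# Illustrative built-in pooling: many client sessions share a small set of backend
# workers, so idle sessions do not pin a dedicated server process each.
from queue import Queue

class SharedServerPool:
    def __init__(self, backends: int):
        self.free = Queue()
        for i in range(backends):
            self.free.put(f"backend-{i}")

    def run(self, session: str, sql: str) -> str:
        backend = self.free.get()          # borrow a backend only while a query is active
        try:
            return f"{session} ran {sql!r} on {backend}"
        finally:
            self.free.put(backend)         # return it as soon as the query finishes

pool = SharedServerPool(backends=2)        # two backends ...
for s in range(5):                         # ... serving five sessions
    print(pool.run(f"session-{s}", "SELECT 1"))
```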

HTAP

OLTP and operational analytics
on the same data

Awide PolarDB targets hybrid transactional and analytical processing with parallel query execution across compute nodes. The goal is fresh analytics without an extra ETL copy, while preserving the operational PostgreSQL data model.

No ETL copy

Analytical queries work against the same shared data foundation as OLTP.

MPP execution

Parallel workers can distribute scan and aggregation work across compute nodes.
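
A minimal scatter-gather sketch of that idea, assuming a scan is split into per-worker ranges, each worker computes a partial aggregate over the shared data, and the partial results are merged; the table and worker setup are purely illustrative.

```python
# Illustrative MPP-style aggregation: split the scan, aggregate in parallel, merge.
from concurrent.futures import ThreadPoolExecutor

rows = [{"amount": i % 100} for i in range(10_000)]      # stand-in for a shared table

def partial_sum(chunk: list[dict]) -> int:
    # Each worker scans only its slice of the shared data.
    return sum(r["amount"] for r in chunk)

workers = 4
chunks = [rows[i::workers] for i in range(workers)]      # one scan range per worker

with ThreadPoolExecutor(max_workers=workers) as pool:
    partials = list(pool.map(partial_sum, chunks))

print("total =", sum(partials))                          # merge step combines the partials
```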

Elastic compute

Scale query workers vertically and add compute nodes horizontally.

Fresh data

Run lightweight analytics on operational data closer to the write path.

High Availability & Disaster Recovery

Shared-storage HA needs orchestration that understands the architecture

Classic HA tooling assumes independent local storage on every node. Awide PolarDB is designed around shared-storage-aware orchestration, role management and WAL forwarding patterns for disaster recovery.

  • Role-aware cluster management for primary, replica, standby and WAL relay nodes.
  • Fast switchover and failover flows that account for shared storage state.
  • DataMax-style WAL relay nodes for DR designs targeting RPO = 0.
  • Promotion only after WAL synchronization is complete (see the sketch below).
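
A rough sketch of that promotion guard, assuming the control plane compares each candidate's replayed LSN with the WAL the relay node has durably received and promotes only a fully caught-up replica; Candidate and choose_promotion_target are illustrative names, not the orchestration tooling's API.

```python
# Illustrative failover guard: promote a replica only once it has replayed all WAL
# that the relay node has durably received, so no acknowledged commit is lost.
from dataclasses import dataclass

@dataclass
class Candidate:
    name: str
    replayed_lsn: int

def choose_promotion_target(candidates: list[Candidate], relay_received_lsn: int) -> Candidate | None:
    # Only candidates fully caught up with the WAL relay are eligible for promotion.
    eligible = [c for c in candidates if c.replayed_lsn >= relay_received_lsn]
    return max(eligible, key=lambda c: c.replayed_lsn) if eligible else None

candidates = [Candidate("replica-1", 4_980), Candidate("replica-2", 5_000)]
target = choose_promotion_target(candidates, relay_received_lsn=5_000)
print(target.name if target else "wait: keep streaming WAL before promoting")   # replica-2
```
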
Patroni-ready

Shared-storage deployments need an HA control plane adapted for non-local storage semantics.

RPO target

WAL relay patterns can synchronously accept WAL locally and forward it asynchronously to remote sites.

Operational fit

Keep the primary-replica operational model familiar to PostgreSQL teams while modernizing the storage layer.

Cloud control

Deploy within controlled networks, regions and governance boundaries.

Use cases

For PostgreSQL workloads that need more than bigger servers

Awide PolarDB is suited for environments where PostgreSQL compatibility matters, but classic primary-replica or sharded architectures create cost, latency or operational friction.

Read-heavy SaaS platforms

Add read-only compute without cloning terabytes into every replica.

High-concurrency applications

Use CSN and shared-server patterns to reduce connection and snapshot pressure.

Operational analytics

Run lightweight analytical queries on fresh data without a separate ETL pipeline.

Regulated cloud environments

Keep infrastructure ownership, network isolation and regional placement under customer governance.

PostgreSQL modernization

Move beyond local-storage ceilings while preserving a familiar PostgreSQL cluster experience.

Cost-aware scale-out

Grow compute when CPU is the bottleneck; grow storage when capacity or I/O is the bottleneck.

FAQ

Questions product teams
will ask first

This page is written for the "coming soon" phase. Final packaging, supported clouds, storage backends and commercial terms will be announced when the product is ready for launch.

Is Awide PolarDB a replacement for Awide PostgreSQL DBMS?

No. Awide PostgreSQL DBMS is the hardened PostgreSQL foundation. Awide PolarDB is a separate architecture for workloads that need disaggregated compute, shared storage, read scale-out and HTAP capabilities.

Will existing PostgreSQL applications work?

The product is positioned for PostgreSQL-compatible workloads. Application behavior, extension support and migration guidance should be validated per workload during early access.

Why not just shard PostgreSQL?

Sharding can scale writes, but it adds distribution-key decisions, cross-shard query complexity, distributed transactions and resharding workflows. Awide PolarDB keeps one logical data foundation with a single write node and elastic read/analytics compute.

Does the architecture require RDMA?

High-speed, low-latency networking is important for shared-storage systems. Final deployment profiles should specify supported storage backends and network modes.

When will it be available?

Awide PolarDB is currently marked as coming soon. Contact Awide to request an architecture briefing or early access discussion.

Ready to evaluate disaggregated PostgreSQL?

Talk to Awide about early access, target cloud environments, storage topology, consistency requirements, HA/DR design and migration fit for your PostgreSQL workloads.