Cloud-native PostgreSQL compatibility with a modern shared-storage architecture.
One write node, elastic read-only compute, one logical data foundation, and a path to real-time analytics without ETL.
Traditional PostgreSQL keeps compute and storage tightly coupled: the primary and every standby each maintain a full local copy of the data, which makes read scaling slower to provision, more expensive at scale, and sensitive to replication lag.
Awide PolarDB is designed for teams that want PostgreSQL application compatibility, but need the infrastructure economics and elasticity of a disaggregated cloud-native database.
Keep applications, utilities and extension workflows close to the PostgreSQL ecosystem while moving the storage model forward.
Reduce the need for full data copies on every compute node. RO nodes can be added faster and with less storage overhead.
Scale read-only nodes and parallel query workers around the same shared storage layer as workload demand changes.
The compute node keeps local buffers and temporary data, while persistent data and WAL live in shared storage. A single RW node accepts writes; RO nodes read from the same data layer and receive WAL metadata to understand what changed.
Database engine nodes for writes, read scaling and parallel analytical execution.
Shared persistent storage designed for throughput, durability and independent capacity scaling.
Metadata-first replication sends PageID/LSN information to replicas; data is fetched from shared storage.
A PolarFS-style layer provides coordinated file access, POSIX-like APIs and direct I/O patterns.
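The metadata-first flow described above can be sketched as follows. This is an illustrative model, not a product API: the class names, the page-store interface, and the in-memory structures are assumptions made for the sketch. The key idea it shows is that the replication stream carries only PageID/LSN records; a read-only node invalidates its local buffer and fetches the actual page from shared storage on demand.

```python
from dataclasses import dataclass

@dataclass
class WalMeta:
    """Metadata record shipped to read-only nodes: which page changed, at what LSN."""
    page_id: int
    lsn: int

class SharedStorage:
    """Stands in for the shared persistent storage layer (illustrative)."""
    def __init__(self):
        self.pages = {}  # page_id -> (content, lsn)

    def write_page(self, page_id, content, lsn):
        self.pages[page_id] = (content, lsn)

    def read_page(self, page_id):
        return self.pages[page_id]

class ReadOnlyNode:
    """RO node keeps a local buffer cache; WAL metadata only invalidates it."""
    def __init__(self, storage):
        self.storage = storage
        self.buffer = {}  # page_id -> (content, lsn)
        self.applied_lsn = 0

    def apply_meta(self, meta: WalMeta):
        # No page payload travels on the replication stream: drop the stale
        # buffer entry and remember how far replay has progressed.
        self.buffer.pop(meta.page_id, None)
        self.applied_lsn = max(self.applied_lsn, meta.lsn)

    def get_page(self, page_id):
        # On a cache miss, fetch the current page version from shared storage.
        if page_id not in self.buffer:
            self.buffer[page_id] = self.storage.read_page(page_id)
        return self.buffer[page_id]
```

In this model, adding another RO node costs only a buffer cache and a replication slot for metadata, not a full data copy, which is why provisioning is faster and cheaper than a classic standby.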
Read/write splitting should not mean that users unpredictably see stale data. Awide PolarDB's target architecture combines a proxy layer, LSN-aware routing and CSN-based visibility to support session and global consistency patterns.
Client connections enter through a proxy layer that can separate read and write traffic.
Writes go to the primary RW node, which advances WAL and transaction visibility state.
Reads are balanced across read-only nodes when they are sufficiently current for the request.
LSN/CSN tracking helps enforce "read your writes" semantics and stronger, serializability-oriented consistency.
The architecture targets known pressure points in large PostgreSQL deployments: connection-heavy workloads, WAL commit-rate bottlenecks, dirty buffer flushing, replica freshness and large read-only scale-out.
A Commit Sequence Number (CSN) assigns a cluster-wide commit order and helps reduce snapshot-acquisition bottlenecks under high connection counts.
Dedicated worker stages can separate WAL advance, write, flush and notify phases, reducing commit-path serialization.
Dirty buffers can be flushed to shared storage in parallel instead of concentrating work on a single path.
A built-in connection pooling model can reduce process overhead for workloads with many concurrent active sessions.
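The CSN visibility check mentioned above can be illustrated with a minimal sketch. The clock and the `visible` predicate are assumptions made for the example, not the engine's internals; the idea they capture is that a snapshot collapses to a single number, so a backend no longer has to copy a list of in-progress transaction IDs under a shared lock, and a tuple version is visible exactly when its commit CSN is at or before the snapshot CSN.

```python
class CsnClock:
    """Cluster-wide commit sequence; a snapshot is just one number."""
    def __init__(self):
        self.csn = 0

    def commit(self):
        # Each commit takes the next cluster-wide sequence number.
        self.csn += 1
        return self.csn

    def snapshot(self):
        # Taking a snapshot reads one counter instead of scanning shared
        # transaction state, which scales better with many connections.
        return self.csn

def visible(tuple_csn, snapshot_csn):
    # None means the writing transaction has not committed yet.
    return tuple_csn is not None and tuple_csn <= snapshot_csn
```

A transaction that commits after a snapshot was taken gets a higher CSN and stays invisible to that snapshot, which is the behavior the single comparison enforces.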
Awide PolarDB targets hybrid transactional and analytical processing with parallel query execution across compute nodes. The goal is fresh analytics without an extra ETL copy, while preserving the operational PostgreSQL data model.
Analytical queries work against the same shared data foundation as OLTP.
Parallel workers can distribute scan and aggregation work across compute nodes.
Scale query workers up vertically and add compute nodes horizontally.
Run lightweight analytics on operational data closer to the write path.
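The scan-and-aggregate distribution described above follows a standard partition-then-merge pattern, sketched here with a local thread pool standing in for workers on separate compute nodes. The function name and the list standing in for shared pages are illustrative assumptions; the structure (split the scan into ranges, aggregate each partial, merge the partials) is the part that carries over.

```python
from concurrent.futures import ThreadPoolExecutor

def parallel_sum(shared_table, num_workers=4):
    """Split a scan into ranges, aggregate each range in a worker, merge partials.

    shared_table stands in for data every compute node can read from
    shared storage; no worker needs its own copy of the whole table.
    """
    n = len(shared_table)
    step = (n + num_workers - 1) // num_workers  # ceiling division
    chunks = [shared_table[i:i + step] for i in range(0, n, step)]
    with ThreadPoolExecutor(max_workers=num_workers) as pool:
        partials = list(pool.map(sum, chunks))  # per-worker partial aggregates
    return sum(partials)  # final merge step
```

Because every worker reads the same shared data layer, scaling analytics means adding workers or nodes, not re-copying or re-extracting the operational data.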
Classic HA tooling assumes independent local storage on every node. Awide PolarDB is designed around shared-storage-aware orchestration, role management and WAL forwarding patterns for disaster recovery.
Shared-storage deployments need an HA control plane adapted for non-local storage semantics.
WAL relay patterns can synchronously accept WAL locally and forward it asynchronously to remote sites.
Keep the primary-replica operational model familiar to PostgreSQL teams while modernizing the storage layer.
Deploy within controlled networks, regions and governance boundaries.
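The WAL relay pattern (synchronous local acceptance, asynchronous remote forwarding) can be sketched with a queue and a background worker. The class and its callback are hypothetical, not a product interface; the sketch shows why the commit path only waits for the local durable write while the disaster-recovery site is fed in the background.

```python
import queue
import threading

class WalRelay:
    """Acknowledges WAL after a local durable write; remote shipping is async."""
    def __init__(self, remote_apply):
        self.local_wal = []              # stands in for durable local WAL
        self.outbox = queue.Queue()      # records awaiting remote shipping
        self.remote_apply = remote_apply # e.g. send to the DR site
        self.worker = threading.Thread(target=self._forward, daemon=True)
        self.worker.start()

    def accept(self, record):
        self.local_wal.append(record)    # synchronous: commit waits for this
        self.outbox.put(record)          # asynchronous: queued for forwarding
        return len(self.local_wal)       # ack position returned to the caller

    def _forward(self):
        while True:
            record = self.outbox.get()
            if record is None:           # shutdown sentinel
                break
            self.remote_apply(record)
            self.outbox.task_done()

    def drain(self):
        # Block until every queued record has reached the remote site.
        self.outbox.join()
```

The trade-off is explicit in the structure: commit latency is bounded by the local write, while the remote site lags by whatever is still in the outbox, which is the usual asynchronous-DR recovery-point consideration.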
Awide PolarDB is suited for environments where PostgreSQL compatibility matters, but classic primary-replica or sharded architectures create cost, latency or operational friction.
Add read-only compute without cloning terabytes into every replica.
Use CSN and shared-server patterns to reduce connection and snapshot pressure.
Run lightweight analytical queries on fresh data without a separate ETL pipeline.
Keep infrastructure ownership, network isolation and regional placement under customer governance.
Move beyond local-storage ceilings while preserving a familiar PostgreSQL cluster experience.
Grow compute when CPU is the bottleneck; grow storage when capacity or I/O is the bottleneck.
This page is written for the "coming soon" phase. Final packaging, supported clouds, storage backends and commercial terms can be added when the product is ready for launch.
No. Awide PostgreSQL DBMS is the hardened PostgreSQL foundation. Awide PolarDB is a separate architecture for workloads that need disaggregated compute, shared storage, read scale-out and HTAP capabilities.
The product is positioned for PostgreSQL-compatible workloads. Application behavior, extension support and migration guidance should be validated per workload during early access.
Sharding can scale writes, but it adds distribution-key decisions, cross-shard query complexity, distributed transactions and resharding workflows. Awide PolarDB keeps one logical data foundation with a single write node and elastic read/analytics compute.
High-speed, low-latency networking is important for shared-storage systems. Final deployment profiles should specify supported storage backends and network modes.
Awide PolarDB is currently marked as coming soon. Contact Awide to request an architecture briefing or early access discussion.
Talk to Awide about early access, target cloud environments, storage topology, consistency requirements, HA/DR design and migration fit for your PostgreSQL workloads.