Configure PGFS storage for Tiered Tables v1.2
To enable Tiered Tables, you must configure a PGFS storage location on your PGD cluster. This defines where PGD will offload "cold" data (as Iceberg tables) to object storage.
You can use an S3-compatible object store (AWS S3, Google Cloud Storage, or compatible private storage).
Why configure PGFS for Tiered Tables?
By configuring this storage location:
- You give PGD a destination for analytics offload of partitions managed by Tiered Tables.
- You unlock the ability to transparently query cold data using Lakehouse clusters or the PGD parent table.
- You complete the first required step before configuring the PGD node group for offload and enabling AutoPartition.
 
For background, see: Tiered Tables in Hybrid Manager
Goals
After completing this How-To, you will be able to:
- Create a valid PGFS storage location pointing to your object storage bucket.
- Verify that your PGD cluster can use this location for offloading.
- Proceed to configuring PGD node group offload settings.
 
Prerequisites
Before you begin:
- You have a PGD cluster provisioned in Hybrid Manager.
- You have an object storage bucket created.
- You have access credentials for the bucket (if private).
- You have connected to your PGD node using `psql` or another SQL client.
Steps
Step 1: Connect to your PGD node
Connect using:
- psql
- DBeaver
- pgAdmin
- Any SQL client that can connect to your PGD node.
 
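Once connected, you can optionally confirm that the pgfs extension is available before moving on. This check assumes pgfs ships as a regular PostgreSQL extension on your PGD nodes; if the query returns no rows, check how pgfs is provisioned in your Hybrid Manager deployment.

```sql
-- Optional sanity check: is the pgfs extension installed on this node?
-- Assumes pgfs ships as a standard PostgreSQL extension.
SELECT extname, extversion
FROM pg_extension
WHERE extname = 'pgfs';
```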
Step 2: Run pgfs.create_storage_location()
Public S3 bucket example:

```sql
SELECT pgfs.create_storage_location(
    name        => 'hm_tiered_analytics_store',
    url         => 's3://your-public-bucket/path/to/tiered-data',
    options     => '{"aws_skip_signature": "true"}',
    credentials => '{}'
);
```

Private S3 bucket example:

```sql
SELECT pgfs.create_storage_location(
    name        => 'hm_tiered_analytics_store',
    url         => 's3://your-private-bucket/path/to/tiered-data',
    options     => '{"region": "us-west-2"}',
    credentials => '{"access_key_id": "YOUR_AWS_ACCESS_KEY_ID", "secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY"}'
);
```

Guidance:
- Replace `url` with your target bucket path.
- Adjust `options` and `credentials` for your cloud provider as needed.
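If your bucket lives in an S3-compatible store other than AWS S3 (for example, a self-hosted object store or Google Cloud Storage in interoperability mode), you typically also need to point the storage location at the provider's endpoint. The sketch below is hypothetical: the `endpoint` option key and the host shown are assumptions, so confirm the exact option names in the pgfs reference for your version before using it.

```sql
-- Hypothetical S3-compatible example. The "endpoint" option key and the host
-- are assumptions; verify the supported option names for your pgfs version.
SELECT pgfs.create_storage_location(
    name        => 'hm_tiered_analytics_store',
    url         => 's3://your-compatible-bucket/path/to/tiered-data',
    options     => '{"endpoint": "https://objectstore.example.internal"}',
    credentials => '{"access_key_id": "YOUR_ACCESS_KEY_ID", "secret_access_key": "YOUR_SECRET_ACCESS_KEY"}'
);
```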
Step 3: Validate that the PGFS storage location was created
Run:

```sql
SELECT * FROM pgfs.storage_location;
```

You should see your newly created PGFS location listed.
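To confirm a specific entry, you can filter on the name you passed in Step 2. The column names below are assumed to mirror the `pgfs.create_storage_location()` parameters; if they differ in your version, fall back to the unfiltered query above.

```sql
-- Look up just the location created in Step 2.
-- Column names are assumed to mirror the create_storage_location() parameters.
SELECT name, url
FROM pgfs.storage_location
WHERE name = 'hm_tiered_analytics_store';
```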
Notes
- You can create multiple PGFS storage locations and assign them to different PGD node groups.
- If multiple node groups offload to the same bucket, use distinct path prefixes per node group to avoid data overlap, as sketched below.
 
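A minimal sketch of the distinct-prefix pattern, assuming two node groups; the location names, bucket, and prefixes are illustrative placeholders. Each location would later be assigned to its own node group when you configure offload.

```sql
-- Illustrative only: two locations in the same bucket, separated by
-- per-node-group path prefixes. Names, bucket, and prefixes are placeholders.
SELECT pgfs.create_storage_location(
    name        => 'tiered_store_group_a',
    url         => 's3://your-bucket/tiered/group-a',
    options     => '{"region": "us-west-2"}',
    credentials => '{"access_key_id": "YOUR_AWS_ACCESS_KEY_ID", "secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY"}'
);

SELECT pgfs.create_storage_location(
    name        => 'tiered_store_group_b',
    url         => 's3://your-bucket/tiered/group-b',
    options     => '{"region": "us-west-2"}',
    credentials => '{"access_key_id": "YOUR_AWS_ACCESS_KEY_ID", "secret_access_key": "YOUR_AWS_SECRET_ACCESS_KEY"}'
);
```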
Next steps
Now that you have configured your PGFS storage location:
- Configure PGD node group offload settings to target this location: Configure PGD node group for analytics offload
- Configure BDR AutoPartition to manage partitioning and trigger offload: Configure BDR AutoPartition with analytics offload
- Later, monitor offload status and storage savings: Monitor Tiered Tables status and storage savings
- Query Tiered Table data from PGD or Lakehouse: Query Tiered Tables from PGD and Lakehouse
Related concepts
For an architecture view of how PGFS fits into Tiered Tables, see: Tiered Tables in Hybrid Manager