Thursday, June 19, 2025

POSTGRESQL PAGES AND LINKS


(postgresql dba scripts)


## POSTGRES DB SCRIPTS

https://dbaclass.com/postgres-db-scripts/


## pg_dba_scripts - PostgreSQL DBA scripts


https://github.com/Azmodey/pg_dba_scripts

## PostgreSQL – Architecture


https://awsprocert.com/databases/postgresql-dba/


## Daily used SQL query statements for PostgreSQL DBA


https://pankajconnect.medium.com/daily-used-sql-query-statements-for-postgresql-dba-f697b63968a8

## POSTGRESQL HIGH ALERTS AND CHECKS

✅ 1. Backend Connection Exhaustion (PostgreSQL max_connections hit)

  • If PgBouncer is in session or transaction mode, it reuses backend connections.

  • But if the PostgreSQL backend has no free max_connections slots, PgBouncer cannot open new server connections for its clients, and they stall.

👉 You must check PostgreSQL's max_connections:

SHOW max_connections;
SELECT count(*) FROM pg_stat_activity;

If count(*) is at or near max_connections, new backends are rejected and the backpressure shows up in PgBouncer as waiting clients.
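
A small sketch that reports both numbers in one query (current_setting() can be read by any role):

SELECT count(*)                                          AS current_backends,
       current_setting('max_connections')::int           AS max_connections,
       round(100.0 * count(*) /
             current_setting('max_connections')::int, 1) AS pct_used
FROM pg_stat_activity;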


✅ 2. Client Storm or Stuck Sessions

  • A few client apps (or one bad job) may open 80+ simultaneous connections that then get stuck.

  • Once PgBouncer's default_pool_size is fully occupied, no new clients can proceed.

Run:

SHOW POOLS;       -- in PgBouncer shell

Look for:

  • "in_use" = 80, and "waiting" clients

  • "cl_active" or "sv_active" = saturated pool


✅ 3. Backend Query or Transaction Stuck (Locks or Long Queries)

If queries behind those 80 connections are:

  • Stuck in lock waits

  • Doing full table scans

  • Or idle in transaction (not releasing pool)

PgBouncer cannot release those server connections, so waiting clients keep cycling in the queue.

Run in PostgreSQL:

SELECT pid, state, wait_event_type, wait_event, query, now() - query_start AS runtime
FROM pg_stat_activity
WHERE state <> 'idle'
ORDER BY runtime DESC;
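
If that shows sessions stuck in lock waits, a follow-up sketch pairs each blocked backend with the session blocking it (pg_blocking_pids() needs PostgreSQL 9.6 or later):

SELECT blocked.pid                  AS blocked_pid,
       blocked.query                AS blocked_query,
       blocking.pid                 AS blocking_pid,
       blocking.query               AS blocking_query,
       now() - blocked.query_start  AS blocked_for
FROM pg_stat_activity AS blocked
JOIN pg_stat_activity AS blocking
  ON blocking.pid = ANY (pg_blocking_pids(blocked.pid))
ORDER BY blocked_for DESC;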

✅ 4. Health Checks / Monitoring Apps Flooding PgBouncer

Sometimes a monitoring tool (like Zabbix or a REST check) hits PgBouncer repeatedly.

Run:

SELECT application_name, count(*) FROM pg_stat_activity GROUP BY 1 ORDER BY 2 DESC;

You may find a single app is creating 80 connections on its own.
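
Keep in mind that connections arriving through PgBouncer show PgBouncer's own address in pg_stat_activity, so application_name and usename are the telling columns there; if you can reach the PgBouncer console, the real client hosts are visible instead:

SHOW CLIENTS;     -- in the PgBouncer console: the addr/port columns show the actual client hosts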


✅ 5. PgBouncer Timeouts or Misconfigurations

Check if PgBouncer is killing idle sessions too aggressively, or not recycling sessions:

In pgbouncer.ini:

max_client_conn = 1000
default_pool_size = 100
reserve_pool_size = 20
reserve_pool_timeout = 2
server_idle_timeout = 300
client_idle_timeout = 600

Also verify:

pool_mode = transaction   # usually better than session, unless clients rely on session state (SET, prepared statements)

In session mode, server connections are sticky and are not released until the client disconnects.
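
If the console is reachable, you can also confirm the values PgBouncer is actually running with (rather than what you believe is in pgbouncer.ini):

SHOW CONFIG;      -- in the PgBouncer console: lists every parameter with its current value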


✅ Recommended Investigation Steps

| Step | What to Check | Command |
| --- | --- | --- |
| 🔎 PostgreSQL backend limit | Is it hitting max_connections? | SHOW max_connections; |
| 🔎 Active PgBouncer pools | Who is using connections | SHOW POOLS; |
| 🔎 Stuck backend queries | Any locks or long-running queries | Query pg_stat_activity |
| 🔎 Session ownership | Top users/applications | SELECT usename, application_name... |
| 🔎 PgBouncer config | pool_mode, timeouts, pool size | cat /etc/pgbouncer/pgbouncer.ini |

🛠️ Permanent Solution Suggestions

✔ Tune PostgreSQL:

  • max_connections = 500+ (see the sketch after this list)

  • Monitor with pg_stat_activity
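
A minimal sketch of raising the limit; note that max_connections is not reloadable, so a PostgreSQL restart is required, and each extra backend costs memory, so size it against work_mem and available RAM:

ALTER SYSTEM SET max_connections = 500;
-- restart PostgreSQL for the change to take effect, then verify:
SHOW max_connections;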

✔ Tune PgBouncer:

pool_mode = transaction
default_pool_size = 100
max_client_conn = 1000
reserve_pool_size = 20

✔ Enable Logging in PgBouncer:

log_connections = 1
log_disconnections = 1
log_pooler_errors = 1

Check logs in /var/log/pgbouncer.log or similar.


🔧 If You Can't Access PgBouncer Server

From the PostgreSQL side, you can still run:

SELECT client_addr, application_name, usename, backend_start
FROM pg_stat_activity
WHERE backend_start > now() - interval '30 minutes';

This helps you identify which clients connected through PgBouncer, which applications they belong to, and when they connected.
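
A slightly extended sketch of the same idea, grouping recent connections so the top offenders stand out (remember client_addr will be the PgBouncer host when traffic is routed through it):

SELECT client_addr, usename, application_name, count(*) AS conns
FROM pg_stat_activity
WHERE backend_start > now() - interval '30 minutes'
GROUP BY 1, 2, 3
ORDER BY conns DESC;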