Friday, June 04, 2010

PostgreSQL tuning - quick tips

The configuration file for PostgreSQL is located at <postgresql_install_dir>/data/postgresql.conf.
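
If your installation uses a different layout (many packaged builds do), you can also ask a running server for the path; this sketch just uses the standard SHOW command from psql:

SHOW config_file;
SHOW data_directory;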

You can adjust the following settings in this file to get better performance out of PostgreSQL.

# listen on all IP addresses
listen_addresses = '*'

# should not exceed 1000; requires roughly 400 bytes of shared memory per connection slot
max_connections = 500
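
To see how close you actually get to that limit, you can count the current backends; a quick check using the built-in pg_stat_activity view:

SELECT count(*) FROM pg_stat_activity;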

# shared memory used for caching data pages and handling active operations
# should be about 1/4 of the available memory

shared_buffers = 1024MB
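
As a worked example, the 1024MB above corresponds to the 1/4 rule on a machine with roughly 4 GB of RAM. Changing shared_buffers requires a server restart; you can confirm the value the running server is using with:

SHOW shared_buffers;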

# memory dedicated to each sort/hash operation; used mainly for sorting
# a common rule of thumb is somewhere between available_memory/(2*max_connections) and available_memory/max_connections
# another way of sizing it is with EXPLAIN ANALYZE: if the plan shows "Sort Method: external merge  Disk: 7532kB",
# then a work_mem of 8MB can do wonders (see the sketch below)

work_mem = 1MB
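
A minimal sketch of the EXPLAIN ANALYZE approach; the orders table and its columns are placeholders for whatever query is sorting on disk:

EXPLAIN ANALYZE SELECT * FROM orders ORDER BY order_date;
-- look for a line like: Sort Method: external merge  Disk: 7532kB
SET work_mem = '8MB';   -- per-session override, handy for testing
EXPLAIN ANALYZE SELECT * FROM orders ORDER BY order_date;
-- the sort should now report: Sort Method: quicksort  Memory: ...kB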

# same idea as work_mem, but for VACUUM, ALTER TABLE, CREATE INDEX and other maintenance/DDL operations
# should be around 256MB

maintenance_work_mem = 256MB
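
Because maintenance operations are rare, it is often easier to raise this just for the session that needs it; a sketch with placeholder object names:

SET maintenance_work_mem = '512MB';
CREATE INDEX orders_customer_idx ON orders (customer_id);
VACUUM ANALYZE orders;
RESET maintenance_work_mem;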

# shared memory used to buffer write-ahead log data before it is flushed to disk
# (not the size of the WAL files themselves)
# the default is tiny (a handful of 8 kB WAL pages); 1 MB is enough for large systems,
# and busy SMP machines often do better with 8 MB

wal_buffers = 1MB

# the method PostgreSQL uses to force its write-ahead log out to disk at every transaction commit
# the default is platform-dependent (typically fsync or fdatasync)
# open_sync is sometimes faster, but it is buggy on many platforms
# benchmark with a very heavy write load before switching

wal_sync_method = fsync
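
On PostgreSQL 8.4 and later you can check the current method and which methods your platform actually supports before benchmarking; this uses the built-in pg_settings view:

SELECT name, setting, enumvals FROM pg_settings WHERE name = 'wal_sync_method';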

# planner's estimate of how much memory is available for disk caching by the OS and within the database itself
# recommended to be about 1/2 of available memory; on Unix it can be set to the free + cached values reported by "free"

effective_cache_size = 512MB
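
For example, on a hypothetical machine where "free -m" reports roughly 350 MB free and 2700 MB cached, an effective_cache_size of about 3GB is a reasonable starting point.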

# the default of 3 segments (3 * 16 MB = 48 MB) forces frequent checkpoints on modern systems, which is I/O intensive
# setting it to 32 (a checkpoint roughly every 512 MB of WAL) can be effective in reducing disk I/O

checkpoint_segments = 32

# a checkpoint is forced at least every 5 minutes; this can be increased on write-heavy systems
checkpoint_timeout = 5min

# should be increased if you want ANALYZE to collect more detailed statistics, which helps PostgreSQL create better query plans
default_statistics_target = 100
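
If only a few columns need better statistics, the target can also be raised per column instead of globally; a sketch with placeholder table and column names (re-run ANALYZE afterwards so the new target takes effect):

ALTER TABLE orders ALTER COLUMN customer_id SET STATISTICS 500;
ANALYZE orders;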

# synchronous_commit, introduced in PostgreSQL 8.3, can be turned off to trade a small amount of data loss
# (in case of a crash) for a large boost in the number of updates per second
synchronous_commit = on
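
Rather than changing it globally, synchronous_commit can be turned off just for sessions doing bulk, non-critical writes; a sketch:

SET synchronous_commit = off;
-- bulk inserts/updates here: a crash may lose the last few commits, but cannot corrupt the database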

# if, after tweaking every other variable, your query execution plans are still not acceptable,
# you can bring random_page_cost down to 3.0 or 2.0
# a lower value encourages the query optimizer to use index scans, which involve random access (see below)

random_page_cost = 4.0
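
You can preview the effect without touching the config file by lowering the value for a single session and comparing plans; the query below is a placeholder:

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
SET random_page_cost = 2.0;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;
-- the second plan is now more likely to choose an index scan over a sequential scan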
