a9f4fe0639
In the past I've seen the console and puppetdb databases grow much larger than expected, and my hypothesis is that the default 20% autovacuum scale factor is too high for our workload. Lowering it to 1% may prove too aggressive and could cost some performance, but it should reduce disk usage, and the value can always be tuned upward if the performance hit is too large; by contrast, it is much more difficult to shrink a database after it has grown too large. In short, I'm erring on the side of using less disk space (and, hopefully, fewer outages from running out of it), and we can open a conversation about performance later if one needs to be had.
30 lines
928 B
Puppet
class profile::pe_postgresql_management (
  $autovacuum_scale_factor   = '.01',
  $manage_postgresql_service = true,
) {

  $postgresql_service_resource_name = 'postgresqld'
  $postgresql_service_name          = 'pe-postgresql'

  $notify_postgresql_service = $manage_postgresql_service ? {
    true    => Service[$postgresql_service_resource_name],
    default => undef,
  }

  if $manage_postgresql_service {
    service { $postgresql_service_resource_name :
      ensure => running,
      name   => $postgresql_service_name,
      enable => true,
    }
  }

  # http://www.postgresql.org/docs/9.4/static/runtime-config-autovacuum.html
  postgresql_conf { ['autovacuum_vacuum_scale_factor', 'autovacuum_analyze_scale_factor'] :
    ensure => present,
    target => '/opt/puppetlabs/server/data/postgresql/9.4/data/postgresql.conf',
    value  => $autovacuum_scale_factor,
    notify => $notify_postgresql_service,
  }

}
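To see why the scale factor matters for disk usage, recall the trigger condition from the PostgreSQL autovacuum docs linked above: a table is vacuumed once its dead tuples exceed `autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples`. The sketch below works that arithmetic through for a hypothetical 10-million-row table (the row count is an illustrative assumption, not a measurement from our databases):

```python
# Approximate PostgreSQL autovacuum trigger condition:
#   dead_tuples > autovacuum_vacuum_threshold
#                 + autovacuum_vacuum_scale_factor * reltuples
# Defaults: threshold = 50 tuples, scale_factor = 0.2 (20%).

def dead_tuples_before_vacuum(reltuples, scale_factor, threshold=50):
    """Dead tuples that must accumulate before autovacuum fires."""
    return threshold + scale_factor * reltuples

rows = 10_000_000  # hypothetical large PuppetDB table

default_limit = dead_tuples_before_vacuum(rows, 0.2)   # stock PostgreSQL
tuned_limit   = dead_tuples_before_vacuum(rows, 0.01)  # value this profile sets

print(f"default 20%: {default_limit:,.0f} dead tuples before vacuum")
print(f"tuned    1%: {tuned_limit:,.0f} dead tuples before vacuum")
```

At the default, roughly 2 million dead tuples can pile up in that table before a vacuum runs; at 1%, vacuum fires after about 100 thousand, keeping bloat (and therefore on-disk size) far smaller at the cost of more frequent vacuum activity.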