
How to pass in password to pg_dump?

I'm trying to create a cronjob to back up my database every night before something catastrophic happens. It looks like this command should meet my needs:

0 3 * * * pg_dump dbname | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz

Except after running that, it expects me to type in a password. I can't do that if I run it from cron. How can I pass one in automatically?

Possibly helpful post I wrote on automating pg_restore: medium.com/@trinity/…
Answer using a connection string here: stackoverflow.com/a/29101292/1579667

Community

Create a .pgpass file in the home directory of the account that pg_dump will run as.

The format is:

hostname:port:database:username:password

Then, set the file's mode to 0600. Otherwise, it will be ignored.

chmod 600 ~/.pgpass

See the PostgreSQL documentation on libpq-pgpass for more details.
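As a concrete sketch (hostname, database, user, and password below are placeholders — substitute your own), the file can be created and locked down like this. Note that inside a crontab entry, `%` is special and must be escaped as `\%`:

```shell
# Create ~/.pgpass with one connection per line (placeholder values).
echo 'localhost:5432:dbname:dbuser:dbpass' > "$HOME/.pgpass"

# libpq ignores the file unless it is readable only by the owner.
chmod 600 "$HOME/.pgpass"

# Example crontab entry -- note the \% escapes, since % is special in crontab:
# 0 3 * * * pg_dump dbname | gzip > ~/backup/db/$(date +\%Y-\%m-\%d).psql.gz
```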


Create ~/.pgpass with localhost:5432:mydbname:postgres:mypass, then chmod 600 ~/.pgpass.
Possibly helpful: On Ubuntu, "sudo su postgres" to switch to the "postgres" user, then create the .pgpass file and execute the dump.
I followed your answer but am still unable to successfully create my backup file. Please see my link: unix.stackexchange.com/questions/257898/… . Thank you.
Works on 9.6.2 :o)
Note about sudo su postgres: that Unix user does not necessarily exist. It doesn't need to. But the DB user should.
Max

Or you can set up crontab to run a script. Inside that script you can set an environment variable like this: export PGPASSWORD="$put_here_the_password"

This way if you have multiple commands that would require password you can put them all in the script. If the password changes you only have to change it in one place (the script).
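A minimal sketch of such a script (database names, paths, and the password are placeholders; remember the documentation's caveat that PGPASSWORD can be visible via ps on some systems). The pg_dump calls are guarded so the sketch degrades gracefully where no server is reachable:

```shell
#!/usr/bin/env bash
set -euo pipefail

# One place to change the password for every command below.
export PGPASSWORD="put_here_the_password"

BACKUP_DIR="$HOME/backup/db"
mkdir -p "$BACKUP_DIR"

# Only attempt the dumps if the client tools exist and a server is reachable;
# both dumps reuse the same exported password.
if command -v pg_dump >/dev/null 2>&1 && pg_isready -q >/dev/null 2>&1; then
    pg_dump -Fc mydb    > "$BACKUP_DIR/mydb.dump"
    pg_dump -Fc otherdb > "$BACKUP_DIR/otherdb.dump"
fi
```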

And I agree with Joshua: using pg_dump -Fc generates the most flexible export format and is already compressed. For more info, see the pg_dump documentation.

E.g.

# dump the database in custom-format archive
pg_dump -Fc mydb > db.dump

# restore the database
pg_restore -d newdb db.dump

I can see why the .pgpass file would be a better solution. I was just giving an alternative, not sure if it deserves a downvote though :)
I didn't downvote. That was someone else; I didn't think it warranted a downvote either. Have a +1 to make up for it.
So many haters. I appreciate this answer and am adopting it for my own application.
Setting the PGPASSWORD environment variable is not a recommended practice by the documentation (postgresql.org/docs/current/static/libpq-envars.html) : Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file
This would actually be the preferred way for docker containers.
gitaarik

If you want to do it in one command:

PGPASSWORD="mypass" pg_dump mydb > mydb.dump

Setting the PGPASSWORD environment variable is not a recommended practice by the documentation (postgresql.org/docs/current/static/libpq-envars.html) : Use of this environment variable is not recommended for security reasons, as some operating systems allow non-root users to see process environment variables via ps; instead consider using the ~/.pgpass file
It's still a useful comment. There are lots of deployment cases where this is still helpful.
I always got the error 'Peer authentication failed for user "username"'. Solution was: PGPASSWORD="mypass" pg_dump -U username -h localhost > mydb.dump
In my opinion it is far better to set up an environment variable (where you have control over where and how the password is stored) than to use a known, unencrypted location. This part of the postgresql doc is faulty, and this answer is a good one.
My password had an '@' in it. This worked. I couldn't figure out how to make it work with the postgres:// syntax. Didn't try the .pgpass because my postgres user has no home directory.
Josue Alexander Ibarra

For a one-liner, like migrating a database you can use --dbname followed by a connection string (including the password) as stated in the pg_dump manual

In essence:

pg_dump --dbname=postgresql://username:password@127.0.0.1:5432/mydatabase

Note: Make sure that you use the option --dbname instead of the shorter -d and use a valid URI prefix, postgresql:// or postgres://.

The general URI form is:

postgresql://[user[:password]@][netloc][:port][/dbname][?param1=value1&...]

As best practice in your case (a repetitive task in cron), the password shouldn't be embedded in the crontab itself because of security issues. If it weren't for the .pgpass file, I would save the connection string as an environment variable.

export MYDB=postgresql://username:password@127.0.0.1:5432/mydatabase

then have in your crontab

0 3 * * * pg_dump --dbname=$MYDB | gzip > ~/backup/db/$(date +%Y-%m-%d).psql.gz


PostgreSQL 9.1 reports an unknown option for --dbname.
This was tested with versions 9.4 and 9.3 on Arch and RHEL respectively. Can you post your connection string? Anonymized, of course.
Thanks, @JosueIbarra. Tested successfully on PostgreSQL 9.3, Ubuntu 14.04.
@EntryLevelR you need to pipe the output to a file in order to save it. see this relevant question askubuntu.com/questions/420981/…
This should be the accepted answer. One-liner, clear.
Rajan Verma - Aarvy

This one-liner helped me while creating a dump of a single database:

PGPASSWORD="yourpassword" pg_dump -U postgres -h localhost mydb > mydb.pgsql

Helped a lot, thanks!
This should be the accepted answer. Thanks!
Thanks, this is the only option that works for me.
Francisco Luz
$ PGPASSWORD="mypass" pg_dump -h localhost -p 5432 -U username -F c -b -v -f dumpfilename.dump databasename

Nice, but sadly doesn't work for me, I get "query failed: ERROR: permission denied for relation direction_lookup"
@Doc have you tried giving the necessary permissions to the pg user?
David Buck

You can pass a password into pg_dump directly by using the following:

pg_dump "host=localhost port=5432 dbname=mydb user=myuser password=mypass" > mydb_export.sql

Welcome to Stack Overflow! While your answer may work, it has serious security implications. Arguments of a command are visible in ps(1), so if a process monitors ps(1) then the password is compromised.
Yes, @JonathanRosa, you are right. But Larry Spence just answered the question. So the security issue is not a problem if this is done in docker for example.
Jauyzed

@JosueAlexanderIbarra's answer works on CentOS 7 and version 9.5 if --dbname is not passed.

pg_dump postgresql://username:password@127.0.0.1:5432/mydatabase 

You're right, that's how it's supposed to look, I think what was wrong a few years back was my shell configuration. That's why it was essential for me to use --dbname
mpen

Note that, on Windows, the pgpass.conf file must be in the following folder:

%APPDATA%\postgresql\pgpass.conf

If there's no postgresql folder inside the %APPDATA% folder, create it.

The pgpass.conf file content is something like:

localhost:5432:dbname:dbusername:dbpassword

cheers


manfall19

As detailed in this blog post, there are two ways to non-interactively provide a password to PostgreSQL utilities such as the "pg_dump" command: using the ".pgpass" file or using the "PGPASSWORD" environment variable.


Tobias

Correct me if I'm wrong, but if the system user is the same as the database user, PostgreSQL won't ask for the password - it relies on the system for authentication. This might be a matter of configuration.

Thus, when I wanted the database owner postgres to backup his databases every night, I could create a crontab for it: crontab -e -u postgres. Of course, postgres would need to be allowed to execute cron jobs; thus it must be listed in /etc/cron.allow, or /etc/cron.deny must be empty.
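A sketch of what that crontab (edited via crontab -e -u postgres) might contain, assuming peer authentication is in place for the postgres system user; the backup path is a placeholder, and `%` is escaped because it is special in crontab:

```shell
# m h dom mon dow  command
0 3 * * * pg_dump mydb | gzip > /var/lib/postgresql/backup/$(date +\%Y-\%m-\%d).psql.gz
```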


You're sort of right here. The default Postgres configuration uses TRUST authentication for local system accounts. However, most production setups get rid of this block right after installing the RDBMS.
StartupGuy

Backup over ssh with password using temporary .pgpass credentials and push to S3:

#!/usr/bin/env bash
cd "$(dirname "$0")"

DB_HOST="*******.*********.us-west-2.rds.amazonaws.com"
DB_USER="*******"
SSH_HOST="my_user@host.my_domain.com"
BUCKET_PATH="bucket_name/backup"

if [ $# -ne 2 ]; then
    echo "Error: 2 arguments required"
    echo "Usage:"
    echo "  my-backup-script.sh <DB-name> <password>"
    echo "  <DB-name> = The name of the DB to backup"
    echo "  <password> = The DB password, which is also used for GPG encryption of the backup file"
    echo "Example:"
    echo "  my-backup-script.sh my_db my_password"
    exit 1
fi

DATABASE=$1
PASSWORD=$2

echo "set remote PG password .."
echo "$DB_HOST:5432:$DATABASE:$DB_USER:$PASSWORD" | ssh "$SSH_HOST" "cat > ~/.pgpass; chmod 0600 ~/.pgpass"
echo "backup over SSH and gzip the backup .."
ssh "$SSH_HOST" "pg_dump -U $DB_USER -h $DB_HOST -C --column-inserts $DATABASE" | gzip > ./tmp.gz
echo "unset remote PG password .."
echo "*********" | ssh "$SSH_HOST" "cat > ~/.pgpass"
echo "encrypt the backup .."
gpg --batch --passphrase "$PASSWORD" --cipher-algo AES256 --compression-algo BZIP2 -co "$DATABASE.sql.gz.gpg" ./tmp.gz

# Backing up to AWS obviously requires having your credentials to be set locally
# EC2 instances can use instance permissions to push files to S3
DATETIME=`date "+%Y%m%d-%H%M%S"`
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/"$DATETIME".sql.gz.gpg
# s3 is cheap, so don't worry about a little temporary duplication here
# "latest" is always good to have because it makes it easier for dev-ops to use
aws s3 cp ./"$DATABASE.sql.gz.gpg" s3://"$BUCKET_PATH"/"$DATABASE"/db/latest.sql.gz.gpg

echo "local clean-up .."
rm ./tmp.gz
rm "$DATABASE.sql.gz.gpg"

echo "-----------------------"
echo "To decrypt and extract:"
echo "-----------------------"
echo "gpg -d ./$DATABASE.sql.gz.gpg | gunzip > tmp.sql"
echo

Just substitute the first few config lines with whatever you need. If you're not interested in the S3 backup part, take it out.

This script deletes the credentials in .pgpass afterward because, in some environments, the default SSH user can sudo without a password (for example, an EC2 instance with the ubuntu user), so using .pgpass under a different host account to secure those credentials might be pointless.


Password will get logged to terminal history this way, no?
@mpen Locally, yes. Remotely, no. In my case it's OK to have it in my local history because it's a secure VM that does not allow remote access. If in your case that's not OK, just do history -c. When using with Jenkins, use the "Inject passwords to the build as environment variables" option so that the password is masked.
ognjenkl

For Windows, the pgpass.conf file should exist at the path:

%APPDATA%\postgresql\pgpass.conf

On my Windows 10 machine, the absolute path is:

C:\Users\Ognjen\AppData\Roaming\postgresql\pgpass.conf

Note: If there is no postgresql folder in %APPDATA%, create one with pgpass.conf file inside it.

Content of pgpass.conf could be:

*:5432:*:*:myDbPassword

Or more specific content could be:

localhost:5432:dbName:username:password

Note: The content of pgpass.conf must NOT end with white space (after the password) or an error will occur.


saintlyzero

A secure way of passing the password is to store it in a .pgpass file.

The content of the .pgpass file has the format:

db_host:db_port:db_name:db_user:db_pass

#Eg
localhost:5432:db1:admin:tiger
localhost:5432:db2:admin:tiger

Now, store this file in the home directory of the user with permissions u=rw (0600) or less

To find the home directory of the user, use echo $HOME

Restrict the file's permissions: chmod 0600 /home/ubuntu/.pgpass


mpen

On Windows you can set the variable with the password before calling pg_dump.exe, and automate everything in a .bat file, for example:

C:\>SET PGPASSWORD=dbpass
C:\>"folder_where_is_pg_dump\pg_dump.exe" -f "dump_file" -h "db_host" -U db_usr --schema "db_schema" "db_name"

stefanoz

You just need to open pg_hba.conf and set trust for all methods. That works for me. But then there is no security at all.


szymond

Another (probably not secure) way to pass the password is to use input redirection, i.e. calling:

pg_dump [params] < [path to file containing password]
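A related (and somewhat safer) variant is libpq's PGPASSFILE environment variable, which points pg_dump at a password file in a location of your choosing instead of the default ~/.pgpass. A sketch with placeholder path and credentials:

```shell
# Keep the password file wherever you like, in the usual .pgpass format
# (placeholder path and credentials).
export PGPASSFILE="/tmp/my_pgpass"
echo 'localhost:5432:mydb:myuser:mypass' > "$PGPASSFILE"
chmod 600 "$PGPASSFILE"   # libpq ignores the file unless it is private

# pg_dump mydb > mydb.sql  # would now read the password from $PGPASSFILE
```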


Concerning security - this file would need to be readable by the intended user(s) only; however, anyone with root rights would be able to change the security settings, and thus to read the unencrypted password. So yes, this is insecure ...
@Tobias is there any alternative? It would seem that anyone with root rights could always see the password no matter what technique other than entering the password interactively (and the question is about cron). postgresql.org/docs/9.3/static/auth-methods.html#GSSAPI-AUTH mentions GSSAPI supporting single sign-on but no mention if that works non-interactively.
Anyone with root rights can also read the .pgpass which is the recommended way. Therefore, I would not consider root access a security risk.
Dofri

The easiest way, in my opinion, is this: edit your main Postgres config file, pg_hba.conf, and add the following line:

host <your_db_name> <your_db_owner> 127.0.0.1/32 trust

After this, start your cron job like so:

pg_dump -h 127.0.0.1 -U <your_db_user> <your_db_name> | gzip > /backup/db/$(date +%Y-%m-%d).psql.gz

and it worked without a password.


And you just destroyed system security. OK for a dev box, but nothing else.