pg_dump [-a | -s] [-b] [-c] [-C] [-d | -D] [-f file] [-F format] [-i] [-n | -N] [-o] [-O] [-R] [-S] [-t table] [-v] [-x] [-Z 0...9] [-h host] [-p port] [-u] dbname
pg_dump is a utility for dumping out a Postgres database into a script or archive file containing query commands. The script files are in plain text and can be used to reconstruct the database, even on other machines and other architectures. The archive files, new as of version 7.1, contain enough information for pg_restore to rebuild the database, and also allow pg_restore to be selective about what is restored, or even to reorder the items prior to restoration. The archive files are likewise designed to be portable across architectures.
pg_dump will produce the queries necessary to regenerate all user-defined types, functions, tables, indices, aggregates, and operators. In addition, all the data is copied out in text format so that it can be readily copied in again, as well as imported into tools for editing.
pg_dump is useful for dumping out the contents of a database to move from one Postgres installation to another. After running pg_dump, one should examine the output for any warnings, especially in light of the limitations listed below.
When used with one of the alternate file formats and combined with pg_restore, it provides a flexible archival and transfer mechanism. pg_dump can be used to back up an entire database, then pg_restore can be used to examine the archive and/or select which parts of the database are to be restored. See the pg_restore documentation for details.
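As a sketch of that workflow (the database names and file names here are placeholders), a custom-format archive can be listed and then selectively restored using pg_restore's -l and -L options:

```shell
# Dump the whole database in the custom archive format.
pg_dump -Fc mydb > mydb.dump

# Write the archive's table of contents to a file.
pg_restore -l mydb.dump > mydb.list

# Edit mydb.list (comment out or reorder entries), then restore
# only the selected items into an existing database.
pg_restore -d newdb -L mydb.list mydb.dump
```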
pg_dump accepts the following command line arguments. (Long option forms are only available on some platforms.)
dbname
    Specifies the name of the database to be extracted.
-a
    Dump only the data, not the schema (definitions).
-b
    Include BLOB (large object) data in the dump.
-c
    Dump commands to clean (drop) the schema prior to (the commands for) creating it.
-C
    For plain text (script) output, include commands to create the database itself.
-d
    Dump data as proper INSERT commands (not COPY). This will make restoration very slow.
-D
    Dump data as INSERT commands with explicit column names. This will make restoration very slow.
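To illustrate the difference, for a hypothetical table parts with columns id and name, the three data styles would look roughly like this (output sketched, not copied from an actual dump):

```shell
# Default: data is emitted as a COPY block (fast to restore).
pg_dump mydb
#   COPY "parts" FROM stdin;
#   1	bolt
#   \.

# -d: plain INSERT commands.
pg_dump -d mydb
#   INSERT INTO "parts" VALUES (1, 'bolt');

# -D: INSERT commands with explicit column names, which still
# restore correctly if the target table's columns are reordered.
pg_dump -D mydb
#   INSERT INTO "parts" ("id", "name") VALUES (1, 'bolt');
```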
-f file
    Send output to the specified file.
-F format
    format can be one of the following:
p
    Output a plain text SQL script file (the default).
t
    Output a tar archive suitable for input into pg_restore. Using this archive format allows reordering and/or exclusion of schema elements at the time the database is restored. It is also possible to limit which data is reloaded at restore time.
c
    Output a custom archive suitable for input into pg_restore. This is the most flexible format in that it allows reordering of the data load as well as of schema elements. This format is also compressed by default.
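The three formats might be selected like so (output file names are arbitrary):

```shell
pg_dump -Fp mydb > mydb.sql    # plain text script (same as the default)
pg_dump -Ft mydb > mydb.tar    # tar archive for pg_restore
pg_dump -Fc mydb > mydb.dump   # custom archive for pg_restore, compressed
```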
-i
    Ignore version mismatch between pg_dump and the database server. Since pg_dump knows a great deal about system catalogs, any given version of pg_dump is only intended to work with the corresponding release of the database server. Use this option if you need to override the version check (and if pg_dump then fails, don't say you weren't warned).
-n
    Suppress double quotes around identifiers unless absolutely necessary. This may cause trouble loading the dumped data if reserved words are used as identifiers. This was the default behavior of pg_dump prior to version 6.4.
-N
    Include double quotes around identifiers. This is the default.
-o
    Dump object identifiers (OIDs) for every table.
-O
    In plain text output mode, do not set object ownership to match the original database. Typically, pg_dump issues (psql-specific) \connect statements to set ownership of schema elements.
-R
    In plain text output mode, prohibit pg_dump from issuing any \connect statements.
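Taken together, these two options produce a script free of \connect statements, which can be convenient when the entire restore is to run as a single user; a sketch (database names are placeholders):

```shell
# Plain text dump that neither reconnects nor tries to restore
# ownership; every restored object is owned by the restoring user.
pg_dump -O -R mydb > mydb.sql
psql -d newdb -f mydb.sql
```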
-s
    Dump only the schema (definitions), no data.
-S username
    Specify the superuser user name to use when disabling triggers and/or setting ownership of schema elements.
-t table
    Dump data for table only.
-v
    Specifies verbose mode.
-x
    Prevent dumping of ACLs (grant/revoke commands) and table ownership information.
-Z 0..9
    Specify the compression level to use in archive formats that support compression (currently only the custom archive format supports compression).
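For example, a custom-format dump at maximum compression (the file name is arbitrary):

```shell
# -Z 9 selects the highest compression level; only the custom
# format (-Fc) currently honors this option.
pg_dump -Fc -Z 9 mydb > mydb.dump
```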
pg_dump also accepts the following command line arguments for connection parameters:
-h host
    Specifies the host name of the machine on which the postmaster is running. If host begins with a slash, it is used as the directory for the Unix domain socket.
-p port
    Specifies the Internet TCP/IP port or local Unix domain socket file extension on which the postmaster is listening for connections. The port number defaults to 5432, or the value of the PGPORT environment variable (if set).
-u
    Use password authentication. Prompts for username and password.
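A dump of a database on a remote server might combine these options as follows (db.example.com is a placeholder host name):

```shell
# Connect to a remote postmaster on the default port and prompt
# for username and password before dumping.
pg_dump -h db.example.com -p 5432 -u mydb > db.out
```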
Connection to database 'template1' failed.
connectDBStart() -- connect() failed: No such file or directory
    Is the postmaster running locally
    and accepting connections on Unix socket '/tmp/.s.PGSQL.5432'?
pg_dump could not attach to the postmaster process on the specified host and port. If you see this message, ensure that the postmaster is running on the proper host and that you have specified the proper port.
Note: pg_dump internally executes SELECT statements. If you have problems running pg_dump, make sure you are able to select information from the database using, for example, psql.
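One quick check along these lines is to run a trivial query against a system catalog, which exists in every database:

```shell
# If this simple SELECT fails, pg_dump will fail for the same reason
# (bad connection parameters, missing permissions, etc.).
psql -d mydb -c "SELECT relname FROM pg_class LIMIT 1;"
```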
pg_dump has a few limitations. The limitations mostly stem from difficulty in extracting certain meta-information from the system catalogs.
When dumping a single table or as plain text, pg_dump does not handle large objects. Large objects must be dumped in their entirety using one of the binary archive formats.
When doing a data only dump, pg_dump emits queries to disable triggers on user tables before inserting the data and queries to re-enable them after the data has been inserted. If the restore is stopped in the middle, the system catalogs may be left in the wrong state.
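A data-only dump and reload might look like this; the trigger-disabling queries mentioned above are emitted automatically, and -S supplies the superuser name needed to run them (the user name postgres and the database names are placeholders):

```shell
# Dump only the data, naming the superuser used to disable triggers.
pg_dump -a -S postgres mydb > data.sql

# Reload into a database whose schema already exists.
psql -d newdb -f data.sql
```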
To dump a database:
$ pg_dump mydb > db.out
To reload this database:
$ psql -d database -f db.out
To dump a database called mydb that contains BLOBs to a tar file:
$ pg_dump -Ft -b mydb > db.tar
To reload this database (with BLOBs) to an existing database called newdb:
$ pg_restore -d newdb db.tar