Tools
Client Tool
Agens
Agens is the terminal-based front end of AgensGraph. It lets you enter queries interactively, issue them to AgensGraph, and see the query results. Alternatively, input can come from a file or from command line arguments. In addition, Agens provides a number of meta-commands and various shell-like features to facilitate writing scripts and automating a wide variety of tasks.
How to use:
agens [OPTION]... [DBNAME [USERNAME]]
Options
-a
--echo-all
Print all nonempty input lines to standard output as they are read (this does not apply to lines read interactively). This is equivalent to setting the ECHO variable to all.
-A
--no-align
Switches to unaligned output mode (the default output mode is aligned).
-b
--echo-errors
Print failed SQL commands to standard error output. This is equivalent to setting the ECHO variable to error.
-c command
--command=command
Specifies that agens is to execute the given command string. This option can be repeated, and combined in any order with the -f option. When either -c or -f is specified, agens does not read commands from standard input; instead it processes all the -c and -f options in order and then exits.
command must be either a command string that is completely parsable by the server or a single backslash command. Thus you cannot mix SQL and agens meta-commands within a -c option. To achieve that, you can use repeated -c options or pipe the string into agens. Examples:
agens -c '\x' -c 'SELECT * FROM test;'
or
echo '\x \\ SELECT * FROM test;' | agens
(\\
is the separator meta-command.)
Each SQL command string delivered to -c
is sent to the server as a single request. Because of this, the server executes it as a single transaction even if the string contains multiple SQL commands, unless there are explicit BEGIN/COMMIT commands included in the string to divide it into multiple transactions. Also, agens prints only the result of the last SQL command in the string. This differs from the behavior when the same string is read from a file or fed to agens' standard input, because then agens sends each SQL command separately.
Placing more than one command in a single -c
string often leads to unexpected results because of this behavior. It is better to use the -c
option repeatedly, or to feed multiple commands to agens' standard input, either using echo as illustrated above or via a shell here-document, for example:
agens <<EOF
\x
SELECT * FROM test;
EOF
-d dbname
--dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as the first non-option argument on the command line.
If this parameter contains an equal (=) sign or starts with a valid URI prefix (postgresql:// or postgres://), it is treated as a conninfo string.
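For example, either of the following forms connects to a hypothetical database mydb as user myuser (host, port, and names are illustrative):
agens -d "host=localhost port=5432 dbname=mydb user=myuser"
agens -d postgresql://myuser@localhost:5432/mydb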
-e
--echo-queries
Copies all SQL commands sent to the server to standard output as well. This is equivalent to setting the ECHO variable to queries.
-E
--echo-hidden
Echoes the actual queries generated by \d and other backslash commands. This is equivalent to setting the ECHO_HIDDEN variable to on.
-f filename
--file=filename
Read commands from the file filename rather than from standard input. This option can be repeated, and combined in any order with the -c option. When either -c or -f is specified, agens does not read commands from standard input; instead it processes all the -c and -f options in order and then exits. Except for that, this option is largely equivalent to the meta-command \i.
If filename is - (hyphen), standard input is read until an EOF indication or the meta-command \q.
This can be used to intersperse interactive input with input from files. Note, however, that Readline is not used in this case (much as if -n had been specified).
Using this option is slightly different from using agens < filename. In general, both will work as expected, but using -f
will enable some useful features such as error messages with line numbers, and there is a slight chance that it will reduce start-up overhead. On the other hand, the variant using the shell's input redirection is (in theory) guaranteed to yield exactly the same output that you would have received had you entered everything by hand.
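For example, the following runs the commands in a script file against a hypothetical database mydb (file and database names are illustrative):
agens -f setup_graph.sql mydb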
-F separator
--field-separator=separator
Use separator as the field delimiter for unaligned output. This is equivalent to \pset fieldsep
or \f
.
-h hostname
--host=hostname
Specifies the host name of the system on which the server is running. If the value begins with a slash, it is used as a directory for the Unix domain socket.
-H
--html
Turns on the HTML tabular output. This is equivalent to the \pset
format html or \H
command.
-l
--list
List all available databases and exit. Other non-connection options are ignored. Similar to meta-command \list
.
-L filename
--log-file=filename
Writes all query output into the file filename, in addition to the normal output destination.
-n
--no-readline
Do not use extended command line editing.
-o filename
--output=filename
Saves all query output to the file filename. This is equivalent to the \o
command.
-p port
--port=port
Specifies the TCP port or local Unix-domain socket file extension on which the server is listening for connections. Defaults to the value of the PGPORT environment variable or, if that is not set, to the port specified at compile time (usually 5432).
-P assignment
--pset=assignment
Specifies output options in \pset
style. The name and value must be separated by an equal sign instead of a space. For example, to set the output format to LaTeX, use -P format=latex.
-q
--quiet
Specifies that agens should do its work quietly. By default, it prints welcome messages and various informational output. With this option, none of that appears. This is useful with the -c
option. Equivalent to turning on the QUIET variable.
-R separator
--record-separator=separator
Use separator as the record delimiter for unaligned output. This is the same as the \pset recordsep
command.
-s
--single-step
Run in single-step mode. That means the user is prompted before each command is sent to the server, with the option to cancel execution as well. Use this to debug scripts.
-S
--single-line
Runs in single-line mode, where a newline terminates an SQL command, just as a semicolon does.
Notice: This mode is provided for those who insist on it; you are not necessarily encouraged to use it. In particular, if you mix SQL and meta-commands on a single line, the order of execution may not always be obvious to the inexperienced user.
-t
--tuples-only
Turns off printing of column names and result row count footers. This is the same as the \t
command.
-T table_options
--table-attr=table_options
Specifies the options to place within the HTML table tag. For more information, refer to \pset
.
-U username
--username=username
Connect to the database with username instead of the default (you must have necessary permissions).
-v assignment
--set=assignment
--variable=assignment
Performs a variable assignment, like the meta-command \set.
Note that you must separate name and value, if any, by an equal sign on the command line. To unset a variable, leave off the equal sign. To set a variable with an empty value, use the equal sign but leave off the value. These assignments are done during a very early stage of start-up, so variables reserved for internal purposes might get overwritten later.
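For example, assuming a hypothetical script that refers to the variable tbl as :tbl, the following sets it on the command line along with ON_ERROR_STOP (all names are illustrative):
agens -v tbl=my_table -v ON_ERROR_STOP=1 -f script.sql mydb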
-V
--version
Shows agens version information and exits.
--revision
Shows agens revision information and exits.
-w
--no-password
Never issue a password prompt. If the server requires password authentication and a password is not available by other means, such as a .pgpass
file, the connection attempt will fail. This option can be useful in batch jobs and scripts where no user is present to enter a password.
As this option is set for the entire session, it affects use of a metacommand \connect
as well as initial connection attempt.
-W
--password
Forcibly sets agens to request a password before connecting to the database.
This option is never essential, since agens will automatically prompt for a password if the server demands password authentication. However, agens will waste a connection attempt finding out that the server wants a password. In some cases it is worth typing -W to avoid the extra connection attempt.
As this option is set for the entire session, it affects use of a metacommand \connect
as well as the initial connection attempt.
-x
--expanded
Turns on expanded table formatting mode. This is equivalent to the \x
command.
-X
--no-psqlrc
Do not read the startup file (either the system’s psqlrc
file or your .psqlrc
file).
-z
--field-separator-zero
Sets the field separator for unaligned output to a zero byte.
-0
--record-separator-zero
Sets the record separator for unaligned output to a zero byte. This is useful, for example, with interfaces like xargs -0
.
-1
--single-transaction
This option can only be used in combination with one or more -c and/or -f options.
It wraps all the commands in a single transaction by issuing BEGIN first and COMMIT at the end. This ensures that either all the commands complete successfully or no changes are applied.
If you include BEGIN
, COMMIT
, or ROLLBACK
in the commands themselves, this option will not have the desired effect. In addition, if an individual command cannot be executed inside a transaction block, specifying this option will cause the whole transaction to fail.
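For example, to load a script into a hypothetical database mydb as one transaction (file name illustrative), so that a failure rolls back everything:
agens -1 -f load_data.sql mydb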
-?
--help[=topic]
Displays help about agens and exits. The optional topic parameter (defaulting to options) selects which part of agens is explained: commands describes agens' backslash commands; options describes the command line options that can be passed to agens; and variables shows help about agens configuration variables.
Exit Status
Agens returns 0 to the shell if it finished normally, 1 if a fatal error of its own occurred (e.g. out of memory, file not found), 2 if the connection to the server went bad and the session was not interactive, or 3 if an error occurred in a script and the variable ON_ERROR_STOP
is set.
Usage
Connecting to a Database
Agens is a generic AgensGraph client application. In order to connect to a database, you need to know the name of the target database, the host name and port number of the server, and the user name you want to connect as. Agens can be told about those parameters via command line options such as -d
, -h
, -p
and -U
. If an argument is found that does not belong to any option, it is interpreted as the database name (or the user name, if the database name is already given). Not all of these options are required; there are useful defaults. If you omit the host name, agens will connect via a Unix-domain socket to a server on the local host, or via TCP/IP to localhost on systems that do not have Unix-domain sockets. The default port number is determined at compile time. Since the database server uses the same default, you will not have to specify the port in most cases. The default user name is your operating-system user name, as is the default database name. Note that you cannot just connect to any database under any user name; your database administrator should have informed you about your access rights.
When the default values are not appropriate, use the PGDATABASE
, PGHOST
, PGPORT
and/or PGUSER
environment variables by setting them with appropriate values. If you do not want to enter your password regularly, it is convenient to use the ~/.pgpass
file.
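For example (the values shown are illustrative), a shell session could export these variables once and then start agens without any connection options:
export PGHOST=db.example.com
export PGPORT=5432
export PGDATABASE=mydb
export PGUSER=myuser
agens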
Entering SQL Commands
In normal operation, the agens prompt consists of the name of the database agens is currently connected to, followed by "=#" for superusers or "=>" for ordinary users. Here is an example:
agens testdb
agens (AgensGraph 2.16.0, based on PostgreSQL 16.9)
Type "help" for help.
testdb=#
At the prompt, the user may enter SQL commands. Ordinarily, input lines are sent to the server when the command-terminating semicolon is reached. An end of line does not terminate a command, so commands can be spread over several lines for clarity. If the command was sent and executed without error, the result of the command is displayed on the screen.
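For example, the following command is spread over two lines and is sent to the server only when the terminating semicolon is entered (it reuses the test table from the earlier examples):
testdb=# SELECT *
testdb-# FROM test;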
Each time the command is executed, agens polls asynchronous notification events generated by LISTEN and NOTIFY.
C-style block comments are passed to the server for processing and removal, but the SQL standard comments are removed by agens.
Meta-Commands
Anything you enter in agens that begins with an unquoted backslash is an agens meta-command that is processed by agens itself. These commands make agens more useful for administration or scripting. Meta-commands are often called slash or backslash commands.
The format of an agens command is the backslash, followed immediately by a command verb, then any arguments. The arguments are separated from the command verb and from each other by any number of whitespace characters.
To include whitespace in an argument, enclose it in single quotes. To include a single quote in an argument, write two single quotes within single-quoted text. Within single quotes, the following escape sequences are recognized: \n
(newline), \t
(tab), \b
(backspace), \r
(carriage return), \f
(form feed), \digits
(octal) and \xdigits
(hexadecimal). A backslash preceding any other character within single-quoted text quotes that character, whatever it is.
The text enclosed in backticks (`) within an argument is considered to be a command line passed to shell. The output of the command (with the following linefeed removed) replaces the text inside the backticks.
If an agens variable name appears after a colon (:) without quotes, it is replaced with the value of the variable as described in SQL Interpolation.
Some commands take an SQL identifier (e.g. a table name) as an argument. These arguments follow SQL syntax rules: unquoted letters are forced to lowercase, while double quotation marks (") protect letters from case conversion and allow spaces to be included in the identifier. Within double quotes, paired double quotes reduce to a single double quote in the resulting name. For example, FOO"BAR"BAZ is interpreted as fooBARbaz, and "A weird"" name" becomes A weird" name.
Parsing for arguments stops at the end of the line, or when another unquoted backslash is found. An unquoted backslash is taken as the beginning of a new meta-command. The special sequence \\
(two backslashes) marks the end of arguments and continues parsing of SQL commands, if any. That way SQL and agens commands can be freely mixed on a line. In any case, however, the arguments of a meta-command cannot continue beyond the end of the line.
The following meta-commands are defined.
\a
If the current table output format is unaligned, it is switched to aligned. If it is not unaligned, it is set to unaligned. This command is kept for backwards compatibility.
\c
or \connect
[ -reuse-previous=on|off ] [ dbname [ username ] [ host ] [ port ] | conninfo ]
Sets up a new connection to the AgensGraph server. The connection parameters to be used can be specified using the location syntax or conninfo connection string.
If the command omits the database name, user, host, or port, the new connection can reuse values from the previous connection. By default, values from the previous connection are reused except when processing a conninfo string. Passing a first argument of -reuse-previous=on
or -reuse-previous=off
overrides that default. When the command neither specifies nor reuses a particular parameter, the libpq default is used. Specifying any of dbname, username, host, or port as - is equivalent to omitting that parameter.
If the new connection is successful, the previous connection is closed. If the connection attempt fails (invalid username, access denied, etc.) with agens in the interactive mode, only the previous connection is maintained. If an error occurs when running a non-interactive script, processing stops immediately. This feature was chosen for user convenience in case of typos and as a safety mechanism to prevent scripts from accidentally working in other databases.
Here is an example:
db=# \c mydb myuser skai 5555
db=# \c service=mydb
db=# \c "host=localhost port=5555 dbname=mydb connect_timeout=10 sslmode=disable"
db=# \c postgresql://myuser@localhost/mydb?application_name=myapp
\C
[ title ]
Sets or clears the title of any table displayed as a result of a query. This command is equivalent to \pset title title
(the name of this command was derived from “caption” because it was previously used to set caption in the HTML table).
\cd
[ directory ]
Changes the current working directory to directory. Without an argument, changes to the current user's home directory.
Tip : The current working directory can be checked by typing
\! pwd
.
\conninfo
Shows information on the current database connection.
\copy
{ table [ ( column_list ) ] | ( query ) } { from | to } { 'filename' | program 'command' | stdin | stdout | pstdin | pstdout } [ [ with ] ( option [, …] ) ]
Performs a frontend (client) copy. This operation runs an SQL COPY command, but instead of the server reading or writing the specified file, agens reads or writes the file and routes the data between the server and the local file system. This means that file access and privileges are those of the local user, not the server, and no SQL superuser privileges are required.
When program is specified, the command is executed by agens, and the data passed from or to the command is routed between the server and the client. Again, the execution privileges are those of the local user, not the server, and no SQL superuser privileges are required.
In the case of \copy ... from stdin
, it reads data rows from the source where the command was executed, and reads until it encounters \.
or EOF. This option is useful for filling in-line tables in SQL script files. In the case of \copy ... to stdout
, the output is sent to wherever the agens command output is going, and the COPY count command status is not printed (since it might be confused with a data row). To read or write agens' standard input or output regardless of the current command source or \o
.
The syntax of this command is similar to the SQL COPY command. All options except data source/destination are the same as those specified in COPY. For this reason, special parsing rules apply to the \copy
command; agens’ substitution rules and backslash escapes do not apply.
Tip : This is not as efficient as the SQL COPY command because all data must pass through a client/server connection. SQL commands can be more efficient and better for a large volume of data.
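For example, assuming a hypothetical table my_table and local CSV files (names and columns are illustrative), data can be copied in and out of the client file system like this:
db=# \copy my_table FROM '/tmp/input.csv' WITH (FORMAT csv, HEADER)
db=# \copy (SELECT * FROM my_table WHERE id > 100) TO '/tmp/output.csv' WITH (FORMAT csv)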
\copyright
Displays the copyright and distribution terms of AgensGraph.
\crosstabview
[ colV [ colH [ colD [ sortcolH ] ] ] ]
Executes the current query buffer (e.g. \g
) and displays the result in a crosstab grid. The query must return at least three columns. The output column identified by colV becomes the vertical header and the output column identified by colH becomes the horizontal header. colD identifies the output column to display within the grid. sortcolH identifies an optional sort column for the horizontal header.
Each column specification can be a column number (starting at 1) or a column name. The usual SQL case folding and quoting rules apply to column names. If omitted, colV is taken as column 1 and colH as column 2. colH must differ from colV. If colD is not specified, there must be exactly three columns in the query result, and the column that is neither colV nor colH is taken as colD.
The vertical header, displayed as the leftmost column, contains the values found in the colV column, in the same order as in the query results, but with duplicates removed.
The horizontal header, displayed as the first row, contains the values found in the colH column, with duplicates removed. By default, these appear in the same order as in the query results. But if the optional sortcolH argument is given, it identifies a column whose values must be integers, and the colH values will appear in the horizontal header sorted according to the corresponding sortcolH values.
Inside the crosstab grid, for each distinct value x of colH and each distinct value y of colV, the cell at the intersection (x, y) contains the value of the colD column in the query result row for which the value of colH is x and the value of colV is y. If there is no such row, the cell is empty. If there are multiple such rows, an error is reported.
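For example, the following query returns exactly three columns, so v becomes the vertical header, h the horizontal header, and c the cell contents (generate_series is used here only to produce sample data):
db=# SELECT v, h, format('%s-%s', v, h) AS c
db-# FROM generate_series(1,2) v, generate_series(1,3) h
db-# \crosstabview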
\d[S+]
[ patterns ]
For each relation (table, view, materialized view, index, sequence, or foreign table) matching the pattern, show all columns, their types, their default values (if not default), and whether they accept NULL. Associated indexes, constraints, rules, and triggers are displayed as well. For foreign tables, the associated foreign server is also displayed (“pattern matching” is defined under [Patterns] below).
For some relationship types, \d
displays additional information of each column, such as the column values of the sequences, index expressions for indexes, and foreign data wrapper options for foreign tables.
The command form \d+
is identical except that more information is displayed: any comments associated with the columns of the table are shown, as is the presence of OIDs in the table, and the replica identity setting if it is non-default.
By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects.
Note: If
\d
is used without a pattern argument, it is equivalent to \dtvmsE
. A list of tables, views, materialized views, sequences, and foreign tables is displayed.
\da[S]
[ pattern ]
Lists aggregate functions with the return type and data type to be manipulated. If pattern is specified, only aggregates of names that match the pattern are displayed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects.
\dA[+]
[ pattern ]
Lists access methods. If pattern is specified, only access methods with names matching the pattern are shown. When +
is added to the command name, each access method is listed with its associated handler function and description.
\db[+]
[ pattern ]
Lists tablespaces. If pattern is specified, only tablespace names matching the pattern are displayed. When +
is added to the command name, each tablespace is listed with its associated options, on-disk size, permissions, and description.
\dc[S+]
[ pattern ]
Lists the conversions between character set encodings. If pattern is specified, only the conversion patterns with names matching the pattern are listed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. When +
is added to the command name, objects with an associated description are listed.
\dC[+]
[ pattern ]
Lists casts. If pattern is specified, only casts whose source or target types match the pattern are listed. When +
is added to the command name, objects with an associated description are listed.
\dd[S]
[ pattern ]
Shows the descriptions of objects of type constraint, operator class, operator family, rule, and trigger. All other comments may be viewed by the respective backslash commands for those object types.
\dd
displays descriptions of objects matching the pattern, or of visible objects of the appropriate types if no argument is given. In either case, only objects that have a description are listed. By default, only user-generated objects are displayed; a pattern or the S qualifier can be supplied to include system objects.
A description of the object can be created using the COMMENT SQL command.
\ddp
[ pattern ]
Lists the default access privilege settings. An entry is shown for each role (and schema, if applicable) for which the default privilege settings have been changed from the built-in defaults. If pattern is specified, only entries whose role name or schema name matches the pattern are listed.
The ALTER DEFAULT PRIVILEGES command is used to set default access privileges. The meaning of the displayed privileges is explained under GRANT.
\dD[S+]
[ pattern ]
Lists domains. If pattern is specified, only domains with names matching the pattern are displayed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. When +
is added to the command name, objects with an associated description are listed.
\dE[S+]
[ pattern ]
\di[S+]
[ pattern ]
\dm[S+]
[ pattern ]
\ds[S+]
[ pattern ]
\dt[S+]
[ pattern ]
\dv[S+]
[ pattern ]
In this command group, the characters E, i, m, s, t, and v represent foreign table, index, materialized view, sequence, table, and view, respectively. You can get a list of these types of objects by specifying some or all of these characters in any order. For example, \dit
lists indexes and tables. When +
is added to the command name, each object realm is listed with options related to disk size, permissions, and description. If patterns is specified, only objects with names matching the pattern are listed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects.
\des[+]
[ pattern ]
Lists external servers (mnemonic: “external servers”). If pattern is specified, only servers whose names match the pattern are listed. If you use the form \des+
, you will also see a full description of each server, including the server’s ACL, type, version, options, and description.
\det[+]
[ pattern ]
List external tables (mnemonic: “external tables”). If pattern is specified, only those entries whose table name or schema name matches the pattern are listed. If you use the format \det+
, you will also see the general options and a description of each external table.
\deu[+]
[ pattern ]
Lists user mappings (mnemonic: “external users”). If pattern is specified, only mappings of user names that match the pattern are listed. If you use the format \deu+
, you will also see a list of additional information about each mapping.
Caution :
\deu+
may also display the remote users’ user names and passwords. Be careful not to disclose them.
\dew[+]
[ pattern ]
Lists external data wrappers (mnemonic: “external wrappers”). If pattern is specified, only external data wrappers matching the pattern are listed. If you use the format \dew+
, you will also see ACLs, options, and descriptions of external data wrappers.
\df[antwS+]
[ pattern ]
The function is listed with the result data type, argument data type, and function type classified as “agg” (aggregate), “normal”, “trigger”, or “window”. To display only certain types of functions, add the corresponding letter a, n, t, or w to the command. If pattern is specified, only functions whose names match the pattern are displayed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. If you use the format \df+
, you will see additional information on each feature, including variability, parallel safety, owner, security classification, access permissions, language, source code, and description.
Tip: To find a function that takes an argument or returns a value of a certain data format, use the pager’s search function to scroll through the
\df
output.
\dF[+]
[ pattern ]
Lists text search settings. If pattern is specified, only the settings for the names that match the pattern are displayed. If you use the format \dF+
, you will see a full description of each setting, including a default text search parser and a dictionary list of token types for each syntax.
\dFd[+]
[ pattern ]
Lists text search dictionaries. If pattern is specified, only the dictionary with the name matching the pattern is displayed. If you use the format \dFd+
, you will see additional information on each selected dictionary, including the default text search template and option values.
\dFp[+]
[ pattern ]
Lists text search parsers. If pattern is specified, only the parser whose name matches the pattern is displayed. If you use the format \dFp+
, you will see a full description of each parser, including a list of basic features and the recognized token types.
\dFt[+]
[ pattern ]
Lists text search templates. If pattern is specified, only templates with names matching the pattern are displayed. If you use the format \dFt+
, you will see additional information for each template, including the default feature names.
\dg[S+]
[ pattern ]
List the database roles. (This is now equivalent to \du
as the concepts of “user” and “group” have been integrated into “role”). By default, only user-created roles are displayed, and patterns or S qualifiers are provided to include system objects. If pattern is specified, only roles with names matching the pattern are listed. If you use the format \dg+
, you will see additional information on each role (currently, this adds a description of each role).
\dG[+]
[ pattern ]
Displays a list of graphs.
\dGe[+]
[ pattern ]
Displays a list of edge labels in the graph.
\dGl[+]
[ pattern ]
Displays a list of labels in the graph.
\dGv[+]
[ pattern ]
Displays a list of vertex labels in the graph.
\dGi[+]
[ pattern ]
Displays a list of property indexes in the graph.
\di[S+]
[ pattern ]
Displays a list of indexes.
\dl
This is an alias for \lo_list
that shows a list of large objects.
\dL[S+]
[ pattern ]
Lists the procedural languages. If pattern is specified, only the languages whose names match the pattern are listed. By default, only user-generated languages are displayed, and patterns or S qualifiers are provided to include system objects. When +
is added to the command name, languages are listed with their respective call handler, validator, access rights, and whether it is a system object.
\dn[S+]
[ pattern ]
Lists the schemas (namespaces). If pattern is specified, only the schemas with names matching the pattern are displayed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. When +
is added to the command name, each object is listed with its associated permissions and description (if any).
\do[S+]
[ pattern ]
Lists the operators, together with their operands and result types. If pattern is specified, only the operators whose names match the pattern are displayed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. When +
is appended to the command name, additional information about each operator is displayed (currently just the name of the underlying function).
\dO[S+]
[ pattern ]
Lists the collations. If pattern is specified, only collations whose names match the pattern are listed. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. When +
is added to the command name, each collation is listed with its associated description. Note that only collations usable with the current database's encoding are shown, so the results may differ in other databases of the same installation.
\dp
[ pattern ]
Lists access rights associated with tables, views, and sequences. If pattern is specified, only tables, views, and sequences with names that match the pattern are listed.
The GRANT and REVOKE commands are used to set access permissions. The meanings of the displayed permissions are described in GRANT.
\drds
[ role-pattern [ database-pattern ] ]
Lists the defined configuration settings. These settings can be role-specific, database-specific, or both. role-pattern and database-pattern are used to select specific roles and databases to list, respectively. If they are omitted, or if *
is specified, all settings are listed, including those that are not role-specific or database-specific, respectively.
The ALTER ROLE and ALTER DATABASE commands are used to define role and database specific configuration settings.
\dT[S+]
[ pattern ]
Lists the data types. If pattern is specified, only types with names matching the pattern are listed. When +
is added to the command name, each type is listed with its internal name and size, plus its allowed values if it is an enum type. By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects.
\du[S+]
[ pattern ]
Lists the database roles. (This command is equivalent to \dg
since the concepts of “user” and “group” were integrated into “role”). By default, only user-generated objects are displayed, and patterns or S qualifiers are provided to include system objects. If pattern is specified, only roles with names matching the pattern are shown. If you use the format \du+
, you will see additional information on each role (currently, this adds a description of each role).
\dx[+]
[ pattern ]
Lists installed extensions. If pattern is specified, only extensions whose names match the pattern are listed. If you use the format \dx+
, you will see all objects belonging to each matching extension.
\dy[+]
[ pattern ]
Lists the event triggers. If pattern is specified, only event triggers with names matching the pattern are listed. When +
is added to the command name, objects are listed with their associated descriptions.
\e
or \edit
[ filename ] [ line_number ]
If filename is specified, the file is edited; after the editor exits, its content is copied back to the query buffer. If no filename is given, the current query buffer is copied to a temporary file, which is then edited in the same fashion.
The new query buffer is then re-parsed according to the normal rules of agens, with the whole buffer treated as a single line. (Thus you cannot make scripts this way; use \i
for that.) This means that if the query ends with a semicolon, it is executed immediately. Otherwise it will merely wait in the query buffer; type a semicolon or \g
to send it, or \r
to cancel.
If you specify a line number, agens places the cursor on the specified line in the file or query buffer. If only one digit argument is given, agens regards it as a line number, not as a file name.
Tip: See the [Environment] below for editor configuration and custom instructions.
\echo
text [ … ]
Prints its arguments to standard output, separated by one space and followed by a newline. This can be useful for inserting information into the output of scripts. Here is an example:
db=# \echo `date`
Tue Oct 26 21:40:57 CEST 1999
If the first argument is an unquoted -n
, the trailing newline is not written.
Tip: If you use the
\o
command to redirect your query output, consider using \qecho
instead of this command.
\ef
[ function_description [ line_number ] ]
This command fetches and edits the definition of the named function, in the form of a CREATE OR REPLACE FUNCTION command. Editing is done in the same way as for \edit
. After the editor exits, the updated command waits in the query buffer; type a semicolon or \g
to send it, or \r
to cancel.
The target function can be specified by name alone or by name and arguments (e.g. foo(integer, text)
). If you have more than one function of the same name, you must specify the argument type.
If no function is specified, an empty CREATE FUNCTION template will be displayed for editing.
If a line number is specified, agens places the cursor on the specified line in the function body. (The function body does not usually start on the first line of the file.)
Tip: For more information on configuring/customizing the editor, see [Environment] below.
\encoding
[ encoding ]
Sets the client character set encoding. If there is no argument, this command displays the current encoding.
\ev
[ view_name [ line_number ] ]
This command fetches and edits the definition of the named view, in the form of a CREATE OR REPLACE VIEW command. Editing is done in the same way as for \edit
. After the editor exits, the updated command waits in the query buffer; type a semicolon or \g
to send it, or \r
to cancel.
If you do not specify a view, an empty CREATE VIEW template will be displayed for editing.
If a line number is specified, agens places the cursor on the specified line in the view definition.
\f
[ string ]
Sets the field separator for unaligned query output. The default is the vertical bar (|). For the usual way to set output options, see \pset
.
\g
[ filename ]
\g
[ |command ]
Sends the current query input buffer to the server and, optionally, stores the query output in filename or pipes the output to the shell command command. The file or command is written to only if the query successfully returns zero or more tuples, not if the query fails or is a non-data-returning SQL command.
\g
is essentially equal to a semicolon. \g
with an argument is a “one-shot” alternative to the \o
command.
\gexec
Sends the current query input buffer to the server and then processes each column of each row of the query output as an SQL statement to execute. For example, to create an index on each column of my_table
, you could do the following:
db=# SELECT format('create index on my_table(%I)', attname)
-# FROM pg_attribute
-# WHERE attrelid = 'my_table'::regclass AND attnum > 0
-# ORDER BY attnum
-# \gexec
CREATE INDEX
CREATE INDEX
CREATE INDEX
CREATE INDEX
The generated query is executed in the order that the rows are returned, and if there is more than one column, it runs from left to right within each row. NULL fields are ignored. As the generated query is sent to the server for literal processing, it cannot contain an agens meta-command or an agens variable reference. If individual queries fail and ON_ERROR_STOP
is not set, the remaining queries will continue to run. Execution of each query is subject to ECHO
processing. (If you use \gexec
, it is better to set ECHO
to all
or queries
.) Query logging, single-stage mode, timing, and other query execution features also apply to each query generated.
\gset
[ prefix ]
Sends the current query input buffer to the server and stores the query's output into agens variables (see [Variables]). The query to be executed must return exactly one row. Each column of the row is stored into a separate variable, named the same as the column. Here is an example:
db=# SELECT 'hello' AS var1, 10 AS var2
db-# \gset
db=# \echo :var1 :var2
hello 10
If prefix is specified, the corresponding string is appended to the column name of the query to create a variable name to use.
db=# SELECT 'hello' AS var1, 10 AS var2
db-# \gset result_
db=# \echo :result_var1 :result_var2
hello 10
If a column result is NULL, the corresponding variable is unset rather than set.
If the query fails or does not return a row, the variable is not changed.
\h
or \help
[ command ]
Provides syntax help for the specified SQL command. If command is not specified, agens lists all commands of which help information is available. If command is an asterisk(*
), it shows syntax help for all SQL commands.
Note: To simplify typing, commands that consist of several words do not have to be quoted. Thus it is fine to type
\help alter table
.
\H
or \html
Turns on the HTML query output format. If the HTML format is already on, it switches back to the default aligned text format. This command is provided for compatibility and convenience; refer to \pset
for other output option settings.
\i
or \include
filename
Reads input from the file filename and executes it as if it had been typed on the keyboard.
If filename is - (hyphen), standard input is read until an EOF indication or the meta-command \q
. This can be used to intersperse interactive input with input from files. Note that Readline behavior is used only if it is active at the outermost level.
Note: If you want to see the lines on the screen as they are read, you must set the variable
ECHO
to all
.
\ir
or \include_relative
filename
This command is similar to \i, but resolves relative file names differently. When executing in interactive mode, the two commands behave identically. However, when invoked from a script, \ir interprets file names relative to the directory in which the script is located, rather than relative to the current working directory.
\l[+]
or \list[+]
[ pattern ]
Lists the databases on the server and displays their names, owners, character set encodings, and access permissions. If pattern is specified, only databases whose names match the pattern are displayed. When + is added to the command name, the database size, default tablespace, and description are also displayed (size information is available only for databases that the current user can connect to).
\lo_export
loid filename
Reads the large object with OID loid from the database and writes it to filename. Note that this is subtly different from the server function lo_export
, which acts with the permissions of the user that the database server runs as, and on the server's file system.
Tip: Use
\lo_list
to find OID of the large object.
\lo_import
filename [ comment ]
Stores the file as an AgensGraph large object, optionally associating it with the given comment. Here is an example:
db=# \lo_import '/home/peter/pictures/photo.xcf' 'a picture of me'
lo_import 152801
The response indicates that the large object received object ID 152801, which can be used to access the newly created large object in the future. For maintainability, it is always a good idea to associate a human-readable comment with every object. Both OIDs and comments can be viewed with the \lo_list
command.
This command is subtly different from the server-side lo_import
because it acts as the local user on the local file system, rather than as the server's user and on the server's file system.
\lo_list
Displays a list of all AgensGraph large objects currently stored in the database and all annotations provided.
\lo_unlink
loid
Deletes a large object from the database using OID loid.
Tip: Use
\lo_list
to find OID of the large object.
\o
or \out
[ filename ]
\o
or \out
[ |command ]
Saves all future query results to the file filename or pipes future results to the shell command command. If no argument is specified, the query output is reset to standard output.
“Query result” includes the output of various backslash commands (e.g. \d
) querying the database as well as all tables, command responses, and notifications from the database server, except error messages.
Tip: Use
\qecho
to insert text output between query results.
\p
or \print
Prints the current query buffer to standard output.
\password
[ username ]
Changes the password of the specified user (default is current user). This command prompts for a new password, encrypts it, and sends it to the server with the ALTER ROLE command. The new password is not recorded in the command history, server log, or anywhere else.
\prompt
[ text ] name
Prompts the user to supply text, which is assigned to the variable name. An optional prompt string, text, can be specified. (For multiword prompts, enclose the text in single quotes.)
By default, \prompt
uses the terminal for input and output. However, if the -f
command-line switch was used, \prompt
uses standard input and standard output.
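For example, the following asks for a value interactively and then uses it in a query (the table and column names are illustrative):
db=# \prompt 'Minimum id? ' min_id
db=# SELECT * FROM my_table WHERE id >= :min_id;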
\pset
[ option [ value ] ]
This command sets options affecting the output of query result tables. option indicates which option is to be set. The meaning of value varies depending on the selected option. For some options, omitting value causes the option to be toggled or unset, as described under the particular option. If no such behavior is mentioned, then omitting value just results in the current setting being displayed.
\pset
without an argument displays the current status of all output options.
The adjustable print options are:
border
value must be a number. In general, the larger the number, the more borders and lines the tables have, but the details depend on the particular format. In HTML format, this is translated directly into the border=... attribute. In most other formats, only 0 (no border), 1 (internal dividing lines), and 2 (table frame) are meaningful, and values above 2 are treated the same as border=2. In the latex and latex-longtable formats, however, a value of 3 adds a dividing line between data rows.
columns
Sets the target width for the wrapped format, and also the width limit for determining whether output is wide enough to require the pager or to switch to the vertical display in expanded auto mode. Zero (the default) causes the target width to be controlled by the COLUMNS environment variable, or the detected screen width if COLUMNS is not set. In addition, if columns is zero, the wrapped format affects only screen output; if columns is nonzero, file and pipe output is wrapped to that width as well.
expanded (or x)
If value is specified, it must be either on or off (which enable or disable expanded mode) or auto. If value is omitted, the command toggles between the on and off settings. In expanded mode, query results are displayed in two columns, with the column name on the left and the data on the right. This mode is useful when the data does not fit the screen in the normal "horizontal" mode. In the auto setting, expanded mode is used whenever the query output has more than one column and is wider than the screen; otherwise, regular mode is used. The auto setting is only effective in the aligned and wrapped formats; other formats always behave as if expanded mode is off.
fieldsep
Specifies the field separator to be used in the unaligned output format. That way you can create, for example, tab- or comma-separated output that other programs can read. To set a tab as the field separator, type \pset fieldsep '\t'. The default field separator is '|' (a vertical bar).
fieldsep_zero
Sets the field separator for the unaligned output format to a zero byte.
footer
If value is specified, it must be either on or off, which enables or disables display of the table footer (the (n rows) count). If value is omitted, the command toggles footer display on or off.
format
Sets the output format to unaligned, aligned, wrapped, html, asciidoc, latex (uses tabular), latex-longtable, or troff-ms. Unique abbreviations are allowed (one letter is enough). The unaligned format writes all columns of a row on one line, separated by the currently active field separator. This is useful for creating output intended to be read by other programs (for example, tab-separated or comma-separated format). The aligned format is the standard, human-readable text output; this is the default. The wrapped format is like aligned, but wraps wide data values across lines to make the output fit the target column width. The target width is determined as described under the columns option. Note that agens does not attempt to wrap column header titles; therefore, the wrapped format behaves the same as aligned if the total width needed for column headers exceeds the target. The html, asciidoc, latex, latex-longtable, and troff-ms formats put out tables intended to be included in documents using the respective mark-up language; they are not complete documents. This might not be necessary for HTML, but in LaTeX you must have a complete document wrapper. latex-longtable also requires the LaTeX longtable and booktabs packages.
linestyle
Sets the border line drawing style to one of ascii, old-ascii, or unicode. Unique abbreviations are allowed (one letter is enough). The default setting is ascii. This option only affects the aligned and wrapped output formats. The ascii style uses plain ASCII characters. Newlines in data are shown using a + symbol in the right-hand margin. When the wrapped format wraps data from one line to the next without a newline character, a dot (.) is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line. The old-ascii style uses plain ASCII characters, with the formatting style used in earlier versions. Newlines in data are shown using a : symbol in place of the left-hand column separator; when data is wrapped from one line to the next without a newline character, a ; symbol is used in place of the left-hand column separator. The unicode style uses Unicode box-drawing characters. Newlines in data are shown using a carriage return symbol in the right-hand margin; when data is wrapped from one line to the next without a newline character, an ellipsis symbol is shown in the right-hand margin of the first line, and again in the left-hand margin of the following line. When the border setting is greater than zero, the linestyle option also determines the characters with which the border lines are drawn. Plain ASCII characters work everywhere, but Unicode characters look better on displays that recognize them.
null
Sets the string to be printed in place of a null value. The default is to print nothing, which can easily be mistaken for an empty string. For example, one might prefer \pset null '(null)'.
numericlocale
If value is specified, it must be either on or off, which enables or disables locale-specific numeric output. If value is omitted, the command toggles between regular and locale-specific numeric output.
pager
Controls use of a pager program for query and agens help output. If the environment variable PAGER is set, the output is piped to the specified program; otherwise a platform-dependent default (such as more) is used. When the pager option is off, the pager program is not used. When the pager option is on, the pager is used when appropriate, i.e., when the output is to a terminal and will not fit on the screen. The pager option can also be set to always, which causes the pager to be used for all terminal output regardless of whether it fits on the screen. \pset pager without a value toggles pager use on and off.
pager_min_lines
If pager_min_lines is set to a number greater than the page height, the pager program will not be called unless there are at least this many lines of output to show. The default setting is 0.
recordsep
Specifies the record (line) separator to use in the unaligned output format. The default is a newline character.
recordsep_zero
Sets the record separator for the unaligned output format to a zero byte.
tableattr (or T)
In HTML format, this specifies attributes to be placed inside the table tag (e.g. cellpadding or bgcolor). Note that you probably do not want to specify border here, as that is already taken care of by \pset border. If no value is given, the table attributes are unset. In latex-longtable format, this controls the proportional width of each column containing a left-aligned data type. It is specified as a whitespace-separated list of values, e.g. '0.2 0.2 0.6'. Unspecified output columns use the last specified value.
title (or C)
Sets the table title for any subsequently printed tables. This can be used to give your output descriptive tags. If no value is given, the title is unset.
tuples_only (or t)
If value is specified, it must be either on or off, which enables or disables tuples-only mode. If value is omitted, the command toggles between regular and tuples-only output. Regular output includes extra information such as column headers, titles, and various footers; in tuples-only mode, only the actual table data is shown.
unicode_border_linestyle
Sets the border drawing style for the unicode line style to either single or double.
unicode_column_linestyle
Sets the column drawing style for the unicode line style to either single or double.
unicode_header_linestyle
Sets the header drawing style for the unicode line style to either single or double.
Tip: There are various shortcut commands for
\pset
. Refer to \a, \C, \H, \t, \T, and \x.
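For example, to produce comma-separated output without headers or footers (a common way to export data from an interactive session; the query is illustrative), several of these options can be combined:
db=# \pset format unaligned
db=# \pset fieldsep ','
db=# \pset tuples_only on
db=# SELECT * FROM test;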
\q
or \quit
Exits agens. In the script file, only execution of the script is terminated.
\qecho
text [ … ]
This command is equivalent to \echo
, but the output is written to the query output channel as set in \o
.
\r
or \reset
Resets (clears) the query buffer.
\s
[ filename ]
Prints agens' command line history to filename. If filename is omitted, the history is written to standard output (using the pager if appropriate). This command is not available if agens was built without Readline support.
\set
[ name [ value [ … ] ] ]
Sets the agens variable name to value, or if more than one value is given, to the concatenation of all of them. If only one argument is given, the variable is set to an empty-string value. To unset a variable, use the \unset
command.
\set
without arguments shows all the names and values currently set in agens.
Valid variable names may include letters, numbers, and underscores. Refer to [Variables]. Variable names are case-sensitive.
Although you can set any variable to anything you want, agens treats a number of variables as special. These are explained in [Variables].
Note: This command has nothing to do with the SQL command SET.
\setenv
name [ value ]
Sets the environment variable name to value, or unsets the environment variable if no value is given. Here is an example:
db=# \setenv PAGER less
db=# \setenv LESS -imx4F
\sf[+]
function_description
This command fetches and displays the definition of the named function in the form of the CREATE OR REPLACE FUNCTION command. The definition is output to the current query output channel as set by \o
.
The target function can be specified by name alone or by name and argument (e.g. foo(integer, text)
). If you have more than one function with the same name, you must specify the argument type.
When +
is added to the command name, the output lines are numbered, with the first line of the function body being line 1.
\sv[+]
view_name
This command fetches and displays the definition of the named view in the form of the CREATE OR REPLACE VIEW command. The definition is output to the current query output channel as set by \o
.
When +
is added to the command name, the output lines are numbered from 1.
\t
Toggles the display of output column name headings and the row count footer. This command is equivalent to \pset tuples_only
and is provided for user convenience.
\T
table_options
Specifies attributes in the table tag in HTML format. This command corresponds to \pset tableattr
table_options.
\timing
[ on | off ]
Without a parameter, toggles the display of how long each SQL statement takes; with a parameter, sets the display on or off. The display is in milliseconds.
\unset
name
Releases (deletes) the agens variable name.
\w
or \write
filename
\w
or \write
| command
Writes the current query buffer to the file filename or pipes it to the shell command command.
\watch
[ seconds ]
Repeatedly executes current query buffer (such as \g
) until interrupted or until the query fails. Waits the specified number of seconds (default 2) between executions. Each query result is displayed with a header that includes the \pset
title string (if any), the time the query started, and the delay interval.
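For example, to re-run a count every 5 seconds until interrupted (the query is illustrative), enter the query without a terminating semicolon and follow it with \watch:
db=# SELECT count(*) FROM test
db-# \watch 5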
\x
[ on | off | auto ]
Sets or toggles the expanded table format mode. Equivalent to \pset expanded
.
\z
[ pattern ]
Lists access rights associated with tables, views, and sequences. If a pattern is specified, only tables, views, and sequences whose names match the pattern are listed.
This is an alias for \dp
(“display privileges”).
\!
[ command ]
Escapes to a separate shell or executes the shell command command. The arguments are not further interpreted; the shell sees them verbatim. In particular, variable substitution rules and backslash escapes do not apply.
\?
[ topic ]
Displays help information. The optional topic parameter (commands
by default) selects which part of agens is described: commands describes agens' backslash commands
. options
describes the command line options that can be passed to agens. variables
show help info for the agens’ configuration variables.
Patterns
The various \d
commands use the pattern parameter to specify the name of the object to be displayed. In the simplest case, a pattern is the exact name of the object. Characters in a pattern are usually converted to lowercase, as in SQL names. For example, \dt FOO
displays a table called foo
. Placing double quotes around a pattern prevents it from being folded to lowercase, just as in SQL names. If you need to include an actual double-quote character in the pattern, write two double quotes within a double-quoted portion; this is consistent with the rules for SQL quoted identifiers. For example, \dt "FOO""BAR"
displays a table named FOO"BAR
(not foo"bar
). Unlike the normal rules for SQL names, you can also put double quotes around just part of a pattern. For example, \dt FOO"FOO"BAR
shows a table named fooFOObar.
Whenever the pattern parameter is omitted altogether, the \d
command displays all objects that appear in the current schema search path. This is equivalent to using *
as a pattern. (An object is considered visible if its containing schema is in the search path and no object of the same kind and name appears earlier in the search path. This is equivalent to saying that the object can be referenced by name without explicit schema qualification.) To see all objects in the database regardless of visibility, use *.*
as the pattern.
Within a pattern, *
is an arbitrary sequence of characters (including no characters), and ?
matches any single character (this notation is similar to the Unix shell filename pattern). For instance, \dt int*
displays all tables whose names begin with int
. However, *
and ?
within double quotes lose these special meanings and are matched literally.
A pattern containing a dot (.) is interpreted as a schema name pattern and an object name pattern. For example, \dt foo*.*bar*
displays all tables whose table name contains bar and whose schema name starts with foo. When no dot appears, the pattern matches only objects that are visible in the current schema search path. Again, a dot within double quotes loses its special meaning and is matched literally.
Advanced users can use regular-expression notation, such as character classes (e.g. [0-9] to match any digit). All regular-expression special characters work as usual, with these exceptions: . is taken as a separator (as mentioned above), * is translated into the regular-expression notation .*, ? is translated into ., and $ is matched literally. You can imitate these pattern characters when needed by writing ? for ., (R+|) for R*, and (R|) for R?. $ is not needed as a regular-expression character because, unlike the usual interpretation of regular expressions, the pattern must match the whole name (in other words, $ is appended to the pattern automatically). If you do not want the pattern to be anchored, write *
at the start and/or end. Note that within double quotes all regular-expression special characters lose their special meaning and are matched literally. Regular-expression special characters are also matched literally in operator name patterns (e.g. the argument of \do
).
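As a small illustration (the table names are hypothetical), a character class can be combined with the usual wildcards:
db=# \dt log_[0-9]*
lists tables such as log_2021 or log_2022, i.e. tables whose names start with log_ followed by a digit.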
Advanced Features
Variables
Agens provides a variable substitution feature, which is similar to the normal Unix command shell. A variable is simply a name/value pair and can be a string of any length. The name must consist of letters (including non-Latin characters), numbers, and underscores.
To set a variable, use \set
, an agens meta-command. Here is an example:
db=# \set foo bar
This sets the variable "foo" to "bar". To retrieve the content of a variable, put a colon (:) before the name. For example:
db=# \echo :foo
bar
This works in both regular SQL commands and meta-commands. See [SQL Interpolation] below for more information.
If you call \set
without a second argument, the variable is set to an empty string value. To unset (delete) a variable, use the \unset
command. To display the values of all variables, call \set
with no arguments.
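For example (the variable name is arbitrary):
db=# \set mood happy
db=# \set mood
db=# \unset mood
db=# \set
The second \set sets mood to an empty string, \unset removes the variable, and the final \set lists all currently defined variables.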
A number of these variables are treated specially by agens. They indicate certain option settings that can be changed at run time by changing the variable's value, or in some cases represent a changeable state of agens. Using these variables for other purposes is possible but not recommended, as the program's behavior can become strange very quickly. By convention, all specially treated variables are named with uppercase ASCII letters (possibly with digits and underscores). To ensure maximum compatibility in the future, do not use these variable names for your own purposes. Here is a list of all specially treated variables.
AUTOCOMMIT
When this is on
(default), each SQL command is committed automatically once it completes successfully. To postpone the commit in this mode, enter a BEGIN
or START TRANSACTION SQL
command. If set to off
or unset, SQL commands are not committed until COMMIT
or END
is explicitly called.
Note: In autocommit-off mode, you should explicitly abort the failed transaction by entering
ABORT
or ROLLBACK
. Also, if you exit the session without committing, your work will be lost.
Note: The autocommit-on mode is the traditional behavior of AgensGraph, but autocommit-off is closer to the SQL specification. If you prefer autocommit-off, you can set it in the system-wide
psqlrc
file or your ~/.psqlrc
file.
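A minimal sketch, using a hypothetical table t:
db=# \set AUTOCOMMIT off
db=# INSERT INTO t VALUES (1);
db=# ROLLBACK;
With autocommit off, the INSERT is not committed automatically, so the ROLLBACK discards it.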
COMP_KEYWORD_CASE
Determines which letter case to use when completing an SQL keyword. When set to lower
or upper
, the completed word is shown in lowercase or uppercase, respectively. If set to preserve-lower
or preserve-upper
(default), the completed word keeps the case of what has already been typed, while words completed from nothing are shown in lowercase or uppercase, respectively.
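For example:
db=# \set COMP_KEYWORD_CASE upper
After this, pressing Tab to complete a partially typed keyword such as sel produces SELECT in uppercase.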
DBNAME
The name of the currently connected database. This is set every time you connect to a database (including program startup), but it can be unset.
ECHO
If set to all
, all non-empty input lines are printed to standard output as they are read (this does not apply to interactively-read lines). Use the switch -a
to select this behavior at program startup. If set to queries
, agens prints each query to standard output each time it is sent to the server (switch -e
). If set to error
, only the queries that caused an error will be displayed in the standard error output (switch -b
). If it is unset or set to none
(or any value other than above), the query will not be displayed.
ECHO_HIDDEN
If this variable is set to on
and a backslash command queries the database, the query is displayed first. This feature allows you to study AgensGraph internals and provide similar functionality in your own programs (use switch -E
to select this behavior at program startup). If you set the variable to a value of noexec
, the query will be displayed but will not be actually sent to the server for execution.
ENCODING
Character set encoding of the current client.
FETCH_COUNT
If this variable is set to an integer value > 0, the results of SELECT
queries are fetched and displayed in groups of that many rows, rather than the default behavior of collecting the entire result set before displaying it. Thus, only a limited amount of memory is used, regardless of the size of the result set. Settings of 100 to 1000 are commonly used when enabling this feature. Keep in mind that when using this feature, a query may fail after some rows have already been displayed.
Tip: Although this feature allows all output formats, the default aligned format tends to look bad because each group of
FETCH_COUNT
rows is formatted separately, leading to varying column widths across the row groups. The other output formats work better.
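For example (very_large_table is a hypothetical table):
db=# \set FETCH_COUNT 500
db=# SELECT * FROM very_large_table;
The rows are then fetched and displayed 500 at a time instead of being collected in memory first.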
HISTCONTROL
If this variable is set to ignorespace
, lines beginning with a space are not entered in the history list. If set to ignoredups
, lines that match the previous history lines are not entered. ignoreboth
combines these two options. If not set or set to none
(or a value other than above), all rows read in the interactive mode are stored in the history list.
Note: This is a feature from bash.
HISTFILE
The name of the file to use in order to save the history list. The default value is ~/.psql_history
. For example, putting
\set HISTFILE ~/.psql_history- :DBNAME
in ~/.psqlrc
makes agens maintain a separate history file for each database.
Note: This is a feature from bash.
HISTSIZE
The number of commands to store in the command history. The default value is 500.
Note: This is a feature from bash.
HOST
This is the database server host currently connected to. This is set every time you connect to a database (including program startup), but it can be unset.
IGNOREEOF
If unset, sending an EOF character (typically Control+D) to an interactive agens session terminates the application. If set to a numeric value, that many EOF characters are ignored before the application terminates. If the variable is set but has no numeric value, the default is 10.
Note: This is a feature from bash.
LASTOID
The last-affected OID value returned by INSERT or \lo_import
command. This variable is only valid until the result of the next SQL command is displayed.
ON_ERROR_ROLLBACK
If this is set to on
and a statement in a transaction block generates an error, the error is ignored and the transaction continues. If set to interactive, such errors are ignored only in interactive sessions, not when reading script files. If unset or set to off
, a statement in a transaction block that generates an error aborts the entire transaction. The error rollback mode works by issuing an implicit SAVEPOINT
immediately before each command in a transaction block and then rolling back to that savepoint if the command fails.
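A minimal sketch with a hypothetical table t:
db=# \set ON_ERROR_ROLLBACK on
db=# BEGIN;
db=# INSERT INTO t VALUES (1);
db=# SELECT 1/0;
db=# COMMIT;
The division-by-zero error is rolled back to the implicit savepoint, so the transaction continues and the INSERT is still committed.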
ON_ERROR_STOP
By default, command processing continues after an error. Setting this variable to on
makes processing stop immediately. In interactive mode, agens returns to the command prompt. Otherwise, agens terminates with error code 3, to distinguish this case from fatal error conditions, which are reported with error code 1. In both cases, any scripts that are currently running (the top-level script, if any, and any scripts it may have invoked) are terminated immediately. If the top-level command string contains multiple SQL commands, processing stops with the current command.
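For example, placing the following at the top of a script file (script.sql is a placeholder name) makes agens stop at the first failing command and exit with error code 3 when the script is run with -f:
\set ON_ERROR_STOP on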
PORT
This is the database server port you are currently connected to. This is set every time you connect to a database (including program startup), but it can be unset.
PROMPT1
PROMPT2
PROMPT3
Specifies how the agens results are displayed in the prompt. Refer to [Prompting] below.
QUIET
Setting this variable to on
is equivalent to the command line option -q
. This is not very useful in the interactive mode.
SHOW_CONTEXT
This variable can be set to never
, errors
, or always
to control whether CONTEXT fields are displayed in messages from the server. The default is errors
(the context is shown in error messages, but not in notice or warning messages). This setting has no effect when VERBOSITY is set to terse
(see \errverbose
for a more detailed version of the error you just received).
SINGLELINE
Setting this variable to on is equivalent to the command line option -S
.
SINGLESTEP
Setting this variable to on is equivalent to the command line option -s
.
USER
The currently connected database user. This is set every time you connect to a database (including program startup), but it can be unset.
VERBOSITY
This variable can be set to default
, verbose
, or terse
to control the details of the error report. See \errverbose
for a more detailed version of the error you just received.
SQL Interpolation
A key feature of agens variables is that they can be "interpolated" into ordinary SQL statements as well as meta-command arguments. In addition, agens provides facilities to ensure that variable values used as SQL literals and identifiers are quoted appropriately. The syntax for interpolating a value without any quoting is to prepend the variable name with a colon (:). For example:
db=# \set foo 'my_table'
db=# SELECT * FROM :foo;
would query the table my_table
. Note that this may be unsafe: the value of the variable is copied literally, so it can contain unbalanced quotes or even backslash commands. Make sure it makes sense where you put it.
When a value is to be used as an SQL literal or identifier, it is safest to arrange for it to be quoted. To quote a variable's value as an SQL literal, write a colon followed by the variable name in single quotation marks. To quote the value as an SQL identifier, write a colon followed by the variable name in double quotation marks. This construction correctly handles quotes and other special characters contained within the variable value. Here is the previous example, rewritten more safely:
db=# \set foo 'my_table'
db=# SELECT * FROM :"foo";
Variable interpolation is not performed within quoted SQL literals and identifiers. Therefore, a construction such as ':foo'
does not produce a quoted literal from the variable's value (and it would be unsafe if it did, since it would not correctly handle quotes embedded in the value).
One example of using this mechanism is to copy a file’s content into a table column. First, load the file into a variable and then interpolate the variable’s value into a quoted string.
db=# \set content `cat my_file.txt`
db=# INSERT INTO my_table VALUES (:'content');
(This function does not work if my_file.txt
contains NUL bytes. agens does not support embedded NUL bytes in variable values.)
Since a colon can legitimately appear in an SQL command, an apparent attempt at interpolation (i.e. :name, :'name', or :"name") is not replaced unless the named variable is currently set. In any case, you can escape a colon with a backslash to protect it from substitution.
The colon syntax for variables is standard SQL for embedded query languages such as ECPG. The colon syntax for array slices and type casts is an AgensGraph extension and can sometimes conflict with the standard usage. The colon-quote syntax for escaping a variable's value as an SQL literal or identifier is an agens extension.
Prompting
The prompts agens issues can be customized by the user. The three variables PROMPT1
, PROMPT2
, and PROMPT3
are strings and escape sequences that describe the appearance of the prompt. PROMPT1 is the normal prompt issued when agens requests a new command. PROMPT2 is issued when more input is expected during command entry, for example because the command was not terminated with a semicolon or a quote was left unclosed. PROMPT3 is issued when the SQL COPY FROM STDIN
command is being executed and the row value must be entered into the terminal.
The value of the selected prompt variable is printed literally, except where a percent sign (%
) is encountered. Depending on the next character, certain other text is substituted instead. The defined substitutions are:
%M
The full host name (with domain name) of the database server, or [local] if the connection is over a Unix domain socket, or [local:/dir/name] if the Unix domain socket is not in the default location.
%m
The host name of the database server, truncated at the first dot, or [local] if the connection is over a Unix domain socket.
%>
The port number on which the database server is listening.
%n
The database session user name. (The expansion of this value might change during a database session as the result of the command SET SESSION AUTHORIZATION
.)
%/
The name of the current database.
%~
Like %/, but the output is ~ (tilde) if the database is your default database.
%#
If the session user is a database superuser, then a #, otherwise a >. (The expansion of this value might change during a database session as the result of the command SET SESSION AUTHORIZATION.)
%p
The process ID of the backend currently connected to.
%R
In prompt 1 normally =
, but ^
if in single-line mode, or !
if the session is disconnected from the database (which can happen if \connect
fails).
In prompt 2, %R is replaced by a character that depends on why agens expects more input: -
if the command simply wasn't terminated yet, but *
if there is an unfinished /* ... */
comment, a single quote if there is an unfinished quoted string, a double quote if there is an unfinished quoted identifier, a dollar sign if there is an unfinished dollar-quoted string, or (
if there is an unmatched left parenthesis. In prompt 3, %R does not produce anything.
%l
The line number inside the current statement, starting from 1.
A prompt may include, for instance, terminal control characters that change the color, background, or style of the prompt text, or change the title of the terminal window. In order for Readline's line editing to work properly, these non-printing control characters must be enclosed in %[ and %] so that they are treated as invisible. Multiple pairs of these may occur within the prompt.
Here is an example:
db=# \set PROMPT1 '%[%033[1;33;40m%]%n@%/%R%[%033[0m%]%# '
A bold (1;), yellow-on-black (33;40) prompt appears on a VT100 compatible, color-capable terminal.
Use %%
to insert a percentage sign at the prompt. The default prompt is %/%R%#
for prompts 1 and 2 and >>
for prompt 3.
Note: This is a feature that comes from tcsh.
Command-Line Editing
Agens uses the Readline library for convenient line editing and history searching. The command history is automatically saved when you exit agens and reloaded when you (re)start it. Tab completion is also supported, although the completion logic makes no claim to be an SQL parser. Queries generated by tab completion may also interfere with other SQL commands (such as SET TRANSACTION ISOLATION LEVEL). If you do not like the tab completion, you can disable it by putting the following in a file named .inputrc
in your home directory:
$if agens
set disable-completion on
$endif
(This is a feature of Readline, not agens. Refer to the corresponding documentation for details.)
Environment
COLUMNS
If \pset
columns is zero, this controls the width for the wrapped
format and the width used to determine whether wide output requires the pager or should be switched to the vertical format in expanded auto mode.
PAGER
If the query results do not fit on the screen, they are piped through this command. Typical values are more
or less
. The default value depends on the platform. You can disable PAGER
by setting it to blank or using pager-related options of the \pset
command.
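For example, to page long results through less, you could set the variable when starting agens (mydb is a placeholder database name):
PAGER=less agens mydb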
PGDATABASE
PGHOST
PGPORT
PGUSER
Basic connection parameters (Reference).
PSQL_EDITOR
EDITOR
VISUAL
An editor used for \e
, \ef
and \ev
commands. These variables are examined in the order listed; the first one that is set is used.
The built-in default editor is vi
on Unix systems and notepad.exe
on Windows systems.
PSQL_EDITOR_LINENUMBER_ARG
When using \e
, \ef
, and \ev
with a line number argument, this variable specifies the command-line argument used to pass the starting line number to the user's editor. For editors such as Emacs or vi, this is a plus (+) sign. If a space is required between the option name and the line number, include a trailing space in the value of the variable. Here is an example:
PSQL_EDITOR_LINENUMBER_ARG='+'
PSQL_EDITOR_LINENUMBER_ARG='--line '
On Unix systems, the default is +
(this corresponds to the default editor vi
and is useful for many other common editors). However, there is no default on Windows systems.
PSQL_HISTORY
Alternative location for the command history file. A tilde (~
) expansion is performed.
PSQLRC
Alternative location of the user's .psqlrc
file. A tilde (~
) expansion is performed.
SHELL
The shell used by the \!
command.
TMPDIR
Directory for storing temporary files (the default is /tmp
). This utility, like most other AgensGraph utilities, also uses the environment variables supported by libpq.
Files
psqlrc
and ~/.psqlrc
Agens reads and executes commands from the system-wide startup file (psqlrc
) and your personal startup file (~/.psqlrc
) before connecting to the database and accepting normal commands unless you specify the -X
option. These files are typically used to set up clients and/or servers using the \set
and SET
commands.
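For instance, a personal ~/.psqlrc file might contain lines like the following (the settings shown are only illustrative):
\set HISTSIZE 2000
\set AUTOCOMMIT off
\pset border 2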
The system-wide startup file is named as psqlrc
and can be found in the “System Configuration” directory of the installation. This directory is most reliably identified by running pg_config --sysconfdir
. By default, this directory is ../etc/
relative to the directory containing the AgensGraph executable(s). The name of this directory can be explicitly set via the PGSYSCONFDIR
environment variable.
The user's personal startup file is named .psqlrc
and is looked for in the invoking user's home directory. On Windows, which has no such concept, the personal startup file is %APPDATA%\postgresql\psqlrc.conf
. The location of the user startup file can be explicitly set via the PSQLRC
environment variable.
.psql_history
Command line history is stored in ~/.psql_history
or %APPDATA%\postgresql\psql_history
file on Windows.
The location of the history file can be explicitly set via the PSQL_HISTORY environment variable.
Notes
Agens works best with servers of the same or an older major version. Backslash commands are particularly likely to fail if the server is newer than agens itself. The general ability to execute SQL commands and display query results should also work with servers of a newer major version, but this cannot be guaranteed in every case. If you want to use agens to connect to several servers of different major versions, it is best to use the newest version of agens. Alternatively, you can keep a copy of agens from each major version and use the version that matches the respective server.
Notes for Windows Users
Agens is built as a "console application". Since the Windows console window uses an encoding different from the rest of the system, be especially careful when using 8-bit characters in agens. If agens detects a problematic console code page, it displays a warning message at startup. Two things are needed to change the console code page:
Set the code page by entering
cmd.exe /c chcp 1252
(1252 is a code page suitable for German; replace it with an appropriate code page). If you are using Cygwin, you can put this command in /etc/profile.
Since raster fonts do not work with the ANSI code page, set the console font to
Lucida Console
.
Examples
The first example (below) shows how to spread a command over several lines of input. Notice the changing prompt.
agens=# CREATE TABLE my_table (
agens(# first integer not null default 0,
agens(# second text)
agens-# ;
CREATE TABLE
Check the definition of the table.
agens=# \d my_table
Table "my_table"
Attribute | Type | Modifier
-----------+---------+--------------------
first | integer | not null default 0
second | text |
You can change the prompt as follows:
agens=# \set PROMPT1 '%n@%m %~%R%# '
tester@[local] agens=#
Populate the table with data and query the table.
tester@[local] agens=# SELECT * FROM my_table;
first | second
-------+--------
1 | one
2 | two
3 | three
4 | four
(4 rows)
You can use the \pset
command to display tables in different ways.
tester@[local] agens=# \pset border 2
Border style is 2.
tester@[local] agens=# SELECT * FROM my_table;
+-------+--------+
| first | second |
+-------+--------+
| 1 | one |
| 2 | two |
| 3 | three |
| 4 | four |
+-------+--------+
(4 rows)
tester@[local] agens=# \pset border 0
Border style is 0.
tester@[local] agens=# SELECT * FROM my_table;
first second
----- ------
1 one
2 two
3 three
4 four
(4 rows)
tester@[local] agens=# \pset border 1
Border style is 1.
tester@[local] agens=# \pset format unaligned
Output format is unaligned.
tester@[local] agens=# \pset fieldsep ,
Field separator is ",".
tester@[local] agens=# \pset tuples_only
Showing only tuples.
tester@[local] agens=# SELECT second, first FROM my_table;
one,1
two,2
three,3
four,4
Or you can use a short command.
tester@[local] agens=# \a \t \x
Output format is aligned.
Tuples only is off.
Expanded display is on.
tester@[local] agens=# SELECT * FROM my_table;
-[ RECORD 1 ]-
first | 1
second | one
-[ RECORD 2 ]-
first | 2
second | two
-[ RECORD 3 ]-
first | 3
second | three
-[ RECORD 4 ]-
first | 4
second | four
If appropriate, you can use the \crosstabview
command to display the query results in a crosstab representation.
db=# SELECT first, second, first > 2 AS gt2 FROM my_table;
first | second | gt2
-------+--------+-----
1 | one | f
2 | two | f
3 | three | t
4 | four | t
(4 rows)
db=# \crosstabview first second
first | one | two | three | four
-------+-----+-----+-------+------
1 | f | | |
2 | | f | |
3 | | | t |
4 | | | | t
(4 rows)
This second example shows a multiplication table with rows sorted in reverse numerical order and columns in independent, ascending numerical order.
agens=# SELECT t1.first as "A", t2.first+100 AS "B", t1.first*(t2.first+100) as "AxB",
agens-# row_number() over(order by t2.first) AS ord
agens-# FROM my_table t1 CROSS JOIN my_table t2 ORDER BY 1 DESC
agens-# \crosstabview "A" "B" "AxB" ord
A | 101 | 102 | 103 | 104
---+-----+-----+-----+-----
4 | 404 | 408 | 412 | 416
3 | 303 | 306 | 309 | 312
2 | 202 | 204 | 206 | 208
1 | 101 | 102 | 103 | 104
(4 rows)
pg_dump
pg_dump
is a utility that backs up an AgensGraph database. It makes consistent backups even while the database is in use. pg_dump
writes its output to a script file or one of several archive file formats.
Here is how to use:
pg_dump [option]... [dbname]
Options
dbname
Specifies the name of the database to dump. If this is not specified, the PGDATABASE environment variable is used; if that is not set, the user name specified for the connection is used.
-a
–data-only
Dumps only data, not schema (data definition).
-b
–blobs
Includes large objects in the dump. This is the default behavior unless --schema
, --table
or --schema-only
is specified. Therefore, the -b
switch is only useful for adding large objects to dumps where a specific schema or table has been requested. Note that blobs are regarded as data and are therefore included when --data-only
is used, but not when --schema-only
is used.
-c
–clean
Prints a command to clean up (delete) database objects before issuing a command to create a database object. (If --if-exists
is not specified and the object does not exist in the target database, a false error message may be generated.)
This option is meaningful only in plain text format. For archive formats, you can specify options when you call
pg_restore
.
-C
–create
Starts the output with a command that creates the database itself and reconnects to the created database. (With a script of this form, it does not matter which database in the target installation you connect to before running the script.) If --clean
is also specified, the script deletes the target database and then reconnects.
This option is meaningful only in plain text format. For archive formats, you can specify options when you call
pg_restore
.
-E encoding
–encoding=encoding
Creates a dump with the specified character set encoding. By default, dumps are written in database encodings. (Another way to achieve the same result is to set the PGCLIENTENCODING
environment variable to the desired dump encoding.)
-f file
–file=file
Sends the output to the specified file. This parameter can be omitted in a file-based output format where standard output is used. However, it should be provided for a directory output format that specifies a target directory instead of a file. In such a case, a directory (that has not existed) is created by pg_dump
.
-F format
–format=format(c|d|t|p)
Selects an output format. format can be one of the following:
p
plain
Outputs a plain-text SQL script file (default).
c
custom
Outputs a custom-format archive suitable for input to pg_restore
. Along with the directory output format, this is the most flexible output format in that it allows manual selection and reordering of archived items during a restore. This format is compressed by default.
d
directory
Outputs a directory-format archive suitable for input to pg_restore
. This creates a directory with one file for each table and blob being dumped, plus a so-called Table of Contents file describing the dumped objects in a machine-readable format that pg_restore
can read. Directory-format archives can be manipulated with standard Unix tools; for instance, files in an uncompressed archive can be compressed with gzip. This format is compressed by default and also supports parallel dumps.
t
tar
Outputs a tar-format archive suitable for input to pg_restore
. The tar
format is compatible with the directory format: extracting a tar-format archive produces a valid directory-format archive. However, the tar format does not support compression, and the relative order of table data items cannot be changed during a restore.
-j njobs
–jobs=njobs
Runs the dump in parallel by dumping njobs tables at the same time. This option reduces the dump time, but also increases the load on the database server. This option can be used only with the directory output format, because that is the only output format in which multiple processes can write their data at the same time.
Since pg_dump
will open an njobs+1 connection to the database, the max_connections setting should be high enough to accommodate all connections.
Requesting exclusive locks on database objects while a parallel dump is running can cause the dump to fail. The reason is that the pg_dump
master process requests shared locks on the objects that the worker processes will dump later, to make sure that nobody deletes them while the dump is running. If another client then requests an exclusive lock on a table, that lock is not granted but is queued waiting for the shared lock of the master process to be released. Consequently, any other access to the table is not granted either and queues after the exclusive lock request. This includes the worker process trying to dump the table. Without any precautions, this would be a classic deadlock. To detect this conflict, the pg_dump
worker process requests another shared lock using the NOWAIT option. If the worker process is not granted this shared lock, somebody else must have requested an exclusive lock in the meantime; since there is no way to continue the dump, pg_dump
has no choice but to abort it.
For consistent backup, the database server must support synchronized snapshots. With this feature, database clients can see the same dataset even if they use different connections. pg_dump -j
uses multiple database connections: it connects once with the master process and once again for each worker job. Without the synchronized snapshot feature, the worker jobs would not be guaranteed to see the same data on each connection, which could result in an inconsistent backup.
-n schema
–schema=schema
Dumps only schemas matching the schema pattern. This selects both the schema itself and all the objects it contains. If this option is not specified, all non-system schemas in the target database are dumped. Multiple schemas can be selected by writing multiple -n
switches. The schema parameter is interpreted as a pattern according to the same rules used by agens' \d
commands, so multiple schemas can also be selected by using wildcard characters in the pattern. When using wildcards, quote the pattern to prevent the shell from expanding the wildcards (refer to [Examples]).
Notice: If
-n
is specified, pg_dump
will not dump any other database objects that the selected schemas might depend on. Therefore, there is no guarantee that the result of a schema-only dump can be successfully restored by itself into a clean database.
Notice: Non-schema objects such as blob are not dumped if
-n
is specified. You can add blob back to the dump using the--blobs
switch.
-N schema
–exclude-schema=schema
Do not dump schema that matches the schema pattern. Patterns are interpreted according to the same rules as -n
. -N
can be used more than once to exclude schemas that match one of several patterns.
Given both -n
and -N
, the behavior is to dump just the schemas that match at least one -n
switch but no -N
switch. If -N
appears without -n
, schemas that match -N
are excluded from normal dump.
-o
–oids
Dumps object identifiers (OIDs) as part of the data for every table. Use this option if your application references the OID columns in some way (e.g. in a foreign key constraint). Otherwise, this option should not be used.
-O
–no-owner
Do not print commands that set ownership of an object to match the original database. By default, pg_dump
issues an ALTER OWNER
or SET SESSION AUTHORIZATION
statement to set ownership of the created database objects. These statements will fail when the script is run unless it is started by a superuser (or the same user who owns all objects in the script). Specify -O to create a script that can be restored by any user, but that grants that user ownership of all the objects.
This option is meaningful only in plain text format. For archive formats, you can specify options when you call
pg_restore
.
-s
–schema-only
Dumps object definitions (schemas) only.
This option is the opposite of --data-only
. This is similar to --section=pre-data --section=post-data
.
(Do not confuse this with the --schema
option, which uses the word “schema” in other ways.)
To exclude table data only for a subset of the tables in the database, see --exclude-table-data
.
-S username
–superuser=username
Specifies the superuser name to use when disabling triggers. This is relevant only if --disable-triggers
is used (generally it is better to leave this out and instead run the resulting script as a superuser).
-t table
–table=table
Dumps only tables whose names match the table pattern. For this purpose, "table" includes views, materialized views, sequences, and foreign tables. Multiple tables can be selected by writing multiple -t
switches. The table parameter is interpreted as a pattern according to the same rules used by agens' \d
commands, so multiple tables can also be selected by using wildcard characters in the pattern. Quote the pattern to prevent the shell from expanding the wildcards (refer to [Examples]).
The -n
and -N
switches do not work when using -t
, because the tables selected with -t
are dumped regardless of the switches and non-table objects are not dumped.
Notice: If you specify
-t
, pg_dump will not attempt to dump any other database objects that the selected tables depend on. Therefore, there is no guarantee that the result of a specific-table dump can be successfully restored by itself into a clean database.
-T table
–exclude-table=table
Do not dump tables that match the table pattern. This pattern is interpreted according to the same rules as -t
. -T
can be given more than once to exclude tables that match any of several patterns.
When both -t
and -T
are given, the behavior is to dump just the tables that match at least one -t
switch but no -T
switch. If -T
is used without -t
, tables that match -T
are excluded from normal dumps.
-v
–verbose
Specifies verbose mode. This causes pg_dump to output detailed object comments and start/stop times to the dump file, and progress messages to standard error.
-V
–version
Displays the version of pg_dump and exits.
-x
–no-privileges
–no-acl
Prevents dumping of access privileges (grant/revoke command).
-Z 0..9
–compress=0..9
Specifies the compression level to use. Zero means no compression. For the custom archive format, this specifies compression of individual table-data segments; the default is to compress at a moderate level. For plain-text output, setting a nonzero compression level causes the entire output file to be compressed, as though it had been fed through gzip (the default is no compression). The tar archive format currently does not support compression at all.
–binary-upgrade
This option is for use by in-place upgrade utilities. Using it for other purposes is not recommended or supported. The behavior of this option may change in future releases without notice.
–column-inserts
Dumps data into an INSERT
statement with an explicit column name (INSERT INTO table (column, …) VALUES …). This will greatly slow down the restoration. This option is primarily useful for creating dumps that can be loaded into a non-Agens database. This option generates a separate command for each row, so if an error occurs while reloading a row, only that row (not the entire table content) is lost.
–disable-dollar-quoting
This option disables the use of dollar quoting for function bodies and forces them to be quoted using SQL standard string syntax.
–disable-triggers
This option is relevant only when creating a data-only dump. Instructs pg_dump
to include commands to temporarily disable triggers on the target tables while data is being reloaded. Use this option if you have referential integrity checks or other triggers on the tables that you do not want to invoke while reloading data.
Currently, the commands generated for --disable-triggers
must be run by a superuser. Therefore, you should either specify a superuser name with -S, or preferably be careful to start the resulting script as a superuser.
This option is meaningful only in plain text format. For archive formats, you can specify options when you call
pg_restore
.
–enable-row-security
This option is relevant only when dumping the content of a table with row security. By default, pg_dump
sets
row_security
to off so that all data is dumped from the table. An error occurs if the user does not have sufficient privileges to bypass row security. This parameter allows pg_dump
to dump some of the user-accessible table content by setting row_security
to on.
–exclude-table-data=table
Do not dump data for a table that matches the table pattern. --exclude-table-data
can be supplied more than once to exclude tables that match any of several patterns. This option is useful when you need the definition of a particular table even though you do not need the data in it.
To exclude data for all tables in the database, refer to --schema-only
.
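For example, one might dump a database while skipping the data of audit tables (the table names here are hypothetical):
pg_dump --exclude-table-data='audit_*' mydb > db.sql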
–if-exists
Use conditional commands (such as the IF EXISTS clause) to clean up database objects. This option is not valid unless --clean
is specified.
–inserts
Dumps data with INSERT
command instead of COPY
. This will greatly slow down restoration. This option is primarily useful for creating dumps that can be loaded into non-Agens databases. This option generates a separate command for each row, so if an error occurs while reloading a row, only that row (not the entire table content) is lost. Note that the restore may fail altogether if the column order has been rearranged. The --column-inserts
option is safe against column-order changes, though even slower.
–lock-wait-timeout=timeout
Does not wait forever to acquire the shared table lock at the beginning of dump. It will fail if the table cannot be locked within the specified timeout. The timeout can be specified in the format allowed by the SET statement_timeout
(allowed values are integers in milliseconds).
–no-security-labels
Does not dump the security label.
–no-synchronized-snapshots
This option allows you to run pg_dump -j
. Refer to the description of the -j
parameter for details.
–no-tablespaces
Does not print commands to select tablespaces. With this option, all objects are created in whichever tablespace is the default during the restore.
This option is meaningful only in plain text format. For archive formats, you can specify options when you call
pg_restore
.
–no-unlogged-table-data
Does not dump the contents of unlogged tables. This option has no effect on whether the table definitions (schemas) are dumped; it only suppresses dumping the table data. Data in unlogged tables is always excluded when dumping from a standby server.
–quote-all-identifiers
Forces all identifiers to be quoted. This option is recommended when dumping a database from a server whose AgensGraph major version differs from that of pg_dump
, or when the output is intended to be loaded into a server of a different major version. By default, pg_dump
quotes only identifiers that are reserved words in its own major version. This can cause compatibility problems when dealing with servers of other versions that may have slightly different sets of reserved words. Using --quote-all-identifiers
prevents such issues, at the price of a harder-to-read dump script.
–section=section
Dumps only the named section. A section can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to dump all sections.
The data section contains actual table data, large-object contents, and sequence values. Post-data items include definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items include all other data definition items.
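For instance, to restore table definitions first and defer indexes and constraints, the dump could be split by section (the file names are illustrative):
pg_dump --section=pre-data mydb > pre.sql
pg_dump --section=data mydb > data.sql
pg_dump --section=post-data mydb > post.sql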
–serializable-deferrable
Uses a serializable
transaction for the dump, to ensure that the snapshot used is consistent with later database states. It does this by waiting for a point in the transaction stream at which no anomalies can be present, so that there is no risk of the dump failing or of other transactions being rolled back with a serialization_failure.
This option is not beneficial for a dump that is intended only for disaster recovery. It can be useful for a dump used to load a copy of the database for reporting or other read-only load sharing while the original database continues to be updated. Without it, the dump may reflect a state that is inconsistent with any serial execution of the transactions eventually committed. For instance, if batch processing techniques are used, a batch may show as closed in the dump without all of the items in the batch appearing.
This option makes no difference if there is no active read/write transaction when pg_dump
starts. If read-write transactions are active, the start of the dump may be delayed for an indeterminate length of time. Once running, performance with or without the switch is the same.
–snapshot=snapshot
Uses a specified synchronized snapshot when dumping the database.
This option is useful when you need to synchronize a dump with a logical replication slot or a concurrent session.
For a parallel dump, the snapshot name defined by this option is used instead of creating a new snapshot.
–strict-names
Requires that each schema (-n
/--schema
) and table (-t
/--table
) qualifier matches at least one schema/table in the database to be dumped. Note that if none of the schema/table qualifiers match, pg_dump generates an error even without
--strict-names
.
This option does not affect -N
/--exclude-schema
, -T
/--exclude-table
, --exclude-table-data
. An exclusion pattern that does not match an object is not regarded as an error.
–use-set-session-authorization
Prints the SQL standard SET SESSION AUTHORIZATION
command, instead of the ALTER OWNER
command, to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, it might not restore properly. In addition, a dump using SET SESSION AUTHORIZATION
will certainly require superuser privileges to restore correctly, whereas ALTER OWNER requires lesser privileges.
-?
–help
Displays help for pg_dump command line arguments and exits.
The following command-line options control database connection parameters.
-d dbname
–dbname=dbname
Specifies the name of the database to connect to. This is equivalent to specifying dbname as an argument, not the first option on the command line.
If this parameter has an equal (=) sign or starts with a valid URI prefix (postgresql:// or postgres://), it is processed as a conninfo string.
-h host
–host=host
Specifies the host name of the system on which the server is running. If the value starts with a slash, it is used as the directory for Unix domain sockets. The default value is taken from the PGHOST environment variable (if set). Otherwise, a Unix domain socket connection is attempted.
-p port
–port=port
Specifies the TCP port or local Unix domain socket file extension on which the server listens for connections. The default is taken from the PGPORT environment variable, if set; otherwise the compiled-in default is used.
-U username
–username=username
The user name to connect as.
-w
–no-password
Does not prompt for a password. If the server requires password authentication and cannot use the password in any other way, such as in a .pgpass
file, the connection attempt will fail. This option can be useful in batch jobs and scripts where there is no user who may enter a password.
-W
–password
Forces pg_dump
to prompt for a password before connecting to the database.
As pg_dump
automatically prompts for a password when the server requires password authentication, this option is never essential. However, pg_dump
will waste a connection attempt finding out that the server wants a password. In some cases, it is worth typing -W to avoid the extra connection attempt.
–role=rolename
Specifies the role name to be used to create a dump. With this option, pg_dump
issues the SET ROLE rolename command after connecting to the database. This is useful when an authenticated user (specified by -U) lacks the privileges required by pg_dump
, but can switch to a role with the required privileges. If you have a policy that prevents logging in directly as superuser, you may use this option to generate the dump without violating the policy.
Examples
To dump a database called mydb
into a SQL script file:
pg_dump mydb > db.sql
To reload a script into a newly created database named newdb
:
agens -d newdb -f db.sql
To dump a database to a custom-format archive file:
pg_dump -Fc mydb > db.dump
To dump a database to a directory format archive:
pg_dump -Fd mydb -f dumpdir
To dump a database into a directory format archive in parallel with five worker jobs:
pg_dump -Fd mydb -j 5 -f dumpdir
To reload an archive file into a newly created database called newdb
:
pg_restore -d newdb db.dump
To dump a single table called mytab
:
pg_dump -t mytab mydb > db.sql
To dump all the tables in the detroit
schema that start with the name emp
except for the table named employee_log
:
pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql
To dump all schemas whose names start with east
or west
and end with gsm
, and exclude schemas whose names contain test
:
pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql
or
pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql
To dump all database objects except tables whose names start with ts_
:
pg_dump -T 'ts_*' mydb > db.sql
To specify an uppercase or mixed-case name in -t
and related switches, you must enclose the name in double quotes; otherwise it is folded to lowercase. However, since double quotes are special to the shell, they must in turn be quoted. Thus, to dump a single table with a mixed-case name, you need something like the following:
pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql
pg_restore
pg_restore
is a utility that restores an AgensGraph database from an archive created by pg_dump
. It issues the commands necessary to reconstruct the database to the state it was in at the time it was saved. The archive files also allow
pg_restore
to be selective about what is restored, or even to reorder the items before they are restored. The archive files are designed to be portable across architectures.
pg_restore
can operate in two modes. If a database name is specified, pg_restore
connects to the specified database and restores the archived content directly to the database. Otherwise, a script containing the SQL commands required to rebuild the database is written and saved (recorded) to a file or standard output. This script output is identical with the plain text output format of pg_dump
. Thus, some of the options that control output are similar to the pg_dump
option.
Of course, pg_restore
cannot restore information that is not in the archive file. For example, if the archive was created using the "dump data as INSERT commands" option, pg_restore
cannot load data using the COPY statement.
Here is how to use:
pg_restore [OPTION]... [FILE]
Options
pg_restore
accepts the following command line arguments:
filename
Specifies the location of the archive file to be restored (or directory for directory-format archives). If not specified, standard input is used.
-a
–data-only
Restores only the data, not the schema (data definitions). Table data, large objects, and sequence values are restored, if present in the archive.
This option is similar to specifying --section=data
.
-c
–clean
Drops a database object before regenerating it. (If you do not use --if-exists
, a harmless error message may be generated when there is no object in the target database.)
-C
–create
Creates the database before restoring into it. If --clean
is also specified, drops and recreates the target database before connecting to it.
When this option is used, the database named with -d
is used only to execute the initial DROP DATABASE
and CREATE DATABASE
commands. All data is restored to the database name that appears in the archive.
-d dbname
–dbname=dbname
Connects to the database dbname and restores directly into that database.
-e
–exit-on-error
Exits if an error occurs while sending SQL commands to the database. The default is to continue and display a count of errors at the end of the restore.
-f filename
–file=filename
Specifies the output file of the generated script or, if used with -l
, specifies the file to list. The default is standard output.
-F format
–format=format
Specifies the archive format. Specifying this format is not required as pg_restore
automatically determines it. If specified, it can be one of the following:
c
custom
The custom archive format of pg_dump
.
d
directory
A directory-format archive.
t
tar
A tar-format archive.
-I index
–index=index
Only definitions of the named indexes are restored. Multiple indexes can be specified with multiple -I switches.
-j number-of-jobs
–jobs=number-of-jobs
Executes the most time-consuming parts of pg_restore
— loading data, creating indexes, and creating constraints — using multiple concurrent jobs. This option can dramatically reduce the time to restore a large database to a server running on a multiprocessor machine.
Each task is a process or thread, depending on the operating system, and uses a separate connection to the server.
The optimal value for this option depends on the hardware configuration of the server, the client, and the network (e.g. the number of CPU cores and the disk setup). A good starting point is the number of CPU cores on the server; values larger than that can also shorten the restore time in many cases, but values that are too high degrade performance because of thrashing.
Only the custom and directory archive formats are supported with this option. The input must be a regular file or directory. This option is ignored when emitting a script rather than connecting directly to the database server. You cannot use multiple jobs together with the --single-transaction
option.
-l
–list
Lists the contents of the archive. The output of this operation can be used as input to the -L
option. Use a filtering switch such as -n
or -t
with -l
to restrict the listed items.
-L list-file
–use-list=list-file
Restores only the archive elements in list-file in the order they are listed in the file. If you use a filtering switch such as -n
or -t
with -L
, you further restrict what is restored.
list-file is usually created by editing the output of the previous -l
operation. Lines can be moved or removed, and can also be commented out by placing a semicolon (;) at the start of the line.
-n namespace
–schema=schema
Restores only the objects in the named schema. Multiple schemas can be specified with a multiple -n
switch. You can combine this option with -t
to restore only specific tables.
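A minimal sketch, assuming the archive db.dump contains a schema named public with a table named my_table (both names are placeholders):
pg_restore -d newdb -n public -t my_table db.dump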
-O
–no-owner
Does not print commands that set ownership of the objects so that they match the original database. By default, pg_restore
issues ALTER OWNER
or SET SESSION AUTHORIZATION
statements to set ownership of the created schema elements. These statements fail unless the initial connection to the database is made by a superuser (or the same user who owns all the objects in the script). With -O, any user name can be used for the initial connection, and that user will own all the created objects.
-P function-name(argtype [, …])
–function=function-name(argtype [, …])
Restores only the named function. Be careful to spell the function name and arguments exactly as they appear in the dump file's table of contents. Multiple functions can be specified by writing multiple -P
switches.
-s
–schema-only
Restores only the schema (data definitions), not the data, to the extent that schema entries are present in the archive.
This option is the opposite of --data-only
. It is similar to specifying --section=pre-data
,--section=post-data
.
(Do not confuse this with the --schema
option, which uses the word “schema” in other ways.)
-S username
–superuser=username
Specifies the superuser name to use when disabling trigger. This is only relevant if --disable-triggers
is used.
-t table
–table=table
Restores the definition or data of a named table. For this purpose, “table” includes views, materialized views, sequences, and foreign tables. Multiple tables can be selected by writing multiple -t switches. This option can be combined with the -n
option to specify a table in a particular schema.
Note: If you specify
-t
,pg_restore
does not attempt to restore any other database objects that the selected tables depend on. Therefore, there is no guarantee that a specific-table restore into a clean database will succeed by itself.
Note: This flag does not work in the same way as the
-t
flag ofpg_dump
. There is currently no provision for wildcard matching inpg_restore
, and you cannot include the schema name in-t
.
-T trigger
–trigger=trigger
Restores only named triggers. Multiple triggers can be specified as multiple -T
switches.
-v
–verbose
Specifies the verbose mode.
-V
–version
Displays the version of pg_restore
and exits.
-x
–no-privileges
–no-acl
Prevents restoration of access privileges (grant/revoke command).
-1
–single-transaction
Executes the restore as a single transaction (i.e. wraps the emitted commands in BEGIN/COMMIT). This ensures that either all commands complete successfully or no changes are applied. This option implies --exit-on-error
.
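For example, to restore an archive so that either everything is applied or nothing is (the file and database names are placeholders):
pg_restore -1 -d newdb db.dump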
–disable-triggers
This option is relevant only when performing a data-only restore. This instructs pg_restore
to execute commands that temporarily disable triggers on the target tables while the data is being reloaded. Use this option if you have referential integrity checks or other triggers on the tables that you do not want to invoke during data reload.
Currently, the commands emitted for --disable-triggers
must be run as superuser. Thus, specify the superuser name with -S
, or run pg_restore
as the AgensGraph superuser.
–enable-row-security
This option is relevant only when restoring the content of a table with row security. By default, pg_restore
sets row_security
to off so that all data is restored to the table. An error occurs if the user does not have sufficient privileges to bypass row security. This parameter allows pg_restore
to restore the content of the table with row security by setting row_security
to on. This may still fail if the user does not have the right to insert the rows from the dump into the table.
As COPY FROM
does not support row security, the current dump should be an INSERT
type in this option.
–if-exists
Uses conditional commands (such as the IF EXISTS clause) to clean up database objects. This option is not valid unless --clean is specified.
–no-data-for-failed-tables
Basically, table data creation is restored even if the table creation command fails. If you use this option, data in those tables are skipped. This is useful if you already have content in the table you want in the target database. For example, auxiliary tables for PostgreSQL extensions, such as PostGIS, may already be loaded into the target database. Specifying this option prevents duplicate or obsolete data from loading.
This option is valid only when restoring directly to the database, not when generating a SQL script output.
–no-security-labels
Does not print commands to restore security labels, even if the archive contains them.
–no-tablespaces
Does not print commands to select tablespaces. With this option, all objects are created in whichever tablespace is the default during the restore.
–section=section
Restores only the named sections. Section names can be pre-data, data, or post-data. This option can be specified more than once to select multiple sections. The default is to restore all sections.
The data section contains actual table data as well as large-object definitions. Post-data items consist of definitions of indexes, triggers, rules, and constraints other than validated check constraints. Pre-data items consist of all other data definition items.
–strict-names
Requires that each schema (-n/--schema) and table (-t/--table) qualifier match at least one schema/table in the backup file.
–use-set-session-authorization
Prints the SQL standard SET SESSION AUTHORIZATION
command, instead of the ALTER OWNER command, to determine object ownership. This makes the dump more standards-compatible, but depending on the history of the objects in the dump, it might not restore properly.
-?
–help
Displays help for pg_restore
command line arguments and exits.
pg_restore
also accepts the following command line arguments for connection parameters:
-h host
–host=host
Specifies the host name of the system on which the server is running. If the value starts with a slash, it is used as a directory for Unix domain sockets. The default value is taken from the PGHOST environment variable (if set). Otherwise, a Unix domain socket connection is attempted.
-p port
–port=port
Specifies the TCP port or local Unix domain socket file extension on which the server listens for connections. The default is taken from the PGPORT environment variable, if set; otherwise the compiled-in default is used.
-U username
–username=username
The user name to connect as.
-w
–no-password
Does not prompt for a password. If the server requires password authentication and cannot use the password in any other way, such as in a .pgpass
file, the connection attempt will fail. This option can be useful in batch jobs and scripts where there is no user who can enter the password.
-W
–password
Forces pg_restore
to prompt for a password before connecting to the database.
As pg_restore
automatically prompts for a password when the server requires password authentication, this option is never essential. However, pg_restore
will waste a connection attempt finding out that the server wants a password. In some cases, it is worth typing -W
to avoid extra connection attempts.
–role=rolename
Specifies the role name to be used to perform the restore. With this option, pg_restore
issues the SET ROLE rolename command after connecting to the database. This is useful when the authenticated user (specified by -U) lacks the privileges required by pg_restore
, but can switch to a role with the required privileges. If you have a policy that prevents logging in directly as superuser, you may use this option to perform the restore without violating the policy.
Examples
Suppose you have dumped a database called mydb
into a dump file in a user-defined format.
pg_dump -Fc mydb > db.dump
To delete the database and re-create the database in the dump:
dropdb mydb
pg_restore -C -d postgres db.dump
The database named in the -d
switch can be any database in the cluster. pg_restore
uses it to run the CREATE DATABASE command for mydb
. With -C
, the data is always restored to the database name that appears in the dump file.
To reload a dump into a new database called newdb:
createdb -T template0 newdb
pg_restore -d newdb db.dump
This time we do not use -C
, but connect directly to the database to be restored into. Also note that the new database is cloned from template0
, not template1
, to make sure it is initially empty.
To rearrange database entries, you must first dump the table of contents of the archive.
pg_restore -l db.dump > db.list
The list file consists of a header and a line for each entry as follows (example):
;
; Archive created at Mon Sep 14 13:55:39 2009
; dbname: DBDEMOS
; TOC Entries: 81
; Compression: 9
; Dump Version: 1.10-0
; Format: CUSTOM
; Integer: 4 bytes
; Offset: 8 bytes
; Dumped from database version: 8.3.5
; Dumped by pg_dump version: 8.3.8
;
;
; Selected TOC Entries:
;
3; 2615 2200 SCHEMA - public pasha
1861; 0 0 COMMENT - SCHEMA public pasha
1862; 0 0 ACL - public pasha
317; 1247 17715 TYPE public composite pasha
319; 1247 25899 DOMAIN public domain0 pasha
The semicolon marks the beginning of a comment, and the number at the beginning of the line indicates the internal archive ID assigned to each entry.
You can comment out, delete, and reorder lines in the file. Here is an example:
10; 145433 TABLE map_resolutions postgres
;2; 145344 TABLE species postgres
;4; 145359 TABLE nt_header postgres
6; 145402 TABLE species_records postgres
;8; 145416 TABLE ss_old postgres
This file can then be used as input to pg_restore
, which restores only entries 10 and 6, in that order.
pg_restore -L db.list db.dump
Server Tool
ag_ctl
ag_ctl
is a utility that initializes the AgensGraph cluster and starts, stops, or restarts the database server, or displays the status of the running server. You can start the server manually, but ag_ctl
performs operations such as redirecting the log output and properly detaching the server from the terminal and process group. It also provides convenient options for controlled termination.
Here is how to use:
ag_ctl init[db] [-D DATADIR] [-s] [-o "OPTIONS"]
ag_ctl start [-w] [-t SECS] [-D DATADIR] [-s] [-l FILENAME] [-o "OPTIONS"]
ag_ctl stop [-W] [-t SECS] [-D DATADIR] [-s] [-m SHUTDOWN-MODE]
ag_ctl restart [-w] [-t SECS] [-D DATADIR] [-s] [-m SHUTDOWN-MODE]
[-o "OPTIONS"]
ag_ctl reload [-D DATADIR] [-s]
ag_ctl status [-D DATADIR]
ag_ctl promote [-D DATADIR] [-s]
ag_ctl kill SIGNALNAME PID
init
or initdb
mode creates a new AgensGraph database cluster. A database cluster is a collection of databases managed on a single server instance. This mode invokes the initdb
command.
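For example, to create a new cluster in a specific data directory (the path is a placeholder):
ag_ctl init -D /usr/local/AgensGraph/data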
The start
mode starts a new server. The server starts in the background and standard input is connected to /dev/null
(nul
on Windows). In Unix-like systems, the server’s standard output and standard error are sent to the standard output (not standard error) of ag_ctl
. You should then redirect the standard output of ag_ctl
to a file or pipe it into another process, such as the log rotation program rotatelogs. On Windows, the server’s standard output and standard error are sent to the terminal by default. This default behavior can be changed by appending the server’s output to a log file using -l
. It is recommended to use either -l
or output redirection.
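For example, to start the server and append its output to a log file (the log path is a placeholder):
ag_ctl start -l /var/log/agensgraph/server.log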
The stop
mode terminates the server running in the specified data directory. The -m
option allows you to choose between three different shutdown methods. “Smart” mode waits until all active clients have disconnected and any online backup has finished. If the server is in hot standby, recovery and streaming replication are terminated once all clients have disconnected. “Fast” mode (the default) does not wait for clients to disconnect and terminates any online backup in progress. All active transactions are rolled back, clients are forcibly disconnected, and the server shuts down. “Immediate” mode aborts all server processes immediately, which causes crash recovery to run on the next restart.
The restart
mode effectively executes a stop followed by a start. This allows you to change the agens command line options. Restart may fail if relative paths were specified on the command line during server startup and ag_ctl is not executed from the same current directory.
The reload
mode simply sends a SIGHUP signal to the agens process to read the configuration files (postgresql.conf, pg_hba.conf, etc.) again. Therefore, you may change configuration file options that do not require a full restart.
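For example, to reread the configuration files of the cluster in a given data directory (the path is a placeholder):
ag_ctl reload -D /usr/local/AgensGraph/data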
The status
mode verifies that the server is running in the specified data directory. If so, the PID and command line options used to invoke it will be displayed. If the server is not running, the process returns an exit status 3. If an accessible data directory is not specified, the process returns an exit status 4.
The promote
mode instructs the standby server running in the specified data directory to exit recovery and begin read/write operations.
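For example, to promote a standby server running on a given data directory (the path is a placeholder):
ag_ctl promote -D /usr/local/AgensGraph/standby_data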
Use the kill
mode to send a signal to the specified process. This is especially useful on Windows without the kill command. For a list of supported signal names, use –help.
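For example, to send SIGTERM to a server process (the PID here is a placeholder):
ag_ctl kill TERM 2823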
Options
-c
–core-file
Lifts any soft resource limit on core files so the server can create them. This is useful for debugging or diagnosing problems, since it allows stack traces to be obtained from a failed server process.
-D datadir
–pgdata datadir
Specifies the file system locations of the database configuration files. If omitted, the environment variable AGDATA will be used.
-l filename
–log filename
Appends the server log output to filename. If the file does not exist, it is created. Because umask is set to 077, other users are not allowed to access the log file by default.
-m mode
–mode mode
Specifies the shutdown mode. mode can be the first letter of one of smart
, fast
and immediate
. If not set, fast
will be used.
-o options
Specifies the options to pass to the agens
command. Multiple option calls are added.
The options are usually enclosed in single or double quotes so they are passed as a group.
-o initdb-options
Specifies the options to pass directly to the initdb
command.
The options are usually enclosed in single or double quotes so they are passed as a group.
-p path
Specifies the location of the agens
executable. By default, the agens
executable is taken from the same directory as ag_ctl
, or, failing that, from the hard-wired installation directory. You do not need to use this option unless you are doing something unusual and get an error that the agens
executable cannot be found.
In init
mode, use this option to specify the location of the initdb
executable.
-s
–silent
Prints only error messages, not informational messages.
-t
–timeout
The maximum time (seconds) to wait before startup or shutdown. The default value is the value of the PGCTLTIMEOUT environment variable, or 60 seconds (if not set).
-V
–version
Prints the ag_ctl
version and exits.
–revision
Shows the agens revision information and exits.
-w
Waits until startup or shutdown completes. Waiting is the default for shutdown, but not for startup. When waiting for startup, ag_ctl
repeatedly tries to connect to the server. When waiting for shutdown, it waits for the server to remove the PID file. This option allows you to enter an SSL password on startup. ag_ctl
returns an exit code depending on success of startup or shutdown.
-W
Does not wait for startup or shutdown to complete. This is the default for start and restart modes.
-?
–help
Displays help for ag_ctl
command line arguments and exits.
Options for Windows
-N servicename
The name of the system service to register. The name is used as both the service name and the display name.
-P password
Password for the user who starts the service.
-S start-type
Startup type of system service to be registered. The startup type can be auto
, demand
, or the first character of either. If it is omitted, auto will be used.
-U username
The name of the user to start the service. For domain users, use the DOMAIN\username
format.
Environment
AGDATA: Default data directory location.
Examples
Starting the Server
To start the server, use:
ag_ctl start
To start the server and wait until it accepts connections, use:
ag_ctl -w start
To start the server using port 5432 and run without fsync, use:
ag_ctl -o "-F -p 5432" start
Stopping the Server
To stop the server, use:
ag_ctl stop
You can use the -m
option to control how the server shuts down.
ag_ctl stop -m fast
Restarting the Server
Restarting the server is almost equivalent to stopping the server and starting it again, except that the command line options passed to the previously running instance are saved and reused by ag_ctl
. To restart the server in its simplest form, use:
ag_ctl restart
To restart the server, waiting for it to shut down and start again, use:
ag_ctl -w restart
To restart using port 5432 and disable fsync upon restarting, use:
ag_ctl -o "-F -p 5432" restart
Showing the Server Status
The following is the sample status output of ag_ctl
.
ag_ctl status
ag_ctl: server is running (PID: 2823)
/path/to/AgensGraph/bin/postgres
This is the command line that would be invoked in restart mode.
pg_upgrade
pg_upgrade
allows you to upgrade the data stored in data files to the latest AgensGraph major version without the data dump/reload that major version upgrades normally require. pg_upgrade
performs a quick upgrade by creating new system tables and reusing the old user data files.
Here is how to use:
pg_upgrade [OPTION]...
Options
pg_upgrade accepts the following command line arguments:
-b bindir
–old-bindir=bindir
Previous AgensGraph bin directory (environment variable: PGBINOLD)
-B bindir
–new-bindir=bindir
New AgensGraph bin directory (environment variable: PGBINNEW)
-c
–check
Checks the cluster without changing the data.
-d datadir
–old-datadir=datadir
Previous cluster data directory (environment variable: PGDATAOLD)
-D datadir
–new-datadir=datadir
New cluster data directory (environment variable: PGDATANEW)
-j
–jobs
The number of processes or threads that can be used concurrently
-k
–link
Uses hard links instead of copying files to the new cluster.
-o options
–old-options options
Options to be passed directly to the old agens command; multiple option invocations are appended.
-O options
–new-options options
Options to be passed directly to the new agens command; multiple option invocations are appended.
-p port
–old-port=port
Old cluster port number (environment variable: PGPORTOLD)
-P port
–new-port=port
New cluster port number (environment variable: PGPORTNEW)
-r
–retain
Maintains SQL and log files even after successful completion.
-U username
–username=username
Installation user name for the cluster (environment variable: PGUSER)
-v
–verbose
Enables verbose internal logging
-V
–version
Displays version information and exits.
-?
–help
Displays help and exits.
Usage
Here are the steps to perform an upgrade using pg_upgrade
:
Back up your existing database cluster.
mv /usr/local/AgensGraph /usr/local/AgensGraph.old
Install a new AgensGraph binary.
Initialize the new AgensGraph cluster.
initdb
Terminate the services of the existing cluster and the new cluster.
ag_ctl -D /opt/AgensGraph/1.2/db_cluster stop
ag_ctl -D /opt/AgensGraph/1.3/db_cluster stop
Run pg_upgrade.
pg_upgrade -d oldCluster/data -D newCluster/data -b oldCluster/bin -B newCluster/bin
or
export PGDATAOLD=oldCluster/data
export PGDATANEW=newCluster/data
export PGBINOLD=oldCluster/bin
export PGBINNEW=newCluster/bin
pg_upgrade
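Before performing the actual upgrade, you may first run pg_upgrade with the -c/–check option described above to verify cluster compatibility without changing any data, for example:
pg_upgrade --check -d oldCluster/data -D newCluster/data -b oldCluster/bin -B newCluster/bin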
Start a new AgensGraph.
ag_ctl start
After upgrading, perform the following tasks.
Statistics
Because optimizer statistics are not transferred by pg_upgrade
, you will be instructed to run a command at the end of the upgrade to regenerate that information. You may need to set connection parameters to match the new cluster.
Deleting old clusters
If you are satisfied with the upgrade, you can delete the old cluster’s data directory by running the script mentioned upon completion of pg_upgrade
.
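For example, pg_upgrade typically leaves scripts for these post-upgrade tasks in the directory where it was run; the exact names may vary by version, but they are commonly:
./analyze_new_cluster.sh
./delete_old_cluster.sh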