Last updated: Wed Aug 22 20:10:01 EDT 2007
Current maintainer: Bruce Momjian (bruce@momjian.us)
The most recent version of this document can be viewed at http://www.postgresql.org/docs/faqs.FAQ_DEV.html.
Download the code and have a look around. See 1.8.
Subscribe to and read the pgsql-hackers mailing list (often termed 'hackers'). This is where the major contributors and core members of the project discuss development.
PostgreSQL is developed mostly in the C programming language. It also makes use of Yacc and Lex.
The source code is targeted at most of the popular Unix platforms and the Windows environment (XP, Windows 2000, and up).
Most developers make use of the open source development tool chain. If you have contributed to open source software before, you will probably be familiar with these tools. They include: GCC (http://gcc.gnu.org), GDB (http://www.gnu.org/software/gdb/gdb.html), autoconf (http://www.gnu.org/software/autoconf/), and GNU make (http://www.gnu.org/software/make/make.html).
Developers using this tool chain on Windows make use of MinGW (see http://www.mingw.org/).
Some developers use compilers from other software vendors with mixed results.
Developers who regularly rebuild the source often pass the --enable-depend flag to configure. The result is that when you make a modification to a C header file, all files that depend upon that file are also rebuilt.
src/Makefile.custom can be used to set environment variables, like CUSTOM_COPT, that are used for every compile.
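For example, a development build might be configured like this (an illustrative sketch; the exact flags are a matter of preference):
./configure --enable-depend --enable-cassert --enable-debug
and src/Makefile.custom might contain a line such as:
CUSTOM_COPT = -O0 -g
so that every compile picks up those options.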
You can learn more about these features by consulting the archives, the SQL standards and the recommended texts (see 1.11).
Send an email to pgsql-hackers with a proposal for what you want to do (assuming your contribution is not trivial). Working in isolation is not advisable because others might be working on the same TODO item, or you might have misunderstood the TODO item. In the email, discuss both the internal implementation method you plan to use, and any user-visible changes (new syntax, etc.). For complex patches, it is important to get community feedback on your proposal before starting work. Failure to do so might mean your patch is rejected. If your work is being sponsored by a company, read this article for tips on being more effective.
A web site is maintained for patches awaiting review, http://momjian.postgresql.org/cgi-bin/pgpatches, and those that are being kept for the next release, http://momjian.postgresql.org/cgi-bin/pgpatches_hold.
You will need to submit the patch to pgsql-patches@postgresql.org. It will be reviewed by other contributors to the project and will be either accepted or sent back for further work. To help ensure your patch is reviewed and committed in a timely fashion, please try to make sure your submission conforms to the following guidelines:
Even if you pass all of the above, the patch might still be rejected for other reasons. Please be prepared to listen to comments and make modifications.
You will be notified via email when the patch is applied, and your name will appear in the next version of the release notes.
Patch committers check several things before applying a patch:
Other than documentation in the source tree itself, you can find some papers/presentations discussing the code at http://www.postgresql.org/developer. An excellent presentation is at http://neilconway.org/talks/hacking/
There are several ways to obtain the source tree. Occasional developers can just get the most recent source tree snapshot from ftp://ftp.postgresql.org.
Regular developers might want to take advantage of anonymous access to our source code management system. The source tree is currently hosted in CVS. For details of how to obtain the source from CVS see http://developer.postgresql.org/docs/postgres/cvs.html.
Basic system testing
The easiest way to test your code is to ensure that it builds against the latest version of the code and that it does not generate compiler warnings.
It is advisable to pass --enable-cassert to configure. This turns on assertions within the source, which often catch bugs that would otherwise cause data corruption or segmentation violations. This generally makes debugging much easier.
Then, perform run time testing via psql.
Regression test suite
The next step is to test your changes against the existing regression test suite. To do this, issue "make check" in the root directory of the source tree. If any tests fail, investigate.
If you've deliberately changed existing behavior, this change might cause a regression test failure but not any actual regression. If so, you should also patch the regression test suite.
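A typical cycle, run from the top of the source tree, might look like this (a sketch; the configure flags are illustrative):
./configure --enable-cassert --enable-depend
make
make check
If a test fails, the differences between the expected and actual output are left in src/test/regress/regression.diffs for inspection.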
Other run time testing
Some developers make use of tools such as valgrind (http://valgrind.kde.org) for memory testing, gprof (which comes with the GNU binutils suite) and oprofile (http://oprofile.sourceforge.net/) for profiling and other related tools.
What about unit testing, static analysis, model checking...?
There have been a number of discussions about other testing frameworks and some developers are exploring these ideas.
Keep in mind the Makefiles do not have the proper dependencies for include files. You have to do a make clean and then another make. If you are using GCC you can use the --enable-depend option of configure to have the compiler compute the dependencies automatically.
First, all the files in the src/tools directory are designed for developers.
RELEASE_CHANGES - changes we have to make for each release
backend - description/flowchart of the backend directories
ccsym - find standard defines made by your compiler
copyright - fixes copyright notices
entab - converts spaces to tabs, used by pgindent
find_static - finds functions that could be made static
find_typedef - finds typedefs in the source code
find_badmacros - finds macros that use braces incorrectly
fsync - a script to provide information about the cost of cache syncing system calls
make_ctags - make vi 'tags' file in each directory
make_diff - make *.orig and diffs of source
make_etags - make emacs 'etags' files
make_keywords - make comparison of our keywords and SQL'92
make_mkid - make mkid ID files
pgcvslog - used to generate a list of changes for each release
pginclude - scripts for adding/removing include files
pgindent - indents source files
pgtest - a semi-automated build system
thread - a thread testing script
In src/include/catalog:
unused_oids - a script which generates unused OIDs for use in system catalogs
duplicate_oids - finds duplicate OIDs in system catalog definitions

If you point your browser at the tools/backend/index.html file, you will see a few paragraphs describing the data flow, the backend components in a flow chart, and a description of the shared memory area. You can click on any flowchart box to see a description. If you then click on the directory name, you will be taken to the source directory, to browse the actual source code behind it. We also have several README files in some source directories to describe the function of the module. The browser will display these when you enter the directory also. The tools/backend directory is also contained on our web page under the title How PostgreSQL Processes a Query.
Second, you really should have an editor that can handle tags, so you can tag a function call to see the function definition, and then tag inside that function to see an even lower-level function, and then back out twice to return to the original function. Most editors support this via tags or etags files.
Third, you need to get id-utils from ftp://ftp.gnu.org/gnu/id-utils/.
By running tools/make_mkid, an archive of source symbols can be created that can be rapidly queried.
Some developers make use of cscope, which can be found at http://cscope.sf.net/. Others use glimpse, which can be found at http://webglimpse.net/.
tools/make_diff has tools to create patch diff files that can be applied to the distribution. This produces context diffs, which is our preferred format.
Our standard format is BSD style, with each level of code indented one tab, where each tab is four spaces. You will need to set your editor or file viewer to display tabs as four spaces:
vi (in ~/.exrc): set tabstop=4 set sw=4
more: more -x4
less: less -x4
The tools/editors directory of the latest sources contains sample settings for the emacs, xemacs, and vim editors that assist in keeping to PostgreSQL coding standards.
pgindent will format the code by specifying flags to your operating system's indent utility. This article describes the value of a consistent coding style.
pgindent is run on all source files just before each beta test period. It auto-formats all source files to make them consistent. Comment blocks that need specific line breaks should be formatted as block comments, where the comment starts as /*------. These comments will not be reformatted in any way.
pginclude contains scripts used to add needed #include's to include files, and remove unneeded #include's.
When adding system types, you will need to assign OIDs to them. There is also a script called unused_oids in pgsql/src/include/catalog that shows the unused OIDs.
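For example, from an unpacked source tree (a sketch):
cd src/include/catalog
./unused_oids
The script prints the currently unused OID ranges, from which you can pick values for your new entries.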
There are five good books:
The files configure and configure.in are part of the GNU autoconf package. Configure allows us to test for various capabilities of the OS, and to set variables that can then be tested in C programs and Makefiles. Autoconf is installed on the PostgreSQL main server. To add options to configure, edit configure.in, and then run autoconf to generate configure.
When configure is run by the user, it tests various OS capabilities, stores those in config.status and config.cache, and modifies a list of *.in files. For example, if there exists a Makefile.in, configure generates a Makefile that contains substitutions for all @var@ parameters found by configure.
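For example, a Makefile.in fragment such as:
CC = @CC@
becomes, in the Makefile that configure generates, something like:
CC = gcc
with the actual value depending on what configure detected on your system.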
When you need to edit files, make sure you don't waste time modifying files generated by configure. Edit the *.in file, and re-run configure to recreate the needed file. If you run make distclean from the top-level source directory, all files derived by configure are removed, so you see only the files contained in the source distribution.
There are a variety of places that need to be modified to add a new port. First, start in the src/template directory. Add an appropriate entry for your OS. Also, use src/config.guess to add your OS to src/template/.similar. You shouldn't match the OS version exactly. The configure test will look for an exact OS version number, and if not found, find a match without version number. Edit src/configure.in to add your new OS. (See configure item above.) You will need to run autoconf, or patch src/configure too.
Then, check src/include/port and add your new OS file, with appropriate values. Hopefully, there is already locking code in src/include/storage/s_lock.h for your CPU. There is also a src/makefiles directory for port-specific Makefile handling. There is a backend/port directory if you need special files for your OS.
There is always a temptation to use the newest operating system features as soon as they arrive. We resist that temptation.
First, we support 15+ operating systems, so any new feature has to be well established before we will consider it. Second, most new whizz-bang features don't provide dramatic improvements. Third, they usually have some downside, such as decreased reliability or additional code required. Therefore, we don't rush to use new features but rather wait for the feature to be established, then ask for testing to show that a measurable improvement is possible.
As an example, threads are not currently used in the backend code because:
So, we are not ignorant of new features. It is just that we are cautious about their adoption. The TODO list often contains links to discussions showing our reasoning in these areas.
This was written by Lamar Owen and Devrim Gündüz:
2006-10-16
As to how the RPMs are built -- to answer that question sanely requires us to know how much experience you have with the whole RPM paradigm. 'How is the RPM built?' is a multifaceted question. The obvious simple answer is that we maintain:
The PGDG RPM Maintainer builds the SRPM and announces it to the pgsqlrpms-hackers list, to which the package builders are subscribed. The builders then download the SRPM and rebuild it on their machines.
We try to build on as many different canonical distributions as we can. Currently we are able to build on Red Hat Linux 9, RHEL 3 and above, and all Fedora Core Linux releases.
To test the binaries, we install them on our local machines and run the regression tests. If the package builder uses the postgres user to build the RPMs, then it is possible to run the regression tests during the RPM build.
Once the build passes these tests, the binary RPMs are sent back to the PGDG RPM Maintainer and pushed to the main FTP site, followed by a release announcement to the pgsqlrpms-* lists and the pgsql-general and pgsql-announce lists.
You will notice we said 'canonical' distributions above. That simply means that the machine is as stock 'out of the box' as practical -- that is, everything (except a select few programs) on these boxen is installed by RPM; only official Red Hat released RPMs are used (except in unusual circumstances involving software that will not alter the build -- for example, installing a newer non-RedHat version of the Dia diagramming package is OK -- installing Python 2.1 on the box that has Python 1.5.2 installed is not, as that alters the PostgreSQL build). The RPM as uploaded is built to as close to out-of-the-box pristine as is possible. Only the standard released 'official to that release' compiler is used -- and only the standard official kernel is used as well.
The PGDG RPM Building Project does not build RPMs for Mandrake.
We usually have only one SRPM for all platforms. This is because of our limited resources. However, in some cases, we may distribute different SRPMs for different platforms, depending on possible compilation problems, especially on older distros.
Please note that this is volunteer work -- we are doing our best to keep the packages up to date. We at least provide SRPMs for all platforms. For example, if you do not find a RHEL 4 x86_64 RPM on our FTP site, it means that we do not have a RHEL 4 x86_64 server around. If you have one and want to help us, please do not hesitate to build the RPMs and send them to us :-) http://pgfoundry.org/docman/view.php/1000048/98/PostgreSQL-RPM-Installation-PGDG.pdf has some information about building binary RPMs using an SRPM.
The PGDG RPM Building Project is hosted on pgFoundry: http://pgfoundry.org/projects/pgsqlrpms. We are an open community, except on one point: our pgsqlrpms-hackers list is open to package builders only. Still, its archives are visible to the public. We use a CVS server to store the work we have done so far, including spec files, patches, and documents.
As to why all these files aren't part of the source tree, well, unless there was a large cry for it to happen, we don't believe it should.
This was written by Tom Lane:
2001-05-07
If you just do basic "cvs checkout", "cvs update", "cvs commit", then you'll always be dealing with the HEAD version of the files in CVS. That's what you want for development, but if you need to patch past stable releases then you have to be able to access and update the "branch" portions of our CVS repository. We normally fork off a branch for a stable release just before starting the development cycle for the next release.
The first thing you have to know is the branch name for the branch you are interested in getting at. To do this, look at some long-lived file, say the top-level HISTORY file, with "cvs status -v" to see what the branch names are. (Thanks to Ian Lance Taylor for pointing out that this is the easiest way to do it.) Typical branch names are:
REL7_1_STABLE
REL7_0_PATCHES
REL6_5_PATCHES
OK, so how do you do work on a branch? By far the best way is to create a separate checkout tree for the branch and do your work in that. Not only is that the easiest way to deal with CVS, but you really need to have the whole past tree available anyway to test your work. (And you *better* test your work. Never forget that dot-releases tend to go out with very little beta testing --- so whenever you commit an update to a stable branch, you'd better be doubly sure that it's correct.)
Normally, to checkout the head branch, you just cd to the place you want to contain the toplevel "pgsql" directory and say
cvs ... checkout pgsql
To get a past branch, you cd to wherever you want it and say
cvs ... checkout -r BRANCHNAME pgsql
For example, just a couple days ago I did
mkdir ~postgres/REL7_1
cd ~postgres/REL7_1
cvs ... checkout -r REL7_1_STABLE pgsql
and now I have a maintenance copy of 7.1.*.
When you've done a checkout in this way, the branch name is "sticky": CVS automatically knows that this directory tree is for the branch, and whenever you do "cvs update" or "cvs commit" in this tree, you'll fetch or store the latest version in the branch, not the head version. Easy as can be.
So, if you have a patch that needs to apply to both the head and a recent stable branch, you have to make the edits and do the commit twice, once in your development tree and once in your stable branch tree. This is kind of a pain, which is why we don't normally fork the tree right away after a major release --- we wait for a dot-release or two, so that we won't have to double-patch the first wave of fixes.
There are three versions of the SQL standard: SQL-92, SQL:1999, and SQL:2003. They are endorsed by ANSI and ISO. Draft versions can be downloaded from:
Some SQL standards web pages are:
Many technical questions asked by those new to the code have already been answered on the pgsql-hackers mailing list, the archives of which can be found at http://archives.postgresql.org/pgsql-hackers/.
If you cannot find a discussion of your particular question, feel free to put it to the list.
Major contributors also answer technical questions, including questions about development of new features, on IRC at irc.freenode.net in the #postgresql channel.
PostgreSQL website development is discussed on the pgsql-www@postgresql.org mailing list. There is a project page where the source code is available at http://gborg.postgresql.org/project/pgweb/projdisplay.php; the code for the next version of the website is under the "portal" module.
Currently the core developers see no SCMS that would provide enough benefit to outweigh the pain involved in moving to a new SCMS. Typical problems that must be addressed by any new SCMS include:
Currently there is no intention of switching to a new SCMS until at least the end of the 8.4 development cycle, sometime in late 2008. For more information please refer to the mailing list archives.
You first need to find the tuples (rows) you are interested in. There are two ways. First, SearchSysCache() and related functions allow you to query the system catalogs. This is the preferred way to access system tables, because the first call to the cache loads the needed rows, and future requests can return the results without accessing the base table. The caches use system table indexes to look up tuples. A list of available caches is located in src/backend/utils/cache/syscache.c. src/backend/utils/cache/lsyscache.c contains many column-specific cache lookup functions.
The rows returned are cache-owned versions of the heap rows. Therefore, you must not modify or delete the tuple returned by SearchSysCache(). What you should do is release it with ReleaseSysCache() when you are done using it; this informs the cache that it can discard that tuple if necessary. If you neglect to call ReleaseSysCache(), then the cache entry will remain locked in the cache until end of transaction, which is tolerable but not very desirable.
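As a sketch, looking up a function's pg_proc row by OID might look like this (assuming funcid holds a valid function OID; the five-key form of SearchSysCache() shown here matches the source tree of this era):
HeapTuple tup;
Form_pg_proc procform;

tup = SearchSysCache(PROCOID, ObjectIdGetDatum(funcid), 0, 0, 0);
if (!HeapTupleIsValid(tup))
    elog(ERROR, "cache lookup failed for function %u", funcid);
procform = (Form_pg_proc) GETSTRUCT(tup);
/* ... read procform->pronargs, procform->prorettype, etc. ... */
ReleaseSysCache(tup);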
If you can't use the system cache, you will need to retrieve the data directly from the heap table, using the buffer cache that is shared by all backends. The backend automatically takes care of loading the rows into the buffer cache.
Open the table with heap_open(). You can then start a table scan with heap_beginscan(), then use heap_getnext() and continue as long as HeapTupleIsValid() returns true. Then do a heap_endscan(). Keys can be assigned to the scan. No indexes are used, so all rows are going to be compared to the keys, and only the valid rows are returned.
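A minimal sequential scan might look like this (a sketch, assuming a transaction is open; it scans pg_class with no scan keys):
Relation rel;
HeapScanDesc scan;
HeapTuple tuple;

rel = heap_open(RelationRelationId, AccessShareLock);
scan = heap_beginscan(rel, SnapshotNow, 0, NULL);
while (HeapTupleIsValid(tuple = heap_getnext(scan, ForwardScanDirection)))
{
    Form_pg_class classform = (Form_pg_class) GETSTRUCT(tuple);

    /* ... examine classform->relname here ... */
}
heap_endscan(scan);
heap_close(rel, AccessShareLock);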
You can also use heap_fetch() to fetch rows by block number/offset. While scans automatically lock/unlock rows from the buffer cache, with heap_fetch(), you must pass a Buffer pointer, and ReleaseBuffer() it when completed.
Once you have the row, you can get data that is common to all tuples, like t_self and t_oid, by merely accessing the HeapTuple structure entries. If you need a table-specific column, you should take the HeapTuple pointer, and use the GETSTRUCT() macro to access the table-specific start of the tuple. You then cast the pointer as a Form_pg_proc pointer if you are accessing the pg_proc table, or Form_pg_type if you are accessing pg_type. You can then access the columns by using a structure pointer:
((Form_pg_class) GETSTRUCT(tuple))->relnatts
You must not directly change live tuples in this way. The best way is to use heap_modifytuple() and pass it your original tuple, and the values you want changed. It returns a palloc'ed tuple, which you pass to heap_replace(). You can delete tuples by passing the tuple's t_self to heap_destroy(). You use t_self for heap_update() too. Remember, tuples can be either system cache copies, which might go away after you call ReleaseSysCache(), or read directly from disk buffers, which go away when you heap_getnext(), heap_endscan(), or ReleaseBuffer(), in the heap_fetch() case. Or it may be a palloc'ed tuple, which you must pfree() when finished.
Table, column, type, function, and view names are stored in system tables in columns of type Name. Name is a fixed-length, null-terminated type of NAMEDATALEN bytes. (The default value for NAMEDATALEN is 64 bytes.)
typedef struct nameData
{
    char        data[NAMEDATALEN];
} NameData;
typedef NameData *Name;
Table, column, type, function, and view names that come into the backend via user queries are stored as variable-length, null-terminated character strings.
Many functions are called with both types of names, e.g. heap_open(). Because the Name type is null-terminated, it is safe to pass it to a function expecting a char *. Because there are many cases where on-disk names (Name) are compared to user-supplied names (char *), there are many cases where Name and char * are used interchangeably.
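For example, comparing an on-disk name to a user-supplied string might look like this (a sketch; tuple is assumed to be a valid pg_class HeapTuple and tablename a user-supplied char *):
Form_pg_class classform = (Form_pg_class) GETSTRUCT(tuple);

if (strcmp(NameStr(classform->relname), tablename) == 0)
    elog(LOG, "found table \"%s\"", tablename);
NameStr() simply returns the data member of a Name, so the result can be used anywhere a char * is expected.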
We do this because it allows data to be passed inside the backend in a consistent yet flexible way. Every node has a NodeTag which specifies what type of data is inside the Node. Lists are groups of Nodes chained together as a forward-linked list.
Here are some of the List manipulation commands:
- lfirst(i), lfirst_int(i), lfirst_oid(i)
- return the data (a pointer, integer or OID respectively) of list cell i.
- lnext(i)
- return the next list cell after i.
- foreach(i, list)
- loop through list, assigning each list cell to i. It is important to note that i is a ListCell *, not the data in the List element. You need to use lfirst(i) to get at the data. Here is a typical code snippet that loops through a List containing Var *'s and processes each one:
List     *list;
ListCell *i;

foreach(i, list)
{
    Var *var = lfirst(i);

    /* process var here */
}
- lcons(node, list)
- add node to the front of list, or create a new list with node if list is NIL.
- lappend(list, node)
- add node to the end of list.
- list_concat(list1, list2)
- Concatenate list2 on to the end of list1.
- list_length(list)
- return the length of the list.
- list_nth(list, i)
- return the i'th element in list, counting from zero.
- lcons_int, ...
- There are integer versions of these: lcons_int, lappend_int, etc. Also versions for OID lists: lcons_oid, lappend_oid, etc.
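As a short sketch of building a list with these primitives (node_a and node_b stand for any Node pointers already at hand):
List *mylist = NIL;

mylist = lappend(mylist, node_a);   /* list is now (node_a) */
mylist = lcons(node_b, mylist);     /* list is now (node_b, node_a) */
elog(LOG, "list has %d cells", list_length(mylist));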
You can print nodes easily inside gdb. First, to disable output truncation when you use the gdb print command:
(gdb) set print elements 0
Instead of printing values in gdb format, you can use the next two commands to print out List, Node, and structure contents in a verbose format that is easier to understand. Lists are unrolled into nodes, and nodes are printed in detail. The first prints in a short format, and the second in a long format:
(gdb) call print(any_pointer)
(gdb) call pprint(any_pointer)
The output appears in the postmaster log file, or on your screen if
you are running a backend directly without a postmaster.
The structures passed around in the parser, rewriter, optimizer, and executor require quite a bit of support. Most structures have support routines in src/backend/nodes used to create, copy, read, and output those structures (in particular, the files copyfuncs.c and equalfuncs.c). Make sure you add support for your new field to these files. Find any other places the structure might need code for your new field. mkid is helpful with this (see 1.10).
palloc() and pfree() are used in place of malloc() and free() because we find it easier to automatically free all memory allocated when a query completes. This assures us that all memory that was allocated gets freed even if we have lost track of where we allocated it. There are special non-query contexts that memory can be allocated in. These affect when the allocated memory is freed by the backend.
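For example, to allocate memory that must outlive the current query, you can temporarily switch into a longer-lived context (a sketch):
MemoryContext oldcxt;
char *buf;

oldcxt = MemoryContextSwitchTo(TopMemoryContext);
buf = palloc(1024);     /* freed only when TopMemoryContext is reset */
MemoryContextSwitchTo(oldcxt);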
ereport() is used to send messages to the front-end, and optionally terminate the current query being processed. The first parameter is an ereport level of DEBUG (levels 1-5), LOG, INFO, NOTICE, ERROR, FATAL, or PANIC. NOTICE prints on the user's terminal and in the postmaster logs. INFO prints only to the user's terminal, and LOG prints only to the server logs. (These can be changed from postgresql.conf.) ERROR prints in both places, and terminates the current query, never returning from the call. FATAL terminates the backend process. The remaining parameters of ereport are auxiliary calls such as errmsg(), which takes a printf-style format string and arguments.
ereport(ERROR) frees most memory and open file descriptors so you don't need to clean these up before the call.
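A typical call might look like this (a sketch; errcode() and errmsg() are the usual auxiliary functions, and blkno is a hypothetical variable):
ereport(ERROR,
        (errcode(ERRCODE_INVALID_PARAMETER_VALUE),
         errmsg("invalid block number: %u", blkno)));
Because the level is ERROR, this call does not return.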
Normally, transactions cannot see the rows they modify. This allows UPDATE foo SET x = x + 1 to work correctly.
However, there are cases where a transaction needs to see rows affected in previous parts of the transaction. This is accomplished using a Command Counter. Incrementing the counter allows transactions to be broken into pieces so each piece can see rows modified by previous pieces. CommandCounterIncrement() increments the Command Counter, creating a new part of the transaction.
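A sketch of the pattern (rel and tup stand for an open relation and a prepared tuple; simple_heap_insert() and CatalogUpdateIndexes() stand in for whatever work the earlier part of the transaction does):
simple_heap_insert(rel, tup);
CatalogUpdateIndexes(rel, tup);     /* keep the catalog's indexes in sync */

CommandCounterIncrement();          /* scans started after this see the new row */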
First, try running configure with the --enable-cassert option; many assert()s monitor the progress of the backend and halt the program when something unexpected occurs.
The postmaster has a -d option that allows even more detailed information to be reported. The -d option takes a number that specifies the debug level. Be warned that high debug level values generate large log files.
If the postmaster is not running, you can actually run the postgres backend from the command line, and type your SQL statement directly. This is recommended only for debugging purposes. If you have compiled with debugging symbols, you can use a debugger to see what is happening. Because the backend was not started from postmaster, it is not running in an identical environment and locking/backend interaction problems might not be duplicated.
If the postmaster is running, start psql in one window, then find the PID of the postgres process used by psql using SELECT pg_backend_pid(). Use a debugger to attach to the postgres PID. You can set breakpoints in the debugger and issue queries from the psql window. If you are looking to find the location that is generating an error or log message, set a breakpoint at errfinish. If you are debugging postgres startup, you can set PGOPTIONS="-W n", then start psql. This will cause startup to delay for n seconds so you can attach to the process with the debugger, set any breakpoints, and continue through the startup sequence.
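A typical session might look like this (a sketch; 12345 stands in for the PID returned by pg_backend_pid()):
gdb postgres 12345
(gdb) break errfinish
(gdb) continue
Then issue the failing query from the psql window; the debugger will stop when the breakpoint is reached.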
You can also compile with profiling to see what functions are taking execution time. The backend profile files will be deposited in the pgsql/data directory. The client profile file will be put in the client's current directory. Linux requires a compile with -DLINUX_PROFILE for proper profiling.
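With GCC, one way to produce a profiled build is to pass the flags through the makefiles (a sketch; it assumes your source tree's src/Makefile.global appends the PROFILE variable to the compile and link flags):
make PROFILE="-pg -DLINUX_PROFILE"
After exercising the backend, analyze the resulting gmon.out files with gprof.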