PostgreSQL TODO List
Current maintainer: Bruce Momjian (bruce@momjian.us)
Last updated: Thu Nov 23 11:18:03 EST 2006
The most recent version of this document can be viewed at
http://www.postgresql.org/docs/faqs.TODO.html.
A hyphen, "-", marks changes that will appear in the upcoming 8.3 release.
A percent sign, "%", marks items that are easier to implement.
Bracketed items, "[]", have more detail.
This list contains all known PostgreSQL bugs and feature requests. If
you would like to work on an item, please read the Developer's FAQ
first.
http://archives.postgresql.org/pgsql-patches/2006-06/msg00096.php
Lock table corruption following SIGTERM of an individual backend has been reported in 8.0. A possible cause was fixed in 8.1, but it is unknown whether other problems exist. This item mostly requires additional testing rather than writing new code. http://archives.postgresql.org/pgsql-hackers/2006-08/msg00174.php
Currently all schemas are owned by the super-user because they are copied from the template1 database.
This would allow administrators to see more detailed information from specific sections of the backend, e.g. checkpoints, autovacuum, etc. Another idea is to allow separate configuration files for each module, or allow arbitrary SET commands to be passed to them.
This would allow creation of partitioned tables without requiring creation of rules for INSERT/UPDATE/DELETE, and constraints for rapid partition selection. Options could include range and hash partition selection.
Currently, ALTER USER and ALTER DATABASE support per-user and per-database defaults. Consider adding per-user-and-database defaults so things like search_path can be defaulted for a specific user connecting to a specific database.
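A sketch of what this might look like, assuming a hypothetical combined form (the `IN DATABASE` clause below is illustrative syntax only; the role, database, and schema names are made up):

```sql
-- Existing per-user and per-database defaults:
ALTER USER fred SET search_path = fred_schema, public;
ALTER DATABASE sales SET search_path = sales_schema, public;

-- Hypothetical per-user-and-database default:
ALTER USER fred IN DATABASE sales SET search_path = fred_sales, public;
```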
Any of the master/slave replication solutions can be used to maintain a standby server for data warehousing. To allow read/write queries against multiple servers, you need multi-master replication like pgcluster.
Currently, if a variable is commented out, it keeps the previous uncommented value until the server is restarted. http://archives.postgresql.org/pgsql-hackers/2006-09/msg01481.php
Host name lookup could occur when the postmaster reads the pg_hba.conf file, or when the backend starts. Another solution would be to do a reverse lookup of the connection IP address and check that hostname against the host names in pg_hba.conf. We could also then check that the host name maps to the IP address.
All objects in the default database tablespace must have default tablespace specifications. This is because new databases are created by copying directories. If you mix default tablespace tables and tablespace-specified tables in the same directory, creating a new database from such a mixed directory would create a new database with tables that had incorrect explicit tablespaces. To fix this would require modifying pg_class in the newly copied database, which we don't currently do.
This item is difficult because a tablespace can contain objects from multiple databases. There is a server-side function that returns the databases which use a specific tablespace, so this requires a tool that will call that function and connect to each database to find the objects in each database for that tablespace.
It could start with a random tablespace from a supplied list and cycle through the list.
This is useful for checking PITR recovery.
This would allow server log information to be easily loaded into a database for analysis.
Change the MONEY data type to use DECIMAL internally, with special locale-aware output formatting. http://archives.postgresql.org/pgsql-general/2005-08/msg01432.php http://archives.postgresql.org/pgsql-hackers/2006-09/msg01107.php
Currently NUMERIC rounds the result to the specified precision. This means division can return a result that multiplied by the divisor is greater than the dividend, e.g. this returns a value > 10:
SELECT (10::numeric(2,0) / 6::numeric(2,0))::numeric(2,0) * 6;
The positive modulus result returned by NUMERICs might be considered inaccurate, in one sense.
http://archives.postgresql.org/pgsql-hackers/2005-08/msg01142.php http://archives.postgresql.org/pgsql-hackers/2005-09/msg00012.php http://archives.postgresql.org/pgsql-hackers/2006-08/msg00149.php
http://archives.postgresql.org/pgsql-hackers/2006-03/msg00519.php
http://archives.postgresql.org/pgsql-hackers/2006-05/msg00072.php http://archives.postgresql.org/pgsql-hackers/2006-09/msg01681.php
http://archives.postgresql.org/pgsql-patches/2006-09/msg00209.php
http://archives.postgresql.org/pgsql-hackers/2006-07/msg00543.php
http://archives.postgresql.org/pgsql-hackers/2006-08/msg00979.php
If the TIMESTAMP value is stored with a time zone name, interval computations should adjust based on the time zone rules.
Currently, subtracting one date from another that crosses a daylight savings time adjustment can return '1 day 1 hour', but adding that back to the first date returns a time one hour in the future. This is caused by the adjustment of '25 hours' to '1 day 1 hour', and '1 day' is the same time the next day, even if daylight savings adjustments are involved.
http://archives.postgresql.org/pgsql-hackers/2006-01/msg00250.php
http://archives.postgresql.org/pgsql-bugs/2006-04/msg00248.php
The SQL standard states that the units after the string specify the units of the string, e.g. INTERVAL '2' MINUTE should return '00:02:00'. The current behavior has the units restrict the interval value to the specified unit or unit range, e.g. INTERVAL '70' SECOND returns '00:00:10'.
For syntax that isn't uniquely ISO or PG syntax, like '1' or '1:30', treat it as ISO if a range specification clause is present, and as PG if no clause is present, e.g. interpret '1:30' MINUTE TO SECOND as '1 minute 30 seconds', and interpret '1:30' alone as '1 hour, 30 minutes'.
This makes common cases like SELECT INTERVAL '1' MONTH return SQL-standard results. The SQL standard supports a limited number of unit combinations and doesn't support unit names in the string. The PostgreSQL syntax is more flexible in the range of units supported, e.g. PostgreSQL supports '1 year 1 hour', while the SQL standard does not.
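The desired behavior described above can be illustrated as follows (the "should yield" comments show the SQL-standard interpretation this item asks for, not necessarily the current output):

```sql
-- SQL-standard interpretation: the trailing units label the literal.
SELECT INTERVAL '2' MINUTE;              -- should yield '00:02:00'
SELECT INTERVAL '1:30' MINUTE TO SECOND; -- should yield '00:01:30'

-- PostgreSQL-style literal with unit names inside the string;
-- this combination is not expressible in standard SQL:
SELECT INTERVAL '1 year 1 hour';
```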
/contrib/lo offers this functionality.
This requires the TOAST column to be stored EXTERNAL.
http://archives.postgresql.org/pgsql-hackers/2005-09/msg00781.php
These would be for application use, not for use by pg_dump.
http://archives.postgresql.org/pgsql-hackers/2005-12/msg00948.php
Some special format flag would be required to request such accumulation. Such functionality could also be added to EXTRACT. Prevent accumulation that crosses the month/day boundary because of the uneven number of days in a month.
Currently locale can only be set during initdb. No global tables have locale-aware columns. However, the database template used during database creation might have locale-aware indexes. The indexes would need to be reindexed to match the new locale.
Right now only one encoding is allowed per database. [locale] http://archives.postgresql.org/pgsql-hackers/2005-03/msg00932.php http://archives.postgresql.org/pgsql-patches/2005-08/msg00309.php http://archives.postgresql.org/pgsql-patches/2006-03/msg00233.php http://archives.postgresql.org/pgsql-hackers/2006-09/msg00662.php
http://archives.postgresql.org/pgsql-hackers/2005-07/msg00272.php
http://archives.postgresql.org/pgsql-bugs/2005-10/msg00001.php http://archives.postgresql.org/pgsql-patches/2005-11/msg00173.php
Currently client_encoding is set in postgresql.conf, which defaults to the server encoding. http://archives.postgresql.org/pgsql-hackers/2006-08/msg01696.php
We can only auto-create rules for simple views. For more complex cases users will still have to write rules manually. http://archives.postgresql.org/pgsql-hackers/2006-03/msg00586.php
Another issue is whether underlying table changes should be reflected in the view, e.g. should SELECT * show additional columns if they are added after the view is created.
Currently only the owner can TRUNCATE a table because triggers are not called, and the table is locked in exclusive mode.
Currently, queries prepared via the libpq API are planned on first execute using the supplied parameters --- allow SQL PREPARE to do the same. Also, allow control over replanning prepared queries either manually or automatically when statistics for execute parameters differ dramatically from those used during planning.
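For illustration, a sketch of the SQL-level path this item targets (the table and parameter are hypothetical):

```sql
PREPARE get_cust(int) AS
    SELECT * FROM customers WHERE customer_id = $1;

-- Today the plan for get_cust is built at PREPARE time, without knowing
-- the parameter value. The proposal is to delay planning until the first
-- EXECUTE, as the libpq API already does:
EXECUTE get_cust(42);
```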
Currently LISTEN/NOTIFY information is stored in pg_listener. Storing such information in memory would improve performance.
This would allow an informational message to be added to the notify message, perhaps indicating the row modified or other custom information.
This is similar to UPDATE, then for unmatched rows, INSERT. Whether concurrent access allows modifications which could cause row loss is implementation-dependent.
To implement this cleanly requires that the table have a unique index so duplicate checking can be easily performed. It is possible to do it without a unique index if we require the user to LOCK the table before the MERGE.
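The LOCK-based approach amounts to the familiar manual upsert pattern, sketched here against a hypothetical table:

```sql
BEGIN;
-- Block concurrent INSERTs that could defeat the duplicate check:
LOCK TABLE accounts IN SHARE ROW EXCLUSIVE MODE;
UPDATE accounts SET balance = balance + 100 WHERE id = 1;
-- Insert only if the UPDATE matched nothing:
INSERT INTO accounts (id, balance)
    SELECT 1, 100 WHERE NOT EXISTS (SELECT 1 FROM accounts WHERE id = 1);
COMMIT;
```

With a unique index, the lock could be dropped and the duplicate detected from the unique-violation error instead.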
This would include resetting of all variables (RESET ALL), dropping of temporary tables, removing any NOTIFYs, cursors, open transactions, prepared queries, currval()s, etc. This could be used for connection pooling. We could also change RESET ALL to have this functionality. The difficulty of this feature is allowing RESET ALL to not affect changes made by the interface driver for its internal use. One idea is for this to be a protocol-only feature. Another approach is to notify the protocol when a RESET CONNECTION command is used. http://archives.postgresql.org/pgsql-patches/2006-04/msg00192.php
When this is done, backslash-quote should be prohibited in non-E'' strings because of possible confusion over how such strings treat backslashes. Basically, '' is always safe for a literal single quote, while \' might or might not be based on the backslash handling rules.
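The two quoting styles mentioned above:

```sql
SELECT 'It''s safe';   -- doubled quote: always a literal single quote
SELECT E'It\'s risky'; -- backslash escape: well-defined only in E'' strings
-- Once plain strings treat backslashes literally, \' in a non-E'' string
-- becomes ambiguous, which is why it should be prohibited there.
```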
This would be useful for SERIAL nextval() calls and CHECK constraints.
http://archives.postgresql.org/pgsql-hackers/2006-07/msg01306.php
http://archives.postgresql.org/pgsql-patches/2006-02/msg00168.php
Currently non-global system tables must be in the default database tablespace. Global system tables can never be moved.
This might require some background daemon to maintain clustering during periods of low usage. It might also require tables to be only partially filled for easier reorganization. Another idea would be to create a merged heap/index data file so an index lookup would automatically access the heap data too. A third idea would be to store heap rows in hashed groups, perhaps using a user-supplied hash function. http://archives.postgresql.org/pgsql-performance/2004-08/msg00349.php
To do this, determine the ideal cluster index for each system table and set the cluster setting during initdb.
This requires the use of a savepoint before each COPY line is processed, with ROLLBACK on COPY failure.
On crash recovery, the table involved in the COPY would be removed or have its heap and index files truncated. One issue is that no other backend should be able to add to the table at the same time, which is something that is currently allowed.
The proposed syntax is:
GRANT SELECT ON ALL TABLES IN public TO phpuser;
GRANT SELECT ON NEW TABLES IN public TO phpuser;
This requires using the row ctid to map cursor rows back to the original heap row. This becomes more complicated if WITH HOLD cursors are to be supported, because WITH HOLD cursors have a copy of the row and no FOR UPDATE lock.
This is basically the same as SET search_path.
http://archives.postgresql.org/pgsql-hackers/2005-09/msg00174.php
This would allow UPDATE tab SET col = col + 1 to work if col has a unique index. Currently, uniqueness checks are done while the command is being executed, rather than at the end of the statement or transaction. http://people.planetpostgresql.org/greg/index.php?/archives/2006/06/10.html http://archives.postgresql.org/pgsql-hackers/2006-09/msg01458.php
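A minimal reproduction of the problem described, using a hypothetical table:

```sql
CREATE TABLE t (col integer UNIQUE);
INSERT INTO t VALUES (1);
INSERT INTO t VALUES (2);

-- May fail with a unique violation today, depending on row visit order,
-- because uniqueness is checked mid-statement even though the final
-- state (2, 3) is unique:
UPDATE t SET col = col + 1;
```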
A package would be a schema with session-local variables, public/private functions, and initialization functions. It is also possible to implement these capabilities in all schemas and not use a separate "packages" syntax at all. http://archives.postgresql.org/pgsql-hackers/2006-08/msg00384.php
http://archives.postgresql.org/pgsql-patches/2005-07/msg00458.php http://archives.postgresql.org/pgsql-patches/2006-05/msg00302.php http://archives.postgresql.org/pgsql-patches/2006-06/msg00031.php
PL/pgSQL cursors should support the same syntax as backend cursors.
http://archives.postgresql.org/pgsql-patches/2005-11/msg00045.php
http://archives.postgresql.org/pgsql-performance/2006-06/msg00305.php
http://archives.postgresql.org/pgsql-patches/2006-02/msg00165.php
http://archives.postgresql.org/pgsql-patches/2006-02/msg00288.php
pg_ctl cannot read the pid file because it is located in the PGDATA directory, not the config directory. The solution is to allow pg_ctl to read and understand postgresql.conf to find the data_directory value.
This would allow non-psql clients to pull the same information out of the database as psql.
http://archives.postgresql.org/pgsql-hackers/2004-11/msg00014.php
Consider using auto-expanded mode for backslash commands like \df+.
Currently, SET <tab> causes a database lookup to check all supported session variables. This query causes problems because setting the transaction isolation level must be the first statement of a transaction.
Document differences between ecpg and the SQL standard and information about the Informix-compatibility module.
PQfnumber() should never have been lowercasing identifiers, but historically it has, so we need a way to prevent it.
Currently, all statement results are transferred to the libpq client before libpq makes the results available to the application. This feature would allow the application to make use of the first result rows while the rest are transferred, or held on the server waiting for them to be requested by libpq. One complexity is that a statement like SELECT 1/col could error out mid-way through the result set.
Right now all deferred trigger information is stored in backend memory. This could exhaust memory for very large trigger queues. This item involves dumping large queues into files.
This is currently possible by starting a multi-statement transaction, modifying the system tables, performing the desired SQL, restoring the system tables, and committing the transaction. ALTER TABLE ... TRIGGER requires a table lock so it is not ideal for this usage.
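One common (and unsupported) formulation of the catalog-hacking approach described above, assuming a hypothetical table name; it requires superuser privileges and relies on the pre-8.3 reltriggers column:

```sql
BEGIN;
-- Hide the table's triggers by zeroing its trigger count:
UPDATE pg_class SET reltriggers = 0 WHERE relname = 'mytable';

-- ... perform the desired SQL without firing triggers ...

-- Restore the real trigger count:
UPDATE pg_class SET reltriggers = (
    SELECT count(*) FROM pg_trigger WHERE tgrelid = pg_class.oid
) WHERE relname = 'mytable';
COMMIT;
```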
If the dump is known to be valid, allow foreign keys to be added without revalidating the data.
http://archives.postgresql.org/pgsql-patches/2005-07/msg00107.php
System tables are modified in many places in the backend without going through the executor and therefore not causing triggers to fire. To complete this item, the functions that modify system tables will have to fire triggers.
A more complex solution would be to save multiple plans for different cardinality and use the appropriate plan based on the EXECUTE values.
This is particularly important for references to temporary tables in PL/pgSQL because PL/pgSQL caches query plans. The only workaround in PL/pgSQL is to use EXECUTE. One complexity is that a function might itself drop and recreate dependent tables, causing it to invalidate its own query plan.
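The EXECUTE workaround looks like this (function and table names are hypothetical):

```sql
CREATE OR REPLACE FUNCTION count_temp() RETURNS integer AS $$
DECLARE
    result integer;
BEGIN
    -- EXECUTE re-plans on every call, so the cached plan can never
    -- reference a temporary table that was dropped and recreated:
    EXECUTE 'SELECT count(*) FROM my_temp_table' INTO result;
    RETURN result;
END;
$$ LANGUAGE plpgsql;
```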
The main difficulty with this item is the problem of creating an index that can span more than one table.
Uniqueness (index) checks are done when updating a column even if the column is not modified by the UPDATE.
Such indexes could be more compact if there are only a few distinct values. Such indexes can also be compressed. Keeping such indexes updated can be costly. http://archives.postgresql.org/pgsql-patches/2005-07/msg00512.php
One solution is to create a partial index on an IS NULL expression.
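For example, with a hypothetical orders table:

```sql
-- Index only the rows where the column is NULL:
CREATE INDEX orders_unshipped ON orders (id) WHERE shipped_date IS NULL;

-- Queries with a matching predicate can use the partial index:
SELECT id FROM orders WHERE shipped_date IS NULL;
```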
This is possible now by creating an operator class with reversed sort operators. One complexity is that NULLs would then appear at the start of the result set, and this might affect certain sort types, like merge join.
This is difficult because it requires datatype-specific knowledge.
Currently only one hash bucket can be stored on a page. Ideally several hash buckets could be stored on a single page and greater granularity used for the hash algorithm.
Ideally this requires a separate test program that can be run at initdb time or optionally later. Consider O_SYNC when O_DIRECT exists.
posix_fadvise() can control both sequential/random file caching and free-behind behavior, but it is unclear how the setting affects other backends that also have the file open, and the feature is not supported on all operating systems.
We could use a fixed row count and a +/- count to follow MVCC visibility rules, or a single cached value could be used and invalidated if anyone modifies the table. Another idea is to get a count directly from a unique index, but for this to be faster than a sequential scan it must avoid access to the heap to obtain tuple visibility information.
This would use the planner ANALYZE statistics to return an estimated count. http://archives.postgresql.org/pgsql-hackers/2005-11/msg00943.php
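The planner statistics already expose such an estimate; a rough version of the idea, for a hypothetical table name:

```sql
-- reltuples is the planner's row-count estimate, refreshed by
-- VACUUM and ANALYZE; it is approximate, not transactionally exact:
SELECT reltuples FROM pg_class WHERE relname = 'mytable';
```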
Currently indexes do not have enough tuple visibility information to allow data to be pulled from the index without also accessing the heap. One way to allow this is to set a bit on index tuples to indicate if a tuple is currently visible to all transactions when the first valid heap lookup happens. This bit would have to be cleared when a heap tuple is expired.
Another idea is to maintain a bitmap of heap pages where all rows are visible to all backends, and allow index lookups to reference that bitmap to avoid heap lookups, perhaps the same bitmap we might add someday to determine which heap pages need vacuuming. Frequently accessed bitmaps would have to be stored in shared memory. One 8k page of bitmaps could track 512MB of heap pages.
One possible implementation is to start sequential scans from the lowest numbered buffer in the shared cache, and when reaching the end wrap around to the beginning, rather than always starting sequential scans at the start of the table.
http://archives.postgresql.org/pgsql-hackers/2005-10/msg01419.php
For large table adjustments during VACUUM FULL, it is faster to reindex rather than update the index.
Moved tuples are invisible to other backends so they don't require a write lock. However, the read lock promotion to write lock could lead to deadlock situations.
http://archives.postgresql.org/pgsql-hackers/2006-02/msg01125.php http://archives.postgresql.org/pgsql-hackers/2006-03/msg00011.php
Instead of sequentially scanning the entire table, have the background writer or some other process record pages that have expired rows, then VACUUM can look at just those pages rather than the entire table. In the event of a system crash, the bitmap would probably be invalidated. One complexity is that index entries still have to be vacuumed, and doing this without an index scan (by using the heap values to find the index entry) might be slow and unreliable, especially for user-defined index functions.
http://archives.postgresql.org/pgsql-patches/2006-03/msg00142.php
While VACUUM handles DELETEs fine, updates of non-indexed columns, like counters, are difficult for VACUUM to handle efficiently. This method is possible for same-page updates because a single index row can be used to point to both the old and new values. http://archives.postgresql.org/pgsql-hackers/2006-06/msg01305.php http://archives.postgresql.org/pgsql-hackers/2006-06/msg01534.php
http://archives.postgresql.org/pgsql-hackers/2006-08/msg01852.php
This would prevent the overhead associated with process creation. Most operating systems have trivial process creation time compared to database startup overhead, but a few operating systems (Win32, Solaris) might benefit from threading. Also explore the idea of a single session using multiple threads to execute a statement faster.
This would allow a single query to make use of multiple CPU's or multiple I/O channels simultaneously. One idea is to create a background reader that can pre-fetch sequential and index scan pages needed by other backends. This could be expanded to allow concurrent reads from multiple devices in a partitioned table.
It is unclear if this should be done inside the backend code or done by something external like pgpool. The passing of file descriptors to existing backends is one of the difficulties with a backend approach.
Currently, to protect against partial disk page writes, we write full page images to WAL before they are modified so we can correct any partial page writes during recovery. These pages can also be eliminated from point-in-time archive files.
If CRC check fails during recovery, remember the page in case a later CRC for that page properly matches.
This allows most full page writes to happen in the background writer. It might cause problems for applying WAL on recovery into a partially-written page, but later the full page will be replaced from WAL.
http://archives.postgresql.org/pgsql-patches/2006-06/msg00025.php
Currently fsync of WAL requires the disk platter to perform a full rotation to fsync again. One idea is to write the WAL to different offsets that might reduce the rotational delay.
Instead of guaranteeing recovery of all committed transactions, this would provide improved performance by delaying WAL writes and fsync so an abrupt operating system restart might lose a few seconds of committed transactions but still be consistent. We could perhaps remove the 'fsync' parameter (which results in an inconsistent database) in favor of this capability.
Allow tables to bypass WAL writes and just fsync() dirty pages on commit. This should be implemented using ALTER TABLE, e.g. ALTER TABLE PERSISTENCE [ DROP | TRUNCATE | DEFAULT ]. Tables using non-default logging should not use referential integrity with default-logging tables. A table without dirty buffers during a crash could perhaps avoid the drop/truncate.
To do this, only a single writer can modify the table, and writes must happen only on new pages so the new pages can be removed during crash recovery. Readers can continue accessing the table. Such tables probably cannot have indexes. One complexity is the handling of indexes on TOAST tables.
Right now, if no index exists, ORDER BY ... LIMIT # requires that we sort all values to return the high/low value. Instead, the idea is to do a sequential scan to find the high/low value, thus avoiding the sort. MIN/MAX already does this, but not for LIMIT > 1.
This would be beneficial when there are few distinct values. This is already used by GROUP BY.
This might replace GEQO; see http://sixdemonbag.org/Djinni.
Async I/O allows multiple I/O requests to be sent to the disk with results coming back asynchronously. http://archives.postgresql.org/pgsql-hackers/2006-10/msg00820.php
This would remove the requirement for SYSV SHM but would introduce portability issues. Anonymous mmap (or mmap to /dev/zero) is required to prevent I/O overhead.
Doing I/O to large tables would consume a lot of address space or require frequent mapping/unmapping. Extending the file also causes mapping problems that might require mapping only individual pages, leading to thousands of mappings. Another problem is that there is no way to prevent I/O to disk from the dirty shared buffers so changes could hit disk before WAL is written.
Before subtransactions, only three fields were needed to store these four values. This was possible because only the current transaction looks at the cmin/cmax values. If the current transaction created and expired the row, the fields stored were xmin (same as xmax), cmin, and cmax; if the transaction was expiring a row from another transaction, the fields stored were xmin (cmin was not needed), xmax, and cmax. Such a system worked because a transaction could only see rows from another completed transaction. However, subtransactions can see rows from outer transactions, and once the subtransaction completes, the outer transaction continues, requiring the storage of all four fields. With subtransactions, an outer transaction can create a row and a subtransaction expire it, and when the subtransaction completes, the outer transaction still has to have proper visibility of the row's cmin, for example, for cursors.
One possible solution is to create a phantom cid which represents a cmin/cmax pair and is stored in local memory. Another idea is to store both cmin and cmax only in local memory.
One idea is to create zero-or-one-byte-header versions of the varlena data types. It involves setting the high bit and storing a 0-127 length in the single-byte header, or clearing the high bit and storing the 7-bit ASCII value in the rest of the byte. The small-header versions have no alignment requirements. http://archives.postgresql.org/pgsql-hackers/2006-09/msg01372.php
Particularly, move GPL-licensed /contrib/userlock and /contrib/dbmirror/clean_pending.pl.
This is probably not possible because 'gmake' and other compiler tools do not fully support quoting of paths with spaces.
http://archives.postgresql.org/pgsql-patches/2006-05/msg00040.php
http://archives.postgresql.org/pgsql-hackers/2006-09/msg02108.php
http://archives.postgresql.org/pgsql-patches/2005-06/msg00027.php
While Win32 supports 64-bit files, the MinGW API does not, meaning we have to build an fseeko replacement on top of the Win32 API, and we have to make sure MinGW handles it. Another option is to wait for the MinGW project to fix it, or use the code from the LibGW32C project as a guide.
This could allow SQL written for other databases to run without modification.
This can be done using dblink and two-phase commit.
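A sketch of the dblink plus two-phase-commit approach, assuming /contrib/dblink is installed and max_prepared_transactions is set; connection strings, table names, and transaction identifiers are hypothetical:

```sql
-- Remote half of the change, driven through dblink:
SELECT dblink_connect('remote', 'dbname=otherdb');
SELECT dblink_exec('remote', 'BEGIN');
SELECT dblink_exec('remote',
    'UPDATE accounts SET balance = balance - 100 WHERE id = 1');
SELECT dblink_exec('remote', 'PREPARE TRANSACTION ''xfer_remote''');

-- Local half:
BEGIN;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;
PREPARE TRANSACTION 'xfer_local';

-- Once both prepares succeed, commit both halves:
COMMIT PREPARED 'xfer_local';
SELECT dblink_exec('remote', 'COMMIT PREPARED ''xfer_remote''');
```

If either PREPARE TRANSACTION fails, both halves can instead be rolled back, so neither database commits alone.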
http://archives.postgresql.org/pgsql-hackers/2004-04/msg00818.php http://archives.postgresql.org/pgsql-hackers/2006-10/msg01527.php
This eliminates the process protection we get from the current setup. Thread creation is usually the same overhead as process creation on modern systems, so it seems unwise to use a pure threaded model.
Optimizer hints are used to work around problems in the optimizer. We would rather have the problems reported and fixed. http://archives.postgresql.org/pgsql-hackers/2006-08/msg00506.php http://archives.postgresql.org/pgsql-hackers/2006-10/msg00517.php http://archives.postgresql.org/pgsql-hackers/2006-10/msg00663.php
Because we support postfix operators, it isn't possible to make AS optional and continue to use bison. http://archives.postgresql.org/pgsql-sql/2006-08/msg00164.php