Commit 6302cc5c authored by Tres Seaver, committed by GitHub

Merge branch 'master' into z-object-database

parents b791f66e 6c9748dd
@@ -8,5 +8,6 @@ omit =
 [report]
 exclude_lines =
     pragma: nocover
+    pragma: no cover
     if __name__ == ['"]__main__['"]:
     assert False
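Each ``exclude_lines`` entry above is a regular expression, which is why the single bracketed pattern can match either quoting style of the main guard. A quick stdlib check:

```python
import re

# The .coveragerc entry is a regex, so one pattern covers both quote styles.
pattern = re.compile(r"""if __name__ == ['"]__main__['"]:""")

assert pattern.search('if __name__ == "__main__":')
assert pattern.search("if __name__ == '__main__':")
assert not pattern.search("if __name__ == main:")
```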
@@ -20,3 +20,4 @@ testing.log
 .dir-locals.el
 htmlcov
 tmp
+*~
@@ -9,20 +9,23 @@ matrix:
       env: BUILOUT_OPTIONS=sphinx:eggs=
     - os: linux
       python: 2.7
-    - os: linux
-      python: 3.3
     - os: linux
       python: 3.4
     - os: linux
       python: 3.5
+    - os: linux
+      python: 3.6
+    - python: 3.7
+      dist: xenial
+      sudo: true
 install:
   - pip install -U pip
-  - pip install zc.buildout
+  - pip install -U setuptools zc.buildout
-  - buildout $BUILOUT_OPTIONS versions:sphinx=1.4.9
+  - buildout $BUILOUT_OPTIONS
 script:
   - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then bin/coverage run bin/coverage-test -v1j99; fi
-  - if [[ $TRAVIS_PYTHON_VERSION == 'pypy' || $TRAVIS_PYTHON_VERSION == 'pypy3' ]]; then bin/test -v1j99; fi
+  - if [[ $TRAVIS_PYTHON_VERSION == pypy* ]]; then bin/test -v1j99; fi
-  - if [[ $TRAVIS_PYTHON_VERSION != 'pypy3' ]]; then pushd doc; make html; popd; fi
+  - if [[ $TRAVIS_PYTHON_VERSION != pypy3* ]]; then make -C doc html; fi
   - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then pip install coveralls; fi # install early enough to get into the cache
 after_success:
   - if [[ $TRAVIS_PYTHON_VERSION != pypy* ]]; then bin/coverage combine; fi
@@ -2,6 +2,128 @@
Change History
================
5.5.0 (unreleased)
==================
- Remove support for ``python setup.py test``. It hadn't been working
for some time. See `issue #218
<https://github.com/zopefoundation/ZODB/issues/218>`_.
- Bump the dependency on zodbpickle to at least 1.0.1. This is
required to avoid a memory leak on Python 2.7. See `issue 203
<https://github.com/zopefoundation/ZODB/issues/203>`_.
- Bump the dependency on persistent to at least 4.4.0.
- Add support for Python 3.7.
- Make the internal support functions for dealing with OIDs (``p64``
and ``u64``) somewhat faster and raise more informative
exceptions on certain types of bad input. See `issue 216
<https://github.com/zopefoundation/ZODB/issues/216>`_.
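At heart, ``p64`` and ``u64`` are 8-byte big-endian packing of 64-bit object identifiers. A minimal stdlib sketch (not the actual ``ZODB.utils`` implementation, which adds the more informative error handling described in the entry above):

```python
import struct

def p64(n):
    # Pack a 64-bit OID/TID into an 8-byte big-endian string.
    return struct.pack(">Q", n)

def u64(s):
    # Inverse of p64; struct raises struct.error on wrong-length input.
    return struct.unpack(">Q", s)[0]

oid = p64(1)
assert oid == b"\x00\x00\x00\x00\x00\x00\x00\x01"
assert u64(oid) == 1
```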
5.4.0 (2018-03-26)
==================
- ZODB now uses pickle protocol 3 for both Python 2 and Python 3.
(Previously, protocol 2 was used for Python 2.)
The zodbpickle package provides a `zodbpickle.binary` string type
that should be used in Python 2 to cause binary strings to be saved
in a pickle binary format, so they can be loaded correctly in
Python 3. Pickle protocol 3 is needed for this to work correctly.
- Object identifiers in persistent references are saved as
`zodbpickle.binary` strings in Python 2, so that they are loaded
correctly in Python 3.
- If an object is missing from the index while packing a ``FileStorage``,
report its full ``oid``.
- Storage imports are a bit faster.
- Storages can be imported from non-seekable sources, like
file-wrapped pipes.
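Why pickle protocol 3 matters for binary strings can be seen with the stdlib alone (run on Python 3; the embedded ``latin1`` marker is an implementation detail of how protocol 2, which predates a native bytes opcode, emulates bytes):

```python
import pickle

data = b"\x00\xffbinary"

# Protocol 2 has no bytes opcode: Python 3 falls back to a codecs-based
# reconstruction (note the 'latin1' marker in the pickle stream).
p2 = pickle.dumps(data, protocol=2)

# Protocol 3 pickles bytes natively, which is why ZODB uses it to round-trip
# binary strings (via zodbpickle.binary on Python 2) between Python 2 and 3.
p3 = pickle.dumps(data, protocol=3)

assert b"latin1" in p2
assert b"latin1" not in p3
assert pickle.loads(p2) == pickle.loads(p3) == data
```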
5.3.0 (2017-08-30)
==================
- Add support for Python 3.6.
- Drop support for Python 3.3.
- Ensure that the ``HistoricalStorageAdapter`` forwards the ``release`` method to
its base instance. See `issue 78 <https://github.com/zopefoundation/ZODB/issues/788>`_.
- Use a higher pickle protocol (2) for serializing objects on Python
2; previously protocol 1 was used. This is *much* more efficient for
new-style classes (all persistent objects are new-style), at the
cost of being very slightly less efficient for old-style classes.
.. note:: On Python 2, this will now allow open ``file`` objects
(but **not** open blobs or sockets) to be pickled (loading
the object will result in a closed file); previously this
would result in a ``TypeError``. Doing so is not
recommended as they cannot be loaded in Python 3.
See `issue 179 <https://github.com/zopefoundation/ZODB/pull/179>`_.
5.2.4 (2017-05-17)
==================
- ``DB.close`` now explicitly frees internal resources. This is
helpful to avoid false positives in tests that check for leaks.
- Optimize getting the path to a blob file. See
`issue 161 <https://github.com/zopefoundation/ZODB/pull/161>`_.
- All classes are new-style classes on Python 2 (they were already
new-style on Python 3). This improves performance on PyPy. See
`issue 160 <https://github.com/zopefoundation/ZODB/pull/160>`_.
5.2.3 (2017-04-11)
==================
- Fix an import error. See `issue 158 <https://github.com/zopefoundation/ZODB/issues/158>`_.
5.2.2 (2017-04-11)
==================
- Fixed: A blob misfeature set blob permissions so that blobs and blob
directories were only readable by the database process owner, rather
than honoring user-controlled permissions (e.g. ``umask``).
See `issue 155 <https://github.com/zopefoundation/ZODB/issues/155>`_.
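A minimal POSIX sketch of the fixed behavior (the temporary path here is illustrative, not ZODB's actual blob-directory layout): file creation should honor the process ``umask`` rather than forcing owner-only permissions.

```python
import os
import stat
import tempfile

# With a typical umask of 0o022, a file created with mode 0o666 ends up
# group/other readable (0o644) instead of owner-only (0o600).
previous = os.umask(0o022)
try:
    directory = tempfile.mkdtemp()
    path = os.path.join(directory, "blob")
    fd = os.open(path, os.O_CREAT | os.O_WRONLY, 0o666)
    os.close(fd)
    mode = stat.S_IMODE(os.stat(path).st_mode)
    assert mode == 0o644  # 0o666 masked by 0o022
finally:
    os.umask(previous)
```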
5.2.1 (2017-04-08)
==================
- Fixed: When opening FileStorages in read-only mode, non-existent
files were silently created. Creating a read-only file-storage
against a non-existent file errors.
5.2.0 (2017-02-09)
==================
- Call the new ``afterCompletion`` API on storages to allow them to free
resources after transactions complete.
See `issue 147 <https://github.com/zodb/relstorage/issues/147>`__.
- Take advantage of the new transaction-manager explicit mode to avoid
starting transactions unnecessarily when transactions end.
- ``Connection.new_oid`` delegates to its storage, not the DB. This is
helpful for improving concurrency in MVCC storages like RelStorage.
See `issue 139 <https://github.com/zopefoundation/ZODB/issues/139>`_.
- ``persistent`` is no longer required at setup time.
See `issue 119 <https://github.com/zopefoundation/ZODB/issues/119>`_.
- ``Connection.close`` and ``Connection.open`` no longer race on
``self.transaction_manager``, which could lead to
``AttributeError``. This was a bug introduced in 5.0.1. See `issue
142 <https://github.com/zopefoundation/ZODB/pull/143>`_.
5.1.1 (2016-11-18)
==================

@@ -43,7 +165,7 @@
Major internal improvements and cleanups plus:

- Added a connection ``prefetch`` method that can be used to request
  that a storage prefetch data an application will need::

    conn.prefetch(obj, ...)
@@ -133,7 +255,7 @@ Concurrency Control (MVCC) implementation:

layer. This underlying layer works by calling ``loadBefore``. The
low-level storage ``load`` method isn't used any more.

This change allows server-based storages like ZEO and NEO to be
implemented more simply and cleanly.
4.4.3 (2016-08-04)
==================

@@ -366,5 +488,5 @@ Bugs Fixed

.. note::
   Please see https://github.com/zopefoundation/ZODB/blob/master/HISTORY.rst
   for older versions of ZODB.
======================
For developers of ZODB
======================
Building
========
Bootstrap buildout, if necessary using ``bootstrap.py``::
python bootstrap.py
Run the buildout::
bin/buildout
Testing
=======
The ZODB checkouts are `buildouts <http://www.python.org/pypi/zc.buildout>`_.
When working from a ZODB checkout, first run the bootstrap.py script
to initialize the buildout:
% python bootstrap.py
and then use the buildout script to build ZODB and gather the dependencies:
% bin/buildout
This creates a test script:
% bin/test -v
This command will run all the tests, printing a single dot for each
test. When it finishes, it will print a test summary. The exact
number of tests can vary depending on platform and available
third-party libraries::
Ran 1182 tests in 241.269s
OK
The test script has many more options. Use the ``-h`` or ``--help``
options to see a full list of options. The default test suite omits
several tests that depend on third-party software or that take a long
time to run. To run all the available tests use the ``--all`` option.
Running all the tests takes much longer::
Ran 1561 tests in 1461.557s
OK
Our primary development platforms are Linux and Mac OS X. The test
suite should pass without error on these platforms and, hopefully,
Windows, although it can take a long time on Windows -- longer if you
use ZoneAlarm.
Generating docs
===============
cd to the doc directory and::
make html
Contributing
============
Almost any code change should include tests.
Any change that changes features should include documentation updates.
@@ -2,9 +2,10 @@ include *.rst
 include *.txt
 include *.py
 include *.ini
-include .coveragerc
-include .travis.yml
-include buildout.cfg
+exclude .coveragerc
+exclude .travis.yml
+exclude appveyor.yml
+exclude buildout.cfg
 include COPYING
 recursive-include doc *
@@ -14,3 +15,5 @@ global-exclude *.dll
 global-exclude *.pyc
 global-exclude *.pyo
 global-exclude *.so
+global-exclude *~
=======================================
ZODB, a Python object-oriented database
=======================================

.. image:: https://img.shields.io/pypi/v/ZODB.svg
   :target: https://pypi.python.org/pypi/ZODB/
   :alt: Latest release

.. image:: https://img.shields.io/pypi/pyversions/ZODB.svg
   :target: https://pypi.org/project/ZODB/
   :alt: Supported Python versions

.. image:: https://travis-ci.org/zopefoundation/ZODB.svg?branch=master
   :target: https://travis-ci.org/zopefoundation/ZODB
   :alt: Build status

.. image:: https://coveralls.io/repos/github/zopefoundation/ZODB/badge.svg
   :target: https://coveralls.io/github/zopefoundation/ZODB
   :alt: Coverage status

.. image:: https://readthedocs.org/projects/zodb/badge/?version=latest
   :target: https://zodb.readthedocs.io/en/latest/
   :alt: Documentation status

ZODB provides an object-oriented database for Python that provides a
high degree of transparency. ZODB runs on Python 2.7 or Python 3.4 and
above. It also runs on PyPy.

- no separate language for database operations

- very little impact on your code to make objects persistent

- no database mapper that partially hides the database.

  Using an object-relational mapping **is not** like using an
  object-oriented database.

- almost no seam between code and database.

ZODB is an ACID Transactional database.

To learn more, visit: http://www.zodb.org

The GitHub repository is at https://github.com/zopefoundation/zodb

If you're interested in contributing to ZODB itself, see the
`developer notes
<https://github.com/zopefoundation/ZODB/blob/master/DEVELOPERS.rst>`_.
The ZODB checkouts are `buildouts <http://www.python.org/pypi/zc.buildout>`_.
When working from a ZODB checkout, first run the bootstrap.py script
to initialize the buildout:
% python bootstrap.py
and then use the buildout script to build ZODB and gather the dependencies:
% bin/buildout
This creates a test script:
% bin/test -v
This command will run all the tests, printing a single dot for each
test. When it finishes, it will print a test summary. The exact
number of tests can vary depending on platform and available
third-party libraries::
Ran 1182 tests in 241.269s
OK
The test script has many more options. Use the ``-h`` or ``--help``
options to see a full list of options. The default test suite omits
several tests that depend on third-party software or that take a long
time to run. To run all the available tests use the ``--all`` option.
Running all the tests takes much longer::
Ran 1561 tests in 1461.557s
OK
Generating docs
---------------
cd to the doc directory and::
make html
Contributing
------------
Almost any code change should include tests.
Any change that changes features should include documentation updates.
Maintenance scripts
-------------------
Several scripts are provided with the ZODB and can help for analyzing,
debugging, checking for consistency, summarizing content, reporting space used
by objects, doing backups, artificial load testing, etc.
Look at the ZODB/script directory for more information.
License
=======
ZODB is distributed under the Zope Public License, an OSI-approved
open source license. Please see the LICENSE.txt file for terms and
conditions.
More information
================
See http://zodb.org/
.. image:: https://badges.gitter.im/zopefoundation/ZODB.svg
:alt: Join the chat at https://gitter.im/zopefoundation/ZODB
:target: https://gitter.im/zopefoundation/ZODB?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge
environment:
matrix:
- python: 27
- python: 27-x64
- python: 34
- python: 34-x64
- python: 35
- python: 35-x64
- python: 36
- python: 36-x64
- python: 37
- python: 37-x64
install:
- "SET PATH=C:\\Python%PYTHON%;c:\\Python%PYTHON%\\scripts;%PATH%"
- echo "C:\Program Files\Microsoft SDKs\Windows\v7.1\Bin\SetEnv.cmd" /x64 > "C:\Program Files (x86)\Microsoft Visual Studio 10.0\VC\bin\amd64\vcvars64.bat"
- pip install -e .
- pip install zope.testrunner zope.testing manuel
- pip install zc.buildout zc.recipe.testrunner zc.recipe.egg
build_script:
- buildout bootstrap
- bin\buildout parts=test
test_script:
- bin\test -vvv
@@ -360,7 +360,7 @@ examples used so far::

This program demonstrates a couple of interesting things. First, this
program shows how persistent objects can refer to each other. The
'self.manager' attribute of 'Employee' instances can refer to other
'Employee' instances. Unlike a relational database, there is no
need to use indirection such as object ids when referring from one
persistent object to another. You can just use normal Python
@@ -11,15 +11,24 @@ database of all persistent references.

The second feature allows us to debug and repair PosKeyErrors by finding the
persistent object(s) that point to the lost object.

.. note::
   This documentation applies to ZODB 3.9 and later. Earlier versions of
   ZODB are not supported, as they lack the fast storage iteration APIs
   required by ``zc.zodbdgc``.

.. note::
   Unless you're using multi-databases, this documentation does not apply to
   `RelStorage <http://pypi.python.org/pypi/RelStorage>`_, which has the same
   features built-in, but accessible in different ways. Look at the options
   for the ``zodbpack`` script. The ``--prepack`` option creates a table
   containing the same information as we are creating in the reference
   database.

   If you *are* using multi-databases, be aware that RelStorage 2.0 is needed
   to perform packing and garbage collection with ``zc.zodbdgc``, and those
   features only work in history-free databases. It's important to realize
   that there is currently no way to perform garbage collection in a
   history-preserving multi-database RelStorage.

Setup
-----
@@ -14,12 +14,17 @@ Because ZODB is an object database:

- almost no seam between code and database.

- Relationships between objects are handled very naturally, supporting
  complex object graphs without joins.

Check out the :doc:`tutorial`!

ZODB runs on Python 2.7 or Python 3.4 and above. It also runs on PyPy.

Transactions
============

Transactions make programs easier to reason about.

Transactions are atomic
   Changes made in a transaction are either saved in their entirety or
@@ -64,12 +69,6 @@ ZODB transaction support:

Other notable ZODB features
===========================

Database caching with invalidation
   Every database connection has a cache that is a consistent partial database
   replica. When accessing database objects, data already in the cache

@@ -78,36 +77,43 @@ Database caching with invalidation

   to be invalidated. The next time invalidated objects are accessed
   they'll be loaded from the database.

   Applications don't have to invalidate cache entries. The database
   invalidates cache entries automatically.

Pluggable layered storage
   ZODB has a pluggable storage architecture. This allows a variety of
   storage schemes including memory-based, file-based and distributed
   (client-server) storage. Through storage layering, storage
   components provide compression, encryption, replication and more.

Easy testing
   Because application code rarely has database logic, it can
   usually be unit tested without a database.

   ZODB provides in-memory storage implementations as well as
   copy-on-write layered "demo storage" implementations that make testing
   database-related code very easy.

Garbage collection
   Removal of unused objects is automatic, so application developers
   don't have to worry about referential integrity.

Binary large objects, Blobs
   ZODB blobs are database-managed files. This can be especially
   useful when serving media. If you use AWS, there's a Blob
   implementation that stores blobs in S3 and caches them on disk.

Time travel
   ZODB storages typically add new records on write and remove old
   records on "pack" operations. This allows limited time travel, back
   to the last pack time. This can be very useful for forensic
   analysis.
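The copy-on-write layering idea behind "demo storage" can be sketched with the stdlib ``ChainMap`` (an analogy only, not ZODB's ``DemoStorage`` API):

```python
from collections import ChainMap

# Reads fall through to the base layer; writes land in the changes layer,
# so the base data is never mutated.
base = {"oid-1": "original state"}
changes = {}
demo = ChainMap(changes, base)

assert demo["oid-1"] == "original state"   # read falls through to base
demo["oid-1"] = "modified state"           # write goes to the top layer
assert demo["oid-1"] == "modified state"
assert base["oid-1"] == "original state"   # base untouched

changes.clear()                            # discard test changes cheaply
assert demo["oid-1"] == "original state"
```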
When should you use ZODB?
=========================

You want to focus on your application without writing a lot of database code.
   ZODB provides highly transparent persistence.

Your application has complex relationships and data structures.
   In relational databases you have to join tables to model complex

@@ -135,7 +141,7 @@ You access data through object attributes and methods.

   enough to support some search.

You read data a lot more than you write it.
   ZODB caches aggressively, and if your working set fits (or mostly
   fits) in memory, performance is very good because it rarely has to
   touch the database server.
@@ -153,21 +159,22 @@ Need to test logic that uses your database.

When should you *not* use ZODB?
===============================

- You have very high write volume.

  ZODB can commit thousands of transactions per second with suitable
  storage configuration and without conflicting changes.

  Internal search indexes can lead to lots of conflicts, and can
  therefore limit write capacity. If you need high write volume and
  search beyond mapping access, consider using external indexes.

- You need to use non-Python tools to access your database,
  especially tools designed to work with relational databases.

Newt DB addresses these issues to a significant degree. See
http://newtdb.org.
How does ZODB scale?
====================

@@ -203,6 +210,7 @@ Learning more

* `The ZODB Book (in progress) <http://zodb.readthedocs.org/en/latest/>`_

What is the expansion of "ZODB"?
================================

@@ -214,6 +222,7 @@ developed as part of the Zope project. But ZODB doesn't depend on

Zope in any way and is used in many projects that have nothing to do
with Zope.

Downloads
=========
@@ -166,7 +166,10 @@ faster than search.

You can use BTrees to build indexes for efficient search, when
necessary. If your application is search centric, or if you prefer to
approach data access that way, then ZODB might not be the best
technology for you. Before you turn your back on the ZODB, it
may be worth checking out the up-and-coming Newt DB [#newtdb]_ project,
which combines the ZODB with Postgresql for indexing, search and access
from non-Python applications.

Transactions
============

@@ -245,3 +248,6 @@ individual topics.

   Objects aren't actually evicted, but their state is released, so
   they take up much less memory and any objects they referenced can
   be removed from memory.

.. [#newtdb]
   Here is an overview of the Newt DB architecture:
   http://www.newtdb.org/en/latest/how-it-works.html
#!python
"""Bootstrap setuptools installation
If you want to use setuptools in your package's setup.py, just include this
file in the same directory with it, and add this to the top of your setup.py::
from ez_setup import use_setuptools
use_setuptools()
If you want to require a specific version of setuptools, set a download
mirror, or use an alternate download directory, you can do so by supplying
the appropriate options to ``use_setuptools()``.
This file can also be run as a script to install or upgrade setuptools.
"""
import sys
DEFAULT_VERSION = "0.6c9"
DEFAULT_URL = "http://pypi.python.org/packages/%s/s/setuptools/" % sys.version[:3]
md5_data = {
'setuptools-0.6b1-py2.3.egg': '8822caf901250d848b996b7f25c6e6ca',
'setuptools-0.6b1-py2.4.egg': 'b79a8a403e4502fbb85ee3f1941735cb',
'setuptools-0.6b2-py2.3.egg': '5657759d8a6d8fc44070a9d07272d99b',
'setuptools-0.6b2-py2.4.egg': '4996a8d169d2be661fa32a6e52e4f82a',
'setuptools-0.6b3-py2.3.egg': 'bb31c0fc7399a63579975cad9f5a0618',
'setuptools-0.6b3-py2.4.egg': '38a8c6b3d6ecd22247f179f7da669fac',
'setuptools-0.6b4-py2.3.egg': '62045a24ed4e1ebc77fe039aa4e6f7e5',
'setuptools-0.6b4-py2.4.egg': '4cb2a185d228dacffb2d17f103b3b1c4',
'setuptools-0.6c1-py2.3.egg': 'b3f2b5539d65cb7f74ad79127f1a908c',
'setuptools-0.6c1-py2.4.egg': 'b45adeda0667d2d2ffe14009364f2a4b',
'setuptools-0.6c2-py2.3.egg': 'f0064bf6aa2b7d0f3ba0b43f20817c27',
'setuptools-0.6c2-py2.4.egg': '616192eec35f47e8ea16cd6a122b7277',
'setuptools-0.6c3-py2.3.egg': 'f181fa125dfe85a259c9cd6f1d7b78fa',
'setuptools-0.6c3-py2.4.egg': 'e0ed74682c998bfb73bf803a50e7b71e',
'setuptools-0.6c3-py2.5.egg': 'abef16fdd61955514841c7c6bd98965e',
'setuptools-0.6c4-py2.3.egg': 'b0b9131acab32022bfac7f44c5d7971f',
'setuptools-0.6c4-py2.4.egg': '2a1f9656d4fbf3c97bf946c0a124e6e2',
'setuptools-0.6c4-py2.5.egg': '8f5a052e32cdb9c72bcf4b5526f28afc',
'setuptools-0.6c5-py2.3.egg': 'ee9fd80965da04f2f3e6b3576e9d8167',
'setuptools-0.6c5-py2.4.egg': 'afe2adf1c01701ee841761f5bcd8aa64',
'setuptools-0.6c5-py2.5.egg': 'a8d3f61494ccaa8714dfed37bccd3d5d',
'setuptools-0.6c6-py2.3.egg': '35686b78116a668847237b69d549ec20',
'setuptools-0.6c6-py2.4.egg': '3c56af57be3225019260a644430065ab',
'setuptools-0.6c6-py2.5.egg': 'b2f8a7520709a5b34f80946de5f02f53',
'setuptools-0.6c7-py2.3.egg': '209fdf9adc3a615e5115b725658e13e2',
'setuptools-0.6c7-py2.4.egg': '5a8f954807d46a0fb67cf1f26c55a82e',
'setuptools-0.6c7-py2.5.egg': '45d2ad28f9750e7434111fde831e8372',
'setuptools-0.6c8-py2.3.egg': '50759d29b349db8cfd807ba8303f1902',
'setuptools-0.6c8-py2.4.egg': 'cba38d74f7d483c06e9daa6070cce6de',
'setuptools-0.6c8-py2.5.egg': '1721747ee329dc150590a58b3e1ac95b',
'setuptools-0.6c9-py2.3.egg': 'a83c4020414807b496e4cfbe08507c03',
'setuptools-0.6c9-py2.4.egg': '260a2be2e5388d66bdaee06abec6342a',
'setuptools-0.6c9-py2.5.egg': 'fe67c3e5a17b12c0e7c541b7ea43a8e6',
'setuptools-0.6c9-py2.6.egg': 'ca37b1ff16fa2ede6e19383e7b59245a',
}
import sys, os
try: from hashlib import md5
except ImportError: from md5 import md5
def _validate_md5(egg_name, data):
if egg_name in md5_data:
digest = md5(data).hexdigest()
if digest != md5_data[egg_name]:
print >>sys.stderr, (
"md5 validation of %s failed! (Possible download problem?)"
% egg_name
)
sys.exit(2)
return data
def use_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
download_delay=15
):
"""Automatically find/download setuptools and make it available on sys.path
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end with
a '/'). `to_dir` is the directory where setuptools will be downloaded, if
it is not already available. If `download_delay` is specified, it should
be the number of seconds that will be paused before initiating a download,
should one be required. If an older version of setuptools is installed,
this routine will print a message to ``sys.stderr`` and raise SystemExit in
an attempt to abort the calling script.
"""
was_imported = 'pkg_resources' in sys.modules or 'setuptools' in sys.modules
def do_download():
egg = download_setuptools(version, download_base, to_dir, download_delay)
sys.path.insert(0, egg)
import setuptools; setuptools.bootstrap_install_from = egg
try:
import pkg_resources
except ImportError:
return do_download()
try:
pkg_resources.require("setuptools>="+version); return
except pkg_resources.VersionConflict, e:
if was_imported:
print >>sys.stderr, (
"The required version of setuptools (>=%s) is not available, and\n"
"can't be installed while this script is running. Please install\n"
" a more recent version first, using 'easy_install -U setuptools'."
"\n\n(Currently using %r)"
) % (version, e.args[0])
sys.exit(2)
else:
del pkg_resources, sys.modules['pkg_resources'] # reload ok
return do_download()
except pkg_resources.DistributionNotFound:
return do_download()
def download_setuptools(
version=DEFAULT_VERSION, download_base=DEFAULT_URL, to_dir=os.curdir,
delay = 15
):
"""Download setuptools from a specified location and return its filename
`version` should be a valid setuptools version number that is available
as an egg for download under the `download_base` URL (which should end
with a '/'). `to_dir` is the directory where the egg will be downloaded.
`delay` is the number of seconds to pause before an actual download attempt.
"""
import urllib2, shutil
egg_name = "setuptools-%s-py%s.egg" % (version,sys.version[:3])
url = download_base + egg_name
saveto = os.path.join(to_dir, egg_name)
src = dst = None
if not os.path.exists(saveto): # Avoid repeated downloads
try:
from distutils import log
if delay:
log.warn("""
---------------------------------------------------------------------------
This script requires setuptools version %s to run (even to display
help). I will attempt to download it for you (from
%s), but
you may need to enable firewall access for this script first.
I will start the download in %d seconds.
(Note: if this machine does not have network access, please obtain the file
%s
and place it in this directory before rerunning this script.)
---------------------------------------------------------------------------""",
version, download_base, delay, url
); from time import sleep; sleep(delay)
log.warn("Downloading %s", url)
src = urllib2.urlopen(url)
# Read/write all in one block, so we don't create a corrupt file
# if the download is interrupted.
data = _validate_md5(egg_name, src.read())
dst = open(saveto,"wb"); dst.write(data)
finally:
if src: src.close()
if dst: dst.close()
return os.path.realpath(saveto)
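`download_setuptools` reads the whole response and writes it in one block so an interrupted transfer never leaves a truncated egg behind. The pattern in isolation, with invented names (`atomic_fetch`, the `read_bytes` callable) standing in for the urllib2 plumbing:

```python
import os
import tempfile

def atomic_fetch(read_bytes, dest):
    # Read everything first, then write in a single pass, so a failure
    # mid-transfer never leaves a partially written file at `dest`.
    data = read_bytes()
    if not os.path.exists(dest):  # avoid repeated downloads
        with open(dest, "wb") as dst:
            dst.write(data)
    return os.path.realpath(dest)

path = atomic_fetch(lambda: b"payload",
                    os.path.join(tempfile.mkdtemp(), "setuptools.egg"))
```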
def main(argv, version=DEFAULT_VERSION):
"""Install or upgrade setuptools and EasyInstall"""
try:
import setuptools
except ImportError:
egg = None
try:
egg = download_setuptools(version, delay=0)
sys.path.insert(0,egg)
from setuptools.command.easy_install import main
return main(list(argv)+[egg]) # we're done here
finally:
if egg and os.path.exists(egg):
os.unlink(egg)
else:
if setuptools.__version__ == '0.0.1':
print >>sys.stderr, (
"You have an obsolete version of setuptools installed. Please\n"
"remove it from your system entirely before rerunning this script."
)
sys.exit(2)
req = "setuptools>="+version
import pkg_resources
try:
pkg_resources.require(req)
except pkg_resources.VersionConflict:
try:
from setuptools.command.easy_install import main
except ImportError:
from easy_install import main
main(list(argv)+[download_setuptools(delay=0)])
sys.exit(0) # try to force an exit
else:
if argv:
from setuptools.command.easy_install import main
main(argv)
else:
print "Setuptools version",version,"or greater has been installed."
print '(Run "ez_setup.py -U setuptools" to reinstall or upgrade.)'
def update_md5(filenames):
"""Update our built-in md5 registry"""
import re
for name in filenames:
base = os.path.basename(name)
f = open(name,'rb')
md5_data[base] = md5(f.read()).hexdigest()
f.close()
data = [" %r: %r,\n" % it for it in md5_data.items()]
data.sort()
repl = "".join(data)
import inspect
srcfile = inspect.getsourcefile(sys.modules[__name__])
f = open(srcfile, 'rb'); src = f.read(); f.close()
match = re.search("\nmd5_data = {\n([^}]+)}", src)
if not match:
print >>sys.stderr, "Internal error!"
sys.exit(2)
src = src[:match.start(1)] + repl + src[match.end(1):]
f = open(srcfile,'w')
f.write(src)
f.close()
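`update_md5` regenerates the script's built-in registry by hashing each named file. The digest side of that is plain `hashlib`; a minimal sketch:

```python
import hashlib

def md5_for(data):
    # Hex digest of raw file bytes -- the value stored per filename
    # in the md5_data registry.
    return hashlib.md5(data).hexdigest()
```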
if __name__=='__main__':
if len(sys.argv)>2 and sys.argv[1]=='--md5update':
update_md5(sys.argv[2:])
else:
main(sys.argv[1:])
[bdist_wheel]
universal = 1
...@@ -11,20 +11,10 @@ ...@@ -11,20 +11,10 @@
# FOR A PARTICULAR PURPOSE. # FOR A PARTICULAR PURPOSE.
# #
############################################################################## ##############################################################################
"""Zope Object Database: object database and persistence
The Zope Object Database provides an object-oriented database for
Python that provides a high-degree of transparency. Applications can
take advantage of object database features with few, if any, changes
to application logic. ZODB includes features such as a plugable storage
interface, rich transaction support, and undo.
"""
version = "5.1.1"
import os
from setuptools import setup, find_packages from setuptools import setup, find_packages
version = '5.5.0.dev0'
classifiers = """\ classifiers = """\
Intended Audience :: Developers Intended Audience :: Developers
License :: OSI Approved :: Zope Public License License :: OSI Approved :: Zope Public License
...@@ -32,9 +22,10 @@ Programming Language :: Python ...@@ -32,9 +22,10 @@ Programming Language :: Python
Programming Language :: Python :: 2 Programming Language :: Python :: 2
Programming Language :: Python :: 2.7 Programming Language :: Python :: 2.7
Programming Language :: Python :: 3 Programming Language :: Python :: 3
Programming Language :: Python :: 3.3
Programming Language :: Python :: 3.4 Programming Language :: Python :: 3.4
Programming Language :: Python :: 3.5 Programming Language :: Python :: 3.5
Programming Language :: Python :: 3.6
Programming Language :: Python :: 3.7
Programming Language :: Python :: Implementation :: CPython Programming Language :: Python :: Implementation :: CPython
Programming Language :: Python :: Implementation :: PyPy Programming Language :: Python :: Implementation :: PyPy
Topic :: Database Topic :: Database
...@@ -44,117 +35,57 @@ Operating System :: Unix ...@@ -44,117 +35,57 @@ Operating System :: Unix
Framework :: ZODB Framework :: ZODB
""" """
def _modname(path, base, name=''): def read(path):
if path == base: with open(path) as f:
return name return f.read()
dirname, basename = os.path.split(path)
return _modname(dirname, base, basename + '.' + name) long_description = read("README.rst") + "\n\n" + read("CHANGES.rst")
def _flatten(suite, predicate=lambda *x: True): tests_require = [
from unittest import TestCase 'manuel',
for suite_or_case in suite: 'zope.testing',
if predicate(suite_or_case): 'zope.testrunner >= 4.4.6',
if isinstance(suite_or_case, TestCase): ]
yield suite_or_case
else: setup(
for x in _flatten(suite_or_case): name="ZODB",
yield x version=version,
author="Jim Fulton",
def _no_layer(suite_or_case): author_email="jim@zope.com",
return getattr(suite_or_case, 'layer', None) is None maintainer="Zope Foundation and Contributors",
maintainer_email="zodb-dev@zope.org",
def _unittests_only(suite, mod_suite): keywords="database nosql python zope",
for case in _flatten(mod_suite, _no_layer): packages=find_packages('src'),
suite.addTest(case) package_dir={'': 'src'},
url='http://www.zodb.org/',
def alltests(): license="ZPL 2.1",
import logging platforms=["any"],
import pkg_resources classifiers=list(filter(None, classifiers.split("\n"))),
import unittest description=long_description.split('\n', 2)[1],
long_description=long_description,
# Something wacked in setting recursion limit when running setup test tests_require=tests_require,
import ZODB.FileStorage.tests extras_require={
del ZODB.FileStorage.tests._save_index
class NullHandler(logging.Handler):
level = 50
def emit(self, record):
pass
logging.getLogger().addHandler(NullHandler())
suite = unittest.TestSuite()
base = pkg_resources.working_set.find(
pkg_resources.Requirement.parse('ZODB')).location
for dirpath, dirnames, filenames in os.walk(base):
if os.path.basename(dirpath) == 'tests':
for filename in filenames:
if filename.endswith('.py') and filename.startswith('test'):
mod = __import__(
_modname(dirpath, base, os.path.splitext(filename)[0]),
{}, {}, ['*'])
_unittests_only(suite, mod.test_suite())
elif 'tests.py' in filenames:
mod = __import__(_modname(dirpath, base, 'tests'), {}, {}, ['*'])
_unittests_only(suite, mod.test_suite())
return suite
doclines = __doc__.split("\n")
def read_file(*path):
base_dir = os.path.dirname(__file__)
file_path = (base_dir, ) + tuple(path)
with open(os.path.join(*file_path), 'rb') as file:
return file.read()
long_description = str(
("\n".join(doclines[2:]) + "\n\n" +
".. contents::\n\n" +
read_file("README.rst").decode('latin-1') + "\n\n" +
read_file("CHANGES.rst").decode('latin-1')))
tests_require = ['zope.testing', 'manuel']
setup(name="ZODB",
version=version,
setup_requires=['persistent'],
author="Jim Fulton",
author_email="jim@zope.com",
maintainer="Zope Foundation and Contributors",
maintainer_email="zodb-dev@zope.org",
keywords="database nosql python zope",
packages = find_packages('src'),
package_dir = {'': 'src'},
url = 'http://www.zodb.org/',
license = "ZPL 2.1",
platforms = ["any"],
description = doclines[0],
classifiers = list(filter(None, classifiers.split("\n"))),
long_description = long_description,
test_suite="__main__.alltests", # to support "setup.py test"
tests_require = tests_require,
extras_require = {
'test': tests_require, 'test': tests_require,
}, },
install_requires = [ install_requires=[
'persistent >= 4.2.0', 'persistent >= 4.4.0',
'BTrees >= 4.2.0', 'BTrees >= 4.2.0',
'ZConfig', 'ZConfig',
'transaction >= 2.0.3', 'transaction >= 2.0.3',
'six', 'six',
'zc.lockfile', 'zc.lockfile',
'zope.interface', 'zope.interface',
'zodbpickle >= 0.6.0', 'zodbpickle >= 1.0.1',
], ],
zip_safe = False, zip_safe=False,
entry_points = """ entry_points="""
[console_scripts] [console_scripts]
fsdump = ZODB.FileStorage.fsdump:main fsdump = ZODB.FileStorage.fsdump:main
fsoids = ZODB.scripts.fsoids:main fsoids = ZODB.scripts.fsoids:main
fsrefs = ZODB.scripts.fsrefs:main fsrefs = ZODB.scripts.fsrefs:main
fstail = ZODB.scripts.fstail:Main fstail = ZODB.scripts.fstail:Main
repozo = ZODB.scripts.repozo:main repozo = ZODB.scripts.repozo:main
""", """,
include_package_data = True, include_package_data=True,
) python_requires='>=2.7,!=3.0.*,!=3.1.*,!=3.2.*,!=3.3.*',
)
...@@ -19,7 +19,7 @@ import time
 from . import utils

-class ActivityMonitor:
+class ActivityMonitor(object):
     """ZODB load/store activity monitor

     This simple implementation just keeps a small log in memory
...
...@@ -211,7 +211,7 @@ class PersistentReference(object):
         elif isinstance(data, list) and data[0] == 'm':
             return data[1][2]

-class PersistentReferenceFactory:
+class PersistentReferenceFactory(object):

     data = None
...
...@@ -126,7 +126,6 @@ class Connection(ExportImport, object):
         storage = storage.new_instance()

         self._normal_storage = self._storage = storage
-        self.new_oid = db.new_oid
         self._savepoint_storage = None

         # Do we need to join a txn manager?
...@@ -200,6 +199,9 @@ class Connection(ExportImport, object):
         self._reader = ObjectReader(self, self._cache, self._db.classFactory)

+    def new_oid(self):
+        return self._storage.new_oid()
+
     def add(self, obj):
         """Add a new object 'obj' to the database and assign it an oid."""
         if self.opened is None:
...@@ -281,8 +283,7 @@ class Connection(ExportImport, object):
             raise ConnectionStateError("Cannot close a connection joined to "
                                        "a transaction")

-        if self._cache is not None:
-            self._cache.incrgc() # This is a good time to do some GC
+        self._cache.incrgc() # This is a good time to do some GC

         # Call the close callbacks.
         if self.__onCloseCallbacks is not None:
# closed the DB already, .e.g, ZODB.connection() does this. # closed the DB already, .e.g, ZODB.connection() does this.
self.transaction_manager.unregisterSynch(self) self.transaction_manager.unregisterSynch(self)
am = self._db._activity_monitor
if am is not None:
am.closedConnection(self)
# Drop transaction manager to release resources and help prevent errors
self.transaction_manager = None
if hasattr(self._storage, 'afterCompletion'):
self._storage.afterCompletion()
if primary: if primary:
for connection in self.connections.values(): for connection in self.connections.values():
if connection is not self: if connection is not self:
...@@ -318,12 +330,9 @@ class Connection(ExportImport, object): ...@@ -318,12 +330,9 @@ class Connection(ExportImport, object):
else: else:
self.opened = None self.opened = None
am = self._db._activity_monitor # We may have been reused by another thread at this point so
if am is not None: # we can't manipulate or check the state of `self` any more.
am.closedConnection(self)
# Drop transaction manager to release resources and help prevent errors
self.transaction_manager = None
def db(self): def db(self):
"""Returns a handle to the database this connection belongs to.""" """Returns a handle to the database this connection belongs to."""
...@@ -399,7 +408,6 @@ class Connection(ExportImport, object): ...@@ -399,7 +408,6 @@ class Connection(ExportImport, object):
def abort(self, transaction): def abort(self, transaction):
"""Abort a transaction and forget all changes.""" """Abort a transaction and forget all changes."""
# The order is important here. We want to abort registered # The order is important here. We want to abort registered
# objects before we process the cache. Otherwise, we may un-add # objects before we process the cache. Otherwise, we may un-add
# objects added in savepoints. If they've been modified since # objects added in savepoints. If they've been modified since
...@@ -473,7 +481,6 @@ class Connection(ExportImport, object): ...@@ -473,7 +481,6 @@ class Connection(ExportImport, object):
def commit(self, transaction): def commit(self, transaction):
"""Commit changes to an object""" """Commit changes to an object"""
transaction = transaction.data(self) transaction = transaction.data(self)
if self._savepoint_storage is not None: if self._savepoint_storage is not None:
...@@ -726,20 +733,13 @@ class Connection(ExportImport, object): ...@@ -726,20 +733,13 @@ class Connection(ExportImport, object):
def newTransaction(self, transaction, sync=True): def newTransaction(self, transaction, sync=True):
self._readCurrent.clear() self._readCurrent.clear()
self._storage.sync(sync)
try: invalidated = self._storage.poll_invalidations()
self._storage.sync(sync) if invalidated is None:
invalidated = self._storage.poll_invalidations() # special value: the transaction is so old that
if invalidated is None: # we need to flush the whole cache.
# special value: the transaction is so old that invalidated = self._cache.cache_data.copy()
# we need to flush the whole cache. self._cache.invalidate(invalidated)
invalidated = self._cache.cache_data.copy()
self._cache.invalidate(invalidated)
except AttributeError:
assert self._storage is None
# Now is a good time to collect some garbage.
self._cache.incrgc()
def afterCompletion(self, transaction): def afterCompletion(self, transaction):
# Note that we we call newTransaction here for 2 reasons: # Note that we we call newTransaction here for 2 reasons:
...@@ -750,7 +750,14 @@ class Connection(ExportImport, object): ...@@ -750,7 +750,14 @@ class Connection(ExportImport, object):
# finalizing previous ones without calling begin. We pass # finalizing previous ones without calling begin. We pass
# False to avoid possiblyt expensive sync calls to not # False to avoid possiblyt expensive sync calls to not
# penalize well-behaved applications that call begin. # penalize well-behaved applications that call begin.
self.newTransaction(transaction, False) if hasattr(self._storage, 'afterCompletion'):
self._storage.afterCompletion()
if not self.explicit_transactions:
self.newTransaction(transaction, False)
# Now is a good time to collect some garbage.
self._cache.incrgc()
# Transaction-manager synchronization -- ISynchronizer # Transaction-manager synchronization -- ISynchronizer
########################################################################## ##########################################################################
...@@ -765,8 +772,9 @@ class Connection(ExportImport, object): ...@@ -765,8 +772,9 @@ class Connection(ExportImport, object):
return self._reader.getState(p) return self._reader.getState(p)
def setstate(self, obj): def setstate(self, obj):
"""Turns the ghost 'obj' into a real object by loading its state from """Load the state for an (ghost) object
the database.""" """
oid = obj._p_oid oid = obj._p_oid
if self.opened is None: if self.opened is None:
...@@ -880,33 +888,38 @@ class Connection(ExportImport, object): ...@@ -880,33 +888,38 @@ class Connection(ExportImport, object):
self.transaction_manager = transaction_manager self.transaction_manager = transaction_manager
self.explicit_transactions = getattr(transaction_manager,
'explicit', False)
self.opened = time.time() self.opened = time.time()
if self._reset_counter != global_reset_counter: if self._reset_counter != global_reset_counter:
# New code is in place. Start a new cache. # New code is in place. Start a new cache.
self._resetCache() self._resetCache()
# This newTransaction is to deal with some pathalogical cases: if not self.explicit_transactions:
# # This newTransaction is to deal with some pathalogical cases:
# a) Someone opens a connection when a transaction isn't #
# active and proceeeds without calling begin on a # a) Someone opens a connection when a transaction isn't
# transaction manager. We initialize the transaction for # active and proceeeds without calling begin on a
# the connection, but we don't do a storage sync, since # transaction manager. We initialize the transaction for
# this will be done if a well-nehaved application calls # the connection, but we don't do a storage sync, since
# begin, and we don't want to penalize well-behaved # this will be done if a well-nehaved application calls
# transactions by syncing twice, as storage syncs might be # begin, and we don't want to penalize well-behaved
# expensive. # transactions by syncing twice, as storage syncs might be
# b) Lots of tests assume that connection transaction # expensive.
# information is set on open. # b) Lots of tests assume that connection transaction
# # information is set on open.
# Fortunately, this is a cheap operation. It doesn't really #
# cost much, if anything. # Fortunately, this is a cheap operation. It doesn't
self.newTransaction(None, False) # really cost much, if anything. Well, except for
# RelStorage, in which case it adds a server round
# trip.
self.newTransaction(None, False)
transaction_manager.registerSynch(self) transaction_manager.registerSynch(self)
if self._cache is not None: self._cache.incrgc() # This is a good time to do some GC
self._cache.incrgc() # This is a good time to do some GC
if delegate: if delegate:
# delegate open to secondary connections # delegate open to secondary connections
...@@ -932,7 +945,7 @@ class Connection(ExportImport, object): ...@@ -932,7 +945,7 @@ class Connection(ExportImport, object):
c._storage.release() c._storage.release()
c._storage = c._normal_storage = None c._storage = c._normal_storage = None
c._cache = PickleCache(self, 0, 0) c._cache = PickleCache(self, 0, 0)
c.transaction_manager = None c.close(False)
########################################################################## ##########################################################################
# Python protocol # Python protocol
...@@ -1101,7 +1114,7 @@ class Connection(ExportImport, object):
             yield ob._p_oid

 @implementer(IDataManagerSavepoint)
-class Savepoint:
+class Savepoint(object):

     def __init__(self, datamanager, state):
         self.datamanager = datamanager
...@@ -1112,7 +1125,7 @@ class Savepoint:

 @implementer(IBlobStorage)
-class TmpStore:
+class TmpStore(object):
     """A storage-like thing to support savepoints."""
...@@ -1180,7 +1193,7 @@ class TmpStore:
         targetpath = self._getBlobPath()
         if not os.path.exists(targetpath):
-            os.makedirs(targetpath, 0o700)
+            os.makedirs(targetpath)

         targetname = self._getCleanFilename(oid, serial)
         rename_or_copy_blob(blobfilename, targetname, chmod=False)
...@@ -1315,7 +1328,7 @@ class TransactionMetaData(object):
     @property
     def _extension(self):
         warnings.warn("_extension is deprecated, use extension",
-                      DeprecationWarning)
+                      DeprecationWarning, stacklevel=2)
         return self.extension

     @_extension.setter
...
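The `_extension` deprecation above passes `stacklevel=2` so the `DeprecationWarning` is attributed to the caller's line rather than the property body. A standalone sketch of the pattern (class and attribute names here are illustrative, not ZODB's):

```python
import warnings

class Meta(object):
    extension = {"user": "alice"}

    @property
    def _extension(self):
        # stacklevel=2 makes the warning point at the code that read
        # `_extension`, not at this property implementation.
        warnings.warn("_extension is deprecated, use extension",
                      DeprecationWarning, stacklevel=2)
        return self.extension

with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("always")
    data = Meta()._extension
```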
...@@ -24,7 +24,7 @@ from . import utils ...@@ -24,7 +24,7 @@ from . import utils
from ZODB.broken import find_global from ZODB.broken import find_global
from ZODB.utils import z64 from ZODB.utils import z64
from ZODB.Connection import Connection, TransactionMetaData from ZODB.Connection import Connection, TransactionMetaData, noop
from ZODB._compat import Pickler, _protocol, BytesIO from ZODB._compat import Pickler, _protocol, BytesIO
import ZODB.serialize import ZODB.serialize
...@@ -107,6 +107,10 @@ class AbstractConnectionPool(object): ...@@ -107,6 +107,10 @@ class AbstractConnectionPool(object):
size = property(getSize, lambda self, v: self.setSize(v)) size = property(getSize, lambda self, v: self.setSize(v))
def clear(self):
pass
class ConnectionPool(AbstractConnectionPool): class ConnectionPool(AbstractConnectionPool):
def __init__(self, size, timeout=1<<31): def __init__(self, size, timeout=1<<31):
...@@ -230,6 +234,11 @@ class ConnectionPool(AbstractConnectionPool): ...@@ -230,6 +234,11 @@ class ConnectionPool(AbstractConnectionPool):
self.available[:] = [i for i in self.available self.available[:] = [i for i in self.available
if i[1] not in to_remove] if i[1] not in to_remove]
def clear(self):
while self.pop():
pass
class KeyedConnectionPool(AbstractConnectionPool): class KeyedConnectionPool(AbstractConnectionPool):
# this pool keeps track of keyed connections all together. It makes # this pool keeps track of keyed connections all together. It makes
# it possible to make assertions about total numbers of keyed connections. # it possible to make assertions about total numbers of keyed connections.
...@@ -285,6 +294,11 @@ class KeyedConnectionPool(AbstractConnectionPool): ...@@ -285,6 +294,11 @@ class KeyedConnectionPool(AbstractConnectionPool):
if not pool.all: if not pool.all:
del self.pools[key] del self.pools[key]
def clear(self):
for pool in self.pools.values():
pool.clear()
self.pools.clear()
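`KeyedConnectionPool.clear` above empties each per-key sub-pool before dropping the mapping itself, so pooled connections are released rather than merely forgotten. A generic sketch of that two-step teardown, with invented names (`KeyedPool`, `add`) standing in for the real pool API:

```python
class KeyedPool(object):
    # Hypothetical stand-in for KeyedConnectionPool: one sub-pool per key.
    def __init__(self):
        self.pools = {}

    def add(self, key, conn):
        self.pools.setdefault(key, []).append(conn)

    def clear(self):
        # Empty each sub-pool first, then forget the sub-pools themselves.
        for pool in self.pools.values():
            del pool[:]
        self.pools.clear()

p = KeyedPool()
p.add('a', object())
p.add('b', object())
p.clear()
```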
@property @property
def test_all(self): def test_all(self):
result = set() result = set()
...@@ -632,19 +646,26 @@ class DB(object): ...@@ -632,19 +646,26 @@ class DB(object):
is closed, so they stop behaving usefully. Perhaps close() is closed, so they stop behaving usefully. Perhaps close()
should also close all the Connections. should also close all the Connections.
""" """
noop = lambda *a: None
self.close = noop self.close = noop
@self._connectionMap @self._connectionMap
def _(c): def _(conn):
if c.transaction_manager is not None: if conn.transaction_manager is not None:
c.transaction_manager.abort() for c in six.itervalues(conn.connections):
c.afterCompletion = c.newTransaction = c.close = noop # Prevent connections from implicitly starting new
c._release_resources() # transactions.
c.explicit_transactions = True
conn.transaction_manager.abort()
conn._release_resources()
self._mvcc_storage.close() self._mvcc_storage.close()
del self.storage del self.storage
del self._mvcc_storage del self._mvcc_storage
# clean up references to other DBs
self.databases = {}
# clean up the connection pool
self.pool.clear()
self.historical_pool.clear()
def getCacheSize(self): def getCacheSize(self):
"""Get the configured cache size (objects). """Get the configured cache size (objects).
...@@ -987,7 +1008,13 @@ class DB(object): ...@@ -987,7 +1008,13 @@ class DB(object):
return ContextManager(self, note) return ContextManager(self, note)
def new_oid(self): def new_oid(self):
return self.storage.new_oid() """
Return a new oid from the storage.
Kept for backwards compatibility only. New oids should be
allocated in a transaction using an open Connection.
"""
return self.storage.new_oid() # pragma: no cover
def open_then_close_db_when_connection_closes(self): def open_then_close_db_when_connection_closes(self):
"""Create and return a connection. """Create and return a connection.
...@@ -999,7 +1026,7 @@ class DB(object): ...@@ -999,7 +1026,7 @@ class DB(object):
return conn return conn
class ContextManager: class ContextManager(object):
"""PEP 343 context manager """PEP 343 context manager
""" """
......
...@@ -29,9 +29,9 @@ from ZODB._compat import PersistentPickler, Unpickler, BytesIO, _protocol ...@@ -29,9 +29,9 @@ from ZODB._compat import PersistentPickler, Unpickler, BytesIO, _protocol
logger = logging.getLogger('ZODB.ExportImport') logger = logging.getLogger('ZODB.ExportImport')
class ExportImport: class ExportImport(object):
def exportFile(self, oid, f=None): def exportFile(self, oid, f=None, bufsize=64 * 1024):
if f is None: if f is None:
f = TemporaryFile(prefix="EXP") f = TemporaryFile(prefix="EXP")
elif isinstance(f, six.string_types): elif isinstance(f, six.string_types):
...@@ -64,7 +64,7 @@ class ExportImport: ...@@ -64,7 +64,7 @@ class ExportImport:
f.write(blob_begin_marker) f.write(blob_begin_marker)
f.write(p64(os.stat(blobfilename).st_size)) f.write(p64(os.stat(blobfilename).st_size))
blobdata = open(blobfilename, "rb") blobdata = open(blobfilename, "rb")
cp(blobdata, f) cp(blobdata, f, bufsize=bufsize)
blobdata.close() blobdata.close()
f.write(export_end_marker) f.write(export_end_marker)
...@@ -158,18 +158,23 @@ class ExportImport: ...@@ -158,18 +158,23 @@ class ExportImport:
oids[ooid] = oid = self._storage.new_oid() oids[ooid] = oid = self._storage.new_oid()
return_oid_list.append(oid) return_oid_list.append(oid)
# Blob support if (b'blob' in data and
blob_begin = f.read(len(blob_begin_marker)) isinstance(self._reader.getGhost(data), Blob)
if blob_begin == blob_begin_marker: ):
# Blob support
# Make sure we have a (redundant, overly) blob marker.
if f.read(len(blob_begin_marker)) != blob_begin_marker:
raise ValueError("No data for blob object")
# Copy the blob data to a temporary file # Copy the blob data to a temporary file
# and remember the name # and remember the name
blob_len = u64(f.read(8)) blob_len = u64(f.read(8))
blob_filename = mktemp() blob_filename = mktemp(self._storage.temporaryDirectory())
blob_file = open(blob_filename, "wb") blob_file = open(blob_filename, "wb")
cp(f, blob_file, blob_len) cp(f, blob_file, blob_len)
blob_file.close() blob_file.close()
else: else:
f.seek(-len(blob_begin_marker),1)
blob_filename = None blob_filename = None
pfile = BytesIO(data) pfile = BytesIO(data)
......
...@@ -267,6 +267,10 @@ class FileStorage(
                 if exc.errno == errno.EFBIG:
                     # The file is too big to open. Fail visibly.
                     raise
+                if read_only:
+                    # When open request is read-only we do not want to create
+                    # the file
+                    raise
                 if exc.errno == errno.ENOENT:
                     # The file doesn't exist. Create it.
                     create = 1
...@@ -1308,14 +1312,14 @@ class FileStorage(
         if self.pack_keep_old:
             # Helpers that move oid dir or revision file to the old dir.

-            os.mkdir(old, 0o777)
+            os.mkdir(old)
             link_or_copy(os.path.join(self.blob_dir, '.layout'),
                          os.path.join(old, '.layout'))
             def handle_file(path):
                 newpath = old+path[lblob_dir:]
                 dest = os.path.dirname(newpath)
                 if not os.path.exists(dest):
-                    os.makedirs(dest, 0o700)
+                    os.makedirs(dest)
                 os.rename(path, newpath)
             handle_dir = handle_file
         else:
...@@ -1364,7 +1368,7 @@ class FileStorage(
                 file_path = os.path.join(path, file_name)
                 dest = os.path.dirname(old+file_path[lblob_dir:])
                 if not os.path.exists(dest):
-                    os.makedirs(dest, 0o700)
+                    os.makedirs(dest)
                 link_or_copy(file_path, old+file_path[lblob_dir:])

     def iterator(self, start=None, stop=None):
...@@ -2079,7 +2083,7 @@ class Record(_DataRecord):
         self.pos = pos

-class UndoSearch:
+class UndoSearch(object):

     def __init__(self, file, pos, first, last, filter=None):
         self.file = file
...@@ -2140,7 +2144,7 @@ class UndoSearch:
         d.update(e)
         return d

-class FilePool:
+class FilePool(object):

     closed = False
     writing = False
...
...@@ -56,7 +56,7 @@ def fmt(p64):
     # Return a nicely formatted string for a packaged 64-bit value
     return "%016x" % u64(p64)

-class Dumper:
+class Dumper(object):
     """A very verbose dumper for debuggin FileStorage problems."""

     # TODO: Should revise this class to use FileStorageFormatter.
...
...@@ -270,7 +270,7 @@ class GC(FileStorageFormatter):
                 if oid == z64 and len(oid2curpos) == 0:
                     # special case, pack to before creation time
                     continue
-                raise
+                raise KeyError(oid)

             reachable[oid] = pos
             for oid in self.findrefs(pos):
...
...@@ -332,7 +332,7 @@ class MappingStorage(object):
             raise ZODB.POSException.StorageTransactionError(
                 "tpc_vote called with wrong transaction")

-class TransactionRecord:
+class TransactionRecord(object):

     status = ' '
...@@ -344,8 +344,8 @@ class TransactionRecord:
         self.extension = extension
         self.data = data

-    _extension = property(lambda self: self._extension,
-                          lambda self, v: setattr(self, '_extension', v),
+    _extension = property(lambda self: self.extension,
+                          lambda self, v: setattr(self, 'extension', v),
                           )

     def __iter__(self):
...
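The corrected `_extension` property above keeps the old attribute name working while storing data only under `extension` (the original lambdas recursed into `_extension` itself). The aliasing pattern in isolation, with an invented class name:

```python
class Record(object):
    def __init__(self, extension):
        self.extension = extension

    # Backward-compatible alias: reads and writes both go to `extension`,
    # so old code using `_extension` keeps working without duplicate state.
    _extension = property(lambda self: self.extension,
                          lambda self, v: setattr(self, 'extension', v))

r = Record({'note': 'x'})
r._extension = {'note': 'y'}
```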
...@@ -14,7 +14,7 @@
 """Provide backward compatibility with storages that only have undoLog()."""

-class UndoLogCompatible:
+class UndoLogCompatible(object):

     def undoInfo(self, first=0, last=-20, specification=None):
         if specification:
...
...@@ -16,6 +16,8 @@ from six import PY3
 IS_JYTHON = sys.platform.startswith('java')

+_protocol = 3
+from zodbpickle import binary

 if not PY3:
     # Python 2.x
...@@ -34,7 +36,6 @@ if not PY3:
     HIGHEST_PROTOCOL = cPickle.HIGHEST_PROTOCOL
     IMPORT_MAPPING = {}
     NAME_MAPPING = {}
-    _protocol = 1

     FILESTORAGE_MAGIC = b"FS21"
 else:
     # Python 3.x: can't use stdlib's pickle because
...@@ -69,7 +70,6 @@ else:
     def loads(s):
         return zodbpickle.pickle.loads(s, encoding='ASCII', errors='bytes')

-    _protocol = 3
     FILESTORAGE_MAGIC = b"FS30"
...
...@@ -288,6 +288,7 @@ class Blob(persistent.Persistent): ...@@ -288,6 +288,7 @@ class Blob(persistent.Persistent):
tempdir = self._p_jar.db()._storage.temporaryDirectory() tempdir = self._p_jar.db()._storage.temporaryDirectory()
else: else:
tempdir = tempfile.gettempdir() tempdir = tempfile.gettempdir()
filename = utils.mktemp(dir=tempdir, prefix="BUC") filename = utils.mktemp(dir=tempdir, prefix="BUC")
self._p_blob_uncommitted = filename self._p_blob_uncommitted = filename
...@@ -337,6 +338,16 @@ class BlobFile(file): ...@@ -337,6 +338,16 @@ class BlobFile(file):
self.blob.closed(self) self.blob.closed(self)
super(BlobFile, self).close() super(BlobFile, self).close()
def __reduce__(self):
# Python 3 cannot pickle an open file with any pickle protocol
# because of the underlying _io.BufferedReader/Writer object.
# Python 2 cannot pickle a file with a protocol < 2, but
# protocol 2 *can* pickle an open file; the result of unpickling
# is a closed file object.
# It's pointless to do that with a blob, so we make sure to
# prohibit it on all versions.
raise TypeError("Pickling a BlobFile is not allowed")
_pid = str(os.getpid())
def log(msg, level=logging.INFO, subsys=_pid, exc_info=False):
@@ -344,7 +355,7 @@ def log(msg, level=logging.INFO, subsys=_pid, exc_info=False):
logger.log(level, message, exc_info=exc_info)
- class FilesystemHelper:
+ class FilesystemHelper(object):
# Storages that implement IBlobStorage can choose to use this
# helper class to generate and parse blob filenames.  This is not
# a set-in-stone interface for all filesystem operations dealing
@@ -366,11 +377,11 @@ class FilesystemHelper:
def create(self):
    if not os.path.exists(self.base_dir):
-         os.makedirs(self.base_dir, 0o700)
+         os.makedirs(self.base_dir)
        log("Blob directory '%s' does not exist. "
            "Created new directory." % self.base_dir)
    if not os.path.exists(self.temp_dir):
-         os.makedirs(self.temp_dir, 0o700)
+         os.makedirs(self.temp_dir)
        log("Blob temporary directory '%s' does not exist. "
            "Created new directory." % self.temp_dir)
@@ -388,13 +399,16 @@ class FilesystemHelper:
(self.layout_name, self.base_dir, layout))
def isSecure(self, path):
-     """Ensure that (POSIX) path mode bits are 0700."""
-     return (os.stat(path).st_mode & 0o77) == 0
+     import warnings
+     warnings.warn(
+         "isSecure is deprecated. Permissions are no longer set by ZODB",
+         DeprecationWarning, stacklevel=2)
def checkSecure(self):
-     if not self.isSecure(self.base_dir):
-         log('Blob dir %s has insecure mode setting' % self.base_dir,
-             level=logging.WARNING)
+     import warnings
+     warnings.warn(
+         "checkSecure is deprecated. Permissions are no longer set by ZODB",
+         DeprecationWarning, stacklevel=2)
def getPathForOID(self, oid, create=False):
    """Given an OID, return the path on the filesystem where
@@ -414,7 +428,7 @@ class FilesystemHelper:
if create and not os.path.exists(path):
    try:
-         os.makedirs(path, 0o700)
+         os.makedirs(path)
    except OSError:
        # We might have lost a race.  If so, the directory
        # must exist now
@@ -515,7 +529,7 @@ class FilesystemHelper:
yield oid, path
- class NoBlobsFileSystemHelper:
+ class NoBlobsFileSystemHelper(object):
@property
def temp_dir(self):
@@ -570,18 +584,21 @@ class BushyLayout(object):
r'(0x[0-9a-f]{1,2}\%s){7,7}0x[0-9a-f]{1,2}$' % os.path.sep)
def oid_to_path(self, oid):
-     directories = []
    # Create the bushy directory structure with the least significant byte
    # first
-     for byte in ascii_bytes(oid):
-         if isinstance(byte, INT_TYPES): # Py3k iterates byte strings as ints
-             hex_segment_bytes = b'0x' + binascii.hexlify(bytes([byte]))
-             hex_segment_string = hex_segment_bytes.decode('ascii')
-         else:
-             hex_segment_string = '0x%s' % binascii.hexlify(byte)
-         directories.append(hex_segment_string)
-     return os.path.sep.join(directories)
+     oid_bytes = ascii_bytes(oid)
+     hex_bytes = binascii.hexlify(oid_bytes)
+     assert len(hex_bytes) == 16
+     directories = [b'0x' + hex_bytes[x:x+2]
+                    for x in range(0, 16, 2)]
+     if bytes is not str: # py3
+         sep_bytes = os.path.sep.encode('ascii')
+         path_bytes = sep_bytes.join(directories)
+         return path_bytes.decode('ascii')
+     else:
+         return os.path.sep.join(directories)
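The rewritten `oid_to_path` hexlifies the whole 8-byte oid at once and slices the 16 hex digits into eight `0xNN` directory segments, one per byte, avoiding the per-byte branching on Python version. The mapping can be checked with a small standalone version of the Python 3 branch of the same logic:

```python
import binascii
import os

def oid_to_bushy_path(oid):
    # oid is an 8-byte identifier; hexlify yields 16 hex digits,
    # consumed two at a time to make one directory level per byte.
    hex_bytes = binascii.hexlify(oid)
    assert len(hex_bytes) == 16
    directories = [b'0x' + hex_bytes[x:x + 2] for x in range(0, 16, 2)]
    return os.path.sep.encode('ascii').join(directories).decode('ascii')

# Each byte of the oid becomes one directory level, least significant last.
path = oid_to_bushy_path(b'\x00\x00\x00\x00\x00\x00\x00\x01')
```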
def path_to_oid(self, path):
    if self.blob_path_pattern.match(path) is None:
...
@@ -632,7 +649,6 @@ class BlobStorageMixin(object):
# XXX Log warning if storage is ClientStorage
self.fshelper = FilesystemHelper(blob_dir, layout)
self.fshelper.create()
- self.fshelper.checkSecure()
self.dirty_oids = []
def _blob_init_no_blobs(self):
@@ -908,8 +924,9 @@ def rename_or_copy_blob(f1, f2, chmod=True):
with open(f2, 'wb') as file2:
    utils.cp(file1, file2)
remove_committed(f1)
if chmod:
-     os.chmod(f2, stat.S_IREAD)
+     set_not_writable(f2)
if sys.platform == 'win32':
    # On Windows, you can't remove read-only files, so make the
@@ -982,3 +999,17 @@ def copyTransactionsFromTo(source, destination):
destination.tpc_vote(trans)
destination.tpc_finish(trans)
+ NO_WRITE = ~ (stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH)
+ READ_PERMS = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
+ def set_not_writable(path):
+     perms = stat.S_IMODE(os.lstat(path).st_mode)
+     # Not writable:
+     perms &= NO_WRITE
+     # Read perms from folder:
+     perms |= stat.S_IMODE(os.lstat(os.path.dirname(path)).st_mode) & READ_PERMS
+     os.chmod(path, perms)
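The new `set_not_writable` helper clears every write bit and then grants read bits only where the containing directory already has them, rather than hard-coding `S_IREAD` as before. Its effect is easy to verify on a POSIX temp file; this sketch reuses the same constants (`stat.S_IMODE` extracts just the permission bits from a mode):

```python
import os
import stat
import tempfile

NO_WRITE = ~(stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH)
READ_PERMS = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH

def set_not_writable(path):
    perms = stat.S_IMODE(os.lstat(path).st_mode)
    perms &= NO_WRITE  # drop every write bit
    # Inherit read bits from the parent directory, so the file ends up
    # as readable as its folder but no more.
    perms |= stat.S_IMODE(os.lstat(os.path.dirname(path)).st_mode) & READ_PERMS
    os.chmod(path, perms)

fd, name = tempfile.mkstemp()
os.close(fd)
set_not_writable(name)
mode = stat.S_IMODE(os.lstat(name).st_mode)
os.chmod(name, 0o600)  # restore the write bit so cleanup can remove the file
os.remove(name)
```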
@@ -163,7 +163,7 @@ def find_global(modulename, globalname,
If we "repair" a missing global::
- >>> class ZODBnotthere:
+ >>> class ZODBnotthere(object):
  ...     atall = []
  >>> sys.modules['ZODB.not'] = ZODBnotthere
@@ -174,7 +174,7 @@ def find_global(modulename, globalname,
>>> find_global('ZODB.not.there', 'atall') is ZODBnotthere.atall
True
- Of course, if we beak it again::
+ Of course, if we break it again::
>>> del sys.modules['ZODB.not']
>>> del sys.modules['ZODB.not.there']
@@ -233,7 +233,7 @@ def rebuild(modulename, globalname, *args):
If we "repair" the brokenness::
- >>> class notthere: # fake notthere module
+ >>> class notthere(object): # fake notthere module
  ...     class atall(object):
  ...         def __new__(self, *args):
  ...             ob = object.__new__(self)
...
@@ -103,7 +103,7 @@ def storageFromURL(url):
def storageFromConfig(section):
    return section.open()
- class BaseConfig:
+ class BaseConfig(object):
"""Object representing a configured storage or database.
Methods:
...
@@ -14,7 +14,7 @@
import persistent.mapping
- class fixer:
+ class fixer(object):
def __of__(self, parent):
    def __setstate__(state, self=parent):
        self._container=state
@@ -23,7 +23,7 @@ class fixer:
fixer=fixer()
- class hack: pass
+ class hack(object): pass
hack=hack()
def __basicnew__():
...
@@ -27,7 +27,7 @@ from ZODB._compat import loads
from persistent.TimeStamp import TimeStamp
- class TxnHeader:
+ class TxnHeader(object):
"""Object representing a transaction record header.
Attribute   Position  Value
@@ -100,7 +100,7 @@ class TxnHeader:
tlen = u64(self._file.read(8))
return TxnHeader(self._file, self._pos - (tlen + 8))
- class DataHeader:
+ class DataHeader(object):
"""Object representing a data record header.
Attribute   Position  Value
...
@@ -1243,6 +1243,16 @@ class IMVCCPrefetchStorage(IMVCCStorage):
more than once.
"""
+ class IMVCCAfterCompletionStorage(IMVCCStorage):
+     def afterCompletion():
+         """Notify a storage that a transaction has ended.
+         The storage may choose to use this opportunity to release resources.
+         See ``transaction.interfaces.ISynchronizer.afterCompletion``.
+         """
class IStorageCurrentRecordIteration(IStorage):
def record_iternext(next=None):
...
@@ -27,10 +27,9 @@ class Base(object):
def __getattr__(self, name):
    if name in self._copy_methods:
-         if hasattr(self._storage, name):
-             m = getattr(self._storage, name)
-             setattr(self, name, m)
-             return m
+         m = getattr(self._storage, name)
+         setattr(self, name, m)
+         return m
    raise AttributeError(name)
@@ -204,7 +203,12 @@ class HistoricalStorageAdapter(Base):
return False
def release(self):
-     pass
+     try:
+         release = self._storage.release
+     except AttributeError:
+         pass
+     else:
+         release()
close = release
...
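`release` now forwards to the underlying storage only when that storage actually provides the method, using the look-up-then-call idiom so that an `AttributeError` raised *inside* the delegate is not silently swallowed along with a genuinely missing attribute. A minimal sketch of the idiom, with illustrative class names:

```python
class BareStorage(object):
    pass  # provides no release()

class FullStorage(object):
    def __init__(self):
        self.released = False

    def release(self):
        self.released = True

class Adapter(object):
    def __init__(self, storage):
        self._storage = storage

    def release(self):
        try:
            # Look the method up first: only a *missing* attribute is
            # tolerated, not an AttributeError raised while running it.
            release = self._storage.release
        except AttributeError:
            pass
        else:
            release()

full = FullStorage()
Adapter(full).release()
Adapter(BareStorage()).release()  # no error even without the method
```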
@@ -26,7 +26,7 @@ def FakeUnpickler(f):
return unpickler
- class Report:
+ class Report(object):
def __init__(self):
    self.OIDMAP = {}
    self.TYPEMAP = {}
@@ -67,8 +67,7 @@ def report(rep):
fmts = "%46s %7d %8dk %5.1f%% %7.2f" # summary format
print(fmt % ("Class Name", "Count", "TBytes", "Pct", "AvgSize"))
print(fmt % ('-'*46, '-'*7, '-'*9, '-'*5, '-'*7))
- typemap = rep.TYPEMAP.keys()
- typemap.sort()
+ typemap = sorted(rep.TYPEMAP)
cumpct = 0.0
for t in typemap:
    pct = rep.TYPESIZE[t] * 100.0 / rep.DBYTES
...
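The `sorted(rep.TYPEMAP)` change is the standard Python 3 port of the two-step Python 2 idiom: `dict.keys()` returns a view with no `.sort()` method on Python 3, while `sorted(mapping)` iterates the keys and returns a fresh sorted list on both versions. For example:

```python
# Illustrative data; the real code sorts a class-name -> count mapping.
typemap = {'BTrees.OOBTree': 3, 'persistent.mapping': 1, 'ZODB.blob': 2}

# Py2-only idiom (fails on Py3 with AttributeError):
#   keys = typemap.keys(); keys.sort()
keys = sorted(typemap)  # iterates the mapping's keys; works on 2 and 3
```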
@@ -44,7 +44,7 @@ from ZODB._compat import FILESTORAGE_MAGIC
class FormatError(ValueError):
    """There is a problem with the format of the FileStorage."""
- class Status:
+ class Status(object):
checkpoint = b'c'
undone = b'u'
...
@@ -112,7 +112,7 @@ def main():
except getopt.error as msg:
    error(2, msg)
- class Options:
+ class Options(object):
stype = 'FileStorage'
dtype = 'FileStorage'
verbose = 0
@@ -329,7 +329,7 @@ def doit(srcdb, dstdb, options):
# helper to deal with differences between old-style store() return and
# new-style store() return that supports ZEO
- class RevidAccumulator:
+ class RevidAccumulator(object):
def __init__(self):
    self.data = {}
...
@@ -164,7 +164,7 @@ def parseargs(argv):
except getopt.error as msg:
    usage(1, msg)
- class Options:
+ class Options(object):
mode = None         # BACKUP, RECOVER or VERIFY
file = None         # name of input Data.fs file
repository = None   # name of directory holding backups
...
@@ -19,7 +19,7 @@ import zope.testing.renormalizing
checker = zope.testing.renormalizing.RENormalizing([
    (re.compile(
-         '[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+'),
+         r'[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+'),
     '2007-11-10 15:18:48.543001'),
    (re.compile('hash=[0-9a-f]{40}'),
     'hash=b16422d09fabdb45d4e4325e4b42d7d6f021d3c3'),
@@ -29,13 +29,13 @@ checker = zope.testing.renormalizing.RENormalizing([
# Python 3 produces larger pickles, even when we use zodbpickle :(
# this changes all the offsets and sizes in fstail.txt
(re.compile("user='' description='' "
-             "length=[0-9]+ offset=[0-9]+ \(\+23\)"),
+             r"length=[0-9]+ offset=[0-9]+ \(\+23\)"),
 "user='' description='' "
 "length=<LENGTH> offset=<OFFSET> (+23)"),
(re.compile("user='' description='initial database creation' "
-             "length=[0-9]+ offset=4 \(\+48\)"),
+             r"length=[0-9]+ offset=4 \(\+48\)"),
 "user='' description='initial database creation' "
 "length=<LENGTH> offset=4 (+48)"),
])
def test_suite():
...
@@ -38,7 +38,7 @@ def _read_file(name, mode='rb'):
return f.read()
- class OurDB:
+ class OurDB(object):
_file_name = None
@@ -241,7 +241,7 @@ class Test_parseargs(unittest.TestCase):
sys.stderr.getvalue())
- class FileopsBase:
+ class FileopsBase(object):
def _makeChunks(self):
    from ZODB.scripts.repozo import READCHUNK
@@ -316,7 +316,7 @@ class Test_checksum(unittest.TestCase, FileopsBase):
self.assertEqual(sum, md5(b'x' * 42).hexdigest())
- class OptionsTestBase:
+ class OptionsTestBase(object):
_repository_directory = None
_data_directory = None
@@ -408,7 +408,7 @@ class Test_concat(OptionsTestBase, unittest.TestCase):
def test_w_ofp(self):
-     class Faux:
+     class Faux(object):
        _closed = False
        def __init__(self):
            self._written = []
...
@@ -123,7 +123,7 @@ import threading
import time
import transaction
- class JobProducer:
+ class JobProducer(object):
def __init__(self):
    self.jobs = []
@@ -143,7 +143,7 @@ class JobProducer:
- class MBox:
+ class MBox(object):
def __init__(self, filename):
    if ' ' in filename:
@@ -247,7 +247,7 @@ def setup(lib_python):
PLexicon('lex', '', Splitter(), CaseNormalizer())
)
- class extra:
+ class extra(object):
doc_attr = 'PrincipiaSearchSource'
lexicon_id = 'lex'
index_type = 'Okapi BM25 Rank'
@@ -371,7 +371,7 @@ def index(connection, messages, catalog, max):
return message.number
- class IndexJob:
+ class IndexJob(object):
needs_mbox = 1
catalog = 1
prefix = 'index'
@@ -444,7 +444,7 @@ def edit(connection, mbox, catalog=1):
return norig, ndel, nins
- class EditJob:
+ class EditJob(object):
needs_mbox = 1
prefix = 'edit'
catalog = 1
@@ -480,7 +480,7 @@ def search(connection, terms, number):
return n
- class SearchJob:
+ class SearchJob(object):
def __init__(self, terms='', number=10):
...
@@ -139,7 +139,8 @@ from persistent import Persistent
from persistent.wref import WeakRefMarker, WeakRef
from ZODB import broken
from ZODB.POSException import InvalidObjectReference
- from ZODB._compat import PersistentPickler, PersistentUnpickler, BytesIO, _protocol
+ from ZODB._compat import PersistentPickler, PersistentUnpickler, BytesIO
+ from ZODB._compat import _protocol, binary
_oidtypes = bytes, type(None)
@@ -159,7 +160,7 @@ def myhasattr(obj, name, _marker=object()):
return getattr(obj, name, _marker) is not _marker
- class ObjectWriter:
+ class ObjectWriter(object):
"""Serializes objects for storage in the database.
The ObjectWriter creates object pickles in the ZODB format.  It
@@ -183,16 +184,16 @@ class ObjectWriter:
"""Return the persistent id for obj.
>>> from ZODB.tests.util import P
- >>> class DummyJar:
+ >>> class DummyJar(object):
  ...     xrefs = True
  ...     def new_oid(self):
-   ...         return 42
+   ...         return b'42'
  ...     def db(self):
  ...         return self
  ...     databases = {}
>>> jar = DummyJar()
- >>> class O:
+ >>> class O(object):
  ...     _p_jar = jar
>>> writer = ObjectWriter(O)
@@ -204,24 +205,31 @@ class ObjectWriter:
>>> bob = P('bob')
>>> oid, cls = writer.persistent_id(bob)
>>> oid
- 42
+ '42'
>>> cls is P
True
+ To work with Python 3, the oid in the persistent id is of the
+ zodbpickle binary type:
+ >>> oid.__class__ is binary
+ True
If a persistent object does not already have an oid and jar,
these will be assigned by persistent_id():
>>> bob._p_oid
- 42
+ '42'
>>> bob._p_jar is jar
True
If the object already has a persistent id, the id is not changed:
- >>> bob._p_oid = 24
+ >>> bob._p_oid = b'24'
>>> oid, cls = writer.persistent_id(bob)
>>> oid
- 24
+ '24'
>>> cls is P
True
@@ -247,9 +255,9 @@ class ObjectWriter:
>>> sam = PNewArgs('sam')
>>> writer.persistent_id(sam)
- 42
+ '42'
>>> sam._p_oid
- 42
+ '42'
>>> sam._p_jar is jar
True
@@ -260,7 +268,7 @@ class ObjectWriter:
Check that a classic class doesn't get identified improperly:
- >>> class ClassicClara:
+ >>> class ClassicClara(object):
  ...     pass
>>> clara = ClassicClara()
@@ -312,6 +320,8 @@ class ObjectWriter:
obj.oid = oid
obj.dm = target._p_jar
obj.database_name = obj.dm.db().database_name
+ oid = binary(oid)
if obj.dm is self._jar:
    return ['w', (oid, )]
else:
@@ -366,6 +376,7 @@ class ObjectWriter:
    self._jar, obj,
    )
+ oid = binary(oid)
klass = type(obj)
if hasattr(klass, '__getnewargs__'):
    # We don't want to save newargs in object refs.
@@ -432,7 +443,7 @@ class ObjectWriter:
def __iter__(self):
    return NewObjectIterator(self._stack)
- class NewObjectIterator:
+ class NewObjectIterator(object):
# The pickler is used as a forward iterator when the connection
# is looking for new objects to pickle.
@@ -452,7 +463,7 @@ class NewObjectIterator:
next = __next__
- class ObjectReader:
+ class ObjectReader(object):
def __init__(self, conn=None, cache=None, factory=None):
    self._conn = conn
...
@@ -32,7 +32,7 @@ from .. import utils
ZERO = b'\0'*8
- class BasicStorage:
+ class BasicStorage(object):
def checkBasics(self):
    self.assertEqual(self._storage.lastTransaction(), ZERO)
...
@@ -55,7 +55,7 @@ class PCounter4(PCounter):
def _p_resolveConflict(self, oldState, savedState):
    raise RuntimeError("Can't get here; not enough args")
- class ConflictResolvingStorage:
+ class ConflictResolvingStorage(object):
def checkResolve(self, resolvable=True):
    db = DB(self._storage)
@@ -131,7 +131,7 @@ class ConflictResolvingStorage:
self._dostoreNP,
oid, revid=revid1, data=zodb_pickle(obj))
- class ConflictResolvingTransUndoStorage:
+ class ConflictResolvingTransUndoStorage(object):
def checkUndoConflictResolution(self):
    # This test is based on checkNotUndoable in the
...
@@ -21,7 +21,7 @@ import sys
from time import time, sleep
from ZODB.tests.MinPO import MinPO
- class HistoryStorage:
+ class HistoryStorage(object):
def checkSimpleHistory(self):
    self._checkHistory((11, 12, 13))
...
@@ -31,7 +31,7 @@ except ImportError:
# Py3: zip() already returns an iterable.
pass
- class IteratorCompare:
+ class IteratorCompare(object):
def iter_verify(self, txniter, revids, val0):
    eq = self.assertEqual
@@ -203,7 +203,7 @@ class ExtendedIteratorStorage(IteratorCompare):
self.iter_verify(txniter, [revid3], 13)
- class IteratorDeepCompare:
+ class IteratorDeepCompare(object):
def compare(self, storage1, storage2):
    eq = self.assertEqual
...
@@ -211,7 +211,7 @@ class ExtStorageClientThread(StorageClientThread):
for obj in iter:
    pass
- class MTStorage:
+ class MTStorage(object):
"Test a storage with multiple client threads executing concurrently."
def _checkNThreads(self, n, constructor, *args):
...
@@ -44,7 +44,7 @@ ZERO = b'\0'*8
# ids, not as the object's state.  This makes the referencesf stuff work,
# because it pickle sniffs for persistent ids (so we have to get those
# persistent ids into the root object's pickle).
- class Root:
+ class Root(object):
pass
@@ -99,7 +99,7 @@ def pdumps(obj):
return s.getvalue()
- class PackableStorageBase:
+ class PackableStorageBase(object):
# We keep a cache of object ids to instances so that the unpickler can
# easily return any persistent object.
@@ -768,7 +768,7 @@ class ClientThread(TestThread):
conn.close()
- class ElapsedTimer:
+ class ElapsedTimer(object):
def __init__(self, start_time):
    self.start_time = start_time
...
@@ -15,7 +15,7 @@
from ZODB.utils import load_current
- class PersistentStorage:
+ class PersistentStorage(object):
def checkUpdatesPersist(self):
    oids = []
...
@@ -16,7 +16,7 @@ from ZODB.POSException import ReadOnlyError, Unsupported
from ZODB.utils import load_current
- class ReadOnlyStorage:
+ class ReadOnlyStorage(object):
def _create_data(self):
    # test a read-only storage that already has some data
...
@@ -20,7 +20,7 @@ from ZODB.utils import p64, u64, load_current
ZERO = '\0'*8
- class RevisionStorage:
+ class RevisionStorage(object):
def checkLoadSerial(self):
    oid = self._storage.new_oid()
...
@@ -69,7 +69,7 @@ OID = "\000" * 8
SERIALNO = "\000" * 8
TID = "\000" * 8
- class SynchronizedStorage:
+ class SynchronizedStorage(object):
def verifyNotCommitting(self, callable, *args):
    self.assertRaises(StorageTransactionError, callable, *args)
...
...@@ -149,7 +149,7 @@ class Transaction(object): ...@@ -149,7 +149,7 @@ class Transaction(object):
def __getattr__(self, name): def __getattr__(self, name):
return getattr(self.__trans, name) return getattr(self.__trans, name)
class ZConfigHex: class ZConfigHex(object):
_factory = HexStorage _factory = HexStorage
......
...@@ -137,7 +137,7 @@ an exception: ...@@ -137,7 +137,7 @@ an exception:
Clean up: Clean up:
>>> for a_db in dbmap.values(): >>> for a_db in list(dbmap.values()):
... a_db.close() ... a_db.close()
......
...@@ -26,7 +26,7 @@ Make a change locally: ...@@ -26,7 +26,7 @@ Make a change locally:
>>> rt = cn.root() >>> rt = cn.root()
>>> rt['a'] = 1 >>> rt['a'] = 1
Sync isn't called when a connectiin is opened, even though that Sync isn't called when a connection is opened, even though that
implicitly starts a new transaction: implicitly starts a new transaction:
>>> st.sync_called >>> st.sync_called
...@@ -40,7 +40,7 @@ Sync is only called when we explicitly start a new transaction: ...@@ -40,7 +40,7 @@ Sync is only called when we explicitly start a new transaction:
True True
>>> st.sync_called = False >>> st.sync_called = False
BTW, calling ``sync()`` on a connectin starts a new transaction, which BTW, calling ``sync()`` on a connection starts a new transaction, which
caused ``sync()`` to be called on the storage: caused ``sync()`` to be called on the storage:
>>> cn.sync() >>> cn.sync()
...@@ -49,7 +49,7 @@ caused ``sync()`` to be called on the storage: ...@@ -49,7 +49,7 @@ caused ``sync()`` to be called on the storage:
>>> st.sync_called = False >>> st.sync_called = False
``sync()`` is not called by the Connection's ``afterCompletion()`` ``sync()`` is not called by the Connection's ``afterCompletion()``
hook after the commit completes, because we'll sunc when a new hook after the commit completes, because we'll sync when a new
transaction begins: transaction begins:
>>> transaction.commit() >>> transaction.commit()
...@@ -81,7 +81,7 @@ traceback then ;-) ...@@ -81,7 +81,7 @@ traceback then ;-)
>>> cn.close() >>> cn.close()
As a special case, if a synchronizer registers while a transaction is As a special case, if a synchronizer registers while a transaction is
in flight, then newTransaction and this the storage sync method is in flight, then newTransaction and thus the storage sync method is
called: called:
>>> tm = transaction.TransactionManager() >>> tm = transaction.TransactionManager()
......
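The synchronizer behavior described above (the storage's `sync()` runs when a new transaction begins, not in the `afterCompletion()` hook) can be sketched with a minimal stand-in. This is plain illustrative Python, not the real `transaction` package API:

```python
class MiniTransactionManager(object):
    """Illustrative stub: notifies synchronizers only when a transaction begins."""
    def __init__(self):
        self.synchronizers = []

    def registerSynch(self, synch):
        self.synchronizers.append(synch)

    def begin(self):
        # newTransaction is where a Connection would sync() its storage
        for synch in self.synchronizers:
            synch.newTransaction(self)

    def commit(self):
        # afterCompletion deliberately does not sync; the next begin() will
        for synch in self.synchronizers:
            synch.afterCompletion(self)


class CountingSynch(object):
    def __init__(self):
        self.syncs = 0

    def newTransaction(self, txn):
        self.syncs += 1

    def afterCompletion(self, txn):
        pass


tm = MiniTransactionManager()
synch = CountingSynch()
tm.registerSynch(synch)
tm.begin()
tm.commit()
tm.begin()
print(synch.syncs)  # 2: one per begin(), none per commit()
```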
...@@ -24,7 +24,7 @@ import time ...@@ -24,7 +24,7 @@ import time
from ZODB.ActivityMonitor import ActivityMonitor from ZODB.ActivityMonitor import ActivityMonitor
class FakeConnection: class FakeConnection(object):
loads = 0 loads = 0
stores = 0 stores = 0
......
...@@ -32,7 +32,7 @@ def test_integration(): ...@@ -32,7 +32,7 @@ def test_integration():
We'll create a fake module with a class: We'll create a fake module with a class:
>>> class NotThere: >>> class NotThere(object):
... Atall = type('Atall', (persistent.Persistent, ), ... Atall = type('Atall', (persistent.Persistent, ),
... {'__module__': 'ZODB.not.there'}) ... {'__module__': 'ZODB.not.there'})
......
...@@ -307,7 +307,7 @@ class LRUCacheTests(CacheTestBase): ...@@ -307,7 +307,7 @@ class LRUCacheTests(CacheTestBase):
if details['state'] is None: # i.e., it's a ghost if details['state'] is None: # i.e., it's a ghost
self.assertTrue(details['rc'] > 0) self.assertTrue(details['rc'] > 0)
class StubDataManager: class StubDataManager(object):
def setklassstate(self, object): def setklassstate(self, object):
pass pass
......
...@@ -33,13 +33,15 @@ class ConfigTestBase(ZODB.tests.util.TestCase): ...@@ -33,13 +33,15 @@ class ConfigTestBase(ZODB.tests.util.TestCase):
def _test(self, s): def _test(self, s):
db = self._opendb(s) db = self._opendb(s)
self.storage = db._storage try:
# Do something with the database to make sure it works self.storage = db._storage
cn = db.open() # Do something with the database to make sure it works
rt = cn.root() cn = db.open()
rt["test"] = 1 rt = cn.root()
transaction.commit() rt["test"] = 1
db.close() transaction.commit()
finally:
db.close()
class ZODBConfigTest(ConfigTestBase): class ZODBConfigTest(ConfigTestBase):
...@@ -73,6 +75,16 @@ class ZODBConfigTest(ConfigTestBase): ...@@ -73,6 +75,16 @@ class ZODBConfigTest(ConfigTestBase):
def test_file_config2(self): def test_file_config2(self):
path = tempfile.mktemp() path = tempfile.mktemp()
# first pass to actually create database file
self._test(
"""
<zodb>
<filestorage>
path %s
</filestorage>
</zodb>
""" % path)
# write operations must be disallowed on read-only access
cfg = """ cfg = """
<zodb> <zodb>
<filestorage> <filestorage>
......
...@@ -36,7 +36,7 @@ checker = renormalizing.RENormalizing([ ...@@ -36,7 +36,7 @@ checker = renormalizing.RENormalizing([
# Python 3 bytes add a "b". # Python 3 bytes add a "b".
(re.compile("b('.*?')"), r"\1"), (re.compile("b('.*?')"), r"\1"),
# Python 3 removes empty list representation. # Python 3 removes empty list representation.
(re.compile("set\(\[\]\)"), r"set()"), (re.compile(r"set\(\[\]\)"), r"set()"),
# Python 3 adds module name to exceptions. # Python 3 adds module name to exceptions.
(re.compile("ZODB.POSException.POSKeyError"), r"POSKeyError"), (re.compile("ZODB.POSException.POSKeyError"), r"POSKeyError"),
(re.compile("ZODB.POSException.ReadConflictError"), r"ReadConflictError"), (re.compile("ZODB.POSException.ReadConflictError"), r"ReadConflictError"),
...@@ -198,7 +198,7 @@ class SetstateErrorLoggingTests(ZODB.tests.util.TestCase): ...@@ -198,7 +198,7 @@ class SetstateErrorLoggingTests(ZODB.tests.util.TestCase):
record.msg, record.msg,
"Shouldn't load state for ZODB.tests.testConnection.StubObject" "Shouldn't load state for ZODB.tests.testConnection.StubObject"
" 0x01 when the connection is closed") " 0x01 when the connection is closed")
self.assert_(record.exc_info) self.assertTrue(record.exc_info)
class UserMethodTests(unittest.TestCase): class UserMethodTests(unittest.TestCase):
...@@ -1060,6 +1060,7 @@ def doctest_lp485456_setattr_in_setstate_doesnt_cause_multiple_stores(): ...@@ -1060,6 +1060,7 @@ def doctest_lp485456_setattr_in_setstate_doesnt_cause_multiple_stores():
>>> conn.close() >>> conn.close()
""" """
class _PlayPersistent(Persistent): class _PlayPersistent(Persistent):
def setValueWithSize(self, size=0): self.value = size*' ' def setValueWithSize(self, size=0): self.value = size*' '
__init__ = setValueWithSize __init__ = setValueWithSize
...@@ -1212,7 +1213,7 @@ class ModifyOnGetStateObject(Persistent): ...@@ -1212,7 +1213,7 @@ class ModifyOnGetStateObject(Persistent):
return Persistent.__getstate__(self) return Persistent.__getstate__(self)
class StubStorage: class StubStorage(object):
"""Very simple in-memory storage that does *just* enough to support tests. """Very simple in-memory storage that does *just* enough to support tests.
Only one concurrent transaction is supported. Only one concurrent transaction is supported.
...@@ -1301,16 +1302,78 @@ class StubStorage: ...@@ -1301,16 +1302,78 @@ class StubStorage:
return z64 return z64
class TestConnectionInterface(unittest.TestCase): class TestConnection(unittest.TestCase):
def test_connection_interface(self): def test_connection_interface(self):
from ZODB.interfaces import IConnection from ZODB.interfaces import IConnection
db = databaseFromString("<zodb>\n<mappingstorage/>\n</zodb>") db = databaseFromString("<zodb>\n<mappingstorage/>\n</zodb>")
cn = db.open() cn = db.open()
verifyObject(IConnection, cn) verifyObject(IConnection, cn)
db.close()
def test_storage_afterCompletionCalled(self):
db = ZODB.DB(None)
conn = db.open()
data = []
conn._storage.afterCompletion = lambda : data.append(None)
conn.transaction_manager.commit()
self.assertEqual(len(data), 1)
conn.close()
self.assertEqual(len(data), 2)
db.close()
    def test_explicit_transactions_no_newTransaction_on_afterCompletion(self):
syncs = []
from .MVCCMappingStorage import MVCCMappingStorage
storage = MVCCMappingStorage()
new_instance = storage.new_instance
def new_instance2():
inst = new_instance()
sync = inst.sync
def sync2(*args):
sync()
syncs.append(1)
inst.sync = sync2
return inst
storage.new_instance = new_instance2
db = ZODB.DB(storage)
del syncs[:] # Need to do this to clear effect of getting the
# root object
# We don't want to depend on latest transaction package, so
# just set attr for test:
tm = transaction.TransactionManager()
tm.explicit = True
conn = db.open(tm)
self.assertEqual(len(syncs), 0)
conn.transaction_manager.begin()
self.assertEqual(len(syncs), 1)
conn.transaction_manager.commit()
self.assertEqual(len(syncs), 1)
conn.transaction_manager.begin()
self.assertEqual(len(syncs), 2)
conn.transaction_manager.abort()
self.assertEqual(len(syncs), 2)
conn.close()
self.assertEqual(len(syncs), 2)
# For reference, in non-explicit mode:
conn = db.open()
self.assertEqual(len(syncs), 3)
conn._storage.sync = syncs.append
conn.transaction_manager.begin()
self.assertEqual(len(syncs), 4)
conn.transaction_manager.abort()
self.assertEqual(len(syncs), 5)
conn.close()
db.close()
class StubDatabase: class StubDatabase(object):
def __init__(self): def __init__(self):
self.storage = StubStorage() self.storage = StubStorage()
...@@ -1330,6 +1393,6 @@ def test_suite(): ...@@ -1330,6 +1393,6 @@ def test_suite():
s = unittest.makeSuite(ConnectionDotAdd) s = unittest.makeSuite(ConnectionDotAdd)
s.addTest(unittest.makeSuite(SetstateErrorLoggingTests)) s.addTest(unittest.makeSuite(SetstateErrorLoggingTests))
s.addTest(doctest.DocTestSuite(checker=checker)) s.addTest(doctest.DocTestSuite(checker=checker))
s.addTest(unittest.makeSuite(TestConnectionInterface)) s.addTest(unittest.makeSuite(TestConnection))
s.addTest(unittest.makeSuite(EstimatedSizeTests)) s.addTest(unittest.makeSuite(EstimatedSizeTests))
return s return s
...@@ -397,6 +397,31 @@ def minimally_test_connection_timeout(): ...@@ -397,6 +397,31 @@ def minimally_test_connection_timeout():
""" """
def cleanup_on_close():
"""Verify that various references are cleared on close
>>> db = ZODB.DB(None)
>>> conn = db.open()
>>> conn.root.x = 'x'
>>> transaction.commit()
>>> conn.close()
>>> historical_conn = db.open(at=db.lastTransaction())
>>> historical_conn.close()
>>> db.close()
>>> db.databases
{}
>>> db.pool.pop() is None
True
>>> [pool is None for pool in db.historical_pool.pools.values()]
[]
"""
def test_suite(): def test_suite():
s = unittest.makeSuite(DBTests) s = unittest.makeSuite(DBTests)
s.addTest(doctest.DocTestSuite( s.addTest(doctest.DocTestSuite(
......
...@@ -160,7 +160,7 @@ def setUp(test): ...@@ -160,7 +160,7 @@ def setUp(test):
def testSomeDelegation(): def testSomeDelegation():
r""" r"""
>>> import six >>> import six
>>> class S: >>> class S(object):
... def __init__(self, name): ... def __init__(self, name):
... self.name = name ... self.name = name
... def getSize(self): ... def getSize(self):
......
...@@ -689,6 +689,19 @@ def pack_with_open_blob_files(): ...@@ -689,6 +689,19 @@ def pack_with_open_blob_files():
>>> db.close() >>> db.close()
""" """
def readonly_open_nonexistent_file():
"""
    Make sure an error is reported when a non-existent file is opened
    read-only.
>>> try:
... fs = ZODB.FileStorage.FileStorage('nonexistent.fs', read_only=True)
... except Exception as e:
... # Python2 raises IOError; Python3 - FileNotFoundError
... print("error: %s" % str(e)) # doctest: +ELLIPSIS
error: ... No such file or directory: 'nonexistent.fs'
"""
def test_suite(): def test_suite():
suite = unittest.TestSuite() suite = unittest.TestSuite()
for klass in [ for klass in [
......
...@@ -33,7 +33,7 @@ from ZODB.tests import ( ...@@ -33,7 +33,7 @@ from ZODB.tests import (
Synchronization, Synchronization,
) )
class MVCCTests: class MVCCTests(object):
def checkClosingNestedDatabasesWorks(self): def checkClosingNestedDatabasesWorks(self):
# This tests for the error described in # This tests for the error described in
......
...@@ -11,6 +11,7 @@ ...@@ -11,6 +11,7 @@
# FOR A PARTICULAR PURPOSE. # FOR A PARTICULAR PURPOSE.
# #
############################################################################## ##############################################################################
from collections import namedtuple
import ZODB.MappingStorage import ZODB.MappingStorage
import unittest import unittest
import ZODB.tests.hexstorage import ZODB.tests.hexstorage
...@@ -61,9 +62,35 @@ class MappingStorageHexTests(MappingStorageTests): ...@@ -61,9 +62,35 @@ class MappingStorageHexTests(MappingStorageTests):
self._storage = ZODB.tests.hexstorage.HexStorage( self._storage = ZODB.tests.hexstorage.HexStorage(
ZODB.MappingStorage.MappingStorage()) ZODB.MappingStorage.MappingStorage())
MockTransaction = namedtuple(
'transaction',
['user', 'description', 'extension']
)
class MappingStorageTransactionRecordTests(unittest.TestCase):
def setUp(self):
self._transaction_record = ZODB.MappingStorage.TransactionRecord(
0,
MockTransaction('user', 'description', 'extension'),
''
)
def check_set__extension(self):
self._transaction_record._extension = 'new'
self.assertEqual(self._transaction_record.extension, 'new')
def check_get__extension(self):
self.assertEqual(
self._transaction_record.extension,
self._transaction_record._extension
)
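`MockTransaction` above uses `namedtuple` as a lightweight, attribute-only stand-in for a real transaction object. The same trick, in isolation:

```python
from collections import namedtuple

# A three-field stub matching what TransactionRecord reads from a transaction
MockTransaction = namedtuple('transaction', ['user', 'description', 'extension'])

t = MockTransaction('user', 'description', 'extension')
assert t.user == 'user'
assert t.description == 'description'

# namedtuple instances are immutable, so a shared fixture can't be
# accidentally mutated by one test and leak into another:
try:
    t.user = 'other'
except AttributeError:
    pass
else:
    raise AssertionError("namedtuple fields are read-only")
```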
def test_suite(): def test_suite():
suite = unittest.makeSuite(MappingStorageTests, 'check') suite = unittest.TestSuite()
suite = unittest.makeSuite(MappingStorageHexTests, 'check') suite.addTest(unittest.makeSuite(MappingStorageTests, 'check'))
suite.addTest(unittest.makeSuite(MappingStorageHexTests, 'check'))
suite.addTest(unittest.makeSuite(MappingStorageTransactionRecordTests, 'check'))
return suite return suite
if __name__ == "__main__": if __name__ == "__main__":
......
...@@ -37,7 +37,7 @@ class TestPList(unittest.TestCase): ...@@ -37,7 +37,7 @@ class TestPList(unittest.TestCase):
uu2 = PersistentList(u2) uu2 = PersistentList(u2)
v = PersistentList(tuple(u)) v = PersistentList(tuple(u))
class OtherList: class OtherList(object):
def __init__(self, initlist): def __init__(self, initlist):
self.__data = initlist self.__data = initlist
def __len__(self): def __len__(self):
......
...@@ -18,6 +18,8 @@ import unittest ...@@ -18,6 +18,8 @@ import unittest
from persistent import Persistent from persistent import Persistent
from persistent.wref import WeakRef from persistent.wref import WeakRef
import zope.testing.setupstack
import ZODB.tests.util import ZODB.tests.util
from ZODB import serialize from ZODB import serialize
from ZODB._compat import Pickler, PersistentUnpickler, BytesIO, _protocol, IS_JYTHON from ZODB._compat import Pickler, PersistentUnpickler, BytesIO, _protocol, IS_JYTHON
...@@ -100,7 +102,7 @@ class SerializerTestCase(unittest.TestCase): ...@@ -100,7 +102,7 @@ class SerializerTestCase(unittest.TestCase):
def test_myhasattr(self): def test_myhasattr(self):
class OldStyle: class OldStyle(object):
bar = "bar" bar = "bar"
def __getattr__(self, name): def __getattr__(self, name):
if name == "error": if name == "error":
...@@ -135,6 +137,9 @@ class SerializerTestCase(unittest.TestCase): ...@@ -135,6 +137,9 @@ class SerializerTestCase(unittest.TestCase):
top.ref = WeakRef(o) top.ref = WeakRef(o)
pickle = serialize.ObjectWriter().serialize(top) pickle = serialize.ObjectWriter().serialize(top)
# Make sure the persistent id is pickled using the 'C',
# SHORT_BINBYTES opcode:
self.assertTrue(b'C\x04abcd' in pickle)
refs = [] refs = []
u = PersistentUnpickler(None, refs.append, BytesIO(pickle)) u = PersistentUnpickler(None, refs.append, BytesIO(pickle))
...@@ -143,6 +148,18 @@ class SerializerTestCase(unittest.TestCase): ...@@ -143,6 +148,18 @@ class SerializerTestCase(unittest.TestCase):
self.assertEqual(refs, [['w', (b'abcd',)]]) self.assertEqual(refs, [['w', (b'abcd',)]])
def test_protocol_3_binary_handling(self):
from ZODB.serialize import _protocol
self.assertEqual(3, _protocol) # Yeah, whitebox
o = PersistentObject()
o._p_oid = b'o'
o.o = PersistentObject()
o.o._p_oid = b'o.o'
pickle = serialize.ObjectWriter().serialize(o)
# Make sure the persistent id is pickled using the 'C',
# SHORT_BINBYTES opcode:
self.assertTrue(b'C\x03o.o' in pickle)
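The assertion checks that the persistent id is emitted with pickle protocol 3's `SHORT_BINBYTES` opcode (`C`): a one-byte length followed by the raw bytes. A standalone check of that encoding, outside ZODB:

```python
import pickle
import pickletools

data = pickle.dumps(b'o.o', protocol=3)
# 'C' (SHORT_BINBYTES), then the 1-byte length \x03, then the payload
assert b'C\x03o.o' in data

# pickletools can confirm the opcode by name:
ops = [op.name for op, arg, pos in pickletools.genops(data)]
assert 'SHORT_BINBYTES' in ops
```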
class SerializerFunctestCase(unittest.TestCase): class SerializerFunctestCase(unittest.TestCase):
......
...@@ -128,12 +128,34 @@ class TestUtils(unittest.TestCase): ...@@ -128,12 +128,34 @@ class TestUtils(unittest.TestCase):
self.assertEqual(get_pickle_metadata(pickle), self.assertEqual(get_pickle_metadata(pickle),
(__name__, ExampleClass.__name__)) (__name__, ExampleClass.__name__))
def test_p64_bad_object(self):
with self.assertRaises(ValueError) as exc:
p64(2 ** 65)
e = exc.exception
# The args will be whatever the struct.error args were,
# which vary from version to version and across implementations,
# followed by the bad value
self.assertEqual(e.args[-1], 2 ** 65)
def test_u64_bad_object(self):
with self.assertRaises(ValueError) as exc:
u64(b'123456789')
e = exc.exception
# The args will be whatever the struct.error args were,
# which vary from version to version and across implementations,
# followed by the bad value
self.assertEqual(e.args[-1], b'123456789')
class ExampleClass(object): class ExampleClass(object):
pass pass
def test_suite(): def test_suite():
return unittest.TestSuite(( suite = unittest.defaultTestLoader.loadTestsFromName(__name__)
unittest.makeSuite(TestUtils), suite.addTest(
doctest.DocFileSuite('../utils.txt', checker=checker), doctest.DocFileSuite('../utils.txt', checker=checker)
)) )
return suite
...@@ -598,7 +598,7 @@ class PoisonedError(Exception): ...@@ -598,7 +598,7 @@ class PoisonedError(Exception):
pass pass
# PoisonedJar arranges to raise PoisonedError from interesting places. # PoisonedJar arranges to raise PoisonedError from interesting places.
class PoisonedJar: class PoisonedJar(object):
def __init__(self, break_tpc_begin=False, break_tpc_vote=False, def __init__(self, break_tpc_begin=False, break_tpc_vote=False,
break_savepoint=False): break_savepoint=False):
self.break_tpc_begin = break_tpc_begin self.break_tpc_begin = break_tpc_begin
...@@ -629,7 +629,7 @@ class PoisonedJar: ...@@ -629,7 +629,7 @@ class PoisonedJar:
pass pass
class PoisonedObject: class PoisonedObject(object):
def __init__(self, poisonedjar): def __init__(self, poisonedjar):
self._p_jar = poisonedjar self._p_jar = poisonedjar
......
...@@ -38,32 +38,38 @@ class TransactionMetaDataTests(unittest.TestCase): ...@@ -38,32 +38,38 @@ class TransactionMetaDataTests(unittest.TestCase):
self.assertEqual(t.user, b'user') self.assertEqual(t.user, b'user')
self.assertEqual(t.description, b'description') self.assertEqual(t.description, b'description')
self.assertEqual(t.extension, dict(foo='FOO')) self.assertEqual(t.extension, dict(foo='FOO'))
self.assertEqual(t._extension, t.extension) with warnings.catch_warnings():
warnings.simplefilter("ignore")
self.assertEqual(t._extension, t.extension)
def test_constructor_default_args(self): def test_constructor_default_args(self):
t = TransactionMetaData() t = TransactionMetaData()
self.assertEqual(t.user, b'') self.assertEqual(t.user, b'')
self.assertEqual(t.description, b'') self.assertEqual(t.description, b'')
self.assertEqual(t.extension, {}) self.assertEqual(t.extension, {})
self.assertEqual(t._extension, t.extension) with warnings.catch_warnings():
warnings.simplefilter("ignore")
self.assertEqual(t._extension, t.extension)
def test_set_extension(self): def test_set_extension(self):
t = TransactionMetaData(u'', u'', b'') t = TransactionMetaData(u'', u'', b'')
self.assertEqual(t.user, b'') self.assertEqual(t.user, b'')
self.assertEqual(t.description, b'') self.assertEqual(t.description, b'')
self.assertEqual(t.extension, {}) self.assertEqual(t.extension, {})
self.assertEqual(t._extension, t.extension) with warnings.catch_warnings():
warnings.simplefilter("ignore")
for name in 'extension', '_extension':
data = {name: name + 'foo'}
setattr(t, name, data)
self.assertEqual(t.extension, data)
self.assertEqual(t._extension, t.extension)
data = {}
setattr(t, name, data)
self.assertEqual(t.extension, data)
self.assertEqual(t._extension, t.extension) self.assertEqual(t._extension, t.extension)
for name in 'extension', '_extension':
data = {name: name + 'foo'}
setattr(t, name, data)
self.assertEqual(t.extension, data)
self.assertEqual(t._extension, t.extension)
data = {}
setattr(t, name, data)
self.assertEqual(t.extension, data)
self.assertEqual(t._extension, t.extension)
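The tests above wrap access to the deprecated `_extension` attribute in `warnings.catch_warnings()` so the `DeprecationWarning` doesn't escape. The pattern in isolation (the `legacy_attr` function here is a made-up stand-in, not ZODB code):

```python
import warnings


def legacy_attr():
    # stand-in for a deprecated accessor such as TransactionMetaData._extension
    warnings.warn("use .extension instead", DeprecationWarning)
    return {}


with warnings.catch_warnings(record=True) as caught:
    warnings.simplefilter("ignore")
    value = legacy_attr()

assert value == {}
assert caught == []  # the "ignore" filter swallowed the DeprecationWarning
```

`catch_warnings` saves and restores the global filter state, so the `simplefilter("ignore")` only applies inside the `with` block.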
def test_used_by_connection(self): def test_used_by_connection(self):
import ZODB import ZODB
from ZODB.MappingStorage import MappingStorage from ZODB.MappingStorage import MappingStorage
...@@ -109,4 +115,3 @@ def test_suite(): ...@@ -109,4 +115,3 @@ def test_suite():
if __name__ == '__main__': if __name__ == '__main__':
unittest.main(defaultTest='test_suite') unittest.main(defaultTest='test_suite')
...@@ -52,7 +52,7 @@ class RegularObject(Persistent): ...@@ -52,7 +52,7 @@ class RegularObject(Persistent):
class PersistentObject(Persistent): class PersistentObject(Persistent):
pass pass
class CacheTests: class CacheTests(object):
def test_cache(self): def test_cache(self):
r"""Test basic cache methods. r"""Test basic cache methods.
......
...@@ -80,7 +80,7 @@ checker = renormalizing.RENormalizing([ ...@@ -80,7 +80,7 @@ checker = renormalizing.RENormalizing([
# Python 3 produces larger pickles, even when we use zodbpickle :( # Python 3 produces larger pickles, even when we use zodbpickle :(
# this changes all the offsets and sizes # this changes all the offsets and sizes
(re.compile(r'\bsize=[0-9]+\b'), 'size=<SIZE>'), (re.compile(r'\bsize=[0-9]+\b'), 'size=<SIZE>'),
(re.compile(r'\offset=[0-9]+\b'), 'offset=<OFFSET>'), (re.compile(r'\boffset=[0-9]+\b'), 'offset=<OFFSET>'),
]) ])
......
##############################################################################
#
# Copyright (c) 2017 Zope Foundation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
import unittest
from ZODB import mvccadapter
class TestBase(unittest.TestCase):
def test_getattr_does_not_hide_exceptions(self):
class TheException(Exception):
pass
class RaisesOnAccess(object):
@property
def thing(self):
raise TheException()
base = mvccadapter.Base(RaisesOnAccess())
base._copy_methods = ('thing',)
with self.assertRaises(TheException):
getattr(base, 'thing')
def test_getattr_raises_if_missing(self):
base = mvccadapter.Base(self)
base._copy_methods = ('thing',)
with self.assertRaises(AttributeError):
getattr(base, 'thing')
class TestHistoricalStorageAdapter(unittest.TestCase):
def test_forwards_release(self):
class Base(object):
released = False
def release(self):
self.released = True
base = Base()
adapter = mvccadapter.HistoricalStorageAdapter(base, None)
adapter.release()
self.assertTrue(base.released)
...@@ -478,55 +478,31 @@ def packing_with_uncommitted_data_undoing(): ...@@ -478,55 +478,31 @@ def packing_with_uncommitted_data_undoing():
>>> database.close() >>> database.close()
""" """
def test_blob_file_permissions():
def secure_blob_directory():
""" """
This is a test for secure creation and verification of secure settings of >>> blob_storage = create_storage()
blob directories. >>> conn = ZODB.connection(blob_storage)
>>> conn.root.x = ZODB.blob.Blob(b'test')
>>> blob_storage = create_storage(blob_dir='blobs') >>> conn.transaction_manager.commit()
Two directories are created:
>>> os.path.isdir('blobs')
True
>>> tmp_dir = os.path.join('blobs', 'tmp')
>>> os.path.isdir(tmp_dir)
True
They are only accessible by the owner:
>>> oct(os.stat('blobs').st_mode)[-5:]
'40700'
>>> oct(os.stat(tmp_dir).st_mode)[-5:]
'40700'
These settings are recognized as secure: Blobs have the readability of their parent directories:
>>> blob_storage.fshelper.isSecure('blobs') >>> import stat
True >>> READABLE = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
>>> blob_storage.fshelper.isSecure(tmp_dir) >>> path = conn.root.x.committed()
>>> ((os.stat(path).st_mode & READABLE) ==
... (os.stat(os.path.dirname(path)).st_mode & READABLE))
True True
After making the permissions of tmp_dir more liberal, the directory is The committed file isn't writable:
recognized as insecure:
>>> os.chmod(tmp_dir, 0o40711) >>> WRITABLE = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH
>>> blob_storage.fshelper.isSecure(tmp_dir) >>> os.stat(path).st_mode & WRITABLE
False 0
Clean up:
>>> blob_storage.close()
>>> conn.close()
""" """
# On windows, we can't create secure blob directories, at least not
# with APIs in the standard library, so there's no point in testing
# this.
if sys.platform == 'win32':
del secure_blob_directory
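The replacement doctest compares permission bits with stat masks rather than octal string slicing. The same mask arithmetic on a throwaway file, assuming a POSIX system:

```python
import os
import stat
import tempfile

READABLE = stat.S_IRUSR | stat.S_IRGRP | stat.S_IROTH
WRITABLE = stat.S_IWUSR | stat.S_IWGRP | stat.S_IWOTH

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o444)  # read-only for everyone, like a committed blob file

mode = os.stat(path).st_mode
assert mode & READABLE == READABLE  # all read bits set
assert mode & WRITABLE == 0        # no write bits set

os.remove(path)
```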
def loadblob_tmpstore(): def loadblob_tmpstore():
""" """
This is a test for assuring that the TmpStore's loadBlob implementation This is a test for assuring that the TmpStore's loadBlob implementation
......
...@@ -222,7 +222,7 @@ And load the pickle: ...@@ -222,7 +222,7 @@ And load the pickle:
Oooooof course, this won't work if the subobjects aren't persistent: Oooooof course, this won't work if the subobjects aren't persistent:
>>> class NP: >>> class NP(object):
... pass ... pass
......
...@@ -33,7 +33,7 @@ def tearDown(test): ...@@ -33,7 +33,7 @@ def tearDown(test):
def test_suite(): def test_suite():
base, src = os.path.split(os.path.dirname(os.path.dirname(ZODB.__file__))) base, src = os.path.split(os.path.dirname(os.path.dirname(ZODB.__file__)))
assert src == 'src' assert src == 'src', src
base = join(base, 'doc') base = join(base, 'doc')
guide = join(base, 'guide') guide = join(base, 'guide')
reference = join(base, 'reference') reference = join(base, 'reference')
...@@ -54,4 +54,3 @@ def test_suite(): ...@@ -54,4 +54,3 @@ def test_suite():
if __name__ == '__main__': if __name__ == '__main__':
unittest.main(defaultTest='test_suite') unittest.main(defaultTest='test_suite')
...@@ -67,7 +67,7 @@ def test_new_ghost_w_persistent_class(): ...@@ -67,7 +67,7 @@ def test_new_ghost_w_persistent_class():
""" """
# XXX need to update files to get newer testing package # XXX need to update files to get newer testing package
class FakeModule: class FakeModule(object):
def __init__(self, name, dict): def __init__(self, name, dict):
self.__dict__ = dict self.__dict__ = dict
self.__name__ = name self.__name__ = name
......
...@@ -37,6 +37,11 @@ checker = renormalizing.RENormalizing([ ...@@ -37,6 +37,11 @@ checker = renormalizing.RENormalizing([
r"\1"), r"\1"),
(re.compile('b(".*?")'), (re.compile('b(".*?")'),
r"\1"), r"\1"),
# Persistent 4.4 changes the repr of persistent subclasses,
# and it is slightly different with the C extension and
# pure-Python module
(re.compile('ZODB.tests.testcrossdatabasereferences.'),
''),
# Python 3 adds module name to exceptions. # Python 3 adds module name to exceptions.
(re.compile("ZODB.interfaces.BlobError"), (re.compile("ZODB.interfaces.BlobError"),
r"BlobError"), r"BlobError"),
...@@ -99,7 +104,7 @@ class P(persistent.Persistent): ...@@ -99,7 +104,7 @@ class P(persistent.Persistent):
def __repr__(self): def __repr__(self):
return 'P(%s)' % self.name return 'P(%s)' % self.name
class MininalTestLayer: class MininalTestLayer(object):
__bases__ = () __bases__ = ()
__module__ = '' __module__ = ''
......
...@@ -13,7 +13,7 @@ ...@@ -13,7 +13,7 @@
############################################################################## ##############################################################################
import warnings import warnings
class WarningsHook: class WarningsHook(object):
"""Hook to capture warnings generated by Python. """Hook to capture warnings generated by Python.
The function warnings.showwarning() is designed to be hooked by The function warnings.showwarning() is designed to be hooked by
......
...@@ -18,10 +18,10 @@ import sys ...@@ -18,10 +18,10 @@ import sys
import time import time
import threading import threading
from binascii import hexlify, unhexlify from binascii import hexlify, unhexlify
from struct import pack, unpack
from tempfile import mkstemp from tempfile import mkstemp
from persistent.TimeStamp import TimeStamp from persistent.timestamp import TimeStamp
from ZODB._compat import Unpickler from ZODB._compat import Unpickler
from ZODB._compat import BytesIO from ZODB._compat import BytesIO
...@@ -84,18 +84,29 @@ assert sys.hexversion >= 0x02030000 ...@@ -84,18 +84,29 @@ assert sys.hexversion >= 0x02030000
# The distinction between ints and longs is blurred in Python 2.2, # The distinction between ints and longs is blurred in Python 2.2,
# so u64() are U64() really the same. # so u64() are U64() really the same.
_OID_STRUCT = struct.Struct('>Q')
_OID_PACK = _OID_STRUCT.pack
_OID_UNPACK = _OID_STRUCT.unpack
def p64(v): def p64(v):
"""Pack an integer or long into a 8-byte string""" """Pack an integer or long into a 8-byte string."""
return pack(">Q", v) try:
return _OID_PACK(v)
except struct.error as e:
raise ValueError(*(e.args + (v,)))
def u64(v): def u64(v):
"""Unpack an 8-byte string into a 64-bit long integer.""" """Unpack an 8-byte string into a 64-bit long integer."""
return unpack(">Q", v)[0] try:
return _OID_UNPACK(v)[0]
except struct.error as e:
raise ValueError(*(e.args + (v,)))
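The reworked helpers re-raise `struct.error` as `ValueError` with the offending value appended to the args. A quick check of that contract (re-implementing the same pattern standalone):

```python
import struct

_OID = struct.Struct('>Q')


def p64(v):
    """Pack an integer into an 8-byte big-endian string; ValueError on overflow."""
    try:
        return _OID.pack(v)
    except struct.error as e:
        raise ValueError(*(e.args + (v,)))


def u64(v):
    """Unpack an 8-byte string into an integer; ValueError on bad length."""
    try:
        return _OID.unpack(v)[0]
    except struct.error as e:
        raise ValueError(*(e.args + (v,)))


assert p64(1) == b'\x00\x00\x00\x00\x00\x00\x00\x01'
assert u64(p64(2 ** 63)) == 2 ** 63

try:
    p64(2 ** 65)  # doesn't fit in an unsigned 64-bit slot
except ValueError as e:
    assert e.args[-1] == 2 ** 65  # the bad value rides along in args
else:
    raise AssertionError("expected ValueError")
```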
U64 = u64 U64 = u64
def cp(f1, f2, length=None): def cp(f1, f2, length=None, bufsize=64 * 1024):
"""Copy all data from one file to another. """Copy all data from one file to another.
It copies the data from the current position of the input file (f1) It copies the data from the current position of the input file (f1)
...@@ -106,7 +117,7 @@ def cp(f1, f2, length=None): ...@@ -106,7 +117,7 @@ def cp(f1, f2, length=None):
""" """
read = f1.read read = f1.read
write = f2.write write = f2.write
n = 8192 n = bufsize
if length is None: if length is None:
old_pos = f1.tell() old_pos = f1.tell()
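The signature change makes the copy buffer size tunable instead of hard-coded at 8192. A simplified sketch of the chunked-copy loop (not the full `ZODB.utils.cp`, which also restores the file position when `length` is None):

```python
from io import BytesIO


def cp(f1, f2, length=None, bufsize=64 * 1024):
    """Copy `length` bytes (or until EOF) from f1 to f2 in bufsize chunks."""
    while length is None or length > 0:
        n = bufsize if length is None else min(bufsize, length)
        data = f1.read(n)
        if not data:
            break
        f2.write(data)
        if length is not None:
            length -= len(data)


src = BytesIO(b'x' * 200000)  # larger than one buffer
dst = BytesIO()
cp(src, dst)
assert dst.getvalue() == b'x' * 200000

dst2 = BytesIO()
cp(BytesIO(b'abcdef'), dst2, length=4)
assert dst2.getvalue() == b'abcd'
```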
...@@ -293,7 +304,7 @@ class locked(object): ...@@ -293,7 +304,7 @@ class locked(object):
if os.environ.get('DEBUG_LOCKING'): # pragma: no cover if os.environ.get('DEBUG_LOCKING'): # pragma: no cover
# NOTE: This only works on Python 3. # NOTE: This only works on Python 3.
class Lock: class Lock(object):
lock_class = threading.Lock lock_class = threading.Lock
......
...@@ -3,7 +3,7 @@ ...@@ -3,7 +3,7 @@
See http://stackoverflow.com/questions/9153473/sphinx-values-for-attributes-reported-as-none/39276413 See http://stackoverflow.com/questions/9153473/sphinx-values-for-attributes-reported-as-none/39276413
""" """
class ValueDoc: class ValueDoc(object):
def __init__(self, text): def __init__(self, text):
self.text = text self.text = text
......
...@@ -2,34 +2,29 @@ ...@@ -2,34 +2,29 @@
# Jython 2.7rc2 does work, but unfortunately has an issue running # Jython 2.7rc2 does work, but unfortunately has an issue running
# with Tox 1.9.2 (http://bugs.jython.org/issue2325) # with Tox 1.9.2 (http://bugs.jython.org/issue2325)
#envlist = py26,py27,py33,py34,pypy,simple,jython,pypy3 #envlist = py26,py27,py33,py34,pypy,simple,jython,pypy3
envlist = py27,py33,py34,py35,pypy,simple,pypy3 envlist = py27,py34,py35,py36,py37,pypy,pypy3
[testenv] [testenv]
# ZODB.tests.testdocumentation needs to find
# itself in the source tree to locate the doc/
# directory. 'usedevelop' is more like what
# buildout.cfg does, and is simpler than having
# testdocumentation.py also understand how to climb
# out of the tox site-packages.
usedevelop = true
commands = commands =
# Run unit tests first. # Run unit tests first.
zope-testrunner -u --test-path=src --auto-color --auto-progress zope-testrunner -u --test-path=src []
# Only run functional tests if unit tests pass. # Only run functional tests if unit tests pass.
zope-testrunner -f --test-path=src --auto-color --auto-progress zope-testrunner -f -j5 --test-path=src []
# without explicit deps, setup.py test will download a bunch of eggs into $PWD
deps = deps =
manuel .[test]
zope.testing
zope.testrunner >= 4.4.6
[testenv:simple]
# Test that 'setup.py test' works
basepython =
python2.7
commands =
python setup.py test -q
deps = {[testenv]deps}
[testenv:coverage] [testenv:coverage]
basepython = python2.7 basepython = python2.7
usedevelop = true
commands = commands =
coverage run --source=ZODB -m zope.testrunner --test-path=src --auto-color --auto-progress coverage run --source=ZODB -m zope.testrunner --test-path=src []
coverage report coverage report
deps = deps =
coverage
{[testenv]deps} {[testenv]deps}
coverage