Kirill Smelkov / ZODB · Commits

Commit 694ac459
authored Dec 17, 2007 by Jim Fulton

tagged

parents f0cbd414 4f9bbb17
Showing 47 changed files with 4897 additions and 1654 deletions (+4897 -1654)
HISTORY.txt                                          +0    -106
NEWS.txt                                             +100  -34
buildout.cfg                                         +1    -1
doc/HOWTO-Blobs-NFS.txt                              +2    -2
setup.py                                             +39   -55
src/ZEO/tests/filecache.txt                          +0    -332
src/ZEO/tests/testConnection.py                      +1    -1
src/ZEO/tests/test_cache.py                          +2    -136
src/ZEO/tests/zeo-fan-out.test                       +2    -2
src/ZEO/zrpc/connection.py                           +4    -1
src/ZODB/Connection.py                               +92   -76
src/ZODB/DB.py                                       +251  -320
src/ZODB/ExportImport.py                             +15   -13
src/ZODB/POSException.py                             +14   -15
src/ZODB/blob.py                                     +1    -0
src/ZODB/component.xml                               +5    -11
src/ZODB/config.py                                   +3    -3
src/ZODB/historical_connections.txt                  +0    -406
src/ZODB/interfaces.py                               +36   -18
src/ZODB/scripts/fstail.py                           +2    -2
src/ZODB/scripts/fstail.txt                          +0    -40
src/ZODB/scripts/tests.py                            +4    -10
src/ZODB/serialize.py                                +1    -1
src/ZODB/tests/VersionStorage.py                     +147  -0
src/ZODB/tests/blob_basic.txt                        +0    -1
src/ZODB/tests/blob_connection.txt                   +11   -39
src/ZODB/tests/dbopen.txt                            +7    -7
src/ZODB/tests/testConnectionSavepoint.py            +0    -15
src/ZODB/tests/testDB.py                             +78   -7
src/ZODB/tests/testZODB.py                           +14   -0
src/ZODB/tests/test_misc.py                          +37   -0
src/transaction/DEPENDENCIES.cfg                     +2    -0
src/transaction/README.txt                           +13   -0
src/transaction/__init__.py                          +29   -0
src/transaction/_manager.py                          +155  -0
src/transaction/_transaction.py                      +674  -0
src/transaction/interfaces.py                        +476  -0
src/transaction/savepoint.txt                        +289  -0
src/transaction/tests/__init__.py                    +1    -0
src/transaction/tests/abstestIDataManager.py         +57   -0
src/transaction/tests/doom.txt                       +138  -0
src/transaction/tests/savepointsample.py             +184  -0
src/transaction/tests/test_SampleDataManager.py      +412  -0
src/transaction/tests/test_SampleResourceManager.py  +435  -0
src/transaction/tests/test_register_compat.py        +154  -0
src/transaction/tests/test_savepoint.py              +69   -0
src/transaction/tests/test_transaction.py            +940  -0
HISTORY.txt (view file @ 694ac459)
What's new in ZODB 3.8.0
========================
General
-------
- The ZODB Storage APIs have been documented and cleaned up.
- ZODB versions are now officially deprecated and support for them
will be removed in ZODB 3.9. (They have been widely recognized as
deprecated for quite a while.)
- Changed the automatic garbage collection when opening a connection to only
apply the garbage collections on those connections in the pool that are
closed. (This fixed issue 113932.)
ZEO
---
- (3.8a1) ZEO's strategies for avoiding client cache verification were
improved in the case that servers are restarted. Before, if
transactions were committed after the restart, clients that were up
to date or nearly up to date at the time of the restart and then
connected had to verify their caches. Now, it is far more likely
that a client that reconnects soon after a server restart won't have
to verify its cache.
- (3.8a1) Fixed a serious bug that could cause clients that disconnect from and
reconnect to a server to get bad invalidation data if the server
serves multiple storages with active writes.
- (3.8a1) It is now theoretically possible to use a ClientStorage in a storage
server. This might make it possible to offload read load from a
storage server at the cost of increasing write latency. This should
increase write throughput by offloading reads from the final storage
server. This feature is somewhat experimental. It has tests, but
hasn't been used in production.
Transactions
------------
- (3.8a1) Add a doom() and isDoomed() interface to the transaction module.
First step towards the resolution of
http://www.zope.org/Collectors/Zope3-dev/655
A doomed transaction behaves exactly the same way as an active transaction
but raises an error on any attempt to commit it, thus forcing an abort.
Doom is useful in places where abort is unsafe and an exception cannot be
raised. This occurs when the programmer wants the code following the doom to
run but not commit. It is unsafe to abort in these circumstances as a
following get() may implicitly open a new transaction.
Any attempt to commit a doomed transaction will raise a DoomedTransaction
exception.
- (3.8a1) Clean up the ZODB imports in transaction.
Clean up weird import dance with ZODB. This is unnecessary since the
transaction module stopped being imported in ZODB/__init__.py in rev 39622.
- (3.8a1) Support for subtransactions has been removed in favor of
save points.
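The doom semantics described above can be sketched with a minimal toy stand-in. This is an illustration only, not the real `transaction` module's implementation:

```python
class DoomedTransaction(Exception):
    """Raised when committing a doomed transaction."""


class Transaction:
    """Toy transaction showing doom()/isDoomed() behaviour."""

    def __init__(self):
        self._doomed = False

    def doom(self):
        # Unlike abort(), dooming lets the code in the current
        # transaction keep running; it only guarantees that the
        # eventual commit will fail.
        self._doomed = True

    def isDoomed(self):
        return self._doomed

    def commit(self):
        if self._doomed:
            raise DoomedTransaction()
        # Real commit work would happen here.


txn = Transaction()
txn.doom()            # safe: no implicit new transaction is opened
assert txn.isDoomed()
try:
    txn.commit()
    committed = True
except DoomedTransaction:
    committed = False  # the commit is forced to fail
```

The key design point from the entry above: doom is a deferred abort, so subsequent reads in the same transaction still work, which a hard abort could not guarantee.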
Blobs
-----
- (3.8b1) Updated the Blob implementation in a number of ways. Some
of these are backward incompatible with 3.8a1:
o The Blob class now lives in ZODB.blob
o The blob openDetached method has been replaced by the committed method.
- (3.8a1) Added new blob feature. See the ZODB/Blobs directory for
documentation.
ZODB now handles (reasonably) large binary objects efficiently, from a
few kilobytes up to at least several hundred megabytes.
BTrees
------
- (3.8a1) Added support for 64-bit integer BTrees as separate types.
(For now, we're retaining compile-time support for making the regular
integer BTrees 64-bit.)
- (3.8a1) Normalize names in modules so that BTrees, Buckets, Sets, and
TreeSets can all be accessed with those names in the modules (e.g.,
BTrees.IOBTree.BTree). This is in addition to the older names (e.g.,
BTrees.IOBTree.IOBTree). This allows easier drop-in replacement, which
can especially simplify code for packages that want to support both
32-bit and 64-bit BTrees.
- (3.8a1) Describe the interfaces for each module and actually declare
the interfaces for each.
- (3.8a1) Fix module references so klass.__module__ points to the Python
wrapper module, not the C extension.
- (3.8a1) Introduce module families, to group all 32-bit and all 64-bit
modules.
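The name normalization in the BTrees entry above can be checked like this; the snippet degrades to a no-op when the BTrees package is not installed:

```python
# Hedged check: the generic name (BTree) should be an alias of the older
# module-specific name (IOBTree) inside the same module, as the entry
# above describes.
try:
    from BTrees import IOBTree
    aliased = IOBTree.BTree is IOBTree.IOBTree
except ImportError:
    aliased = True  # BTrees not available here; nothing to verify
```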
What's new in ZODB3 3.7.0
==========================
Release date: 2007-04-20
...
...
NEWS.txt (view file @ 694ac459)

-What's new in ZODB 3.9.0
+What's new in ZODB 3.8.0
========================

General
-------

- (3.9.0a1) Document conflict resolution (see ZODB/ConflictResolution.txt).
- The ZODB Storage APIs have been documented and cleaned up.
- (3.9.0a1) Bugfix the situation in which comparing persistent objects (for
  instance, as members in BTree set or keys of BTree) might cause data
  inconsistency during conflict resolution.
- ZODB versions are now officially deprecated and support for them
  will be removed in ZODB 3.9. (They have been widely recognized as
  deprecated for quite a while.)
- (3.9.0a1) Support multidatabase references in conflict resolution.
- (3.9.0a1) Make it possible to examine oid and (in some situations) database
  name of persistent object references during conflict resolution.
- Changed the automatic garbage collection when opening a connection to only
  apply the garbage collections on those connections in the pool that are
  closed. (This fixed issue 113932.)
- (3.9.0a1) Moved 'transaction' module out of ZODB.
  ZODB depends upon this module, but it must be installed separately.
- (3.8.0b3) Document conflict resolution (see ZODB/ConflictResolution.txt).
- (3.9.0a1) ZODB installation now requires setuptools.
- (3.9.0a1) Added `offset` information to output of `fstail`
  script. Added test harness for this script.
- (3.8.0b3) Bugfix the situation in which comparing persistent objects (for
  instance, as members in BTree set or keys of BTree) might cause data
  inconsistency during conflict resolution.
- (3.9.0a1) Fixed bug 153316: persistent and BTrees were using `int`
  for memory sizes which caused errors on x86_64 Intel Xeon machines
  (using 64-bit Linux).
- (3.8.0b3) Support multidatabase references in conflict resolution.
- (3.9.0a1) Removed version support from connections and DB. Versions
  are still in the storages; this is an incremental step.
- (3.8.0b3) Make it possible to examine oid and (in some situations) database
  name of persistent object references during conflict resolution.
- (3.9.0a1) Added support for read-only, historical connections based
  on datetimes or serials (TIDs). See
  src/ZODB/historical_connections.txt.
- (3.8.0b3) Added missing data attribute for conflict errors.
- (3.9.0a1) Fixed small bug that the Connection.isReadOnly method didn't
  work after a savepoint.
- (3.8.0b5) Fixed bug 153316: persistent and BTrees gave errors on x86_64
  Intel XEON platforms.
ZEO
---

- (3.9.0a1) Bug #98275: Made ZEO cache more tolerant when invalidating current
  versions of objects.
- (3.8.0b6) Bug #98275: Made ZEO cache more tolerant when invalidating current
  versions of objects.
- (3.9.0a1) Fixed a serious bug that could cause client I/O to stop
  (hang). This was accompanied by a critical log message along the
  lines of: "RuntimeError: dictionary changed size during iteration".
- (3.8.0b4, 3.8.0b5) Fixed a serious bug that could cause client I/O to stop
  (hang). This was accompanied by a critical log message along the
  lines of: "RuntimeError: dictionary changed size during iteration".
  (In b4, the bug was only partially fixed.)
- (3.8a1) ZEO's strategies for avoiding client cache verification were
  improved in the case that servers are restarted. Before, if
  transactions were committed after the restart, clients that were up
  to date or nearly up to date at the time of the restart and then
  connected had to verify their caches. Now, it is far more likely
  that a client that reconnects soon after a server restart won't have
  to verify its cache.
- (3.8a1) Fixed a serious bug that could cause clients that disconnect from and
  reconnect to a server to get bad invalidation data if the server
  serves multiple storages with active writes.
- (3.8a1) It is now theoretically possible to use a ClientStorage in a storage
  server. This might make it possible to offload read load from a
  storage server at the cost of increasing write latency. This should
  increase write throughput by offloading reads from the final storage
  server. This feature is somewhat experimental. It has tests, but
  hasn't been used in production.
Transactions
------------
- (3.9.0a1) 'transaction' module is not included in ZODB anymore. It
is now just a ZODB dependency (via setuptools declarations).
- (3.8a1) Add a doom() and isDoomed() interface to the transaction module.
First step towards the resolution of
http://www.zope.org/Collectors/Zope3-dev/655
A doomed transaction behaves exactly the same way as an active transaction
but raises an error on any attempt to commit it, thus forcing an abort.
Doom is useful in places where abort is unsafe and an exception cannot be
raised. This occurs when the programmer wants the code following the doom to
run but not commit. It is unsafe to abort in these circumstances as a
following get() may implicitly open a new transaction.
Any attempt to commit a doomed transaction will raise a DoomedTransaction
exception.
- (3.8a1) Clean up the ZODB imports in transaction.
Clean up weird import dance with ZODB. This is unnecessary since the
transaction module stopped being imported in ZODB/__init__.py in rev 39622.
- (3.8a1) Support for subtransactions has been removed in favor of
save points.
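The savepoint pattern that replaced subtransactions can be sketched with a toy data manager. Names here are illustrative, not the real `transaction` savepoint API:

```python
class ToySavepoint:
    """Captures a snapshot of a data manager's state."""

    def __init__(self, manager):
        self._manager = manager
        self._snapshot = dict(manager.data)  # copy taken at savepoint time

    def rollback(self):
        # Restore exactly the state captured when the savepoint was made;
        # the enclosing transaction itself keeps running.
        self._manager.data = dict(self._snapshot)


class ToyDataManager:
    """Minimal stand-in for a transactional resource."""

    def __init__(self):
        self.data = {}

    def savepoint(self):
        return ToySavepoint(self)


dm = ToyDataManager()
dm.data['x'] = 1
sp = dm.savepoint()   # mark a point we can roll back to
dm.data['x'] = 99     # later changes...
sp.rollback()         # ...are undone back to the savepoint
```

Unlike a subtransaction, rolling back a savepoint does not end or commit anything; it only rewinds state inside the one still-open transaction.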
Blobs
-----

- (3.9.0a1) Fixed bug #127182: Blobs were subclassable which was not desired.
- (3.8b5) Fixed bug #130459: Packing was broken by uncommitted blob data.
- (3.8b4) Fixed bug #127182: Blobs were subclassable which was not desired.
- (3.9.0a1) Fixed bug #126007: tpc_abort had untested code path that was
  broken.
- (3.8b3) Fixed bug #126007: tpc_abort had untested code path that was
  broken.
- (3.9.0a1) Fixed bug #129921: getSize() function in BlobStorage could not
  deal with garbage files.
- (3.8b3) Fixed bug #129921: getSize() function in BlobStorage could not
  deal with garbage files.
- (3.9.0a1) Fixed bug in which MVCC would not work for blobs.
- (3.8b1) Updated the Blob implementation in a number of ways. Some
  of these are backward incompatible with 3.8a1:
  o The Blob class now lives in ZODB.blob
  o The blob openDetached method has been replaced by the committed method.
- (3.8a1) Added new blob feature. See the ZODB/Blobs directory for
  documentation.
  ZODB now handles (reasonably) large binary objects efficiently, from a
  few kilobytes up to at least several hundred megabytes.

BTrees
------

- (3.8a1) Added support for 64-bit integer BTrees as separate types.
  (For now, we're retaining compile-time support for making the regular
  integer BTrees 64-bit.)
- (3.8a1) Normalize names in modules so that BTrees, Buckets, Sets, and
  TreeSets can all be accessed with those names in the modules (e.g.,
  BTrees.IOBTree.BTree). This is in addition to the older names (e.g.,
  BTrees.IOBTree.IOBTree). This allows easier drop-in replacement, which
  can especially simplify code for packages that want to support both
  32-bit and 64-bit BTrees.
- (3.8a1) Describe the interfaces for each module and actually declare
  the interfaces for each.
- (3.8a1) Fix module references so klass.__module__ points to the Python
  wrapper module, not the C extension.
- (3.8a1) Introduce module families, to group all 32-bit and all 64-bit
  modules.
buildout.cfg (view file @ 694ac459)

 [buildout]
-develop = . transaction
+develop = .
 parts = test scripts
 find-links = http://download.zope.org/distribution/
...
doc/HOWTO-Blobs-NFS.txt (view file @ 694ac459)

...
@@ -72,11 +72,11 @@ configuration should look like this::

   <filestorage>
     path $INSTANCE/var/Data.fs
   </filestorage>
-  blob-dir $SERVER/blobs
+  bob-dir $SERVER/blobs
 </blobstorage>

 (Remember to manually replace $SERVER and $CLIENT with the exported directory
-as accessible by either the ZEO server or the ZEO client.)
+as accessible bei either the ZEO server or the ZEO client.)

 Conclusion
 ----------
...
setup.py (view file @ 694ac459)

...
@@ -20,7 +20,7 @@ to application logic. ZODB includes features such as a plugable storage
 interface, rich transaction support, and undo.
 """

-VERSION = "3.8.0c1"
+VERSION = "3.8.0b6"

 # The (non-obvious!) choices for the Trove Development Status line:
 # Development Status :: 5 - Production/Stable
...
@@ -28,7 +28,6 @@ VERSION = "3.8.0c1"
 # Development Status :: 3 - Alpha

 classifiers = """\
 Development Status :: 4 - Beta
 Intended Audience :: Developers
 License :: OSI Approved :: Zope Public License
 Programming Language :: Python
...
@@ -38,9 +37,26 @@ Operating System :: Microsoft :: Windows
 Operating System :: Unix
 """

-from setuptools import setup
-
-entry_points = """
+try:
+    from setuptools import setup
+except ImportError:
+    from distutils.core import setup
+    extra = dict(
+        scripts = ["src/ZODB/scripts/fsdump.py",
+                   "src/ZODB/scripts/fsoids.py",
+                   "src/ZODB/scripts/fsrefs.py",
+                   "src/ZODB/scripts/fstail.py",
+                   "src/ZODB/scripts/fstest.py",
+                   "src/ZODB/scripts/repozo.py",
+                   "src/ZEO/scripts/zeopack.py",
+                   "src/ZEO/scripts/runzeo.py",
+                   "src/ZEO/scripts/zeopasswd.py",
+                   "src/ZEO/scripts/mkzeoinst.py",
+                   "src/ZEO/scripts/zeoctl.py",
+                   ],
+        )
+else:
+    entry_points = """
     [console_scripts]
     fsdump = ZODB.FileStorage.fsdump:main
     fsoids = ZODB.scripts.fsoids:main
...
@@ -53,8 +69,19 @@ entry_points = """
     mkzeoinst = ZEO.mkzeoinst:main
     zeoctl = ZEO.zeoctl:main
     """
-scripts = []
+    extra = dict(
+        install_requires = ['zope.interface',
+                            'zope.proxy',
+                            'zope.testing',
+                            'ZConfig',
+                            'zdaemon',
+                            ],
+        zip_safe = False,
+        entry_points = entry_points,
+        include_package_data = True,
+        )
+
+scripts = []

 import glob
 import os
...
@@ -149,6 +176,7 @@ packages = ["BTrees", "BTrees.tests",
             "ZODB", "ZODB.FileStorage", "ZODB.tests", "ZODB.scripts",
             "persistent", "persistent.tests",
+            "transaction", "transaction.tests",
             "ThreadedAsync",
             "ZopeUndo", "ZopeUndo.tests",
             ]
...
@@ -159,6 +187,8 @@ def copy_other_files(cmd, outputbase):
     extensions = ["*.conf", "*.xml", "*.txt", "*.sh"]
     directories = ["BTrees",
+                   "transaction",
+                   "transaction/tests",
                    "persistent/tests",
                    "ZEO",
                    "ZEO/scripts",
...
@@ -210,24 +240,9 @@ class MyDistribution(Distribution):
         self.cmdclass['build_py'] = MyPyBuilder
         self.cmdclass['install_lib'] = MyLibInstaller

-def alltests():
-    # use the zope.testing testrunner machinery to find all the
-    # test suites we've put under ourselves
-    from zope.testing.testrunner import get_options
-    from zope.testing.testrunner import find_suites
-    from zope.testing.testrunner import configure_logging
-    configure_logging()
-    from unittest import TestSuite
-    here = os.path.abspath(os.path.dirname(sys.argv[0]))
-    args = sys.argv[:]
-    src = os.path.join(here, 'src')
-    defaults = ['--test-path', src]
-    options = get_options(args, defaults)
-    suites = list(find_suites(options))
-    return TestSuite(suites)

 doclines = __doc__.split("\n")

 setup(name="ZODB3",
       version = VERSION,
       maintainer = "Zope Corporation",
...
@@ -244,35 +259,4 @@ setup(name="ZODB3",
       classifiers = filter(None, classifiers.split("\n")),
       long_description = "\n".join(doclines[2:]),
       distclass = MyDistribution,
-      test_suite = "__main__.alltests",  # to support "setup.py test"
-      tests_require = ['zope.interface', 'zope.proxy', 'zope.testing',
-                       'transaction', 'zdaemon'],
-      install_requires = ['zope.interface', 'zope.proxy', 'zope.testing',
-                          'ZConfig', 'zdaemon', 'transaction'],
-      zip_safe = False,
-      entry_points = """
-      [console_scripts]
-      fsdump = ZODB.FileStorage.fsdump:main
-      fsoids = ZODB.scripts.fsoids:main
-      fsrefs = ZODB.scripts.fsrefs:main
-      fstail = ZODB.scripts.fstail:Main
-      repozo = ZODB.scripts.repozo:main
-      zeopack = ZEO.scripts.zeopack:main
-      runzeo = ZEO.runzeo:main
-      zeopasswd = ZEO.zeopasswd:main
-      mkzeoinst = ZEO.mkzeoinst:main
-      zeoctl = ZEO.zeoctl:main
-      """,
-      include_package_data = True,
-      )
+      **extra
+      )
src/ZEO/tests/filecache.txt deleted (100644 → 0; view file @ f0cbd414)
====================================
The client cache file implementation
====================================
This test exercises the FileCache implementation which is responsible for
maintaining the ZEO client cache on disk. Specifics of persistent cache files
are not tested.
As the FileCache calls back to the client cache we'll use a dummy to monitor
those calls:
>>> from ZEO.tests.test_cache import ClientCacheDummy, oid
>>> tid = oid
>>> cache_dummy = ClientCacheDummy()
We'll instantiate a FileCache with 200 bytes of space:
>>> from ZEO.cache import FileCache
>>> fc = FileCache(maxsize=200, fpath=None, parent=cache_dummy)
Initially the cache is empty:
>>> len(fc)
0
>>> list(fc)
[]
>>> fc.getStats()
(0, 0, 0, 0, 0)
Basic usage
===========
Objects are represented in the cache using a special `Object` object. Let's
start with an object of size 100 bytes:
>>> from ZEO.cache import Object
>>> obj1_1 = Object(key=(oid(1), tid(1)), version='', data='#'*100,
... start_tid=tid(1), end_tid=None)
Notice that the actual object size is a bit larger because of the headers that
are written for each object:
>>> obj1_1.size
122
Initially the object is not in the cache:
>>> (oid(1), tid(1)) in fc
False
We can add it to the cache:
>>> fc.add(obj1_1)
And now it's in the cache:
>>> (oid(1), tid(1)) in fc
True
>>> len(fc)
1
We can get it back and the object will be equal but not identical to the one we
stored:
>>> obj1_1_copy = fc.access((oid(1), tid(1)))
>>> obj1_1_copy.data == obj1_1.data
True
>>> obj1_1_copy.key == obj1_1.key
True
>>> obj1_1_copy is obj1_1
False
The cache allows us to iterate over all entries in it:
>>> list(fc) # doctest: +ELLIPSIS
[<ZEO.cache.Entry object at 0x...>]
When an object gets superseded we can update it. This only modifies the header,
not the actual data. This is useful when invalidations tell us about the
`end_tid` of an object:
>>> obj1_1.data = '.' * 100
>>> obj1_1.end_tid = tid(2)
>>> fc.update(obj1_1)
When loading it again we can see that the data was not changed:
>>> obj1_1_copy = fc.access((oid(1), tid(1)))
>>> obj1_1_copy.data # doctest: +ELLIPSIS
'#############...################'
>>> obj1_1_copy.end_tid
'\x00\x00\x00\x00\x00\x00\x00\x02'
Objects can be explicitly removed from the cache:
>>> fc.remove((oid(1), tid(1)))
>>> len(fc)
0
>>> (oid(1), tid(1)) in fc
False
Evicting objects
================
When the cached data consumes the whole cache file and more objects need to be
stored the oldest stored objects are evicted until enough space is available.
In the next sections we'll exercise some of the special cases of the file
format and look at the cache after each step.
The current state is a cache with two records: the one object which we removed
from the cache and another free record that reaches to the end of the file.
The first record has a size of 143 bytes:
143 = 1 ('f') + 4 (size) + 8 (OID) + 8 (TID) + 8 (end_tid) + 2 (version length) +
4 (data length) + 100 (old data) + 8 (OID)
The second record has a size of 45 bytes:
45 = 1 ('f') + 4 (size) + 40 (free space)
Note that the last byte is an 'x' because the initialisation of the cache file
forced the absolute size of the file by seeking to byte 200 and writing an 'x'.
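The record-size breakdowns above can be checked with a few lines of arithmetic (a sketch for the reader, not part of the original doctest):

```python
# First record: the removed 100-byte object, field by field (bytes).
first_record = (
    1      # status byte ('f')
    + 4    # record size
    + 8    # OID
    + 8    # TID
    + 8    # end_tid
    + 2    # version length
    + 4    # data length
    + 100  # old data
    + 8    # trailing OID
)

# Second record: the free block filling the rest of the 200-byte file.
second_record = 1 + 4 + 40  # status byte + record size + free space
```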
>>> from ZEO.tests.test_cache import hexprint
>>> hexprint(fc.f)
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 66 00 00 00 |ZEC3........f...|
00000010 8f 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 02 00 00 00 00 00 64 23 |..............d#|
00000030 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000040 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000050 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000060 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000070 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000080 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000090 23 23 23 00 00 00 00 00 00 00 01 66 00 00 00 2d |###........f...-|
000000a0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 |................|
000000c0 00 00 00 00 00 00 00 78 |.......x |
Case 1: Allocating a new block that fits after the last used one
>>> obj2_1 = Object(key=(oid(2), tid(1)), version='', data='**',
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj2_1)
The new block fits exactly in the remaining 45 bytes (43 bytes header + 2
bytes payload) so the beginning of the data is the same except for the last 45
bytes:
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 66 00 00 00 |ZEC3........f...|
00000010 8f 00 00 00 00 00 00 00 01 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 02 00 00 00 00 00 64 23 |..............d#|
00000030 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000040 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000050 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000060 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000070 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000080 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 23 |################|
00000090 23 23 23 00 00 00 00 00 00 00 01 61 00 00 00 2d |###........a...-|
000000a0 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 02 2a 2a |..............**|
000000c0 00 00 00 00 00 00 00 02 |........ |
Case 2: Allocating a block that wraps around and frees *exactly* one block
>>> obj3_1 = Object(key=(oid(3), tid(1)), version='', data='@'*100,
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj3_1)
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 61 00 00 00 |ZEC3........a...|
00000010 8f 00 00 00 00 00 00 00 03 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 00 00 00 00 00 00 64 40 |..............d@|
00000030 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000040 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000050 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000060 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000070 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000080 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000090 40 40 40 00 00 00 00 00 00 00 03 61 00 00 00 2d |@@@........a...-|
000000a0 00 00 00 00 00 00 00 02 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 02 2a 2a |..............**|
000000c0 00 00 00 00 00 00 00 02 |........ |
Case 3: Allocating a block that requires 1 byte less than the next block
>>> obj4_1 = Object(key=(oid(4), tid(1)), version='', data='~',
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj4_1)
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 61 00 00 00 |ZEC3........a...|
00000010 8f 00 00 00 00 00 00 00 03 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 00 00 00 00 00 00 64 40 |..............d@|
00000030 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000040 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000050 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000060 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000070 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000080 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 40 |@@@@@@@@@@@@@@@@|
00000090 40 40 40 00 00 00 00 00 00 00 03 61 00 00 00 2c |@@@........a...,|
000000a0 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 01 7e 00 |..............~.|
000000c0 00 00 00 00 00 00 04 31 |.......1 |
Case 4: Allocating a block that requires 2 bytes less than the next block
>>> obj4_1 = Object(key=(oid(5), tid(1)), version='', data='^'*98,
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj4_1)
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 61 00 00 00 |ZEC3........a...|
00000010 8d 00 00 00 00 00 00 00 05 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 00 00 00 00 00 00 62 5e |..............b^|
00000030 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e |^^^^^^^^^^^^^^^^|
00000040 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e |^^^^^^^^^^^^^^^^|
00000050 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e |^^^^^^^^^^^^^^^^|
00000060 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e |^^^^^^^^^^^^^^^^|
00000070 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e |^^^^^^^^^^^^^^^^|
00000080 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e 5e |^^^^^^^^^^^^^^^^|
00000090 5e 00 00 00 00 00 00 00 05 32 03 61 00 00 00 2c |^........2.a...,|
000000a0 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 01 7e 00 |..............~.|
000000c0 00 00 00 00 00 00 04 31 |.......1 |
Case 5: Allocating a block that requires 3 bytes less than the next block
The end of the file is already a bit crowded and would create a rather complex
situation to work on. We create an entry with a size of 95 bytes which will
be inserted at the beginning of the file, leaving a 3 byte free space after
it.
>>> obj4_1 = Object(key=(oid(6), tid(1)), version='', data='+'*95,
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj4_1)
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 61 00 00 00 |ZEC3........a...|
00000010 8a 00 00 00 00 00 00 00 06 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 00 00 00 00 00 00 5f 2b |.............._+|
00000030 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b |++++++++++++++++|
00000040 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b |++++++++++++++++|
00000050 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b |++++++++++++++++|
00000060 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b |++++++++++++++++|
00000070 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b |++++++++++++++++|
00000080 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 2b 00 00 |++++++++++++++..|
00000090 00 00 00 00 00 06 33 00 05 32 03 61 00 00 00 2c |......3..2.a...,|
000000a0 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 01 7e 00 |..............~.|
000000c0 00 00 00 00 00 00 04 31 |.......1 |
Case 6: Allocating a block that requires 4 bytes less than the next block
As in our previous case, we'll write a block that only fits in the first
block's place to avoid dealing with the cluttering at the end of the cache
file.
>>> obj4_1 = Object(key=(oid(7), tid(1)), version='', data='-'*91,
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj4_1)
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 61 00 00 00 |ZEC3........a...|
00000010 86 00 00 00 00 00 00 00 07 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 00 00 00 00 00 00 5b 2d |..............[-|
00000030 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d |----------------|
00000040 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d |----------------|
00000050 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d |----------------|
00000060 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d |----------------|
00000070 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d |----------------|
00000080 2d 2d 2d 2d 2d 2d 2d 2d 2d 2d 00 00 00 00 00 00 |----------......|
00000090 00 07 34 00 00 06 33 00 05 32 03 61 00 00 00 2c |..4...3..2.a...,|
000000a0 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 01 7e 00 |..............~.|
000000c0 00 00 00 00 00 00 04 31 |.......1 |
Case 7: Allocating a block that requires >= 5 bytes less than the next block
Again, we replace the block at the beginning of the cache.
>>> obj4_1 = Object(key=(oid(8), tid(1)), version='', data='='*86,
... start_tid=tid(1), end_tid=None)
>>> fc.add(obj4_1)
>>> hexprint(fc.f) # doctest: +REPORT_NDIFF
00000000 5a 45 43 33 00 00 00 00 00 00 00 00 61 00 00 00 |ZEC3........a...|
00000010 81 00 00 00 00 00 00 00 08 00 00 00 00 00 00 00 |................|
00000020 01 00 00 00 00 00 00 00 00 00 00 00 00 00 56 3d |..............V=|
00000030 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d |================|
00000040 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d |================|
00000050 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d |================|
00000060 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d |================|
00000070 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d 3d |================|
00000080 3d 3d 3d 3d 3d 00 00 00 00 00 00 00 08 66 00 00 |=====........f..|
00000090 00 05 34 00 00 06 33 00 05 32 03 61 00 00 00 2c |..4...3..2.a...,|
000000a0 00 00 00 00 00 00 00 04 00 00 00 00 00 00 00 01 |................|
000000b0 00 00 00 00 00 00 00 00 00 00 00 00 00 01 7e 00 |..............~.|
000000c0 00 00 00 00 00 00 04 31 |.......1 |
Statistic functions
===================
The `getStats` method reports the number of added objects, added bytes,
evicted objects, evicted bytes, and accesses to the cache:
>>> fc.getStats()
(8, 917, 5, 601, 2)
We can reset the stats by calling the `clearStats` method:
>>> fc.clearStats()
>>> fc.getStats()
(0, 0, 0, 0, 0)
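For readers skimming the numbers, the five fields of the tuple can be unpacked by position. This is a hypothetical illustration using the values printed above, not part of the ZEO API:

```python
# Hypothetical unpacking of the getStats() five-tuple, using the values
# printed in the doctest above; the field order follows the prose above:
# added objects, added bytes, evicted objects, evicted bytes, accesses.
stats = (8, 917, 5, 601, 2)
added_objects, added_bytes, evicted_objects, evicted_bytes, accesses = stats
print(added_objects, added_bytes)
print(evicted_objects, evicted_bytes)
print(accesses)
```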
Cleanup
=======
As the cache is non-persistent, its file will be gone from disk after closing
the cache:
>>> fc.f # doctest: +ELLIPSIS
<open file '<fdopen>', mode 'w+b' at 0x...>
>>> fc.close()
>>> fc.f
src/ZEO/tests/testConnection.py
...
@@ -46,7 +46,7 @@ class MappingStorageConfig:
     def getConfig(self, path, create, read_only):
         return """<mappingstorage 1/>"""

 class FileStorageConnectionTests(
     FileStorageConfig,
     ConnectionTests.ConnectionTests,
...
src/ZEO/tests/test_cache.py
...
@@ -14,16 +14,11 @@
 """Basic unit tests for a multi-version client cache."""

 import os
 import random
 import tempfile
 import unittest
-import doctest
-import string
-import sys

 import ZEO.cache
-from ZODB.utils import p64, repr_to_oid
+from ZODB.utils import p64

 n1 = p64(1)
 n2 = p64(2)
...
@@ -31,60 +26,6 @@ n3 = p64(3)
 n4 = p64(4)
 n5 = p64(5)

-def hexprint(file):
-    file.seek(0)
-    data = file.read()
-    offset = 0
-    while data:
-        line, data = data[:16], data[16:]
-        printable = ""
-        hex = ""
-        for character in line:
-            if character in string.printable and not ord(character) in [12, 13, 9]:
-                printable += character
-            else:
-                printable += '.'
-            hex += character.encode('hex') + ' '
-        hex = hex[:24] + ' ' + hex[24:]
-        hex = hex.ljust(49)
-        printable = printable.ljust(16)
-        print '%08x %s |%s|' % (offset, hex, printable)
-        offset += 16
-
-class ClientCacheDummy(object):
-
-    def __init__(self):
-        self.objects = {}
-
-    def _evicted(self, o):
-        if o.key in self.objects:
-            del self.objects[o.key]
-
-def oid(o):
-    repr = '%016x' % o
-    return repr_to_oid(repr)
-tid = oid
-
-class FileCacheFuzzing(unittest.TestCase):
-
-    def testFileCacheFuzzing(self):
-        cache_dummy = ClientCacheDummy()
-        fc = ZEO.cache.FileCache(maxsize=5000, fpath=None,
-                                 parent=cache_dummy)
-        for i in xrange(10000):
-            size = random.randint(0, 5500)
-            obj = ZEO.cache.Object(key=(oid(i), oid(1)), version='',
-                                   data='*'*size, start_tid=oid(1),
-                                   end_tid=None)
-            fc.add(obj)
-        hexprint(fc.f)
-        fc.close()
-
 class CacheTests(unittest.TestCase):

     def setUp(self):
...
@@ -185,76 +126,6 @@ class CacheTests(unittest.TestCase):
     # TODO: Need to make sure eviction of non-current data
     # and of version data are handled correctly.

-    def _run_fuzzing(self):
-        current_tid = 1
-        current_oid = 1
-        def log(*args):
-            #print args
-            pass
-        cache = self.fuzzy_cache
-        objects = self.fuzzy_cache_client.objects
-        for operation in xrange(10000):
-            op = random.choice(['add', 'access', 'remove', 'update',
-                                'settid'])
-            if not objects:
-                op = 'add'
-            log(op)
-            if op == 'add':
-                current_oid += 1
-                key = (oid(current_oid), tid(current_tid))
-                object = ZEO.cache.Object(
-                    key=key, version='', data='*'*random.randint(1, 60*1024),
-                    start_tid=tid(current_tid), end_tid=None)
-                assert key not in objects
-                log(key, len(object.data), current_tid)
-                cache.add(object)
-                if (object.size + ZEO.cache.OBJECT_HEADER_SIZE >
-                        cache.maxsize - ZEO.cache.ZEC3_HEADER_SIZE):
-                    assert key not in cache
-                else:
-                    objects[key] = object
-                    assert key in cache, key
-            elif op == 'access':
-                key = random.choice(objects.keys())
-                log(key)
-                object = objects[key]
-                found = cache.access(key)
-                assert object.data == found.data
-                assert object.key == found.key
-                assert object.size == found.size == (len(object.data) +
-                                                     object.TOTAL_FIXED_SIZE)
-            elif op == 'remove':
-                key = random.choice(objects.keys())
-                log(key)
-                cache.remove(key)
-                assert key not in cache
-                assert key not in objects
-            elif op == 'update':
-                key = random.choice(objects.keys())
-                object = objects[key]
-                log(key, object.key)
-                if not object.end_tid:
-                    object.end_tid = tid(current_tid)
-                    log(key, current_tid)
-                    cache.update(object)
-            elif op == 'settid':
-                current_tid += 1
-                log(current_tid)
-                cache.settid(tid(current_tid))
-        cache.close()
-
-    def testFuzzing(self):
-        random.seed()
-        seed = random.randint(0, sys.maxint)
-        random.seed(seed)
-        self.fuzzy_cache_client = ClientCacheDummy()
-        self.fuzzy_cache = ZEO.cache.FileCache(
-            random.randint(100, 50*1024), None, self.fuzzy_cache_client)
-        try:
-            self._run_fuzzing()
-        except:
-            print "Error in fuzzing with seed", seed
-            hexprint(self.fuzzy_cache.f)
-            raise
-
     def testSerialization(self):
         self.cache.store(n1, "", n2, None, "data for n1")
         self.cache.store(n2, "version", n2, None, "version data for n2")
...
@@ -281,10 +152,5 @@ class CacheTests(unittest.TestCase):
         eq(copy.current, self.cache.current)
         eq(copy.noncurrent, self.cache.noncurrent)

 def test_suite():
-    suite = unittest.TestSuite()
-    suite.addTest(unittest.makeSuite(CacheTests))
-    suite.addTest(unittest.makeSuite(FileCacheFuzzing))
-    suite.addTest(doctest.DocFileSuite('filecache.txt'))
-    return suite
+    return unittest.makeSuite(CacheTests)
src/ZEO/tests/zeo-fan-out.test
...
@@ -4,7 +4,7 @@ ZEO Fan Out
 We should be able to set up ZEO servers with ZEO clients.  Let's see
 if we can make it work.

-We'll use some helper functions.  The first is a helper that starts
+We'll use some helper functions.  The first is a helpter that starts
 ZEO servers for us and another one that picks ports.

 We'll start the first server:
...
@@ -16,7 +16,7 @@ We'll start the first server:
     ...     '<filestorage 1>\n path fs\n</filestorage>\n', zconf0, port0)

-Then we'll start 2 others that use this one:
+Then we''ll start 2 others that use this one:

     >>> port1 = ZEO.tests.testZEO.get_port()
     >>> zconf1 = ZEO.tests.forker.ZEOConfig(('', port1))
...
src/ZEO/zrpc/connection.py
...
@@ -82,7 +82,10 @@ def client_loop():
                     continue

                 if not (r or w or e):
-                    for obj in client_map.itervalues():
+                    # The line intentionally doesn't use iterators. Other
+                    # threads can close dispatchers, causeing the socket
+                    # map to shrink.
+                    for obj in client_map.values():
                         if isinstance(obj, Connection):
                             # Send a heartbeat message as a reply to a
                             # non-existent message id.
...
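The comment added in this hunk points at a real hazard: iterating a live dict while another thread closes dispatchers mutates the map mid-iteration. A minimal single-threaded sketch of the failure mode and the snapshot-based fix (stand-in names, not ZEO code):

```python
# Stand-in for the ZEO client socket map: fd -> dispatcher.
client_map = {1: "conn-a", 2: "conn-b", 3: "conn-c"}

def close_dispatcher(fd):
    # In ZEO this could happen from another thread while the loop runs.
    client_map.pop(fd, None)

# Unsafe: mutating the dict while iterating a live view raises RuntimeError.
try:
    for fd in client_map:
        close_dispatcher(fd)
except RuntimeError as e:
    unsafe_error = str(e)

# Safe: iterate over a snapshot; mutating the dict is then harmless.
client_map = {1: "conn-a", 2: "conn-b", 3: "conn-c"}
for fd in list(client_map):
    close_dispatcher(fd)

print(unsafe_error)
print(client_map)
```

This is the same reason the patched loop copies `client_map.values()` instead of using a lazy iterator.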
src/ZODB/Connection.py
...
@@ -44,7 +44,7 @@ from ZODB.ExportImport import ExportImport
 from ZODB import POSException
 from ZODB.POSException import InvalidObjectReference, ConnectionStateError
 from ZODB.POSException import ConflictError, ReadConflictError
-from ZODB.POSException import Unsupported, ReadOnlyHistoryError
+from ZODB.POSException import Unsupported
 from ZODB.POSException import POSKeyError
 from ZODB.serialize import ObjectWriter, ObjectReader, myhasattr
 from ZODB.utils import p64, u64, z64, oid_repr, positive_id
...
@@ -79,20 +79,17 @@ class Connection(ExportImport, object):
     ##########################################################################
     # Connection methods, ZODB.IConnection

-    def __init__(self, db, cache_size=400, before=None):
+    def __init__(self, db, version='', cache_size=400):
         """Create a new Connection."""

         self._log = logging.getLogger('ZODB.Connection')
         self._debug_info = ()

         self._db = db
-
-        # historical connection
-        self.before = before

         # Multi-database support
         self.connections = {self._db.database_name: self}

+        self._version = version
         self._normal_storage = self._storage = db._storage
         self.new_oid = db._storage.new_oid
         self._savepoint_storage = None
...
@@ -116,6 +113,13 @@ class Connection(ExportImport, object):
         # persistent data set.
         self._pre_cache = {}

+        if version:
+            # Caches for versions end up empty if the version
+            # is not used for a while. Non-version caches
+            # keep their content indefinitely.
+            # Unclear:  Why do we want version caches to behave this way?
+            self._cache.cache_drain_resistance = 100
+
         # List of all objects (not oids) registered as modified by the
         # persistence machinery, or by add(), or whose access caused a
         # ReadConflictError (just to be able to clean them up from the
...
@@ -136,7 +140,7 @@ class Connection(ExportImport, object):
         # During commit, all objects go to either _modified or _creating:

         # Dict of oid->flag of new objects (without serial), either
-        # added by add() or implicitly added (discovered by the
+        # added by add() or implicitely added (discovered by the
         # serializer during commit). The flag is True for implicit
         # adding. Used during abort to remove created objects from the
         # _cache, and by persistent_id to check that a new object isn't
...
@@ -182,6 +186,8 @@ class Connection(ExportImport, object):
         # the upper bound on transactions visible to this connection.
         # That is, all object revisions must be written before _txn_time.
         # If it is None, then the current revisions are acceptable.
+        # If the connection is in a version, mvcc will be disabled, because
+        # loadBefore() only returns non-version data.
         self._txn_time = None

         # To support importFile(), implemented in the ExportImport base
...
@@ -235,10 +241,7 @@ class Connection(ExportImport, object):
             # This appears to be an MVCC violation because we are loading
             # the must recent data when perhaps we shouldnt. The key is
             # that we are only creating a ghost!
-            # A disadvantage to this optimization is that _p_serial cannot be
-            # trusted until the object has been loaded, which affects both MVCC
-            # and historical connections.
-            p, serial = self._storage.load(oid, '')
+            p, serial = self._storage.load(oid, self._version)
             obj = self._reader.getGhost(p)

             # Avoid infiniate loop if obj tries to load its state before
...
@@ -309,22 +312,22 @@ class Connection(ExportImport, object):
             # get back here.
         else:
             self._opened = None

         am = self._db._activity_monitor
         if am is not None:
             am.closedConnection(self)

     def db(self):
         """Returns a handle to the database this connection belongs to."""
         return self._db

     def isReadOnly(self):
-        """Returns True if this connection is read only."""
+        """Returns True if the storage for this connection is read only."""
         if self._opened is None:
             raise ConnectionStateError("The database connection is closed")
-        return self.before is not None or self._storage.isReadOnly()
+        return self._storage.isReadOnly()

     def invalidate(self, tid, oids):
         """Notify the Connection that transaction 'tid' invalidated oids."""
-        if self.before is not None:
-            # this is an historical connection. Invalidations are irrelevant.
-            return
         self._inv_lock.acquire()
         try:
             if self._txn_time is None:
...
@@ -340,17 +343,24 @@ class Connection(ExportImport, object):
         finally:
             self._inv_lock.release()

     def root(self):
         """Return the database root object."""
         return self.get(z64)

+    def getVersion(self):
+        """Returns the version this connection is attached to."""
+        if self._storage is None:
+            raise ConnectionStateError("The database connection is closed")
+        return self._version
+
     def get_connection(self, database_name):
         """Return a Connection for the named database."""
         connection = self.connections.get(database_name)
         if connection is None:
             new_con = self._db.databases[database_name].open(
                 transaction_manager=self.transaction_manager,
-                before=self.before,
+                version=self._version,
                 )
             self.connections.update(new_con.connections)
             new_con.connections = self.connections
...
@@ -533,9 +543,6 @@ class Connection(ExportImport, object):
     def _commit(self, transaction):
         """Commit changes to an object"""

-        if self.before is not None:
-            raise ReadOnlyHistoryError()
-
         if self._import:
             # We are importing an export file. We alsways do this
             # while making a savepoint so we can copy export data
...
@@ -614,14 +621,15 @@ class Connection(ExportImport, object):
                     raise ValueError("Can't commit with opened blobs.")
                 s = self._storage.storeBlob(oid, serial, p,
                                             obj._uncommitted(),
-                                            '', transaction)
+                                            self._version, transaction)
                 # we invalidate the object here in order to ensure
                 # that that the next attribute access of its name
                 # unghostify it, which will cause its blob data
                 # to be reattached "cleanly"
                 obj._p_invalidate()
             else:
-                s = self._storage.store(oid, serial, p, '', transaction)
+                s = self._storage.store(oid, serial, p, self._version,
+                                        transaction)
             self._store_count += 1
             # Put the object in the cache before handling the
             # response, just in case the response contains the
...
@@ -820,20 +828,11 @@ class Connection(ExportImport, object):
         # the code if we could drop support for it.
         # (BTrees.Length does.)

-        if self.before is not None:
-            # Load data that was current before the time we have.
-            before = self.before
-            t = self._storage.loadBefore(obj._p_oid, before)
-            if t is None:
-                raise POSKeyError() # historical connection!
-            p, serial, end = t
-
-        else:
         # There is a harmless data race with self._invalidated. A
         # dict update could go on in another thread, but we don't care
         # because we have to check again after the load anyway.

         if self._invalidatedCache:
             raise ReadConflictError()
...
@@ -843,7 +842,7 @@ class Connection(ExportImport, object):
                 self._load_before_or_conflict(obj)
                 return

-        p, serial = self._storage.load(obj._p_oid, '')
+        p, serial = self._storage.load(obj._p_oid, self._version)
         self._load_count += 1

         self._inv_lock.acquire()
...
@@ -871,7 +870,7 @@ class Connection(ExportImport, object):
     def _load_before_or_conflict(self, obj):
         """Load non-current state for obj or raise ReadConflictError."""
-        if not self._setstate_noncurrent(obj):
+        if not ((not self._version) and self._setstate_noncurrent(obj)):
             self._register(obj)
             self._conflicts[obj._p_oid] = True
             raise ReadConflictError(object=obj)
...
@@ -898,12 +897,6 @@ class Connection(ExportImport, object):
-        assert self._txn_time <= end, (u64(self._txn_time), u64(end))
         self._reader.setGhostState(obj, data)
         obj._p_serial = start
-
-        # MVCC Blob support
-        if isinstance(obj, Blob):
-            obj._p_blob_uncommitted = None
-            obj._p_blob_committed = self._storage.loadBlob(obj._p_oid, start)
         return True

     def _handle_independent(self, obj):
...
@@ -1033,7 +1026,11 @@ class Connection(ExportImport, object):
     # Python protocol

     def __repr__(self):
-        return '<Connection at %08x>' % (positive_id(self),)
+        if self._version:
+            ver = ' (in version %s)' % `self._version`
+        else:
+            ver = ''
+        return '<Connection at %08x%s>' % (positive_id(self), ver)

     # Python protocol
     ##########################################################################
...
@@ -1043,6 +1040,17 @@ class Connection(ExportImport, object):
     __getitem__ = get

+    def modifiedInVersion(self, oid):
+        """Returns the version the object with the given oid was modified in.
+
+        If it wasn't modified in a version, the current version of this
+        connection is returned.
+        """
+        try:
+            return self._db.modifiedInVersion(oid)
+        except KeyError:
+            return self.getVersion()
+
     def exchange(self, old, new):
         # called by a ZClasses method that isn't executed by the test suite
         oid = old._p_oid
...
@@ -1068,7 +1076,7 @@ class Connection(ExportImport, object):
     def savepoint(self):
         if self._savepoint_storage is None:
-            tmpstore = TmpStore(self._normal_storage)
+            tmpstore = TmpStore(self._version, self._normal_storage)
             self._savepoint_storage = tmpstore
             self._storage = self._savepoint_storage
...
@@ -1113,7 +1121,7 @@ class Connection(ExportImport, object):
             if isinstance(self._reader.getGhost(data), Blob):
                 blobfilename = src.loadBlob(oid, serial)
                 s = self._storage.storeBlob(oid, serial, data, blobfilename,
-                                            '', transaction)
+                                            self._version, transaction)
                 # we invalidate the object here in order to ensure
                 # that that the next attribute access of its name
                 # unghostify it, which will cause its blob data
...
@@ -1121,7 +1129,7 @@ class Connection(ExportImport, object):
                 self.invalidate(s, {oid: True})
             else:
-                s = self._storage.store(oid, serial, data, '', transaction)
+                s = self._storage.store(oid, serial, data, self._version,
+                                        transaction)
             self._handle_serial(s, oid, change=False)
         src.close()
...
@@ -1171,14 +1179,23 @@ class TmpStore:
     implements(IBlobStorage)

-    def __init__(self, storage):
+    def __init__(self, base_version, storage):
         self._storage = storage
         for method in (
             'getName', 'new_oid', 'getSize', 'sortKey', 'loadBefore',
             'isReadOnly'
             ):
             setattr(self, method, getattr(storage, method))

+        try:
+            supportsVersions = storage.supportsVersions
+        except AttributeError:
+            pass
+        else:
+            if supportsVersions():
+                self.modifiedInVersion = storage.modifiedInVersion
+                self.versionEmpty = storage.versionEmpty
+
+        self._base_version = base_version
         self._file = tempfile.TemporaryFile()
         # position: current file position
         # _tpos: file position at last commit point
...
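The `supportsVersions` probe added to `TmpStore.__init__` is a feature-detection idiom: fetch the optional attribute inside try/except, and only on success delegate the related methods to the wrapped object. A minimal sketch with stand-in classes (not ZODB code):

```python
# Stand-in storages: one advertises version support, one does not.
class VersionedStore:
    def supportsVersions(self):
        return True
    def modifiedInVersion(self, oid):
        return 'v1'

class PlainStore:
    pass  # no supportsVersions attribute at all

class Wrapper:
    def __init__(self, storage):
        # Probe for the optional capability; bind the delegate only if
        # the capability both exists and reports True.
        try:
            supportsVersions = storage.supportsVersions
        except AttributeError:
            pass
        else:
            if supportsVersions():
                self.modifiedInVersion = storage.modifiedInVersion

w1 = Wrapper(VersionedStore())
w2 = Wrapper(PlainStore())
print(hasattr(w1, 'modifiedInVersion'))
print(hasattr(w2, 'modifiedInVersion'))
```

Callers can then use `hasattr` (or just call and catch `AttributeError`) instead of every wrapper having to stub out unsupported methods.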
@@ -1196,7 +1213,7 @@ class TmpStore:
     def load(self, oid, version):
         pos = self.index.get(oid)
         if pos is None:
-            return self._storage.load(oid, '')
+            return self._storage.load(oid, self._base_version)
         self._file.seek(pos)
         h = self._file.read(8)
         oidlen = u64(h)
...
@@ -1211,7 +1228,7 @@ class TmpStore:
     def store(self, oid, serial, data, version, transaction):
         # we have this funny signature so we can reuse the normal non-commit
         # commit logic
-        assert version == ''
+        assert version == self._base_version
         self._file.seek(self.position)
         l = len(data)
         if serial is None:
...
@@ -1225,8 +1242,7 @@ class TmpStore:
     def storeBlob(self, oid, serial, data, blobfilename, version,
                   transaction):
-        assert version == ''
-        serial = self.store(oid, serial, data, '', transaction)
+        serial = self.store(oid, serial, data, version, transaction)

         targetpath = self._getBlobPath(oid)
         if not os.path.exists(targetpath):
...
src/ZODB/DB.py
...
@@ -21,8 +21,6 @@ import cPickle, cStringIO, sys
 import threading
 from time import time, ctime
 import logging
-import datetime
-import calendar

 from ZODB.broken import find_global
 from ZODB.utils import z64
...
@@ -33,15 +31,12 @@ from ZODB.utils import WeakSet
 from zope.interface import implements
 from ZODB.interfaces import IDatabase

-import BTrees.OOBTree
 import transaction

-from persistent.TimeStamp import TimeStamp
-
 logger = logging.getLogger('ZODB.DB')

-class AbstractConnectionPool(object):
+class _ConnectionPool(object):
     """Manage a pool of connections.

     CAUTION:  Methods should be called under the protection of a lock.
...
@@ -67,58 +62,33 @@ class AbstractConnectionPool(object):
     connectionDebugInfo() can still gather statistics.
     """

-    def __init__(self, size, timeout=None):
+    def __init__(self, pool_size):
         # The largest # of connections we expect to see alive simultaneously.
-        self._size = size
-
-        # The minimum number of seconds that an available connection should
-        # be kept, or None.
-        self._timeout = timeout
+        self.pool_size = pool_size

         # A weak set of all connections we've seen.  A connection vanishes
         # from this set if pop() hands it out, it's not reregistered via
         # repush(), and it becomes unreachable.
         self.all = WeakSet()

-    def setSize(self, size):
+        # A stack of connections available to hand out.  This is a subset
+        # of self.all.  push() and repush() add to this, and may remove
+        # the oldest available connections if the pool is too large.
+        # pop() pops this stack.  There are never more than pool_size entries
+        # in this stack.
+        # In Python 2.4, a collections.deque would make more sense than
+        # a list (we push only "on the right", but may pop from both ends).
+        self.available = []

+    def set_pool_size(self, pool_size):
         """Change our belief about the expected maximum # of live connections.

         If the pool_size is smaller than the current value, this may discard
         the oldest available connections.
         """
-        self._size = size
+        self.pool_size = pool_size
         self._reduce_size()

-    def setTimeout(self, timeout):
-        old = self._timeout
-        self._timeout = timeout
-        if timeout is not None and old != timeout and (
-            old is None or old > timeout):
-            self._reduce_size()
-
-    def getSize(self):
-        return self._size
-
-    def getTimeout(self):
-        return self._timeout
-
-    timeout = property(getTimeout, setTimeout)
-    size = property(getSize, setSize)
-
-class ConnectionPool(AbstractConnectionPool):
-
-    def __init__(self, size, timeout=None):
-        super(ConnectionPool, self).__init__(size, timeout)
-
-        # A stack of connections available to hand out.  This is a subset
-        # of self.all.  push() and repush() add to this, and may remove
-        # the oldest available connections if the pool is too large.
-        # pop() pops this stack.  There are never more than size entries
-        # in this stack.  The keys are time.time() values of the push or
-        # repush calls.
-        self.available = BTrees.OOBTree.Bucket()
-
     def push(self, c):
         """Register a new available connection.
...
@@ -126,12 +96,12 @@ class ConnectionPool(AbstractConnectionPool):
         stack even if we're over the pool size limit.
         """
         assert c not in self.all
-        assert c not in self.available.values()
+        assert c not in self.available
         self._reduce_size(strictly_less=True)
         self.all.add(c)
-        self.available[time()] = c
+        self.available.append(c)
         n = len(self.all)
-        limit = self.size
+        limit = self.pool_size
         if n > limit:
             reporter = logger.warn
             if n > 2 * limit:
...
@@ -146,46 +116,34 @@ class ConnectionPool(AbstractConnectionPool):
         older available connections.
         """
         assert c in self.all
-        assert c not in self.available.values()
+        assert c not in self.available
         self._reduce_size(strictly_less=True)
-        self.available[time()] = c
+        self.available.append(c)

     def _reduce_size(self, strictly_less=False):
         """Throw away the oldest available connections until we're under our
         target size (strictly_less=False, the default) or no more than that
         (strictly_less=True).
         """
-        if self.timeout is None:
-            threshhold = None
-        else:
-            threshhold = time() - self.timeout
-        target = self.size
+        target = self.pool_size
         if strictly_less:
             target -= 1
-        for t, c in list(self.available.items()):
-            if (len(self.available) > target or
-                threshhold is not None and t < threshhold):
-                del self.available[t]
-                self.all.remove(c)
+        while len(self.available) > target:
+            c = self.available.pop(0)
+            self.all.remove(c)
             # While application code may still hold a reference to `c`,
             # there's little useful that can be done with this Connection
             # anymore.  Its cache may be holding on to limited resources,
             # and we replace the cache with an empty one now so that we
             # don't have to wait for gc to reclaim it.  Note that it's not
-            # possible for DB.open() to return `c` again: `c` can never be
-            # in an open state again.
+            # possible for DB.open() to return `c` again: `c` can never
+            # be in an open state again.
             # TODO:  Perhaps it would be better to break the reference
-            # cycles between `c` and `c._cache`, so that refcounting
-            # reclaims both right now.  But if user code _does_ have a
-            # strong reference to `c` now, breaking the cycle would not
-            # reclaim `c` now, and `c` would be left in a user-visible
-            # crazy state.
+            # cycles between `c` and `c._cache`, so that refcounting reclaims
+            # both right now.  But if user code _does_ have a strong
+            # reference to `c` now, breaking the cycle would not reclaim `c`
+            # now, and `c` would be left in a user-visible crazy state.
             c._resetCache()
-            else:
-                break

     def reduce_size(self):
         self._reduce_size()

     def pop(self):
         """Pop an available connection and return it.
...
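The list-based `available` stack and `_reduce_size` trimming in this hunk amount to a small bounded LIFO pool: returns append on the right, trimming discards from the left (oldest), and `pop` hands out the most recently returned entry. A toy sketch of that discipline (stand-in names; no weak references or timeouts):

```python
# Minimal bounded LIFO pool mirroring the trimming discipline above.
class TinyPool:
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.available = []          # oldest at index 0, newest at the end

    def repush(self, c):
        # Trim to strictly below pool_size first, so appending c never
        # leaves more than pool_size entries.
        self._reduce_size(strictly_less=True)
        self.available.append(c)

    def _reduce_size(self, strictly_less=False):
        target = self.pool_size - (1 if strictly_less else 0)
        while len(self.available) > target:
            self.available.pop(0)    # discard the oldest available entry

    def pop(self):
        # Hand out the most recently returned entry (warmest cache).
        return self.available.pop() if self.available else None

pool = TinyPool(2)
for c in "abcd":
    pool.repush(c)
print(pool.available)
print(pool.pop())
```

Popping from the newest end favors connections with warm object caches; trimming from the oldest end discards the connections least likely to be reused.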
@@ -196,159 +154,23 @@ class ConnectionPool(AbstractConnectionPool):
         """
         result = None
         if self.available:
-            result = self.available.pop(self.available.maxKey())
+            result = self.available.pop()
             # Leave it in self.all, so we can still get at it for statistics
             # while it's alive.
             assert result in self.all
         return result

-    def map(self, f):
-        """For every live connection c, invoke f(c)."""
-        self.all.map(f)
-
-    def availableGC(self):
-        """Perform garbage collection on available connections.
-
-        If a connection is no longer viable because it has timed out, it is
-        garbage collected."""
-        if self.timeout is None:
-            threshhold = None
-        else:
-            threshhold = time() - self.timeout
-        for t, c in tuple(self.available.items()):
-            if threshhold is not None and t < threshhold:
-                del self.available[t]
-                self.all.remove(c)
-                c._resetCache()
-            else:
-                c.cacheGC()
-
-class KeyedConnectionPool(AbstractConnectionPool):
-    # this pool keeps track of keyed connections all together.  It makes
-    # it possible to make assertions about total numbers of keyed
-    # connections.  The keys in this case are "before" TIDs, but this is
-    # used by other packages as well.
-
-    # see the comments in ConnectionPool for method descriptions.
-
-    def __init__(self, size, timeout=None):
-        super(KeyedConnectionPool, self).__init__(size, timeout)
-        # key: {time.time: connection}
-        self.available = BTrees.family32.OO.Bucket()
-        # time.time: key
-        self.closed = BTrees.family32.OO.Bucket()
-
-    def push(self, c, key):
-        assert c not in self.all
-        available = self.available.get(key)
-        if available is None:
-            available = self.available[key] = BTrees.family32.OO.Bucket()
-        else:
-            assert c not in available.values()
-        self._reduce_size(strictly_less=True)
-        self.all.add(c)
-        t = time()
-        available[t] = c
-        self.closed[t] = key
-        n = len(self.all)
-        limit = self.size
-        if n > limit:
-            reporter = logger.warn
-            if n > 2 * limit:
-                reporter = logger.critical
-            reporter("DB.open() has %s open connections with a size "
-                     "of %s", n, limit)
-
-    def repush(self, c, key):
-        assert c in self.all
-        self._reduce_size(strictly_less=True)
-        available = self.available.get(key)
-        if available is None:
-            available = self.available[key] = BTrees.family32.OO.Bucket()
-        else:
-            assert c not in available.values()
-        t = time()
-        available[t] = c
-        self.closed[t] = key
-
-    def _reduce_size(self, strictly_less=False):
-        if self.timeout is None:
-            threshhold = None
-        else:
-            threshhold = time() - self.timeout
-        target = self.size
-        if strictly_less:
-            target -= 1
-        for t, key in tuple(self.closed.items()):
-            if (len(self.available) > target or
-                threshhold is not None and t < threshhold):
-                del self.closed[t]
-                c = self.available[key].pop(t)
-                if not self.available[key]:
-                    del self.available[key]
-                self.all.remove(c)
-                c._resetCache()
-            else:
-                break
-
-    def reduce_size(self):
-        self._reduce_size()
-
-    def pop(self, key):
-        result = None
-        available = self.available.get(key)
-        if available:
-            t = available.maxKey()
-            result = available.pop(t)
-            del self.closed[t]
-            if not available:
-                del self.available[key]
-            assert result in self.all
-        return result
-
-    def map(self, f):
-        self.all.map(f)
-
-    def availableGC(self):
-        if self.timeout is None:
-            threshhold = None
-        else:
-            threshhold = time() - self.timeout
-        for t, key in tuple(self.closed.items()):
-            if threshhold is not None and t < threshhold:
-                del self.closed[t]
-                c = self.available[key].pop(t)
-                if not self.available[key]:
-                    del self.available[key]
-                self.all.remove(c)
-                c._resetCache()
-            else:
-                self.available[key][t].cacheGC()
-
-def toTimeStamp(dt):
-    utc_struct = dt.utctimetuple()
-    # if this is a leapsecond, this will probably fail.  That may be a good
-    # thing: leapseconds are not really accounted for with serials.
-    args = utc_struct[:5] + (utc_struct[5] + dt.microsecond / 1000000.0,)
-    return TimeStamp(*args)
-
-def getTID(at, before):
-    if at is not None:
-        if before is not None:
-            raise ValueError('can only pass zero or one of `at` and `before`')
-        if isinstance(at, datetime.datetime):
-            at = toTimeStamp(at)
-        else:
-            at = TimeStamp(at)
-        before = repr(at.laterThan(at))
-    elif before is not None:
-        if isinstance(before, datetime.datetime):
-            before = repr(toTimeStamp(before))
-        else:
-            before = repr(TimeStamp(before))
-    return before
+    def map(self, f, open_connections=True):
+        """For every live connection c, invoke f(c).
+
+        If `open_connections` is false then only call f(c) on closed
+        connections.
+        """
+        if open_connections:
+            self.all.map(f)
+        else:
+            map(f, self.available)

 class DB(object):
     """The Object Database
...
@@ -380,27 +202,27 @@ class DB(object):
     - `User Methods`: __init__, open, close, undo, pack, classFactory
     - `Inspection Methods`: getName, getSize, objectCount,
       getActivityMonitor, setActivityMonitor
-    - `Connection Pool Methods`: getPoolSize, getHistoricalPoolSize,
-      setPoolSize, setHistoricalPoolSize, getHistoricalTimeout,
-      setHistoricalTimeout
+    - `Connection Pool Methods`: getPoolSize, getVersionPoolSize,
+      removeVersionPool, setPoolSize, setVersionPoolSize
     - `Transaction Methods`: invalidate
     - `Other Methods`: lastTransaction, connectionDebugInfo
+    - `Version Methods`: modifiedInVersion, abortVersion, commitVersion,
+      versionEmpty
     - `Cache Inspection Methods`: cacheDetail, cacheExtremeDetail,
       cacheFullSweep, cacheLastGCTime, cacheMinimize, cacheSize,
-      cacheDetailSize, getCacheSize, getHistoricalCacheSize, setCacheSize,
-      setHistoricalCacheSize
+      cacheDetailSize, getCacheSize, getVersionCacheSize, setCacheSize,
+      setVersionCacheSize
     """
     implements(IDatabase)

     klass = Connection  # Class to use for connections

-    _activity_monitor = next = previous = None
+    _activity_monitor = None

     def __init__(self, storage,
                  pool_size=7,
                  cache_size=400,
-                 historical_pool_size=3,
-                 historical_cache_size=1000,
-                 historical_timeout=300,
+                 version_pool_size=3,
+                 version_cache_size=100,
                  database_name='unnamed',
                  databases=None,
                  ):
...
@@ -410,24 +232,23 @@ class DB(object):
         - `storage`: the storage used by the database, e.g. FileStorage
         - `pool_size`: expected maximum number of open connections
         - `cache_size`: target size of Connection object cache
-        - `historical_pool_size`: expected maximum number of total
-          historical connections
-        - `historical_cache_size`: target size of Connection object cache for
-          historical (`at` or `before`) connections
-        - `historical_timeout`: minimum number of seconds that
-          an unused historical connection will be kept, or None.
+        - `version_pool_size`: expected maximum number of connections (per
+          version)
+        - `version_cache_size`: target size of Connection object cache for
+          version connections
         """
         # Allocate lock.
         x = threading.RLock()
         self._a = x.acquire
         self._r = x.release

-        # pools and cache sizes
-        self.pool = ConnectionPool(pool_size)
-        self.historical_pool = KeyedConnectionPool(historical_pool_size,
-                                                   historical_timeout)
+        # Setup connection pools and cache info
+        # _pools maps a version string to a _ConnectionPool object.
+        self._pools = {}
+        self._pool_size = pool_size
         self._cache_size = cache_size
-        self._historical_cache_size = historical_cache_size
+        self._version_pool_size = version_pool_size
+        self._version_cache_size = version_cache_size

         # Setup storage
         self._storage = storage
...
@@ -475,6 +296,7 @@ class DB(object):
             databases[database_name] = self

         self._setupUndoMethods()
+        self._setupVersionMethods()
         self.history = storage.history

     def _setupUndoMethods(self):
...
@@ -494,6 +316,25 @@ class DB(object):
            raise NotImplementedError
        self.undo = undo

    def _setupVersionMethods(self):
        storage = self._storage
        try:
            self.supportsVersions = storage.supportsVersions
        except AttributeError:
            self.supportsVersions = lambda: False

        if self.supportsVersions():
            self.versionEmpty = storage.versionEmpty
            self.versions = storage.versions
            self.modifiedInVersion = storage.modifiedInVersion
        else:
            self.versionEmpty = lambda version: True
            self.versions = lambda max=None: ()
            self.modifiedInVersion = lambda oid: ''
            def commitVersion(*a, **k):
                raise NotImplementedError
            self.commitVersion = self.abortVersion = commitVersion

    # This is called by Connection.close().
    def _returnToPool(self, connection):
        """Return a connection to the pool.
@@ -510,23 +351,46 @@ class DB(object):
            if am is not None:
                am.closedConnection(connection)

            if connection.before:
                self.historical_pool.repush(connection, connection.before)
            else:
                self.pool.repush(connection)
            version = connection._version
            try:
                pool = self._pools[version]
            except KeyError:
                # No such version. We must have deleted the pool.
                # Just let the connection go.
                # We need to break circular refs to make it really go.
                # TODO: Figure out exactly which objects are involved in the
                # cycle.
                connection.__dict__.clear()
                return
            pool.repush(connection)
        finally:
            self._r()

    def _connectionMap(self, f):
        """Call f(c) for all connections c in all pools, live and historical.
    def _connectionMap(self, f, open_connections=True):
        """Call f(c) for all connections c in all pools in all versions.

        If `open_connections` is false then f(c) is only called on closed
        connections.
        """
        self._a()
        try:
            self.pool.map(f)
            self.historical_pool.map(f)
            for pool in self._pools.values():
                pool.map(f, open_connections=open_connections)
        finally:
            self._r()
    def abortVersion(self, version, txn=None):
        warnings.warn("Versions are deprecated and will become unsupported "
                      "in ZODB 3.9", DeprecationWarning, 2)
        if txn is None:
            txn = transaction.get()
        txn.register(AbortVersion(self, version))

    def cacheDetail(self):
        """Return information on objects in the various caches
...
...
@@ -639,6 +503,15 @@ class DB(object):
"""
self
.
_storage
.
close
()
def
commitVersion
(
self
,
source
,
destination
=
''
,
txn
=
None
):
warnings
.
warn
(
"Versions are deprecated and will become unsupported "
"in ZODB 3.9"
,
DeprecationWarning
,
2
)
if
txn
is
None
:
txn
=
transaction
.
get
()
txn
.
register
(
CommitVersion
(
self
,
source
,
destination
))
def
getCacheSize
(
self
):
return
self
.
_cache_size
...
...
@@ -649,19 +522,24 @@ class DB(object):
        return self._storage.getName()

    def getPoolSize(self):
        return self.pool.size
        return self._pool_size

    def getSize(self):
        return self._storage.getSize()

    def getHistoricalCacheSize(self):
        return self._historical_cache_size

    def getHistoricalPoolSize(self):
        return self.historical_pool.size
    def getVersionCacheSize(self):
        warnings.warn("Versions are deprecated and will become unsupported "
                      "in ZODB 3.9", DeprecationWarning, 2)
        return self._version_cache_size

    def getHistoricalTimeout(self):
        return self.historical_pool.timeout
    def getVersionPoolSize(self):
        warnings.warn("Versions are deprecated and will become unsupported "
                      "in ZODB 3.9", DeprecationWarning, 2)
        return self._version_pool_size
    def invalidate(self, tid, oids, connection=None, version=''):
        """Invalidate references to a given oid.
...
...
@@ -671,11 +549,13 @@ class DB(object):
        passed in to prevent useless (but harmless) messages to the
        connection.
        """
        # Storages, esp. ZEO tests, need the version argument still. :-/
        assert version == ''
        if connection is not None:
            version = connection._version
        # Notify connections.
        def inval(c):
            if c is not connection:
            if (c is not connection and
                  (not version or c._version == version)):
                c.invalidate(tid, oids)
        self._connectionMap(inval)
...
...
@@ -687,69 +567,79 @@ class DB(object):
    def objectCount(self):
        return len(self._storage)

    def open(self, transaction_manager=None, at=None, before=None):
    def open(self, version='', transaction_manager=None):
        """Return a database Connection for use by application code.

        The optional `version` argument can be used to specify that a
        version connection is desired.

        Note that the connection pool is managed as a stack, to
        increase the likelihood that the connection's stack will
        include useful objects.

        :Parameters:
          - `version`: the "version" that all changes will be made
            in, defaults to no version.
          - `transaction_manager`: transaction manager to use. None means
            use the default transaction manager.
          - `at`: a datetime.datetime or 8 character transaction id of the
            time to open the database with a read-only connection. Passing
            both `at` and `before` raises a ValueError, and passing neither
            opens a standard writable transaction of the newest state.
            A timezone-naive datetime.datetime is treated as a UTC value.
          - `before`: like `at`, but opens the readonly state before the
            tid or datetime.
        """
        # `at` is normalized to `before`, since we use storage.loadBefore
        # as the underlying implementation of both.
        before = getTID(at, before)
        if (before is not None and
            before > self.lastTransaction() and
            before > getTID(self.lastTransaction(), None)):
            raise ValueError(
                'cannot open an historical connection in the future.')
        if version:
            if not self.supportsVersions():
                raise ValueError(
                    "Versions are not supported by this database.")
            warnings.warn("Versions are deprecated and will become "
                          "unsupported in ZODB 3.9",
                          DeprecationWarning, 2)

        self._a()
        try:
            # pool <- the _ConnectionPool for this version
            pool = self._pools.get(version)
            if pool is None:
                if version:
                    size = self._version_pool_size
                else:
                    size = self._pool_size
                self._pools[version] = pool = _ConnectionPool(size)
            assert pool is not None

            # result <- a connection
            if before is not None:
                result = self.historical_pool.pop(before)
            result = pool.pop()
            if result is None:
                c = self.klass(self, self._historical_cache_size, before)
                self.historical_pool.push(c, before)
                result = self.historical_pool.pop(before)
                if version:
                    size = self._version_cache_size
            else:
                result = self.pool.pop()
                if result is None:
                    c = self.klass(self, self._cache_size)
                    self.pool.push(c)
                    result = self.pool.pop()
                    size = self._cache_size
                c = self.klass(self, version, size)
                pool.push(c)
                result = pool.pop()
            assert result is not None

            # open the connection.
            # Tell the connection it belongs to self.
            result.open(transaction_manager)

            # A good time to do some cache cleanup.
            # (note we already have the lock)
            self.pool.availableGC()
            self.historical_pool.availableGC()
            self._connectionMap(lambda c: c.cacheGC(), open_connections=False)

            return result
        finally:
            self._r()
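The docstring above notes that the pool is managed as a stack, so the most recently closed connection (with the warmest object cache) is handed out first. A minimal sketch of that behavior, using hypothetical names rather than the real `_ConnectionPool` class:

```python
class StackPool:
    """Toy LIFO pool: the most recently returned connection is
    handed out first, so its object cache is most likely warm."""

    def __init__(self, size):
        self.size = size        # soft limit on retained idle connections
        self._available = []    # stack of idle connections

    def pop(self):
        # Reuse the most recently closed connection, if any.
        return self._available.pop() if self._available else None

    def repush(self, conn):
        # A connection was closed; keep it around for reuse.
        self._available.append(conn)
        self.available_gc()

    def available_gc(self):
        # Drop the oldest idle connections beyond the size limit.
        while len(self._available) > self.size:
            del self._available[0]

pool = StackPool(2)
pool.repush('c1'); pool.repush('c2'); pool.repush('c3')
# size limit 2: 'c1' (closed earliest) was dropped; 'c3' comes back first
assert pool.pop() == 'c3'
assert pool.pop() == 'c2'
assert pool.pop() is None
```

The real pool also tracks open connections and timeouts; this sketch only shows the stack discipline.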
    def removeVersionPool(self, version):
        try:
            del self._pools[version]
        except KeyError:
            pass

    def connectionDebugInfo(self):
        result = []
        t = time()

        def get_info(c):
            # `result`, `time` and `before` are lexically inherited.
            # `result`, `time` and `version` are lexically inherited.
            o = c._opened
            d = c.getDebugInfo()
            if d:
...
...
@@ -762,10 +652,10 @@ class DB(object):
            result.append({
                'opened': o and ("%s (%.2fs)" % (ctime(o), t - o)),
                'info': d,
                'before': before,
                'version': version,
                })

        for before, pool in self._pools.items():
        for version, pool in self._pools.items():
            pool.map(get_info)
        return result
...
...
@@ -807,40 +697,48 @@ class DB(object):
        self._a()
        try:
            self._cache_size = size
            pool = self._pools.get('')
            if pool is not None:
                def setsize(c):
                    c._cache.cache_size = size
                self.pool.map(setsize)
                pool.map(setsize)
        finally:
            self._r()

    def setHistoricalCacheSize(self, size):
    def setVersionCacheSize(self, size):
        warnings.warn("Versions are deprecated and will become unsupported "
                      "in ZODB 3.9", DeprecationWarning, 2)
        self._a()
        try:
            self._historical_cache_size = size
            self._version_cache_size = size
            def setsize(c):
                c._cache.cache_size = size
            self.historical_pool.map(setsize)
            for version, pool in self._pools.items():
                if version:
                    pool.map(setsize)
        finally:
            self._r()

    def setPoolSize(self, size):
        self._a()
        try:
            self.pool.size = size
        finally:
            self._r()
        self._pool_size = size
        self._reset_pool_sizes(size, for_versions=False)

    def setHistoricalPoolSize(self, size):
        self._a()
        try:
            self.historical_pool.size = size
        finally:
            self._r()

    def setVersionPoolSize(self, size):
        warnings.warn("Versions are deprecated and will become unsupported "
                      "in ZODB 3.9", DeprecationWarning, 2)
        self._version_pool_size = size
        self._reset_pool_sizes(size, for_versions=True)

    def setHistoricalTimeout(self, timeout):
    def _reset_pool_sizes(self, size, for_versions=False):
        self._a()
        try:
            self.historical_pool.timeout = timeout
            for version, pool in self._pools.items():
                if (version != '') == for_versions:
                    pool.set_pool_size(size)
        finally:
            self._r()
...
...
@@ -870,7 +768,7 @@ resource_counter_lock = threading.Lock()
resource_counter = 0

class ResourceManager(object):
    """Transaction participation for an undo resource."""
    """Transaction participation for a version or undo resource."""

    # XXX This implementation is broken. Subclasses invalidate oids
    # in their commit calls. Invalidations should not be sent until
...
...
@@ -913,6 +811,39 @@ class ResourceManager(object):
    def commit(self, obj, txn):
        raise NotImplementedError

class CommitVersion(ResourceManager):

    def __init__(self, db, version, dest=''):
        super(CommitVersion, self).__init__(db)
        self._version = version
        self._dest = dest

    def commit(self, ob, t):
        # XXX see XXX in ResourceManager
        dest = self._dest
        tid, oids = self._db._storage.commitVersion(self._version,
                                                    self._dest, t)
        oids = dict.fromkeys(oids, 1)
        self._db.invalidate(tid, oids, version=self._dest)
        if self._dest:
            # the code above just invalidated the dest version.
            # now we need to invalidate the source!
            self._db.invalidate(tid, oids, version=self._version)

class AbortVersion(ResourceManager):

    def __init__(self, db, version):
        super(AbortVersion, self).__init__(db)
        self._version = version

    def commit(self, ob, t):
        # XXX see XXX in ResourceManager
        tid, oids = self._db._storage.abortVersion(self._version, t)
        self._db.invalidate(tid, dict.fromkeys(oids, 1),
                            version=self._version)

class TransactionalUndo(ResourceManager):

    def __init__(self, db, tid):
...
...
src/ZODB/ExportImport.py
...
...
@@ -47,7 +47,7 @@ class ExportImport:
                continue
            done_oids[oid] = True
            try:
                p, serial = load(oid, '')
                p, serial = load(oid, self._version)
            except:
                logger.debug("broken reference for oid %s", repr(oid),
                             exc_info=True)
...
...
@@ -127,6 +127,8 @@ class ExportImport:
            return Ghost(oid)

        version = self._version
        while 1:
            header = f.read(16)
            if header == export_end_marker:
...
...
@@ -178,9 +180,9 @@ class ExportImport:
            if blob_filename is not None:
                self._storage.storeBlob(oid, None, data, blob_filename,
                                        '', transaction)
                                        version, transaction)
            else:
                self._storage.store(oid, None, data, '', transaction)
                self._storage.store(oid, None, data, version, transaction)

export_end_marker = '\377'*16
...
...
src/ZODB/POSException.py
...
...
@@ -17,10 +17,6 @@ $Id$"""
from ZODB.utils import oid_repr, readable_tid_repr

# BBB: We moved the two transactions to the transaction package
from transaction.interfaces import TransactionError, TransactionFailedError

def _fmt_undo(oid, reason):
    s = reason and (": %s" % reason) or ""
    return "Undo error %s%s" % (oid_repr(oid), s)
...
...
@@ -48,6 +44,18 @@ class POSKeyError(KeyError, POSError):
    def __str__(self):
        return oid_repr(self.args[0])

class TransactionError(POSError):
    """An error occurred due to normal transaction processing."""

class TransactionFailedError(POSError):
    """Cannot perform an operation on a transaction that previously failed.
An attempt was made to commit a transaction, or to join a transaction,
but this transaction previously raised an exception during an attempt
to commit it. The transaction must be explicitly aborted, either by
invoking abort() on the transaction, or begin() on its transaction
manager.
"""
class ConflictError(TransactionError):
    """Two transactions tried to modify the same object at once.
...
...
@@ -98,12 +106,11 @@ class ConflictError(TransactionError):
            # avoid circular import chain
            from ZODB.utils import get_pickle_metadata
            self.class_name = "%s.%s" % get_pickle_metadata(data)
##        else:
##            if message != "data read conflict error":
##                raise RuntimeError

        self.serials = serials
        self.data = data

    def __str__(self):
        extras = []
        if self.oid:
...
...
@@ -234,10 +241,6 @@ class DanglingReferenceError(TransactionError):
        return "from %s to %s" % (oid_repr(self.referer),
                                  oid_repr(self.missing))

############################################################################
# Only used in storages; versions are no longer supported.

class VersionError(POSError):
    """An error in handling versions occurred."""
...
...
@@ -250,7 +253,6 @@ class VersionLockError(VersionError, TransactionError):
    An attempt was made to modify an object that has been modified in an
    unsaved version.
    """
############################################################################

class UndoError(POSError):
    """An attempt was made to undo a non-undoable transaction."""
...
...
@@ -297,9 +299,6 @@ class ExportError(POSError):
class Unsupported(POSError):
    """A feature was used that is not supported by the storage."""

class ReadOnlyHistoryError(POSError):
    """Unable to add or modify objects in an historical connection."""

class InvalidObjectReference(POSError):
    """An object contains an invalid reference to another object.
...
...
src/ZODB/blob.py
...
...
@@ -490,6 +490,7 @@ class BlobStorage(SpecificationDecoratorBase):
        base_dir = self.fshelper.base_dir
        for oid, oid_path in self.fshelper.listOIDs():
            files = os.listdir(oid_path)
            for filename in files:
                filepath = os.path.join(oid_path, filename)
                whatever, serial = self.fshelper.splitBlobFilename(filepath)
...
...
src/ZODB/component.xml
...
...
@@ -180,22 +180,16 @@
    and exceeding twice pool-size connections causes a critical
    message to be logged.
</description>
<key name="historical-pool-size" datatype="integer" default="3"/>
<key name="version-pool-size" datatype="integer" default="3"/>
<description>
    The expected maximum total number of historical connections
    simultaneously open.
    The expected maximum number of connections simultaneously open
    per version.
</description>
<key name="historical-cache-size" datatype="integer" default="1000"/>
<key name="version-cache-size" datatype="integer" default="100"/>
<description>
    Target size, in number of objects, of each historical connection's
    Target size, in number of objects, of each version connection's
    object cache.
</description>
<key name="historical-timeout" datatype="time-interval" default="5m"/>
<description>
    The minimum interval that an unused historical connection should be
    kept.
</description>
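Taken together, the historical keys above might appear in a `<zodb>` section like the following. This is an illustrative fragment; the values shown are just the defaults restated:

```
<zodb>
  <mappingstorage/>
  historical-pool-size 3
  historical-cache-size 1000
  historical-timeout 5m
</zodb>
```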
<key name="database-name" default="unnamed"/>
<description>
When multidatabases are in use, this is the name given to this
...
...
src/ZODB/config.py
...
...
@@ -68,6 +68,7 @@ def storageFromURL(url):
def storageFromConfig(section):
    return section.open()

class BaseConfig:
    """Object representing a configured storage or database.
...
...
@@ -98,9 +99,8 @@ class ZODBDatabase(BaseConfig):
        return ZODB.DB(
            storage,
            pool_size=section.pool_size,
            cache_size=section.cache_size,
            historical_pool_size=section.historical_pool_size,
            historical_cache_size=section.historical_cache_size,
            historical_timeout=section.historical_timeout,
            version_pool_size=section.version_pool_size,
            version_cache_size=section.version_cache_size,
            database_name=section.database_name,
            databases=databases)
    except:
...
...
src/ZODB/historical_connections.txt
deleted 100644 → 0
======================
Historical Connections
======================
Usage
=====
A database can be opened with a read-only, historical connection when given
a specific transaction or datetime. This can enable full-context application
level conflict resolution, historical exploration and preparation for reverts,
or even the use of a historical database revision as "production" while
development continues on a "development" head.
A database can be opened historically ``at`` or ``before`` a given transaction
serial or datetime. Here's a simple example. It should work with any storage
that supports ``loadBefore``. Unfortunately that does not include
MappingStorage, so we use a FileStorage instance. Also unfortunately, as of
this writing there is no reliable way to determine if a storage truly
implements loadBefore, or if it simply returns None (as in BaseStorage), other
than reading code.
We'll begin our example with a fairly standard set up. We
- make a storage and a database;
- open a normal connection;
- modify the database through the connection;
- commit a transaction, remembering the time in UTC;
- modify the database again; and
- commit a transaction.
>>> import ZODB.FileStorage
>>> storage = ZODB.FileStorage.FileStorage(
... 'HistoricalConnectionTests.fs', create=True)
>>> import ZODB
>>> db = ZODB.DB(storage)
>>> conn = db.open()
>>> import persistent.mapping
>>> conn.root()['first'] = persistent.mapping.PersistentMapping(count=0)
>>> import transaction
>>> transaction.commit()
>>> import datetime
>>> now = datetime.datetime.utcnow()
>>> root = conn.root()
>>> root['second'] = persistent.mapping.PersistentMapping()
>>> root['first']['count'] += 1
>>> transaction.commit()
Now we will show a historical connection. We'll open one using the ``now``
value we generated above, and then demonstrate that the state of the original
connection, at the mutable head of the database, is different than the
historical state.
>>> transaction1 = transaction.TransactionManager()
>>> historical_conn = db.open(transaction_manager=transaction1, at=now)
>>> sorted(conn.root().keys())
['first', 'second']
>>> conn.root()['first']['count']
1
>>> historical_conn.root().keys()
['first']
>>> historical_conn.root()['first']['count']
0
Moreover, the historical connection cannot commit changes.
>>> historical_conn.root()['first']['count'] += 1
>>> historical_conn.root()['first']['count']
1
>>> transaction1.commit()
Traceback (most recent call last):
...
ReadOnlyHistoryError
>>> transaction1.abort()
>>> historical_conn.root()['first']['count']
0
(It is because of the mutable behavior outside of transactional semantics that
we must have a separate connection, and associated object cache, per thread,
even though the semantics should be readonly.)
As demonstrated, a timezone-naive datetime will be interpreted as UTC. You
can also pass a timezone-aware datetime or a serial (transaction id).
Here's opening with a serial--the serial of the root at the time of the first
commit.
>>> historical_serial = historical_conn.root()._p_serial
>>> historical_conn.close()
>>> historical_conn = db.open(transaction_manager=transaction1,
... at=historical_serial)
>>> historical_conn.root().keys()
['first']
>>> historical_conn.root()['first']['count']
0
>>> historical_conn.close()
We've shown the ``at`` argument. You can also ask to look ``before`` a datetime
or serial. (It's an error to pass both [#not_both]_) In this example, we're
looking at the database immediately prior to the most recent change to the
root.
>>> serial = conn.root()._p_serial
>>> historical_conn = db.open(
... transaction_manager=transaction1, before=serial)
>>> historical_conn.root().keys()
['first']
>>> historical_conn.root()['first']['count']
0
In fact, ``at`` arguments are translated into ``before`` values because the
underlying mechanism is a storage's loadBefore method. When you look at a
connection's ``before`` attribute, it is normalized into a ``before`` serial,
no matter what you pass into ``db.open``.
>>> print conn.before
None
>>> historical_conn.before == serial
True
>>> conn.close()
Configuration
=============
Like normal connections, the database lets you set how many total historical
connections can be active without generating a warning, and
how many objects should be kept in each historical connection's object cache.
>>> db.getHistoricalPoolSize()
3
>>> db.setHistoricalPoolSize(4)
>>> db.getHistoricalPoolSize()
4
>>> db.getHistoricalCacheSize()
1000
>>> db.setHistoricalCacheSize(2000)
>>> db.getHistoricalCacheSize()
2000
In addition, you can specify the minimum number of seconds that an unused
historical connection should be kept.
>>> db.getHistoricalTimeout()
300
>>> db.setHistoricalTimeout(400)
>>> db.getHistoricalTimeout()
400
All three of these values can be specified in a ZConfig file. We're using
mapping storage for simplicity, but remember, as we said at the start of this
document, mapping storage will not work for historical connections (and in fact
may seem to work but then fail confusingly) because it does not implement
loadBefore.
>>> import ZODB.config
>>> db2 = ZODB.config.databaseFromString('''
... <zodb>
... <mappingstorage/>
... historical-pool-size 5
... historical-cache-size 1500
... historical-timeout 6m
... </zodb>
... ''')
>>> db2.getHistoricalPoolSize()
5
>>> db2.getHistoricalCacheSize()
1500
>>> db2.getHistoricalTimeout()
360
Let's look at these values at work by shining some light into what
has been a black box up to now. We'll do some white-box examination
of what is going on in the database, pools, and connections.
Historical connections are held in a single connection pool with mappings
from the ``before`` TID to available connections. First we'll put a new
pool on the database so we have a clean slate.
>>> historical_conn.close()
>>> from ZODB.DB import KeyedConnectionPool
>>> db.historical_pool = KeyedConnectionPool(
... db.historical_pool.size, db.historical_pool.timeout)
Now let's look at what happens to the pool when we create and close an
historical connection.
>>> pool = db.historical_pool
>>> len(pool.all)
0
>>> len(pool.available)
0
>>> historical_conn = db.open(
... transaction_manager=transaction1, before=serial)
>>> len(pool.all)
1
>>> len(pool.available)
0
>>> historical_conn in pool.all
True
>>> historical_conn.close()
>>> len(pool.all)
1
>>> len(pool.available)
1
>>> pool.available.keys()[0] == serial
True
>>> len(pool.available.values()[0])
1
Now we'll open and close two for the same serial to see what happens to the
data structures.
>>> historical_conn is db.open(
... transaction_manager=transaction1, before=serial)
True
>>> len(pool.all)
1
>>> len(pool.available)
0
>>> transaction2 = transaction.TransactionManager()
>>> historical_conn2 = db.open(
... transaction_manager=transaction2, before=serial)
>>> len(pool.all)
2
>>> len(pool.available)
0
>>> historical_conn2.close()
>>> len(pool.all)
2
>>> len(pool.available)
1
>>> len(pool.available.values()[0])
1
>>> historical_conn.close()
>>> len(pool.all)
2
>>> len(pool.available)
1
>>> len(pool.available.values()[0])
2
If you change the historical cache size, that changes the size of the
persistent cache on our connection.
>>> historical_conn._cache.cache_size
2000
>>> db.setHistoricalCacheSize(1500)
>>> historical_conn._cache.cache_size
1500
Now let's look at pool sizes. We'll set it to two, then open and close three
connections. We should end up with only two available connections.
>>> db.setHistoricalPoolSize(2)
>>> historical_conn = db.open(
... transaction_manager=transaction1, before=serial)
>>> historical_conn2 = db.open(
... transaction_manager=transaction2, before=serial)
>>> transaction3 = transaction.TransactionManager()
>>> historical_conn3 = db.open(
... transaction_manager=transaction3, at=historical_serial)
>>> len(pool.all)
3
>>> len(pool.available)
0
>>> historical_conn3.close()
>>> len(pool.all)
3
>>> len(pool.available)
1
>>> len(pool.available.values()[0])
1
>>> historical_conn2.close()
>>> len(pool.all)
3
>>> len(pool.available)
2
>>> len(pool.available.values()[0])
1
>>> len(pool.available.values()[1])
1
>>> historical_conn.close()
>>> len(pool.all)
2
>>> len(pool.available)
1
>>> len(pool.available.values()[0])
2
Notice it dumped the one that was closed at the earliest time.
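The behavior shown above — one stack of available connections per ``before`` key, with the earliest-closed connection evicted first once the pool is over size — can be sketched with a plain dictionary. This is a simplified, hypothetical model of ``KeyedConnectionPool``, not the real class:

```python
class ToyKeyedPool:
    """Toy model: idle connections are grouped by their `before` key;
    when over the size limit, the earliest-closed one is dropped."""

    def __init__(self, size):
        self.size = size
        self.available = {}   # key -> stack of idle connections
        self.all = set()      # connections the pool still knows about
        self._order = []      # idle connections, earliest-closed first

    def pop(self, key):
        stack = self.available.get(key)
        if stack:
            conn = stack.pop()
            if not stack:
                del self.available[key]
            self._order.remove(conn)
            return conn
        return None

    def repush(self, conn, key):
        self.all.add(conn)
        self.available.setdefault(key, []).append(conn)
        self._order.append(conn)
        # Over the limit: evict the connection closed earliest.
        while len(self._order) > self.size:
            victim = self._order.pop(0)
            for k, stack in list(self.available.items()):
                if victim in stack:
                    stack.remove(victim)
                    if not stack:
                        del self.available[k]
            self.all.discard(victim)

pool = ToyKeyedPool(2)
pool.repush('conn1', b'tid-a')
pool.repush('conn2', b'tid-a')
pool.repush('conn3', b'tid-b')   # evicts conn1, the earliest closed
assert sorted(pool.all) == ['conn2', 'conn3']
assert pool.pop(b'tid-b') == 'conn3'
```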
Finally, we'll look at the timeout. We'll need to monkeypatch ``time`` for
this. (The funky __import__ of DB is because some ZODB __init__ shenanigans
make the DB class mask the DB module.)
>>> db.getHistoricalTimeout()
400
>>> import time
>>> delta = 200
>>> def stub_time():
... return time.time() + delta
...
>>> DB_module = __import__('ZODB.DB', globals(), locals(), ['chicken'])
>>> original_time = DB_module.time
>>> DB_module.time = stub_time
>>> historical_conn = db.open(before=serial)
>>> len(pool.all)
2
>>> len(pool.available)
1
A close or an open will do garbage collection on the timed out connections.
>>> delta += 200
>>> historical_conn.close()
>>> len(pool.all)
1
>>> len(pool.available)
1
>>> len(pool.available.values()[0])
1
Invalidations
=============
Invalidations are ignored for historical connections. This is another white box
test.
>>> historical_conn = db.open(
... transaction_manager=transaction1, at=serial)
>>> conn = db.open()
>>> sorted(conn.root().keys())
['first', 'second']
>>> conn.root()['first']['count']
1
>>> sorted(historical_conn.root().keys())
['first', 'second']
>>> historical_conn.root()['first']['count']
1
>>> conn.root()['first']['count'] += 1
>>> conn.root()['third'] = persistent.mapping.PersistentMapping()
>>> transaction.commit()
>>> len(historical_conn._invalidated)
0
>>> historical_conn.close()
Note that if you try to open an historical connection to a time in the future,
you will get an error.
>>> historical_conn = db.open(at=datetime.datetime.utcnow())
Traceback (most recent call last):
...
ValueError: cannot open an historical connection in the future.
Warnings
========
First, if you use datetimes to get a historical connection, be aware that the
conversion from datetime to transaction id has some pitfalls. Generally, the
transaction ids in the database are only as time-accurate as the system clock
was when the transaction id was created. Moreover, leap seconds are handled
somewhat naively in the ZODB (largely because they are handled naively in Unix/
POSIX time) so any minute that contains a leap second may contain serials that
are a bit off. This is not generally a problem for the ZODB, because serials
are guaranteed to increase, but it does highlight the fact that serials are not
guaranteed to be accurately connected to time. Generally, they are about as
reliable as time.time.
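For intuition, a transaction id is essentially a packed timestamp: eight bytes whose byte-wise ordering follows chronological order. The sketch below packs a UTC datetime into eight comparable bytes in the spirit of ZODB's TimeStamp layout; the exact field packing here is an illustration and may differ from the real ``persistent.TimeStamp``:

```python
import datetime
import struct

def toy_tid(dt):
    """Pack a UTC datetime into 8 bytes whose byte-wise ordering
    matches chronological ordering (illustrative layout only)."""
    # Minutes since 1900, packed into the high 4 bytes.
    minutes = ((((dt.year - 1900) * 12 + dt.month - 1) * 31
                + dt.day - 1) * 24 + dt.hour) * 60 + dt.minute
    # Fraction of the current minute, scaled to 32 bits.
    frac = int((dt.second + dt.microsecond / 1e6) / 60.0 * (2 ** 32))
    return struct.pack('>II', minutes, min(frac, 2 ** 32 - 1))

t1 = toy_tid(datetime.datetime(2007, 12, 17, 10, 30, 0))
t2 = toy_tid(datetime.datetime(2007, 12, 17, 10, 30, 1))
assert len(t1) == 8 and t1 < t2  # later times compare greater
```

Because the precision of the low word is bounded by the system clock, two transactions committed within the same clock tick rely on the storage to force strictly increasing tids.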
Second, historical connections currently introduce potentially wide variance in
memory requirements for the applications. Since you can open up many
connections to different serials, and each gets their own pool, you may collect
quite a few connections. For now, at least, if you use this feature you need to
be particularly careful of your memory usage. Get rid of pools when you know
you can, and reuse the exact same values for ``at`` or ``before`` when
possible. If historical connections are used for conflict resolution, these
connections will probably be temporary--not saved in a pool--so that the extra
memory usage would also be brief and unlikely to overlap.
.. ......... ..
.. Footnotes ..
.. ......... ..
.. [#not_both] It is an error to try and pass both `at` and `before`.
>>> historical_conn = db.open(
... transaction_manager=transaction1, at=now, before=historical_serial)
Traceback (most recent call last):
...
ValueError: can only pass zero or one of `at` and `before`
\ No newline at end of file
src/ZODB/interfaces.py
...
...
@@ -34,10 +34,9 @@ class IConnection(Interface):
loading objects from that Connection. Objects loaded by one
thread should not be used by another thread.
A Connection can be frozen to a serial--a transaction id, a single point in
history-- when it is created. By default, a Connection is not associated
with a serial; it uses current data. A Connection frozen to a serial is
read-only.
A Connection can be associated with a single version when it is
created. By default, a Connection is not associated with a
version; it uses non-version data.
Each Connection provides an isolated, consistent view of the
database, by managing independent copies of objects in the
...
...
@@ -102,7 +101,8 @@ class IConnection(Interface):
User Methods:
root, get, add, close, db, sync, isReadOnly, cacheGC,
cacheFullSweep, cacheMinimize
cacheFullSweep, cacheMinimize, getVersion,
modifiedInVersion
Experimental Methods:
onCloseCallbacks
...
...
@@ -226,6 +226,9 @@ class IConnection(Interface):
The root is a persistent.mapping.PersistentMapping.
"""
    def getVersion():
        """Returns the version this connection is attached to."""

    # Multi-database support.

    connections = Attribute(
...
...
@@ -322,7 +325,7 @@ class IStorageDB(Interface):
there would be so many that it would be inefficient to do so.
"""
    def invalidate(transaction_id, oids):
    def invalidate(transaction_id, oids, version=''):
        """Invalidate object ids committed by the given transaction
The oids argument is an iterable of object identifiers.
...
...
@@ -353,15 +356,13 @@ class IDatabase(IStorageDB):
entry.
"""
)
    def open(transaction_manager=None, serial=''):
    def open(version='', transaction_manager=None):
        """Return an IConnection object for use by application code.
version: the "version" that all changes will be made
in, defaults to no version.
transaction_manager: transaction manager to use. None means
use the default transaction manager.
serial: the serial (transaction id) of the database to open.
An empty string (the default) means to open it to the newest
serial. Specifying a serial results in a read-only historical
connection.
Note that the connection pool is managed as a stack, to
increase the likelihood that the connection's stack will
...
...
@@ -440,7 +441,7 @@ class IStorage(Interface):
This is used solely for informational purposes.
"""
    def history(oid, size=1):
    def history(oid, version, size=1):
        """Return a sequence of history information dictionaries.
Up to size objects (including no objects) may be returned.
...
...
@@ -456,6 +457,10 @@ class IStorage(Interface):
tid
The transaction identifier of the transaction that
committed the version.
version
The version that the revision is in. If the storage
doesn't support versions, then this must be an empty
string.
user_name
The user identifier, if any (or an empty string) of the
user on whose behalf the revision was committed.
...
...
@@ -486,14 +491,18 @@ class IStorage(Interface):
This is used solely for informational purposes.
"""
-    def load(oid):
-        """Load data for an object id
+    def load(oid, version):
+        """Load data for an object id and version

        A data record and serial are returned.  The serial is a
        transaction identifier of the transaction that wrote the data
        record.

-       A POSKeyError is raised if there is no record for the object id.
+       A POSKeyError is raised if there is no record for the object
+       id and version.
+
+       Storages that don't support versions must ignore the version
+       argument.
        """
    def loadBefore(oid, tid):
...
...
@@ -566,7 +575,7 @@ class IStorage(Interface):
has a reasonable chance of being unique.
"""
-    def store(oid, serial, data, transaction):
+    def store(oid, serial, data, version, transaction):
"""Store data for the object id, oid.
Arguments:
...
...
@@ -585,6 +594,11 @@ class IStorage(Interface):
data
The data record. This is opaque to the storage.
+       version
+           The version to store the data in.  If the storage doesn't
+           support versions, this should be an empty string and the
+           storage is allowed to ignore it.
transaction
A transaction object. This should match the current
transaction for the storage, set by tpc_begin.
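A toy in-memory sketch (hypothetical; not ZODB's real MappingStorage) can illustrate the `store()` contract described above: the caller hands back the serial it last saw, and the storage rejects the write if a newer revision has been committed since:

```python
class ToyConflictError(Exception):
    """Raised when the caller's serial is stale."""

class ToyStorage:
    """Tracks one revision per oid; a counter stands in for transaction ids."""
    def __init__(self):
        self._data = {}   # oid -> (data, serial)
        self._tid = 0     # monotonically increasing stand-in for a tid

    def load(self, oid, version=''):
        # Storages that don't support versions ignore the version argument.
        return self._data[oid]   # KeyError plays the role of POSKeyError

    def store(self, oid, serial, data, version, transaction):
        previous = self._data.get(oid)
        if previous is not None and previous[1] != serial:
            raise ToyConflictError(oid)   # caller saw a stale revision
        self._tid += 1
        self._data[oid] = (data, self._tid)
        return self._tid

storage = ToyStorage()
txn = object()                                     # placeholder transaction
s1 = storage.store('obj1', None, b'rev1', '', txn)
s2 = storage.store('obj1', s1, b'rev2', '', txn)   # serial matches: accepted
conflicted = False
try:
    storage.store('obj1', s1, b'rev3', '', txn)    # stale serial: rejected
except ToyConflictError:
    conflicted = True
```

The serial comparison is what turns concurrent writers into explicit conflicts instead of silent lost updates.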
...
...
@@ -693,7 +707,7 @@ class IStorageRestoreable(IStorage):
# failed to take into account records after the pack time.
-    def restore(oid, serial, data, prev_txn, transaction):
+    def restore(oid, serial, data, version, prev_txn, transaction):
"""Write data already committed in a separate database
The restore method is used when copying data from one database
...
...
@@ -713,6 +727,9 @@ class IStorageRestoreable(IStorage):
The record data. This will be None if the transaction
undid the creation of the object.
+       version
+           The version identifier for the record.
prev_txn
The identifier of a previous transaction that held the
object data. The target storage can sometimes use this
...
...
@@ -729,6 +746,7 @@ class IStorageRecordInformation(Interface):
"""
    oid = Attribute("The object id")
+   version = Attribute("The version")
    data = Attribute("The data record")


class IStorageTransactionInformation(Interface):
...
...
@@ -918,7 +936,7 @@ class IBlob(Interface):
class IBlobStorage(Interface):
    """A storage supporting BLOBs."""

-    def storeBlob(oid, oldserial, data, blob, transaction):
+    def storeBlob(oid, oldserial, data, blob, version, transaction):
        """Stores data that has a BLOB attached."""

    def loadBlob(oid, serial):
...
...
src/ZODB/scripts/fstail.py
...
...
@@ -33,8 +33,8 @@ def main(path, ntxn):
        th.read_meta()
        print "%s: hash=%s" % (th.get_timestamp(), binascii.hexlify(hash))
-        print ("user=%r description=%r length=%d offset=%d"
-               % (th.user, th.descr, th.length, th.get_data_offset()))
+        print ("user=%r description=%r length=%d"
+               % (th.user, th.descr, th.length))
        print
        th = th.prev_txn()
        i -= 1
...
...
src/ZODB/scripts/fstail.txt
deleted
100644 → 0
====================
The `fstail` utility
====================
The `fstail` utility shows information for a FileStorage about the last `n`
transactions:
We have to prepare a FileStorage first:
>>> from ZODB.FileStorage import FileStorage
>>> from ZODB.DB import DB
>>> import transaction
>>> from tempfile import mktemp
>>> storagefile = mktemp()
>>> base_storage = FileStorage(storagefile)
>>> database = DB(base_storage)
>>> connection1 = database.open()
>>> root = connection1.root()
>>> root['foo'] = 1
>>> transaction.commit()
Now let's have a look at the last transactions of this FileStorage:
>>> from ZODB.scripts.fstail import main
>>> main(storagefile, 5)
2007-11-10 15:18:48.543001: hash=b16422d09fabdb45d4e4325e4b42d7d6f021d3c3
user='' description='' length=138 offset=191
<BLANKLINE>
2007-11-10 15:18:48.543001: hash=b16422d09fabdb45d4e4325e4b42d7d6f021d3c3
user='' description='initial database creation' length=156 offset=52
<BLANKLINE>
Now clean up the storage again:
>>> import os
>>> base_storage.close()
>>> os.unlink(storagefile)
>>> os.unlink(storagefile+'.index')
>>> os.unlink(storagefile+'.lock')
>>> os.unlink(storagefile+'.tmp')
src/ZODB/scripts/tests.py
...
...
@@ -11,21 +11,15 @@
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
-"""Test harness for scripts.
-"""
+"""XXX short summary goes here.
+
+$Id$
+"""

 import unittest
-import re
-from zope.testing import doctest, renormalizing
-
-checker = renormalizing.RENormalizing([
-    (re.compile(
-        '[0-9]{4}-[0-9]{2}-[0-9]{2} [0-9]{2}:[0-9]{2}:[0-9]{2}\.[0-9]+'),
-     '2007-11-10 15:18:48.543001'),
-    (re.compile('hash=[0-9a-f]{40}'),
-     'hash=b16422d09fabdb45d4e4325e4b42d7d6f021d3c3')])
+from zope.testing import doctest

 def test_suite():
     return unittest.TestSuite((
-        doctest.DocFileSuite('referrers.txt', 'fstail.txt', checker=checker),
+        doctest.DocFileSuite('referrers.txt'),
        ))
src/ZODB/serialize.py
...
...
@@ -371,7 +371,7 @@ class ObjectWriter:
            return oid

        # Note that we never get here for persistent classes.
-        # We'll use direct refs for normal classes.
+        # We'll use driect refs for normal classes.

        if database_name:
            return ['m', (database_name, oid, klass)]
...
...
src/ZODB/tests/VersionStorage.py
...
...
@@ -394,6 +394,153 @@ class VersionStorage:
         self._storage.tpc_finish(t)
         self.assertEqual(oids, [oid])

+    def checkPackVersions(self):
+        db = DB(self._storage)
+        cn = db.open(version="testversion")
+        root = cn.root()
+        obj = root["obj"] = MinPO("obj")
+        root["obj2"] = MinPO("obj2")
+        txn = transaction.get()
+        txn.note("create 2 objs in version")
+        txn.commit()
+
+        obj.value = "77"
+        txn = transaction.get()
+        txn.note("modify obj in version")
+        txn.commit()
+
+        # undo the modification to generate a mix of backpointers
+        # and versions for pack to chase
+        info = db.undoInfo()
+        db.undo(info[0]["id"])
+        txn = transaction.get()
+        txn.note("undo modification")
+        txn.commit()
+
+        snooze()
+        self._storage.pack(time.time(), referencesf)
+
+        db.commitVersion("testversion")
+        txn = transaction.get()
+        txn.note("commit version")
+        txn.commit()
+
+        cn = db.open()
+        root = cn.root()
+        root["obj"] = "no version"
+        txn = transaction.get()
+        txn.note("modify obj")
+        txn.commit()
+
+        self._storage.pack(time.time(), referencesf)
+
+    def checkPackVersionsInPast(self):
+        db = DB(self._storage)
+        cn = db.open(version="testversion")
+        root = cn.root()
+        obj = root["obj"] = MinPO("obj")
+        root["obj2"] = MinPO("obj2")
+        txn = transaction.get()
+        txn.note("create 2 objs in version")
+        txn.commit()
+
+        obj.value = "77"
+        txn = transaction.get()
+        txn.note("modify obj in version")
+        txn.commit()
+
+        t0 = time.time()
+        snooze()
+
+        # undo the modification to generate a mix of backpointers
+        # and versions for pack to chase
+        info = db.undoInfo()
+        db.undo(info[0]["id"])
+        txn = transaction.get()
+        txn.note("undo modification")
+        txn.commit()
+
+        self._storage.pack(t0, referencesf)
+
+        db.commitVersion("testversion")
+        txn = transaction.get()
+        txn.note("commit version")
+        txn.commit()
+
+        cn = db.open()
+        root = cn.root()
+        root["obj"] = "no version"
+        txn = transaction.get()
+        txn.note("modify obj")
+        txn.commit()
+
+        self._storage.pack(time.time(), referencesf)
+
+    def checkPackVersionReachable(self):
+        db = DB(self._storage)
+        cn = db.open()
+        root = cn.root()
+
+        names = "a", "b", "c"
+        for name in names:
+            root[name] = MinPO(name)
+            transaction.commit()
+
+        for name in names:
+            cn2 = db.open(version=name)
+            rt2 = cn2.root()
+            obj = rt2[name]
+            obj.value = MinPO("version")
+            transaction.commit()
+            cn2.close()
+
+        root["d"] = MinPO("d")
+        transaction.commit()
+        snooze()
+
+        self._storage.pack(time.time(), referencesf)
+        cn.sync()
+
+        # make sure all the non-version data is there
+        for name, obj in root.items():
+            self.assertEqual(name, obj.value)
+
+        # make sure all the version-data is there,
+        # and create a new revision in the version
+        for name in names:
+            cn2 = db.open(version=name)
+            rt2 = cn2.root()
+            obj = rt2[name].value
+            self.assertEqual(obj.value, "version")
+            obj.value = "still version"
+            transaction.commit()
+            cn2.close()
+
+        db.abortVersion("b")
+        txn = transaction.get()
+        txn.note("abort version b")
+        txn.commit()
+
+        t = time.time()
+        snooze()
+
+        L = db.undoInfo()
+        db.undo(L[0]["id"])
+        txn = transaction.get()
+        txn.note("undo abort")
+        txn.commit()
+
+        self._storage.pack(t, referencesf)
+
+        cn2 = db.open(version="b")
+        rt2 = cn2.root()
+        self.assertEqual(rt2["b"].value.value, "still version")
+
     def checkLoadBeforeVersion(self):
         eq = self.assertEqual
         oid = self._storage.new_oid()
...
...
src/ZODB/tests/blob_basic.txt
...
...
@@ -172,4 +172,3 @@ Blobs are not subclassable::
Traceback (most recent call last):
...
TypeError: Blobs do not support subclassing.
src/ZODB/tests/blob_connection.txt
...
...
@@ -15,8 +15,7 @@
Connection support for Blobs tests
==================================
-Connections handle Blobs specially. To demonstrate that, we first need a Blob
-with some data:
+Connections handle Blobs specially. To demonstrate that, we first need a Blob with some data:
>>> from ZODB.interfaces import IBlob
>>> from ZODB.blob import Blob
...
...
@@ -26,16 +25,13 @@ with some data:
>>> data.write("I'm a happy Blob.")
>>> data.close()
-We also need a database with a blob supporting storage. (We're going to use
-FileStorage rather than MappingStorage here because we will want ``loadBefore``
-for one of our examples.)
+We also need a database with a blob supporting storage:

-    >>> import ZODB.FileStorage
+    >>> from ZODB.MappingStorage import MappingStorage
    >>> from ZODB.blob import BlobStorage
    >>> from ZODB.DB import DB
    >>> from tempfile import mkdtemp
-    >>> base_storage = ZODB.FileStorage.FileStorage(
-    ...     'BlobTests.fs', create=True)
+    >>> base_storage = MappingStorage("test")
    >>> blob_dir = mkdtemp()
    >>> blob_storage = BlobStorage(blob_dir, base_storage)
    >>> database = DB(blob_storage)
...
...
@@ -55,55 +51,31 @@ calling the blob's open method:
>>> root['anotherblob'] = anotherblob
>>> nothing = transaction.commit()
-Getting stuff out of there works similarly:
+Getting stuff out of there works similarly:

-    >>> transaction2 = transaction.TransactionManager()
-    >>> connection2 = database.open(transaction_manager=transaction2)
+    >>> connection2 = database.open()
    >>> root = connection2.root()
    >>> blob2 = root['myblob']
    >>> IBlob.providedBy(blob2)
    True
    >>> blob2.open("r").read()
    "I'm a happy Blob."
-    >>> transaction2.abort()
-
-MVCC also works.
-
-    >>> transaction3 = transaction.TransactionManager()
-    >>> connection3 = database.open(transaction_manager=transaction3)
-    >>> f = connection.root()['myblob'].open('w')
-    >>> f.write('I am an ecstatic Blob.')
-    >>> f.close()
-    >>> transaction.commit()
-    >>> connection3.root()['myblob'].open('r').read()
-    "I'm a happy Blob."
-
-    >>> transaction2.abort()
-    >>> transaction3.abort()
-    >>> connection2.close()
-    >>> connection3.close()
You can't put blobs into a database that uses a Non-Blob-Storage, though:

    >>> from ZODB.MappingStorage import MappingStorage
    >>> no_blob_storage = MappingStorage()
    >>> database2 = DB(no_blob_storage)
-    >>> connection2 = database2.open(transaction_manager=transaction2)
-    >>> root = connection2.root()
+    >>> connection3 = database2.open()
+    >>> root = connection3.root()
    >>> root['myblob'] = Blob()
-    >>> transaction2.commit() # doctest: +ELLIPSIS
+    >>> transaction.commit() # doctest: +ELLIPSIS
    Traceback (most recent call last):
        ...
    Unsupported: Storing Blobs in <ZODB.MappingStorage.MappingStorage instance at ...> is not supported.
-    >>> transaction2.abort()
-    >>> connection2.close()

-After testing this, we don't need the storage directory and databases anymore:
+While we are testing this, we don't need the storage directory and
+databases anymore:

    >>> transaction.abort()
    >>> connection.close()
    >>> database.close()
    >>> database2.close()
-    >>> blob_storage.close()
-    >>> base_storage.cleanup()
src/ZODB/tests/dbopen.txt
...
...
@@ -146,7 +146,7 @@ Reaching into the internals, we can see that db's connection pool now has
two connections available for reuse, and knows about three connections in
all:
-    >>> pool = db.pool
+    >>> pool = db._pools['']
>>> len(pool.available)
2
>>> len(pool.all)
...
...
@@ -219,7 +219,7 @@ closed one out of the available connection stack.
>>> conns = [db.open() for dummy in range(6)]
>>> len(handler.records) # 3 warnings for the "excess" connections
3
-    >>> pool = db.pool
+    >>> pool = db._pools['']
>>> len(pool.available), len(pool.all)
(0, 6)
...
...
@@ -239,12 +239,12 @@ Closing connections adds them to the stack:
Closing another one will purge the one with MARKER 0 from the stack
(since it was the first added to the stack):
-    >>> [c.MARKER for c in pool.available.values()]
+    >>> [c.MARKER for c in pool.available]
[0, 1, 2]
>>> conns[0].close() # MARKER 3
>>> len(pool.available), len(pool.all)
(3, 5)
-    >>> [c.MARKER for c in pool.available.values()]
+    >>> [c.MARKER for c in pool.available]
[1, 2, 3]
Similarly for the other two:
...
...
@@ -252,7 +252,7 @@ Similarly for the other two:
>>> conns[1].close(); conns[2].close()
>>> len(pool.available), len(pool.all)
(3, 3)
-    >>> [c.MARKER for c in pool.available.values()]
+    >>> [c.MARKER for c in pool.available]
[3, 4, 5]
Reducing the pool size may also purge the oldest closed connections:
...
...
@@ -260,7 +260,7 @@ Reducing the pool size may also purge the oldest closed connections:
>>> db.setPoolSize(2) # gets rid of MARKER 3
>>> len(pool.available), len(pool.all)
(2, 2)
-    >>> [c.MARKER for c in pool.available.values()]
+    >>> [c.MARKER for c in pool.available]
[4, 5]
Since MARKER 5 is still the last one added to the stack, it will be the
...
...
@@ -297,7 +297,7 @@ Now open more connections so that the total exceeds pool_size (2):
>>> conn1 = db.open()
>>> conn2 = db.open()
-    >>> pool = db.pool
+    >>> pool = db._pools['']
>>> len(pool.all), len(pool.available) # all Connections are in use
(3, 0)
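The stack discipline exercised by the doctest above can be sketched in plain Python (a hypothetical minimal pool, not ZODB's actual connection-pool class): closed connections are pushed on a stack, the most recently closed one is reused first, and the oldest entries beyond `pool_size` are purged.

```python
class ToyPool:
    """Minimal connection pool managed as a stack of closed connections."""
    def __init__(self, pool_size):
        self.pool_size = pool_size
        self.available = []   # closed connections, oldest at index 0
        self.all = set()      # every connection this pool knows about

    def repush(self, conn):
        # Called when a connection is closed: push it onto the stack ...
        self.available.append(conn)
        # ... then purge the oldest closed connections beyond pool_size.
        while len(self.available) > self.pool_size:
            oldest = self.available.pop(0)
            self.all.discard(oldest)

    def pop(self):
        # Reuse the most recently closed connection (top of stack), if any.
        if self.available:
            return self.available.pop()
        conn = object()       # stand-in for a freshly created Connection
        self.all.add(conn)
        return conn

pool = ToyPool(3)
conns = [pool.pop() for _ in range(4)]
for c in conns:
    pool.repush(c)            # the first-closed connection gets purged
```

Reusing the top of the stack keeps the connection whose object cache is most likely still warm, which is the rationale the interface docs give for the stack discipline.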
...
...
src/ZODB/tests/testConnectionSavepoint.py
...
...
@@ -138,21 +138,6 @@ Verify all the values are as expected:
>>> db.close()
"""
-def testIsReadonly():
-    """\
-The connection isReadonly method relies on the _storage to have an isReadOnly.
-We simply rely on the underlying storage method.
-
-    >>> import ZODB.tests.util
-    >>> db = ZODB.tests.util.DB()
-    >>> connection = db.open()
-    >>> root = connection.root()
-    >>> root['a'] = 1
-    >>> sp = transaction.savepoint()
-    >>> connection.isReadOnly()
-    False
-    """
-
 def test_suite():
     return unittest.TestSuite((
         doctest.DocFileSuite('testConnectionSavepoint.txt'),
...
...
src/ZODB/tests/testDB.py
...
...
@@ -14,7 +14,7 @@
 import os
 import time
 import unittest
-import datetime
+import warnings

 import transaction
...
...
@@ -35,6 +35,8 @@ class DBTests(unittest.TestCase):
        self.__path = os.path.abspath('test.fs')
        store = ZODB.FileStorage.FileStorage(self.__path)
        self.db = ZODB.DB(store)
+        warnings.filterwarnings(
+            'ignore', message='Versions are deprecated', module=__name__)

    def tearDown(self):
        self.db.close()
...
...
@@ -42,8 +44,8 @@ class DBTests(unittest.TestCase):
        if os.path.exists(self.__path+s):
            os.remove(self.__path+s)

-    def dowork(self):
-        c = self.db.open()
+    def dowork(self, version=''):
+        c = self.db.open(version)
        r = c.root()
        o = r[time.time()] = MinPO(0)
        transaction.commit()
...
...
@@ -51,16 +53,85 @@ class DBTests(unittest.TestCase):
            o.value = MinPO(i)
            transaction.commit()
            o = o.value
-        serial = o._p_serial
-        root_serial = r._p_serial
        c.close()
-        return serial, root_serial

    # make sure the basic methods are callable

    def testSets(self):
        self.db.setCacheSize(15)
-        self.db.setHistoricalCacheSize(15)
+        self.db.setVersionCacheSize(15)

+    def test_removeVersionPool(self):
+        # Test that we can remove a version pool
+        # This is white box because we check some internal data structures
+        self.dowork()
+        self.dowork('v2')
+        c1 = self.db.open('v1')
+        c1.close() # return to pool
+        c12 = self.db.open('v1')
+        c12.close() # return to pool
+        self.assert_(c1 is c12) # should be same
+
+        pools = self.db._pools
+
+        self.assertEqual(len(pools), 3)
+        self.assertEqual(nconn(pools), 3)
+
+        self.db.removeVersionPool('v1')
+
+        self.assertEqual(len(pools), 2)
+        self.assertEqual(nconn(pools), 2)
+
+        c12 = self.db.open('v1')
+        c12.close() # return to pool
+        self.assert_(c1 is not c12) # should be different
+
+        self.assertEqual(len(pools), 3)
+        self.assertEqual(nconn(pools), 3)
+
+    def _test_for_leak(self):
+        self.dowork()
+        self.dowork('v2')
+        while 1:
+            c1 = self.db.open('v1')
+            self.db.removeVersionPool('v1')
+            c1.close() # return to pool
+
+    def test_removeVersionPool_while_connection_open(self):
+        # Test that we can remove a version pool
+        # This is white box because we check some internal data structures
+        self.dowork()
+        self.dowork('v2')
+        c1 = self.db.open('v1')
+        c1.close() # return to pool
+        c12 = self.db.open('v1')
+        self.assert_(c1 is c12) # should be same
+
+        pools = self.db._pools
+
+        self.assertEqual(len(pools), 3)
+        self.assertEqual(nconn(pools), 3)
+
+        self.db.removeVersionPool('v1')
+
+        self.assertEqual(len(pools), 2)
+        self.assertEqual(nconn(pools), 2)
+
+        c12.close() # should leave pools alone
+
+        self.assertEqual(len(pools), 2)
+        self.assertEqual(nconn(pools), 2)
+
+        c12 = self.db.open('v1')
+        c12.close() # return to pool
+        self.assert_(c1 is not c12) # should be different
+
+        self.assertEqual(len(pools), 3)
+        self.assertEqual(nconn(pools), 3)
+
    def test_references(self):
...
...
src/ZODB/tests/testZODB.py
...
...
@@ -136,6 +136,20 @@ class ZODBTests(unittest.TestCase):
    def checkExportImportAborted(self):
        self.checkExportImport(abort_it=True)

+    def checkVersionOnly(self):
+        # Make sure the changes to make empty transactions a no-op
+        # still allow things like abortVersion().  This should work
+        # because abortVersion() calls tpc_begin() itself.
+        conn = self._db.open("version")
+        try:
+            r = conn.root()
+            r[1] = 1
+            transaction.commit()
+        finally:
+            conn.close()
+        self._db.abortVersion("version")
+        transaction.commit()
+
    def checkResetCache(self):
# The cache size after a reset should be 0. Note that
# _resetCache is not a public API, but the resetCaches()
...
...
src/ZODB/tests/testhistoricalconnections.py → src/ZODB/tests/test_misc.py
##############################################################################
#
-# Copyright (c) 2007 Zope Corporation and Contributors.
+# Copyright (c) 2006 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
...
...
@@ -11,34 +11,27 @@
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
+"""Misc tests :)
+
+$Id$
+"""
 import unittest
-from zope.testing import doctest, module
+from zope.testing import doctest

-def setUp(test):
-    module.setUp(test, 'historical_connections_txt')
-
-def tearDown(test):
-    test.globs['db'].close()
-    test.globs['db2'].close()
-    test.globs['storage'].close()
-    test.globs['storage'].cleanup()
-    # the DB class masks the module because of __init__ shenanigans
-    DB_module = __import__('ZODB.DB', globals(), locals(), ['chicken'])
-    DB_module.time = test.globs['original_time']
-    module.tearDown(test)
+def conflict_error_retains_data_passed():
+    r"""
+
+    ConflictError can be passed a data record which it claims to retain as
+    an attribute.
+
+    >>> import ZODB.POSException
+    >>>
+    >>> ZODB.POSException.ConflictError(data='cM\nC\n').data
+    'cM\nC\n'
+
+    """

 def test_suite():
     return unittest.TestSuite((
-        doctest.DocFileSuite('../historical_connections.txt',
-                             setUp=setUp,
-                             tearDown=tearDown,
-                             optionflags=doctest.INTERPRET_FOOTNOTES,
-                             ),
         doctest.DocTestSuite(),
        ))

 if __name__ == '__main__':
     unittest.main(defaultTest='test_suite')
src/transaction/DEPENDENCIES.cfg
0 → 100644
ZODB
zope.interface
src/transaction/README.txt
0 → 100644
============
Transactions
============
This package contains a generic transaction implementation for Python. It is
mainly used by the ZODB, though.
Note that the data manager API, ``transaction.interfaces.IDataManager``,
is syntactically simple, but semantically complex. The semantics
were not easy to express in the interface. This could probably use
more work. The semantics are presented in detail through examples of
a sample data manager in ``transaction.tests.test_SampleDataManager``.
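A minimal data-manager sketch can make those semantics concrete (hypothetical; the package's own worked examples live in `transaction.tests.test_SampleDataManager`). It just records the calls a transaction would make during a top-level two-phase commit:

```python
class LoggingDataManager:
    """Records the IDataManager calls made against it, in order."""
    def __init__(self, log):
        self.log = log

    # Called in this order by the transaction during a top-level commit:
    def tpc_begin(self, txn):  self.log.append('tpc_begin')
    def commit(self, txn):     self.log.append('commit')
    def tpc_vote(self, txn):   self.log.append('tpc_vote')
    def tpc_finish(self, txn): self.log.append('tpc_finish')

    # Called instead of tpc_finish when anything goes wrong:
    def tpc_abort(self, txn):  self.log.append('tpc_abort')
    def abort(self, txn):      self.log.append('abort')

    def sortKey(self):
        # Resource managers are committed in sortKey() order.
        return 'logging-dm'

log = []
dm = LoggingDataManager(log)
txn = object()  # placeholder for a real Transaction
for step in (dm.tpc_begin, dm.commit, dm.tpc_vote, dm.tpc_finish):
    step(txn)
```

The semantic complexity the README mentions is mostly about *when* each of these methods may be called and what state the manager must be in at that point, not about the method signatures themselves.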
src/transaction/__init__.py
0 → 100644
############################################################################
#
# Copyright (c) 2001, 2002, 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
############################################################################
"""Exported transaction functions.
$Id$
"""
from transaction._transaction import Transaction
from transaction._manager import TransactionManager, ThreadTransactionManager

manager = ThreadTransactionManager()
get = manager.get
begin = manager.begin
commit = manager.commit
abort = manager.abort
doom = manager.doom
isDoomed = manager.isDoomed
savepoint = manager.savepoint
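The pattern above can be sketched in miniature (a hypothetical stand-in, not the transaction package itself): the module-level functions are simply bound methods of one shared manager instance, so every caller that uses them shares its state.

```python
class ToyTransactionManager:
    """One current transaction; created lazily by get(), discarded by free()."""
    def __init__(self):
        self._txn = None

    def get(self):
        if self._txn is None:
            self._txn = []   # stand-in for a Transaction object
        return self._txn

    def free(self):
        self._txn = None

manager = ToyTransactionManager()
get = manager.get    # module-level conveniences bound to the shared
free = manager.free  # instance, like transaction.get / transaction.abort

t1 = get()
t2 = get()           # the same transaction object until it is freed
free()
t3 = get()           # a fresh transaction afterwards
```

Binding methods at import time is what lets application code say `transaction.commit()` without ever naming the manager.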
src/transaction/_manager.py
0 → 100644
############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
############################################################################
"""A TransactionManager controls transaction boundaries.
It coordinates application code and resource managers, so that they
are associated with the right transaction.
"""
import thread

from ZODB.utils import WeakSet, deprecated37

from transaction._transaction import Transaction
# Used for deprecated arguments. ZODB.utils.DEPRECATED_ARGUMENT was
# too hard to use here, due to the convoluted import dance across
# __init__.py files.
_marker = object()
# We have to remember sets of synch objects, especially Connections.
# But we don't want mere registration with a transaction manager to
# keep a synch object alive forever; in particular, it's common
# practice not to explicitly close Connection objects, and keeping
# a Connection alive keeps a potentially huge number of other objects
# alive (e.g., the cache, and everything reachable from it too).
# Therefore we use "weak sets" internally.
#
# Call the ISynchronizer newTransaction() method on every element of
# WeakSet synchs.
# A transaction manager needs to do this whenever begin() is called.
# Since it would be good if tm.get() returned the new transaction while
# newTransaction() is running, calling this has to be delayed until after
# the transaction manager has done whatever it needs to do to make its
# get() return the new txn.
def _new_transaction(txn, synchs):
    if synchs:
        synchs.map(lambda s: s.newTransaction(txn))
# Important: we must always pass a WeakSet (even if empty) to the Transaction
# constructor: synchronizers are registered with the TM, but the
# ISynchronizer xyzCompletion() methods are called by Transactions without
# consulting the TM, so we need to pass a mutable collection of synchronizers
# so that Transactions "see" synchronizers that get registered after the
# Transaction object is constructed.
class TransactionManager(object):

    def __init__(self):
        self._txn = None
        self._synchs = WeakSet()

    def begin(self):
        if self._txn is not None:
            self._txn.abort()
        txn = self._txn = Transaction(self._synchs, self)
        _new_transaction(txn, self._synchs)
        return txn

    def get(self):
        if self._txn is None:
            self._txn = Transaction(self._synchs, self)
        return self._txn

    def free(self, txn):
        assert txn is self._txn
        self._txn = None

    def registerSynch(self, synch):
        self._synchs.add(synch)

    def unregisterSynch(self, synch):
        self._synchs.remove(synch)

    def isDoomed(self):
        return self.get().isDoomed()

    def doom(self):
        return self.get().doom()

    def commit(self):
        return self.get().commit()

    def abort(self):
        return self.get().abort()

    def savepoint(self, optimistic=False):
        return self.get().savepoint(optimistic)


class ThreadTransactionManager(TransactionManager):
    """Thread-aware transaction manager.

    Each thread is associated with a unique transaction.
    """

    def __init__(self):
        # _threads maps thread ids to transactions
        self._txns = {}
        # _synchs maps a thread id to a WeakSet of registered synchronizers.
        # The WeakSet is passed to the Transaction constructor, because the
        # latter needs to call the synchronizers when it commits.
        self._synchs = {}

    def begin(self):
        tid = thread.get_ident()
        txn = self._txns.get(tid)
        if txn is not None:
            txn.abort()
        synchs = self._synchs.get(tid)
        if synchs is None:
            synchs = self._synchs[tid] = WeakSet()
        txn = self._txns[tid] = Transaction(synchs, self)
        _new_transaction(txn, synchs)
        return txn

    def get(self):
        tid = thread.get_ident()
        txn = self._txns.get(tid)
        if txn is None:
            synchs = self._synchs.get(tid)
            if synchs is None:
                synchs = self._synchs[tid] = WeakSet()
            txn = self._txns[tid] = Transaction(synchs, self)
        return txn

    def free(self, txn):
        tid = thread.get_ident()
        assert txn is self._txns.get(tid)
        del self._txns[tid]

    def registerSynch(self, synch):
        tid = thread.get_ident()
        ws = self._synchs.get(tid)
        if ws is None:
            ws = self._synchs[tid] = WeakSet()
        ws.add(synch)

    def unregisterSynch(self, synch):
        tid = thread.get_ident()
        ws = self._synchs[tid]
        ws.remove(synch)
src/transaction/_transaction.py
0 → 100644
############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
############################################################################
"""Transaction objects manage resources for an individual activity.
Compatibility issues
--------------------
The implementation of Transaction objects involves two layers of
backwards compatibility, because this version of transaction supports
both ZODB 3 and ZODB 4. Zope is evolving towards the ZODB4
interfaces.
Transaction has two methods for a resource manager to call to
participate in a transaction -- register() and join(). join() takes a
resource manager and adds it to the list of resources. register() is
for backwards compatibility. It takes a persistent object and
registers its _p_jar attribute. TODO: explain adapter
Two-phase commit
----------------
A transaction commit involves an interaction between the transaction
object and one or more resource managers. The transaction manager
calls the following four methods on each resource manager; it calls
tpc_begin() on each resource manager before calling commit() on any of
them.
1. tpc_begin(txn)
2. commit(txn)
3. tpc_vote(txn)
4. tpc_finish(txn)
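The ordering rule above can be sketched with a hypothetical driver (not the real `Transaction.commit()`): each phase runs across all resource managers before the next phase begins, so `tpc_begin()` reaches every manager before any `commit()` is issued.

```python
class TracingRM:
    """Records each two-phase-commit call made against it."""
    def __init__(self, trace, name):
        self.trace, self.name = trace, name
    def tpc_begin(self, txn):  self.trace.append((self.name, 'tpc_begin'))
    def commit(self, txn):     self.trace.append((self.name, 'commit'))
    def tpc_vote(self, txn):   self.trace.append((self.name, 'tpc_vote'))
    def tpc_finish(self, txn): self.trace.append((self.name, 'tpc_finish'))

def two_phase_commit(txn, rms):
    # Phase by phase across *all* resource managers, never interleaved
    # per-manager: tpc_begin on everyone, then commit on everyone, etc.
    for phase in ('tpc_begin', 'commit', 'tpc_vote', 'tpc_finish'):
        for rm in rms:
            getattr(rm, phase)(txn)

trace = []
two_phase_commit(object(), [TracingRM(trace, 'a'), TracingRM(trace, 'b')])
```

Running the phases breadth-first is what gives every participant a chance to vote before any of them is asked to finish.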
Before-commit hook
------------------
Sometimes, applications want to execute some code when a transaction is
committed. For example, one might want to delay object indexing until a
transaction commits, rather than indexing every time an object is changed.
Or someone might want to check invariants only after a set of operations. A
pre-commit hook is available for such use cases: use addBeforeCommitHook(),
passing it a callable and arguments. The callable will be called with its
arguments at the start of the commit (but not for subtransaction commits).
After-commit hook
------------------
Sometimes, applications want to execute code after a transaction is
committed or aborted.  For example, one might want to launch
non-transactional code after a successful commit.  Or someone might want
to launch asynchronous code afterwards.  A post-commit hook is available
for such use cases: use addAfterCommitHook(), passing it a callable and
arguments.  The callable will be called with a Boolean value representing
the status of the commit operation as first argument (true if successful,
false if aborted) preceding its arguments, at the start of the commit (but
not for subtransaction commits).
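The hook semantics above can be sketched with a hypothetical mini-transaction (mirroring the `addBeforeCommitHook`/`addAfterCommitHook` contract, not the real class): before-hooks run with exactly their registered arguments at the start of the commit, while after-hooks receive the commit status prepended to theirs.

```python
class HookedTxn:
    """Minimal transaction stand-in that only implements the hook protocol."""
    def __init__(self):
        self._before = []   # (hook, args, kws) registered via addBeforeCommitHook
        self._after = []    # (hook, args, kws) registered via addAfterCommitHook

    def addBeforeCommitHook(self, hook, args=(), kws=None):
        self._before.append((hook, args, kws or {}))

    def addAfterCommitHook(self, hook, args=(), kws=None):
        self._after.append((hook, args, kws or {}))

    def commit(self):
        for hook, args, kws in self._before:
            hook(*args, **kws)            # runs before the (pretend) commit
        status = True                     # pretend the commit succeeded
        for hook, args, kws in self._after:
            hook(status, *args, **kws)    # status is prepended to the args

calls = []
txn = HookedTxn()
txn.addBeforeCommitHook(lambda tag: calls.append(('before', tag)), ('idx',))
txn.addAfterCommitHook(lambda ok, tag: calls.append(('after', ok, tag)), ('idx',))
txn.commit()
```

This matches the delayed-indexing use case in the text: the indexing callable is registered once and only runs when the transaction actually commits.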
Error handling
--------------
When errors occur during two-phase commit, the transaction manager
aborts all the resource managers. The specific methods it calls
depend on whether the error occurs before or after the call to
tpc_vote() on that transaction manager.
If the resource manager has not voted, then the resource manager will
have one or more uncommitted objects. There are two cases that lead
to this state; either the transaction manager has not called commit()
for any objects on this resource manager or the call that failed was a
commit() for one of the objects of this resource manager. For each
uncommitted object, including the object that failed in its commit(),
call abort().
Once uncommitted objects are aborted, tpc_abort() or abort_sub() is
called on each resource manager.
Synchronization
---------------
You can register synchronization objects (synchronizers) with the
transaction manager.  The synchronizer must implement
beforeCompletion() and afterCompletion() methods. The transaction
manager calls beforeCompletion() when it starts a top-level two-phase
commit. It calls afterCompletion() when a top-level transaction is
committed or aborted. The methods are passed the current Transaction
as their only argument.
"""
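The synchronizer protocol described at the end of the docstring can be sketched as follows (hypothetical mini-manager; the real code keeps synchronizers in a WeakSet so that registration alone does not keep them alive):

```python
class EchoSynchronizer:
    """Records the completion callbacks it receives, in order."""
    def __init__(self, events):
        self.events = events
    def beforeCompletion(self, txn):
        self.events.append('beforeCompletion')
    def afterCompletion(self, txn):
        self.events.append('afterCompletion')

class MiniManager:
    def __init__(self):
        self._synchs = set()   # real code: a WeakSet

    def registerSynch(self, synch):
        self._synchs.add(synch)

    def commit(self, txn):
        # beforeCompletion() when a top-level two-phase commit starts ...
        for s in self._synchs:
            s.beforeCompletion(txn)
        # ... the two-phase commit itself would run here ...
        # ... afterCompletion() once the transaction is committed or aborted.
        for s in self._synchs:
            s.afterCompletion(txn)

events = []
mgr = MiniManager()
mgr.registerSynch(EchoSynchronizer(events))
mgr.commit(object())
```

ZODB Connections are the canonical synchronizers: they use these callbacks to invalidate cached objects around transaction boundaries.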
import logging
import sys
import thread
import warnings
import weakref
import traceback

from cStringIO import StringIO

from zope import interface

from ZODB.utils import WeakSet
from ZODB.utils import deprecated37, deprecated38
from ZODB.POSException import TransactionFailedError
from ZODB.utils import oid_repr

from transaction import interfaces

_marker = object()
# The point of this is to avoid hiding exceptions (which the builtin
# hasattr() does).
def myhasattr(obj, attr):
    return getattr(obj, attr, _marker) is not _marker

class Status:
    # ACTIVE is the initial state.
    ACTIVE = "Active"

    COMMITTING = "Committing"
    COMMITTED = "Committed"
    DOOMED = "Doomed"

    # commit() or commit(True) raised an exception.  All further attempts
    # to commit or join this transaction will raise TransactionFailedError.
    COMMITFAILED = "Commit failed"
class Transaction(object):

    interface.implements(interfaces.ITransaction,
                         interfaces.ITransactionDeprecated)

    # Assign an index to each savepoint so we can invalidate later savepoints
    # on rollback.  The first index assigned is 1, and it goes up by 1 each
    # time.
    _savepoint_index = 0

    # If savepoints are used, keep a weak key dict of them.  This maps a
    # savepoint to its index (see above).
    _savepoint2index = None

    # Meta data.  ._extension is also metadata, but is initialized to an
    # empty dict in __init__.
    user = ""
    description = ""

    def __init__(self, synchronizers=None, manager=None):
        self.status = Status.ACTIVE
        # List of resource managers, e.g. MultiObjectResourceAdapters.
        self._resources = []

        # Weak set of synchronizer objects to call.
        if synchronizers is None:
            synchronizers = WeakSet()
        self._synchronizers = synchronizers

        self._manager = manager

        # _adapters: Connection/_p_jar -> MultiObjectResourceAdapter[Sub]
        self._adapters = {}
        self._voted = {}  # id(Connection) -> boolean, True if voted
        # _voted and other dictionaries use the id() of the resource
        # manager as a key, because we can't guess whether the actual
        # resource managers will be safe to use as dict keys.

        # The user, description, and _extension attributes are accessed
        # directly by storages, leading underscore notwithstanding.
        self._extension = {}

        self.log = logging.getLogger("txn.%d" % thread.get_ident())
        self.log.debug("new transaction")

        # If a commit fails, the traceback is saved in _failure_traceback.
        # If another attempt is made to commit, TransactionFailedError is
        # raised, incorporating this traceback.
        self._failure_traceback = None

        # List of (hook, args, kws) tuples added by addBeforeCommitHook().
        self._before_commit = []

        # List of (hook, args, kws) tuples added by addAfterCommitHook().
        self._after_commit = []
    def isDoomed(self):
        return self.status is Status.DOOMED

    def doom(self):
        if self.status is not Status.DOOMED:
            if self.status is not Status.ACTIVE:
                # should not doom transactions in the middle,
                # or after, a commit
                raise AssertionError()
            self.status = Status.DOOMED
    # Raise TransactionFailedError, due to commit()/join()/register()
    # getting called when the current transaction has already suffered
    # a commit/savepoint failure.
    def _prior_operation_failed(self):
        assert self._failure_traceback is not None
        raise TransactionFailedError("An operation previously failed, "
                "with traceback:\n\n%s" %
                self._failure_traceback.getvalue())
    def join(self, resource):
        if self.status is Status.COMMITFAILED:
            self._prior_operation_failed()  # doesn't return

        if (self.status is not Status.ACTIVE and
                self.status is not Status.DOOMED):
            # TODO: Should it be possible to join a committing transaction?
            # I think some users want it.
            raise ValueError("expected txn status %r or %r, but it's %r" % (
                             Status.ACTIVE, Status.DOOMED, self.status))
        # TODO: the prepare check is a bit of a hack, perhaps it would
        # be better to use interfaces.  If this is a ZODB4-style
        # resource manager, it needs to be adapted, too.
        if myhasattr(resource, "prepare"):
            # TODO: deprecate 3.6
            resource = DataManagerAdapter(resource)
        self._resources.append(resource)

        if self._savepoint2index:
            # A data manager has joined a transaction *after* a savepoint
            # was created.  A couple of things are different in this case:
            #
            # 1. We need to add its savepoint to all previous savepoints,
            #    so that if they are rolled back, we roll this one back too.
            #
            # 2. We don't actually need to ask the data manager for a
            #    savepoint: because it's just joining, we can just abort it
            #    to roll back to the current state, so we simply use an
            #    AbortSavepoint.
            datamanager_savepoint = AbortSavepoint(resource, self)
            for transaction_savepoint in self._savepoint2index.keys():
                transaction_savepoint._savepoints.append(
                    datamanager_savepoint)
    def savepoint(self, optimistic=False):
        if self.status is Status.COMMITFAILED:
            self._prior_operation_failed()  # doesn't return, it raises

        try:
            savepoint = Savepoint(self, optimistic, *self._resources)
        except:
            self._cleanup(self._resources)
            self._saveAndRaiseCommitishError()  # reraises!

        if self._savepoint2index is None:
            self._savepoint2index = weakref.WeakKeyDictionary()
        self._savepoint_index += 1
        self._savepoint2index[savepoint] = self._savepoint_index

        return savepoint
    # Remove and invalidate all savepoints we know about with an index
    # larger than `savepoint`'s.  This is what's needed when a rollback
    # _to_ `savepoint` is done.
    def _remove_and_invalidate_after(self, savepoint):
        savepoint2index = self._savepoint2index
        index = savepoint2index[savepoint]
        # use items() to make a copy to avoid mutating while iterating
        for savepoint, i in savepoint2index.items():
            if i > index:
                savepoint.transaction = None  # invalidate
                del savepoint2index[savepoint]

    # Invalidate and forget about all savepoints.
    def _invalidate_all_savepoints(self):
        for savepoint in self._savepoint2index.keys():
            savepoint.transaction = None  # invalidate
        self._savepoint2index.clear()
    def register(self, obj):
        # The old way of registering transaction participants.
        #
        # register() is passed either a persistent object or a
        # resource manager like the ones defined in ZODB.DB.
        # If it is passed a persistent object, that object should
        # be stored when the transaction commits.  For other
        # objects, the object implements the standard two-phase
        # commit protocol.
        manager = getattr(obj, "_p_jar", obj)
        if manager is None:
            raise ValueError("Register with no manager")
        adapter = self._adapters.get(manager)
        if adapter is None:
            adapter = MultiObjectResourceAdapter(manager)
            adapter.objects.append(obj)
            self._adapters[manager] = adapter
            self.join(adapter)
        else:
            # TODO: comment out this expensive assert later
            # Use id() to guard against proxies.
            assert id(obj) not in map(id, adapter.objects)
            adapter.objects.append(obj)
    def commit(self):
        if self.status is Status.DOOMED:
            raise interfaces.DoomedTransaction()

        if self._savepoint2index:
            self._invalidate_all_savepoints()

        if self.status is Status.COMMITFAILED:
            self._prior_operation_failed()  # doesn't return

        self._callBeforeCommitHooks()

        self._synchronizers.map(lambda s: s.beforeCompletion(self))
        self.status = Status.COMMITTING

        try:
            self._commitResources()
            self.status = Status.COMMITTED
        except:
            t, v, tb = self._saveAndGetCommitishError()
            self._callAfterCommitHooks(status=False)
            raise t, v, tb
        else:
            if self._manager:
                self._manager.free(self)
            self._synchronizers.map(lambda s: s.afterCompletion(self))
            self._callAfterCommitHooks(status=True)
        self.log.debug("commit")
    def _saveAndGetCommitishError(self):
        self.status = Status.COMMITFAILED
        # Save the traceback for TransactionFailedError.
        ft = self._failure_traceback = StringIO()
        t, v, tb = sys.exc_info()
        # Record how we got into commit().
        traceback.print_stack(sys._getframe(1), None, ft)
        # Append the stack entries from here down to the exception.
        traceback.print_tb(tb, None, ft)
        # Append the exception type and value.
        ft.writelines(traceback.format_exception_only(t, v))
        return t, v, tb

    def _saveAndRaiseCommitishError(self):
        t, v, tb = self._saveAndGetCommitishError()
        raise t, v, tb
    def getBeforeCommitHooks(self):
        return iter(self._before_commit)

    def addBeforeCommitHook(self, hook, args=(), kws=None):
        if kws is None:
            kws = {}
        self._before_commit.append((hook, tuple(args), kws))

    def beforeCommitHook(self, hook, *args, **kws):
        deprecated38("Use addBeforeCommitHook instead of beforeCommitHook.")
        self.addBeforeCommitHook(hook, args, kws)

    def _callBeforeCommitHooks(self):
        # Call all hooks registered, allowing further registrations
        # during processing.  Note that calls to addBeforeCommitHook() may
        # add additional hooks while hooks are running, and iterating over a
        # growing list is well-defined in Python.
        for hook, args, kws in self._before_commit:
            hook(*args, **kws)
        self._before_commit = []
    def getAfterCommitHooks(self):
        return iter(self._after_commit)

    def addAfterCommitHook(self, hook, args=(), kws=None):
        if kws is None:
            kws = {}
        self._after_commit.append((hook, tuple(args), kws))

    def _callAfterCommitHooks(self, status=True):
        # Avoid aborting anything at the end if no hooks are registered.
        if not self._after_commit:
            return
        # Call all hooks registered, allowing further registrations
        # during processing.  Note that calls to addAfterCommitHook() may
        # add additional hooks while hooks are running, and iterating over a
        # growing list is well-defined in Python.
        for hook, args, kws in self._after_commit:
            # The first argument passed to the hook is a Boolean value,
            # true if the commit succeeded, or false if the commit aborted.
            try:
                hook(status, *args, **kws)
            except:
                # We need to catch the exceptions if we want all hooks
                # to be called.
                self.log.error("Error in after commit hook exec in %s ",
                               hook, exc_info=sys.exc_info())
        # The transaction is already committed.  It must not have
        # further effects after the commit.
        for rm in self._resources:
            try:
                rm.abort(self)
            except:
                # XXX should we take further actions here?
                self.log.error("Error in abort() on manager %s",
                               rm, exc_info=sys.exc_info())
        self._after_commit = []
        self._before_commit = []
    def _commitResources(self):
        # Execute the two-phase commit protocol.

        L = list(self._resources)
        L.sort(rm_cmp)
        try:
            for rm in L:
                rm.tpc_begin(self)
            for rm in L:
                rm.commit(self)
                self.log.debug("commit %r" % rm)
            for rm in L:
                rm.tpc_vote(self)
                self._voted[id(rm)] = True

            try:
                for rm in L:
                    rm.tpc_finish(self)
            except:
                # TODO: do we need to make this warning stronger?
                # TODO: It would be nice if the system could be configured
                # to stop committing transactions at this point.
                self.log.critical("A storage error occurred during the second "
                                  "phase of the two-phase commit.  Resources "
                                  "may be in an inconsistent state.")
                raise
        except:
            # If an error occurs committing a transaction, we try
            # to revert the changes in each of the resource managers.
            t, v, tb = sys.exc_info()
            try:
                self._cleanup(L)
            finally:
                self._synchronizers.map(lambda s: s.afterCompletion(self))
            raise t, v, tb
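To make the phase ordering concrete, here is a small self-contained sketch of the call sequence that _commitResources() drives: each phase runs across all resource managers (in sortKey() order) before the next phase starts. `FakeRM` is a hypothetical stand-in for a real resource manager, used only for illustration.

```python
# Record the order in which two-phase commit methods are invoked.
calls = []

class FakeRM(object):
    """Hypothetical resource manager that just logs its calls."""
    def __init__(self, key):
        self.key = key
    def sortKey(self):
        return self.key
    def tpc_begin(self, txn):
        calls.append(("tpc_begin", self.key))
    def commit(self, txn):
        calls.append(("commit", self.key))
    def tpc_vote(self, txn):
        calls.append(("tpc_vote", self.key))
    def tpc_finish(self, txn):
        calls.append(("tpc_finish", self.key))

# Resource managers are sorted globally by sortKey() to avoid deadlock.
rms = sorted([FakeRM("b"), FakeRM("a")], key=lambda rm: rm.sortKey())

# Drive the phases in the same order as _commitResources().
for phase in ("tpc_begin", "commit", "tpc_vote", "tpc_finish"):
    for rm in rms:
        getattr(rm, phase)(None)

assert calls[0] == ("tpc_begin", "a")
assert calls[-1] == ("tpc_finish", "b")
```

In the real code, an exception raised before any tpc_finish() triggers _cleanup() (abort/tpc_abort on the managers), while a failure during tpc_finish() is logged as critical because resources may be left inconsistent.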
    def _cleanup(self, L):
        # Called when an exception occurs during tpc_vote or tpc_finish.
        for rm in L:
            if id(rm) not in self._voted:
                try:
                    rm.abort(self)
                except Exception:
                    self.log.error("Error in abort() on manager %s",
                                   rm, exc_info=sys.exc_info())
        for rm in L:
            try:
                rm.tpc_abort(self)
            except Exception:
                self.log.error("Error in tpc_abort() on manager %s",
                               rm, exc_info=sys.exc_info())
    def abort(self):
        if self._savepoint2index:
            self._invalidate_all_savepoints()

        self._synchronizers.map(lambda s: s.beforeCompletion(self))

        tb = None
        for rm in self._resources:
            try:
                rm.abort(self)
            except:
                if tb is None:
                    t, v, tb = sys.exc_info()
                self.log.error("Failed to abort resource manager: %s",
                               rm, exc_info=sys.exc_info())

        if self._manager:
            self._manager.free(self)

        self._synchronizers.map(lambda s: s.afterCompletion(self))

        self.log.debug("abort")

        if tb is not None:
            raise t, v, tb
    def note(self, text):
        text = text.strip()
        if self.description:
            self.description += "\n\n" + text
        else:
            self.description = text

    def setUser(self, user_name, path="/"):
        self.user = "%s %s" % (path, user_name)

    def setExtendedInfo(self, name, value):
        self._extension[name] = value
# TODO: We need a better name for the adapters.

class MultiObjectResourceAdapter(object):
    """Adapt the old-style register() call to the new-style join().

    With join(), a resource manager like a Connection registers with
    the transaction manager.  With register(), an individual object
    is passed to register().
    """
    def __init__(self, jar):
        self.manager = jar
        self.objects = []
        self.ncommitted = 0

    def __repr__(self):
        return "<%s for %s at %s>" % (self.__class__.__name__,
                                      self.manager, id(self))

    def sortKey(self):
        return self.manager.sortKey()

    def tpc_begin(self, txn):
        self.manager.tpc_begin(txn)

    def tpc_finish(self, txn):
        self.manager.tpc_finish(txn)

    def tpc_abort(self, txn):
        self.manager.tpc_abort(txn)

    def commit(self, txn):
        for o in self.objects:
            self.manager.commit(o, txn)
            self.ncommitted += 1

    def tpc_vote(self, txn):
        self.manager.tpc_vote(txn)

    def abort(self, txn):
        tb = None
        for o in self.objects:
            try:
                self.manager.abort(o, txn)
            except:
                # Capture the first exception and re-raise it after
                # aborting all the other objects.
                if tb is None:
                    t, v, tb = sys.exc_info()
                txn.log.error("Failed to abort object: %s",
                              object_hint(o), exc_info=sys.exc_info())
        if tb is not None:
            raise t, v, tb
def rm_cmp(rm1, rm2):
    return cmp(rm1.sortKey(), rm2.sortKey())

def object_hint(o):
    """Return a string describing the object.

    This function does not raise an exception.
    """
    # We should always be able to get __class__.
    klass = o.__class__.__name__
    # oid would be great, but maybe this isn't a persistent object.
    oid = getattr(o, "_p_oid", _marker)
    if oid is not _marker:
        oid = oid_repr(oid)
    return "%s oid=%s" % (klass, oid)
# TODO: deprecate for 3.6.
class DataManagerAdapter(object):
    """Adapt zodb 4-style data managers to zodb3 style.

    Adapt transaction.interfaces.IDataManager to
    ZODB.interfaces.IPureDatamanager.
    """

    # Note that it is pretty important that this does not have a _p_jar
    # attribute.  This object will be registered with a zodb3 TM, which
    # will then try to get a _p_jar from it, using it as the default.
    # (Objects without a _p_jar are their own data managers.)

    def __init__(self, datamanager):
        self._datamanager = datamanager

    # TODO: I'm not sure why commit() doesn't do anything.

    def commit(self, transaction):
        # We don't do anything here because ZODB4-style data managers
        # didn't have a separate commit step.
        pass

    def abort(self, transaction):
        self._datamanager.abort(transaction)

    def tpc_begin(self, transaction):
        # We don't do anything here because ZODB4-style data managers
        # didn't have a separate tpc_begin step.
        pass

    def tpc_abort(self, transaction):
        self._datamanager.abort(transaction)

    def tpc_finish(self, transaction):
        self._datamanager.commit(transaction)

    def tpc_vote(self, transaction):
        self._datamanager.prepare(transaction)

    def sortKey(self):
        return self._datamanager.sortKey()
class Savepoint:
    """Transaction savepoint.

    Transaction savepoints coordinate savepoints for data managers
    participating in a transaction.
    """
    interface.implements(interfaces.ISavepoint)

    valid = property(lambda self: self.transaction is not None)

    def __init__(self, transaction, optimistic, *resources):
        self.transaction = transaction
        self._savepoints = savepoints = []

        for datamanager in resources:
            try:
                savepoint = datamanager.savepoint
            except AttributeError:
                if not optimistic:
                    raise TypeError("Savepoints unsupported", datamanager)
                savepoint = NoRollbackSavepoint(datamanager)
            else:
                savepoint = savepoint()

            savepoints.append(savepoint)

    def rollback(self):
        transaction = self.transaction
        if transaction is None:
            raise interfaces.InvalidSavepointRollbackError
        transaction._remove_and_invalidate_after(self)

        try:
            for savepoint in self._savepoints:
                savepoint.rollback()
        except:
            # Mark the transaction as failed.
            transaction._saveAndRaiseCommitishError()  # reraises!
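The division of labor here is that a transaction Savepoint fans out to one data-manager savepoint per joined resource, and each of those only knows how to restore its own state. A toy data manager can sketch the data-manager side; `ToyDM` is a hypothetical illustration, not the package's sample data manager (real ones return IDataManagerSavepoint objects).

```python
# Toy data manager whose savepoint() captures a snapshot of its state.
class ToyDM(object):
    def __init__(self):
        self.data = {}

    def savepoint(self):
        # Capture the current state; rollback() restores it.
        snapshot = dict(self.data)
        dm = self

        class _Savepoint(object):
            def rollback(self):
                dm.data = dict(snapshot)

        return _Savepoint()

dm = ToyDM()
dm.data["name"] = "bob"
sp = dm.savepoint()
dm.data["name"] = "sally"
sp.rollback()
assert dm.data["name"] == "bob"
```

Validity tracking (preventing rollback after the transaction ends, or after an earlier savepoint was rolled back) stays in the transaction Savepoint above, not in the data-manager savepoints.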
class AbortSavepoint:

    def __init__(self, datamanager, transaction):
        self.datamanager = datamanager
        self.transaction = transaction

    def rollback(self):
        self.datamanager.abort(self.transaction)

class NoRollbackSavepoint:

    def __init__(self, datamanager):
        self.datamanager = datamanager

    def rollback(self):
        raise TypeError("Savepoints unsupported", self.datamanager)
src/transaction/interfaces.py
0 → 100644
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Transaction Interfaces
$Id$
"""
import
zope.interface
class ITransactionManager(zope.interface.Interface):
    """An object that manages a sequence of transactions.

    Applications use transaction managers to establish transaction boundaries.
    """

    def begin():
        """Begin a new transaction.

        If an existing transaction is in progress, it will be aborted.

        The newTransaction() method of registered synchronizers is called,
        passing the new transaction object.
        """

    def get():
        """Get the current transaction.
        """

    def commit():
        """Commit the current transaction.
        """

    def abort():
        """Abort the current transaction.
        """

    def doom():
        """Doom the current transaction.
        """

    def isDoomed():
        """Returns True if the current transaction is doomed, otherwise False.
        """

    def savepoint(optimistic=False):
        """Create a savepoint from the current transaction.

        If the optimistic argument is true, then data managers that
        don't support savepoints can be used, but an error will be
        raised if the savepoint is rolled back.

        An ISavepoint object is returned.
        """

    def registerSynch(synch):
        """Register an ISynchronizer.

        Synchronizers are notified about some major events in a transaction's
        life.  See ISynchronizer for details.
        """

    def unregisterSynch(synch):
        """Unregister an ISynchronizer.

        Synchronizers are notified about some major events in a transaction's
        life.  See ISynchronizer for details.
        """
class ITransaction(zope.interface.Interface):
    """Object representing a running transaction.

    Objects with this interface may represent different transactions
    during their lifetime (.begin() can be called to start a new
    transaction using the same instance, although that practice is
    deprecated and will go away in ZODB 3.6).
    """

    user = zope.interface.Attribute(
        """A user name associated with the transaction.

        The format of the user name is defined by the application.  The value
        is of Python type str.  Storages record the user value, as meta-data,
        when a transaction commits.

        A storage may impose a limit on the size of the value; behavior is
        undefined if such a limit is exceeded (for example, a storage may
        raise an exception, or truncate the value).
        """)

    description = zope.interface.Attribute(
        """A textual description of the transaction.

        The value is of Python type str.  Method note() is the intended
        way to set the value.  Storages record the description, as meta-data,
        when a transaction commits.

        A storage may impose a limit on the size of the description; behavior
        is undefined if such a limit is exceeded (for example, a storage may
        raise an exception, or truncate the value).
        """)

    def commit():
        """Finalize the transaction.

        This executes the two-phase commit algorithm for all
        IDataManager objects associated with the transaction.
        """

    def abort():
        """Abort the transaction.

        This is called from the application.  This can only be called
        before the two-phase commit protocol has been started.
        """

    def doom():
        """Doom the transaction.

        Dooms the current transaction.  This will cause
        DoomedTransactionException to be raised on any attempt to commit the
        transaction.

        Otherwise the transaction will behave as if it was active.
        """

    def savepoint(optimistic=False):
        """Create a savepoint.

        If the optimistic argument is true, then data managers that don't
        support savepoints can be used, but an error will be raised if the
        savepoint is rolled back.

        An ISavepoint object is returned.
        """

    def join(datamanager):
        """Add a data manager to the transaction.

        `datamanager` must provide the transaction.interfaces.IDataManager
        interface.
        """

    def note(text):
        """Add text to the transaction description.

        This modifies the `.description` attribute; see its docs for more
        detail.  First surrounding whitespace is stripped from `text`.  If
        `.description` is currently an empty string, then the stripped text
        becomes its value, else two newlines and the stripped text are
        appended to `.description`.
        """

    def setUser(user_name, path="/"):
        """Set the user name.

        path should be provided if needed to further qualify the
        identified user.  This is a convenience method used by Zope.
        It sets the .user attribute to str(path) + " " + str(user_name).
        This sets the `.user` attribute; see its docs for more detail.
        """

    def setExtendedInfo(name, value):
        """Add extension data to the transaction.

        name is the name of the extension property to set, of Python type
        str; value must be picklable.  Multiple calls may be made to set
        multiple extension properties, provided the names are distinct.

        Storages record the extension data, as meta-data, when a transaction
        commits.

        A storage may impose a limit on the size of extension data; behavior
        is undefined if such a limit is exceeded (for example, a storage may
        raise an exception, or remove `<name, value>` pairs).
        """
    # deprecated38
    def beforeCommitHook(__hook, *args, **kws):
        """Register a hook to call before the transaction is committed.

        THIS IS DEPRECATED IN ZODB 3.6.  Use addBeforeCommitHook() instead.

        The specified hook function will be called after the transaction's
        commit method has been called, but before the commit process has been
        started.  The hook will be passed the specified positional and keyword
        arguments.

        Multiple hooks can be registered and will be called in the order they
        were registered (first registered, first called).  This method can
        also be called from a hook:  an executing hook can register more
        hooks.  Applications should take care to avoid creating infinite loops
        by recursively registering hooks.

        Hooks are called only for a top-level commit.  A savepoint
        does not call any hooks.  If the transaction is aborted, hooks
        are not called, and are discarded.  Calling a hook "consumes" its
        registration too:  hook registrations do not persist across
        transactions.  If it's desired to call the same hook on every
        transaction commit, then beforeCommitHook() must be called with that
        hook during every transaction; in such a case consider registering a
        synchronizer object via a TransactionManager's registerSynch() method
        instead.
        """

    def addBeforeCommitHook(hook, args=(), kws=None):
        """Register a hook to call before the transaction is committed.

        The specified hook function will be called after the transaction's
        commit method has been called, but before the commit process has been
        started.  The hook will be passed the specified positional (`args`)
        and keyword (`kws`) arguments.  `args` is a sequence of positional
        arguments to be passed, defaulting to an empty tuple (no positional
        arguments are passed).  `kws` is a dictionary of keyword argument
        names and values to be passed, or the default None (no keyword
        arguments are passed).

        Multiple hooks can be registered and will be called in the order they
        were registered (first registered, first called).  This method can
        also be called from a hook:  an executing hook can register more
        hooks.  Applications should take care to avoid creating infinite loops
        by recursively registering hooks.

        Hooks are called only for a top-level commit.  A
        savepoint creation does not call any hooks.  If the
        transaction is aborted, hooks are not called, and are discarded.
        Calling a hook "consumes" its registration too:  hook registrations
        do not persist across transactions.  If it's desired to call the same
        hook on every transaction commit, then addBeforeCommitHook() must be
        called with that hook during every transaction; in such a case
        consider registering a synchronizer object via a TransactionManager's
        registerSynch() method instead.
        """

    def getBeforeCommitHooks():
        """Return iterable producing the registered addBeforeCommit hooks.

        A triple (hook, args, kws) is produced for each registered hook.
        The hooks are produced in the order in which they would be invoked
        by a top-level transaction commit.
        """

    def addAfterCommitHook(hook, args=(), kws=None):
        """Register a hook to call after a transaction commit attempt.

        The specified hook function will be called after the transaction
        commit succeeds or aborts.  The first argument passed to the hook
        is a Boolean value, true if the commit succeeded, or false if the
        commit aborted.  `args` specifies additional positional, and `kws`
        keyword, arguments to pass to the hook.  `args` is a sequence of
        positional arguments to be passed, defaulting to an empty tuple
        (only the true/false success argument is passed).  `kws` is a
        dictionary of keyword argument names and values to be passed, or
        the default None (no keyword arguments are passed).

        Multiple hooks can be registered and will be called in the order they
        were registered (first registered, first called).  This method can
        also be called from a hook:  an executing hook can register more
        hooks.  Applications should take care to avoid creating infinite loops
        by recursively registering hooks.

        Hooks are called only for a top-level commit.  A
        savepoint creation does not call any hooks.  Calling a
        hook "consumes" its registration:  hook registrations do not
        persist across transactions.  If it's desired to call the same
        hook on every transaction commit, then addAfterCommitHook() must be
        called with that hook during every transaction; in such a case
        consider registering a synchronizer object via a TransactionManager's
        registerSynch() method instead.
        """

    def getAfterCommitHooks():
        """Return iterable producing the registered addAfterCommit hooks.

        A triple (hook, args, kws) is produced for each registered hook.
        The hooks are produced in the order in which they would be invoked
        by a top-level transaction commit.
        """
class ITransactionDeprecated(zope.interface.Interface):
    """Deprecated parts of the transaction API."""

    def begin(info=None):
        """Begin a new transaction.

        If the transaction is in progress, it is aborted and a new
        transaction is started using the same transaction object.
        """

    # TODO: deprecate this for 3.6.
    def register(object):
        """Register the given object for transaction control."""
class IDataManager(zope.interface.Interface):
    """Objects that manage transactional storage.

    These objects may manage data for other objects, or they may manage
    non-object storages, such as relational databases.  For example,
    a ZODB.Connection.

    Note that when some data is modified, that data's data manager should
    join a transaction so that data can be committed when the user commits
    the transaction.
    """

    transaction_manager = zope.interface.Attribute(
        """The transaction manager (TM) used by this data manager.

        This is a public attribute, intended for read-only use.  The value
        is an instance of ITransactionManager, typically set by the data
        manager's constructor.
        """)

    def abort(transaction):
        """Abort a transaction and forget all changes.

        Abort must be called outside of a two-phase commit.

        Abort is called by the transaction manager to abort transactions
        that are not yet in a two-phase commit.
        """

    # Two-phase commit protocol.  These methods are called by the ITransaction
    # object associated with the transaction being committed.  The sequence
    # of calls normally follows this regular expression:
    #     tpc_begin commit tpc_vote (tpc_finish | tpc_abort)

    def tpc_begin(transaction):
        """Begin commit of a transaction, starting the two-phase commit.

        transaction is the ITransaction instance associated with the
        transaction being committed.
        """

    def commit(transaction):
        """Commit modifications to registered objects.

        Save changes to be made persistent if the transaction commits (if
        tpc_finish is called later).  If tpc_abort is called later, changes
        must not persist.

        This includes conflict detection and handling.  If no conflicts or
        errors occur, the data manager should be prepared to make the
        changes persist when tpc_finish is called.
        """

    def tpc_vote(transaction):
        """Verify that a data manager can commit the transaction.

        This is the last chance for a data manager to vote 'no'.  A
        data manager votes 'no' by raising an exception.

        transaction is the ITransaction instance associated with the
        transaction being committed.
        """

    def tpc_finish(transaction):
        """Indicate confirmation that the transaction is done.

        Make all changes to objects modified by this transaction persist.

        transaction is the ITransaction instance associated with the
        transaction being committed.

        This should never fail.  If this raises an exception, the
        database is not expected to maintain consistency; it's a
        serious error.
        """

    def tpc_abort(transaction):
        """Abort a transaction.

        This is called by a transaction manager to end a two-phase commit on
        the data manager.  Abandon all changes to objects modified by this
        transaction.

        transaction is the ITransaction instance associated with the
        transaction being committed.

        This should never fail.
        """

    def sortKey():
        """Return a key to use for ordering registered DataManagers.

        ZODB uses a global sort order to prevent deadlock when it commits
        transactions involving multiple resource managers.  The resource
        manager must define a sortKey() method that provides a global ordering
        for resource managers.
        """
        # Alternate version:
        #"""Return a consistent sort key for this connection.
        #
        #This allows ordering multiple connections that use the same storage
        #in a consistent manner.  This is unique for the lifetime of a
        #connection, which is good enough to avoid ZEO deadlocks.
        #"""
class ISavepointDataManager(IDataManager):

    def savepoint():
        """Return a data-manager savepoint (IDataManagerSavepoint).
        """

class IDataManagerSavepoint(zope.interface.Interface):
    """Savepoint for data-manager changes for use in transaction savepoints.

    Datamanager savepoints are used by, and only by, transaction savepoints.

    Note that data manager savepoints don't have any notion of, or
    responsibility for, validity.  It isn't the responsibility of
    data-manager savepoints to prevent multiple rollbacks or rollbacks after
    transaction termination.  Preventing invalid savepoint rollback is the
    responsibility of transaction rollbacks.  Application code should never
    use data-manager savepoints.
    """

    def rollback():
        """Rollback any work done since the savepoint.
        """

class ISavepoint(zope.interface.Interface):
    """A transaction savepoint.
    """

    def rollback():
        """Rollback any work done since the savepoint.

        InvalidSavepointRollbackError is raised if the savepoint isn't valid.
        """

    valid = zope.interface.Attribute(
        "Boolean indicating whether the savepoint is valid")

class InvalidSavepointRollbackError(Exception):
    """Attempt to rollback an invalid savepoint.

    A savepoint may be invalid because:

    - The surrounding transaction has committed or aborted.

    - An earlier savepoint in the same transaction has been rolled back.
    """

class ISynchronizer(zope.interface.Interface):
    """Objects that participate in the transaction-boundary notification API.
    """

    def beforeCompletion(transaction):
        """Hook that is called by the transaction at the start of a commit.
        """

    def afterCompletion(transaction):
        """Hook that is called by the transaction after completing a commit.
        """

    def newTransaction(transaction):
        """Hook that is called at the start of a transaction.

        This hook is called when, and only when, a transaction manager's
        begin() method is called explicitly.
        """

class DoomedTransaction(Exception):
    """A commit was attempted on a transaction that was doomed."""
src/transaction/savepoint.txt
0 → 100644
Savepoints
==========
Savepoints provide a way to save to disk intermediate work done during
a transaction allowing:
- partial transaction (subtransaction) rollback (abort)
- state of saved objects to be freed, freeing on-line memory for other
uses
Savepoints make it possible to write atomic subroutines that don't
make top-level transaction commitments.
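Before looking at the sample data manager below, the core mechanism is worth sketching in isolation: a savepoint is essentially a snapshot of state that can later be restored. This is a minimal, hypothetical illustration (``TinyStore`` and ``TinySavepoint`` are invented names, not part of the transaction package):

```python
class TinySavepoint:
    def __init__(self, store):
        self.store = store
        # Copy at creation time, so later writes don't leak into the snapshot.
        self.snapshot = store.data.copy()

    def rollback(self):
        # Restore a copy, so the snapshot survives further rollbacks.
        self.store.data = self.snapshot.copy()


class TinyStore:
    def __init__(self):
        self.data = {}

    def savepoint(self):
        return TinySavepoint(self)


store = TinyStore()
store.data['bob-balance'] = 100.0
sp = store.savepoint()
store.data['bob-balance'] = 200.0
sp.rollback()
print(store.data['bob-balance'])  # 100.0
```

The real machinery adds transaction bookkeeping and validity checks on top, but the snapshot-and-restore idea is the same.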
Applications
------------
To demonstrate how savepoints work with transactions, we've provided a sample
data manager implementation that provides savepoint support. The primary
purpose of this data manager is to provide code that can be read to understand
how savepoints work. The secondary purpose is to provide support for
demonstrating the correct operation of savepoint support within the
transaction system. This data manager is very simple. It provides flat
storage of named immutable values, like strings and numbers.
>>> import transaction.tests.savepointsample
>>> dm = transaction.tests.savepointsample.SampleSavepointDataManager()
>>> dm['name'] = 'bob'
As with other data managers, we can commit changes:
>>> transaction.commit()
>>> dm['name']
'bob'
and abort changes:
>>> dm['name'] = 'sally'
>>> dm['name']
'sally'
>>> transaction.abort()
>>> dm['name']
'bob'
Now, let's look at an application that manages funds for people. It allows
deposits and debits to be entered for multiple people. It accepts a sequence
of entries and generates a sequence of status messages. For each entry, it
applies the change and then validates the user's account. If the user's
account is invalid, we roll back the change for that entry. The success or
failure of an entry is indicated in the output status. First we'll initialize
some accounts:
>>> dm['bob-balance'] = 0.0
>>> dm['bob-credit'] = 0.0
>>> dm['sally-balance'] = 0.0
>>> dm['sally-credit'] = 100.0
>>> transaction.commit()
Now, we'll define a validation function to validate an account:
>>> def validate_account(name):
... if dm[name+'-balance'] + dm[name+'-credit'] < 0:
... raise ValueError('Overdrawn', name)
And a function to apply entries. If the function fails in some unexpected
way, it rolls back all of its changes and prints the error:
>>> def apply_entries(entries):
... savepoint = transaction.savepoint()
... try:
... for name, amount in entries:
... entry_savepoint = transaction.savepoint()
... try:
... dm[name+'-balance'] += amount
... validate_account(name)
... except ValueError, error:
... entry_savepoint.rollback()
... print 'Error', str(error)
... else:
... print 'Updated', name
... except Exception, error:
... savepoint.rollback()
... print 'Unexpected exception', error
Now let's try applying some entries:
>>> apply_entries([
... ('bob', 10.0),
... ('sally', 10.0),
... ('bob', 20.0),
... ('sally', 10.0),
... ('bob', -100.0),
... ('sally', -100.0),
... ])
Updated bob
Updated sally
Updated bob
Updated sally
Error ('Overdrawn', 'bob')
Updated sally
>>> dm['bob-balance']
30.0
>>> dm['sally-balance']
-80.0
If we provide entries that cause an unexpected error:
>>> apply_entries([
... ('bob', 10.0),
... ('sally', 10.0),
... ('bob', '20.0'),
... ('sally', 10.0),
... ])
Updated bob
Updated sally
Unexpected exception unsupported operand type(s) for +=: 'float' and 'str'
Because ``apply_entries`` used a savepoint for the entire function, it was
able to roll back the partial changes without rolling back changes made in
the previous call to ``apply_entries``:
>>> dm['bob-balance']
30.0
>>> dm['sally-balance']
-80.0
If we now abort the outer transaction, the earlier changes will go
away:
>>> transaction.abort()
>>> dm['bob-balance']
0.0
>>> dm['sally-balance']
0.0
Savepoint invalidation
----------------------
A savepoint can be rolled back any number of times:
>>> dm['bob-balance'] = 100.0
>>> dm['bob-balance']
100.0
>>> savepoint = transaction.savepoint()
>>> dm['bob-balance'] = 200.0
>>> dm['bob-balance']
200.0
>>> savepoint.rollback()
>>> dm['bob-balance']
100.0
>>> savepoint.rollback() # redundant, but should be harmless
>>> dm['bob-balance']
100.0
>>> dm['bob-balance'] = 300.0
>>> dm['bob-balance']
300.0
>>> savepoint.rollback()
>>> dm['bob-balance']
100.0
However, rolling back to a savepoint invalidates any savepoints that come after it:
>>> dm['bob-balance'] = 200.0
>>> dm['bob-balance']
200.0
>>> savepoint1 = transaction.savepoint()
>>> dm['bob-balance'] = 300.0
>>> dm['bob-balance']
300.0
>>> savepoint2 = transaction.savepoint()
>>> savepoint.rollback()
>>> dm['bob-balance']
100.0
>>> savepoint2.rollback()
Traceback (most recent call last):
...
InvalidSavepointRollbackError
>>> savepoint1.rollback()
Traceback (most recent call last):
...
InvalidSavepointRollbackError
>>> transaction.abort()
Databases without savepoint support
-----------------------------------
Normally it's an error to use savepoints with databases that don't support
savepoints:
>>> dm_no_sp = transaction.tests.savepointsample.SampleDataManager()
>>> dm_no_sp['name'] = 'bob'
>>> transaction.commit()
>>> dm_no_sp['name'] = 'sally'
>>> savepoint = transaction.savepoint()
Traceback (most recent call last):
...
TypeError: ('Savepoints unsupported', {'name': 'bob'})
>>> transaction.abort()
However, a flag can be passed to the transaction savepoint method to indicate
that databases without savepoint support should be tolerated until a savepoint
is rolled back. This allows transactions to proceed if there are no reasons
to roll back:
>>> dm_no_sp['name'] = 'sally'
>>> savepoint = transaction.savepoint(1)
>>> dm_no_sp['name'] = 'sue'
>>> transaction.commit()
>>> dm_no_sp['name']
'sue'
>>> dm_no_sp['name'] = 'sam'
>>> savepoint = transaction.savepoint(1)
>>> savepoint.rollback()
Traceback (most recent call last):
...
TypeError: ('Savepoints unsupported', {'name': 'sam'})
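The "tolerant" savepoint behavior above can be modeled by a placeholder object whose creation always succeeds and whose rollback always fails. This is a hypothetical sketch of that idea (the class name and error shape here are illustrative, not the transaction package's implementation):

```python
class NoRollbackSavepoint:
    # Stands in for a data manager without savepoint support: taking the
    # savepoint is cheap and safe; only an actual rollback raises.
    def __init__(self, data_manager):
        self.data_manager = data_manager

    def rollback(self):
        raise TypeError('Savepoints unsupported', self.data_manager)


sp = NoRollbackSavepoint({'name': 'sam'})  # creation succeeds
try:
    sp.rollback()                          # only rolling back fails
except TypeError as error:
    print(error.args[0])  # Savepoints unsupported
```

This is why the optimistic flag is safe as long as no rollback is ever needed: the failure is deferred until the unsupported operation is actually requested.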
Failures
--------
If a failure occurs when creating or rolling back a savepoint, the transaction
state will be uncertain and the transaction will become uncommittable. From
that point on, most transaction operations, including commit, will fail until
the transaction is aborted.
In the previous example, we got an error when we tried to roll back the
savepoint. If we try to commit the transaction, the commit will fail:
>>> transaction.commit() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TransactionFailedError: An operation previously failed, with traceback:
...
TypeError: ('Savepoints unsupported', {'name': 'sam'})
<BLANKLINE>
We have to abort it to make any progress:
>>> transaction.abort()
Similarly, in our earlier example, where we tried to take a savepoint with a
data manager that didn't support savepoints:
>>> dm_no_sp['name'] = 'sally'
>>> dm['name'] = 'sally'
>>> savepoint = transaction.savepoint()
Traceback (most recent call last):
...
TypeError: ('Savepoints unsupported', {'name': 'sue'})
>>> transaction.commit() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
TransactionFailedError: An operation previously failed, with traceback:
...
TypeError: ('Savepoints unsupported', {'name': 'sue'})
<BLANKLINE>
>>> transaction.abort()
After clearing the transaction with an abort, we can get on with new
transactions:
>>> dm_no_sp['name'] = 'sally'
>>> dm['name'] = 'sally'
>>> transaction.commit()
>>> dm_no_sp['name']
'sally'
>>> dm['name']
'sally'
src/transaction/tests/__init__.py
0 → 100644
#
src/transaction/tests/abstestIDataManager.py
0 → 100644
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Test cases for objects implementing IDataManager.

This is a combo test between Connection and DB, since the two are
rather incestuous and I was not able to find a defined DB interface.

To do a full test suite one would probably want to write a dummy
storage that will raise errors as needed for testing.

I started this test suite to reproduce a very simple error (tpc_abort
had an error and wouldn't even run if called).  So it is *very*
incomplete, and even the tests that exist do not make sure that
the data actually gets written/not written to the storage.

Obviously this test suite should be expanded.

$Id$
"""
from unittest import TestCase


class IDataManagerTests(TestCase, object):

    def setUp(self):
        self.datamgr = None     # subclass should override
        self.obj = None         # subclass should define Persistent object
        self.txn_factory = None

    def get_transaction(self):
        return self.txn_factory()

    ################################
    # IDataManager interface tests #
    ################################

    def testCommitObj(self):
        tran = self.get_transaction()
        self.datamgr.prepare(tran)
        self.datamgr.commit(tran)

    def testAbortTran(self):
        tran = self.get_transaction()
        self.datamgr.prepare(tran)
        self.datamgr.abort(tran)
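A concrete subclass of the abstract test case wires in a real data manager and transaction factory. A hypothetical sketch of that pattern (``DummyDataManager`` and ``DummyTests`` are invented for illustration, not part of ZODB):

```python
import unittest


class DummyDataManager:
    # Minimal stand-in that records which protocol methods were called.
    def __init__(self):
        self.calls = []

    def prepare(self, txn):
        self.calls.append('prepare')

    def commit(self, txn):
        self.calls.append('commit')

    def abort(self, txn):
        self.calls.append('abort')


class DummyTests(unittest.TestCase):
    # Mirrors the shape of IDataManagerTests.setUp above.
    def setUp(self):
        self.datamgr = DummyDataManager()
        self.txn_factory = lambda: 'txn-1'   # fake transaction, as in the doctests

    def get_transaction(self):
        return self.txn_factory()

    def testCommitObj(self):
        tran = self.get_transaction()
        self.datamgr.prepare(tran)
        self.datamgr.commit(tran)
        assert self.datamgr.calls == ['prepare', 'commit']
```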
src/transaction/tests/doom.txt
0 → 100644
Dooming Transactions
====================
A doomed transaction behaves exactly the same way as an active transaction but
raises an error on any attempt to commit it, thus forcing an abort.
Doom is useful in places where abort is unsafe and an exception cannot be
raised. This occurs when the programmer wants the code following the doom to
run but not commit. It is unsafe to abort in these circumstances as a following
get() may implicitly open a new transaction.
Any attempt to commit a doomed transaction will raise a DoomedTransaction
exception.
An example of such a use case can be found in
zope/app/form/browser/editview.py. Here a form validation failure must doom
the transaction as committing the transaction may have side-effects. However,
the form code must continue to calculate a form containing the error messages
to return.
For Zope in general, code running within a request should always doom
transactions rather than aborting them. It is the responsibility of the
publication to either abort() or commit() the transaction. Application code can
use savepoints and doom() safely.
To see how it works we first need to create a stub data manager:
>>> from transaction.interfaces import IDataManager
>>> from zope.interface import implements
>>> class DataManager:
... implements(IDataManager)
... def __init__(self):
... self.attr_counter = {}
... def __getattr__(self, name):
... def f(transaction):
... self.attr_counter[name] = self.attr_counter.get(name, 0) + 1
... return f
... def total(self):
... count = 0
... for access_count in self.attr_counter.values():
... count += access_count
... return count
... def sortKey(self):
... return 1
Start a new transaction:
>>> import transaction
>>> txn = transaction.begin()
>>> dm = DataManager()
>>> txn.join(dm)
We can ask a transaction if it is doomed to avoid expensive operations. An
example of a use case is an object-relational mapper where a pre-commit hook
sends all outstanding SQL to a relational database for objects changed during
the transaction. This expensive operation is not necessary if the transaction
has been doomed. A non-doomed transaction should return False:
>>> txn.isDoomed()
False
We can doom a transaction by calling .doom() on it:
>>> txn.doom()
>>> txn.isDoomed()
True
We can doom it again if we like:
>>> txn.doom()
The data manager is unchanged at this point:
>>> dm.total()
0
Attempting to commit a doomed transaction any number of times raises a
DoomedTransaction:
>>> txn.commit() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
DoomedTransaction
>>> txn.commit() # doctest: +ELLIPSIS
Traceback (most recent call last):
...
DoomedTransaction
But still leaves the data manager unchanged:
>>> dm.total()
0
But the doomed transaction can be aborted:
>>> txn.abort()
Which aborts the data manager:
>>> dm.total()
1
>>> dm.attr_counter['abort']
1
Dooming the current transaction can also be done directly from the transaction
module. We can also begin a new transaction directly after dooming the old one:
>>> txn = transaction.begin()
>>> transaction.isDoomed()
False
>>> transaction.doom()
>>> transaction.isDoomed()
True
>>> txn = transaction.begin()
After committing a transaction we get an assertion error if we try to doom the
transaction. This could be made more specific, but trying to doom a transaction
after it's been committed is probably a programming error:
>>> txn = transaction.begin()
>>> txn.commit()
>>> txn.doom()
Traceback (most recent call last):
...
AssertionError
A doomed transaction should act the same as an active transaction, so we should
be able to join it:
>>> txn = transaction.begin()
>>> txn.doom()
>>> dm2 = DataManager()
>>> txn.join(dm2)
Clean up:
>>> txn = transaction.begin()
>>> txn.abort()
src/transaction/tests/savepointsample.py
0 → 100644
##############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Savepoint data manager implementation example.
Sample data manager implementation that illustrates how to implement
savepoints.
See savepoint.txt in the transaction package.
$Id$
"""
import UserDict

from zope import interface

import transaction.interfaces


class SampleDataManager(UserDict.DictMixin):
    """Sample implementation of data manager that doesn't support savepoints

    This data manager stores named simple values, like strings and numbers.
    """

    interface.implements(transaction.interfaces.IDataManager)

    def __init__(self, transaction_manager=None):
        if transaction_manager is None:
            # Use the thread-local transaction manager if none is provided:
            transaction_manager = transaction.manager
        self.transaction_manager = transaction_manager

        # Our committed and uncommitted data:
        self.committed = {}
        self.uncommitted = self.committed.copy()

        # Our transaction state:
        #
        # If our uncommitted data is modified, we'll join a transaction
        # and keep track of the transaction we joined.  Any commit-
        # related messages we get should be for this same transaction.
        self.transaction = None

        # What phase, if any, of two-phase commit we are in:
        self.tpc_phase = None

    #######################################################################
    # Provide a mapping interface to uncommitted data.  We provide
    # a basic subset of the interface.  DictMixin does the rest.

    def __getitem__(self, name):
        return self.uncommitted[name]

    def __setitem__(self, name, value):
        self._join()  # join the current transaction, if we haven't already
        self.uncommitted[name] = value

    def __delitem__(self, name):
        self._join()  # join the current transaction, if we haven't already
        del self.uncommitted[name]

    def keys(self):
        return self.uncommitted.keys()

    #
    #######################################################################

    #######################################################################
    # Transaction methods

    def _join(self):
        # If this is the first change in the transaction, join the transaction
        if self.transaction is None:
            self.transaction = self.transaction_manager.get()
            self.transaction.join(self)

    def _resetTransaction(self):
        self.transaction = None
        self.tpc_phase = None

    def abort(self, transaction):
        """Throw away changes made before the commit process has started
        """
        assert ((transaction is self.transaction)
                or (self.transaction is None)), "Must not change transactions"
        assert self.tpc_phase is None, "Must be called outside of tpc"
        self.uncommitted = self.committed.copy()
        self._resetTransaction()

    def tpc_begin(self, transaction):
        """Enter two-phase commit
        """
        assert transaction is self.transaction, "Must not change transactions"
        assert self.tpc_phase is None, "Must be called outside of tpc"
        self.tpc_phase = 1

    def commit(self, transaction):
        """Record data modified during the transaction
        """
        assert transaction is self.transaction, "Must not change transactions"
        assert self.tpc_phase == 1, "Must be called in first phase of tpc"
        # In our simple example, we don't need to do anything.
        # A more complex data manager would typically write to some sort
        # of log.

    def tpc_vote(self, transaction):
        assert transaction is self.transaction, "Must not change transactions"
        assert self.tpc_phase == 1, "Must be called in first phase of tpc"
        # This particular data manager is always ready to vote.
        # Real data managers will usually need to take some steps to
        # make sure that the finish will succeed.
        self.tpc_phase = 2

    def tpc_finish(self, transaction):
        assert transaction is self.transaction, "Must not change transactions"
        assert self.tpc_phase == 2, "Must be called in second phase of tpc"
        self.committed = self.uncommitted.copy()
        self._resetTransaction()

    def tpc_abort(self, transaction):
        assert transaction is self.transaction, "Must not change transactions"
        assert self.tpc_phase is not None, "Must be called inside of tpc"
        self.uncommitted = self.committed.copy()
        self._resetTransaction()

    #
    #######################################################################

    #######################################################################
    # Other data manager methods

    def sortKey(self):
        # Commit operations on multiple data managers are performed in
        # sort-key order.  This is important to avoid deadlock when data
        # managers are shared among multiple threads or processes and
        # use locks to manage that sharing.  We aren't going to bother
        # with that here.
        return str(id(self))

    #
    #######################################################################


class SampleSavepointDataManager(SampleDataManager):
    """Sample implementation of a savepoint-supporting data manager

    This extends the basic data manager with savepoint support.
    """

    interface.implements(transaction.interfaces.ISavepointDataManager)

    def savepoint(self):
        # When we create the savepoint, we save the existing database state.
        return SampleSavepoint(self, self.uncommitted.copy())

    def _rollback_savepoint(self, savepoint):
        # When we rollback the savepoint, we restore the saved data.
        # Caution: without the copy(), further changes to the database
        # could reflect in savepoint.data, and then `savepoint` would no
        # longer contain the originally saved data, and so `savepoint`
        # couldn't restore the original state if a rollback to this
        # savepoint was done again.  IOW, copy() is necessary.
        self.uncommitted = savepoint.data.copy()


class SampleSavepoint:

    interface.implements(transaction.interfaces.IDataManagerSavepoint)

    def __init__(self, data_manager, data):
        self.data_manager = data_manager
        self.data = data

    def rollback(self):
        self.data_manager._rollback_savepoint(self)
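The caution in ``_rollback_savepoint`` about needing ``copy()`` can be demonstrated with plain dicts: without the copy, the live mapping and the snapshot alias the same object, so later writes corrupt the snapshot.

```python
# Aliasing bug: "snapshot" and live data are the same dict.
saved = {'name': 'bob'}
uncommitted = saved            # no copy
uncommitted['name'] = 'sally'  # also mutates `saved`
print(saved['name'])           # sally -- the snapshot was corrupted

# Correct: the copy keeps the snapshot independent.
saved = {'name': 'bob'}
uncommitted = saved.copy()
uncommitted['name'] = 'sally'
print(saved['name'])           # bob -- snapshot preserved
```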
src/transaction/tests/test_SampleDataManager.py
0 → 100644
##############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Sample objects for use in tests

$Id$
"""
import unittest


class DataManager(object):
    """Sample data manager

    This class provides a trivial data-manager implementation and doc
    strings to illustrate the protocol and to provide a tool for
    writing tests.

    Our sample data manager has state that is updated through an inc
    method and through transaction operations.

    When we create a sample data manager:

    >>> dm = DataManager()

    It has two bits of state, state:

    >>> dm.state
    0

    and delta:

    >>> dm.delta
    0

    Both of which are initialized to 0.  state is meant to model
    committed state, while delta represents tentative changes within a
    transaction.  We change the state by calling inc:

    >>> dm.inc()

    which updates delta:

    >>> dm.delta
    1

    but state isn't changed until we commit the transaction:

    >>> dm.state
    0

    To commit the changes, we use 2-phase commit.  We execute the first
    stage by calling prepare.  We need to pass a transaction.  Our
    sample data managers don't really use the transactions for much,
    so we'll be lazy and use strings for transactions:

    >>> t1 = '1'
    >>> dm.prepare(t1)

    The sample data manager updates the state when we call prepare:

    >>> dm.state
    1
    >>> dm.delta
    1

    This is mainly so we can detect some effect of calling the methods.

    Now if we call commit:

    >>> dm.commit(t1)

    our changes are "permanent".  The state reflects the changes and the
    delta has been reset to 0.

    >>> dm.state
    1
    >>> dm.delta
    0
    """

    def __init__(self):
        self.state = 0
        self.sp = 0
        self.transaction = None
        self.delta = 0
        self.prepared = False

    def inc(self, n=1):
        self.delta += n

    def prepare(self, transaction):
        """Prepare to commit data

        >>> dm = DataManager()
        >>> dm.inc()
        >>> t1 = '1'
        >>> dm.prepare(t1)
        >>> dm.commit(t1)
        >>> dm.state
        1
        >>> dm.inc()
        >>> t2 = '2'
        >>> dm.prepare(t2)
        >>> dm.abort(t2)
        >>> dm.state
        1

        It is an error to call prepare more than once without an intervening
        commit or abort:

        >>> dm.prepare(t1)

        >>> dm.prepare(t1)
        Traceback (most recent call last):
        ...
        TypeError: Already prepared

        >>> dm.prepare(t2)
        Traceback (most recent call last):
        ...
        TypeError: Already prepared

        >>> dm.abort(t1)

        If there was a preceding savepoint, the transaction must match:

        >>> rollback = dm.savepoint(t1)
        >>> dm.prepare(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> dm.prepare(t1)
        """
        if self.prepared:
            raise TypeError('Already prepared')
        self._checkTransaction(transaction)
        self.prepared = True
        self.transaction = transaction
        self.state += self.delta

    def _checkTransaction(self, transaction):
        if (transaction is not self.transaction
            and self.transaction is not None):
            raise TypeError("Transaction missmatch",
                            transaction, self.transaction)

    def abort(self, transaction):
        """Abort a transaction

        The abort method can be called before two-phase commit to
        throw away work done in the transaction:

        >>> dm = DataManager()
        >>> dm.inc()
        >>> dm.state, dm.delta
        (0, 1)
        >>> t1 = '1'
        >>> dm.abort(t1)
        >>> dm.state, dm.delta
        (0, 0)

        The abort method also throws away work done in savepoints:

        >>> dm.inc()
        >>> r = dm.savepoint(t1)
        >>> dm.inc()
        >>> r = dm.savepoint(t1)
        >>> dm.state, dm.delta
        (0, 2)
        >>> dm.abort(t1)
        >>> dm.state, dm.delta
        (0, 0)

        If savepoints are used, abort must be passed the same
        transaction:

        >>> dm.inc()
        >>> r = dm.savepoint(t1)
        >>> t2 = '2'
        >>> dm.abort(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> dm.abort(t1)

        The abort method is also used to abort a two-phase commit:

        >>> dm.inc()
        >>> dm.state, dm.delta
        (0, 1)
        >>> dm.prepare(t1)
        >>> dm.state, dm.delta
        (1, 1)
        >>> dm.abort(t1)
        >>> dm.state, dm.delta
        (0, 0)

        Of course, the transactions passed to prepare and abort must
        match:

        >>> dm.prepare(t1)
        >>> dm.abort(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> dm.abort(t1)
        """
        self._checkTransaction(transaction)
        if self.transaction is not None:
            self.transaction = None
        if self.prepared:
            self.state -= self.delta
            self.prepared = False
        self.delta = 0

    def commit(self, transaction):
        """Complete two-phase commit

        >>> dm = DataManager()
        >>> dm.state
        0
        >>> dm.inc()

        We start two-phase commit by calling prepare:

        >>> t1 = '1'
        >>> dm.prepare(t1)

        We complete it by calling commit:

        >>> dm.commit(t1)
        >>> dm.state
        1

        It is an error to call commit without calling prepare first:

        >>> dm.inc()
        >>> t2 = '2'
        >>> dm.commit(t2)
        Traceback (most recent call last):
        ...
        TypeError: Not prepared to commit

        >>> dm.prepare(t2)
        >>> dm.commit(t2)

        Of course, the transactions given to prepare and commit must
        be the same:

        >>> dm.inc()
        >>> t3 = '3'
        >>> dm.prepare(t3)
        >>> dm.commit(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '3')
        """
        if not self.prepared:
            raise TypeError('Not prepared to commit')
        self._checkTransaction(transaction)
        self.delta = 0
        self.transaction = None
        self.prepared = False

    def savepoint(self, transaction):
        """Provide the ability to roll back transaction state

        Savepoints provide a way to:

        - Save partial transaction work.  For some data managers, this
          could allow resources to be used more efficiently.

        - Provide the ability to revert state to a point in a
          transaction without aborting the entire transaction.  In
          other words, savepoints support partial aborts.

        Savepoints don't use two-phase commit.  If there are errors in
        setting or rolling back to savepoints, the application should
        abort the containing transaction.  This is *not* the
        responsibility of the data manager.

        Savepoints are always associated with a transaction.  Any work
        done in a savepoint's transaction is tentative until the
        transaction is committed using two-phase commit.

        >>> dm = DataManager()
        >>> dm.inc()
        >>> t1 = '1'
        >>> r = dm.savepoint(t1)
        >>> dm.state, dm.delta
        (0, 1)
        >>> dm.inc()
        >>> dm.state, dm.delta
        (0, 2)
        >>> r.rollback()
        >>> dm.state, dm.delta
        (0, 1)
        >>> dm.prepare(t1)
        >>> dm.commit(t1)
        >>> dm.state, dm.delta
        (1, 0)

        Savepoints must have the same transaction:

        >>> r1 = dm.savepoint(t1)
        >>> dm.state, dm.delta
        (1, 0)
        >>> dm.inc()
        >>> dm.state, dm.delta
        (1, 1)
        >>> t2 = '2'
        >>> r2 = dm.savepoint(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> r2 = dm.savepoint(t1)
        >>> dm.inc()
        >>> dm.state, dm.delta
        (1, 2)

        If we rollback to an earlier savepoint, we discard all work
        done later:

        >>> r1.rollback()
        >>> dm.state, dm.delta
        (1, 0)

        and we can no longer rollback to the later savepoint:

        >>> r2.rollback()
        Traceback (most recent call last):
        ...
        TypeError: ('Attempt to roll back to invalid save point', 3, 2)

        We can roll back to a savepoint as often as we like:

        >>> r1.rollback()
        >>> r1.rollback()
        >>> r1.rollback()
        >>> dm.state, dm.delta
        (1, 0)

        >>> dm.inc()
        >>> dm.inc()
        >>> dm.inc()
        >>> dm.state, dm.delta
        (1, 3)
        >>> r1.rollback()
        >>> dm.state, dm.delta
        (1, 0)

        But we can't roll back to a savepoint after it has been
        committed:

        >>> dm.prepare(t1)
        >>> dm.commit(t1)

        >>> r1.rollback()
        Traceback (most recent call last):
        ...
        TypeError: Attempt to rollback stale rollback
        """
        if self.prepared:
            raise TypeError("Can't get savepoint during two-phase commit")
        self._checkTransaction(transaction)
        self.transaction = transaction
        self.sp += 1
        return Rollback(self)


class Rollback(object):

    def __init__(self, dm):
        self.dm = dm
        self.sp = dm.sp
        self.delta = dm.delta
        self.transaction = dm.transaction

    def rollback(self):
        if self.transaction is not self.dm.transaction:
            raise TypeError("Attempt to rollback stale rollback")
        if self.dm.sp < self.sp:
            raise TypeError("Attempt to roll back to invalid save point",
                            self.sp, self.dm.sp)
        self.dm.sp = self.sp
        self.dm.delta = self.delta


def test_suite():
    from zope.testing.doctest import DocTestSuite
    return DocTestSuite()

if __name__ == '__main__':
    unittest.main()
src/transaction/tests/test_SampleResourceManager.py
0 → 100644
##############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Sample objects for use in tests
$Id$
"""
class
ResourceManager
(
object
):
"""Sample resource manager.
This class provides a trivial resource-manager implementation and doc
strings to illustrate the protocol and to provide a tool for writing
tests.
Our sample resource manager has state that is updated through an inc
method and through transaction operations.
When we create a sample resource manager:
>>> rm = ResourceManager()
It has two pieces state, state and delta, both initialized to 0:
>>> rm.state
0
>>> rm.delta
0
state is meant to model committed state, while delta represents
tentative changes within a transaction. We change the state by
calling inc:
>>> rm.inc()
which updates delta:
>>> rm.delta
1
but state isn't changed until we commit the transaction:
>>> rm.state
0
To commit the changes, we use 2-phase commit. We execute the first
stage by calling prepare. We need to pass a transation. Our
sample resource managers don't really use the transactions for much,
so we'll be lazy and use strings for transactions. The sample
resource manager updates the state when we call tpc_vote:
>>> t1 = '1'
>>> rm.tpc_begin(t1)
>>> rm.state, rm.delta
(0, 1)
>>> rm.tpc_vote(t1)
>>> rm.state, rm.delta
(1, 1)
Now if we call tpc_finish:
>>> rm.tpc_finish(t1)
Our changes are "permanent". The state reflects the changes and the
delta has been reset to 0.
>>> rm.state, rm.delta
(1, 0)
"""
def
__init__
(
self
):
self
.
state
=
0
self
.
sp
=
0
self
.
transaction
=
None
self
.
delta
=
0
self
.
txn_state
=
None
def
_check_state
(
self
,
*
ok_states
):
if
self
.
txn_state
not
in
ok_states
:
raise
ValueError
(
"txn in state %r but expected one of %r"
%
(
self
.
txn_state
,
ok_states
))
def
_checkTransaction
(
self
,
transaction
):
if
(
transaction
is
not
self
.
transaction
and
self
.
transaction
is
not
None
):
raise
TypeError
(
"Transaction missmatch"
,
transaction
,
self
.
transaction
)
def
inc
(
self
,
n
=
1
):
self
.
delta
+=
n
def
tpc_begin
(
self
,
transaction
):
"""Prepare to commit data.
>>> rm = ResourceManager()
>>> rm.inc()
>>> t1 = '1'
>>> rm.tpc_begin(t1)
>>> rm.tpc_vote(t1)
>>> rm.tpc_finish(t1)
>>> rm.state
1
>>> rm.inc()
>>> t2 = '2'
>>> rm.tpc_begin(t2)
>>> rm.tpc_vote(t2)
>>> rm.tpc_abort(t2)
>>> rm.state
1
It is an error to call tpc_begin more than once without completing
two-phase commit:
>>> rm.tpc_begin(t1)
>>> rm.tpc_begin(t1)
Traceback (most recent call last):
...
ValueError: txn in state 'tpc_begin' but expected one of (None,)
>>> rm.tpc_abort(t1)
If there was a preceeding savepoint, the transaction must match:
>>> rollback = rm.savepoint(t1)
>>> rm.tpc_begin(t2)
Traceback (most recent call last):
,,,
TypeError: ('Transaction missmatch', '2', '1')
>>> rm.tpc_begin(t1)
"""
self
.
_checkTransaction
(
transaction
)
self
.
_check_state
(
None
)
self
.
transaction
=
transaction
self
.
txn_state
=
'tpc_begin'
def
tpc_vote
(
self
,
transaction
):
"""Verify that a data manager can commit the transaction.
This is the last chance for a data manager to vote 'no'. A
data manager votes 'no' by raising an exception.
transaction is the ITransaction instance associated with the
transaction being committed.
"""
self
.
_checkTransaction
(
transaction
)
self
.
_check_state
(
'tpc_begin'
)
self
.
state
+=
self
.
delta
self
.
txn_state
=
'tpc_vote'
    def tpc_finish(self, transaction):
        """Complete two-phase commit

        >>> rm = ResourceManager()
        >>> rm.state
        0
        >>> rm.inc()

        We start two-phase commit by calling tpc_begin:

        >>> t1 = '1'
        >>> rm.tpc_begin(t1)
        >>> rm.tpc_vote(t1)

        We complete it by calling tpc_finish:

        >>> rm.tpc_finish(t1)
        >>> rm.state
        1

        It is an error to call tpc_finish without calling tpc_vote:

        >>> rm.inc()
        >>> t2 = '2'
        >>> rm.tpc_begin(t2)
        >>> rm.tpc_finish(t2)
        Traceback (most recent call last):
        ...
        ValueError: txn in state 'tpc_begin' but expected one of ('tpc_vote',)

        >>> rm.tpc_abort(t2)  # clean slate
        >>> rm.tpc_begin(t2)
        >>> rm.tpc_vote(t2)
        >>> rm.tpc_finish(t2)

        Of course, the transactions given to tpc_begin and tpc_finish must
        be the same:

        >>> rm.inc()
        >>> t3 = '3'
        >>> rm.tpc_begin(t3)
        >>> rm.tpc_vote(t3)
        >>> rm.tpc_finish(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '3')
        """
        self._checkTransaction(transaction)
        self._check_state('tpc_vote')
        self.delta = 0
        self.transaction = None
        self.prepared = False
        self.txn_state = None
    def tpc_abort(self, transaction):
        """Abort a transaction

        The abort method can be called before two-phase commit to
        throw away work done in the transaction:

        >>> rm = ResourceManager()
        >>> rm.inc()
        >>> rm.state, rm.delta
        (0, 1)
        >>> t1 = '1'
        >>> rm.tpc_abort(t1)
        >>> rm.state, rm.delta
        (0, 0)

        The abort method also throws away work done in savepoints:

        >>> rm.inc()
        >>> r = rm.savepoint(t1)
        >>> rm.inc()
        >>> r = rm.savepoint(t1)
        >>> rm.state, rm.delta
        (0, 2)
        >>> rm.tpc_abort(t1)
        >>> rm.state, rm.delta
        (0, 0)

        If savepoints are used, abort must be passed the same
        transaction:

        >>> rm.inc()
        >>> r = rm.savepoint(t1)
        >>> t2 = '2'
        >>> rm.tpc_abort(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> rm.tpc_abort(t1)

        The abort method is also used to abort a two-phase commit:

        >>> rm.inc()
        >>> rm.state, rm.delta
        (0, 1)
        >>> rm.tpc_begin(t1)
        >>> rm.state, rm.delta
        (0, 1)
        >>> rm.tpc_vote(t1)
        >>> rm.state, rm.delta
        (1, 1)
        >>> rm.tpc_abort(t1)
        >>> rm.state, rm.delta
        (0, 0)

        Of course, the transactions passed to prepare and abort must
        match:

        >>> rm.tpc_begin(t1)
        >>> rm.tpc_abort(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> rm.tpc_abort(t1)

        This should never fail.
        """
        self._checkTransaction(transaction)
        if self.transaction is not None:
            self.transaction = None
        if self.txn_state == 'tpc_vote':
            self.state -= self.delta
        self.txn_state = None
        self.delta = 0
    def savepoint(self, transaction):
        """Provide the ability to rollback transaction state

        Savepoints provide a way to:

        - Save partial transaction work.  For some resource managers, this
          could allow resources to be used more efficiently.

        - Provide the ability to revert state to a point in a
          transaction without aborting the entire transaction.  In
          other words, savepoints support partial aborts.

        Savepoints don't use two-phase commit.  If there are errors in
        setting or rolling back to savepoints, the application should
        abort the containing transaction.  This is *not* the
        responsibility of the resource manager.

        Savepoints are always associated with a transaction.  Any work
        done in a savepoint's transaction is tentative until the
        transaction is committed using two-phase commit.

        >>> rm = ResourceManager()
        >>> rm.inc()
        >>> t1 = '1'
        >>> r = rm.savepoint(t1)
        >>> rm.state, rm.delta
        (0, 1)
        >>> rm.inc()
        >>> rm.state, rm.delta
        (0, 2)
        >>> r.rollback()
        >>> rm.state, rm.delta
        (0, 1)
        >>> rm.tpc_begin(t1)
        >>> rm.tpc_vote(t1)
        >>> rm.tpc_finish(t1)
        >>> rm.state, rm.delta
        (1, 0)

        Savepoints must have the same transaction:

        >>> r1 = rm.savepoint(t1)
        >>> rm.state, rm.delta
        (1, 0)
        >>> rm.inc()
        >>> rm.state, rm.delta
        (1, 1)
        >>> t2 = '2'
        >>> r2 = rm.savepoint(t2)
        Traceback (most recent call last):
        ...
        TypeError: ('Transaction missmatch', '2', '1')

        >>> r2 = rm.savepoint(t1)
        >>> rm.inc()
        >>> rm.state, rm.delta
        (1, 2)

        If we rollback to an earlier savepoint, we discard all work
        done later:

        >>> r1.rollback()
        >>> rm.state, rm.delta
        (1, 0)

        and we can no longer rollback to the later savepoint:

        >>> r2.rollback()
        Traceback (most recent call last):
        ...
        TypeError: ('Attempt to roll back to invalid save point', 3, 2)

        We can roll back to a savepoint as often as we like:

        >>> r1.rollback()
        >>> r1.rollback()
        >>> r1.rollback()
        >>> rm.state, rm.delta
        (1, 0)

        >>> rm.inc()
        >>> rm.inc()
        >>> rm.inc()
        >>> rm.state, rm.delta
        (1, 3)
        >>> r1.rollback()
        >>> rm.state, rm.delta
        (1, 0)

        But we can't rollback to a savepoint after it has been
        committed:

        >>> rm.tpc_begin(t1)
        >>> rm.tpc_vote(t1)
        >>> rm.tpc_finish(t1)
        >>> r1.rollback()
        Traceback (most recent call last):
        ...
        TypeError: Attempt to rollback stale rollback
        """
        if self.txn_state is not None:
            raise TypeError("Can't get savepoint during two-phase commit")
        self._checkTransaction(transaction)
        self.transaction = transaction
        self.sp += 1
        return SavePoint(self)

    def discard(self, transaction):
        pass
class SavePoint(object):

    def __init__(self, rm):
        self.rm = rm
        self.sp = rm.sp
        self.delta = rm.delta
        self.transaction = rm.transaction
    def rollback(self):
        if self.transaction is not self.rm.transaction:
            raise TypeError("Attempt to rollback stale rollback")
        if self.rm.sp < self.sp:
            raise TypeError("Attempt to roll back to invalid save point",
                            self.sp, self.rm.sp)
        self.rm.sp = self.sp
        self.rm.delta = self.delta
    def discard(self):
        pass
def test_suite():
    from doctest import DocTestSuite
    return DocTestSuite()

if __name__ == '__main__':
    unittest.main()
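For readers following the doctests above, the tpc_begin / tpc_vote / tpc_finish / tpc_abort state machine can be condensed into a few lines of plain Python. This is an illustrative sketch only; the class and names below are made up for this example and are not part of the transaction package:

```python
class TinyRM:
    """Minimal two-phase-commit state machine mirroring the doctests above."""

    def __init__(self):
        self.state = 0         # committed value
        self.delta = 0         # tentative, uncommitted work
        self.txn_state = None  # None, 'tpc_begin', or 'tpc_vote'

    def inc(self, n=1):
        self.delta += n

    def tpc_begin(self, txn):
        assert self.txn_state is None
        self.txn_state = 'tpc_begin'

    def tpc_vote(self, txn):
        assert self.txn_state == 'tpc_begin'
        self.state += self.delta       # tentatively apply the work
        self.txn_state = 'tpc_vote'

    def tpc_finish(self, txn):
        assert self.txn_state == 'tpc_vote'
        self.delta = 0                 # work is now committed
        self.txn_state = None

    def tpc_abort(self, txn):
        if self.txn_state == 'tpc_vote':
            self.state -= self.delta   # undo the tentative apply
        self.delta = 0
        self.txn_state = None

rm = TinyRM()
rm.inc()
rm.tpc_begin('t1'); rm.tpc_vote('t1'); rm.tpc_finish('t1')
print(rm.state)   # 1: the increment was committed
rm.inc()
rm.tpc_begin('t2'); rm.tpc_vote('t2'); rm.tpc_abort('t2')
print(rm.state)   # 1: the aborted increment was rolled back
```

Note that tpc_abort must undo the state change only when the vote has already happened, exactly as in the ResourceManager above.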
src/transaction/tests/test_register_compat.py
0 → 100644
##############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Test backwards compatibility for resource managers using register().
The transaction package supports several different APIs for resource
managers. The original ZODB3 API was implemented by ZODB.Connection.
The Connection passed persistent objects to a Transaction's register()
method.  It's possible that third-party code also used this API, hence
these tests, which check that the code adapting the old interface to
the current API works.
These tests use a TestConnection object that implements the old API.
They check that the right methods are called and in roughly the right
order.
Common cases
------------
First, check that a basic transaction commit works.
>>> cn = TestConnection()
>>> cn.register(Object())
>>> cn.register(Object())
>>> cn.register(Object())
>>> transaction.commit()
>>> len(cn.committed)
3
>>> len(cn.aborted)
0
>>> cn.calls
['begin', 'vote', 'finish']
Second, check that a basic transaction abort works. If the
application calls abort(), then the transaction never gets into the
two-phase commit. It just aborts each object.
>>> cn = TestConnection()
>>> cn.register(Object())
>>> cn.register(Object())
>>> cn.register(Object())
>>> transaction.abort()
>>> len(cn.committed)
0
>>> len(cn.aborted)
3
>>> cn.calls
[]
Error handling
--------------
The tricky part of the implementation is recovering from an error that
occurs during the two-phase commit. We override the commit() and
abort() methods of Object to cause errors during commit.
Note that the implementation uses lists internally, so that objects
are committed in the order they are registered. (In the presence of
multiple resource managers, objects from a single resource manager are
committed in order. I'm not sure if this is an accident of the
implementation or a feature that should be supported by any
implementation.)
The order of resource managers depends on sortKey().
>>> cn = TestConnection()
>>> cn.register(Object())
>>> cn.register(CommitError())
>>> cn.register(Object())
>>> transaction.commit()
Traceback (most recent call last):
...
RuntimeError: commit
>>> len(cn.committed)
1
>>> len(cn.aborted)
3
Clean up:
>>> transaction.abort()
"""
import transaction

class Object(object):

    def commit(self):
        pass

    def abort(self):
        pass

class CommitError(Object):

    def commit(self):
        raise RuntimeError("commit")

class AbortError(Object):

    def abort(self):
        raise RuntimeError("abort")

class BothError(CommitError, AbortError):
    pass
class TestConnection:

    def __init__(self):
        self.committed = []
        self.aborted = []
        self.calls = []

    def register(self, obj):
        obj._p_jar = self
        transaction.get().register(obj)

    def sortKey(self):
        return str(id(self))

    def tpc_begin(self, txn):
        self.calls.append("begin")

    def tpc_vote(self, txn):
        self.calls.append("vote")

    def tpc_finish(self, txn):
        self.calls.append("finish")

    def tpc_abort(self, txn):
        self.calls.append("abort")

    def commit(self, obj, txn):
        obj.commit()
        self.committed.append(obj)

    def abort(self, obj, txn):
        obj.abort()
        self.aborted.append(obj)
from zope.testing import doctest

def test_suite():
    return doctest.DocTestSuite()
src/transaction/tests/test_savepoint.py
0 → 100644
##############################################################################
#
# Copyright (c) 2004 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
"""Tests of savepoint feature
$Id$
"""
import unittest
from zope.testing import doctest
def testRollbackRollsbackDataManagersThatJoinedLater():
    """
    A savepoint needs to roll back not just its own savepoints, but also
    savepoints for data managers that joined after the savepoint was made:
>>> import transaction.tests.savepointsample
>>> dm = transaction.tests.savepointsample.SampleSavepointDataManager()
>>> dm['name'] = 'bob'
>>> sp1 = transaction.savepoint()
>>> dm['job'] = 'geek'
>>> sp2 = transaction.savepoint()
>>> dm['salary'] = 'fun'
>>> dm2 = transaction.tests.savepointsample.SampleSavepointDataManager()
>>> dm2['name'] = 'sally'
>>> 'name' in dm
True
>>> 'job' in dm
True
>>> 'salary' in dm
True
>>> 'name' in dm2
True
>>> sp1.rollback()
>>> 'name' in dm
True
>>> 'job' in dm
False
>>> 'salary' in dm
False
>>> 'name' in dm2
False
"""
def test_suite():
    return unittest.TestSuite((
        doctest.DocFileSuite('../savepoint.txt'),
        doctest.DocTestSuite(),
        ))

if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
src/transaction/tests/test_transaction.py
0 → 100644
##############################################################################
#
# Copyright (c) 2001, 2002, 2005 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.1 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Test transaction behavior for variety of cases.
I wrote these unittests to investigate some odd transaction
behavior when doing unittests of integrating non sub transaction
aware objects, and to insure proper txn behavior. these
tests test the transaction system independent of the rest of the
zodb.
you can see the method calls to a jar by passing the
keyword arg tracing to the modify method of a dataobject.
the value of the arg is a prefix used for tracing print calls
to that objects jar.
the number of times a jar method was called can be inspected
by looking at an attribute of the jar that is the method
name prefixed with a c (count/check).
i've included some tracing examples for tests that i thought
were illuminating as doc strings below.
TODO
add in tests for objects which are modified multiple times,
for example an object that gets modified in multiple sub txns.
$Id$
"""
import unittest
import warnings

import transaction
from ZODB.utils import positive_id
from ZODB.tests.warnhook import WarningsHook
class TransactionTests(unittest.TestCase):

    def setUp(self):
        mgr = self.transaction_manager = transaction.TransactionManager()
        self.sub1 = DataObject(mgr)
        self.sub2 = DataObject(mgr)
        self.sub3 = DataObject(mgr)
        self.nosub1 = DataObject(mgr, nost=1)
    # basic tests with two sub trans jars
    # really we only need one, so tests for
    # sub1 should be identical to tests for sub2
    def testTransactionCommit(self):
        self.sub1.modify()
        self.sub2.modify()
        self.transaction_manager.commit()
        assert self.sub1._p_jar.ccommit_sub == 0
        assert self.sub1._p_jar.ctpc_finish == 1
    def testTransactionAbort(self):
        self.sub1.modify()
        self.sub2.modify()
        self.transaction_manager.abort()
        assert self.sub2._p_jar.cabort == 1
    def testTransactionNote(self):
        t = self.transaction_manager.get()

        t.note('This is a note.')
        self.assertEqual(t.description, 'This is a note.')
        t.note('Another.')
        self.assertEqual(t.description, 'This is a note.\n\nAnother.')

        t.abort()
    # repeat adding in a nonsub trans jars

    def testNSJTransactionCommit(self):
        self.nosub1.modify()
        self.transaction_manager.commit()
        assert self.nosub1._p_jar.ctpc_finish == 1
    def testNSJTransactionAbort(self):
        self.nosub1.modify()
        self.transaction_manager.abort()
        assert self.nosub1._p_jar.ctpc_finish == 0
        assert self.nosub1._p_jar.cabort == 1
    ### Failure Mode Tests
    #
    # ok now we do some more interesting
    # tests that check the implementation's
    # error handling by throwing errors from
    # various jar methods
    ###

    # first the recoverable errors
    def testExceptionInAbort(self):
        self.sub1._p_jar = BasicJar(errors='abort')

        self.nosub1.modify()
        self.sub1.modify(nojar=1)
        self.sub2.modify()

        try:
            self.transaction_manager.abort()
        except TestTxnException:
            pass

        assert self.nosub1._p_jar.cabort == 1
        assert self.sub2._p_jar.cabort == 1
    def testExceptionInCommit(self):
        self.sub1._p_jar = BasicJar(errors='commit')

        self.nosub1.modify()
        self.sub1.modify(nojar=1)

        try:
            self.transaction_manager.commit()
        except TestTxnException:
            pass

        assert self.nosub1._p_jar.ctpc_finish == 0
        assert self.nosub1._p_jar.ccommit == 1
        assert self.nosub1._p_jar.ctpc_abort == 1
    def testExceptionInTpcVote(self):
        self.sub1._p_jar = BasicJar(errors='tpc_vote')

        self.nosub1.modify()
        self.sub1.modify(nojar=1)

        try:
            self.transaction_manager.commit()
        except TestTxnException:
            pass

        assert self.nosub1._p_jar.ctpc_finish == 0
        assert self.nosub1._p_jar.ccommit == 1
        assert self.nosub1._p_jar.ctpc_abort == 1
        assert self.sub1._p_jar.ctpc_abort == 1
    def testExceptionInTpcBegin(self):
        """
        ok this test reveals a bug in the TM.py
        as the nosub tpc_abort there is ignored.

        nosub calling method tpc_begin
        nosub calling method commit
        sub calling method tpc_begin
        sub calling method abort
        sub calling method tpc_abort
        nosub calling method tpc_abort
        """
        self.sub1._p_jar = BasicJar(errors='tpc_begin')

        self.nosub1.modify()
        self.sub1.modify(nojar=1)

        try:
            self.transaction_manager.commit()
        except TestTxnException:
            pass

        assert self.nosub1._p_jar.ctpc_abort == 1
        assert self.sub1._p_jar.ctpc_abort == 1
    def testExceptionInTpcAbort(self):
        self.sub1._p_jar = BasicJar(errors=('tpc_abort', 'tpc_vote'))

        self.nosub1.modify()
        self.sub1.modify(nojar=1)

        try:
            self.transaction_manager.commit()
        except TestTxnException:
            pass

        assert self.nosub1._p_jar.ctpc_abort == 1
# last test, check the hosing mechanism
## def testHoserStoppage(self):
## # It's hard to test the "hosed" state of the database, where
## # hosed means that a failure occurred in the second phase of
## # the two phase commit. It's hard because the database can
## # recover from such an error if it occurs during the very first
## # tpc_finish() call of the second phase.
## for obj in self.sub1, self.sub2:
## j = HoserJar(errors='tpc_finish')
## j.reset()
## obj._p_jar = j
## obj.modify(nojar=1)
## try:
## transaction.commit()
## except TestTxnException:
## pass
## self.assert_(Transaction.hosed)
## self.sub2.modify()
## try:
## transaction.commit()
## except Transaction.POSException.TransactionError:
## pass
## else:
## self.fail("Hosed Application didn't stop commits")
class DataObject:

    def __init__(self, transaction_manager, nost=0):
        self.transaction_manager = transaction_manager
        self.nost = nost
        self._p_jar = None

    def modify(self, nojar=0, tracing=0):
        if not nojar:
            if self.nost:
                self._p_jar = BasicJar(tracing=tracing)
            else:
                self._p_jar = BasicJar(tracing=tracing)
        self.transaction_manager.get().join(self._p_jar)
class TestTxnException(Exception):
    pass
class BasicJar:

    def __init__(self, errors=(), tracing=0):
        if not isinstance(errors, tuple):
            errors = errors,
        self.errors = errors
        self.tracing = tracing
        self.cabort = 0
        self.ccommit = 0
        self.ctpc_begin = 0
        self.ctpc_abort = 0
        self.ctpc_vote = 0
        self.ctpc_finish = 0
        self.cabort_sub = 0
        self.ccommit_sub = 0
    def __repr__(self):
        return "<%s %X %s>" % (self.__class__.__name__,
                               positive_id(self),
                               self.errors)
    def sortKey(self):
        # All these jars use the same sort key, and Python's list.sort()
        # is stable.  These two
        return self.__class__.__name__
    def check(self, method):
        if self.tracing:
            print '%s calling method %s' % (str(self.tracing), method)

        if method in self.errors:
            raise TestTxnException("error %s" % method)
    ## basic jar txn interface

    def abort(self, *args):
        self.check('abort')
        self.cabort += 1

    def commit(self, *args):
        self.check('commit')
        self.ccommit += 1

    def tpc_begin(self, txn, sub=0):
        self.check('tpc_begin')
        self.ctpc_begin += 1

    def tpc_vote(self, *args):
        self.check('tpc_vote')
        self.ctpc_vote += 1

    def tpc_abort(self, *args):
        self.check('tpc_abort')
        self.ctpc_abort += 1

    def tpc_finish(self, *args):
        self.check('tpc_finish')
        self.ctpc_finish += 1
class HoserJar(BasicJar):

    # The HoserJars coordinate their actions via the class variable
    # committed.  The check() method will only raise its exception
    # if committed > 0.

    committed = 0

    def reset(self):
        # Calling reset() on any instance will reset the class variable.
        HoserJar.committed = 0

    def check(self, method):
        if HoserJar.committed > 0:
            BasicJar.check(self, method)

    def tpc_finish(self, *args):
        self.check('tpc_finish')
        self.ctpc_finish += 1
        HoserJar.committed += 1
def test_join():
    """White-box test of the join method

    The join method is provided for "backward-compatibility" with ZODB 4
    data managers.
The argument to join must be a zodb4 data manager,
transaction.interfaces.IDataManager.
>>> from ZODB.tests.sampledm import DataManager
>>> from transaction._transaction import DataManagerAdapter
>>> t = transaction.Transaction()
>>> dm = DataManager()
>>> t.join(dm)
The end result is that a data manager adapter is one of the
transaction's objects:
>>> isinstance(t._resources[0], DataManagerAdapter)
True
>>> t._resources[0]._datamanager is dm
True
"""
def hook():
    pass
# deprecated38; remove this then
def test_beforeCommitHook():
"""Test beforeCommitHook.
Let's define a hook to call, and a way to see that it was called.
>>> log = []
>>> def reset_log():
... del log[:]
>>> def hook(arg='no_arg', kw1='no_kw1', kw2='no_kw2'):
... log.append("arg %r kw1 %r kw2 %r" % (arg, kw1, kw2))
beforeCommitHook is deprecated, so we need cruft to suppress the
warnings.
>>> whook = WarningsHook()
>>> whook.install()
Fool the warnings module into delivering the warnings despite that
they've been seen before; this is needed in case this test is run
more than once.
>>> import warnings
>>> warnings.filterwarnings("always", category=DeprecationWarning)
Now register the hook with a transaction.
>>> import transaction
>>> t = transaction.begin()
>>> t.beforeCommitHook(hook, '1')
Make sure it triggered a deprecation warning:
>>> len(whook.warnings)
1
>>> message, category, filename, lineno = whook.warnings[0]
>>> print message
This will be removed in ZODB 3.8:
Use addBeforeCommitHook instead of beforeCommitHook.
>>> category.__name__
'DeprecationWarning'
>>> whook.clear()
We can see that the hook is indeed registered.
>>> [(hook.func_name, args, kws)
... for hook, args, kws in t.getBeforeCommitHooks()]
[('hook', ('1',), {})]
When transaction commit starts, the hook is called, with its
arguments.
>>> log
[]
>>> t.commit()
>>> log
["arg '1' kw1 'no_kw1' kw2 'no_kw2'"]
>>> reset_log()
A hook's registration is consumed whenever the hook is called. Since
the hook above was called, it's no longer registered:
>>> len(list(t.getBeforeCommitHooks()))
0
>>> transaction.commit()
>>> log
[]
The hook is only called for a full commit, not for a savepoint.
>>> t = transaction.begin()
>>> t.beforeCommitHook(hook, 'A', kw1='B')
>>> dummy = t.savepoint()
>>> log
[]
>>> t.commit()
>>> log
["arg 'A' kw1 'B' kw2 'no_kw2'"]
>>> reset_log()
If a transaction is aborted, no hook is called.
>>> t = transaction.begin()
>>> t.beforeCommitHook(hook, "OOPS!")
>>> transaction.abort()
>>> log
[]
>>> transaction.commit()
>>> log
[]
The hook is called before the commit does anything, so even if the
commit fails the hook will have been called. To provoke failures in
commit, we'll add failing resource manager to the transaction.
>>> class CommitFailure(Exception):
... pass
>>> class FailingDataManager:
... def tpc_begin(self, txn, sub=False):
... raise CommitFailure
... def abort(self, txn):
... pass
>>> t = transaction.begin()
>>> t.join(FailingDataManager())
>>> t.beforeCommitHook(hook, '2')
>>> t.commit()
Traceback (most recent call last):
...
CommitFailure
>>> log
["arg '2' kw1 'no_kw1' kw2 'no_kw2'"]
>>> reset_log()
Let's register several hooks.
>>> t = transaction.begin()
>>> t.beforeCommitHook(hook, '4', kw1='4.1')
>>> t.beforeCommitHook(hook, '5', kw2='5.2')
They are returned in the same order by getBeforeCommitHooks.
>>> [(hook.func_name, args, kws) #doctest: +NORMALIZE_WHITESPACE
... for hook, args, kws in t.getBeforeCommitHooks()]
[('hook', ('4',), {'kw1': '4.1'}),
('hook', ('5',), {'kw2': '5.2'})]
And commit also calls them in this order.
>>> t.commit()
>>> len(log)
2
>>> log #doctest: +NORMALIZE_WHITESPACE
["arg '4' kw1 '4.1' kw2 'no_kw2'",
"arg '5' kw1 'no_kw1' kw2 '5.2'"]
>>> reset_log()
While executing, a hook can itself add more hooks, and they will all
be called before the real commit starts.
>>> def recurse(txn, arg):
... log.append('rec' + str(arg))
... if arg:
... txn.beforeCommitHook(hook, '-')
... txn.beforeCommitHook(recurse, txn, arg-1)
>>> t = transaction.begin()
>>> t.beforeCommitHook(recurse, t, 3)
>>> transaction.commit()
>>> log #doctest: +NORMALIZE_WHITESPACE
['rec3',
"arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec2',
"arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec1',
"arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec0']
>>> reset_log()
We have to uninstall the warnings hook so that other warnings don't get
lost.
>>> whook.uninstall()
Obscure: There is no API call for removing the filter we added, but
filters appears to be a public variable.
>>> del warnings.filters[0]
"""
def test_addBeforeCommitHook():
"""Test addBeforeCommitHook.
Let's define a hook to call, and a way to see that it was called.
>>> log = []
>>> def reset_log():
... del log[:]
>>> def hook(arg='no_arg', kw1='no_kw1', kw2='no_kw2'):
... log.append("arg %r kw1 %r kw2 %r" % (arg, kw1, kw2))
Now register the hook with a transaction.
>>> import transaction
>>> t = transaction.begin()
>>> t.addBeforeCommitHook(hook, '1')
We can see that the hook is indeed registered.
>>> [(hook.func_name, args, kws)
... for hook, args, kws in t.getBeforeCommitHooks()]
[('hook', ('1',), {})]
When transaction commit starts, the hook is called, with its
arguments.
>>> log
[]
>>> t.commit()
>>> log
["arg '1' kw1 'no_kw1' kw2 'no_kw2'"]
>>> reset_log()
A hook's registration is consumed whenever the hook is called. Since
the hook above was called, it's no longer registered:
>>> len(list(t.getBeforeCommitHooks()))
0
>>> transaction.commit()
>>> log
[]
The hook is only called for a full commit, not for a savepoint.
>>> t = transaction.begin()
>>> t.addBeforeCommitHook(hook, 'A', dict(kw1='B'))
>>> dummy = t.savepoint()
>>> log
[]
>>> t.commit()
>>> log
["arg 'A' kw1 'B' kw2 'no_kw2'"]
>>> reset_log()
If a transaction is aborted, no hook is called.
>>> t = transaction.begin()
>>> t.addBeforeCommitHook(hook, ["OOPS!"])
>>> transaction.abort()
>>> log
[]
>>> transaction.commit()
>>> log
[]
The hook is called before the commit does anything, so even if the
commit fails the hook will have been called. To provoke failures in
commit, we'll add failing resource manager to the transaction.
>>> class CommitFailure(Exception):
... pass
>>> class FailingDataManager:
... def tpc_begin(self, txn, sub=False):
... raise CommitFailure
... def abort(self, txn):
... pass
>>> t = transaction.begin()
>>> t.join(FailingDataManager())
>>> t.addBeforeCommitHook(hook, '2')
>>> t.commit()
Traceback (most recent call last):
...
CommitFailure
>>> log
["arg '2' kw1 'no_kw1' kw2 'no_kw2'"]
>>> reset_log()
Let's register several hooks.
>>> t = transaction.begin()
>>> t.addBeforeCommitHook(hook, '4', dict(kw1='4.1'))
>>> t.addBeforeCommitHook(hook, '5', dict(kw2='5.2'))
They are returned in the same order by getBeforeCommitHooks.
>>> [(hook.func_name, args, kws) #doctest: +NORMALIZE_WHITESPACE
... for hook, args, kws in t.getBeforeCommitHooks()]
[('hook', ('4',), {'kw1': '4.1'}),
('hook', ('5',), {'kw2': '5.2'})]
And commit also calls them in this order.
>>> t.commit()
>>> len(log)
2
>>> log #doctest: +NORMALIZE_WHITESPACE
["arg '4' kw1 '4.1' kw2 'no_kw2'",
"arg '5' kw1 'no_kw1' kw2 '5.2'"]
>>> reset_log()
While executing, a hook can itself add more hooks, and they will all
be called before the real commit starts.
>>> def recurse(txn, arg):
... log.append('rec' + str(arg))
... if arg:
... txn.addBeforeCommitHook(hook, '-')
... txn.addBeforeCommitHook(recurse, (txn, arg-1))
>>> t = transaction.begin()
>>> t.addBeforeCommitHook(recurse, (t, 3))
>>> transaction.commit()
>>> log #doctest: +NORMALIZE_WHITESPACE
['rec3',
"arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec2',
"arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec1',
"arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec0']
>>> reset_log()
    Modifying persistent objects within before-commit hooks modifies
    the objects, of course :)
Start a new transaction
>>> t = transaction.begin()
Create a DB instance and add a IOBTree within
>>> from ZODB.tests.util import DB
>>> from ZODB.tests.util import P
>>> db = DB()
>>> con = db.open()
>>> root = con.root()
>>> root['p'] = P('julien')
>>> p = root['p']
>>> p.name
'julien'
This hook will get the object from the `DB` instance and change
the flag attribute.
>>> def hookmodify(status, arg=None, kw1='no_kw1', kw2='no_kw2'):
... p.name = 'jul'
Now register this hook and commit.
>>> t.addBeforeCommitHook(hookmodify, (p, 1))
>>> transaction.commit()
    The hook's modification is part of the committed transaction:
>>> p.name
'jul'
>>> db.close()
"""
def test_addAfterCommitHook():
"""Test addAfterCommitHook.
Let's define a hook to call, and a way to see that it was called.
>>> log = []
>>> def reset_log():
... del log[:]
>>> def hook(status, arg='no_arg', kw1='no_kw1', kw2='no_kw2'):
... log.append("%r arg %r kw1 %r kw2 %r" % (status, arg, kw1, kw2))
Now register the hook with a transaction.
>>> import transaction
>>> t = transaction.begin()
>>> t.addAfterCommitHook(hook, '1')
We can see that the hook is indeed registered.
>>> [(hook.func_name, args, kws)
... for hook, args, kws in t.getAfterCommitHooks()]
[('hook', ('1',), {})]
When transaction commit is done, the hook is called, with its
arguments.
>>> log
[]
>>> t.commit()
>>> log
["True arg '1' kw1 'no_kw1' kw2 'no_kw2'"]
>>> reset_log()
A hook's registration is consumed whenever the hook is called. Since
the hook above was called, it's no longer registered:
>>> len(list(t.getAfterCommitHooks()))
0
>>> transaction.commit()
>>> log
[]
The hook is only called after a full commit, not for a savepoint.
>>> t = transaction.begin()
>>> t.addAfterCommitHook(hook, 'A', dict(kw1='B'))
>>> dummy = t.savepoint()
>>> log
[]
>>> t.commit()
>>> log
["True arg 'A' kw1 'B' kw2 'no_kw2'"]
>>> reset_log()
If a transaction is aborted, no hook is called.
>>> t = transaction.begin()
>>> t.addAfterCommitHook(hook, ["OOPS!"])
>>> transaction.abort()
>>> log
[]
>>> transaction.commit()
>>> log
[]
The hook is called after the commit is done, so even if the
commit fails the hook will have been called. To provoke failures in
commit, we'll add failing resource manager to the transaction.
>>> class CommitFailure(Exception):
... pass
>>> class FailingDataManager:
... def tpc_begin(self, txn):
... raise CommitFailure
... def abort(self, txn):
... pass
>>> t = transaction.begin()
>>> t.join(FailingDataManager())
>>> t.addAfterCommitHook(hook, '2')
>>> t.commit()
Traceback (most recent call last):
...
CommitFailure
>>> log
["False arg '2' kw1 'no_kw1' kw2 'no_kw2'"]
>>> reset_log()
Let's register several hooks.
>>> t = transaction.begin()
>>> t.addAfterCommitHook(hook, '4', dict(kw1='4.1'))
>>> t.addAfterCommitHook(hook, '5', dict(kw2='5.2'))
They are returned in the same order by getAfterCommitHooks.
>>> [(hook.func_name, args, kws) #doctest: +NORMALIZE_WHITESPACE
... for hook, args, kws in t.getAfterCommitHooks()]
[('hook', ('4',), {'kw1': '4.1'}),
('hook', ('5',), {'kw2': '5.2'})]
And commit also calls them in this order.
>>> t.commit()
>>> len(log)
2
>>> log #doctest: +NORMALIZE_WHITESPACE
["True arg '4' kw1 '4.1' kw2 'no_kw2'",
"True arg '5' kw1 'no_kw1' kw2 '5.2'"]
>>> reset_log()
    While executing, a hook can itself add more hooks, and they will all
    be called after the real commit has finished.
>>> def recurse(status, txn, arg):
... log.append('rec' + str(arg))
... if arg:
... txn.addAfterCommitHook(hook, '-')
... txn.addAfterCommitHook(recurse, (txn, arg-1))
>>> t = transaction.begin()
>>> t.addAfterCommitHook(recurse, (t, 3))
>>> transaction.commit()
>>> log #doctest: +NORMALIZE_WHITESPACE
['rec3',
"True arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec2',
"True arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec1',
"True arg '-' kw1 'no_kw1' kw2 'no_kw2'",
'rec0']
>>> reset_log()
    If an after-commit hook raises an exception, a message is logged at
    error level so that any other registered hooks can still be executed.
    We don't support execution dependencies at this level.
>>> mgr = transaction.TransactionManager()
>>> do = DataObject(mgr)
>>> def hookRaise(status, arg='no_arg', kw1='no_kw1', kw2='no_kw2'):
... raise TypeError("Fake raise")
>>> t = transaction.begin()
>>> t.addAfterCommitHook(hook, ('-', 1))
>>> t.addAfterCommitHook(hookRaise, ('-', 2))
>>> t.addAfterCommitHook(hook, ('-', 3))
>>> transaction.commit()
>>> log
["True arg '-' kw1 1 kw2 'no_kw2'", "True arg '-' kw1 3 kw2 'no_kw2'"]
>>> reset_log()
    Test that the associated transaction manager has been cleaned up when
    after-commit hooks are registered:
>>> mgr = transaction.TransactionManager()
>>> do = DataObject(mgr)
>>> t = transaction.begin()
>>> len(t._manager._txns)
1
>>> t.addAfterCommitHook(hook, ('-', 1))
>>> transaction.commit()
>>> log
["True arg '-' kw1 1 kw2 'no_kw2'"]
>>> len(t._manager._txns)
0
>>> reset_log()
The transaction is already committed when the after commit hooks
will be executed. Executing the hooks must not have further
effects on persistent objects.
Start a new transaction
>>> t = transaction.begin()
Create a DB instance and add a IOBTree within
>>> from ZODB.tests.util import DB
>>> from ZODB.tests.util import P
>>> db = DB()
>>> con = db.open()
>>> root = con.root()
>>> root['p'] = P('julien')
>>> p = root['p']
>>> p.name
'julien'
This hook will get the object from the `DB` instance and change
the flag attribute.
>>> def badhook(status, arg=None, kw1='no_kw1', kw2='no_kw2'):
... p.name = 'jul'
Now register this hook and commit.
>>> t.addAfterCommitHook(badhook, (p, 1))
>>> transaction.commit()
Nothing should have changed since it should have been aborted.
>>> p.name
'julien'
>>> db.close()
"""
def test_suite():
    from zope.testing.doctest import DocTestSuite, DocFileSuite
    return unittest.TestSuite((
        DocFileSuite('doom.txt'),
        DocTestSuite(),
        unittest.makeSuite(TransactionTests),
        ))

if __name__ == '__main__':
    unittest.TextTestRunner().run(test_suite())