Kirill Smelkov / ZODB / Commits / 115158a3

Commit 115158a3 authored May 20, 2015 by Tres Seaver

Merge pull request #34 from NextThought/pypy

Support PyPy.

parents 4bc02fde 228604dd

Showing 36 changed files with 309 additions and 143 deletions (+309 -143)
.gitignore                                    +1  -0
.travis.yml                                   +3  -1
CHANGES.rst                                   +8  -1
setup.py                                     +11  -5
src/ZODB/ConflictResolution.py                +4 -13
src/ZODB/Connection.py                       +13 -12
src/ZODB/DB.py                                +9  -2
src/ZODB/DemoStorage.test                    +15  -6
src/ZODB/ExportImport.py                      +2  -7
src/ZODB/FileStorage/iterator.test            +4  -1
src/ZODB/POSException.py                     +10  -0
src/ZODB/_compat.py                          +56  -9
src/ZODB/blob.py                             +22  -4
src/ZODB/broken.py                            +1  -2
src/ZODB/scripts/analyze.py                   +2  -3
src/ZODB/serialize.py                         +6 -20
src/ZODB/tests/PackableStorage.py             +2  -7
src/ZODB/tests/StorageTestBase.py             +2  -6
src/ZODB/tests/blob_transaction.txt           +2  -2
src/ZODB/tests/dbopen.txt                     +6  -5
src/ZODB/tests/testBroken.py                  +1  -1
src/ZODB/tests/testCache.py                   +5  -2
src/ZODB/tests/testConnection.py              +8  -4
src/ZODB/tests/testConnectionSavepoint.py     +9  -0
src/ZODB/tests/testConnectionSavepoint.txt    +2  -3
src/ZODB/tests/testDB.py                      +5  -2
src/ZODB/tests/testRecover.py                +18  -2
src/ZODB/tests/testSerialize.py              +37  -2
src/ZODB/tests/test_cache.py                  +5  -2
src/ZODB/tests/test_fsdump.py                 +1  -1
src/ZODB/tests/testblob.py                   +12  -0
src/ZODB/tests/testfsoids.py                  +8  -8
src/ZODB/tests/testpersistentclass.py         +3  -4
src/ZODB/tests/util.py                        +4  -2
src/ZODB/utils.txt                            +8  -3
tox.ini                                       +4  -1
.gitignore

@@ -17,3 +17,4 @@ coverage.xml
 dist
 testing.log
 .eggs/
+.dir-locals.el
.travis.yml

 language: python
 sudo: false
 python:
+    - pypy
+    - pypy3
     - 2.6
     - 2.7
     - 3.2
     - 3.3
     - 3.4
 install:
-    - travis_retry pip install BTrees ZConfig manuel persistent six transaction zc.lockfile zdaemon zope.interface zope.testing zope.testrunner==4.4.4
+    - travis_retry pip install BTrees ZConfig manuel persistent six transaction zc.lockfile zdaemon zope.interface zope.testing zope.testrunner
     - travis_retry pip install -e .
 script:
     - zope-testrunner -u --test-path=src --auto-color --auto-progress
CHANGES.rst

@@ -2,12 +2,19 @@
 ================
  Change History
 ================

-4.1.1 (unreleased)
+4.2.0 (unreleased)
 ==================

 - Fix command-line parsing of --verbose and --verify arguments.
   (The short versions -v and -V were parsed correctly.)

+- Add support for PyPy.
+
+- Fix the methods in ``ZODB.serialize`` that find object references
+  under Python 2.7 (used in scripts like ``referrers``, ``netspace``,
+  and ``fsrecover`` among others). This requires the addition of the
+  ``zodbpickle`` dependency.
+
 4.1.0 (2015-01-11)
 ==================
setup.py

@@ -20,9 +20,10 @@ to application logic. ZODB includes features such as a plugable storage
 interface, rich transaction support, and undo.
 """

-VERSION = "4.1.0"
+VERSION = "4.2.0.dev0"

 import os
+import platform
 import sys
 from setuptools import setup, find_packages

@@ -35,6 +36,10 @@ if (3,) < sys.version_info < (3, 2):
     sys.exit(0)

 PY3 = sys.version_info >= (3,)
+PY27 = sys.version_info >= (2, 7)
+py_impl = getattr(platform, 'python_implementation', lambda: None)
+PYPY = py_impl() == 'PyPy'

 # The (non-obvious!) choices for the Trove Development Status line:
 # Development Status :: 5 - Production/Stable

@@ -54,6 +59,7 @@ Programming Language :: Python :: 3.2
 Programming Language :: Python :: 3.3
 Programming Language :: Python :: 3.4
 Programming Language :: Python :: Implementation :: CPython
+Programming Language :: Python :: Implementation :: PyPy
 Topic :: Database
 Topic :: Software Development :: Libraries :: Python Modules
 Operating System :: Microsoft :: Windows

@@ -153,15 +159,15 @@ setup(name="ZODB",
       tests_require = tests_require,
       extras_require = dict(test=tests_require),
       install_requires = [
-        'persistent',
-        'BTrees',
+        'persistent >= 4.1.0',
+        'BTrees >= 4.1.3',
         'ZConfig',
-        'transaction >= 1.4.1' if PY3 else 'transaction',
+        'transaction >= 1.4.4',
         'six',
         'zc.lockfile',
         'zdaemon >= 4.0.0a1',
         'zope.interface',
-        ] + (['zodbpickle >= 0.2'] if PY3 else []),
+        ] + (['zodbpickle >= 0.6.0'] if (PY3 or PY27 or PYPY) else []),
       zip_safe = False,
       entry_points = """
       [console_scripts]
src/ZODB/ConflictResolution.py

@@ -13,13 +13,12 @@
 ##############################################################################
 import logging
-import sys

 import six
 import zope.interface
 from ZODB.POSException import ConflictError
 from ZODB.loglevels import BLATHER
-from ZODB._compat import BytesIO, Unpickler, Pickler, _protocol
+from ZODB._compat import BytesIO, PersistentUnpickler, PersistentPickler, _protocol

 # Subtle: Python 2.x has pickle.PicklingError and cPickle.PicklingError,
 # and these are unrelated classes!  So we shouldn't use pickle.PicklingError,

@@ -74,9 +73,7 @@ def state(self, oid, serial, prfactory, p=''):
     p = p or self.loadSerial(oid, serial)
     p = self._crs_untransform_record_data(p)
     file = BytesIO(p)
-    unpickler = Unpickler(file)
-    unpickler.find_global = find_global
-    unpickler.persistent_load = prfactory.persistent_load
+    unpickler = PersistentUnpickler(find_global, prfactory.persistent_load, file)
     unpickler.load()  # skip the class tuple
     return unpickler.load()

@@ -243,9 +240,7 @@ def tryToResolveConflict(self, oid, committedSerial, oldSerial, newpickle,
     prfactory = PersistentReferenceFactory()
     newpickle = self._crs_untransform_record_data(newpickle)
     file = BytesIO(newpickle)
-    unpickler = Unpickler(file)
-    unpickler.find_global = find_global
-    unpickler.persistent_load = prfactory.persistent_load
+    unpickler = PersistentUnpickler(find_global, prfactory.persistent_load, file)
     meta = unpickler.load()
     if isinstance(meta, tuple):
         klass = meta[0]

@@ -286,11 +281,7 @@ def tryToResolveConflict(self, oid, committedSerial, oldSerial, newpickle,
     resolved = resolve(old, committed, newstate)

     file = BytesIO()
-    pickler = Pickler(file, _protocol)
-    if sys.version_info[0] < 3:
-        pickler.inst_persistent_id = persistent_id
-    else:
-        pickler.persistent_id = persistent_id
+    pickler = PersistentPickler(persistent_id, file, _protocol)
     pickler.dump(meta)
     pickler.dump(resolved)
     return self._crs_transform_record_data(file.getvalue())
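The hunks above replace hand-wired Unpickler/Pickler setup with the PersistentUnpickler/PersistentPickler helpers. The stdlib mechanism they build on, persistent_id to externalize sub-objects at dump time and persistent_load to resolve them at load time, can be sketched with plain Python 3 pickle (the Ref class and oid strings below are illustrative, not ZODB's):

```python
import io
import pickle

class Ref:
    """Stand-in for a persistent sub-object identified by an oid."""
    def __init__(self, oid):
        self.oid = oid

def dumps_with_refs(obj):
    """Pickle obj, externalizing every Ref as its oid."""
    buf = io.BytesIO()
    p = pickle.Pickler(buf, 1)
    # persistent_id is a settable attribute even on the C Pickler;
    # returning None means "pickle this object normally".
    p.persistent_id = lambda o: o.oid if isinstance(o, Ref) else None
    p.dump(obj)
    return buf.getvalue()

def loads_with_refs(data, resolve):
    """Unpickle, turning every persistent id back into an object."""
    u = pickle.Unpickler(io.BytesIO(data))
    u.persistent_load = resolve
    return u.load()

state = {'name': 'account', 'owner': Ref('oid-42')}
blob = dumps_with_refs(state)
loaded = loads_with_refs(blob, lambda oid: Ref(oid))
assert loaded['owner'].oid == 'oid-42'
```

ZODB's helpers wrap exactly this wiring so callers don't repeat the Python 2 vs 3 attribute-name dance shown in the removed lines.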
src/ZODB/Connection.py

@@ -328,10 +328,10 @@ class Connection(ExportImport, object):
             # get back here.
         else:
             self.opened = None

         am = self._db._activity_monitor
         if am is not None:
-            am.closedConnection(self)
+            am.closedConnection(self)

     def db(self):
         """Returns a handle to the database this connection belongs to."""

@@ -439,7 +438,6 @@ class Connection(ExportImport, object):
         # the savepoint, then they won't have _p_oid or _p_jar after
         # they've been unadded. This will make the code in _abort
         # confused.
-
         self._abort()

         if self._savepoint_storage is not None:

@@ -463,7 +462,6 @@ class Connection(ExportImport, object):
             if obj._p_changed:
                 obj._p_changed = False
-
             else:
                 # Note: If we invalidate a non-ghostifiable object
                 # (i.e. a persistent class), the object will

@@ -868,7 +866,7 @@ class Connection(ExportImport, object):
             raise
         try:
-            self._setstate(obj)
+            self._setstate(obj, oid)
         except ConflictError:
             raise
         except:

@@ -876,8 +874,11 @@ class Connection(ExportImport, object):
                     className(obj), oid_repr(oid))
             raise

-    def _setstate(self, obj):
+    def _setstate(self, obj, oid):
         # Helper for setstate(), which provides logging of failures.
+        # We accept the oid param, which must be the same as obj._p_oid,
+        # as a performance optimization for the pure-Python persistent implementation
+        # where accessing an attribute involves __getattribute__ calls

         # The control flow is complicated here to avoid loading an
         # object revision that we are sure we aren't going to use.  As

@@ -892,7 +893,7 @@ class Connection(ExportImport, object):
         if self.before is not None:
             # Load data that was current before the time we have.
             before = self.before
-            t = self._storage.loadBefore(obj._p_oid, before)
+            t = self._storage.loadBefore(oid, before)
             if t is None:
                 raise POSKeyError()  # historical connection!
             p, serial, end = t

@@ -905,16 +906,16 @@ class Connection(ExportImport, object):
             if self._invalidatedCache:
                 raise ReadConflictError()

-            if (obj._p_oid in self._invalidated):
+            if (oid in self._invalidated):
                 self._load_before_or_conflict(obj)
                 return

-        p, serial = self._storage.load(obj._p_oid, '')
+        p, serial = self._storage.load(oid, '')
         self._load_count += 1

         self._inv_lock.acquire()
         try:
-            invalid = obj._p_oid in self._invalidated
+            invalid = oid in self._invalidated
         finally:
             self._inv_lock.release()

@@ -924,13 +925,13 @@ class Connection(ExportImport, object):
         self._reader.setGhostState(obj, p)
         obj._p_serial = serial
-        self._cache.update_object_size_estimation(obj._p_oid, len(p))
+        self._cache.update_object_size_estimation(oid, len(p))
         obj._p_estimated_size = len(p)

         # Blob support
         if isinstance(obj, Blob):
             obj._p_blob_uncommitted = None
-            obj._p_blob_committed = self._storage.loadBlob(obj._p_oid, serial)
+            obj._p_blob_committed = self._storage.loadBlob(oid, serial)

     def _load_before_or_conflict(self, obj):
         """Load non-current state for obj or raise ReadConflictError."""
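The `_setstate(self, obj, oid)` change is a micro-optimization: on the pure-Python `persistent` implementation used by PyPy, every `obj._p_oid` read goes through `__getattribute__` machinery, so the caller passes the oid it already holds instead of re-reading the attribute several times. The general hoisting pattern, with a hypothetical class standing in for a persistent object:

```python
class Record:
    """Toy stand-in for an object with a costly attribute read."""
    reads = 0  # counts 'oid' lookups, class-wide

    def __init__(self, oid):
        object.__setattr__(self, 'oid', oid)

    def __getattribute__(self, name):
        if name == 'oid':
            Record.reads += 1
        return object.__getattribute__(self, name)

def load_naive(rec):
    # Reads rec.oid three times, paying __getattribute__ each time.
    return (rec.oid, rec.oid, rec.oid)

def load_hoisted(rec, oid):
    # The caller passes the oid it already has; no attribute reads here.
    return (oid, oid, oid)

r = Record(b'\x00' * 8)
Record.reads = 0
load_naive(r)
naive_reads = Record.reads

Record.reads = 0
load_hoisted(r, r.oid)  # exactly one read, at the call site
hoisted_reads = Record.reads
assert naive_reads == 3 and hoisted_reads == 1
```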
src/ZODB/DB.py

@@ -530,7 +530,11 @@ class DB(object):
     def cacheExtremeDetail(self):
         detail = []
         conn_no = [0]  # A mutable reference to a counter
-        def f(con, detail=detail, rc=sys.getrefcount, conn_no=conn_no):
+        # sys.getrefcount is a CPython implementation detail
+        # not required to exist on, e.g., PyPy.
+        rc = getattr(sys, 'getrefcount', None)
+
+        def f(con, detail=detail, rc=rc, conn_no=conn_no):
             conn_no[0] += 1
             cn = conn_no[0]
             for oid, ob in con._cache_items():

@@ -555,12 +559,15 @@ class DB(object):
             # sys.getrefcount(ob) returns.  But, in addition to that,
             # the cache holds an extra reference on non-ghost objects,
             # and we also want to pretend that doesn't exist.
+            # If we have no way to get a refcount, we return False to symbolize
+            # that. As opposed to None, this has the advantage of being usable
+            # as a number (0) in case clients depended on that.
             detail.append({
                 'conn_no': cn,
                 'oid': oid,
                 'id': id,
                 'klass': "%s%s" % (module, ob.__class__.__name__),
-                'rc': rc(ob) - 3 - (ob._p_changed is not None),
+                'rc': rc(ob) - 3 - (ob._p_changed is not None) if rc else False,
                 'state': ob._p_changed,
                 #'references': con.references(oid),
                 })
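The guard above probes for `sys.getrefcount` with `getattr` and degrades gracefully when the interpreter doesn't provide it. A minimal sketch of the same fallback (function name is illustrative):

```python
import sys

# sys.getrefcount is a CPython implementation detail; on PyPy or Jython
# the attribute may simply be absent.
rc = getattr(sys, 'getrefcount', None)

def refcount_or_false(ob):
    """Return an approximate refcount, or False when unavailable.

    False is used instead of None so callers that treat the value as a
    number still get a usable 0.
    """
    return rc(ob) if rc else False

val = refcount_or_false(object())
# On CPython this is a small positive int; elsewhere it is False.
assert val is False or (isinstance(val, int) and val > 0)
```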
src/ZODB/DemoStorage.test

@@ -14,7 +14,10 @@ existing, base, storage without updating the storage.
     ...     return now
     >>> import time
     >>> real_time_time = time.time
-    >>> time.time = faux_time_time
+    >>> if isinstance(time, type):
+    ...     time.time = staticmethod(faux_time_time) # Jython
+    ... else:
+    ...     time.time = faux_time_time

 To see how this works, we'll start by creating a base storage and
 puting an object (in addition to the root object) in it:

@@ -45,6 +48,13 @@ and combine the 2 in a demofilestorage:
     >>> from ZODB.DemoStorage import DemoStorage
     >>> storage = DemoStorage(base=base, changes=changes)

+The storage will assign OIDs in a pseudo-random fashion, but for test
+purposes we need to control where they start (since the random seeds
+can be different on different platforms):
+
+    >>> storage._next_oid = 3553260803050964942
+
 If there are no transactions, the storage reports the lastTransaction
 of the base database:

@@ -375,12 +385,12 @@ Now, we create a demostorage.

 If we ask for an oid, we'll get 1042.

-    >>> u64(storage.new_oid())
+    >>> print(u64(storage.new_oid()))
     1042

 oids are allocated seuentially:

-    >>> u64(storage.new_oid())
+    >>> print(u64(storage.new_oid()))
     1043

 Now, we'll save 1044 in changes so that it has to pick a new one randomly.

@@ -388,7 +398,7 @@ Now, we'll save 1044 in changes so that it has to pick a new one randomly.
     >>> t = transaction.get()
     >>> ZODB.tests.util.store(storage.changes, 1044)

-    >>> u64(storage.new_oid())
+    >>> print(u64(storage.new_oid()))
     called randint
     2042

@@ -400,7 +410,7 @@ to force another attempt:
     >>> oid = storage.new_oid()
     called randint
     called randint
-    >>> u64(oid)
+    >>> print(u64(oid))
     3042

 DemoStorage keeps up with the issued OIDs to know when not to reissue them...

@@ -426,4 +436,3 @@ DemoStorage keeps up with the issued OIDs to know when not to reissue them...
 .. restore time

     >>> time.time = real_time_time
-
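Several of these test files monkey-patch `time.time` with a deterministic counter (the `staticmethod` wrapper is needed on Jython, where the `time` module behaves like a class). The patch-and-restore pattern on CPython, with an illustrative helper:

```python
import time

def make_faux_time(start=100.0, step=1.0):
    """Return a fake time.time that ticks deterministically."""
    state = {'now': start}
    def faux_time():
        state['now'] += step
        return state['now']
    return faux_time

real_time = time.time
try:
    time.time = make_faux_time(start=100.0, step=1.0)
    a, b = time.time(), time.time()   # 101.0, 102.0: fully predictable
finally:
    time.time = real_time   # always restore the real clock

assert (a, b) == (101.0, 102.0)
assert time.time is real_time
```

The try/finally restore mirrors the ".. restore time" step at the end of the doctest.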
src/ZODB/ExportImport.py

@@ -15,7 +15,6 @@
 import logging
 import os
-import sys
 from tempfile import TemporaryFile

 import six

@@ -25,7 +24,7 @@ from ZODB.interfaces import IBlobStorage
 from ZODB.POSException import ExportError
 from ZODB.serialize import referencesf
 from ZODB.utils import p64, u64, cp, mktemp
-from ZODB._compat import Pickler, Unpickler, BytesIO, _protocol
+from ZODB._compat import PersistentPickler, Unpickler, BytesIO, _protocol

 logger = logging.getLogger('ZODB.ExportImport')

@@ -178,11 +177,7 @@ class ExportImport:
         unpickler.persistent_load = persistent_load

         newp = BytesIO()
-        pickler = Pickler(newp, _protocol)
-        if sys.version_info[0] < 3:
-            pickler.inst_persistent_id = persistent_id
-        else:
-            pickler.persistent_id = persistent_id
+        pickler = PersistentPickler(persistent_id, newp, _protocol)

         pickler.dump(unpickler.load())
src/ZODB/FileStorage/iterator.test

@@ -13,7 +13,10 @@ We'll make some assertions about time, so we'll take it over:
     ...     return now
     >>> import time
     >>> time_time = time.time
-    >>> time.time = faux_time
+    >>> if isinstance(time, type):
+    ...     time.time = staticmethod(faux_time) # Jython
+    ... else:
+    ...     time.time = faux_time

 Commit a bunch of transactions:
src/ZODB/POSException.py

@@ -61,6 +61,16 @@ class POSError(Exception):
         return (_recon, (self.__class__, state))

+    def __setstate__(self, state):
+        # PyPy doesn't store the 'args' attribute in an instance's
+        # __dict__; instead, it uses what amounts to a slot. Because
+        # we customize the pickled representation to just be a dictionary,
+        # the args would then get lost, leading to unprintable exceptions
+        # and worse. Manually assign to args from the state to be sure
+        # this doesn't happen.
+        super(POSError, self).__setstate__(state)
+        self.args = state['args']
+
 class POSKeyError(POSError, KeyError):
     """Key not found in database."""
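The fix restores `Exception.args` explicitly because on PyPy `args` lives in a slot, not in the instance `__dict__`, so a dict-only pickled state would silently drop it. A self-contained sketch of the same reduce/setstate pair (simplified relative to POSError, which also merges slots from the MRO; class and helper names here are illustrative):

```python
import pickle

class MyError(Exception):
    """Exception whose pickled form is just a state dictionary."""

    def __reduce__(self):
        state = self.__dict__.copy()
        state['args'] = self.args   # carry args inside the dict explicitly
        return (_recon, (self.__class__, state))

    def __setstate__(self, state):
        # Restore instance attributes, then assign args explicitly:
        # on PyPy 'args' is slot-like, so relying on __dict__ alone
        # would lose it and make the exception unprintable.
        self.__dict__.update(
            {k: v for k, v in state.items() if k != 'args'})
        self.args = state['args']

def _recon(cls, state):
    err = cls.__new__(cls)
    err.__setstate__(state)
    return err

e = MyError('boom', 42)
e.extra = 'detail'
e2 = pickle.loads(pickle.dumps(e))
assert e2.args == ('boom', 42)
assert e2.extra == 'detail'
```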
src/ZODB/_compat.py

@@ -11,15 +11,27 @@
 # FOR A PARTICULAR PURPOSE
 #
 ##############################################################################
 import sys

+IS_JYTHON = sys.platform.startswith('java')
+
 try:
     # Python 2.x
-    from cPickle import Pickler
-    from cPickle import Unpickler
-    from cPickle import dump
-    from cPickle import dumps
-    from cPickle import loads
-    from cPickle import HIGHEST_PROTOCOL
+    import cPickle
+    if ((hasattr(cPickle.Unpickler, 'load') and not hasattr(cPickle.Unpickler, 'noload'))
+            or sys.version_info >= (2, 7)):
+        # PyPy doesn't have noload, and noload is broken in Python 2.7.
+        # Get the fastest version we can (PyPy has no fastpickle)
+        try:
+            import zodbpickle.fastpickle as cPickle
+        except ImportError:
+            import zodbpickle.pickle as cPickle
+    Pickler = cPickle.Pickler
+    Unpickler = cPickle.Unpickler
+    dump = cPickle.dump
+    dumps = cPickle.dumps
+    loads = cPickle.loads
+    HIGHEST_PROTOCOL = cPickle.HIGHEST_PROTOCOL
     IMPORT_MAPPING = {}
     NAME_MAPPING = {}
     _protocol = 1

@@ -61,12 +73,47 @@ except ImportError:
     FILESTORAGE_MAGIC = b"FS30"

+# XXX: consistent spelling of inst_persistent_id/persistent_id?
+#      e.g. StorageTestBase and probably elsewhere
+
+def PersistentPickler(persistent_id, *args, **kwargs):
+    """
+    Returns a :class:`Pickler` that will use the given ``persistent_id``
+    to get persistent IDs. The remainder of the arguments are passed to the
+    Pickler itself.
+
+    This covers the differences between Python 2 and 3 and PyPy/zodbpickle.
+    """
+    p = Pickler(*args, **kwargs)
+    if sys.version_info[0] < 3:
+        p.inst_persistent_id = persistent_id
+
+    # PyPy uses a python implementation of cPickle/zodbpickle in both Python 2
+    # and Python 3. We can't really detect inst_persistent_id as its
+    # a magic attribute that's not readable, but it doesn't hurt to
+    # simply always assign to persistent_id also
+    p.persistent_id = persistent_id
+    return p

+def PersistentUnpickler(find_global, load_persistent, *args, **kwargs):
+    """
+    Returns a :class:`Unpickler` that will use the given `find_global` function
+    to locate classes, and the given `load_persistent` function to load
+    objects from a persistent id.
+
+    This covers the differences between Python 2 and 3 and PyPy/zodbpickle.
+    """
+    unpickler = Unpickler(*args, **kwargs)
+    if find_global is not None:
+        unpickler.find_global = find_global
+        try:
+            unpickler.find_class = find_global # PyPy, zodbpickle, the non-c-accelerated version
+        except AttributeError:
+            pass
+    if load_persistent is not None:
+        unpickler.persistent_load = load_persistent
+
+    return unpickler

 try:
     # Python 2.x
     # XXX: why not just import BytesIO from io?
     from cStringIO import StringIO as BytesIO
 except ImportError:
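The two factories hide three API variants behind one call: `inst_persistent_id` (Python 2 cPickle), `persistent_id` (Python 3 and PyPy), and `find_global` vs `find_class` for class lookup. A Python-3-only sketch of the unpickler side, using a stdlib subclass because CPython's C `Unpickler` rejects assigning `find_class` as an attribute (which is exactly why the real code wraps that assignment in try/except AttributeError); the names below are illustrative, not ZODB's:

```python
import io
import pickle

def persistent_unpickler(find_global, load_persistent, file):
    """Build an unpickler with pluggable class lookup and persistent loading.

    A Python-3-only sketch of the idea behind ZODB's PersistentUnpickler.
    """
    class _Unpickler(pickle.Unpickler):
        def find_class(self, module, name):
            if find_global is not None:
                return find_global(module, name)
            return super().find_class(module, name)

    u = _Unpickler(file)
    if load_persistent is not None:
        # persistent_load is a settable attribute even on the C Unpickler.
        u.persistent_load = load_persistent
    return u

# Route every global lookup through our own resolver and record it:
seen = []
def tracing_find_global(module, name):
    seen.append((module, name))
    return getattr(__import__(module), name)

data = pickle.dumps(complex(1, 2))
u = persistent_unpickler(tracing_find_global, None, io.BytesIO(data))
assert u.load() == complex(1, 2)
assert ('builtins', 'complex') in seen
```

Overriding `find_class` is also the stdlib-documented hook for restricting which globals a pickle may load, so the same factory doubles as a safety boundary.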
src/ZODB/blob.py

@@ -32,7 +32,7 @@ from ZODB.interfaces import BlobError
 from ZODB import utils
 from ZODB.POSException import POSKeyError
 from ZODB._compat import BytesIO
-from ZODB._compat import Unpickler
+from ZODB._compat import PersistentUnpickler
 from ZODB._compat import decodebytes
 from ZODB._compat import ascii_bytes
 from ZODB._compat import INT_TYPES

@@ -57,6 +57,15 @@ valid_modes = 'r', 'w', 'r+', 'a', 'c'
 # This introduces a threading issue, since a blob file may be destroyed
 # via GC in any thread.

+# PyPy 2.5 doesn't properly call the cleanup function
+# of a weakref when the weakref object dies at the same time
+# as the object it refers to. In other words, this doesn't work:
+#   self._ref = weakref.ref(self, lambda ref: ...)
+# because the function never gets called (https://bitbucket.org/pypy/pypy/issue/2030).
+# The Blob class used to use that pattern to clean up uncommitted
+# files; now we use this module-level global (but still keep a
+# reference in the Blob in case we need premature cleanup).
+_blob_close_refs = []

 @zope.interface.implementer(ZODB.interfaces.IBlob)
 class Blob(persistent.Persistent):

@@ -65,6 +74,7 @@ class Blob(persistent.Persistent):

     _p_blob_uncommitted = None  # Filename of the uncommitted (dirty) data
     _p_blob_committed = None  # Filename of the committed data
+    _p_blob_ref = None  # weakreference to self; also in _blob_close_refs

     readers = writers = None

@@ -283,8 +293,13 @@ class Blob(persistent.Persistent):
         def cleanup(ref):
             if os.path.exists(filename):
                 os.remove(filename)
+            try:
+                _blob_close_refs.remove(ref)
+            except ValueError:
+                pass

         self._p_blob_ref = weakref.ref(self, cleanup)
+        _blob_close_refs.append(self._p_blob_ref)
         return filename

     def _uncommitted(self):

@@ -293,6 +308,10 @@ class Blob(persistent.Persistent):
         filename = self._p_blob_uncommitted
         if filename is None and self._p_blob_committed is None:
             filename = self._create_uncommitted_file()
+        try:
+            _blob_close_refs.remove(self._p_blob_ref)
+        except ValueError:
+            pass
         self._p_blob_uncommitted = self._p_blob_ref = None
         return filename

@@ -937,8 +956,7 @@ def is_blob_record(record):
     """
     if record and (b'ZODB.blob' in record):
-        unpickler = Unpickler(BytesIO(record))
-        unpickler.find_global = find_global_Blob
+        unpickler = PersistentUnpickler(find_global_Blob, None, BytesIO(record))

         try:
             return unpickler.load() is Blob
src/ZODB/broken.py

@@ -23,7 +23,6 @@ import ZODB.interfaces
 from ZODB._compat import IMPORT_MAPPING
 from ZODB._compat import NAME_MAPPING
-
 broken_cache = {}

 @zope.interface.implementer(ZODB.interfaces.IBroken)

@@ -308,7 +307,7 @@ class PersistentBroken(Broken, persistent.Persistent):
         >>> a.__reduce__() # doctest: +NORMALIZE_WHITESPACE
         Traceback (most recent call last):
         ...
-        BrokenModified:
+        BrokenModified:
         <persistent broken not.there.Atall instance '\x00\x00\x00\x00****'>

     but you can get their state:
src/ZODB/scripts/analyze.py

@@ -6,7 +6,7 @@ from __future__ import print_function
 import sys
 from ZODB.FileStorage import FileStorage
-from ZODB._compat import Unpickler, BytesIO
+from ZODB._compat import PersistentUnpickler, BytesIO

@@ -22,8 +22,7 @@ def fake_find_class(module, name):
 def FakeUnpickler(f):
-    unpickler = Unpickler(f)
-    unpickler.find_global = fake_find_class
+    unpickler = PersistentUnpickler(fake_find_class, None, f)
     return unpickler
src/ZODB/serialize.py

@@ -134,13 +134,12 @@ A number of legacyforms are defined:
 """
 import logging
-import sys

 from persistent import Persistent
 from persistent.wref import WeakRefMarker, WeakRef
 from ZODB import broken
 from ZODB.POSException import InvalidObjectReference
-from ZODB._compat import Pickler, Unpickler, BytesIO, _protocol
+from ZODB._compat import PersistentPickler, PersistentUnpickler, BytesIO, _protocol

 _oidtypes = bytes, type(None)

@@ -172,16 +171,7 @@ class ObjectWriter:
     def __init__(self, obj=None):
         self._file = BytesIO()
-        self._p = Pickler(self._file, _protocol)
-        if sys.version_info[0] < 3:
-            self._p.inst_persistent_id = self.persistent_id
-
-            # PyPy uses a python implementation of cPickle in both Python 2
-            # and Python 3. We can't really detect inst_persistent_id as its
-            # a magic attribute that's not readable, but it doesn't hurt to
-            # simply always assign to persistent_id also
-            self._p.persistent_id = self.persistent_id
-        else:
-            self._p.persistent_id = self.persistent_id
+        self._p = PersistentPickler(self.persistent_id, self._file, _protocol)
         self._stack = []
         if obj is not None:
             self._stack.append(obj)

@@ -474,15 +464,13 @@ class ObjectReader:
     def _get_unpickler(self, pickle):
         file = BytesIO(pickle)

-        unpickler = Unpickler(file)
-        unpickler.persistent_load = self._persistent_load
         factory = self._factory
         conn = self._conn

         def find_global(modulename, name):
             return factory(conn, modulename, name)
-        unpickler.find_global = find_global
+        unpickler = PersistentUnpickler(find_global, self._persistent_load, file)

         return unpickler

@@ -646,8 +634,7 @@ def referencesf(p, oids=None):
     """
     refs = []
-    u = Unpickler(BytesIO(p))
-    u.persistent_load = refs.append
+    u = PersistentUnpickler(None, refs.append, BytesIO(p))
     u.noload()
     u.noload()

@@ -688,8 +675,7 @@ def get_refs(a_pickle):
     """
     refs = []
-    u = Unpickler(BytesIO(a_pickle))
-    u.persistent_load = refs.append
+    u = PersistentUnpickler(None, refs.append, BytesIO(a_pickle))
     u.noload()
     u.noload()
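`referencesf` collects every persistent reference in a pickle by handing `refs.append` in as the persistent-load function and throwing the object graph away (`noload` is a zodbpickle extension that skips building the graph; the stdlib sketch below pays for a full `load()` instead, and its oid-string convention is illustrative):

```python
import io
import pickle

def find_refs(data):
    """Collect persistent ids from a pickle, discarding the object graph."""
    refs = []
    u = pickle.Unpickler(io.BytesIO(data))
    # Each persistent id gets appended; the None return value becomes a
    # stub in the reconstructed graph, which we discard anyway.
    u.persistent_load = refs.append
    u.load()
    return refs

# Build a pickle in which strings starting with 'oid-' act as references.
buf = io.BytesIO()
p = pickle.Pickler(buf, 1)
p.persistent_id = (
    lambda o: o if isinstance(o, str) and o.startswith('oid-') else None)
p.dump({'a': 'oid-1', 'b': ['oid-2', 3]})

assert sorted(find_refs(buf.getvalue())) == ['oid-1', 'oid-2']
```

This is the scanning that pack and the `referrers`/`netspace`/`fsrecover` scripts rely on, which is why the CHANGES entry calls out the zodbpickle dependency for Python 2.7.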
src/ZODB/tests/PackableStorage.py

@@ -15,7 +15,6 @@
 from __future__ import print_function
 import doctest
-import sys
 import time

 from persistent import Persistent

@@ -26,7 +25,7 @@ from ZODB.serialize import referencesf
 from ZODB.tests.MinPO import MinPO
 from ZODB.tests.MTStorage import TestThread
 from ZODB.tests.StorageTestBase import snooze
-from ZODB._compat import loads, Pickler, Unpickler, BytesIO, _protocol
+from ZODB._compat import loads, PersistentPickler, Unpickler, BytesIO, _protocol

 import transaction
 import ZODB.interfaces
 import ZODB.tests.util

@@ -85,11 +84,7 @@ def dumps(obj):
             return obj.getoid()
         return None
     s = BytesIO()
-    p = Pickler(s, _protocol)
-    if sys.version_info[0] < 3:
-        p.inst_persistent_id = getpersid
-    else:
-        p.persistent_id = getpersid
+    p = PersistentPickler(getpersid, s, _protocol)
     p.dump(obj)
     p.dump(None)
     return s.getvalue()
src/ZODB/tests/StorageTestBase.py

@@ -25,7 +25,7 @@ import transaction
 from ZODB.utils import u64
 from ZODB.tests.MinPO import MinPO
-from ZODB._compat import Pickler, Unpickler, BytesIO, _protocol
+from ZODB._compat import PersistentPickler, Unpickler, BytesIO, _protocol

 import ZODB.tests.util

@@ -50,11 +50,7 @@ def _persistent_id(obj):
 def zodb_pickle(obj):
     """Create a pickle in the format expected by ZODB."""
     f = BytesIO()
-    p = Pickler(f, _protocol)
-    if sys.version_info[0] < 3:
-        p.inst_persistent_id = _persistent_id
-    else:
-        p.persistent_id = _persistent_id
+    p = PersistentPickler(_persistent_id, f, _protocol)
     klass = obj.__class__
     assert not hasattr(obj, '__getinitargs__'), "not ready for constructors"
     args = None
src/ZODB/tests/blob_transaction.txt

@@ -31,7 +31,7 @@ Aborting a blob add leaves the blob unchanged:
     ...     fp.read()
     'this is blob 1'

-It doesn't clear the file because there is no previously committed version:
+It doesn't clear the file because there is no previously committed version:

     >>> fname = blob1._p_blob_uncommitted
     >>> import os

@@ -79,7 +79,7 @@ resulting filehandle is accomplished via the filehandle's read method::
     >>> blob1afh1.read()
     'this is blob 1'

-Let's make another filehandle for read only to blob1a. Aach file
+Let's make another filehandle for read only to blob1a. Each file
 handle has a reference to the (same) underlying blob::

     >>> blob1afh2 = blob1a.open("r")
src/ZODB/tests/dbopen.txt

@@ -228,17 +228,18 @@ that are still alive.
     3

-If a connection object is abandoned (it becomes unreachable), then it
-will vanish from pool.all automatically. However, connections are
-involved in cycles, so exactly when a connection vanishes from pool.all
-isn't predictable. It can be forced by running gc.collect():
+If a connection object is abandoned (it becomes unreachable), then it
+will vanish from pool.all automatically. However, connections are
+involved in cycles, so exactly when a connection vanishes from
+pool.all isn't predictable. It can be forced (on most platforms but
+not Jython) by running gc.collect():

-    >>> import gc
+    >>> import gc, sys
     >>> dummy = gc.collect()
     >>> len(pool.all)
     3
     >>> c3 = None
     >>> dummy = gc.collect()  # removes c3 from pool.all
-    >>> len(pool.all)
+    >>> len(pool.all) if not sys.platform.startswith("java") else 2
     2

 Note that c3 is really gone; in particular it didn't get added back to
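The doctest change encodes a general fact: objects caught in reference cycles aren't freed when the last name goes away; they wait for the cyclic collector, which `gc.collect()` can force on CPython and PyPy but not reliably on Jython. The behavior in isolation:

```python
import gc
import weakref

class Node:
    pass

# Two objects referencing each other form a cycle: plain reference
# counting (CPython) can never reclaim them on its own, because
# a.peer and b.peer keep both refcounts above zero.
a, b = Node(), Node()
a.peer, b.peer = b, a
probe = weakref.ref(a)   # lets us observe collection without a strong ref

del a, b
gc.collect()             # force the cyclic collector
assert probe() is None   # the cycle is gone now
```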
src/ZODB/tests/testBroken.py

@@ -67,7 +67,7 @@ def test_integration():
     >>> conn3 = db.open()
     >>> a3 = conn3.root()['a']
     >>> a3 # doctest: +NORMALIZE_WHITESPACE
-    <persistent broken ZODB.not.there.Atall instance
+    <persistent broken ZODB.not.there.Atall instance
     '\x00\x00\x00\x00\x00\x00\x00\x01'>

     >>> a3.__Broken_state__
src/ZODB/tests/testCache.py

@@ -339,7 +339,10 @@ class CacheErrors(unittest.TestCase):
         def add(key, obj):
             self.cache[key] = obj

-        nones = sys.getrefcount(None)
+        # getrefcount is an implementation detail of CPython,
+        # not present under PyPy/Jython
+        rc = getattr(sys, 'getrefcount', lambda x: 1)
+        nones = rc(None)

         key = p64(2)
         # value isn't persistent

@@ -369,7 +372,7 @@ class CacheErrors(unittest.TestCase):
         # structure that adds a new reference to None for each executed
         # line of code, which interferes with this test.  So check it
         # only if we're running without coverage tracing.
-            self.assertEqual(sys.getrefcount(None), nones)
+            self.assertEqual(rc(None), nones)

     def testTwoCaches(self):
         jar2 = StubDataManager()
src/ZODB/tests/testConnection.py

@@ -244,10 +244,14 @@ class UserMethodTests(unittest.TestCase):
         If all references to the object are released, then a new
         object will be returned. The cache doesn't keep unreferenced
-        ghosts alive. (The next object returned my still have the
-        same id, because Python may re-use the same memory.)
+        ghosts alive, although on some implementations like PyPy we
+        need to run a garbage collection to be sure they go away. (The
+        next object returned may still have the same id, because Python
+        may re-use the same memory.)

         >>> del obj, obj2
+        >>> import gc
+        >>> _ = gc.collect()
         >>> cn._cache.get(p64(0), None)

         If the object is unghosted, then it will stay in the cache

@@ -683,8 +687,8 @@ def doctest_proper_ghost_initialization_with_empty__p_deactivate():
     >>> transaction.commit()

     >>> conn2 = db.open()
-    >>> conn2.root.x._p_changed
+    >>> bool(conn2.root.x._p_changed)
     False

     >>> conn2.root.x.y
     1
src/ZODB/tests/testConnectionSavepoint.py

@@ -99,6 +99,7 @@ one traditional use for savepoints is simply to free memory space midstream
 during a long transaction.  Before ZODB 3.4.2, making a savepoint failed
 to trigger cache gc, and this test verifies that it now does.

+    >>> import gc
     >>> import ZODB
     >>> from ZODB.tests.MinPO import MinPO
     >>> from ZODB.MappingStorage import MappingStorage

@@ -129,6 +130,14 @@ Making a savepoint at this time used to leave the cache holding the same
 number of objects.  Make sure the cache shrinks now instead.

     >>> dummy = transaction.savepoint()
+
+Jython needs a GC, and needs to actually access the cache data to be
+sure the size is updated (it uses "eventually consistent" implementations for
+its weak dictionaries):
+
+    >>> _ = gc.collect()
+    >>> _ = getattr(cn._cache, 'data', {}).values()
+    >>> _ = getattr(cn._cache, 'data', {}).keys()
+
     >>> len(cn._cache) <= CACHESIZE + 1
     True
src/ZODB/tests/testConnectionSavepoint.txt

@@ -112,10 +112,10 @@ If we provide entries that cause an unexpected error:
     ...     ('sally', 10.0),
     ...     ('bob', '20.0'),
     ...     ('sally', 10.0),
-    ...     ])
+    ...     ]) #doctest: +ELLIPSIS
     Updated bob
     Updated sally
-    Unexpected exception unsupported operand type(s) for +=: 'float' and 'str'
+    Unexpected exception unsupported operand type(s) for +...: 'float' and 'str'

 Because the apply_entries used a savepoint for the entire function, it was
 able to rollback the partial changes without rolling back changes made in the

@@ -194,4 +194,3 @@ However, using a savepoint invalidates any savepoints that come after it:
     InvalidSavepointRollbackError: invalidated by a later savepoint

     >>> transaction.abort()
-
src/ZODB/tests/testDB.py
...
...
@@ -125,7 +125,10 @@ def connectionDebugInfo():
... now += .1
... return now
>>> real_time = time.time
-    >>> time.time = faux_time
+    >>> if isinstance(time,type):
+    ...    time.time = staticmethod(faux_time) # Jython
+    ... else:
+    ...    time.time = faux_time
>>> from ZODB.tests.util import DB
>>> import transaction
...
...
@@ -252,7 +255,7 @@ if sys.version_info >= (2, 6):
>>> with db.transaction() as conn2:
... conn2.root()['y'] = 2
-    ...     XXX
+    ...     XXX #doctest: +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
NameError: name 'XXX' is not defined
...
...
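Both doctest directives added in these hunks (`+ELLIPSIS` above, `+IGNORE_EXCEPTION_DETAIL` here) exist for the same reason: exception messages differ across Python versions and implementations (PyPy and Jython word `TypeError` messages differently from CPython), so the expected output must elide the detail. A self-contained sketch of the mechanism; the sample expression and names are mine, not from the commit:

```python
import doctest

# A hypothetical doctest: the exact TypeError message varies by
# implementation, so the directives make only the exception type matter.
SAMPLE = """
>>> 1.0 + '2'  #doctest: +ELLIPSIS +IGNORE_EXCEPTION_DETAIL
Traceback (most recent call last):
...
TypeError: ...
"""

parser = doctest.DocTestParser()
test = parser.get_doctest(SAMPLE, {}, 'sample', '<sample>', 0)
runner = doctest.DocTestRunner()
runner.run(test)
failed, attempted = runner.summarize(verbose=False)
```

With `+IGNORE_EXCEPTION_DETAIL`, only the exception class is compared, so the one expected block matches CPython, PyPy, and Jython alike.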
src/ZODB/tests/testRecover.py
...
...
@@ -69,12 +69,28 @@ class RecoverTest(ZODB.tests.util.TestCase):
    def damage(self, num, size):
        self.storage.close()
        # Drop size null bytes into num random spots.
-       for i in range(num):
+       for i in range(num - 1):
            offset = random.randint(0, self.storage._pos - size)
-           with open(self.path, "a+b") as f:
+           # Note that we open the file as r+, not a+. Seeking a file
+           # open in append mode is effectively a no-op *depending on
+           # platform*, as the write may simply append to the file. An
+           # earlier version of this code opened the file in a+ mode,
+           # meaning on some platforms it was only writing to the end of the
+           # file, and so the test cases were always finding that bad data.
+           # For compatibility with that, we do one write outside the loop
+           # at the end.
+           with open(self.path, "r+b") as f:
                f.seek(offset)
                f.write(b"\0" * size)
+           with open(self.path, 'rb') as f:
+               f.seek(offset)
+               v = f.read(size)
+               self.assertEqual(b"\0" * size, v)
+       with open(self.path, 'a+b') as f:
+           f.write(b"\0" * size)

    ITERATIONS = 5
# Run recovery, from self.path to self.dest. Return whatever
...
...
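The a+ vs. r+ distinction in the comment above is easy to demonstrate outside of ZODB. A minimal standalone sketch (the scratch file and its contents are mine): on POSIX platforms a write in append mode lands at the end of the file regardless of any preceding seek(), while r+ honors the seek position:

```python
import os
import tempfile

# Hypothetical scratch file standing in for the storage file being damaged.
fd, path = tempfile.mkstemp()
os.close(fd)
with open(path, "wb") as f:
    f.write(b"ABCDEF")

# In append mode, POSIX O_APPEND forces every write to the end of the
# file -- the preceding seek() has no effect on where the data lands.
with open(path, "a+b") as f:
    f.seek(0)
    f.write(b"XX")
with open(path, "rb") as f:
    appended = f.read()        # b'ABCDEFXX' on POSIX

# In r+ mode the seek() is honored, so the write overwrites in place.
with open(path, "r+b") as f:
    f.seek(0)
    f.write(b"XX")
with open(path, "rb") as f:
    overwritten = f.read()     # b'XXCDEFXX'

os.remove(path)
```

This is exactly why the a+ version of `damage` only ever corrupted the tail of the file, and why the test cases always found that bad data.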
src/ZODB/tests/testSerialize.py
...
...
@@ -15,10 +15,15 @@ import doctest
import sys
import unittest

from persistent import Persistent
+from persistent.wref import WeakRef

import ZODB.tests.util
from ZODB import serialize
-from ZODB._compat import Pickler, BytesIO, _protocol
+from ZODB._compat import Pickler, PersistentUnpickler, BytesIO, _protocol, IS_JYTHON


class PersistentObject(Persistent):
    pass

class ClassWithNewargs(int):
    def __new__(cls, value):
...
...
@@ -118,6 +123,26 @@ class SerializerTestCase(unittest.TestCase):
        self.assertTrue(not serialize.myhasattr(OldStyle(), "rat"))
        self.assertTrue(not serialize.myhasattr(NewStyle(), "rat"))

+    def test_persistent_id_noload(self):
+        # make sure we can noload weak references and other list-based
+        # references like we expect. Protect explicitly against the
+        # breakage in CPython 2.7 and zodbpickle < 0.6.0
+        o = PersistentObject()
+        o._p_oid = b'abcd'
+        top = PersistentObject()
+        top._p_oid = b'efgh'
+        top.ref = WeakRef(o)
+        pickle = serialize.ObjectWriter().serialize(top)
+        refs = []
+        u = PersistentUnpickler(None, refs.append, BytesIO(pickle))
+        u.noload()
+        u.noload()
+        self.assertEqual(refs, [['w', (b'abcd',)]])

class SerializerFunctestCase(unittest.TestCase):
...
...
@@ -139,7 +164,17 @@ class SerializerFunctestCase(unittest.TestCase):
# buildout doesn't arrange for the sys.path to be exported,
# so force it ourselves
        environ = os.environ.copy()
-        environ['PYTHONPATH'] = os.pathsep.join(sys.path)
+        if IS_JYTHON:
+            # Jython 2.7rc2 has a bug; if its Lib directory is
+            # specifically put on the PYTHONPATH, then it doesn't add
+            # it itself, which means it fails to 'import site' because
+            # it can't import '_jythonlib' and the whole process fails.
+            # We would use multiprocessing here, but it doesn't exist on jython
+            sys_path = [x for x in sys.path
+                        if not x.endswith('Lib')
+                        and x != '__classpath__'
+                        and x != '__pyclasspath__/']
+        else:
+            sys_path = sys.path
+        environ['PYTHONPATH'] = os.pathsep.join(sys_path)
        subprocess.check_call(prep_args, env=environ)
        load_args = [sys.executable, '-c',
            'from ZODB.tests.testSerialize import _functest_load; '
...
...
src/ZODB/tests/test_cache.py
...
...
@@ -14,8 +14,8 @@
"""Test behavior of Connection plus cPickleCache."""
from persistent import Persistent
from ZODB.config import databaseFromString
+import doctest
import transaction
-import doctest

class RecalcitrantObject(Persistent):
"""A Persistent object that will not become a ghost."""
...
...
@@ -199,12 +199,15 @@ class CacheTests:
5
>>> transaction.abort()
>>> len(cn._cache)
6
>>> cn._cache.cache_non_ghost_count
2
>>> cn._cache.ringlen()
2
>>> RegularObject.deactivations
4
"""
    def test_gc_on_open_connections(self):
        r"""Test that automatic GC is not applied to open connections.
...
...
src/ZODB/tests/test_fsdump.py
...
...
@@ -58,7 +58,7 @@ Trans #00000 tid=... time=... offset=<OFFSET>
Trans #00001 tid=... time=... offset=<OFFSET>
status=' ' user='' description='added an OOBTree'
data #00000 oid=0000000000000000 size=<SIZE> class=persistent.mapping.PersistentMapping
-data #00001 oid=0000000000000001 size=<SIZE> class=BTrees.OOBTree.OOBTree
+data #00001 oid=0000000000000001 size=<SIZE> class=BTrees.OOBTree.OOBTree...
Now we see two transactions and two changed objects.
...
...
src/ZODB/tests/testblob.py
...
...
@@ -345,6 +345,18 @@ def gc_blob_removes_uncommitted_data():
>>> os.path.exists(fname)
True
>>> file = blob = None
PyPy, not being reference counted, actually needs GC to be
explicitly requested. In experiments, it finds the weakref
on the first collection, but only does the cleanup on the second
collection:
>>> import gc
>>> _ = gc.collect()
>>> _ = gc.collect()
Now the file is gone on all platforms:
>>> os.path.exists(fname)
False
"""
...
...
src/ZODB/tests/testfsoids.py
...
...
@@ -90,12 +90,12 @@ oid 0x00 persistent.mapping.PersistentMapping 2 revisions
tid user=''
tid description='added an OOBTree'
new revision persistent.mapping.PersistentMapping at <OFFSET>
-    references 0x01 BTrees.OOBTree.OOBTree at <OFFSET>
-oid 0x01 BTrees.OOBTree.OOBTree 1 revision
+    references 0x01 BTrees.OOBTree.OOBTree... at <OFFSET>
+oid 0x01 BTrees.OOBTree.OOBTree... 1 revision
    tid 0x... offset=<OFFSET> ...
        tid user=''
        tid description='added an OOBTree'
-        new revision BTrees.OOBTree.OOBTree at <OFFSET>
+        new revision BTrees.OOBTree.OOBTree... at <OFFSET>
referenced by 0x00 persistent.mapping.PersistentMapping at <OFFSET>
So there are two revisions of oid 0 now, and the second references oid 1.
...
...
@@ -118,21 +118,21 @@ oid 0x00 persistent.mapping.PersistentMapping 2 revisions
tid user=''
tid description='added an OOBTree'
new revision persistent.mapping.PersistentMapping at <OFFSET>
-    references 0x01 BTrees.OOBTree.OOBTree at <OFFSET>
+    references 0x01 BTrees.OOBTree.OOBTree... at <OFFSET>
    tid 0x... offset=<OFFSET> ...
        tid user=''
        tid description='circling back to the root'
-        referenced by 0x01 BTrees.OOBTree.OOBTree at <OFFSET>
-oid 0x01 BTrees.OOBTree.OOBTree 2 revisions
+        referenced by 0x01 BTrees.OOBTree.OOBTree... at <OFFSET>
+oid 0x01 BTrees.OOBTree.OOBTree... 2 revisions
    tid 0x... offset=<OFFSET> ...
        tid user=''
        tid description='added an OOBTree'
-        new revision BTrees.OOBTree.OOBTree at <OFFSET>
+        new revision BTrees.OOBTree.OOBTree... at <OFFSET>
        referenced by 0x00 persistent.mapping.PersistentMapping at <OFFSET>
    tid 0x... offset=<OFFSET> ...
        tid user=''
        tid description='circling back to the root'
-        new revision BTrees.OOBTree.OOBTree at <OFFSET>
+        new revision BTrees.OOBTree.OOBTree... at <OFFSET>
        references 0x00 persistent.mapping.PersistentMapping at <OFFSET>
oid 0x02 <unknown> 0 revisions
this oid was not defined (no data record for it found)
...
...
src/ZODB/tests/testpersistentclass.py
...
...
@@ -55,10 +55,10 @@ def test_new_ghost_w_persistent_class():
>>> import persistent
>>> jar = object()
>>> cache = persistent.PickleCache(jar, 10, 100)
-    >>> cache.new_ghost('1', PC)
+    >>> cache.new_ghost(b'1', PC)
-    >>> PC._p_oid
-    '1'
+    >>> PC._p_oid == b'1'
+    True
>>> PC._p_jar is jar
True
>>> PC._p_serial
...
...
@@ -95,4 +95,3 @@ def test_suite():
if __name__ == '__main__':
    unittest.main(defaultTest='test_suite')
src/ZODB/tests/util.py
...
...
@@ -182,5 +182,7 @@ def mess_with_time(test=None, globs=None, now=1278864701.5):
    import time
    zope.testing.setupstack.register(test, setattr, time, 'time', time.time)
-    time.time = faux_time
+    if isinstance(time, type):
+        time.time = staticmethod(faux_time) # jython
+    else:
+        time.time = faux_time
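This `isinstance(time, type)` dance recurs throughout the commit (here, in testDB.py, and in utils.txt). A minimal sketch of the helper it amounts to; the function name is mine, not part of the commit:

```python
import time

def install_faux_time(faux_time):
    # On Jython the `time` module is actually implemented as a class, so
    # assigning a plain function to it would behave like an unbound
    # method expecting `self`; wrapping it in staticmethod() keeps
    # time.time() calls working on both Jython and CPython/PyPy.
    if isinstance(time, type):
        time.time = staticmethod(faux_time)  # Jython
    else:
        time.time = faux_time

# Usage: freeze the clock, then restore the real one.
real_time = time.time
install_faux_time(lambda: 1224825068.12)
frozen = time.time()
time.time = real_time
```

Centralizing the branch in one helper avoids repeating the Jython special case at every monkey-patch site, which is effectively what `mess_with_time` in util.py does for the test suite.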
src/ZODB/utils.txt
...
...
@@ -41,7 +41,12 @@ To see this work (in a predictable way), we'll first hack time.time:
>>> import time
>>> old_time = time.time
-    >>> time.time = lambda : 1224825068.12
+    >>> time_value = 1224825068.12
+    >>> faux_time = lambda: time_value
+    >>> if isinstance(time,type):
+    ...     time.time = staticmethod(faux_time) # Jython
+    ... else:
+    ...     time.time = faux_time
Now, if we ask for a new time stamp, we'll get one based on our faux
time:
...
...
@@ -71,7 +76,7 @@ Here, since we called it at the same time, we got a time stamp that
was only slightly larger than the previous one. Of course, at a later
time, the time stamp we get will be based on the time:
-    >>> time.time = lambda : 1224825069.12
+    >>> time_value = 1224825069.12
>>> tid = ZODB.utils.newTid(tid2)
>>> print(ZODB.TimeStamp.TimeStamp(tid))
2008-10-24 05:11:09.120000
...
...
@@ -194,4 +199,4 @@ supports optional method preconditions [1]_.
locked. Combining preconditions with locking provides both
efficiency and concise expressions. A more general-purpose
facility would almost certainly provide separate descriptors for
preconditions.
tox.ini
[tox]
-envlist = py26,py27,py32,py33,py34,simple
+# Jython 2.7rc2 does work, but unfortunately has an issue running
+# with Tox 1.9.2 (http://bugs.jython.org/issue2325)
+#envlist = py26,py27,py32,py33,py34,pypy,simple,jython,pypy3
+envlist = py26,py27,py32,py33,py34,pypy,simple,pypy3

[testenv]
commands =
...
...