nexedi / ZEO / Commits
Commit 183162af, authored Mar 11, 2005 by Tim Peters

    Convert some XXXs.  More to come.

parent ee545a85

24 changed files with 125 additions and 132 deletions.
src/BTrees/BTreeTemplate.c              +5  -5
src/BTrees/BucketTemplate.c             +3  -3
src/ZEO/ClientStorage.py                +16 -16
src/ZEO/auth/auth_digest.py             +3  -2
src/ZEO/cache.py                        +7  -6
src/ZEO/tests/Cache.py                  +3  -2
src/ZEO/tests/CommitLockTests.py        +1  -1
src/ZEO/tests/ConnectionTests.py        +3  -11
src/ZEO/zrpc/client.py                  +10 -11
src/ZEO/zrpc/connection.py              +6  -6
src/ZODB/BaseStorage.py                 +2  -2
src/ZODB/Connection.py                  +18 -21
src/ZODB/DB.py                          +3  -3
src/ZODB/DemoStorage.py                 +2  -2
src/ZODB/FileStorage/FileStorage.py     +10 -9
src/ZODB/FileStorage/fsdump.py          +3  -3
src/ZODB/FileStorage/fspack.py          +11 -10
src/ZODB/broken.py                      +1  -1
src/ZODB/fstools.py                     +2  -2
src/ZODB/tests/BasicStorage.py          +1  -1
src/ZODB/tests/ConflictResolution.py    +1  -1
src/persistent/cPersistence.c           +8  -7
src/persistent/cPickleCache.c           +5  -5
src/scripts/fstest.py                   +1  -2
src/BTrees/BTreeTemplate.c

@@ -239,7 +239,7 @@ BTree_newBucket(BTree *self)
   factory = PyObject_GetAttr((PyObject *)self->ob_type, _bucket_type_str);
   if (factory == NULL)
       return NULL;
-  /* XXX Should we check that the factory actually returns something
+  /* TODO:  Should we check that the factory actually returns something
      of the appropriate type?  How?  The C code here is going to
      depend on any custom bucket type having the same layout at the
      C level.
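The type check this TODO asks about can be sketched at the Python level. Everything here is a hypothetical stand-in for the C implementation: `BaseBucket` plays the role of the C-level Bucket type, and the `_bucket_type` class attribute models the `_bucket_type_str` lookup.

```python
class BaseBucket:
    """Hypothetical stand-in for the C-level Bucket type."""

class GoodBucket(BaseBucket):
    pass

class GoodBTree:
    _bucket_type = GoodBucket  # the factory BTree_newBucket looks up

class BadBTree:
    _bucket_type = dict  # wrong layout at the "C level"

def new_bucket(btree):
    # Mirror BTree_newBucket: fetch the factory from the instance's type.
    factory = getattr(type(btree), "_bucket_type", None)
    if factory is None:
        return None
    bucket = factory()
    # The check the TODO asks about: the C code depends on any custom
    # bucket type sharing the base layout, so insist on a subclass.
    if not isinstance(bucket, BaseBucket):
        raise TypeError("factory returned %r, not a bucket" % type(bucket))
    return bucket
```

In real C code the equivalent test would be a `PyObject_TypeCheck` against the base Bucket type, which is stricter than any name-based check.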
@@ -469,7 +469,7 @@ BTree_lastBucket(BTree *self)
   Bucket *result;

   UNLESS (self->data && self->len) {
-      IndexError(-1); /* XXX */
+      IndexError(-1); /* is this the best action to take? */
       return NULL;
   }

@@ -1783,9 +1783,9 @@ BTree_iteritems(BTree *self, PyObject *args, PyObject *kw)
 /* End of iterator support. */

-/* XXX Even though the _firstbucket attribute is read-only, a program
-   could probably do arbitrary damage to a the btree internals.  For
-   example, it could call clear() on a bucket inside a BTree.
+/* Caution:  Even though the _firstbucket attribute is read-only, a program
+   could do arbitrary damage to the btree internals.  For example, it could
+   call clear() on a bucket inside a BTree.

    We need to decide if the convenience for inspecting BTrees is worth
    the risk.
src/BTrees/BucketTemplate.c

@@ -1415,9 +1415,9 @@ bucket__p_resolveConflict(Bucket *self, PyObject *args)
 }
 #endif

-/* XXX Even though the _next attribute is read-only, a program could
-   probably do arbitrary damage to a the btree internals.  For
-   example, it could call clear() on a bucket inside a BTree.
+/* Caution:  Even though the _next attribute is read-only, a program could
+   do arbitrary damage to the btree internals.  For example, it could call
+   clear() on a bucket inside a BTree.

    We need to decide if the convenience for inspecting BTrees is worth
    the risk.
src/ZEO/ClientStorage.py

@@ -262,7 +262,7 @@ class ClientStorage(object):
         # _seriald: _check_serials() moves from _serials to _seriald,
         #           which maps oid to serialno

-        # XXX If serial number matches transaction id, then there is
+        # TODO:  If serial number matches transaction id, then there is
         # no need to have all this extra infrastructure for handling
         # serial numbers.  The vote call can just return the tid.
         # If there is a conflict error, we can't have a special method

@@ -310,7 +310,7 @@ class ClientStorage(object):
         else:
             cache_path = None
         self._cache = self.ClientCacheClass(cache_path, size=cache_size)
-        # XXX When should it be opened?
+        # TODO:  maybe there's a better time to open the cache?  Unclear.
         self._cache.open()

         self._rpc_mgr = self.ConnectionManagerClass(addr, self,

@@ -459,7 +459,7 @@ class ClientStorage(object):
         exception raised by register() is passed through.
         """
         log2("Testing connection %r" % conn)
-        # XXX Check the protocol version here?
+        # TODO:  Should we check the protocol version here?
         self._conn_is_read_only = 0
         stub = self.StorageServerStubClass(conn)

@@ -496,7 +496,7 @@ class ClientStorage(object):
             # this method before it was stopped.
             return

-        # XXX would like to report whether we get a read-only connection
+        # TODO:  report whether we get a read-only connection.
        if self._connection is not None:
            reconnect = 1
        else:

@@ -597,8 +597,8 @@ class ClientStorage(object):
         self._pickler = cPickle.Pickler(self._tfile, 1)
         self._pickler.fast = 1 # Don't use the memo

-        # XXX should batch these operations for efficiency
-        # XXX need to acquire lock...
+        # TODO:  should batch these operations for efficiency; would need
+        # to acquire lock ...
         for oid, tid, version in self._cache.contents():
             server.verify(oid, version, tid)
         self._pending_server = server

@@ -627,7 +627,7 @@ class ClientStorage(object):
     def __len__(self):
         """Return the size of the storage."""
-        # XXX Where is this used?
+        # TODO:  Is this method used?
         return self._info['length']

     def getName(self):

@@ -700,7 +700,7 @@ class ClientStorage(object):
         # versions of ZODB, you'd get a conflict error if you tried to
         # commit a transaction with the cached data.

-        # XXX If we could guarantee that ZODB gave the right answer,
+        # If we could guarantee that ZODB gave the right answer,
         # we could just invalidate the version data.
         for oid in oids:
             self._tbuf.invalidate(oid, '')

@@ -798,7 +798,7 @@ class ClientStorage(object):
             # doesn't use the _load_lock, so it is possble to overlap
             # this load with an invalidation for the same object.

-            # XXX If we call again, we're guaranteed to get the
+            # If we call again, we're guaranteed to get the
             # post-invalidation data.  But if the data is still
             # current, we'll still get end == None.

@@ -857,8 +857,8 @@ class ClientStorage(object):
         days -- a number of days to subtract from the pack time;
                 defaults to zero.
         """
-        # XXX Is it okay that read-only connections allow pack()?
-        # rf argument ignored; server will provide it's own implementation
+        # TODO:  Is it okay that read-only connections allow pack()?
+        # rf argument ignored; server will provide its own implementation
         if t is None:
             t = time.time()
         t = t - (days * 86400)
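The pack-time arithmetic in the hunk just above is simple enough to restate standalone (a sketch of just the computation, not the full `pack()` method):

```python
import time

def pack_time(t=None, days=0):
    # Default to "now", then back off the requested number of days
    # (86400 seconds each), as ClientStorage.pack() computes it.
    if t is None:
        t = time.time()
    return t - (days * 86400)
```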
@@ -866,7 +866,7 @@ class ClientStorage(object):
     def _check_serials(self):
         """Internal helper to move data from _serials to _seriald."""
-        # XXX serials are always going to be the same, the only
+        # serials are always going to be the same, the only
         # question is whether an exception has been raised.
         if self._serials:
             l = len(self._serials)

@@ -939,7 +939,7 @@ class ClientStorage(object):
         if txn is not self._transaction:
             return
         try:
-            # XXX Are there any transactions that should prevent an
+            # Caution:  Are there any exceptions that should prevent an
             # abort from occurring?  It seems wrong to swallow them
             # all, yet you want to be sure that other abort logic is
             # executed regardless.

@@ -991,8 +991,7 @@ class ClientStorage(object):
         """
         # Must be called with _lock already acquired.

-        # XXX not sure why _update_cache() would be called on
-        # a closed storage.
+        # Not sure why _update_cache() would be called on a closed storage.
         if self._cache is None:
             return

@@ -1063,7 +1062,8 @@ class ClientStorage(object):
         # Invalidation as result of verify_cache().
         # Queue an invalidate for the end the verification procedure.
         if self._pickler is None:
-            # XXX This should never happen
+            # This should never happen.  TODO:  assert it doesn't, or log
+            # if it does.
             return
         self._pickler.dump(args)
src/ZEO/auth/auth_digest.py

@@ -30,8 +30,9 @@ security requirements are quite different as a result.  The HTTP
 protocol uses a nonce as a challenge.  The ZEO protocol requires a
 separate session key that is used for message authentication.  We
 generate a second nonce for this purpose; the hash of nonce and
-user/realm/password is used as the session key.  XXX I'm not sure if
-this is a sound approach; SRP would be preferred.
+user/realm/password is used as the session key.

+TODO:  I'm not sure if this is a sound approach; SRP would be preferred.
 """

 import os
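The scheme that docstring describes can be sketched as follows. The function names and the choice of SHA-1 here are assumptions for illustration; the module's actual helpers and hash choice may differ.

```python
import hashlib
import os

def hash_password(username, realm, password):
    # h(username:realm:password), the digest-auth style credential hash.
    s = "%s:%s:%s" % (username, realm, password)
    return hashlib.sha1(s.encode()).hexdigest()

def session_key(h_up, nonce):
    # The session key is the hash of the credential hash plus the
    # second nonce, as the docstring describes.
    return hashlib.sha1(("%s:%s" % (h_up, nonce)).encode()).hexdigest()

# The server would generate the extra nonce per connection:
nonce = os.urandom(8).hex()
key = session_key(hash_password("alice", "zeo-realm", "secret"), nonce)
```

The TODO's worry is visible even in the sketch: anyone who observes the nonce and knows the credential hash can derive the key, which is why a PAKE-style protocol such as SRP would be preferred.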
src/ZEO/cache.py

@@ -211,7 +211,7 @@ class ClientCache:
             self._trace(0x24, oid, tid)
             return
         lo, hi = L[i-1]
-        # XXX lo should always be less than tid
+        # lo should always be less than tid
         if not lo < tid <= hi:
             self._trace(0x24, oid, tid)
             return None
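The invariant that comment states (a noncurrent revision is valid for `tid` exactly when `lo < tid <= hi`) can be illustrated with a small standalone lookup. The list-of-pairs model here is a simplification for the example, not the cache's real per-oid representation:

```python
def find_noncurrent(revisions, tid):
    # revisions: sorted list of (start_tid, end_tid) validity ranges.
    # A revision is valid for tid when start_tid < tid <= end_tid.
    for lo, hi in reversed(revisions):
        if lo < tid:
            # lo should always be less than tid for the candidate hit;
            # the upper bound decides whether the range actually covers tid.
            return lo if tid <= hi else None
    return None
```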
@@ -361,12 +361,13 @@ class ClientCache:
         del self.current[oid] # because we no longer have current data

         # Update the end_tid half of oid's validity range on disk.
-        # XXX Want to fetch object without marking it as accessed
+        # TODO: Want to fetch object without marking it as accessed.
         o = self.fc.access((oid, cur_tid))
         assert o is not None
         assert o.end_tid is None  # i.e., o was current
         if o is None:
-            # XXX is this possible? (doubt it; added an assert just above)
+            # TODO:  Since we asserted o is not None above, this block
+            # should be removing; waiting on time to prove it can't happen.
             return
         o.end_tid = tid
         self.fc.update(o)  # record the new end_tid on disk

@@ -377,7 +378,7 @@ class ClientCache:
     ##
     # Return the number of object revisions in the cache.
     #
-    # XXX just return len(self.cache)?
+    # Or maybe better to just return len(self.cache)?  Needs clearer use case.
     def __len__(self):
         n = len(self.current) + len(self.version)
         if self.noncurrent:

@@ -389,7 +390,7 @@ class ClientCache:
     # cache.  This generator is used by cache verification.
     def contents(self):
-        # XXX May need to materialize list instead of iterating,
+        # May need to materialize list instead of iterating;
         # depends on whether the caller may change the cache.
         for o in self.fc:
             oid, tid = o.key

@@ -993,7 +994,7 @@ class FileCache(object):
         # header to update the in-memory data structures held by
         # ClientCache.
-        # XXX Or we could just keep the header in memory at all times.
+        # We could instead just keep the header in memory at all times.
         e = self.key2entry.pop(key, None)
         if e is None:
src/ZEO/tests/Cache.py

@@ -28,8 +28,9 @@ class TransUndoStorageWithCache:
         info = self._storage.undoInfo()
         if not info:
-            # XXX perhaps we have an old storage implementation that
-            # does do the negative nonsense
+            # Preserved this comment, but don't understand it:
+            # "Perhaps we have an old storage implementation that
+            # does do the negative nonsense."
             info = self._storage.undoInfo(0, 20)
         tid = info[0]['id']
src/ZEO/tests/CommitLockTests.py

@@ -132,7 +132,7 @@ class CommitLockTests:
     def _duplicate_client(self):
         "Open another ClientStorage to the same server."
-        # XXX argh it's hard to find the actual address
+        # It's hard to find the actual address.
         # The rpc mgr addr attribute is a list.  Each element in the
         # list is a socket domain (AF_INET, AF_UNIX, etc.) and an
         # address.
src/ZEO/tests/ConnectionTests.py

@@ -261,8 +261,7 @@ class ConnectionTests(CommonSetupTearDown):
         self._storage.close()

     def checkMultipleServers(self):
-        # XXX crude test at first -- just start two servers and do a
-        # commit at each one.
+        # Crude test-- just start two servers and do a commit at each one.
         self._newAddr()
         self._storage = self.openClientStorage('test', 100000)

@@ -334,7 +333,7 @@ class ConnectionTests(CommonSetupTearDown):
         self.assertRaises(ReadOnlyError, self._dostore)
         self._storage.close()

-    # XXX Compare checkReconnectXXX() here to checkReconnection()
+    # TODO:  Compare checkReconnectXXX() here to checkReconnection()
     # further down.  Is the code here hopelessly naive, or is
     # checkReconnection() overwrought?

@@ -535,13 +534,6 @@ class ConnectionTests(CommonSetupTearDown):
     def checkReconnection(self):
         # Check that the client reconnects when a server restarts.

-        # XXX Seem to get occasional errors that look like this:
-        # File ZEO/zrpc2.py, line 217, in handle_request
-        # File ZEO/StorageServer.py, line 325, in storea
-        # File ZEO/StorageServer.py, line 209, in _check_tid
-        # StorageTransactionError: (None, <tid>)
-        # could system reconnect and continue old transaction?
-
         self._storage = self.openClientStorage()
         oid = self._storage.new_oid()
         obj = MinPO(12)

@@ -609,7 +601,7 @@ class ConnectionTests(CommonSetupTearDown):
     # transaction.  This is not really a connection test, but it needs
     # about the same infrastructure (several storage servers).

-    # XXX WARNING: with the current ZEO code, this occasionally fails.
+    # TODO:  with the current ZEO code, this occasionally fails.
     # That's the point of this test. :-)

     def NOcheckMultiStorageTransaction(self):
src/ZEO/zrpc/client.py

@@ -120,13 +120,13 @@ class ConnectionManager(object):
         # be started in a child process after a fork.  Regardless,
         # it's good to be defensive.

-        # XXX need each connection started with async==0 to have a
-        # callback
+        # We need each connection started with async==0 to have a
+        # callback.
         log("CM.set_async(%s)" % repr(map), level=logging.DEBUG)
         if not self.closed and self.trigger is None:
             log("CM.set_async(): first call")
             self.trigger = trigger()
-            self.thr_async = 1 # XXX needs to be set on the Connection
+            self.thr_async = 1 # needs to be set on the Connection

     def attempt_connect(self):
         """Attempt a connection to the server without blocking too long.

@@ -139,8 +139,8 @@ class ConnectionManager(object):
         finishes quickly.
         """
-        # XXX Will a single attempt take too long?
-        # XXX Answer: it depends -- normally, you'll connect or get a
+        # Will a single attempt take too long?
+        # Answer:  it depends -- normally, you'll connect or get a
         # connection refused error very quickly.  Packet-eating
         # firewalls and other mishaps may cause the connect to take a
         # long time to time out though.  It's also possible that you

@@ -228,7 +228,7 @@ class ConnectionManager(object):
 # to the errno value(s) expected if the connect succeeds *or* if it's
 # already connected (our code can attempt redundant connects).
 if hasattr(errno, "WSAEWOULDBLOCK"):    # Windows
-    # XXX The official Winsock docs claim that WSAEALREADY should be
+    # Caution:  The official Winsock docs claim that WSAEALREADY should be
     # treated as yet another "in progress" indicator, but we've never
     # seen this.
     _CONNECT_IN_PROGRESS = (errno.WSAEWOULDBLOCK,)
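The platform split above can be exercised with a small helper. `classify_connect` is a hypothetical condensation of the wrapper's connect handling, and the real module's errno sets include a few more values than shown here:

```python
import errno

# Same platform test as zrpc/client.py: the Winsock constants only
# exist on the errno module under Windows.
if hasattr(errno, "WSAEWOULDBLOCK"):    # Windows
    _CONNECT_IN_PROGRESS = (errno.WSAEWOULDBLOCK,)
    _CONNECT_OK = (0, errno.WSAEISCONN)
else:                                   # POSIX
    _CONNECT_IN_PROGRESS = (errno.EINPROGRESS,)
    _CONNECT_OK = (0, errno.EISCONN)

def classify_connect(err):
    # err is the return value of socket.connect_ex(): 0 on success,
    # otherwise an errno value.  "ok" also covers a redundant connect
    # on an already-connected socket.
    if err in _CONNECT_OK:
        return "connected"
    if err in _CONNECT_IN_PROGRESS:
        return "in progress"
    return "failed"
```

The point of the original comment survives in the sketch: which errno values mean "keep waiting" versus "give up" is platform-dependent, and Winsock's documented behavior (WSAEALREADY as in-progress) was never observed in practice.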
@@ -287,7 +287,7 @@ class ConnectThread(threading.Thread):
         delay = self.tmin
         success = 0
         # Don't wait too long the first time.
-        # XXX make timeout configurable?
+        # TODO:  make timeout configurable?
         attempt_timeout = 5
         while not self.stopped:
             success = self.try_connecting(attempt_timeout)

@@ -373,7 +373,7 @@ class ConnectThread(threading.Thread):
                 log("CT: select() %d, %d, %d" % tuple(map(len, (r,w,x))))
             except select.error, msg:
                 log("CT: select failed; msg=%s" % str(msg),
-                    level=logging.WARNING) # XXX Is this the right level?
+                    level=logging.WARNING)
                 continue
             # Exceptable wrappers are in trouble; close these suckers
             for wrap in x:

@@ -408,7 +408,7 @@ class ConnectThread(threading.Thread):
             assert wrap.state == "closed"
             del wrappers[wrap]

-        # XXX should check deadline
+        # TODO:  should check deadline

 class ConnectWrapper:

@@ -520,8 +520,7 @@ class ConnectWrapper:
         self.preferred = 0
         if self.conn is not None:
             # Closing the ZRPC connection will eventually close the
-            # socket, somewhere in asyncore.
-            # XXX Why do we care? --Guido
+            # socket, somewhere in asyncore.  Guido asks:  Why do we care?
             self.conn.close()
             self.conn = None
         if self.sock is not None:
src/ZEO/zrpc/connection.py

@@ -407,7 +407,7 @@ class Connection(smac.SizedMessageAsyncConnection, object):
             self.close()

     def check_method(self, name):
-        # XXX Is this sufficient "security" for now?
+        # TODO:  This is hardly "secure".
         if name.startswith('_'):
             return None
         return hasattr(self.obj, name)
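The two tests `check_method()` applies (a leading-underscore filter, then attribute existence) amount to the following, shown on a hypothetical exported object:

```python
class ExportedAPI:
    """Hypothetical server-side object exposed over zrpc."""
    def store(self, oid, data):
        return "stored"
    def _internal(self):
        return "secret"

def check_method(obj, name):
    # Refuse any name with a leading underscore, then confirm the
    # attribute exists.  The filter is purely name-based -- nothing
    # distinguishes a data attribute from a method, and nothing stops
    # reaching internals through a public method -- which is why the
    # comment calls it hardly "secure".
    if name.startswith('_'):
        return None
    return hasattr(obj, name)
```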
@@ -524,7 +524,7 @@ class Connection(smac.SizedMessageAsyncConnection, object):
     def _prepare_async(self):
         self.thr_async = False
         ThreadedAsync.register_loop_callback(self.set_async)
-        # XXX If we are not in async mode, this will cause dead
+        # TODO:  If we are not in async mode, this will cause dead
         # Connections to be leaked.

     def set_async(self, map):

@@ -642,9 +642,9 @@ class Connection(smac.SizedMessageAsyncConnection, object):
         # loop is only intended to make sure all incoming data is
         # returned.

-        # XXX What if the server sends a lot of invalidations,
-        # such that pending never finishes?  Seems unlikely, but
-        # not impossible.
+        # Insecurity:  What if the server sends a lot of
+        # invalidations, such that pending never finishes?  Seems
+        # unlikely, but possible.
         timeout = 0
         if r:
             try:

@@ -771,7 +771,7 @@ class ManagedClientConnection(Connection):
         return 0

     def is_async(self):
-        # XXX could the check_mgr_async() be avoided on each test?
+        # TODO:  could the check_mgr_async() be avoided on each test?
         if self.thr_async:
             return 1
         return self.check_mgr_async()
src/ZODB/BaseStorage.py

@@ -309,7 +309,7 @@ class BaseStorage(UndoLogCompatible):
     def loadBefore(self, oid, tid):
         """Return most recent revision of oid before tid committed."""

-        # XXX Is it okay for loadBefore() to return current data?
+        # Unsure:  Is it okay for loadBefore() to return current data?
         # There doesn't seem to be a good reason to forbid it, even
         # though the typical use of this method will never find
         # current data.  But maybe we should call it loadByTid()?
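The loadBefore() contract under discussion -- return the newest revision committed strictly before `tid`, along with its validity interval -- can be modeled on a toy history list. The `(tid, data)` pair list is an illustration, not the real storage API:

```python
def load_before(history, tid):
    # history: list of (committed_tid, data), oldest first.
    # Returns (data, start_tid, end_tid), or None if nothing was
    # committed before tid.  end_tid is None when the matched revision
    # is still current -- the case the comment asks whether
    # loadBefore() should be allowed to return.
    prev = None
    for i, (ctid, _) in enumerate(history):
        if ctid >= tid:
            break
        prev = i
    if prev is None:
        return None
    ctid, data = history[prev]
    end = history[prev + 1][0] if prev + 1 < len(history) else None
    return data, ctid, end
```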
@@ -329,7 +329,7 @@ class BaseStorage(UndoLogCompatible):
         # Note: history() returns the most recent record first.

-        # XXX The filter argument to history() only appears to be
+        # TODO:  The filter argument to history() only appears to be
         # supported by FileStorage.  Perhaps it shouldn't be used.
         L = self.history(oid, "", n, lambda d: not d["version"])
         if not L:
src/ZODB/Connection.py
View file @
183162af
...
@@ -95,16 +95,16 @@ class Connection(ExportImport, object):
...
@@ -95,16 +95,16 @@ class Connection(ExportImport, object):
The Connection manages movement of objects in and out of object
The Connection manages movement of objects in and out of object
storage.
storage.
XXX
We should document an intended API for using a Connection via
TODO:
We should document an intended API for using a Connection via
multiple threads.
multiple threads.
XXX
We should explain that the Connection has a cache and that
TODO:
We should explain that the Connection has a cache and that
multiple calls to get() will return a reference to the same
multiple calls to get() will return a reference to the same
object, provided that one of the earlier objects is still
object, provided that one of the earlier objects is still
referenced. Object identity is preserved within a connection, but
referenced. Object identity is preserved within a connection, but
not across connections.
not across connections.
XXX
Mention the database pool.
TODO:
Mention the database pool.
A database connection always presents a consistent view of the
A database connection always presents a consistent view of the
objects in the database, although it may not always present the
objects in the database, although it may not always present the
...
@@ -186,8 +186,7 @@ class Connection(ExportImport, object):
...
@@ -186,8 +186,7 @@ class Connection(ExportImport, object):
# Caches for versions end up empty if the version
# Caches for versions end up empty if the version
# is not used for a while. Non-version caches
# is not used for a while. Non-version caches
# keep their content indefinitely.
# keep their content indefinitely.
# Unclear: Why do we want version caches to behave this way?
# XXX Why do we want version caches to behave this way?
self
.
_cache
.
cache_drain_resistance
=
100
self
.
_cache
.
cache_drain_resistance
=
100
self
.
_committed
=
[]
self
.
_committed
=
[]
...
@@ -221,12 +220,11 @@ class Connection(ExportImport, object):
        # from a single transaction should be applied atomically, so
        # the lock must be held when reading _invalidated.
-       # XXX It sucks that we have to hold the lock to read
-       # _invalidated.  Normally, _invalidated is written by calling
-       # dict.update, which will execute atomically by virtue of the
-       # GIL.  But some storage might generate oids where hash or
-       # compare invokes Python code.  In that case, the GIL can't
-       # save us.
+       # It sucks that we have to hold the lock to read _invalidated.
+       # Normally, _invalidated is written by calling dict.update, which
+       # will execute atomically by virtue of the GIL.  But some storage
+       # might generate oids where hash or compare invokes Python code.  In
+       # that case, the GIL can't save us.
        self._inv_lock = threading.Lock()
        self._invalidated = d = {}
        self._invalid = d.has_key
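The locking discussion above can be sketched in a few lines: `dict.update` is only atomic under the GIL while every key hashes and compares in C, so a dict of invalidated oids gets an explicit lock. A minimal sketch of the pattern (hypothetical class and method names, not ZODB's actual API):

```python
import threading

class InvalidationSet:
    """Guard a shared dict of invalidated oids with an explicit lock.

    A plain dict.update is atomic under the GIL only while every key
    hashes/compares in C; oids with Python-level __hash__ or __eq__
    break that guarantee, so all access goes through the lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._invalidated = {}

    def invalidate(self, oids):
        # Writer side: merge the new invalidations under the lock.
        with self._lock:
            self._invalidated.update(dict.fromkeys(oids))

    def flush(self):
        # Reader side: atomically snapshot and clear the set.
        with self._lock:
            oids = list(self._invalidated)
            self._invalidated.clear()
        return oids
```

Both sides take the same lock, so a reader can never observe a half-applied batch of invalidations.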
...
@@ -329,7 +327,6 @@ class Connection(ExportImport, object):
        - `ConnectionStateError`: if the connection is closed.
        """
        if self._storage is None:
-           # XXX Should this be a ZODB-specific exception?
            raise ConnectionStateError("The database connection is closed")
        obj = self._cache.get(oid, None)
...
@@ -424,7 +421,7 @@ class Connection(ExportImport, object):
        register for afterCompletion() calls.
        """
-       # XXX Why do we go to all the trouble of setting _db and
+       # TODO: Why do we go to all the trouble of setting _db and
        # other attributes on open and clearing them on close?
        # A Connection is only ever associated with a single DB
        # and Storage.
...
@@ -478,14 +475,13 @@ class Connection(ExportImport, object):
        self._tpc_cleanup()

-   # XXX should there be a way to call incrgc directly?
-   # perhaps "full sweep" should do that?
-   # XXX we should test what happens when these methods are called
+   # Should there be a way to call incrgc directly?
+   # Perhaps "full sweep" should do that?
+   # TODO: we should test what happens when these methods are called
    # mid-transaction.
    def cacheFullSweep(self, dt=None):
-       # XXX needs doc string
        deprecated36("cacheFullSweep is deprecated. "
                     "Use cacheMinimize instead.")
        if dt is None:
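`deprecated36` above emits a deprecation warning for the old API. A minimal sketch of such a helper is below; the name matches the call in the hunk, but this body is an assumption, not ZODB's actual implementation:

```python
import warnings

def deprecated36(msg):
    # Hypothetical sketch of a "remove in ZODB 3.6" warning helper.
    # stacklevel=3 points the warning at the caller of the deprecated
    # method (warn -> deprecated36 -> deprecated method -> caller).
    warnings.warn("This will be removed in ZODB 3.6:\n%s" % msg,
                  DeprecationWarning, stacklevel=3)
```

Callers like `cacheFullSweep` then need only a one-line call, and the warning is attributed to the user's code rather than to the helper.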
...
@@ -581,7 +577,8 @@ class Connection(ExportImport, object):
    def commit(self, transaction):
        if self._import:
-           # XXX eh?
+           # TODO: This code seems important for Zope, but needs docs
+           # to explain why.
            self._importDuringCommit(transaction, *self._import)
            self._import = None
...
@@ -647,7 +644,7 @@ class Connection(ExportImport, object):
                self._cache[oid] = obj
            except:
                # Dang, I bet it's wrapped:
-               # XXX Deprecate, then remove, this.
+               # TODO: Deprecate, then remove, this.
                if hasattr(obj, 'aq_base'):
                    self._cache[oid] = obj.aq_base
                else:
...
@@ -776,7 +773,7 @@ class Connection(ExportImport, object):
            # directly.  That is no longer allowed, but we need to
            # provide support for old code that still does it.
-           # XXX The actual complaint here is that an object without
+           # The actual complaint here is that an object without
            # an oid is being registered.  I can't think of any way to
            # achieve that without assignment to _p_jar.  If there is
            # a way, this will be a very confusing warning.
...
@@ -922,7 +919,7 @@ class Connection(ExportImport, object):
    def oldstate(self, obj, tid):
        """Return copy of obj that was written by tid.

-       XXX The returned object does not have the typical metadata
+       The returned object does not have the typical metadata
        (_p_jar, _p_oid, _p_serial) set.  I'm not sure how references
        to other persistent objects are handled.
...

src/ZODB/DB.py

...
@@ -280,7 +280,8 @@ class DB(object):
            # Just let the connection go.
            # We need to break circular refs to make it really go.
-           # XXX What objects are involved in the cycle?
+           # TODO: Figure out exactly which objects are involved in the
+           # cycle.
            connection.__dict__.clear()
            return
        pool.repush(connection)
...
@@ -713,9 +714,8 @@ class ResourceManager(object):
        return "%s:%s" % (self._db._storage.sortKey(), id(self))

    def tpc_begin(self, txn, sub=False):
-       # XXX we should never be called with sub=True.
        if sub:
-           raise ValueError, "doesn't supoprt sub-transactions"
+           raise ValueError("doesn't support sub-transactions")
        self._db._storage.tpc_begin(txn)

    # The object registers itself with the txn manager, so the ob
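`sortKey()` above builds a string used to order resource managers during two-phase commit, so every process acquires commit locks in the same order. A toy illustration of the idea (class and attribute names are illustrative, not ZODB's):

```python
class ToyResourceManager:
    """Illustrative only -- not ZODB's ResourceManager."""

    def __init__(self, storage_key, ident):
        self._storage_key = storage_key
        self._ident = ident

    def sortKey(self):
        # Storage key first, so all participants for a storage sort
        # together; the ident breaks ties between managers.
        return "%s:%s" % (self._storage_key, self._ident)

managers = [ToyResourceManager("zeo-b", 2), ToyResourceManager("zeo-a", 9)]
ordered = sorted(managers, key=lambda rm: rm.sortKey())
```

Sorting participants by such a key gives a global, deterministic locking order, which is what prevents deadlock between concurrent commits.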
...

src/ZODB/DemoStorage.py

...
@@ -323,7 +323,7 @@ class DemoStorage(BaseStorage.BaseStorage):
            last = first - last + 1
        self._lock_acquire()
        try:
-           # XXX Shouldn't this be sorted?
+           # Unsure: shouldn't we sort this?
            transactions = self._data.items()
            pos = len(transactions)
            r = []
...
@@ -404,7 +404,7 @@ class DemoStorage(BaseStorage.BaseStorage):
        index, vindex = self._build_indexes(stop)
-       # XXX This packing algorithm is flawed.  It ignores
+       # TODO: This packing algorithm is flawed.  It ignores
        # references from non-current records after the pack
        # time.
...

src/ZODB/FileStorage/FileStorage.py

...
@@ -700,9 +700,10 @@ class FileStorage(BaseStorage.BaseStorage,
        # Else oid's data record contains the data, and the file offset of
        # oid's data record is returned.  This data record should contain
        # a pickle identical to the 'data' argument.
-       # XXX If the length of the stored data doesn't match len(data),
-       # XXX an exception is raised.  If the lengths match but the data
-       # XXX isn't the same, 0 is returned.  Why the discrepancy?
+       # Unclear: If the length of the stored data doesn't match len(data),
+       # an exception is raised.  If the lengths match but the data isn't
+       # the same, 0 is returned.  Why the discrepancy?
        self._file.seek(tpos)
        h = self._file.read(TRANS_HDR_LEN)
        tid, tl, status, ul, dl, el = unpack(TRANS_HDR, h)
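The `unpack(TRANS_HDR, h)` call above parses a fixed-size binary transaction header with `struct`. A self-contained sketch of the same technique; the field layout below is illustrative, not FileStorage's exact `TRANS_HDR` definition:

```python
import struct

# Hypothetical header: 8-byte tid, 8-byte length, 1-byte status,
# three 2-byte counts -- all big-endian, 23 bytes total.
HDR = ">8sQcHHH"
HDR_LEN = struct.calcsize(HDR)

def parse_header(raw):
    # Unpack one fixed-size record read from the data file.
    tid, tlen, status, ul, dl, el = struct.unpack(HDR, raw)
    return {"tid": tid, "tlen": tlen, "status": status,
            "ul": ul, "dl": dl, "el": el}

raw = struct.pack(HDR, b"\0" * 8, 64, b" ", 4, 11, 0)
```

Reading exactly `HDR_LEN` bytes and unpacking with one format string keeps the reader and writer in lockstep about the on-disk layout.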
...
@@ -820,7 +821,7 @@ class FileStorage(BaseStorage.BaseStorage,
        if h.version:
            return h.pnv
        if h.back:
-           # XXX Not sure the following is always true:
+           # TODO: Not sure the following is always true:
            # The previous record is not for this version, yet we
            # have a backpointer to it.  The current record must
            # be an undo of an abort or commit, so the backpointer
...
@@ -1175,8 +1176,8 @@ class FileStorage(BaseStorage.BaseStorage,
                new.setVersion(v, snv, vprev)
                self._tvindex[v] = here
-               # XXX This seek shouldn't be necessary, but some other
-               # bit of code is messig with the file pointer.
+               # TODO: This seek shouldn't be necessary, but some other
+               # bit of code is messing with the file pointer.
        assert self._tfile.tell() == here - base, (here, base,
                                                   self._tfile.tell())
        self._tfile.write(new.asString())
...
@@ -1857,7 +1858,7 @@ class FileIterator(Iterator, FileStorageFormatter):
    def next(self, index=0):
        if self._file is None:
-           # A closed iterator.  XXX: Is IOError the best we can do?  For
+           # A closed iterator.  Is IOError the best we can do?  For
            # now, mimic a read on a closed file.
            raise IOError, 'iterator is closed'
...
@@ -1988,8 +1989,8 @@ class RecordIterator(Iterator, BaseStorage.TransactionRecord,
            data = None
        else:
            data, tid = self._loadBackTxn(h.oid, h.back, False)
-           # XXX looks like this only goes one link back, should
-           # it go to the original data like BDBFullStorage?
+           # Caution: looks like this only goes one link back.
+           # Should it go to the original data like BDBFullStorage?
            prev_txn = self.getTxnFromData(h.oid, h.back)
        r = Record(h.oid, h.tid, h.version, data, prev_txn, pos)
...

src/ZODB/FileStorage/fsdump.py

...
@@ -47,8 +47,8 @@ def fsdump(path, file=None, with_offset=1):
            version = ""
        if rec.data_txn:
-           # XXX It would be nice to print the transaction number
-           # (i) but it would be too expensive to keep track of.
+           # It would be nice to print the transaction number
+           # (i) but it would be expensive to keep track of.
            bp = " bp=%016x" % u64(rec.data_txn)
        else:
            bp = ""
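The `u64` call above converts an 8-byte big-endian string into an integer so the backpointer can be printed as a 16-digit hex offset. A minimal sketch of that conversion; this `u64` mirrors the one ZODB uses, via `struct`:

```python
import struct

def u64(v):
    # Unpack an 8-byte big-endian string as an unsigned 64-bit int.
    return struct.unpack(">Q", v)[0]

def format_backpointer(data_txn):
    # Same formatting as fsdump: 16 zero-padded hex digits.
    return " bp=%016x" % u64(data_txn)
```

Fixed-width hex output keeps dump lines aligned, which makes diffs of two `fsdump` runs easy to read.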
...
@@ -64,7 +64,7 @@ def fmt(p64):
class Dumper:
    """A very verbose dumper for debugging FileStorage problems."""
-   # XXX Should revise this class to use FileStorageFormatter.
+   # TODO: Should revise this class to use FileStorageFormatter.
    def __init__(self, path, dest=None):
        self.file = open(path, "rb")
...

src/ZODB/FileStorage/fspack.py

...
@@ -82,9 +82,10 @@ class DataCopier(FileStorageFormatter):
        # Else oid's data record contains the data, and the file offset of
        # oid's data record is returned.  This data record should contain
        # a pickle identical to the 'data' argument.
-       # XXX If the length of the stored data doesn't match len(data),
-       # XXX an exception is raised.  If the lengths match but the data
-       # XXX isn't the same, 0 is returned.  Why the discrepancy?
+       # Unclear: If the length of the stored data doesn't match len(data),
+       # an exception is raised.  If the lengths match but the data isn't
+       # the same, 0 is returned.  Why the discrepancy?
        h = self._read_txn_header(tpos)
        tend = tpos + h.tlen
        pos = self._file.tell()
...
@@ -121,7 +122,7 @@ class DataCopier(FileStorageFormatter):
        if h.version:
            return h.pnv
        elif bp:
-           # XXX Not sure the following is always true:
+           # Unclear: Not sure the following is always true:
            # The previous record is not for this version, yet we
            # have a backpointer to it.  The current record must
            # be an undo of an abort or commit, so the backpointer
...
@@ -280,8 +281,8 @@ class GC(FileStorageFormatter):
                if err.buf != "":
                    raise
            if th.status == 'p':
-               # Delay import to cope with circular imports.
-               # XXX put exceptions in a separate module
+               # Delayed import to cope with circular imports.
+               # TODO: put exceptions in a separate module.
                from ZODB.FileStorage.FileStorage import RedundantPackWarning
                raise RedundantPackWarning(
                    "The database has already been packed to a later time"
...
@@ -447,9 +448,9 @@ class FileStoragePacker(FileStorageFormatter):
        # The packer will use several indexes.
        # index: oid -> pos
-       # vindex: version -> pos of XXX
+       # vindex: version -> pos
        # tindex: oid -> pos, for current txn
-       # tvindex: version -> pos of XXX, for current txn
+       # tvindex: version -> pos, for current txn
        # oid2tid: not used by the packer
        self.index = fsIndex()
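The index layout described above is a set of mappings from oids (or version names) to file positions. A toy sketch of how such an oid -> pos index behaves during a copy pass, with plain dicts standing in for ZODB's `fsIndex`:

```python
def copy_records(records):
    """Build an oid -> pos index for a copied file.

    records: iterable of (oid, length) pairs in file order.  Only the
    position of the last (i.e. current) record for each oid is kept,
    mirroring how a pack keeps just the surviving current data."""
    index = {}
    pos = 0
    for oid, length in records:
        index[oid] = pos  # later records for the same oid win
        pos += length
    return index
```

The same "last write wins" update rule is why the packer can build its index in a single forward scan of the file.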
...
@@ -476,12 +477,12 @@ class FileStoragePacker(FileStorageFormatter):
        # Because these pointers are stored as file offsets, they
        # must be updated when we copy data.
-       # XXX Need to add sanity checking to pack
+       # TODO: Should add sanity checking to pack.
        self.gc.findReachable()

        # Setup the destination file and copy the metadata.
-       # XXX rename from _tfile to something clearer
+       # TODO: rename from _tfile to something clearer.
        self._tfile = open(self._name + ".pack", "w+b")
        self._file.seek(0)
        self._tfile.write(self._file.read(self._metadata_size))
...

src/ZODB/broken.py

...
@@ -94,7 +94,7 @@ class Broken(object):
    __Broken_state__ = __Broken_initargs__ = None

-   __name__ = 'bob XXX'
+   __name__ = 'broken object'

    def __new__(class_, *args):
        result = object.__new__(class_)
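`Broken` stands in for persistent objects whose class can no longer be imported, preserving their data so nothing is lost. A stripped-down sketch of the idea; this is a hypothetical placeholder, much simpler than ZODB's real `Broken`:

```python
class BrokenPlaceholder(object):
    """Hold the initargs and state of an object whose class is
    missing, so the data survives a load/store round trip."""

    def __new__(cls, *args):
        result = object.__new__(cls)
        # Store through __dict__ to avoid triggering any attribute
        # machinery a subclass might add.
        result.__dict__['__Broken_initargs__'] = args
        result.__dict__['__Broken_state__'] = None
        return result

    def __setstate__(self, state):
        self.__dict__['__Broken_state__'] = state
```

Because the captured state is kept verbatim, re-saving the object writes back exactly what was read, even though its real class is unavailable.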
...

src/ZODB/fstools.py

...
@@ -14,8 +14,8 @@
"""Tools for using FileStorage data files.

-XXX This module needs tests.
+TODO: This module needs tests.
-XXX This file needs to be kept in sync with FileStorage.py.
+Caution: This file needs to be kept in sync with FileStorage.py.
"""
import cPickle
...

src/ZODB/tests/BasicStorage.py

...
@@ -176,7 +176,7 @@ class BasicStorage:
        eq(revid2, self._storage.getSerial(oid))

    def checkTwoArgBegin(self):
-       # XXX how standard is three-argument tpc_begin()?
+       # Unsure: how standard is three-argument tpc_begin()?
        t = transaction.Transaction()
        tid = '\0\0\0\0\0psu'
        self._storage.tpc_begin(t, tid)
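The literal `'\0\0\0\0\0psu'` above is just an arbitrary 8-byte transaction id. Tids in this storage format are 8-byte big-endian values, which has a useful property worth noting:

```python
import struct

# Big-endian 8-byte tids sort the same as the integers they encode,
# so later transactions always have lexicographically larger tids.
tids = [struct.pack(">Q", n) for n in (3, 1, 2)]
sorted_tids = sorted(tids)
```

This is why code throughout the storage layer can compare raw tid strings directly instead of converting them to integers first.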
...

src/ZODB/tests/ConflictResolution.py

...
@@ -37,7 +37,7 @@ class PCounter(Persistent):
        return oldState

-# XXX What if _p_resolveConflict _thinks_ it resolved the
+# Insecurity: What if _p_resolveConflict _thinks_ it resolved the
# conflict, but did something wrong?
class PCounter2(PCounter):
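`_p_resolveConflict` receives three states -- the common ancestor, the committed state, and the state the losing transaction tried to write -- and returns a merged state. For a counter the classic merge adds both deltas; a minimal sketch (a standalone function here, whereas the real hook is a method on the persistent class):

```python
def resolve_counter(old_state, committed_state, new_state):
    """Three-way merge for a counter stored as {'count': n}.

    Each side's delta relative to the common ancestor is applied,
    so two concurrent increments both survive the merge."""
    merged = dict(committed_state)
    merged['count'] = (committed_state['count']
                       + new_state['count']
                       - old_state['count'])
    return merged
```

The comment in the hunk above points at the real risk: the hook can return a plausible state that is nonetheless semantically wrong, and the machinery has no way to tell.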
...

src/persistent/cPersistence.c

...
@@ -85,7 +85,7 @@ unghostify(cPersistentObject *self)
    if (self->state < 0 && self->jar) {
        PyObject *r;
-       /* XXX Is it ever possible to not have a cache? */
+       /* Is it ever possible to not have a cache? */
        if (self->cache) {
            /* Create a node in the ring for this unghostified object. */
            self->cache->non_ghost_count++;
...
@@ -156,7 +156,7 @@ ghostify(cPersistentObject *self)
    if (self->state == cPersistent_GHOST_STATE)
        return;
-   /* XXX is it ever possible to not have a cache? */
+   /* Is it ever possible to not have a cache? */
    if (self->cache == NULL) {
        self->state = cPersistent_GHOST_STATE;
        return;
...
@@ -386,7 +386,7 @@ pickle___getstate__(PyObject *self)
            continue;
        }
-       /* XXX will this go through our getattr hook? */
+       /* Unclear: Will this go through our getattr hook? */
        value = PyObject_GetAttr(self, name);
        if (value == NULL)
            PyErr_Clear();
...
@@ -548,11 +548,12 @@ pickle___reduce__(PyObject *self)
static PyObject *
Per__getstate__(cPersistentObject *self)
{
-   /* XXX Should it be an error to call __getstate__() on a ghost? */
+   /* TODO: Should it be an error to call __getstate__() on a ghost? */
    if (unghostify(self) < 0)
        return NULL;
-   /* XXX shouldn't we increment stickyness? */
+   /* TODO: should we increment stickyness?  Tim doesn't understand that
+      question. */
    return pickle___getstate__((PyObject *)self);
}
...
@@ -723,7 +724,7 @@ Per__p_getattr(cPersistentObject *self, PyObject *name)
}

/*
-  XXX we should probably not allow assignment of __class__ and __dict__.
+  TODO: we should probably not allow assignment of __class__ and __dict__.
*/
static int
...
@@ -858,7 +859,7 @@ Per_set_changed(cPersistentObject *self, PyObject *v)
       is to clear the exception, but that simply masks the
       error.

-      XXX We'll print an error to stderr just like exceptions in
+      This prints an error to stderr just like exceptions in
       __del__().  It would probably be better to log it but that
       would be painful from C.
    */
...

src/persistent/cPickleCache.c

...
@@ -303,7 +303,7 @@ cc_full_sweep(ccobject *self, PyObject *args)
{
    int dt = -999;

-   /* XXX This should be deprecated */
+   /* TODO: This should be deprecated. */
    if (!PyArg_ParseTuple(args, "|i:full_sweep", &dt))
        return NULL;
...
@@ -354,7 +354,7 @@ _invalidate(ccobject *self, PyObject *key)
    /* This looks wrong, but it isn't.  We use strong references to types
       because they don't have the ring members.
-      XXX the result is that we *never* remove classes unless
+      The result is that we *never* remove classes unless
       they are modified.
    */
...
@@ -412,7 +412,7 @@ cc_invalidate(ccobject *self, PyObject *inv)
            _invalidate(self, key);
            Py_DECREF(key);
        }
-       /* XXX Do we really want to modify the input? */
+       /* Dubious: modifying the input may be an unexpected side effect. */
        PySequence_DelSlice(inv, 0, l);
    }
}
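The `PySequence_DelSlice(inv, 0, l)` call above empties the caller's list in place, which is exactly the side effect the comment flags as dubious. The same behavior in Python terms, contrasted with the non-mutating alternative (toy functions, not the C API):

```python
def invalidate_in_place(inv):
    # Mirrors the C code: consume the entries, then delete them from
    # the caller's own list.
    processed = list(inv)
    del inv[:len(processed)]
    return processed

def invalidate_copy(inv):
    # Friendlier alternative: work on a copy, leave the caller's
    # list untouched.
    return list(inv)
```

Mutating an argument is occasionally a deliberate protocol, but when it is merely convenient it surprises callers who reuse the list afterward.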
...
@@ -603,7 +603,7 @@ cc_oid_unreferenced(ccobject *self, PyObject *oid)
    */
    Py_INCREF(v);
-   /* XXX Should we call _Py_ForgetReference() on error exit? */
+   /* TODO: Should we call _Py_ForgetReference() on error exit? */
    if (PyDict_DelItem(self->data, oid) < 0)
        return;
    Py_DECREF((ccobject *)((cPersistentObject *)v)->cache);
@@ -851,7 +851,7 @@ cc_add_item(ccobject *self, PyObject *key, PyObject *v)
...
@@ -851,7 +851,7 @@ cc_add_item(ccobject *self, PyObject *key, PyObject *v)
classes that derive from persistent.Persistent, BTrees,
classes that derive from persistent.Persistent, BTrees,
etc), report an error.
etc), report an error.
XXX Need a bette
r test.
TODO: checking sizeof() seems a poo
r test.
*/
*/
PyErr_SetString
(
PyExc_TypeError
,
PyErr_SetString
(
PyExc_TypeError
,
"Cache values must be persistent objects."
);
"Cache values must be persistent objects."
);
...

src/scripts/fstest.py

...
@@ -193,12 +193,11 @@ def check_drec(path, file, pos, tpos, tid):
                     (path, pos, tloc, tpos))
    pos = pos + dlen
-   # XXX is the following code necessary?
    if plen:
        file.seek(plen, 1)
    else:
        file.seek(8, 1)
-       # XXX _loadBack() ?
+       # _loadBack()?
    return pos, oid
...
...
Write
Preview
Markdown
is supported
0%
Try again
or
attach a new file
Attach a file
Cancel
You are about to add
0
people
to the discussion. Proceed with caution.
Finish editing this message first!
Cancel
Please
register
or
sign in
to comment