Commit eebae5a2 authored by Julien Muchembled

Merge pull request #54 from zopefoundation/deps-cleanup

Cleanup dependencies, remove ZEO leftovers
parents 6b707535 bf276846
@@ -16,7 +16,7 @@ use ZoneAlarm.
 Compatibility
 =============
-ZODB 4.0 requires Python 2.6, 2.7, 3.2, 3.3 or 3.4.
+ZODB 4.3 requires Python 2.7 or Python >= 3.3.
 Travis: |buildstatus|_
 winbot: |winbotstatus|_
@@ -27,11 +27,10 @@ Prerequisites
 You must have Python installed. If you're using a system Python
 install, make sure development support is installed too.
-You also need the transaction, BTrees, persistent, zc.lockfile,
-ZConfig, zdaemon, zope.event, zope.interface, zope.proxy and
-zope.testing packages. If you don't have them and you can connect to
-the Python Package Index, then these will be installed for you if you
-don't have them.
+You also need the transaction, BTrees, persistent, six, zc.lockfile, ZConfig,
+zodbpickle, zope.interface packages, and optionally manuel and zope.testing.
+If you don't have them and you can connect to the Python Package Index,
+then these will be installed for you if you don't have them.
 Installation
 ============
@@ -41,9 +40,8 @@ and install it are to use `easy_install
 <http://peak.telecommunity.com/DevCenter/EasyInstall>`_, or
 `zc.buildout <http://www.python.org/pypi/zc.buildout>`_.
-To install by hand, first install the dependencies, ZConfig, zdaemon,
-zope.interface, zope.proxy and zope.testing. These can be found
-in the `Python Package Index <http://www.python.org/pypi>`_.
+To install by hand, first install the dependencies listed in `Prerequisites`_.
+These can be found in the `Python Package Index <http://www.python.org/pypi>`_.
 To run the tests, use the test setup command::
...
===========================================
How to use NFS to make Blobs more efficient
===========================================
:Author: Christian Theune <ct@gocept.com>
Overview
========
When handling blobs, the main goal is to avoid write operations that
require the blob data to be transferred, using up I/O resources.
When bringing a blob into the system, at least one O(N) operation has to
happen, e.g. when the blob is uploaded via a network server. The blob should
be extracted as a file on the final storage volume as early as possible,
avoiding further copies.
In a ZEO setup, all data is stored on a networked server and passed to it
using zrpc. This is a major problem for handling blobs, because storing a
single large blob blocks all other transactions from committing. By default
this mechanism works, but it is not recommended for high-volume
installations.
Shared filesystem
=================
The solution to the transfer problem is to set up various storage parameters
so that blobs are always handled on a single volume that is shared via the
network between ZEO servers and clients.
Step 1: Setup a writable shared filesystem for ZEO server and client
--------------------------------------------------------------------
On the ZEO server, create two directories on the volume that will be used by
this setup (assume the volume is accessible via $SERVER/):
- $SERVER/blobs
- $SERVER/tmp
Then export the $SERVER directory using a shared network filesystem like NFS.
Make sure it's writable by the ZEO clients.
Assume the exported directory is available on the client as $CLIENT.
Step 2: Application temporary directories
-----------------------------------------
Applications (i.e. Zope) will put uploaded data in a temporary directory
first. Adjust your TMPDIR, TMP or TEMP environment variable to point to the
shared filesystem::
$ export TMPDIR=$CLIENT/tmp
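A quick way to check that the setting took effect, as a minimal Python
sketch (/mnt/client is a hypothetical stand-in for $CLIENT; the directory
must exist and be writable)::

    import os
    import tempfile

    os.environ["TMPDIR"] = "/mnt/client/tmp"  # hypothetical mount point
    tempfile.tempdir = None  # clear the cached value so TMPDIR is re-read
    print(tempfile.gettempdir())  # -> /mnt/client/tmp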
Step 3: ZEO client caches
-------------------------
Edit the file `zope.conf` on the ZEO client and adjust the configuration of
the `zeoclient` storage with two new options::
blob-dir = $CLIENT/blobs
blob-cache-writable = yes
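Once both options are in place, the storage can also be opened from Python;
a minimal sketch using ``ZODB.config.storageFromString``, with a
hypothetical path standing in for $CLIENT::

    import ZODB.config

    # /mnt/client is a hypothetical stand-in for the $CLIENT mount point.
    storage = ZODB.config.storageFromString("""
    <zeoclient>
      server zeo.example.com:8090
      blob-dir /mnt/client/blobs
      blob-cache-writable yes
    </zeoclient>
    """)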
Step 4: ZEO server
------------------
Edit the file `zeo.conf` on the ZEO server to configure the blob directory.
Assuming the published storage of the ZEO server is a file storage, the
configuration should look like this::

  <blobstorage 1>
    <filestorage>
      path $INSTANCE/var/Data.fs
    </filestorage>
    blob-dir $SERVER/blobs
  </blobstorage>
(Remember to manually replace $SERVER and $CLIENT with the exported directory
as accessible by either the ZEO server or the ZEO client.)
Conclusion
----------
At this point, after restarting your ZEO server and clients, the blob
directory will be shared and a minimum amount of I/O will occur when working
with blobs.
ZEO Client Cache Tracing
========================
An important question for ZEO users is: how large should the ZEO
client cache be? ZEO 2 (as of ZEO 2.0b2) can collect a trace of cache
activity, and provides tools to analyze this trace,
enabling you to make an informed decision about the cache size.
Don't confuse the ZEO client cache with the Zope object cache. The
ZEO client cache is only used when an object is not in the Zope object
cache; the ZEO client cache avoids roundtrips to the ZEO server.
Enabling Cache Tracing
----------------------
To enable cache tracing, you must use a persistent cache (specify a ``client``
name), and set the environment variable ZEO_CACHE_TRACE to a non-empty
value. The path to the trace file is derived from the path to the persistent
cache file by appending ".trace". If the file doesn't exist, ZEO will try to
create it. If the file does exist, it's opened for appending (previous trace
information is not overwritten). If there are problems with the file, a
warning message is logged. To start or stop tracing, the ZEO client process
(typically a Zope application server) must be restarted.
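For example, a minimal sketch of a trace-enabled client (the exact
ClientStorage signature varies across ZEO versions; the address and sizes
are illustrative)::

    import os
    from ZEO.ClientStorage import ClientStorage

    # The flag is consulted when the client cache is created, so set it
    # before constructing the storage.
    os.environ["ZEO_CACHE_TRACE"] = "1"

    # client= enables a persistent cache file (e.g. "trace-1.zec" for the
    # default storage name "1"); trace records are then appended to
    # "trace-1.zec.trace".
    storage = ClientStorage(("zeo.example.com", 8090),
                            client="trace",
                            cache_size=20 * 1024 * 1024)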
The trace file can grow pretty quickly; on a moderately loaded server, we
observed it growing by 7 MB per hour. The file consists of binary records,
each 34 bytes long if 8-byte oids are in use; a detailed description of the
record layout is given in stats.py. No sensitive data is logged: data
record sizes (but not data records), and binary object and transaction ids
are logged, but no object pickles, object types or names, user names,
transaction comments, access paths, or machine information (such as machine
name or IP address) are logged.
Analyzing a Cache Trace
-----------------------
The stats.py command-line tool is the first-line tool to analyze a cache
trace. Its default output consists of two parts: a one-line summary of
essential statistics for each segment of 15 minutes, interspersed with lines
indicating client restarts, followed by a more detailed summary of overall
statistics.
The most important statistic is the "hit rate", a percentage indicating how
many requests to load an object could be satisfied from the cache. Hit rates
around 70% are good. 90% is excellent. If you see a hit rate under 60% you
can probably improve the cache performance (and hence your Zope application
server's performance) by increasing the ZEO cache size. This is normally
configured using key ``cache_size`` in the ``zeoclient`` section of your
configuration file. The default cache size is 20 MB, which is small.
The stats.py tool shows its command line syntax when invoked without
arguments. The tracefile argument can be a gzipped file if it has a .gz
extension. It will be read from stdin (assuming uncompressed data) if the
tracefile argument is '-'.
Simulating Different Cache Sizes
--------------------------------
Based on a cache trace file, you can make a prediction of how well the cache
might do with a different cache size. The simul.py tool runs a simulation of
the ZEO client cache implementation based upon the events read from a trace
file. A new simulation is started each time the trace file records a client
restart event; if a trace file contains more than one restart event, a
separate line is printed for each simulation, and a line with overall
statistics is added at the end.
Example, assuming the trace file is in /tmp/cachetrace.log::
$ python simul.py -s 4 /tmp/cachetrace.log
CircularCacheSimulation, cache size 4,194,304 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 1429329 24046 41517 44.4% 40776 99.8
This shows that with a 4 MB cache size, the hit rate is 44.4%: 1429329
(the number of cache hits) as a percentage of 3218856 (the number of load
requests). The simulation performed 40776 evictions to make room for new object
states. At the end, 99.8% of the bytes reserved for the cache file were in
use to hold object state (the remaining 0.2% consists of "holes", bytes freed
by object eviction and not yet reused to hold another object's state).
Let's try this again with an 8 MB cache::
$ python simul.py -s 8 /tmp/cachetrace.log
CircularCacheSimulation, cache size 8,388,608 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 2182722 31315 41517 67.8% 40016 100.0
That's a huge improvement in hit rate, which isn't surprising since these are
very small cache sizes. The default cache size is 20 MB, which is still on
the small side::
$ python simul.py /tmp/cachetrace.log
CircularCacheSimulation, cache size 20,971,520 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 2982589 37922 41517 92.7% 37761 99.9
Again a very nice improvement in hit rate, and there's not a lot of room left
for improvement. Let's try 100 MB::
$ python simul.py -s 100 /tmp/cachetrace.log
CircularCacheSimulation, cache size 104,857,600 bytes
START TIME DURATION LOADS HITS INVALS WRITES HITRATE EVICTS INUSE
Jul 22 22:22 39:09 3218856 3218741 39572 41517 100.0% 22778 100.0
It's very unusual to see a hit rate so high. The application here frequently
modified a very large BTree, so given enough cache space to hold the entire
BTree it rarely needed to ask the ZEO server for data: this application
reused the same objects over and over.
More typical is that a substantial number of objects will be referenced only
once. Whenever an object turns out to be loaded only once, it's a pure loss
for the cache: the first (and only) load is a cache miss; storing the object
evicts other objects, possibly causing more cache misses; and the object is
never loaded again. If, for example, a third of the objects are loaded only
once, it's quite possible for the theoretical maximum hit rate to be 67%, no
matter how large the cache.
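The arithmetic behind that ceiling is simple; a sketch, assuming the
once-loaded objects account for a third of all load requests::

    def max_hit_rate(one_shot_load_fraction):
        # Loads of never-repeated objects are guaranteed misses, no matter
        # how large the cache; every other load could, at best, be a hit.
        return 100.0 * (1.0 - one_shot_load_fraction)

    print(max_hit_rate(1.0 / 3.0))  # ~66.7%, the 67% ceiling cited above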
The simul.py script also contains code to simulate different cache
strategies. Since none of these are implemented, and only the default cache
strategy's code has been updated to be aware of MVCC, these are not further
documented here.
Simulation Limitations
----------------------
The cache simulation is an approximation, and actual hit rate may be higher
or lower than the simulated result. These are some factors that inhibit
exact simulation:
- The simulator doesn't try to emulate versions. If the trace file contains
loads and stores of objects in versions, the simulator treats them as if
they were loads and stores of non-version data.
- Each time a load of an object O in the trace file was a cache hit, but the
simulated cache has evicted O, the simulated cache has no way to repair its
knowledge about O. This is more frequent when simulating caches smaller
than the cache used to produce the trace file. When a real cache suffers a
cache miss, it asks the ZEO server for the needed information about O, and
saves O in the client cache. The simulated cache doesn't have a ZEO server
to ask, and O continues to be absent in the simulated cache. Further
requests for O will continue to be simulated cache misses, although in a
real cache they'll likely be cache hits. On the other hand, the
simulated cache doesn't need to evict any objects to make room for O, so it
may enjoy further cache hits on objects a real cache would have evicted.
ZEO Client Cache
================
The client cache provides a disk-based cache for each ZEO client. The
client cache allows reads to be done from local disk rather than by remote
access to the storage server.
The cache may be persistent or transient. If the cache is persistent, then
the cache file is retained for use after process restarts. A non-persistent
cache uses a temporary file.
The client cache is managed in a single file, of the specified size.
The life of the cache is as follows:
- The cache file is opened (if it already exists), or created and set to
the specified size.
- Cache records are written to the cache file, as transactions commit
locally, and as data are loaded from the server.
- Writes are to "the current file position". This is a pointer that
travels around the file, circularly. After a record is written, the
pointer advances to just beyond it. Objects starting at the current
file position are evicted, as needed, to make room for the next record
written.
A distinct index file is not created, although indexing structures are
maintained in memory while a ClientStorage is running. When a persistent
client cache file is reopened, these indexing structures are recreated
by analyzing the file contents.
Persistent cache files are created in the directory named in the ``var``
argument to the ClientStorage, or if ``var`` is None, in the current
working directory. Persistent cache files have names of the form::
client-storage.zec
where:
client -- the client name, as given by the ClientStorage's ``client``
argument
storage -- the storage name, as given by the ClientStorage's ``storage``
argument; this is typically a string denoting a small integer,
"1" by default
For example, the cache file for client '8881' and storage 'spam' is named
"8881-spam.zec".
==========================
Running a ZEO Server HOWTO
==========================
Introduction
------------
ZEO (Zope Enterprise Objects) is a client-server system for sharing a
single storage among many clients. Normally, a ZODB storage can only
be used by a single process. When you use ZEO, the storage is opened
in the ZEO server process. Client programs connect to this process
using a ZEO ClientStorage. ZEO provides a consistent view of the
database to all clients. The ZEO client and server communicate using
a custom RPC protocol layered on top of TCP.
There are several configuration options that affect the behavior of a
ZEO server. This section describes how a few of these features
work. Subsequent sections describe how to configure every option.
Client cache
~~~~~~~~~~~~
Each ZEO client keeps an on-disk cache of recently used objects to
avoid fetching those objects from the server each time they are
requested. It is usually faster to read the objects from disk than it
is to fetch them over the network. The cache can also provide
read-only copies of objects during server outages.
The cache may be persistent or transient. If the cache is persistent,
then the cache files are retained for use after process restarts. A
non-persistent cache uses temporary files that are removed when the
client storage is closed.
The client cache size is configured when the ClientStorage is created.
The default size is 20MB, but the right size depends entirely on the
particular database. Setting the cache size too small can hurt
performance, but in most cases making it too big just wastes disk
space. The document "Client cache tracing" describes how to collect a
cache trace that can be used to determine a good cache size.
ZEO uses invalidations for cache consistency. Every time an object is
modified, the server sends a message to each client informing it of
the change. The client will discard the object from its cache when it
receives an invalidation. These invalidations are often batched.
Each time a client connects to a server, it must verify that its cache
contents are still valid: it may have missed invalidation messages
while it was disconnected. There are several mechanisms
used to perform cache verification. In the worst case, the client
sends the server a list of all objects in its cache along with their
timestamps; the server sends back an invalidation message for each
stale object. The cost of verification is one drawback to making the
cache too large.
Note that every time a client crashes or disconnects, it must verify
its cache. Every time a server crashes, all of its clients must
verify their caches.
The cache verification process is optimized in two ways to eliminate
costs when restarting clients and servers. Each client keeps the
timestamp of the last invalidation message it has seen. When it
connects to the server, it checks to see if any invalidation messages
were sent after that timestamp. If not, then the cache is up-to-date
and no further verification occurs. The other optimization is the
invalidation queue, described below.
Invalidation queue
~~~~~~~~~~~~~~~~~~
The ZEO server keeps a queue of recent invalidation messages in
memory. When a client connects to the server, it sends the timestamp
of the most recent invalidation message it has received. If that
message is still in the invalidation queue, then the server sends the
client all the missing invalidations. This is often cheaper than
performing full cache verification.
The default size of the invalidation queue is 100. If the
invalidation queue is larger, it will be more likely that a client
that reconnects will be able to verify its cache using the queue. On
the other hand, a large queue uses more memory on the server to store
the messages. Invalidation messages tend to be small, perhaps a few
hundred bytes each on average; it depends on the number of objects
modified by a transaction.
Transaction timeouts
~~~~~~~~~~~~~~~~~~~~
A ZEO server can be configured to timeout a transaction if it takes
too long to complete. Only a single transaction can commit at a time;
so if one transaction takes too long, all other clients will be
delayed waiting for it. In the extreme, a client can hang during the
commit process. If the client hangs, the server will be unable to
commit other transactions until it restarts. A well-behaved client
will not hang, but the server can be configured with a transaction
timeout to guard against bugs that cause a client to hang.
If any transaction exceeds the timeout threshold, the client's
connection to the server will be closed and the transaction aborted.
Once the transaction is aborted, the server can start processing other
clients' requests. Most transactions should take very little time to
commit. The timer begins for a transaction after all the data has
been sent to the server. At this point, the cost of commit should be
dominated by the cost of writing data to disk; it should be unusual
for a commit to take longer than 1 second. A transaction timeout of
30 seconds should tolerate heavy load and slow communications between
client and server, while guarding against hung servers.
When a transaction times out, the client can be left in an awkward
position. If the timeout occurs during the second phase of the two
phase commit, the client will log a panic message. This should only
cause problems if the client transaction involved multiple storages.
If it did, it is possible that some storages committed the client
changes and others did not.
Connection management
~~~~~~~~~~~~~~~~~~~~~
A ZEO client manages its connection to the ZEO server. If it loses
the connection, it attempts to reconnect. While
it is disconnected, it can satisfy some reads by using its cache.
The client can be configured to wait for a connection when it is created
or to return immediately and provide data from its persistent cache.
It usually simplifies programming to have the client wait for a
connection on startup.
When the client is disconnected, it polls periodically to see if the
server is available. The rate at which it polls is configurable.
The client can be configured with multiple server addresses. In this
case, it assumes that each server has identical content and will use
any server that is available. It is possible to configure the client
to accept a read-only connection to one of these servers if no
read-write connection is available. If it has a read-only connection,
it will continue to poll for a read-write connection. This feature
supports the Zope Replication Services product,
http://www.zope.com/Products/ZopeProducts/ZRS. In general, it could
be used to with a system that arranges to provide hot backups of
servers in the case of failure.
If a single address resolves to multiple IPv4 or IPv6 addresses,
the client will connect to an arbitrary of these addresses.
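A sketch of such a fallback-enabled client (the keyword mirrors the
``read-only-fallback`` configuration option described below; exact
constructor signatures vary between ZEO releases)::

    from ZEO.ClientStorage import ClientStorage

    # Two hypothetical servers assumed to serve identical content.
    addresses = [("zeo1.example.com", 8090), ("zeo2.example.com", 8090)]

    # Accept a read-only connection if no read-write server is available,
    # and keep polling for a read-write connection in the background.
    storage = ClientStorage(addresses, read_only_fallback=True)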
Authentication
~~~~~~~~~~~~~~
ZEO supports optional authentication of client and server using a
password scheme similar to HTTP digest authentication (RFC 2069). It
is a simple challenge-response protocol that does not send passwords
in the clear, but does not offer strong security. The RFC discusses
many of the limitations of this kind of protocol. Note that this
feature provides authentication only. It does not provide encryption
or confidentiality.
The challenge-response also produces a session key that is used to
generate message authentication codes for each ZEO message. This
should prevent session hijacking.
Guard the password database as if it contained plaintext passwords.
It stores the hash of a username and password. This does not expose
the plaintext password, but it is sensitive nonetheless. An attacker
with the hash can impersonate the real user. This is a limitation of
the simple digest scheme.
The authentication framework allows third-party developers to provide
new authentication modules.
Installing software
-------------------
ZEO is distributed as part of the ZODB3 package and with Zope,
starting with Zope 2.7. You can download it from
http://pypi.python.org/pypi/ZODB3.
Configuring server
------------------
The script runzeo.py runs the ZEO server. The server can be
configured using command-line arguments or a config file. This
document only describes the config file. Run runzeo.py
-h to see the list of command-line arguments.
The runzeo.py script imports the ZEO package. ZEO must either be
installed in Python's site-packages directory or be in a directory on
PYTHONPATH.
The configuration file specifies the underlying storage the server
uses, the address it binds, and a few other optional parameters.
An example is::
  <zeo>
    address zeo.example.com:8090
    monitor-address zeo.example.com:8091
  </zeo>

  <filestorage 1>
    path /var/tmp/Data.fs
  </filestorage>

  <eventlog>
    <logfile>
      path /var/tmp/zeo.log
      format %(asctime)s %(message)s
    </logfile>
  </eventlog>
This file configures a server to use a FileStorage from
/var/tmp/Data.fs. The server listens on port 8090 of zeo.example.com.
It also starts a monitor server that listens on port 8091. The ZEO
server writes its log file to /var/tmp/zeo.log and uses a custom
format for each line. Assuming the example configuration is stored in
zeo.config, you can run a server by typing::
python /usr/local/bin/runzeo.py -C zeo.config
A configuration file consists of a <zeo> section and a storage
section, where the storage section can use any of the valid ZODB
storage types. It may also contain an eventlog configuration. See
the document "Configuring a ZODB database" for more information about
configuring storages and eventlogs.
The zeo section must list the address. All the other keys are
optional.
address
The address at which the server should listen. This can be in
the form 'host:port' to signify a TCP/IP connection or a
pathname string to signify a Unix domain socket connection (at
least one '/' is required). A hostname may be a DNS name or a
dotted IP address. If the hostname is omitted, the platform's
default behavior is used when binding the listening socket (''
is passed to socket.bind() as the hostname portion of the
address).
read-only
Flag indicating whether the server should operate in read-only
mode. Defaults to false. Note that even if the server is
operating in writable mode, individual storages may still be
read-only. But if the server is in read-only mode, no write
operations are allowed, even if the storages are writable. Note
that pack() is considered a read-only operation.
invalidation-queue-size
The storage server keeps a queue of the objects modified by the
last N transactions, where N == invalidation_queue_size. This
queue is used to speed client cache verification when a client
disconnects for a short period of time.
monitor-address
The address at which the monitor server should listen. If
specified, a monitor server is started. The monitor server
provides server statistics in a simple text format. This can
be in the form 'host:port' to signify a TCP/IP connection or a
pathname string to signify a Unix domain socket connection (at
least one '/' is required). A hostname may be a DNS name or a
dotted IP address. If the hostname is omitted, the platform's
default behavior is used when binding the listening socket (''
is passed to socket.bind() as the hostname portion of the
address).
transaction-timeout
The maximum amount of time to wait for a transaction to commit
after acquiring the storage lock, specified in seconds. If the
transaction takes too long, the client connection will be closed
and the transaction aborted.
authentication-protocol
The name of the protocol used for authentication. The
only protocol provided with ZEO is "digest," but extensions
may provide other protocols.
authentication-database
The path of the database containing authentication credentials.
authentication-realm
The authentication realm of the server. Some authentication
schemes use a realm to identify the logical set of usernames
that are accepted by this server.
Configuring clients
-------------------
The ZEO client can also be configured using ZConfig. The ZODB.config
module provides several functions for opening a storage based on its
configuration.
- ZODB.config.storageFromString()
- ZODB.config.storageFromFile()
- ZODB.config.storageFromURL()
The ZEO client configuration requires the server address be
specified. Everything else is optional. An example configuration is::
<zeoclient>
server zeo.example.com:8090
</zeoclient>
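The same configuration can be loaded programmatically; a minimal sketch
using one of the functions listed above::

    import ZODB.config

    storage = ZODB.config.storageFromString("""
    <zeoclient>
      server zeo.example.com:8090
    </zeoclient>
    """)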
The other configuration options are listed below.
storage
The name of the storage that the client wants to use. If the
ZEO server serves more than one storage, the client selects
the storage it wants to use by name. The default name is '1',
which is also the default name for the ZEO server.
cache-size
The maximum size of the client cache, in bytes.
name
The storage name. If unspecified, the address of the server
will be used as the name.
client
Enables persistent cache files. The string passed here is
used to construct the cache filenames. If it is not
specified, the client creates a temporary cache that will
only be used by the current ClientStorage instance.
var
The directory where persistent cache files are stored. By
default cache files, if they are persistent, are stored in
the current directory.
min-disconnect-poll
The minimum delay, in seconds, between attempts to connect to
the server. Defaults to 5 seconds.
max-disconnect-poll
The maximum delay, in seconds, between attempts to connect to
the server. Defaults to 300 seconds.
wait
A boolean indicating whether the constructor should wait
for the client to connect to the server and verify the cache
before returning. The default is true.
read-only
A flag indicating whether this should be a read-only storage,
defaulting to false (i.e. writing is allowed by default).
read-only-fallback
A flag indicating whether a read-only remote storage should be
acceptable as a fallback when no writable storages are
available. Defaults to false. At most one of read_only and
read_only_fallback should be true.
realm
The authentication realm of the server. Some authentication
schemes use a realm to identify the logical set of usernames
that are accepted by this server.
A ZEO client can also be created by calling the ClientStorage
constructor explicitly. For example::
from ZEO.ClientStorage import ClientStorage
storage = ClientStorage(("zeo.example.com", 8090))
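To go from a storage to a usable database, the standard ZODB pattern
applies; for example::

    from ZEO.ClientStorage import ClientStorage
    from ZODB import DB

    storage = ClientStorage(("zeo.example.com", 8090))
    db = DB(storage)
    connection = db.open()
    root = connection.root()  # the persistent root mapping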
Running the ZEO server as a daemon
----------------------------------
In an operational setting, you will want to run the ZEO server as a
daemon process that is restarted when it dies. The zdaemon package
provides two tools for running daemons: zdrun.py and zdctl.py. You can
find zdaemon and its documentation at
http://pypi.python.org/pypi/zdaemon.
Rotating log files
~~~~~~~~~~~~~~~~~~
ZEO will re-initialize its logging subsystem when it receives a
SIGUSR2 signal. If you are using the standard event logger, you
should first rename the log file and then send the signal to the
server. The server will continue writing to the renamed log file
until it receives the signal. After it receives the signal, the
server will create a new file with the old name and write to it.
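The rename-then-signal sequence can be scripted; a minimal sketch,
assuming a hypothetical pid file::

    import os
    import signal

    # Rename first; the server keeps writing to the renamed file until it
    # handles SIGUSR2, then reopens a fresh file under the old name.
    os.rename("/var/tmp/zeo.log", "/var/tmp/zeo.log.1")

    pid = int(open("/var/tmp/zeo.pid").read())  # hypothetical pid file
    os.kill(pid, signal.SIGUSR2)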
Tools
-----
There are a few scripts that may help running a ZEO server. The
zeopack.py script connects to a server and packs the storage. It can
be run as a cron job. The zeoup.py script attempts to connect to a
ZEO server and verify that it is functioning. The zeopasswd.py script
manages a ZEO server's password database.
Diagnosing problems
-------------------
If an exception occurs on the server, the server will log a traceback
and send an exception to the client. The traceback on the client will
show the ZEO protocol library as the source of the error. If you need
to diagnose the problem, you will have to look in the server log for
the rest of the traceback.
@@ -160,7 +160,6 @@ setup(name="ZODB",
         'transaction >= 1.4.4',
         'six',
         'zc.lockfile',
-        'zdaemon >= 4.0.0a1',
         'zope.interface',
         'zodbpickle >= 0.6.0',
     ],
...