Commit 81f586c4 authored by Jeremy Hylton

Merge ZEO2-branch to trunk.

parent d09403de
...@@ -10,20 +10,23 @@ ClientStorage
Creating a ClientStorage

The ClientStorage requires at least one argument, the address or
addresses of the server(s) to use.  It accepts several other
optional keyword arguments.

The address argument can be one of:

- a tuple containing hostname and port number

- a string specifying the path to a Unix domain socket

- a sequence of the previous two

If a sequence of addresses is specified, the client will use the
first server from the list that it can connect to.

The ClientStorage constructor provides a number of additional
options (arguments).  The full list of arguments is:

storage -- The name of the storage to connect to.
...@@ -33,7 +36,9 @@ ClientStorage
default name for both the server and client is '1'.
cache_size -- The number of bytes to allow for the client cache.
The default is 20,000,000.  A large cache can significantly
increase the performance of a ZEO system.  For applications that
have a large database, the default size may be too small.
For more information on client caches, see ClientCache.txt.
...@@ -54,10 +59,6 @@ ClientStorage
For more information on client cache files, see ClientCache.txt.
debug -- If this is provided, it should be a non-empty string.  It
indicates that the client should log tracing and debugging
information, using zLOG.
var -- The directory in which persistent cache files should be
written.  If this option is provided, it is unnecessary to
set INSTANCE_HOME in __builtins__.
...@@ -82,6 +83,13 @@ ClientStorage
The default is 300 seconds.
wait -- Indicate whether the ClientStorage should block waiting
for a storage server connection, or whether it should proceed,
satisfying reads from the client cache.
read_only -- Open a read-only connection to the server. If the
client attempts to commit a transaction, it will get a
ReadOnlyError exception.
Each storage served by a ZEO server can be configured as either
read-write or read-only.
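As an aside, the address forms described above can be told apart mechanically. The helper below is a hypothetical sketch (written for modern Python 3, not part of ZEO) of the dispatch a client might perform on its address argument:

```python
import socket

def classify_address(addr):
    """Classify a ZEO-style server address (illustrative only)."""
    # A (host, port) tuple names a TCP server.
    if isinstance(addr, tuple) and len(addr) == 2:
        return socket.AF_INET
    # A string names a Unix domain socket file.
    if isinstance(addr, str):
        return socket.AF_UNIX
    # Otherwise assume a sequence of the above; a client would try
    # each address in turn and use the first that accepts a connection.
    return [classify_address(a) for a in addr]

assert classify_address(('localhost', 8100)) == socket.AF_INET
assert classify_address('/tmp/zeo.sock') == socket.AF_UNIX
```

The names and the fallback behavior here are assumptions for illustration; the real client also handles connection retries and failover, which this sketch omits.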
Zope Enterprize Objects
Installation

ZEO 2.0 requires Python 2.1 or higher when used without Zope.  If
you use Python 2.1, we recommend the latest minor release (2.1.3 as
of this writing) because it includes a few bug fixes that affect
ZEO.
ZEO is packaged with distutils.  To install it, run this command
from the top-level ZEO directory::
python setup.py install
The setup script will install the ZEO package in your Python
site-packages directory.
You can test ZEO before installing it with the test script::
    python test.py -v

Run the script with the -h option for a full list of options.  The
ZEO 2.0a1 release contains 87 unit tests on Unix.

Starting (and configuring) the ZEO Server

To start the storage server, go to your Zope install directory and
run::

    python lib/python/ZEO/start.py -p port_number

This runs the storage server under zdaemon.  zdaemon automatically
restarts programs that exit unexpectedly.
The server and the client don't have to be on the same machine.
If they are on the same machine, then you can use a Unix domain
socket::

    python lib/python/ZEO/start.py -U filename
The start script provides a number of options not documented here.
See doc/start.txt for more information.
Running a ZEO client

In your application, create a ClientStorage, rather than, say, a
FileStorage::

    import ZODB
    from ZEO.ClientStorage import ClientStorage
    Storage = ClientStorage(('', port_number))
    db = ZODB.DB(Storage)
You can specify a host name (rather than '') if you want.  The port
number is, of course, the port number used to start the storage
...@@ -43,38 +57,24 @@ Zope Enterprize Objects
You can also give the name of a Unix domain socket file::

    import ZODB
    from ZEO.ClientStorage import ClientStorage
    Storage = ClientStorage(filename)
    db = ZODB.DB(Storage)
There are a number of configuration options available for the
ClientStorage.  See ClientStorage.txt for details.
If you want a persistent client cache which retains cache contents
across ClientStorage restarts, you need to define the environment
variable, ZEO_CLIENT, or set the client keyword argument to the
constructor to a unique name for the client.  This is needed so
that unique cache name files can be computed.  Otherwise, the
client cache is stored in temporary files which are removed when
the ClientStorage shuts down.
Dependencies on other modules
ZEO depends on other modules that are distributed with
StandaloneZODB and with Zope.  You can download StandaloneZODB
from http://www.zope.org/Products/StandaloneZODB.

- The module ThreadedAsync must be on the Python path.

- The zdaemon module is necessary if you want to run your
  storage server as a daemon that automatically restarts itself
  if there is a fatal error.

- The zLOG module provides a handy logging capability.

If you are using a version of Python before Python 2:

- ZServer should be in the Python path, or you should copy the
  version of asyncore.py from ZServer (from Zope 2.2 or CVS) to
  your Python path, or you should copy a version of asyncore
  from the medusa CVS tree to your Python path.  A recent change
  in asyncore is required.

- The version of cPickle from Zope, or from the python.org CVS
  tree must be used.  It has a hook to provide control over which
  "global objects" (e.g. classes) may be pickled.
...@@ -2,30 +2,38 @@ Zope Enterprise Objects
Installation
ZEO 2.0 requires Zope 2.4 or higher and Python 2.1 or higher.
If you use Python 2.1, we recommend the latest minor release
(2.1.3 as of this writing) because it includes a few bug fixes
that affect ZEO.
Put the package (the ZEO directory, without any wrapping directory
included in a distribution) in your Zope lib/python.
The setup.py script in the top-level ZEO directory can also be
used.  Run "python setup.py install --home=ZOPE" where ZOPE is the
top-level Zope directory.

You can test ZEO before installing it with the test script::
python test.py -v
Run the script with the -h option for a full list of options. The
ZEO 2.0a1 release contains 87 unit tests on Unix.
Starting (and configuring) the ZEO Server
To start the storage server, go to your Zope install directory and
run::
    python lib/python/ZEO/start.py -p port_number
This runs the storage server under zdaemon.  zdaemon automatically
restarts programs that exit unexpectedly.
The server and the client don't have to be on the same machine.
If they are on the same machine, then you can use a Unix domain
socket::
    python lib/python/ZEO/start.py -U filename
...@@ -38,10 +46,8 @@ Zope Enterprise Objects
custom_zodb.py, in your Zope install directory, so that Zope uses a
ClientStorage::

    from ZEO.ClientStorage import ClientStorage
    Storage = ClientStorage(('', port_number))
(See the misc/custom_zodb.py for an example.)
You can specify a host name (rather than '') if you want.  The port
number is, of course, the port number used to start the storage
...@@ -49,19 +55,20 @@ Zope Enterprise Objects
You can also give the name of a Unix domain socket file::

    from ZEO.ClientStorage import ClientStorage
    Storage = ClientStorage(filename)
There are a number of configuration options available for the
ClientStorage.  See doc/ClientStorage.txt for details.
If you want a persistent client cache which retains cache contents
across ClientStorage restarts, you need to define the environment
variable, ZEO_CLIENT, or set the client keyword argument to the
constructor to a unique name for the client.  This is needed so
that unique cache name files can be computed.  Otherwise, the
client cache is stored in temporary files which are removed when
the ClientStorage shuts down.  For example, to start two Zope
processes with unique caches, use something like::
    python z2.py -P8700 ZEO_CLIENT=8700
    python z2.py -P8800 ZEO_CLIENT=8800
...@@ -74,9 +81,8 @@ Zope Enterprise Objects
different clients have different software installed, the correct
state of the database is ambiguous.
Zope will not modify the Zope database during product installation
if the environment variable ZEO_CLIENT is set.
Normally, Zope ZEO clients should be run with ZEO_CLIENT set so
that product initialization is not performed.
...
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Stub for interface exported by ClientStorage"""
class ClientStorage:

    def __init__(self, rpc):
        self.rpc = rpc

    def beginVerify(self):
        self.rpc.callAsync('begin')

    # XXX must rename the two invalidate messages.  I can never
    # remember which is which
    def invalidate(self, args):
        self.rpc.callAsync('invalidate', args)

    def Invalidate(self, args):
        self.rpc.callAsync('Invalidate', args)

    def endVerify(self):
        self.rpc.callAsync('end')

    def serialnos(self, arg):
        self.rpc.callAsync('serialnos', arg)

    def info(self, arg):
        self.rpc.callAsync('info', arg)
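The stub above simply forwards each method to a generic asynchronous call on the RPC connection. A hypothetical Python 3 sketch of the same pattern, with a fake connection that records calls instead of sending them (FakeRPC and ClientStorageStub are invented here for illustration):

```python
class FakeRPC:
    """Records async calls instead of sending them over a connection."""
    def __init__(self):
        self.calls = []

    def callAsync(self, name, *args):
        self.calls.append((name, args))

class ClientStorageStub:
    # Same shape as the stub above: every method is one callAsync.
    def __init__(self, rpc):
        self.rpc = rpc

    def beginVerify(self):
        self.rpc.callAsync('begin')

    def invalidate(self, args):
        self.rpc.callAsync('invalidate', args)

rpc = FakeRPC()
stub = ClientStorageStub(rpc)
stub.beginVerify()
stub.invalidate([(b'oid', '')])
assert rpc.calls[0] == ('begin', ())
```

Substituting a recorder for the connection like this is also how such stubs can be unit-tested without a live server.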
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Log a transaction's commit info during two-phase commit.
A storage server allows multiple clients to commit transactions, but
must serialize them as the actually execute at the server. The
concurrent commits are achieved by logging actions up until the
tpc_vote(). At that point, the entire transaction is committed on the
real storage.
"""
import cPickle
import tempfile
class CommitLog:

    def __init__(self):
        self.file = tempfile.TemporaryFile(suffix=".log")
        self.pickler = cPickle.Pickler(self.file, 1)
        self.pickler.fast = 1
        self.stores = 0
        self.read = 0

    def tpc_begin(self, t, tid, status):
        self.t = t
        self.tid = tid
        self.status = status

    def store(self, oid, serial, data, version):
        self.pickler.dump((oid, serial, data, version))
        self.stores += 1

    def get_loader(self):
        self.read = 1
        self.file.seek(0)
        return self.stores, cPickle.Unpickler(self.file)
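The log-then-replay cycle can be sketched in modern Python 3 (pickle in place of the old cPickle); this simplified stand-in keeps only the store/get_loader core of the class above and is not ZEO's actual implementation:

```python
import pickle
import tempfile

class CommitLog:
    """Buffer store calls in a temp file until the transaction votes."""
    def __init__(self):
        self.file = tempfile.TemporaryFile(suffix=".log")
        self.pickler = pickle.Pickler(self.file, 1)
        self.stores = 0

    def store(self, oid, serial, data, version):
        # Append one store record to the log.
        self.pickler.dump((oid, serial, data, version))
        self.stores += 1

    def get_loader(self):
        # Rewind and hand back a count plus an unpickler for replay.
        self.file.seek(0)
        return self.stores, pickle.Unpickler(self.file)

log = CommitLog()
log.store(b'oid1', b'serial1', b'data1', '')
log.store(b'oid2', b'serial2', b'data2', '')
count, loader = log.get_loader()
replayed = [loader.load() for _ in range(count)]
assert replayed[0][0] == b'oid1'
```

At tpc_vote() time the server would iterate the loader and apply each record to the real storage in order.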
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Exceptions for ZEO."""
class Disconnected(Exception):
    """Exception raised when a ZEO client is disconnected from the
    ZEO server."""

try:
    from Interface import Base
except ImportError:
    class Base:
        # a dummy interface for use when Zope's is unavailable
        pass
class ICache(Base):
    """ZEO client cache.

    __init__(storage, size, client, var)

    All arguments optional.

    storage -- name of storage
    size -- max size of cache in bytes
    client -- a string; if specified, cache is persistent.
    var -- var directory to store cache files in
    """

    def open():
        """Returns a sequence of object info tuples.

        An object info tuple is a pair containing an object id and a
        pair of serialnos, a non-version serialno and a version serialno:
        oid, (serial, ver_serial)

        This method builds an index of the cache and returns a
        sequence used for cache validation.
        """

    def close():
        """Closes the cache."""

    def verify(func):
        """Call func on every object in cache.

        func is called with three arguments
        func(oid, serial, ver_serial)
        """

    def invalidate(oid, version):
        """Remove object from cache."""

    def load(oid, version):
        """Load object from cache.

        Return None if object not in cache.
        Return data, serialno if object is in cache.
        """

    def store(oid, p, s, version, pv, sv):
        """Store a new object in the cache."""

    def update(oid, serial, version, data):
        """Update an object already in the cache.

        XXX This method is called to update objects that were modified by
        a transaction.  It's likely that it is already in the cache,
        and it may be possible for the implementation to operate more
        efficiently.
        """

    def modifiedInVersion(oid):
        """Return the version an object is modified in.

        '' signifies the trunk.
        Returns None if the object is not in the cache.
        """

    def checkSize(size):
        """Check if adding size bytes would exceed cache limit.

        This method is often called just before store or update.  The
        size is a hint about the amount of data that is about to be
        stored.  The cache may want to evict some data to make space.
        """
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Stub for interface exposed by StorageServer"""
class StorageServer:

    def __init__(self, rpc):
        self.rpc = rpc

    def register(self, storage_name, read_only):
        self.rpc.call('register', storage_name, read_only)

    def get_info(self):
        return self.rpc.call('get_info')

    def get_size_info(self):
        return self.rpc.call('get_size_info')

    def beginZeoVerify(self):
        self.rpc.callAsync('beginZeoVerify')

    def zeoVerify(self, oid, s, sv):
        self.rpc.callAsync('zeoVerify', oid, s, sv)

    def endZeoVerify(self):
        self.rpc.callAsync('endZeoVerify')

    def new_oids(self, n=None):
        if n is None:
            return self.rpc.call('new_oids')
        else:
            return self.rpc.call('new_oids', n)

    def pack(self, t, wait=None):
        if wait is None:
            self.rpc.call('pack', t)
        else:
            self.rpc.call('pack', t, wait)

    def zeoLoad(self, oid):
        return self.rpc.call('zeoLoad', oid)

    def storea(self, oid, serial, data, version, id):
        self.rpc.callAsync('storea', oid, serial, data, version, id)

    def tpc_begin(self, id, user, descr, ext, tid, status):
        return self.rpc.call('tpc_begin', id, user, descr, ext, tid, status)

    def vote(self, trans_id):
        return self.rpc.call('vote', trans_id)

    def tpc_finish(self, id):
        return self.rpc.call('tpc_finish', id)

    def tpc_abort(self, id):
        self.rpc.callAsync('tpc_abort', id)

    def abortVersion(self, src, id):
        return self.rpc.call('abortVersion', src, id)

    def commitVersion(self, src, dest, id):
        return self.rpc.call('commitVersion', src, dest, id)

    def history(self, oid, version, length=None):
        # pass length along only when it was actually given
        if length is None:
            return self.rpc.call('history', oid, version)
        else:
            return self.rpc.call('history', oid, version, length)

    def load(self, oid, version):
        return self.rpc.call('load', oid, version)

    def loadSerial(self, oid, serial):
        return self.rpc.call('loadSerial', oid, serial)

    def modifiedInVersion(self, oid):
        return self.rpc.call('modifiedInVersion', oid)

    def new_oid(self, last=None):
        if last is None:
            return self.rpc.call('new_oid')
        else:
            return self.rpc.call('new_oid', last)

    def store(self, oid, serial, data, version, trans):
        return self.rpc.call('store', oid, serial, data, version, trans)

    def transactionalUndo(self, trans_id, trans):
        return self.rpc.call('transactionalUndo', trans_id, trans)

    def undo(self, trans_id):
        return self.rpc.call('undo', trans_id)

    def undoLog(self, first, last):
        # XXX filter not allowed across RPC
        return self.rpc.call('undoLog', first, last)

    def undoInfo(self, first, last, spec):
        return self.rpc.call('undoInfo', first, last, spec)

    def versionEmpty(self, vers):
        return self.rpc.call('versionEmpty', vers)

    def versions(self, max=None):
        if max is None:
            return self.rpc.call('versions')
        else:
            return self.rpc.call('versions', max)
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""A TransactionBuffer store transaction updates until commit or abort.
A transaction may generate enough data that it is not practical to
always hold pending updates in memory. Instead, a TransactionBuffer
is used to store the data until a commit or abort.
"""
# A faster implementation might store trans data in memory until it
# reaches a certain size.
import tempfile
import cPickle
class TransactionBuffer:

    def __init__(self):
        self.file = tempfile.TemporaryFile(suffix=".tbuf")
        self.count = 0
        self.size = 0
        # It's safe to use a fast pickler because the only objects
        # stored are builtin types -- strings or None.
        self.pickler = cPickle.Pickler(self.file, 1)
        self.pickler.fast = 1

    def close(self):
        try:
            self.file.close()
        except OSError:
            pass

    def store(self, oid, version, data):
        """Store oid, version, data for later retrieval"""
        self.pickler.dump((oid, version, data))
        self.count += 1
        # Estimate per-record cache size
        self.size = self.size + len(data) + (27 + 12)
        if version:
            self.size = self.size + len(version) + 4

    def invalidate(self, oid, version):
        self.pickler.dump((oid, version, None))
        self.count += 1

    def clear(self):
        """Mark the buffer as empty"""
        self.file.seek(0)
        self.count = 0
        self.size = 0

    # unchecked constraints:
    # 1. can't call store() after begin_iterate()
    # 2. must call clear() after iteration finishes

    def begin_iterate(self):
        """Move the file pointer in advance of iteration"""
        self.file.flush()
        self.file.seek(0)
        self.unpickler = cPickle.Unpickler(self.file)

    def next(self):
        """Return next tuple of data or None if EOF"""
        if self.count == 0:
            del self.unpickler
            return None
        oid_ver_data = self.unpickler.load()
        self.count -= 1
        return oid_ver_data

    def get_size(self):
        """Return size of data stored in buffer (just a hint)."""
        return self.size
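The store-then-iterate cycle above can be sketched in modern Python 3 (pickle in place of cPickle, a generator in place of begin_iterate/next, size accounting omitted); TxnBuffer is a simplified stand-in, not ZEO's class:

```python
import pickle
import tempfile

class TxnBuffer:
    """Spill pending transaction records to a temp file, then replay them."""
    def __init__(self):
        self.file = tempfile.TemporaryFile(suffix=".tbuf")
        self.pickler = pickle.Pickler(self.file, 1)
        self.count = 0

    def store(self, oid, version, data):
        self.pickler.dump((oid, version, data))
        self.count += 1

    def invalidate(self, oid, version):
        # None data marks an invalidation record.
        self.pickler.dump((oid, version, None))
        self.count += 1

    def __iter__(self):
        # Rewind and replay every buffered record in order.
        self.file.flush()
        self.file.seek(0)
        unpickler = pickle.Unpickler(self.file)
        for _ in range(self.count):
            yield unpickler.load()

buf = TxnBuffer()
buf.store(b'oid1', '', b'data1')
buf.invalidate(b'oid2', '')
assert list(buf) == [(b'oid1', '', b'data1'), (b'oid2', '', None)]
```

As in the original, spooling to a temporary file keeps memory use bounded no matter how large the transaction grows.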
...@@ -11,5 +11,3 @@
# FOR A PARTICULAR PURPOSE
#
##############################################################################
import fap
...@@ -14,11 +14,14 @@
"""Sized message async connections """Sized message async connections
""" """
__version__ = "$Revision: 1.16 $"[11:-2] __version__ = "$Revision: 1.17 $"[11:-2]
import asyncore, struct
from Exceptions import Disconnected
from zLOG import LOG, TRACE, ERROR, INFO, BLATHER
from types import StringType
import asyncore, string, struct, zLOG, sys, Acquisition
import socket, errno
from logger import zLogger
# Use the dictionary to make sure we get the minimum number of errno
# entries.  We expect that EWOULDBLOCK == EAGAIN on most systems --
...@@ -38,81 +41,103 @@ tmp_dict = {errno.EAGAIN: 0,
expected_socket_write_errors = tuple(tmp_dict.keys())
del tmp_dict
class SizedMessageAsyncConnection(asyncore.dispatcher):

    __super_init = asyncore.dispatcher.__init__
    __super_close = asyncore.dispatcher.close

    __closed = 1 # Marker indicating that we're closed

    socket = None # to outwit Sam's getattr

    READ_SIZE = 8096

    def __init__(self, sock, addr, map=None, debug=None):
        self.addr = addr
        if debug is not None:
            self._debug = debug
        elif not hasattr(self, '_debug'):
            self._debug = __debug__ and 'smac'
        self.__state = None
        self.__inp = None # None, a single String, or a list
        self.__input_len = 0
        self.__msg_size = 4
        self.__output = []
        self.__closed = None
        self.__super_init(sock, map)

    # XXX avoid expensive getattr calls?  Can't remember exactly what
    # this comment was supposed to mean, but it has something to do
    # with the way asyncore uses getattr and uses if sock:
    def __nonzero__(self):
        return 1

    def handle_read(self):
        # Use a single __inp buffer and integer indexes to make this
        # fast.
        try:
            d = self.recv(8096)
        except socket.error, err:
            if err[0] in expected_socket_read_errors:
                return
            raise
        if not d:
            return

        input_len = self.__input_len + len(d)
        msg_size = self.__msg_size
        state = self.__state

        inp = self.__inp
        if msg_size > input_len:
            if inp is None:
                self.__inp = d
            elif type(self.__inp) is StringType:
                self.__inp = [self.__inp, d]
            else:
                self.__inp.append(d)
            self.__input_len = input_len
            return # keep waiting for more input

        # load all previous input and d into single string inp
        if isinstance(inp, StringType):
            inp = inp + d
        elif inp is None:
            inp = d
        else:
            inp.append(d)
            inp = "".join(inp)

        offset = 0
        while (offset + msg_size) <= input_len:
            msg = inp[offset:offset + msg_size]
            offset = offset + msg_size
            if state is None:
                # waiting for message
                msg_size = struct.unpack(">i", msg)[0]
                state = 1
            else:
                msg_size = 4
                state = None
                self.message_input(msg)

        self.__state = state
        self.__msg_size = msg_size
        self.__inp = inp[offset:]
        self.__input_len = input_len - offset

    def readable(self):
        return 1

    def writable(self):
        if len(self.__output) == 0:
            return 0
        else:
            return 1

    def handle_write(self):
        output = self.__output
        while output:
            v = output[0]
            try:
                n = self.send(v)
            except socket.error, err:
...@@ -120,37 +145,33 @@ class SizedMessageAsyncConnection(Acquisition.Explicit, asyncore.dispatcher):
                    break # we couldn't write anything
                raise
            if n < len(v):
                output[0] = v[n:]
                break # we can't write any more
            else:
                del output[0]

    def handle_close(self):
        self.close()

    def message_output(self, message):
        if __debug__:
            if self._debug:
                if len(message) > 40:
                    m = message[:40] + ' ...'
                else:
                    m = message
                LOG(self._debug, TRACE, 'message_output %s' % `m`)

        if self.__closed is not None:
            raise Disconnected, (
                "This action is temporarily unavailable."
                "<p>"
                )
        # do two separate appends to avoid copying the message string
        self.__output.append(struct.pack(">i", len(message)))
        self.__output.append(message)

    def close(self):
        if self.__closed is None:
            self.__closed = 1
            self.__super_close()
class Disconnected(Exception):
    """The client has become disconnected from the server
    """
...@@ -11,21 +11,23 @@
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Start the server storage. """Start the server storage.
""" """
__version__ = "$Revision: 1.32 $"[11:-2] __version__ = "$Revision: 1.33 $"[11:-2]
import sys, os, getopt, string
import StorageServer
import asyncore
def directory(p, n=1): def directory(p, n=1):
d=p d=p
while n: while n:
d=os.path.split(d)[0] d=os.path.split(d)[0]
if not d or d=='.': d=os.getcwd() if not d or d=='.': d=os.getcwd()
n=n-1 n=n-1
return d return d
def get_storage(m, n, cache={}): def get_storage(m, n, cache={}):
...@@ -44,9 +46,11 @@ def get_storage(m, n, cache={}): ...@@ -44,9 +46,11 @@ def get_storage(m, n, cache={}):
def main(argv): def main(argv):
me=argv[0] me=argv[0]
sys.path[:]==filter(None, sys.path)
sys.path.insert(0, directory(me, 2)) sys.path.insert(0, directory(me, 2))
# XXX hack for profiling support
global unix, storages, zeo_pid, asyncore
args=[] args=[]
last='' last=''
for a in argv[1:]: for a in argv[1:]:
...@@ -77,23 +81,22 @@ def main(argv): ...@@ -77,23 +81,22 @@ def main(argv):
fs = os.path.join(var, 'Data.fs') fs = os.path.join(var, 'Data.fs')
usage = """%s [options] [filename] usage="""%s [options] [filename]
where options are: where options are:
-D -- Run in debug mode -D -- Run in debug mode
-d -- Generate detailed debug logging without running -d -- Set STUPID_LOG_SEVERITY to -300
in the foreground.
-U -- Unix-domain socket file to listen on -U -- Unix-domain socket file to listen on
-u username or uid number -u username or uid number
The username to run the ZEO server as. You may want to run The username to run the ZEO server as. You may want to run
the ZEO server as 'nobody' or some other user with limited the ZEO server as 'nobody' or some other user with limited
resources. This only works under Unix, and if the storage resources. This only works under Unix, and if ZServer is
server is started by root. started by root.
-p port -- port to listen on -p port -- port to listen on
...@@ -116,30 +119,47 @@ def main(argv): ...@@ -116,30 +119,47 @@ def main(argv):
attr_name -- This is the name to which the storage object attr_name -- This is the name to which the storage object
is assigned in the module. is assigned in the module.
-P file -- Run under profile and dump output to file. Implies the
-s flag.
if no file name is specified, then %s is used. if no file name is specified, then %s is used.
""" % (me, fs) """ % (me, fs)
try: try:
opts, args = getopt.getopt(args, 'p:Ddh:U:sS:u:') opts, args = getopt.getopt(args, 'p:Dh:U:sS:u:P:d')
except getopt.error, err: except getopt.error, msg:
print err
print usage print usage
print msg
sys.exit(1) sys.exit(1)
port=None port = None
debug=detailed=0 debug = 0
host='' host = ''
unix=None unix = None
Z=1 Z = 1
UID='nobody' UID = 'nobody'
prof = None
detailed = 0
for o, v in opts: for o, v in opts:
if o=='-p': port=string.atoi(v) if o=='-p':
elif o=='-h': host=v port = int(v)
elif o=='-U': unix=v elif o=='-h':
elif o=='-u': UID=v host = v
elif o=='-D': debug=1 elif o=='-U':
elif o=='-d': detailed=1 unix = v
elif o=='-s': Z=0 elif o=='-u':
UID = v
elif o=='-D':
debug = 1
elif o=='-d':
detailed = 1
elif o=='-s':
Z = 0
elif o=='-P':
prof = v
if prof:
Z = 0
if port is None and unix is None: if port is None and unix is None:
print usage print usage
...@@ -153,14 +173,16 @@ def main(argv): ...@@ -153,14 +173,16 @@ def main(argv):
sys.exit(1) sys.exit(1)
fs=args[0] fs=args[0]
if debug: os.environ['Z_DEBUG_MODE']='1' __builtins__.__debug__=debug
if debug:
if detailed: os.environ['STUPID_LOG_SEVERITY']='-99999' os.environ['Z_DEBUG_MODE'] = '1'
if detailed:
os.environ['STUPID_LOG_SEVERITY'] = '-300'
from zLOG import LOG, INFO, ERROR from zLOG import LOG, INFO, ERROR
# Try to set uid to "-u" -provided uid. # Try to set uid to "-u" -provided uid.
# Try to set gid to "-u" user's primary group. # Try to set gid to "-u" user's primary group.
# This will only work if this script is run by root. # This will only work if this script is run by root.
try: try:
import pwd import pwd
...@@ -175,7 +197,7 @@ def main(argv): ...@@ -175,7 +197,7 @@ def main(argv):
uid = pwd.getpwuid(UID)[2] uid = pwd.getpwuid(UID)[2]
gid = pwd.getpwuid(UID)[3] gid = pwd.getpwuid(UID)[3]
else: else:
raise KeyError raise KeyError
try: try:
if gid is not None: if gid is not None:
try: try:
...@@ -200,7 +222,7 @@ def main(argv): ...@@ -200,7 +222,7 @@ def main(argv):
try: try:
import ZEO.StorageServer, asyncore import ZEO.StorageServer, asyncore
storages={} storages={}
for o, v in opts: for o, v in opts:
if o=='-S': if o=='-S':
...@@ -243,15 +265,15 @@ def main(argv): ...@@ -243,15 +265,15 @@ def main(argv):
if not unix: unix=host, port if not unix: unix=host, port
ZEO.StorageServer.StorageServer(unix, storages) StorageServer.StorageServer(unix, storages)
try: try:
ppid, pid = os.getppid(), os.getpid() ppid, pid = os.getppid(), os.getpid()
except: except:
pass # getpid not supported pass # getpid not supported
else: else:
open(zeo_pid,'w').write("%s %s" % (ppid, pid)) open(zeo_pid,'w').write("%s %s" % (ppid, pid))
except: except:
# Log startup exception and tell zdaemon not to restart us. # Log startup exception and tell zdaemon not to restart us.
info = sys.exc_info() info = sys.exc_info()
...@@ -269,7 +291,6 @@ def main(argv): ...@@ -269,7 +291,6 @@ def main(argv):
asyncore.loop() asyncore.loop()
def rotate_logs(): def rotate_logs():
import zLOG import zLOG
if hasattr(zLOG.log_write, 'reinitialize'): if hasattr(zLOG.log_write, 'reinitialize'):
...@@ -292,29 +313,21 @@ def shutdown(storages, die=1): ...@@ -292,29 +313,21 @@ def shutdown(storages, die=1):
# unnecessary, since we now use so_reuseaddr. # unnecessary, since we now use so_reuseaddr.
for ignored in 1,2: for ignored in 1,2:
for socket in asyncore.socket_map.values(): for socket in asyncore.socket_map.values():
try: try: socket.close()
socket.close() except: pass
except:
pass
for storage in storages.values(): for storage in storages.values():
try: try: storage.close()
storage.close() finally: pass
except:
pass
try: try:
from zLOG import LOG, INFO from zLOG import LOG, INFO
LOG('ZEO Server', INFO, LOG('ZEO Server', INFO,
"Shutting down (%s)" % (die and "shutdown" or "restart") "Shutting down (%s)" % (die and "shutdown" or "restart")
) )
except: except: pass
pass
if die: sys.exit(0)
if die: else: sys.exit(1)
sys.exit(0)
else:
sys.exit(1)
if __name__ == '__main__': if __name__=='__main__': main(sys.argv)
main(sys.argv)
...@@ -46,7 +46,7 @@ class TransUndoStorageWithCache: ...@@ -46,7 +46,7 @@ class TransUndoStorageWithCache:
# Make sure this doesn't load invalid data into the cache # Make sure this doesn't load invalid data into the cache
self._storage.load(oid, '') self._storage.load(oid, '')
self._storage.tpc_vote(t) self._storage.tpc_vote(t)
self._storage.tpc_finish(t) self._storage.tpc_finish(t)
......
##############################################################################
#
# Copyright (c) 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Tests of the distributed commit lock."""
import threading
from ZODB.Transaction import Transaction
from ZODB.tests.StorageTestBase import zodb_pickle, MinPO
import ZEO.ClientStorage
from ZEO.Exceptions import Disconnected
ZERO = '\0'*8
class DummyDB:
def invalidate(self, *args):
pass
class WorkerThread(threading.Thread):
# run the entire test in a thread so that the blocking call for
# tpc_vote() doesn't hang the test suite.
def __init__(self, storage, trans, method="tpc_finish"):
self.storage = storage
self.trans = trans
self.method = method
threading.Thread.__init__(self)
def run(self):
try:
self.storage.tpc_begin(self.trans)
oid = self.storage.new_oid()
self.storage.store(oid, ZERO, zodb_pickle(MinPO("c")), '', self.trans)
oid = self.storage.new_oid()
self.storage.store(oid, ZERO, zodb_pickle(MinPO("c")), '', self.trans)
self.storage.tpc_vote(self.trans)
if self.method == "tpc_finish":
self.storage.tpc_finish(self.trans)
else:
self.storage.tpc_abort(self.trans)
except Disconnected:
pass
class CommitLockTests:
# The commit lock tests verify that the storage successfully
# blocks and restarts transactions when there is contention for a
# single storage. There are a lot of cases to cover.
# CommitLock1 checks the case where a single transaction delays
# other transactions before they actually block. IOW, by the time
# the other transactions get to the vote stage, the first
# transaction has finished.
def checkCommitLock1OnCommit(self):
self._storages = []
try:
self._checkCommitLock("tpc_finish", self._dosetup1, self._dowork1)
finally:
self._cleanup()
def checkCommitLock1OnAbort(self):
self._storages = []
try:
self._checkCommitLock("tpc_abort", self._dosetup1, self._dowork1)
finally:
self._cleanup()
def checkCommitLock2OnCommit(self):
self._storages = []
try:
self._checkCommitLock("tpc_finish", self._dosetup2, self._dowork2)
finally:
self._cleanup()
def checkCommitLock2OnAbort(self):
self._storages = []
try:
self._checkCommitLock("tpc_abort", self._dosetup2, self._dowork2)
finally:
self._cleanup()
def _cleanup(self):
for store, trans in self._storages:
store.tpc_abort(trans)
store.close()
self._storages = []
def _checkCommitLock(self, method_name, dosetup, dowork):
# check the commit lock when a client attempts a transaction,
# but fails/exits before finishing the commit.
# Start one transaction normally.
t = Transaction()
self._storage.tpc_begin(t)
# Start a second transaction on a different connection without
# blocking the test thread.
self._storages = []
for i in range(4):
storage2 = self._duplicate_client()
t2 = Transaction()
tid = `ZEO.ClientStorage.get_timestamp()` # XXX why?
dosetup(storage2, t2, tid)
if i == 0:
storage2.close()
else:
self._storages.append((storage2, t2))
oid = self._storage.new_oid()
self._storage.store(oid, ZERO, zodb_pickle(MinPO(1)), '', t)
self._storage.tpc_vote(t)
if method_name == "tpc_finish":
self._storage.tpc_finish(t)
self._storage.load(oid, '')
else:
self._storage.tpc_abort(t)
dowork(method_name)
# Make sure the server is still responsive
self._dostore()
def _dosetup1(self, storage, trans, tid):
storage.tpc_begin(trans, tid)
def _dowork1(self, method_name):
for store, trans in self._storages:
oid = store.new_oid()
store.store(oid, ZERO, zodb_pickle(MinPO("c")), '', trans)
store.tpc_vote(trans)
if method_name == "tpc_finish":
store.tpc_finish(trans)
else:
store.tpc_abort(trans)
def _dosetup2(self, storage, trans, tid):
self._threads = []
t = WorkerThread(storage, trans)
self._threads.append(t)
t.start()
def _dowork2(self, method_name):
for t in self._threads:
t.join()
def _duplicate_client(self):
"Open another ClientStorage to the same server."
# XXX argh it's hard to find the actual address
# The rpc mgr addr attribute is a list. Each element in the
# list is a socket domain (AF_INET, AF_UNIX, etc.) and an
# address.
addr = self._storage._rpc_mgr.addr[0][1]
new = ZEO.ClientStorage.ClientStorage(addr, wait=1)
new.registerDB(DummyDB(), None)
return new
def _get_timestamp(self):
t = time.time()
t = apply(TimeStamp,(time.gmtime(t)[:5]+(t%60,)))
return `t`
##############################################################################
#
# Copyright (c) 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
"""Compromising positions involving threads."""
import threading
from ZODB.Transaction import Transaction
from ZODB.tests.StorageTestBase import zodb_pickle, MinPO
import ZEO.ClientStorage
from ZEO.Exceptions import Disconnected
ZERO = '\0'*8
class BasicThread(threading.Thread):
def __init__(self, storage, doNextEvent, threadStartedEvent):
self.storage = storage
self.trans = Transaction()
self.doNextEvent = doNextEvent
self.threadStartedEvent = threadStartedEvent
self.gotValueError = 0
self.gotDisconnected = 0
threading.Thread.__init__(self)
class GetsThroughVoteThread(BasicThread):
# This thread gets partially through a transaction before it turns
# execution over to another thread. We're trying to establish that a
# tpc_finish() after a storage has been closed by another thread will get
# a ClientStorageError error.
#
# This class does a tpc_begin(), store(), tpc_vote() and is waiting
# to do the tpc_finish() when the other thread closes the storage.
def run(self):
self.storage.tpc_begin(self.trans)
oid = self.storage.new_oid()
self.storage.store(oid, ZERO, zodb_pickle(MinPO("c")), '', self.trans)
self.storage.tpc_vote(self.trans)
self.threadStartedEvent.set()
self.doNextEvent.wait(10)
try:
self.storage.tpc_finish(self.trans)
except ZEO.ClientStorage.ClientStorageError:
self.gotValueError = 1
self.storage.tpc_abort(self.trans)
class GetsThroughBeginThread(BasicThread):
# This class is like the above except that it is intended to be run when
# another thread is already in a tpc_begin(). Thus, this thread will
# block in the tpc_begin until another thread closes the storage. When
# that happens, this one will get disconnected too.
def run(self):
try:
self.storage.tpc_begin(self.trans)
except ZEO.ClientStorage.ClientStorageError:
self.gotValueError = 1
class AbortsAfterBeginFailsThread(BasicThread):
# This class is identical to GetsThroughBeginThread except that it
# attempts to tpc_abort() after the tpc_begin() fails. That will raise a
# ClientDisconnected exception which implies that we don't have the lock,
# and that's what we really want to test (but it's difficult given the
# threading module's API).
def run(self):
try:
self.storage.tpc_begin(self.trans)
except ZEO.ClientStorage.ClientStorageError:
self.gotValueError = 1
try:
self.storage.tpc_abort(self.trans)
except Disconnected:
self.gotDisconnected = 1
class ThreadTests:
# Thread 1 should start a transaction, but not get all the way through it.
# Main thread should close the connection. Thread 1 should then get
# disconnected.
def checkDisconnectedOnThread2Close(self):
doNextEvent = threading.Event()
threadStartedEvent = threading.Event()
thread1 = GetsThroughVoteThread(self._storage,
doNextEvent, threadStartedEvent)
thread1.start()
threadStartedEvent.wait(10)
self._storage.close()
doNextEvent.set()
thread1.join()
self.assertEqual(thread1.gotValueError, 1)
# Thread 1 should start a transaction, but not get all the way through
# it. While thread 1 is in the middle of the transaction, a second thread
# should start a transaction, and it will block in the tpc_begin() --
# because thread 1 has acquired the lock in its tpc_begin(). Now the main
# thread closes the storage and both sub-threads should get disconnected.
def checkSecondBeginFails(self):
doNextEvent = threading.Event()
threadStartedEvent = threading.Event()
thread1 = GetsThroughVoteThread(self._storage,
doNextEvent, threadStartedEvent)
thread2 = GetsThroughBeginThread(self._storage,
doNextEvent, threadStartedEvent)
thread1.start()
threadStartedEvent.wait(1)
thread2.start()
self._storage.close()
doNextEvent.set()
thread1.join()
thread2.join()
self.assertEqual(thread1.gotValueError, 1)
self.assertEqual(thread2.gotValueError, 1)
def checkThatFailedBeginDoesNotHaveLock(self):
doNextEvent = threading.Event()
threadStartedEvent = threading.Event()
thread1 = GetsThroughVoteThread(self._storage,
doNextEvent, threadStartedEvent)
thread2 = AbortsAfterBeginFailsThread(self._storage,
doNextEvent, threadStartedEvent)
thread1.start()
threadStartedEvent.wait(1)
thread2.start()
self._storage.close()
doNextEvent.set()
thread1.join()
thread2.join()
self.assertEqual(thread1.gotValueError, 1)
self.assertEqual(thread2.gotValueError, 1)
self.assertEqual(thread2.gotDisconnected, 1)
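The tests above coordinate threads with a pair of `threading.Event` objects: the worker signals `threadStartedEvent` once it has reached a known point in the transaction, and the main thread releases it via `doNextEvent` after closing the storage. A minimal sketch of that handshake pattern (hypothetical names, independent of ZEO):

```python
import threading

results = []

def worker(started, proceed):
    results.append("vote")   # reach a well-known point first
    started.set()            # tell the main thread we're there
    proceed.wait(10)         # block until the main thread acts
    results.append("finish")

started = threading.Event()
proceed = threading.Event()
t = threading.Thread(target=worker, args=(started, proceed))
t.start()
started.wait(10)             # main thread: wait for the worker to arrive
results.append("close")      # e.g. close the storage at this point
proceed.set()                # let the worker continue and observe the close
t.join()
```

The two events impose a deterministic interleaving ("vote", then "close", then "finish"), which is what lets the tests assert that the blocked thread sees the error only after the storage is closed.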
...@@ -15,14 +15,17 @@ ...@@ -15,14 +15,17 @@
import asyncore import asyncore
import os import os
import profile
import random import random
import socket import socket
import sys import sys
import traceback
import types import types
import ZEO.ClientStorage, ZEO.StorageServer import ZEO.ClientStorage
# Change value of PROFILE to enable server-side profiling
PROFILE = 0 PROFILE = 0
if PROFILE:
import hotshot
def get_port(): def get_port():
"""Return a port that is not in use. """Return a port that is not in use.
...@@ -47,21 +50,23 @@ def get_port(): ...@@ -47,21 +50,23 @@ def get_port():
if os.name == "nt": if os.name == "nt":
def start_zeo_server(storage_name, args, port=None): def start_zeo_server(storage_name, args, addr=None):
"""Start a ZEO server in a separate process. """Start a ZEO server in a separate process.
Returns the ZEO port, the test server port, and the pid. Returns the ZEO port, the test server port, and the pid.
""" """
import ZEO.tests.winserver import ZEO.tests.winserver
if port is None: if addr is None:
port = get_port() port = get_port()
else:
port = addr[1]
script = ZEO.tests.winserver.__file__ script = ZEO.tests.winserver.__file__
if script.endswith('.pyc'): if script.endswith('.pyc'):
script = script[:-1] script = script[:-1]
args = (sys.executable, script, str(port), storage_name) + args args = (sys.executable, script, str(port), storage_name) + args
d = os.environ.copy() d = os.environ.copy()
d['PYTHONPATH'] = os.pathsep.join(sys.path) d['PYTHONPATH'] = os.pathsep.join(sys.path)
pid = os.spawnve(os.P_NOWAIT, sys.executable, args, os.environ) pid = os.spawnve(os.P_NOWAIT, sys.executable, args, d)
return ('localhost', port), ('localhost', port + 1), pid return ('localhost', port), ('localhost', port + 1), pid
else: else:
...@@ -79,9 +84,11 @@ else: ...@@ -79,9 +84,11 @@ else:
buf = self.recv(4) buf = self.recv(4)
if buf: if buf:
assert buf == "done" assert buf == "done"
server.close_server()
asyncore.socket_map.clear() asyncore.socket_map.clear()
def handle_close(self): def handle_close(self):
server.close_server()
asyncore.socket_map.clear() asyncore.socket_map.clear()
class ZEOClientExit: class ZEOClientExit:
...@@ -90,38 +97,56 @@ else: ...@@ -90,38 +97,56 @@ else:
self.pipe = pipe self.pipe = pipe
def close(self): def close(self):
os.write(self.pipe, "done") try:
os.close(self.pipe) os.write(self.pipe, "done")
os.close(self.pipe)
except os.error:
pass
def start_zeo_server(storage, addr): def start_zeo_server(storage_name, args, addr):
assert isinstance(args, types.TupleType)
rd, wr = os.pipe() rd, wr = os.pipe()
pid = os.fork() pid = os.fork()
if pid == 0: if pid == 0:
if PROFILE: import ZEO.zrpc.log
p = profile.Profile() reload(ZEO.zrpc.log)
p.runctx("run_server(storage, addr, rd, wr)", globals(), try:
locals()) if PROFILE:
p.dump_stats("stats.s.%d" % os.getpid()) p = hotshot.Profile("stats.s.%d" % os.getpid())
else: p.runctx("run_server(storage, addr, rd, wr)",
run_server(storage, addr, rd, wr) globals(), locals())
p.close()
else:
run_server(addr, rd, wr, storage_name, args)
except:
print "Exception in ZEO server process"
traceback.print_exc()
os._exit(0) os._exit(0)
else: else:
os.close(rd) os.close(rd)
return pid, ZEOClientExit(wr) return pid, ZEOClientExit(wr)
def run_server(storage, addr, rd, wr): def load_storage(name, args):
package = __import__("ZODB." + name)
mod = getattr(package, name)
klass = getattr(mod, name)
return klass(*args)
def run_server(addr, rd, wr, storage_name, args):
# in the child, run the storage server # in the child, run the storage server
global server
os.close(wr) os.close(wr)
ZEOServerExit(rd) ZEOServerExit(rd)
serv = ZEO.StorageServer.StorageServer(addr, {'1':storage}) import ZEO.StorageServer, ZEO.zrpc.server
asyncore.loop() storage = load_storage(storage_name, args)
os.close(rd) server = ZEO.StorageServer.StorageServer(addr, {'1':storage})
ZEO.zrpc.server.loop()
storage.close() storage.close()
if isinstance(addr, types.StringType): if isinstance(addr, types.StringType):
os.unlink(addr) os.unlink(addr)
def start_zeo(storage, cache=None, cleanup=None, domain="AF_INET", def start_zeo(storage_name, args, cache=None, cleanup=None,
storage_id="1", cache_size=20000000): domain="AF_INET", storage_id="1", cache_size=20000000):
"""Setup ZEO client-server for storage. """Setup ZEO client-server for storage.
Returns a ClientStorage instance and a ZEOClientExit instance. Returns a ClientStorage instance and a ZEOClientExit instance.
...@@ -137,10 +162,10 @@ else: ...@@ -137,10 +162,10 @@ else:
else: else:
raise ValueError, "bad domain: %s" % domain raise ValueError, "bad domain: %s" % domain
pid, exit = start_zeo_server(storage, addr) pid, exit = start_zeo_server(storage_name, args, addr)
s = ZEO.ClientStorage.ClientStorage(addr, storage_id, s = ZEO.ClientStorage.ClientStorage(addr, storage_id,
debug=1, client=cache, client=cache,
cache_size=cache_size, cache_size=cache_size,
min_disconnect_poll=0.5) min_disconnect_poll=0.5,
wait=1)
return s, exit, pid return s, exit, pid
...@@ -69,16 +69,18 @@ def start_server(addr): ...@@ -69,16 +69,18 @@ def start_server(addr):
def start_client(addr, client_func=None): def start_client(addr, client_func=None):
pid = os.fork() pid = os.fork()
if pid == 0: if pid == 0:
import ZEO.ClientStorage try:
if VERBOSE: import ZEO.ClientStorage
print "Client process started:", os.getpid() if VERBOSE:
cli = ZEO.ClientStorage.ClientStorage(addr, client=CLIENT_CACHE) print "Client process started:", os.getpid()
if client_func is None: cli = ZEO.ClientStorage.ClientStorage(addr, client=CLIENT_CACHE)
run(cli) if client_func is None:
else: run(cli)
client_func(cli) else:
cli.close() client_func(cli)
os._exit(0) cli.close()
finally:
os._exit(0)
else: else:
return pid return pid
......
...@@ -41,7 +41,7 @@ Options: ...@@ -41,7 +41,7 @@ Options:
-t n Number of concurrent threads to run. -t n Number of concurrent threads to run.
""" """
import asyncore import asyncore
import sys, os, getopt, string, time import sys, os, getopt, string, time
##sys.path.insert(0, os.getcwd()) ##sys.path.insert(0, os.getcwd())
...@@ -81,7 +81,7 @@ def work(db, results, nrep, compress, data, detailed, minimize, threadno=None): ...@@ -81,7 +81,7 @@ def work(db, results, nrep, compress, data, detailed, minimize, threadno=None):
for r in 1, 10, 100, 1000: for r in 1, 10, 100, 1000:
t = time.time() t = time.time()
conflicts = 0 conflicts = 0
jar = db.open() jar = db.open()
while 1: while 1:
try: try:
...@@ -105,7 +105,7 @@ def work(db, results, nrep, compress, data, detailed, minimize, threadno=None): ...@@ -105,7 +105,7 @@ def work(db, results, nrep, compress, data, detailed, minimize, threadno=None):
else: else:
break break
jar.close() jar.close()
t = time.time() - t t = time.time() - t
if detailed: if detailed:
if threadno is None: if threadno is None:
...@@ -205,11 +205,11 @@ def mean(l): ...@@ -205,11 +205,11 @@ def mean(l):
for v in l: for v in l:
tot = tot + v tot = tot + v
return tot / len(l) return tot / len(l)
##def compress(s): ##def compress(s):
## c = zlib.compressobj() ## c = zlib.compressobj()
## o = c.compress(s) ## o = c.compress(s)
## return o + c.flush() ## return o + c.flush()
if __name__=='__main__': if __name__=='__main__':
main(sys.argv[1:]) main(sys.argv[1:])
...@@ -103,8 +103,13 @@ def start_child(zaddr): ...@@ -103,8 +103,13 @@ def start_child(zaddr):
pid = os.fork() pid = os.fork()
if pid != 0: if pid != 0:
return pid return pid
try:
storage = ClientStorage(zaddr, debug=1, min_disconnect_poll=0.5) _start_child(zaddr)
finally:
os._exit(0)
def _start_child(zaddr):
storage = ClientStorage(zaddr, debug=1, min_disconnect_poll=0.5, wait=1)
db = ZODB.DB(storage, pool_size=NUM_CONNECTIONS) db = ZODB.DB(storage, pool_size=NUM_CONNECTIONS)
setup(db.open()) setup(db.open())
conns = [] conns = []
...@@ -129,7 +134,5 @@ def start_child(zaddr): ...@@ -129,7 +134,5 @@ def start_child(zaddr):
c.__count += 1 c.__count += 1
work(c) work(c)
os._exit(0)
if __name__ == "__main__": if __name__ == "__main__":
main() main()
##############################################################################
#
# Copyright (c) 2001, 2002 Zope Corporation and Contributors.
# All Rights Reserved.
#
# This software is subject to the provisions of the Zope Public License,
# Version 2.0 (ZPL). A copy of the ZPL should accompany this distribution.
# THIS SOFTWARE IS PROVIDED "AS IS" AND ANY AND ALL EXPRESS OR IMPLIED
# WARRANTIES ARE DISCLAIMED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
# WARRANTIES OF TITLE, MERCHANTABILITY, AGAINST INFRINGEMENT, AND FITNESS
# FOR A PARTICULAR PURPOSE
#
##############################################################################
import random
import unittest
from ZEO.TransactionBuffer import TransactionBuffer
def random_string(size):
"""Return a random string of size size."""
l = [chr(random.randrange(256)) for i in range(size)]
return "".join(l)
def new_store_data():
"""Return arbitrary data to use as argument to store() method."""
return random_string(8), '', random_string(random.randrange(1000))
def new_invalidate_data():
"""Return arbitrary data to use as argument to invalidate() method."""
return random_string(8), ''
class TransBufTests(unittest.TestCase):
def checkTypicalUsage(self):
tbuf = TransactionBuffer()
tbuf.store(*new_store_data())
tbuf.invalidate(*new_invalidate_data())
tbuf.begin_iterate()
while 1:
o = tbuf.next()
if o is None:
break
tbuf.clear()
def doUpdates(self, tbuf):
data = []
for i in range(10):
d = new_store_data()
tbuf.store(*d)
data.append(d)
d = new_invalidate_data()
tbuf.invalidate(*d)
data.append(d)
tbuf.begin_iterate()
for i in range(len(data)):
x = tbuf.next()
if x[2] is None:
# the tbuf adds a dummy None to invalidate records
x = x[:2]
self.assertEqual(x, data[i])
def checkOrderPreserved(self):
tbuf = TransactionBuffer()
self.doUpdates(tbuf)
def checkReusable(self):
tbuf = TransactionBuffer()
self.doUpdates(tbuf)
tbuf.clear()
self.doUpdates(tbuf)
tbuf.clear()
self.doUpdates(tbuf)
def test_suite():
return unittest.makeSuite(TransBufTests, 'check')
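The tests above exercise a FIFO buffer that records `store()` and `invalidate()` calls during a transaction and replays them in order via `begin_iterate()`/`next()`. A minimal in-memory sketch of that pattern (not the real `TransactionBuffer`, which may spool records to disk):

```python
class MiniTransactionBuffer:
    """Record store/invalidate calls in order and replay them."""

    def __init__(self):
        self._records = []
        self._pos = 0

    def store(self, oid, version, data):
        self._records.append((oid, version, data))

    def invalidate(self, oid, version):
        # Invalidation records carry a dummy None payload, matching
        # the behavior doUpdates() checks above.
        self._records.append((oid, version, None))

    def begin_iterate(self):
        self._pos = 0

    def next(self):
        # Return the next recorded tuple, or None when exhausted.
        if self._pos >= len(self._records):
            return None
        rec = self._records[self._pos]
        self._pos += 1
        return rec

    def clear(self):
        self._records = []
        self._pos = 0
```

Clearing and refilling the buffer, as `checkReusable` does, just resets the record list and the replay cursor.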
...@@ -11,20 +11,16 @@ ...@@ -11,20 +11,16 @@
# FOR A PARTICULAR PURPOSE # FOR A PARTICULAR PURPOSE
# #
############################################################################## ##############################################################################
# This module is a simplified version of the select_trigger module
# from Sam Rushing's Medusa server.
import asyncore import asyncore
import errno
import os import os
import socket import socket
import string import string
import thread import thread
if os.name == 'posix': if os.name == 'posix':
class trigger(asyncore.file_dispatcher): class trigger (asyncore.file_dispatcher):
"Wake up a call to select() running in the main thread" "Wake up a call to select() running in the main thread"
...@@ -56,46 +52,50 @@ if os.name == 'posix': ...@@ -56,46 +52,50 @@ if os.name == 'posix':
# new data onto a channel's outgoing data queue at the same time that # new data onto a channel's outgoing data queue at the same time that
# the main thread is trying to remove some] # the main thread is trying to remove some]
def __init__(self): def __init__ (self):
r, w = self._fds = os.pipe() r, w = self._fds = os.pipe()
self.trigger = w self.trigger = w
asyncore.file_dispatcher.__init__(self, r) asyncore.file_dispatcher.__init__ (self, r)
self.lock = thread.allocate_lock() self.lock = thread.allocate_lock()
self.thunks = [] self.thunks = []
self._closed = None
# Override the asyncore close() method, because it seems that
# it would only close the r file descriptor and not w. The
# constructor calls file_dispatcher.__init__ and passes r,
# which would get stored in a file_wrapper and get closed by
# the default close. But that would leave w open...
def __del__(self): def close(self):
os.close(self._fds[0]) if self._closed is None:
os.close(self._fds[1]) self._closed = 1
self.del_channel()
for fd in self._fds:
os.close(fd)
def __repr__(self): def __repr__ (self):
return '<select-trigger(pipe) at %x>' % id(self) return '<select-trigger (pipe) at %x>' % id(self)
def readable(self): def readable (self):
return 1 return 1
def writable(self): def writable (self):
return 0 return 0
def handle_connect(self): def handle_connect (self):
pass pass
def pull_trigger(self, thunk=None): def pull_trigger (self, thunk=None):
# print 'PULL_TRIGGER: ', len(self.thunks)
if thunk: if thunk:
try: try:
self.lock.acquire() self.lock.acquire()
self.thunks.append(thunk) self.thunks.append (thunk)
finally: finally:
self.lock.release() self.lock.release()
os.write(self.trigger, 'x') os.write (self.trigger, 'x')
def handle_read(self): def handle_read (self):
try: self.recv (8192)
self.recv(8192)
except os.error, err:
if err[0] == errno.EAGAIN: # resource temporarily unavailable
return
raise
try: try:
self.lock.acquire() self.lock.acquire()
for thunk in self.thunks: for thunk in self.thunks:
...@@ -104,7 +104,7 @@ if os.name == 'posix': ...@@ -104,7 +104,7 @@ if os.name == 'posix':
except: except:
nil, t, v, tbinfo = asyncore.compact_traceback() nil, t, v, tbinfo = asyncore.compact_traceback()
print ('exception in trigger thunk:' print ('exception in trigger thunk:'
'(%s:%s %s)' % (t, v, tbinfo)) ' (%s:%s %s)' % (t, v, tbinfo))
self.thunks = [] self.thunks = []
finally: finally:
self.lock.release() self.lock.release()
...@@ -116,13 +116,13 @@ else: ...@@ -116,13 +116,13 @@ else:
# win32-safe version # win32-safe version
class trigger(asyncore.dispatcher): class trigger (asyncore.dispatcher):
address = ('127.9.9.9', 19999) address = ('127.9.9.9', 19999)
def __init__(self): def __init__ (self):
a = socket.socket(socket.AF_INET, socket.SOCK_STREAM) a = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
w = socket.socket(socket.AF_INET, socket.SOCK_STREAM) w = socket.socket (socket.AF_INET, socket.SOCK_STREAM)
# set TCP_NODELAY to true to avoid buffering # set TCP_NODELAY to true to avoid buffering
w.setsockopt(socket.IPPROTO_TCP, 1, 1) w.setsockopt(socket.IPPROTO_TCP, 1, 1)
...@@ -139,51 +139,46 @@ else: ...@@ -139,51 +139,46 @@ else:
if port <= 19950: if port <= 19950:
raise 'Bind Error', 'Cannot bind trigger!' raise 'Bind Error', 'Cannot bind trigger!'
port=port - 1 port=port - 1
a.listen(1) a.listen (1)
w.setblocking(0) w.setblocking (0)
try: try:
w.connect(self.address) w.connect (self.address)
except: except:
pass pass
r, addr = a.accept() r, addr = a.accept()
a.close() a.close()
w.setblocking(1) w.setblocking (1)
self.trigger = w self.trigger = w
asyncore.dispatcher.__init__(self, r) asyncore.dispatcher.__init__ (self, r)
self.lock = thread.allocate_lock() self.lock = thread.allocate_lock()
self.thunks = [] self.thunks = []
self._trigger_connected = 0 self._trigger_connected = 0
def __repr__(self): def __repr__ (self):
return '<select-trigger (loopback) at %x>' % id(self) return '<select-trigger (loopback) at %x>' % id(self)
def readable(self): def readable (self):
return 1 return 1
def writable(self): def writable (self):
return 0 return 0
def handle_connect(self): def handle_connect (self):
pass pass
def pull_trigger(self, thunk=None): def pull_trigger (self, thunk=None):
if thunk: if thunk:
try: try:
self.lock.acquire() self.lock.acquire()
self.thunks.append(thunk) self.thunks.append (thunk)
finally: finally:
self.lock.release() self.lock.release()
self.trigger.send('x') self.trigger.send ('x')
def handle_read(self): def handle_read (self):
try: self.recv (8192)
self.recv(8192)
except os.error, err:
if err[0] == errno.EAGAIN: # resource temporarily unavailable
return
raise
try: try:
self.lock.acquire() self.lock.acquire()
for thunk in self.thunks: for thunk in self.thunks:
......
@@ -14,11 +14,14 @@
 """Sized message async connections
 """

-__version__ = "$Revision: 1.16 $"[11:-2]
+__version__ = "$Revision: 1.17 $"[11:-2]

-import asyncore, string, struct, zLOG, sys, Acquisition
+import asyncore, struct
+from Exceptions import Disconnected
+from zLOG import LOG, TRACE, ERROR, INFO, BLATHER
+from types import StringType
 import socket, errno
-from logger import zLogger

 # Use the dictionary to make sure we get the minimum number of errno
 # entries.  We expect that EWOULDBLOCK == EAGAIN on most systems --
@@ -38,81 +41,103 @@ tmp_dict = {errno.EAGAIN: 0,
 expected_socket_write_errors = tuple(tmp_dict.keys())
 del tmp_dict

-class SizedMessageAsyncConnection(Acquisition.Explicit, asyncore.dispatcher):
-
-    __append=None # Marker indicating that we're closed
-    socket=None # to outwit Sam's getattr
+class SizedMessageAsyncConnection(asyncore.dispatcher):
+
+    __super_init = asyncore.dispatcher.__init__
+    __super_close = asyncore.dispatcher.close
+
+    __closed = 1 # Marker indicating that we're closed
+
+    socket = None # to outwit Sam's getattr
+    READ_SIZE = 8096

     def __init__(self, sock, addr, map=None, debug=None):
-        SizedMessageAsyncConnection.inheritedAttribute(
-            '__init__')(self, sock, map)
-        self.addr=addr
-        if debug is None and __debug__:
-            self._debug = zLogger("smac")
-        else:
+        self.addr = addr
+        if debug is not None:
             self._debug = debug
-        self.__state=None
-        self.__inp=None
-        self.__inpl=0
-        self.__l=4
-        self.__output=output=[]
-        self.__append=output.append
-        self.__pop=output.pop
+        elif not hasattr(self, '_debug'):
+            self._debug = __debug__ and 'smac'
+        self.__state = None
+        self.__inp = None # None, a single String, or a list
+        self.__input_len = 0
+        self.__msg_size = 4
+        self.__output = []
+        self.__closed = None
+        self.__super_init(sock, map)

-    def handle_read(self,
-                    join=string.join, StringType=type(''), _type=type,
-                    _None=None):
+    # XXX avoid expensive getattr calls?  Can't remember exactly what
+    # this comment was supposed to mean, but it has something to do
+    # with the way asyncore uses getattr and uses if sock:
+    def __nonzero__(self):
+        return 1
+
+    def handle_read(self):
+        # Use a single __inp buffer and integer indexes to make this
+        # fast.
         try:
             d=self.recv(8096)
         except socket.error, err:
             if err[0] in expected_socket_read_errors:
                 return
             raise
-        if not d: return
+        if not d:
+            return

-        inp=self.__inp
-        if inp is _None:
-            inp=d
-        elif _type(inp) is StringType:
-            inp=[inp,d]
+        input_len = self.__input_len + len(d)
+        msg_size = self.__msg_size
+        state = self.__state
+
+        inp = self.__inp
+        if msg_size > input_len:
+            if inp is None:
+                self.__inp = d
+            elif type(self.__inp) is StringType:
+                self.__inp = [self.__inp, d]
+            else:
+                self.__inp.append(d)
+            self.__input_len = input_len
+            return # keep waiting for more input
+
+        # load all previous input and d into single string inp
+        if isinstance(inp, StringType):
+            inp = inp + d
+        elif inp is None:
+            inp = d
         else:
             inp.append(d)
+            inp = "".join(inp)

-        inpl=self.__inpl+len(d)
-        l=self.__l
-
-        while 1:
-            if l <= inpl:
-                # Woo hoo, we have enough data
-                if _type(inp) is not StringType: inp=join(inp,'')
-                d=inp[:l]
-                inp=inp[l:]
-                inpl=inpl-l
-                if self.__state is _None:
-                    # waiting for message
-                    l=struct.unpack(">i",d)[0]
-                    self.__state=1
-                else:
-                    l=4
-                    self.__state=_None
-                    self.message_input(d)
-            else:
-                break # not enough data
-
-        self.__l=l
-        self.__inp=inp
-        self.__inpl=inpl
+        offset = 0
+        while (offset + msg_size) <= input_len:
+            msg = inp[offset:offset + msg_size]
+            offset = offset + msg_size
+            if state is None:
+                # waiting for message
+                msg_size = struct.unpack(">i", msg)[0]
+                state = 1
+            else:
+                msg_size = 4
+                state = None
+                self.message_input(msg)
+
+        self.__state = state
+        self.__msg_size = msg_size
+        self.__inp = inp[offset:]
+        self.__input_len = input_len - offset

-    def readable(self): return 1
-    def writable(self): return not not self.__output
+    def readable(self):
+        return 1
+
+    def writable(self):
+        if len(self.__output) == 0:
+            return 0
+        else:
+            return 1

     def handle_write(self):
-        output=self.__output
+        output = self.__output
         while output:
-            v=output[0]
+            v = output[0]
             try:
                 n=self.send(v)
             except socket.error, err:
@@ -120,37 +145,33 @@ class SizedMessageAsyncConnection(Acquisition.Explicit, asyncore.dispatcher):
                     break # we couldn't write anything
                 raise
             if n < len(v):
-                output[0]=v[n:]
+                output[0] = v[n:]
                 break # we can't write any more
             else:
                 del output[0]
-                #break # waaa

     def handle_close(self):
         self.close()

-    def message_output(self, message,
-                       pack=struct.pack, len=len):
-        if self._debug is not None:
-            if len(message) > 40:
-                m = message[:40]+' ...'
-            else:
-                m = message
-            self._debug.trace('message_output %s' % `m`)
-
-        append=self.__append
-        if append is None:
-            raise Disconnected("This action is temporarily unavailable.<p>")
-
-        append(pack(">i",len(message))+message)
+    def message_output(self, message):
+        if __debug__:
+            if self._debug:
+                if len(message) > 40:
+                    m = message[:40]+' ...'
+                else:
+                    m = message
+                LOG(self._debug, TRACE, 'message_output %s' % `m`)
+
+        if self.__closed is not None:
+            raise Disconnected, (
+                "This action is temporarily unavailable."
+                "<p>"
+                )
+        # do two separate appends to avoid copying the message string
+        self.__output.append(struct.pack(">i", len(message)))
+        self.__output.append(message)

     def close(self):
-        if self.__append is not None:
-            self.__append=None
-            SizedMessageAsyncConnection.inheritedAttribute('close')(self)
+        if self.__closed is None:
+            self.__closed = 1
+            self.__super_close()

-class Disconnected(Exception):
-    """The client has become disconnected from the server
-    """