Zope Enterprise Objects (ZEO) Revision History
ZEO 1.0 beta 1
New Features
- Improved release organization.
Bugs fixed
- Normal shutdown was reported as a panic.
- The signal exception handler was disabled.
- Errors arising from incompatible versions of cPickle were
unclear.
ZEO 0.5.0
New Features
- The server can be made to reopen its log file
by sending it a HUP (on systems supporting signals). Note
that this requires a change to asyncore to catch interrupted
system calls on some platforms.
- The shutdown signals have been changed:
o To shut down, use TERM
o To restart, use INT. (This must be sent to the
child, not the parent.)
- Client scripts can now be written to pack a remote storage and
wait for the pack results (see the sketch after this list). This
is handy when packing as part of cron jobs.
- It is no longer necessary to symbolically link cPickle or
ZServer. ZServer is no longer necessary at all.
- A Zope-style INSTANCE_HOME and var directory are no longer
needed.
- If ZServer *is* available, the medusa monitor server can be
used in the storage server.
- An option, -d, was added to facilitate generation of a
detailed debug log while running in the background.
- The documentation has been simplified and spread over multiple
files in the doc subdirectory.
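
As a rough illustration of the pack feature above, here is a
minimal sketch of a pack-and-wait client script. It assumes the
ClientStorage API as it exists in later ZEO releases; the server
address and the one-day pack window are illustrative assumptions,
not part of this release's documentation.

    import time
    from ZEO.ClientStorage import ClientStorage

    # Connect to the storage server (address is an assumption).
    storage = ClientStorage(('localhost', 8100))
    try:
        # Pack away history older than one day; wait=True blocks
        # until the server reports that the pack has completed.
        storage.pack(time.time() - 86400, wait=True)
    finally:
        storage.close()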
Bugs Fixed
- Application-level conflict resolution, introduced in Zope
2.3.1, was not supported. This caused the ZEO cache to be
written incorrectly.
- A possible (but unobserved) race condition that could
lead to ZEO cache corruption was corrected.
- ZEO clients could fail to start if they needed data that
wasn't in their cache and if they couldn't talk to a ZEO
server right away. For now, on startup, the client storage
will wait to connect to a storage before returning from
initialization.
- Restarting the ZEO server shortly after shutting down could
lead to "address already in use" errors.
- User-level exceptions, like undo, version-lock, and conflict
errors, were logged in the server event log.
- Pack errors weren't logged in the server event log.
- If an attempt was made to commit a transaction with updates
while the client storage was disconnected from the server,
no further write transactions would be allowed, even after
reconnection, and the site would eventually hang.
- A forgotten argument made it unreliable to start a ClientStorage
after the main loop had started.
- In combination with recent changes in zdaemon, startup errors
could cause infinite loops.
- The handling of the Python global, __debug__, was not
compatible with Python 2.1.
- An exception raised on the server that could not be
unpickled on the client could cause the client connection to
fail.
Planned for (future) ZEO releases
New Features
- Provide optional data compression. This should enhance
performance over slow connections to the storage server and
reduce the server I/O load.
- Provide optional authentication adapters that allow for
pluggable authentication and encryption schemes.
This is a feature that is listed on the ZEO fact sheet, but
that didn't make it into the 1.0 release. Firewall or secure
tunneling techniques can be used to secure communication
between clients and the storage for now when the client and
storage are on different machines. (If they are on the same
machine, then unix-domain sockets or the loop-back interface
can be used.)
- Provide an option to start a client process without waiting
for a connection to the storage server. This was the original
intent, however, it turns out that it can be extremely
problematic to get storage errors resulting from attempts to
read objects not in the cache during process (e.g. Zope)
startup. In addition, some smarter cache management can be
done to decrease the probability of important objects being
removed from the cache.
- Provide improved client cache management. This will involve
changes like:
o Increasing the number of cache files to reduce the number of
objects lost from the cache (or that need to be recovered)
when the cache "rolls over".
o Use separate indexes for each cache.
o Use better cache indexing structures.
ZEO 0.4.1
Bugs fixed
- Improperly handled server exceptions could cause clients to
lock up.
- Mishandling of client transaction metadata could cause
server errors because transaction ids were mangled.
- The storage server didn't close sockets on shutdown. This
could sometimes make it necessary to wait before restarting
the server to avoid "address already in use" messages.
- The storage server did not log shutdown.
ZEO 0.4
Bugs fixed
- The new (in 0.3) logic to switch to an ordinary user when
started as root was executed too late so that some files were
incorrectly owned by root. This caused ZEO clients to fail
when the cache files were rotated.
- There were some unusual error conditions that were not handled
correctly that could cause clients to fail. This was detected
only when ZEO was put into production on zope.org.
- The cache files weren't rotated on reads. This could cause the
caches to grow way beyond their target sizes.
- Exceptions raised in the server's asynchronous store handler
could cause the client and server to get out of sync.
- Connection and disconnection events weren't logged on the
server.
Features added
- ClientStorage objects have two new constructor arguments,
min_disconnect_poll and max_disconnect_poll to set the minimum
and maximum times to wait, in seconds, before retrying to
reconnect when disconnected from the ZEO server (see the sketch
after this list).
- A call to get database info on startup was eliminated in
favor of having the server send the information
automatically. This eliminates a round trip and therefore
speeds up startup a tiny bit.
- Database size info is now sent to all clients (asynchronously)
after a pack and after a transaction commit, allowing all
clients to have timely size information.
- Added client logging of connection attempts.
- Added a misc subdirectory with sample storage server start and
stop scripts and with a sample custom_zodb.py module.
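
The sketch below illustrates the new reconnect-polling arguments;
it is a minimal example assuming the ClientStorage constructor
described above, with an illustrative server address and poll
intervals.

    from ZEO.ClientStorage import ClientStorage

    storage = ClientStorage(
        ('localhost', 8100),      # server address (an assumption)
        min_disconnect_poll=5,    # wait at least 5 seconds between retries
        max_disconnect_poll=300,  # back off to at most 5 minutes
    )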
ZEO 0.3.0
Bugs fixed
- Large transactions (e.g. ZCatalog updates) could cause
spurious conflict errors that could, eventually, make it
impossible to modify some objects without restarting Zope.
- Temporary non-persistent cache files were not removed at the
end of a run.
Features added
- On Unix, when the storage server start script is run as root,
the script will switch to a different user (nobody by
default). There is a new '-u' option that can be used to
specify the user.
- On Unix the server will gracefully close served storages when
the server is killed with a SIGTERM or SIGHUP. If a
FileStorage is being served, then an index file will be
written.
ZEO 0.2.3
Bugs fixed
- Versions didn't work. Not even close. :|
- If a client was disconnected from a server during transaction
commit, then, when the client was reconnected to the server,
attempts to commit transactions caused the client to hang.
- The server would fail (and successfully automatically restart)
if an unpickleable exception was raised.
ZEO 0.2.2
Bugs fixed
- The storage server didn't fully implement a new ZODB storage
protocol. This caused serving of FileStorages to fail in Zope
2.2.1, since FileStorages now use this protocol.
- In the start.py start script:
o The '-S' option did not allow spaces between the option and its
argument.
o The '-S' option did not work with FileStorages.
o The README file didn't mention the '-S' option.
ZEO 0.2.1
Bugs fixed
- ZEO clients didn't work properly (effectively at all) on
Solaris or Windows NT.
- An error in the handling of the distributed transaction lock
could cause a client to stop writing and eventually hang if
two clients tried to commit a transaction at the same time.
- Extra (harmless) messages were sent from the server
when invalidating objects during a commit.
- New protocols (especially 'loadSerial'), used for looking at
DTML historical versions, were not implemented.
Features
- The '-S' option was added to the storage server startup script
to allow selection of one or more storages to serve.
ZEO 0.2
This release is expected to be close to beta quality. Initially, the
primary goals of this release were to:
- Correct some consistency problems that had been observed in
0.1 on startup.
- Allow ZEO clients to detect, survive, and recover from
disconnection from the ZEO server.
Based on feedback from some folks who tried 0.1, improving
write performance was made a priority.
Features
- The ZEO Client now handles server failures gracefully:
o The client with a persistent cache can generally start up
even if the server is not running, assuming that it has at
least a minimal number of objects in the cache.
o The client will continue to function even if the server
connection is interrupted.
o Server availability is detected by the client (which tries
to connect to the server every few minutes). A disconnected
client will automatically reconnect to an available server.
o When the client is disconnected, write transactions cannot
be performed. Reads fail for objects that are not in the
cache.
- Performance enhancements
The speed of write-intensive operations has been improved by
approximately 70%. When using Unix domain sockets for
client/server communication, ZEO transactions take roughly 2-3
times as long as FileStorage transactions to commit.
(This was based on some tests. Your mileage may vary.)
- Packing support was added. Note that packing is done
asynchronously. The client returns immediately from a pack
call. The server packs in a thread and sends updated
statistics to the client when packing is completed.
- Support for Unix-domain sockets was added (see the sketch after
this list).
- Pickles sent to the server are now checked to make sure that
they don't contain unapproved instance or global-variable
(function) pickles.
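
A minimal sketch of the Unix-domain socket support mentioned
above, assuming ZEO's convention that a string address is treated
as a socket path rather than a (host, port) pair; the path itself
is illustrative.

    from ZEO.ClientStorage import ClientStorage

    # A string address selects a Unix-domain socket.
    storage = ClientStorage('/var/run/zeo/zeo.sock')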
Bugs fixed
- Data could be badly inconsistent when a persistent cache
was started, due to a bug in the cache initialization logic.
- The application was allowed to begin operation while the cache
was being verified. This could lead to harmful inconsistencies.
Changes made to Zope to support ZEO
- A number of changes were made to ZODB to support asynchronous
storage during transaction commit.
- Normally Zope updates the database during startup to reflect
product changes. This behavior is now suppressed when the
ZEO_CLIENT environment variable is set (see the sketch after
this list). It doesn't make sense for many clients to update
the database for the same products.
- The asyncore module was modified to add support for multiple
asyncore loops. This change was applied to asyncore in the
Zope and the (official, owned by Sam Rushing) medusa CVS
trees.
- A new module, ThreadedAsync.py, has been added in the Zope
lib/python directory. This module provides notification to
async objects (like ZEO clients) to let them know when the
asyncore main loop has started. This was needed to enable use
of async code before the main loop starts.
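
As a rough sketch of the ZEO_CLIENT check described above, client
startup code can skip the product update step when the variable is
set. The install_products helper here is hypothetical, standing in
for Zope's real startup step.

    import os

    def maybe_install_products():
        # Many ZEO clients share one database, so a process with
        # ZEO_CLIENT set should not update it for products.
        if os.environ.get('ZEO_CLIENT'):
            return
        install_products()  # hypothetical stand-in for Zope's product setup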
ZEO 0.1 (aka "iteration 1")
This was an initial alpha of ZEO that demonstrated basic
functionality. It lacked robustness and had some performance
problems on writes.