Commit 03bf92a9 authored by Tim Peters

Merge rev 37432 from 3.4 branch.

The cache simulation seems good enough to be useful now,
although the toy app I wrote to generate a 500MB trace
file doesn't use MVCC in an essential way (some of the
MVCC simulation code is nevertheless exercised, since
an invalidation of current data in the presence of MVCC
always creates a cache record for the newly non-current
revision).

Still puzzling over what to do when the trace file records
a load hit but the simulated cache gets a miss.  The
old simulation code seemed to assume that a store for the same
oid would show up in the trace file next, and it could get
the info it needed about the missing object from the store
trace.  But that isn't true:  precisely because the load was
a hit in the trace file, the object isn't going to be stored
again "soon" in the trace file.
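
One possible repair strategy (a toy sketch only, not the code in this commit; the class and attribute names here are hypothetical) is to remember the last data size ever seen for each oid, so that a trace-file hit that misses in the simulated cache can be "repaired" by pretending the miss was followed by a store of that remembered size:

```python
class MissRepair:
    """Toy sketch of one repair strategy for trace-hit/simulated-miss:
    remember the last size seen for each oid, and on a simulated miss
    pretend a store followed.  Hypothetical names, not the real simul.py."""

    def __init__(self):
        self.last_size = {}   # oid -> last data size seen in the trace
        self.cache = {}       # oid -> size (stand-in for the real cache)
        self.repaired = 0     # number of misses repaired this way

    def saw_store(self, oid, size):
        # A store in the trace tells us the object's size.
        self.last_size[oid] = size
        self.cache[oid] = size

    def load(self, oid):
        if oid in self.cache:
            return True       # simulated hit
        size = self.last_size.get(oid)
        if size is not None:  # repair: pretend a store followed the miss
            self.cache[oid] = size
            self.repaired += 1
        return False          # simulated miss either way
```

This sidesteps waiting for a store that (precisely because the real cache hit) may never appear in the trace, at the cost of possibly using a stale size.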

Here are some actual-vs-simulated hit rate results, for a
20MB cache, with a trace file covering about 9 million
loads, over 3 ZEO client (re)starts:

actual   simulated
------   ---------
  93.1        92.7
  79.8        79.0
  68.0        69.1
      
  81.4        81.1  overall

Since the simulated hit rates are both higher and lower
than the actual hit rates, that argues against a gross
systematic bias in the simulation (although there may
be several systematic biases in opposite directions).
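
For intuition about what such a simulation computes, here is a minimal sketch (hypothetical, and much simpler than simul.py: a plain byte-bounded LRU cache with no MVCC, no file layout, and no restart handling) that replays (oid, size) load events and reports a hit rate:

```python
from collections import OrderedDict

def simulate(trace, cache_bytes):
    """Replay (oid, size) load events through a size-bounded LRU cache
    and return the hit rate as a percentage.  Toy sketch, not simul.py."""
    cache = OrderedDict()               # oid -> size, in LRU order
    used = hits = loads = 0
    for oid, size in trace:
        loads += 1
        if oid in cache:
            hits += 1
            cache.move_to_end(oid)      # mark as most recently used
        else:
            cache[oid] = size
            used += size
            while used > cache_bytes:   # evict least recently used
                _, evicted_size = cache.popitem(last=False)
                used -= evicted_size
    return 100.0 * hits / loads if loads else 0.0
```

The real simulator additionally tracks (oid, tid) revisions, invalidations, and a circular file layout, which is why its hit rates can diverge from this kind of idealized replay.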
parent 05627a49
@@ -14,22 +14,22 @@ Enabling Cache Tracing
 ----------------------
 
 To enable cache tracing, you must use a persistent cache (specify a ``client``
-name), and set the environment variable ZEO_CACHE_TRACE.  The path to the
-trace file is derived from the path to the persistent cache file by appending
-".trace".  If the file doesn't exist, ZEO will try to create it.  If the file
-does exist, it's opened for appending (previous trace information is not
-overwritten).  If there are problems with the file, a warning message is
-logged.  To start or stop tracing, the ZEO client process (typically a Zope
-application server) must be restarted.
+name), and set the environment variable ZEO_CACHE_TRACE to a non-empty
+value.  The path to the trace file is derived from the path to the persistent
+cache file by appending ".trace".  If the file doesn't exist, ZEO will try to
+create it.  If the file does exist, it's opened for appending (previous trace
+information is not overwritten).  If there are problems with the file, a
+warning message is logged.  To start or stop tracing, the ZEO client process
+(typically a Zope application server) must be restarted.
 
 The trace file can grow pretty quickly; on a moderately loaded server, we
-observed it growing by 5 MB per hour.  The file consists of binary records,
+observed it growing by 7 MB per hour.  The file consists of binary records,
 each 34 bytes long if 8-byte oids are in use; a detailed description of the
 record lay-out is given in stats.py.  No sensitive data is logged: data
-record sizes and binary object and transaction ids are logged, but no
-information about object types or names, user names, version names,
-transaction comments, access paths, or machine information such as machine
-name or IP address.
+record sizes (but not data records), and binary object and transaction ids
+are logged, but no object pickles, object types or names, user names,
+transaction comments, access paths, or machine information (such as machine
+name or IP address) are logged.
 Analyzing a Cache Trace
 -----------------------
@@ -40,31 +40,29 @@ essential statistics for each segment of 15 minutes, interspersed with lines
 indicating client restarts, followed by a more detailed summary of overall
 statistics.
 
-The most important statistic is probably the "hit rate", a percentage
-indicating how many requests to load an object could be satisfied from
-the cache.  Hit rates around 70% are good.  90% is probably close to
-the theoretical maximum.  If you see a hit rate under 60% you can
-probably improve the cache performance (and hence your Zope
-application server's performance) by increasing the ZEO cache size.
-This is normally configured using cache_size key in the ``zeoclient``
-section of your configuration file.  The default cache size is 20 MB, which
-is very small.
+The most important statistic is the "hit rate", a percentage indicating how
+many requests to load an object could be satisfied from the cache.  Hit rates
+around 70% are good.  90% is excellent.  If you see a hit rate under 60% you
+can probably improve the cache performance (and hence your Zope application
+server's performance) by increasing the ZEO cache size.  This is normally
+configured using key ``cache_size`` in the ``zeoclient`` section of your
+configuration file.  The default cache size is 20 MB, which is very small.
 
 The stats.py tool shows its command line syntax when invoked without
-arguments.  The tracefile argument can be a gzipped file if it has a
-.gz extension.  It will read from stdin (assuming uncompressed data)
-if the tracefile argument is '-'.
+arguments.  The tracefile argument can be a gzipped file if it has a .gz
+extension.  It will read from stdin (assuming uncompressed data) if the
+tracefile argument is '-'.
 
 Simulating Different Cache Sizes
 --------------------------------
 
-Based on a cache trace file, you can make a prediction of how well the
-cache might do with a different cache size.  The simul.py tool runs an
-accurate simulation of the ZEO client cache implementation based upon
-the events read from a trace file.  A new simulation is started each
-time the trace file records a client restart event; if a trace file
-contains more than one restart event, a separate line is printed for
-each simulation, and line with overall statistics is added at the end.
+Based on a cache trace file, you can make a prediction of how well the cache
+might do with a different cache size.  The simul.py tool runs a simulation of
+the ZEO client cache implementation based upon the events read from a trace
+file.  A new simulation is started each time the trace file records a client
+restart event; if a trace file contains more than one restart event, a
+separate line is printed for each simulation, and a line with overall
+statistics is added at the end.
 
 Example, assuming the trace file is in /tmp/cachetrace.log::
@@ -98,3 +96,26 @@ the cache only helps if an object is loaded more than once.
 The simul.py tool also supports simulating different cache
 strategies.  Since none of these are implemented, these are not
 further documented here.
+
+Simulation Limitations
+----------------------
+
+The cache simulation is an approximation, and actual hit rate may be higher
+or lower than the simulated result.  These are some factors that inhibit
+exact simulation:
+
+- The simulator doesn't try to emulate versions.  If the trace file contains
+  loads and stores of objects in versions, the simulator treats them as if
+  they were loads and stores of non-version data.
+
+- Each time a load of an object O in the trace file was a cache hit, but the
+  simulated cache has evicted O, the simulated cache has no way to repair its
+  knowledge about O.  This is more frequent when simulating caches smaller
+  than the cache used to produce the trace file.  When a real cache suffers a
+  cache miss, it asks the ZEO server for the needed information about O, and
+  saves O in the client cache.  The simulated cache doesn't have a ZEO server
+  to ask, and O continues to be absent in the simulated cache.  Further
+  requests for O will continue to be simulated cache misses, although in a
+  real cache they'll likely be cache hits.  On the other hand, the
+  simulated cache doesn't need to evict any objects to make room for O, so it
+  may enjoy further cache hits on objects a real cache would need to evict.
@@ -210,7 +210,7 @@ class ClientCache(object):
         # than any comparable non-None object in recent Pythons.
         i = bisect.bisect_left(L, (tid, None))
         # Now L[i-1] < (tid, None) < L[i], and the start_tid for everything in
-        # L[:i} is < tid, and the start_tid for everything in L[i:] is >= tid.
+        # L[:i] is < tid, and the start_tid for everything in L[i:] is >= tid.
         # Therefore the largest start_tid < tid must be at L[i-1].  If i is 0,
         # there is no start_tid < tid: we don't have any data old enougn.
         if i == 0:
...
@@ -14,19 +14,10 @@
 ##############################################################################
 
 """Cache simulation.
 
-Usage: simul.py [-bflyz] [-s size] tracefile
+Usage: simul.py [-s size] tracefile
 
-Use one of -b, -f, -l, -y or -z select the cache simulator:
--b: buddy system allocator
--f: simple free list allocator
--l: idealized LRU (no allocator)
--y: variation on the existing ZEO cache that copies to current file
--z: existing ZEO cache (default)
-
 Options:
 -s size: cache size in MB (default 20 MB)
-
-Note: the buddy system allocator rounds the cache size up to a power of 2
 """
 
 import sys
@@ -47,7 +38,7 @@ def main():
     # Parse options.
     MB = 1024**2
     cachelimit = 20*MB
-    simclass = ZEOCacheSimulation
+    simclass = CircularCacheSimulation
     try:
         opts, args = getopt.getopt(sys.argv[1:], "bflyz2cOaTUs:")
     except getopt.error, msg:
@@ -140,12 +131,11 @@ def main():
         if len(oid) < oidlen:
             break
         # Decode the code.
-        dlen, version, code, current = (code & 0x7fffff00,
-                                        code & 0x80,
-                                        code & 0x7e,
-                                        code & 0x01)
+        dlen, version, code = (code & 0x7fffff00,
+                               code & 0x80,
+                               code & 0x7e)
         # And pass it to the simulation.
-        sim.event(ts, dlen, version, code, current, oid, start_tid, end_tid)
+        sim.event(ts, dlen, version, code, oid, start_tid, end_tid)
     f.close()
     # Finish simulation.
@@ -162,7 +152,6 @@ class Simulation(object):
     The standard event() method calls these additional methods:
     write(), load(), inval(), report(), restart(); the standard
     finish() method also calls report().
-
     """

    def __init__(self, cachelimit):
@@ -187,7 +176,7 @@ class Simulation(object):
        self.writes = 0
        self.ts0 = None

-    def event(self, ts, dlen, _version, code, _current, oid,
+    def event(self, ts, dlen, _version, code, oid,
              start_tid, end_tid):
        # Record first and last timestamp seen.
        if self.ts0 is None:
@@ -198,9 +187,11 @@ class Simulation(object):
        # Simulate cache behavior.  Caution: the codes in the trace file
        # record whether the actual cache missed or hit on each load, but
-        # that bears no relationship to whether the simulated cache will
-        # hit or miss.
-        action = code & 0x70 # ignore high bit (version flag)
+        # that bears no necessary relationship to whether the simulated cache
+        # will hit or miss.  Relatedly, if the actual cache needed to store
+        # an object, the simulated cache may not need to (it may already
+        # have the data).
+        action = code & 0x70
        if action == 0x20:
            # Load.
            self.loads += 1
@@ -282,12 +273,34 @@ class Simulation(object):
                         for name in self.extras])
        print (self.format + " OVERALL") % args

+# For use in CircularCacheSimulation.
+class CircularCacheEntry(object):
+    __slots__ = (# object key:  an (oid, start_tid) pair, where
+                 # start_tid is the tid of the transaction that created
+                 # this revision of oid
+                 'key',
+
+                 # tid of transaction that created the next revision;
+                 # z64 iff this is the current revision
+                 'end_tid',
+
+                 # Offset from start of file to the object's data
+                 # record; this includes all overhead bytes (status
+                 # byte, size bytes, etc).
+                 'offset',
+                )
+
+    def __init__(self, key, end_tid, offset):
+        self.key = key
+        self.end_tid = end_tid
+        self.offset = offset
+
 from ZEO.cache import ZEC3_HEADER_SIZE

-# An Entry just wraps a (key, offset) pair.  A key is in turn an
-# (oid, tid) pair.
-from ZEO.cache import Entry
-
 class CircularCacheSimulation(Simulation):
+    """Simulate the ZEO 3.0a cache."""
+
    # The cache is managed as a single file with a pointer that
    # goes around the file, circularly, forever.  New objects
    # are written at the current pointer, evicting whatever was
@@ -300,14 +313,16 @@ class CircularCacheSimulation(Simulation):
        from BTrees.OIBTree import OIBTree

        Simulation.__init__(self, cachelimit)
-        self.total_evicts = 0
+        self.total_evicts = 0  # number of cache evictions

        # Current offset in file.
        self.offset = ZEC3_HEADER_SIZE
-        # Map offset in file to (size, Entry) pair, or to (size, None) if
-        # the offset starts a free block.
+
+        # Map offset in file to (size, CircularCacheEntry) pair, or to
+        # (size, None) if the offset starts a free block.
        self.filemap = {ZEC3_HEADER_SIZE: (self.cachelimit - ZEC3_HEADER_SIZE,
                                           None)}
-        # Map key to Entry.  A key is an (oid, tid) pair.
+
+        # Map key to CircularCacheEntry.  A key is an (oid, tid) pair.
        self.key2entry = {}

        # Map oid to tid of current revision.
@@ -348,7 +363,7 @@ class CircularCacheSimulation(Simulation):
            self._cache_miss(oid, tid)
            return

-        # May or may not be trying to load current revisiion.
+        # May or may not be trying to load current revision.
        cur_tid = self.current.get(oid)
        if cur_tid == tid:
            self.hits += 1
@@ -375,9 +390,10 @@ class CircularCacheSimulation(Simulation):
        self.total_hits += 1

    def _cache_miss(self, oid, tid, HUGE64='\xff'*8):
+        return
        have_data = False
        if tid == z64:
-            # Miss on current data.  Find the most revision we ever saw.
+            # Miss on current data.  Find the most recent revision we ever saw.
            items = self.key2size.items(min=(oid, z64), max=(oid, HUGE64))
            if items:
                (oid, tid), size = items[-1]  # most recent
@@ -393,10 +409,11 @@ class CircularCacheSimulation(Simulation):
            # Pretend the cache miss was followed by a store.
            self.writes += 1
            self.total_writes += 1
-            self.add(oid, tid, size)
+            self.add(oid, size, tid)

    # (oid, tid) is in the cache.  Remove it: take it out of key2entry,
-    # and in `filemap` mark the space it occupied as being free.
+    # and in `filemap` mark the space it occupied as being free.  The
+    # caller is responsible for removing it from `current` or `noncurrent`.
    def _remove(self, oid, tid):
        key = oid, tid
        e = self.key2entry.pop(key)
@@ -416,12 +433,13 @@ class CircularCacheSimulation(Simulation):
    def inval(self, oid, tid):
        if tid == z64:
-            # This is part of startup cache verification.
+            # This is part of startup cache verification:  forget everything
+            # about this oid.
            self._remove_noncurrent_revisions(oid, version)

        cur_tid = self.current.get(oid)
        if cur_tid is None:
-            # We don't have current data, so nothing to do.
+            # We don't have current data, so nothing more to do.
            return

        # We had current data for oid, but no longer.
@@ -433,10 +451,15 @@ class CircularCacheSimulation(Simulation):
            self._remove(oid, current_tid)
            return

-        # Add the validty range to the list of non-current data for oid.
+        # Our current data becomes non-current data.
+        # Add the validity range to the list of non-current data for oid.
        assert cur_tid < tid
        L = self.noncurrent.setdefault(oid, [])
        bisect.insort_left(L, (cur_tid, tid))
+        # Update the end of oid's validity range in its CircularCacheEntry.
+        e = self.key2entry[oid, cur_tid]
+        assert e.end_tid == z64
+        e.end_tid = tid

    def write(self, oid, size, start_tid, end_tid):
        if end_tid == z64:
@@ -447,7 +470,7 @@ class CircularCacheSimulation(Simulation):
            self.key2size[oid, start_tid] = size
            self.writes += 1
            self.total_writes += 1
-            self.add(oid, start_tid, size)
+            self.add(oid, size, start_tid)
            return

        # Storing non-current revision.
        L = self.noncurrent.setdefault(oid, [])
@@ -458,14 +481,17 @@ class CircularCacheSimulation(Simulation):
        self.key2size[(oid, start_tid)] = size
        self.writes += 1
        self.total_writes += 1
-        self.add(oid, start_tid, size)
+        self.add(oid, size, start_tid, end_tid)

-    def add(self, oid, tid, size):
+    # Add `oid` to the cache, evicting objects as needed to make room.
+    # This updates `filemap` and `key2entry`; it's the caller's
+    # responsibilty to update `current` or `noncurrent` appropriately.
+    def add(self, oid, size, start_tid, end_tid=z64):
+        key = oid, start_tid
+        assert key not in self.key2entry
        size += self.overhead
        avail = self.makeroom(size)
-        key = oid, tid
-        assert key not in self.key2entry
-        e = Entry(key, self.offset)
+        e = CircularCacheEntry(key, end_tid, self.offset)
        self.filemap[self.offset] = size, e
        self.key2entry[key] = e
        self.offset += size
@@ -474,8 +500,13 @@ class CircularCacheSimulation(Simulation):
        if excess:
            self.filemap[self.offset] = excess, None

+    # Evict enough objects to make at least `need` contiguous bytes, starting
+    # at `self.offset`, available.  Evicted objects are removed from
+    # `filemap`, `key2entry`, `current` and `noncurrent`.  The caller is
+    # responsible for adding new entries to `filemap` to account for all
+    # the freed bytes, and for advancing `self.offset`.  The number of bytes
+    # freed is the return value, and will be >= need.
    def makeroom(self, need):
-        # Evict enough objects to make the necessary space available.
        if self.offset + need > self.cachelimit:
            self.offset = ZEC3_HEADER_SIZE
        pos = self.offset
@@ -487,12 +518,18 @@ class CircularCacheSimulation(Simulation):
                self.dump()
                raise
            del self.filemap[pos]
-            if e:
+            if e:  # there is an object here (else it's already free space)
                self.evicts += 1
                self.total_evicts += 1
                assert pos == e.offset
                _e = self.key2entry.pop(e.key)
                assert e is _e
+                oid, start_tid = e.key
+                if e.end_tid == z64:
+                    del self.current[oid]
+                else:
+                    L = self.noncurrent[oid]
+                    L.remove((start_tid, e.end_tid))
            need -= size
            pos += size
        return pos - self.offset  # total number of bytes freed
@@ -531,8 +568,16 @@ class CircularCacheSimulation(Simulation):
            v = self.filemap[k]
            print k, v[0], repr(v[1])

+#############################################################################
+# CAUTION:  It's most likely that none of the simulators below this
+# point work anymore.  A great many changes were needed to teach
+# CircularCacheSimulation (above) about MVCC, including method signature
+# changes and changes in cache file format, and none of the others simulator
+# classes were changed.
+#############################################################################
+
 class ZEOCacheSimulation(Simulation):
-    """Simulate the ZEO 1.0 and 2.0cache behavior.
+    """Simulate the ZEO 1.0 and 2.0 cache behavior.

    This assumes the cache is not persistent (we don't know how to
    simulate cache validation.)
...