Commit bb047cfc authored by Jim Fulton

Reduced the cache size used in the test to increase the likelihood of
staying under 5000 bytes.  There appears to be a race in the test.  If
the cleanup thread is slow enough, it will prevent later cleanup
threads from running. Need to think about this.
parent d2d83b88
@@ -22,7 +22,7 @@ Let's start by setting up some data:
 We'll also create a client.
     >>> import ZEO
-    >>> db = ZEO.DB(addr, blob_dir='blobs', blob_cache_size=4000)
+    >>> db = ZEO.DB(addr, blob_dir='blobs', blob_cache_size=3000)
 Here, we passed a blob_cache_size parameter, which specifies a target
 blob cache size. This is not a hard limit, but rather a target. It
@@ -50,7 +50,7 @@ Now, let's write some data:
     ... conn.root()[i].open('w').write(chr(i)*100)
     >>> transaction.commit()
-We've committed 10000 bytes of data, but our target size is 4000. We
+We've committed 10000 bytes of data, but our target size is 3000. We
 expect to have not much more than the target size in the cache blob
 directory.