Kirill Smelkov / Zope / Commits / 4882da74

Commit 4882da74, authored Mar 24, 2004 by Chris McDonough:

    Jim commented this code at the PyCon DC 2004 sprint.

parent 6077a3ad

Showing 1 changed file with 61 additions and 23 deletions:
lib/python/ZServer/PubCore/ZRendezvous.py (+61 −23)
```diff
@@ -15,49 +15,87 @@ import thread
 from ZServerPublisher import ZServerPublisher

 class ZRendevous:
+    """Worker thread pool
+
+    For better or worse, we hide locking sementics from the worker
+    threads.  The worker threads do no locking.
+    """

     def __init__(self, n=1):
-        sync=thread.allocate_lock()
-        self._a=sync.acquire
-        self._r=sync.release
-        pool=[]
-        self._lists=pool, [], []
-        self._a()
+        sync = thread.allocate_lock()
+        self._acquire = sync.acquire
+        self._release = sync.release
+        pool = []
+        self._lists = (
+            pool, # Collection of locks representing threads that are not
+                  # waiting for work to do
+            [],   # Request queue
+            [],   # Pool of locks representing threads that are
+                  # waiting (ready) for work to do.
+            )
+        self._acquire() # callers will block
         try:
             while n > 0:
-                l=thread.allocate_lock()
+                l = thread.allocate_lock()
                 l.acquire()
                 pool.append(l)
                 thread.start_new_thread(ZServerPublisher,
                                         (self.accept,))
-                n=n-1
-        finally:
-            self._r()
+                n = n - 1
+        finally:
+            self._release() # let callers through now

     def accept(self):
-        self._a()
+        """Return a request from the request queue
+
+        If no requests are in the request queue, then block until
+        there is nonw.
+        """
+        self._acquire() # prevent other calls to protect data structures
         try:
             pool, requests, ready = self._lists
             while not requests:
-                l=pool[-1]
-                del pool[-1]
+                # There are no requests in the queue. Wait until there are.
+
+                # This thread is waiting, to remove a lock from the
+                # collection of locks corresponding to threads not
+                # waiting for work
+                l = pool.pop()
+
+                # And add it to the collection of locks representing threads
+                # ready and waiting for work.
                 ready.append(l)
-                self._r()
+                self._release() # allow other calls
+
+                # Now try to acquire the lock. We will block until
+                # someone calls handle to queue a request and releases the lock
+                # which handle finds in the ready queue
                 l.acquire()
-                self._a()
+
+                self._acquire() # prevent calls so we can update
+                                # not waiting pool
                 pool.append(l)
-            r=requests[0]
-            del requests[0]
-            return r
-        finally:
-            self._r()
+
+            # return the *first* request
+            return requests.pop(0)
+        finally:
+            self._release() # allow calls

     def handle(self, name, request, response):
-        self._a()
+        """Queue a request for processing
+        """
+        self._acquire() # prevent other calls to protect data structs
         try:
             pool, requests, ready = self._lists
+
+            # queue request
             requests.append((name, request, response))
+
             if ready:
-                l=ready[-1]
-                del ready[-1]
-                l.release()
+                # If any threads are ready and waiting for work
+                # then remove one of the locks from the ready pool
+                # and release it, letting the waiting thread go forward
+                # and consume the request
+                ready.pop().release()
         finally:
-            self._r()
+            self._release()
```