Commit 9d27d269 authored by Jason Madden, committed by GitHub

Support TLS1.3, OpenSSL 1.1.1 (and Python 3.7.4) (#1460)

* Fix tests with TLS1.3

There were hidden assumptions about the order in which greenlets would run.

TLS1.3 changed the handshake in a way that broke those assumptions.

* Attempting to fix the appveyor backdoor test timeouts.

Suspect a buffering issue. Trying to fix by disabling Nagle's algorithm.

Also a cleaner separation of concerns in the backdoor server.

* Brute-force debugging.

* Tweak the backlog and simultaneous accept settings, and drop the extra event listeners we don't need.

* max_accept=1 and dropping the events fixed test__backdoor on Windows.

Probably it was just the events; test that.

* Brute force debugging for the SSL issues.

* Some of the Windows SSL issues may be due to fileno reuse, and hence incorrectly multiplexed watchers?

* Even more care with socket lifetimes.

On Python 3.7, use the newer wrapping form that the stdlib uses to create the _sslobj.

* More debugging.

* The preferred method of unwrapping differs between 3.6 and 3.7+.

* Checking ordering.

* Partial success, hopefully.

* Lots of progress. Try again.

* More Windows work.

* D'oh. Test cases from 3.7.4 handle some of these issues more gracefully, so we will just use them.

* Tweaks.

* Having a hard time getting 3.7 updated on Travis. Try to force the issue.

* Yet another weird error from AppVeyor.
parent 8224e814
[MASTER]

-extension-pkg-whitelist=gevent.libuv._corecffi,gevent.libev._corecffi,gevent.local,gevent._ident
extension-pkg-whitelist=gevent.greenlet,gevent.libuv._corecffi,gevent.libev._corecffi,gevent.local,gevent._ident

# Control the amount of potential inferred values when inferring a single
# object. This can help the performance when dealing with large functions or
# complex, nested conditions.
# gevent: The changes for Python 3.7 in _ssl3.py lead to infinite recursion
# in pylint 2.3.1/astroid 2.2.5 in that file unless we set this to 1
# from the default of 100.
limit-inference-results=1

[MESSAGES CONTROL]

@@ -90,9 +98,7 @@ generated-members=exc_clear
# List of classes names for which member attributes should not be checked
# (useful for classes with attributes dynamically set). This supports can work
# with qualified names.
-# greenlet, Greenlet, parent, dead: all attempts to fix issues in greenlet.py
-# only seen on the service, e.g., self.parent.loop: class parent has no loop
-ignored-classes=SSLContext, SSLSocket, greenlet, Greenlet, parent, dead
#ignored-classes=SSLContext, SSLSocket, greenlet, Greenlet, parent, dead

# List of module names for which member attributes should not be checked
# (useful for modules/projects where namespaces are manipulated during runtime

@@ -109,4 +115,8 @@ bad-functions=input
# Prospector turns on unsafe-load-any-extension by default, but
# pylint leaves it off. This is the proximal cause of the
# undefined-all-variable crash.
-#unsafe-load-any-extension = no
unsafe-load-any-extension = yes

# Local Variables:
# mode: conf
# End:
@@ -199,9 +199,7 @@ jobs:
    # pylint has stopped updating for Python 2.
    - stage: test
      # We need pylint, since above we're not installing a
-     # requirements file. Code added to _ssl3.SSLContext for Python
-     # 3.8 triggers an infinite recursion bug in pylint 2.3.1/astroid 2.2.5
-     # unless we disable inference.
      # requirements file.
      install: pip install pylint
      script: python -m pylint --limit-inference-results=1 --rcfile=.pylintrc gevent
      env: TRAVIS_PYTHON_VERSION=3.7
......
@@ -18,6 +18,13 @@

- Implement ``SSLSocket.verify_client_post_handshake()`` when available.

- Fix tests when TLS1.3 is supported.

- Disable Nagle's algorithm in the backdoor server. This can improve
  interactive response time.

- Test on Python 3.7.4. There are important SSL test fixes.


1.5a1 (2019-05-02)
==================
......
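As an aside on the Nagle entry above: disabling Nagle's algorithm is a one-line socket option. A minimal sketch of what that looks like on an accepted connection, using only standard-library socket constants (the `handle` function name is illustrative; the commit itself uses `socket.SOL_TCP` rather than `IPPROTO_TCP`):

    import socket

    def handle(conn, address):
        # Nagle's algorithm batches small writes; for an interactive,
        # line-oriented protocol such as the backdoor REPL, that batching
        # adds latency, so turn it off on the accepted connection.
        conn.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
        conn.sendall(b'>>> ')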
@@ -14,6 +14,21 @@ The exact API exposed by this module varies depending on what version
of Python you are using. The documents below describe the API for
Python 3, Python 2.7.9 and above, and Python 2.7.8 and below, respectively.

.. tip::

   As an implementation note, gevent's exact behaviour will differ
   somewhat depending on the underlying TLS version in use. For
   example, the number of data exchanges involved in the handshake
   process, and exactly when that process occurs, will vary. This can
   be indirectly observed by the number and timing of greenlet
   switches or trips around the event loop gevent makes.

   Most applications should not notice this, but some applications
   (and especially tests, where it is common for a process to be both
   a server and its own client) may find that they have coded in
   assumptions about the order in which multiple greenlets run. As
   TLS 1.3 gets deployed, those assumptions are likely to break.

.. warning:: All the described APIs should be imported from
   ``gevent.ssl``, and *not* from their implementation modules.
   Their organization is an implementation detail that may change at
......
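A hedged illustration of the kind of ordering assumption the tip above warns about: instead of assuming "the server greenlet has already run because the client yielded once," synchronize explicitly with an Event. This is a self-contained sketch (the echo protocol and names are illustrative, not code from the commit):

    import gevent
    from gevent import socket
    from gevent.event import Event

    ready = Event()

    def server(listener):
        conn, _ = listener.accept()
        # (a real server would wrap `conn` in TLS here)
        ready.set()                 # announce readiness explicitly
        conn.sendall(conn.recv(64))
        conn.close()

    def client(port):
        s = socket.create_connection(('127.0.0.1', port))
        # Do not assume the server greenlet has already run just because
        # we yielded; the number of switches a TLS handshake takes
        # differs between TLS 1.2 and TLS 1.3.
        ready.wait()
        s.sendall(b'hello')
        print(s.recv(64))
        s.close()

    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    port = listener.getsockname()[1]
    gevent.joinall([gevent.spawn(server, listener), gevent.spawn(client, port)])

Tests that instead rely on implicit switch ordering are exactly the ones TLS 1.3 tends to break.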
@@ -72,15 +72,23 @@ install () {
    # so that we don't get *spurious* caching. (Travis doesn't check for mod times,
    # just contents, so echoing each time doesn't cause it to re-cache.)

-   # Overwrite an existing alias
-   ln -sf $DESTINATION/bin/python $SNAKEPIT/$ALIAS
-   ln -sf $DESTINATION $SNAKEPIT/$DIR_ALIAS
    # Overwrite an existing alias.
    # For whatever reason, ln -sf on Travis works fine for the ALIAS,
    # but fails for the DIR_ALIAS. No clue why. So we delete an existing one of those
    # manually.
    if [ -L "$SNAKEPIT/$DIR_ALIAS" ]; then
        rm -f $SNAKEPIT/$DIR_ALIAS
    fi
    ln -sfv $DESTINATION/bin/python $SNAKEPIT/$ALIAS
    ln -sfv $DESTINATION $SNAKEPIT/$DIR_ALIAS

    echo $VERSION $ALIAS $DIR_ALIAS > $SNAKEPIT/$ALIAS.installed
    $SNAKEPIT/$ALIAS --version
    $DESTINATION/bin/python --version
    # Set the PATH to include the install's bin directory so pip
    # doesn't nag.
    PATH="$DESTINATION/bin/:$PATH" $SNAKEPIT/$ALIAS -m pip install --upgrade pip wheel virtualenv
    ls -l $SNAKEPIT
    ls -l $BASE/versions
}

@@ -97,7 +105,7 @@ for var in "$@"; do
        install 3.6.8 python3.6 3.6.d
        ;;
    3.7)
-       install 3.7.2 python3.7 3.7.d
        install 3.7.4 python3.7 3.7.d
        ;;
    3.8)
        install 3.8.0b4 python3.8 3.8.d
......
@@ -3,6 +3,7 @@
Python 2 socket module.
"""
from __future__ import absolute_import
from __future__ import print_function
# Our import magic sadly makes this warning useless
# pylint: disable=undefined-variable

@@ -80,6 +81,11 @@ class _closedsocket(object):
    # All _delegate_methods must also be initialized here.
    send = recv = recv_into = sendto = recvfrom = recvfrom_into = _dummy

    def __nonzero__(self):
        return False

    __bool__ = __nonzero__

    if PYPY:
        def _drop(self):

@@ -92,6 +98,7 @@ class _closedsocket(object):
timeout_default = object()

gtype = type

from gevent._hub_primitives import wait_on_socket as _wait_on_socket

@@ -111,22 +118,23 @@ class socket(object):
    # pylint:disable=too-many-public-methods

    def __init__(self, family=AF_INET, type=SOCK_STREAM, proto=0, _sock=None):
        timeout = _socket.getdefaulttimeout()
        if _sock is None:
            self._sock = _realsocket(family, type, proto)
-           self.timeout = _socket.getdefaulttimeout()
        else:
-           if hasattr(_sock, '_sock'):
-               # passed a gevent socket
-               self._sock = _sock._sock
-               self.timeout = getattr(_sock, 'timeout', False)
-               if self.timeout is False:
-                   self.timeout = _socket.getdefaulttimeout()
-           else:
-               # passed a native socket
-               self._sock = _sock
-               self.timeout = _socket.getdefaulttimeout()
            if hasattr(_sock, '_sock'):
                timeout = getattr(_sock, 'timeout', timeout)
                while hasattr(_sock, '_sock'):
                    # passed a gevent socket or a native
                    # socket._socketobject. Unwrap this all the way to the
                    # native _socket.socket.
                    _sock = _sock._sock
            self._sock = _sock
            if PYPY:
                self._sock._reuse()
        self.timeout = timeout
        self._sock.setblocking(0)
        fileno = self._sock.fileno()
        self.hub = get_hub()

@@ -207,23 +215,33 @@ class socket(object):

    def close(self, _closedsocket=_closedsocket):
        if not self._sock:
            return
        # This function should not reference any globals. See Python issue #808164.

-       # Also break any reference to the loop.io objects. Our fileno,
-       # which they were tied to, is now free to be reused, so these
-       # objects are no longer functional.
-       self._drop_events()
        # First, change self._sock. On CPython, this drops a
        # reference, and if it was the last reference, __del__ will
        # close it. (We cannot close it, makefile() relies on
        # reference counting like this, and it may be shared among
        # multiple wrapper objects). Methods *must not* cache
        # `self._sock` separately from
        # self._write_event/self._read_event, or they will be out of
        # sync and we may get inappropriate errors. (See
        # test__hub:TestCloseSocketWhilePolling for an example).
        s = self._sock
-       # Note that we change self._sock at this point. Methods *must not*
-       # cache `self._sock` separately from self._write_event/self._read_event,
-       # or they will be out of sync and we may get inappropriate errors.
-       # (See test__hub:TestCloseSocketWhilePolling for an example).
        self._sock = _closedsocket()

        # On PyPy we have to manually tell it to drop a ref.
        if PYPY:
            s._drop()

        # Finally, break any reference to the loop.io objects. Our
        # fileno, which they were tied to, is about to be free to be
        # reused, so these objects are no longer functional.
        self._drop_events()

    @property
    def closed(self):
        return isinstance(self._sock, _closedsocket)
......
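A rough illustration of the makefile() reference-counting contract the new close() comment describes. This is a sketch, not code from the commit, and it assumes Python 2 with gevent's _socket2 wrapper in use: closing the gevent wrapper does not tear down the underlying descriptor while a makefile() object still references it.

    import os
    from gevent import socket   # gevent's Python 2 socket wrapper (_socket2)

    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(1)
    client = socket.create_connection(listener.getsockname())
    conn, _ = listener.accept()

    f = conn.makefile('rb')   # keeps its own reference to the real socket
    fd = conn.fileno()
    conn.close()              # the wrapper is closed and its watchers dropped...
    os.fstat(fd)              # ...but the descriptor is still open: `f` keeps the
    f.close()                 # real socket alive until this final close
    client.close()
    listener.close()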
This diff is collapsed.
This diff is collapsed.
"""A clone of threading module (version 2.7.2) that always """
targets real OS threads. (Unlike 'threading' which flips between A small selection of primitives that always work with
green and OS threads based on whether the monkey patching is in effect native threads. This has very limited utility and is
or not). targeted only for the use of gevent's threadpool.
This module is missing 'Thread' class, but includes 'Queue'.
""" """
from __future__ import absolute_import from __future__ import absolute_import
...@@ -80,6 +78,9 @@ class _Condition(object): ...@@ -80,6 +78,9 @@ class _Condition(object):
waiter.acquire() waiter.acquire()
self.__waiters.append(waiter) self.__waiters.append(waiter)
saved_state = self._release_save() saved_state = self._release_save()
# This variable is for the monitoring utils to know that
# this is an idle frame and shouldn't be counted.
gevent_threadpool_worker_idle = True # pylint:disable=unused-variable
try: # restore state no matter what (e.g., KeyboardInterrupt) try: # restore state no matter what (e.g., KeyboardInterrupt)
waiter.acquire() # Block on the native lock waiter.acquire() # Block on the native lock
finally: finally:
......
@@ -11,6 +11,7 @@ with other elements of the process.
"""
from __future__ import print_function, absolute_import
import sys
import socket
from code import InteractiveConsole

from gevent.greenlet import Greenlet

@@ -18,8 +19,11 @@ from gevent.hub import getcurrent
from gevent.server import StreamServer
from gevent.pool import Pool
from gevent._compat import PY36
from gevent._compat import exc_clear

-__all__ = ['BackdoorServer']
__all__ = [
    'BackdoorServer',
]

try:
    sys.ps1

@@ -32,25 +36,47 @@ except AttributeError:

class _Greenlet_stdreplace(Greenlet):
    # A greenlet that replaces sys.std[in/out/err] while running.
-   _fileobj = None
-   saved = None
    __slots__ = (
        'stdin',
        'stdout',
        'prev_stdin',
        'prev_stdout',
        'prev_stderr',
    )

    def __init__(self, *args, **kwargs):
        Greenlet.__init__(self, *args, **kwargs)
        self.stdin = None
        self.stdout = None
        self.prev_stdin = None
        self.prev_stdout = None
        self.prev_stderr = None

    def switch(self, *args, **kw):
-       if self._fileobj is not None:
        if self.stdin is not None:
            self.switch_in()
        Greenlet.switch(self, *args, **kw)

    def switch_in(self):
-       self.saved = sys.stdin, sys.stderr, sys.stdout
-       sys.stdin = sys.stdout = sys.stderr = self._fileobj
        self.prev_stdin = sys.stdin
        self.prev_stdout = sys.stdout
        self.prev_stderr = sys.stderr

        sys.stdin = self.stdin
        sys.stdout = self.stdout
        sys.stderr = self.stdout

    def switch_out(self):
-       sys.stdin, sys.stderr, sys.stdout = self.saved
-       self.saved = None
        sys.stdin = self.prev_stdin
        sys.stdout = self.prev_stdout
        sys.stderr = self.prev_stderr
        self.prev_stdin = self.prev_stdout = self.prev_stderr = None

    def throw(self, *args, **kwargs):
        # pylint:disable=arguments-differ
-       if self.saved is None and self._fileobj is not None:
        if self.prev_stdin is None and self.stdin is not None:
            self.switch_in()
        Greenlet.throw(self, *args, **kwargs)

@@ -139,10 +165,12 @@ class BackdoorServer(StreamServer):
        ``locals`` dictionary. Previously they were shared in a
        potentially unsafe manner.
        """
-       fobj = conn.makefile(mode="rw")
-       fobj = _fileobject(conn, fobj, self.stderr)
-       getcurrent()._fileobj = fobj
        conn.setsockopt(socket.SOL_TCP, socket.TCP_NODELAY, True) # pylint:disable=no-member
        raw_file = conn.makefile(mode="r")
        getcurrent().stdin = _StdIn(conn, raw_file)
        getcurrent().stdout = _StdErr(conn, raw_file)

        # Swizzle the inputs
        getcurrent().switch_in()
        try:
            console = InteractiveConsole(self._create_interactive_locals())

@@ -152,15 +180,51 @@ class BackdoorServer(StreamServer):
                console.interact(banner=self.banner, exitmsg='') # pylint:disable=unexpected-keyword-arg
            else:
                console.interact(banner=self.banner)
-       except SystemExit: # raised by quit()
-           if hasattr(sys, 'exc_clear'): # py2
-               sys.exc_clear()
        except SystemExit:
            # raised by quit(); obviously this cannot propagate.
            exc_clear() # Python 2
        finally:
            raw_file.close()
            conn.close()
-           fobj.close()
class _BaseFileLike(object):
# Python 2 likes to test for this before writing to stderr.
softspace = None
encoding = 'utf-8'
__slots__ = (
'sock',
'fobj',
'fileno',
)
def __init__(self, sock, stdin):
self.sock = sock
self.fobj = stdin
# On Python 3, The builtin input() function (used by the
# default InteractiveConsole) calls fileno() on
# sys.stdin. If it's the same as the C stdin's fileno,
# and isatty(fd) (C function call) returns true,
# and all of that is also true for stdout, then input() will use
# PyOS_Readline to get the input.
#
# On Python 2, the sys.stdin object has to extend the file()
# class, and return true from isatty(fileno(sys.stdin.f_fp))
# (where f_fp is a C-level FILE* member) to use PyOS_Readline.
#
# If that doesn't hold, both versions fall back to reading and writing
# using sys.stdout.write() and sys.stdin.readline().
self.fileno = sock.fileno
def __getattr__(self, name):
return getattr(self.fobj, name)
def close(self):
pass
-class _fileobject(object):
class _StdErr(_BaseFileLike):
    """
    A file-like object that wraps the result of socket.makefile (composition
    instead of inheritance lets us work identically under CPython and PyPy).

@@ -171,37 +235,25 @@ class _fileobject(object):
    the file in binary mode and translating everywhere with a non-default
    encoding.
    """
-   def __init__(self, sock, fobj, stderr):
-       self._sock = sock
-       self._fobj = fobj
-       self.stderr = stderr

-   def __getattr__(self, name):
-       return getattr(self._fobj, name)

-   def close(self):
-       self._fobj.close()
-       self._sock.close()
    def flush(self):
        "Does nothing. raw_input() calls this, only on Python 3."

    def write(self, data):
        if not isinstance(data, bytes):
-           data = data.encode('utf-8')
-       self._sock.sendall(data)
            data = data.encode(self.encoding)
        self.sock.sendall(data)

-   def isatty(self):
-       return True

-   def flush(self):
-       pass

class _StdIn(_BaseFileLike):
    # Like _StdErr, but for stdin.

    def readline(self, *a):
        try:
-           return self._fobj.readline(*a).replace("\r\n", "\n")
            return self.fobj.readline(*a).replace("\r\n", "\n")
        except UnicodeError:
            # Typically, under python 3, a ^C on the other end
            return ''


if __name__ == '__main__':
    if not sys.argv[1:]:
        print('USAGE: %s PORT [banner]' % sys.argv[0])
......
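For context, a minimal way to stand up the backdoor server whose internals are reworked above; this follows gevent's documented usage rather than code from this commit (the banner and locals values are illustrative):

    from gevent.backdoor import BackdoorServer

    server = BackdoorServer(('127.0.0.1', 5001),
                            banner="Hello from gevent backdoor!",
                            locals={'foo': "From defined scope!"})
    server.serve_forever()

Connect with a line-oriented client such as `telnet 127.0.0.1 5001`; typing `quit()` raises the SystemExit handled in the `handle` method above and cleanly closes the connection.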
"""Base class for implementing servers""" """Base class for implementing servers"""
# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details. # Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details.
from __future__ import print_function
from __future__ import absolute_import
from __future__ import division
import sys import sys
import _socket import _socket
import errno import errno
from gevent.greenlet import Greenlet from gevent.greenlet import Greenlet
from gevent.event import Event from gevent.event import Event
from gevent.hub import get_hub from gevent.hub import get_hub
from gevent._compat import string_types, integer_types, xrange from gevent._compat import string_types
from gevent._compat import integer_types
from gevent._compat import xrange
__all__ = ['BaseServer'] __all__ = ['BaseServer']
@@ -79,9 +87,16 @@ class BaseServer(object):
    #: Sets the maximum number of consecutive accepts that a process may perform on
    #: a single wake up. High values give higher priority to high connection rates,
    #: while lower values give higher priority to already established connections.
-   #: Default is 100. Note, that in case of multiple working processes on the same
-   #: listening value, it should be set to a lower value. (pywsgi.WSGIServer sets it
-   #: to 1 when environ["wsgi.multiprocess"] is true)
    #: Default is 100.
    #:
    #: Note that, in case of multiple working processes on the same
    #: listening socket, it should be set to a lower value. (pywsgi.WSGIServer sets it
    #: to 1 when ``environ["wsgi.multiprocess"]`` is true)
    #:
    #: This is equivalent to libuv's `uv_tcp_simultaneous_accepts
    #: <http://docs.libuv.org/en/v1.x/tcp.html#c.uv_tcp_simultaneous_accepts>`_
    #: value. Setting the environment variable UV_TCP_SINGLE_ACCEPT to a true value
    #: (usually 1) changes the default to 1.
    max_accept = 100
    _spawn = Greenlet.spawn

@@ -286,11 +301,14 @@
        return self.address[1]

    def init_socket(self):
-       """If the user initialized the server with an address rather than socket,
-       then this function will create a socket, bind it and put it into listening mode.
        """
        If the user initialized the server with an address rather than
        socket, then this function must create a socket, bind it, and
        put it into listening mode.

        It is not supposed to be called by the user, it is called by :meth:`start` before starting
-       the accept loop."""
        the accept loop.
        """

    @property
    def started(self):
......
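A short sketch of tuning the `max_accept` setting documented above on a server instance; the echo handler is illustrative, and setting the attribute on the instance (or a subclass) is ordinary documented usage rather than code from this commit:

    from gevent.server import StreamServer

    def echo(sock, address):
        # Trivial per-connection handler.
        data = sock.recv(1024)
        if data:
            sock.sendall(data)
        sock.close()

    server = StreamServer(('127.0.0.1', 0), echo)
    # Prefer servicing established connections over accepting bursts of
    # new ones; pywsgi.WSGIServer does the same when wsgi.multiprocess is true.
    server.max_accept = 1
    server.serve_forever()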
@@ -383,6 +383,7 @@ class Greenlet(greenlet):
    @property
    def dead(self):
        "Boolean indicating that the greenlet is dead and will not run again."
        # pylint:disable=no-member
        if self._greenlet__main:
            return False
        if self.__start_cancelled_by_kill() or self.__started_but_aborted():
......
@@ -739,7 +739,15 @@ def patch_thread(threading=True, _threading_local=True, Event=True, logging=True
            main_thread._tstate_lock.release()
            from gevent import sleep
-           sleep()
            try:
                sleep()
            except: # pylint:disable=bare-except
                # A greenlet could have .kill() us
                # or .throw() to us. I'm the main greenlet,
                # there's no where else for this to go.
                from gevent import get_hub
                get_hub().print_exception(_greenlet, *sys.exc_info())
            # Now, this may have resulted in us getting stopped
            # if some other greenlet actually just ran there.
            # That's not good, we're not supposed to be stopped
......
# Copyright (c) 2009-2012 Denis Bilenko. See LICENSE for details.
"""TCP/SSL server"""

from __future__ import print_function
from __future__ import absolute_import
from __future__ import division

from contextlib import closing
@@ -101,7 +104,14 @@ class StreamServer(BaseServer):
    """

-   # the default backlog to use if none was provided in __init__
-   backlog = 256
    # For TCP, 128 is the (default) maximum at the operating system level on Linux and macOS
    # larger values are truncated to 128.
    #
    # Windows defines SOMAXCONN=0x7fffffff to mean "max reasonable value" --- that value
    # was undocumented and subject to change, but appears to be 200.
    # Beginning in Windows 8 there's SOMAXCONN_HINT(b)=(-(b)) which means "at least
    # as many SOMAXCONN but no more than b" which is a portable way to write 200.
    backlog = 128

    reuse_addr = DEFAULT_REUSE_ADDR
@@ -134,13 +144,37 @@

    def set_listener(self, listener):
        BaseServer.set_listener(self, listener)
-       try:
-           self.socket = self.socket._sock
-       except AttributeError:
-           pass

    def _make_socket_stdlib(self, fresh):
        # We want to unwrap the gevent wrapping of the listening socket.
        # This lets us be just a hair more efficient: when our 'do_read' is
        # called, we've already waited on the socket to be ready to accept(), so
        # we don't need to (potentially) do it again. Also we avoid a layer
        # of method calls. The cost, though, is that we have to manually wrap
        # sockets back up to be non-blocking in do_read(). I'm not sure that's worth
        # it.
        #
        # In the past, we only did this when set_listener() was called with a socket
        # object and not an address. It makes sense to do it always though,
        # so that we get consistent behaviour.
        while hasattr(self.socket, '_sock'):
            if fresh:
                if hasattr(self.socket, '_drop_events'):
                    # Discard event listeners. This socket object is not shared,
                    # so we don't need them anywhere else.
                    # This matters somewhat for libuv, where we have to multiplex
                    # listeners, and we're about to create a new listener.
                    # If we don't do this, on Windows libuv tends to miss incoming
                    # connects and our _do_read callback doesn't get called.
                    self.socket._drop_events()
                # XXX: Do we need to _drop() for PyPy?
            self.socket = self.socket._sock # pylint:disable=attribute-defined-outside-init

    def init_socket(self):
        fresh = False
        if not hasattr(self, 'socket'):
            fresh = True
            # FIXME: clean up the socket lifetime
            # pylint:disable=attribute-defined-outside-init
            self.socket = self.get_listener(self.address, self.backlog, self.family)

@@ -149,6 +183,7 @@ class StreamServer(BaseServer):
            self._handle = self.wrap_socket_and_handle
        else:
            self._handle = self.handle
        self._make_socket_stdlib(fresh)
    @classmethod
    def get_listener(cls, address, backlog=None, family=None):

@@ -157,7 +192,6 @@ class StreamServer(BaseServer):
        return _tcp_listener(address, backlog=backlog, reuse_addr=cls.reuse_addr, family=family)

    if PY3:
        def do_read(self):
            sock = self.socket
            try:

@@ -168,11 +202,13 @@
                raise
            sock = GeventSocket(sock.family, sock.type, sock.proto, fileno=fd)
-           # XXX Python issue #7995?
            # XXX Python issue #7995? "if no default timeout is set
            # and the listening socket had a (non-zero) timeout, force
            # the new socket in blocking mode to override
            # platform-specific socket flags inheritance."
            return sock, address
    else:
        def do_read(self):
            try:
                client_socket, address = self.socket.accept()

@@ -180,16 +216,12 @@
                if err.args[0] == EWOULDBLOCK:
                    return
                raise
-           # XXX: When would this not be the case? In Python 3 it makes sense
-           # because we're using the low-level _accept method,
-           # but not in Python 2.
-           if not isinstance(client_socket, GeventSocket):
-               # This leads to a leak of the watchers in client_socket
-               sockobj = GeventSocket(_sock=client_socket)
-               if PYPY:
-                   client_socket._drop()
-           else:
-               sockobj = client_socket
            sockobj = GeventSocket(_sock=client_socket)
            if PYPY:
                # Undo the ref-count bump that the constructor
                # did. We gave it ownership.
                client_socket._drop()
            return sockobj, address

    def do_close(self, sock, *args):
......
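Relatedly, a hedged sketch of how an application can pick a listen backlog consistent with the operating-system limits described in the new `backlog` comment. The helper name is hypothetical; `socket.SOMAXCONN` may be missing or misleading on some platforms, hence the `getattr`:

    import socket

    def effective_backlog(requested=128):
        # Values above the kernel limit are silently truncated (to 128 by
        # default on Linux and macOS), so clamp explicitly to make the real
        # queue length obvious.
        return min(requested, getattr(socket, 'SOMAXCONN', requested))

    listener = socket.socket()
    listener.bind(('127.0.0.1', 0))
    listener.listen(effective_backlog())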
@@ -117,6 +117,7 @@ class _RefCountChecker(object):
            self.function(self.testcase, *args, **kwargs)
        finally:
            self.testcase.tearDown()
            self.testcase.doCleanups()
            self.testcase.skipTearDown = True
            self.needs_setUp = True
        if gc_enabled:
......
@@ -1135,6 +1135,20 @@ if PY37:
        'test_ssl.SimpleBackgroundTests.test_get_server_certificate',
        # Probably the same as NetworkConnectionNoServer.test_create_connection_timeout
        'test_socket.NetworkConnectionNoServer.test_create_connection',

        # Internals of the threading module that change.
        'test_threading.ThreadTests.test_finalization_shutdown',
        'test_threading.ThreadTests.test_shutdown_locks',
        # Expects a deprecation warning we don't raise
        'test_threading.ThreadTests.test_old_threading_api',
        # This tries to use threading.interrupt_main() from a new Thread;
        # but of course that's actually the same thread and things don't
        # work as expected.
        'test_threading.InterruptMainTests.test_interrupt_main_subthread',
        'test_threading.InterruptMainTests.test_interrupt_main_noerror',

        # TLS1.3 seems flaky
        'test_ssl.ThreadedTests.test_wrong_cert_tls13',
    ]

if APPVEYOR:

@@ -1155,11 +1169,6 @@ if PY38:
        # and can't see the patched threading.get_ident() we use, so the
        # output doesn't match.
        'test_threading.ExceptHookTests.test_excepthook_thread_None',
-       # This tries to use threading.interrupt_main() from a new Thread;
-       # but of course that's actually the same thread and things don't
-       # work as expected.
-       'test_threading.InterruptMainTests.test_interrupt_main_subthread',
    ]

# if 'signalfd' in os.environ.get('GEVENT_BACKEND', ''):
......
@@ -77,6 +77,21 @@ class TimeAssertMixin(object):
                fuzzy=(0.01 if not sysinfo.EXPECT_POOR_TIMER_RESOLUTION and not sysinfo.LIBUV else 1.0)):
        return self.runs_in_given_time(0.0, fuzzy)
class TestTimeout(gevent.Timeout):
_expire_info = ''
def __init__(self, timeout):
gevent.Timeout.__init__(self, timeout, 'test timed out\n', ref=False)
def _on_expiration(self, prev_greenlet, ex):
from gevent.util import format_run_info
self._expire_info = '\n'.join(format_run_info())
gevent.Timeout._on_expiration(self, prev_greenlet, ex)
def __str__(self):
s = gevent.Timeout.__str__(self)
s += self._expire_info
return s
def _wrap_timeout(timeout, method):
    if timeout is None:

@@ -84,7 +99,7 @@ def _wrap_timeout(timeout, method):
    @wraps(method)
    def wrapper(self, *args, **kwargs):
-       with gevent.Timeout(timeout, 'test timed out', ref=False):
        with TestTimeout(timeout):
            return method(self, *args, **kwargs)
    return wrapper
...@@ -185,43 +200,26 @@ class TestCase(TestCaseMetaClass("NewBase", ...@@ -185,43 +200,26 @@ class TestCase(TestCaseMetaClass("NewBase",
# XXX: Should some core part of the loop call this? # XXX: Should some core part of the loop call this?
gevent.get_hub().loop.update_now() gevent.get_hub().loop.update_now()
self.close_on_teardown = [] self.close_on_teardown = []
self.addCleanup(self._tearDownCloseOnTearDown)
def tearDown(self): def tearDown(self):
if getattr(self, 'skipTearDown', False): if getattr(self, 'skipTearDown', False):
del self.close_on_teardown[:]
return return
cleanup = getattr(self, 'cleanup', _noop) cleanup = getattr(self, 'cleanup', _noop)
cleanup() cleanup()
self._error = self._none self._error = self._none
self._tearDownCloseOnTearDown()
self.close_on_teardown = []
super(TestCase, self).tearDown() super(TestCase, self).tearDown()
def _tearDownCloseOnTearDown(self): def _tearDownCloseOnTearDown(self):
while self.close_on_teardown: while self.close_on_teardown:
to_close = reversed(self.close_on_teardown) x = self.close_on_teardown.pop()
self.close_on_teardown = [] close = getattr(x, 'close', x)
try:
for x in to_close: close()
close = getattr(x, 'close', x) except Exception: # pylint:disable=broad-except
try: pass
close()
except Exception: # pylint:disable=broad-except
pass
@classmethod
def setUpClass(cls):
import warnings
cls._warning_cm = warnings.catch_warnings()
cls._warning_cm.__enter__()
if not sys.warnoptions:
warnings.simplefilter('default')
super(TestCase, cls).setUpClass()
@classmethod
def tearDownClass(cls):
cls._warning_cm.__exit__(None, None, None)
super(TestCase, cls).tearDownClass()
def _close_on_teardown(self, resource): def _close_on_teardown(self, resource):
""" """
......
@@ -351,16 +351,6 @@ RUN_ALONE = [
    'test__example_webproxy.py',
]

-if APPVEYOR:
-   # Strange failures sometimes, but only on Python 3.7, reporting
-   # "ConnectionAbortedError: [WinError 10053] An established
-   # connection was aborted by the software in your host machine"
-   # when we've done no such thing. Try running not in parallel
-   RUN_ALONE += [
-       'test__ssl.py',
-       'test__server.py',
-   ]

if APPVEYOR or TRAVIS:
    RUN_ALONE += [
......
from __future__ import print_function
from __future__ import absolute_import

import gevent
from gevent import socket

@@ -24,6 +25,10 @@ def readline(conn):
    with conn.makefile() as f:
        return f.readline()
class WorkerGreenlet(gevent.Greenlet):
spawning_stack_limit = 2
class Test(greentest.TestCase): class Test(greentest.TestCase):
__timeout__ = 10 __timeout__ = 10
...@@ -48,7 +53,15 @@ class Test(greentest.TestCase): ...@@ -48,7 +53,15 @@ class Test(greentest.TestCase):
conn = socket.socket() conn = socket.socket()
self._close_on_teardown(conn) self._close_on_teardown(conn)
conn.connect((DEFAULT_CONNECT, self._server.server_port)) conn.connect((DEFAULT_CONNECT, self._server.server_port))
return conn banner = self._wait_for_prompt(conn)
return conn, banner
def _wait_for_prompt(self, conn):
return read_until(conn, b'>>> ')
def _make_server_and_connect(self, *args, **kwargs):
self._make_server(*args, **kwargs)
return self._create_connection()
def _close(self, conn, cmd=b'quit()\r\n)'): def _close(self, conn, cmd=b'quit()\r\n)'):
conn.sendall(cmd) conn.sendall(cmd)
...@@ -64,50 +77,42 @@ class Test(greentest.TestCase): ...@@ -64,50 +77,42 @@ class Test(greentest.TestCase):
self._make_server() self._make_server()
def connect(): def connect():
conn = self._create_connection() conn, _ = self._create_connection()
read_until(conn, b'>>> ')
conn.sendall(b'2+2\r\n') conn.sendall(b'2+2\r\n')
line = readline(conn) line = readline(conn)
self.assertEqual(line.strip(), '4', repr(line)) self.assertEqual(line.strip(), '4', repr(line))
self._close(conn) self._close(conn)
jobs = [gevent.spawn(connect) for _ in range(10)] jobs = [WorkerGreenlet.spawn(connect) for _ in range(10)]
done = gevent.joinall(jobs, raise_error=True) try:
done = gevent.joinall(jobs, raise_error=True)
finally:
gevent.joinall(jobs, raise_error=False)
self.assertEqual(len(done), len(jobs), done) self.assertEqual(len(done), len(jobs), done)
@greentest.skipOnAppVeyor("Times out")
def test_quit(self): def test_quit(self):
self._make_server() conn, _ = self._make_server_and_connect()
conn = self._create_connection()
read_until(conn, b'>>> ')
self._close(conn) self._close(conn)
@greentest.skipOnAppVeyor("Times out")
def test_sys_exit(self): def test_sys_exit(self):
self._make_server() conn, _ = self._make_server_and_connect()
conn = self._create_connection()
read_until(conn, b'>>> ')
self._close(conn, b'import sys; sys.exit(0)\r\n') self._close(conn, b'import sys; sys.exit(0)\r\n')
@greentest.skipOnAppVeyor("Times out")
def test_banner(self): def test_banner(self):
banner = "Welcome stranger!" # native string expected_banner = "Welcome stranger!" # native string
self._make_server(banner=banner) conn, banner = self._make_server_and_connect(banner=expected_banner)
conn = self._create_connection() self.assertEqual(banner[:len(expected_banner)], expected_banner, banner)
response = read_until(conn, b'>>> ')
self.assertEqual(response[:len(banner)], banner, response)
self._close(conn) self._close(conn)
@greentest.skipOnAppVeyor("Times out")
def test_builtins(self): def test_builtins(self):
self._make_server() conn, _ = self._make_server_and_connect()
conn = self._create_connection()
read_until(conn, b'>>> ')
conn.sendall(b'locals()["__builtins__"]\r\n') conn.sendall(b'locals()["__builtins__"]\r\n')
response = read_until(conn, b'>>> ') response = read_until(conn, b'>>> ')
self.assertTrue(len(response) < 300, msg="locals() unusable: %s..." % response) self.assertLess(
len(response), 300,
msg="locals() unusable: %s..." % response)
self._close(conn) self._close(conn)
...@@ -125,13 +130,13 @@ class Test(greentest.TestCase): ...@@ -125,13 +130,13 @@ class Test(greentest.TestCase):
gevent.sleep(0.1) gevent.sleep(0.1)
print('switched in') print('switched in')
self._make_server(locals={'bad': bad}) conn, _ = self._make_server_and_connect(locals={'bad': bad})
conn = self._create_connection()
read_until(conn, b'>>> ')
conn.sendall(b'bad()\r\n') conn.sendall(b'bad()\r\n')
response = read_until(conn, b'>>> ') response = self._wait_for_prompt(conn)
response = response.replace('\r\n', '\n') response = response.replace('\r\n', '\n')
self.assertEqual('switching out, then throwing in\nGot Empty\nswitching out\nswitched in\n>>> ', response) self.assertEqual(
'switching out, then throwing in\nGot Empty\nswitching out\nswitched in\n>>> ',
response)
self._close(conn) self._close(conn)
......
...@@ -264,12 +264,16 @@ class TestSocket(Test): ...@@ -264,12 +264,16 @@ class TestSocket(Test):
@greentest.skipOnAppVeyor("This sometimes times out for no apparent reason.") @greentest.skipOnAppVeyor("This sometimes times out for no apparent reason.")
class TestSSL(Test): class TestSSL(Test):
def _ssl_connect_task(self, connector, port): def _ssl_connect_task(self, connector, port, accepted_event):
connector.connect((DEFAULT_CONNECT, port)) connector.connect((DEFAULT_CONNECT, port))
try: try:
# Note: We get ResourceWarning about 'x' # Note: We get ResourceWarning about 'x'
# on Python 3 if we don't join the spawned thread # on Python 3 if we don't join the spawned thread
x = ssl.wrap_socket(connector) x = ssl.wrap_socket(connector)
# Wait to be fully accepted. We could otherwise raise ahead
# of the server and close ourself before it's ready to read.
accepted_event.wait()
except socket.error: except socket.error:
# Observed on Windows with PyPy2 5.9.0 and libuv: # Observed on Windows with PyPy2 5.9.0 and libuv:
# if we don't switch in a timely enough fashion, # if we don't switch in a timely enough fashion,
...@@ -281,8 +285,11 @@ class TestSSL(Test): ...@@ -281,8 +285,11 @@ class TestSSL(Test):
x.close() x.close()
def _make_ssl_connect_task(self, connector, port): def _make_ssl_connect_task(self, connector, port):
t = threading.Thread(target=self._ssl_connect_task, args=(connector, port)) accepted_event = threading.Event()
t = threading.Thread(target=self._ssl_connect_task,
args=(connector, port, accepted_event))
t.daemon = True t.daemon = True
t.accepted_event = accepted_event
return t return t
def __cleanup(self, task, *sockets): def __cleanup(self, task, *sockets):
...@@ -304,7 +311,12 @@ class TestSSL(Test): ...@@ -304,7 +311,12 @@ class TestSSL(Test):
task.join() task.join()
for s in sockets: for s in sockets:
s.close() try:
close = s.close
except AttributeError:
continue
else:
close()
del sockets del sockets
del task del task
...@@ -363,6 +375,7 @@ class TestSSL(Test): ...@@ -363,6 +375,7 @@ class TestSSL(Test):
try: try:
client_socket, _addr = listener.accept() client_socket, _addr = listener.accept()
t.accepted_event.set()
self._close_on_teardown(client_socket.close) self._close_on_teardown(client_socket.close)
client_socket = ssl.wrap_socket(client_socket, keyfile=certfile, certfile=certfile, server_side=True) client_socket = ssl.wrap_socket(client_socket, keyfile=certfile, certfile=certfile, server_side=True)
self._close_on_teardown(client_socket) self._close_on_teardown(client_socket)
...@@ -385,6 +398,7 @@ class TestSSL(Test): ...@@ -385,6 +398,7 @@ class TestSSL(Test):
try: try:
client_socket, _addr = listener.accept() client_socket, _addr = listener.accept()
t.accepted_event.set()
self._close_on_teardown(client_socket.close) # hard ref self._close_on_teardown(client_socket.close) # hard ref
client_socket = ssl.wrap_socket(client_socket, keyfile=certfile, certfile=certfile, server_side=True) client_socket = ssl.wrap_socket(client_socket, keyfile=certfile, certfile=certfile, server_side=True)
self._close_on_teardown(client_socket) self._close_on_teardown(client_socket)
...@@ -412,6 +426,7 @@ class TestSSL(Test): ...@@ -412,6 +426,7 @@ class TestSSL(Test):
try: try:
client_socket, _addr = listener.accept() client_socket, _addr = listener.accept()
t.accepted_event.set()
self._close_on_teardown(client_socket) self._close_on_teardown(client_socket)
client_socket = ssl.wrap_socket(client_socket, keyfile=certfile, certfile=certfile, server_side=True) client_socket = ssl.wrap_socket(client_socket, keyfile=certfile, certfile=certfile, server_side=True)
self._close_on_teardown(client_socket) self._close_on_teardown(client_socket)
...@@ -441,6 +456,7 @@ class TestSSL(Test): ...@@ -441,6 +456,7 @@ class TestSSL(Test):
try: try:
client_socket, _addr = listener.accept() client_socket, _addr = listener.accept()
t.accepted_event.set()
fileno = client_socket.fileno() fileno = client_socket.fileno()
self.assert_open(client_socket, fileno) self.assert_open(client_socket, fileno)
f = client_socket.makefile() f = client_socket.makefile()
...@@ -452,30 +468,30 @@ class TestSSL(Test): ...@@ -452,30 +468,30 @@ class TestSSL(Test):
finally: finally:
self.__cleanup(t, listener, connector) self.__cleanup(t, listener, connector)
@greentest.skipIf(greentest.RUNNING_ON_TRAVIS and greentest.PY37 and greentest.LIBUV,
"Often segfaults, cannot reproduce locally. "
"Not too worried about this before Python 3.7rc1. "
"https://travis-ci.org/gevent/gevent/jobs/327357684")
def test_serverssl_makefile2(self): def test_serverssl_makefile2(self):
listener = self._close_on_teardown(tcp_listener(backlog=1)) listener = self._close_on_teardown(tcp_listener(backlog=1))
port = listener.getsockname()[1] port = listener.getsockname()[1]
listener = ssl.wrap_socket(listener, keyfile=certfile, certfile=certfile) listener = ssl.wrap_socket(listener, keyfile=certfile, certfile=certfile)
connector = socket.socket() accepted_event = threading.Event()
def connect(connector=socket.socket()):
def connect(): try:
connector.connect((DEFAULT_CONNECT, port)) connector.connect((DEFAULT_CONNECT, port))
s = ssl.wrap_socket(connector) s = ssl.wrap_socket(connector)
s.sendall(b'test_serverssl_makefile2') accepted_event.wait()
s.close() s.sendall(b'test_serverssl_makefile2')
connector.close() s.shutdown(socket.SHUT_RDWR)
s.close()
finally:
connector.close()
t = threading.Thread(target=connect) t = threading.Thread(target=connect)
t.daemon = True t.daemon = True
t.start() t.start()
client_socket = None
try: try:
client_socket, _addr = listener.accept() client_socket, _addr = listener.accept()
accepted_event.set()
fileno = client_socket.fileno() fileno = client_socket.fileno()
self.assert_open(client_socket, fileno) self.assert_open(client_socket, fileno)
f = client_socket.makefile() f = client_socket.makefile()
...@@ -490,7 +506,7 @@ class TestSSL(Test): ...@@ -490,7 +506,7 @@ class TestSSL(Test):
client_socket.close() client_socket.close()
self.assert_closed(client_socket, fileno) self.assert_closed(client_socket, fileno)
finally: finally:
self.__cleanup(t, listener) self.__cleanup(t, listener, client_socket)
if __name__ == '__main__': if __name__ == '__main__':
......
@@ -205,7 +205,7 @@ class TestCase(greentest.TestCase):
    def init_server(self):
        self.server = self._create_server()
        self.server.start()
-       gevent.sleep(0.01)
        gevent.sleep()

    @property
    def socket(self):

@@ -338,11 +338,6 @@ class TestDefaultSpawn(TestCase):
        self.stop_server()

-   def init_server(self):
-       self.server = self._create_server()
-       self.server.start()
-       gevent.sleep(0.01)

    @property
    def socket(self):
        return self.server.socket

@@ -359,7 +354,7 @@
    def test_server_repr_when_handle_is_instancemethod(self):
        # PR 501
        self.init_server()
-       self.start_server()
        assert self.server.started
        self.assertIn('Server', repr(self.server))
        self.server.set_handle(self.server.handle)
......
This diff is collapsed.
@@ -9,14 +9,10 @@ import gevent.testing as greentest
from gevent.tests import test__socket
import ssl

-#import unittest
-from gevent.hub import LoopExit

def ssl_listener(private_key, certificate):
    raw_listener = socket.socket()
    greentest.bind_and_listen(raw_listener)
-   sock = ssl.wrap_socket(raw_listener, private_key, certificate)
    sock = ssl.wrap_socket(raw_listener, private_key, certificate, server_side=True)
    return sock, raw_listener

@@ -37,7 +33,8 @@ class TestSSL(test__socket.TestTCP):
        return listener

    def create_connection(self, *args, **kwargs): # pylint:disable=arguments-differ
-       return ssl.wrap_socket(super(TestSSL, self).create_connection(*args, **kwargs))
        return self._close_on_teardown(
            ssl.wrap_socket(super(TestSSL, self).create_connection(*args, **kwargs)))

    # The SSL library can take a long time to buffer the large amount of data we're trying
    # to send, so we can't compare to the timeout values

@@ -67,14 +64,14 @@ class TestSSL(test__socket.TestTCP):
        client.close()
        server_sock[0][0].close()

-   def test_fullduplex(self):
-       try:
-           super(TestSSL, self).test_fullduplex()
-       except LoopExit:
-           if greentest.LIBUV and greentest.WIN:
-               # XXX: Unable to duplicate locally
-               raise greentest.SkipTest("libuv on Windows sometimes raises LoopExit")
-           raise
    # def test_fullduplex(self):
    #     try:
    #         super(TestSSL, self).test_fullduplex()
    #     except LoopExit:
    #         if greentest.LIBUV and greentest.WIN:
    #             # XXX: Unable to duplicate locally
    #             raise greentest.SkipTest("libuv on Windows sometimes raises LoopExit")
    #         raise

    @greentest.ignores_leakcheck
    def test_empty_send(self):
......
@@ -234,7 +234,11 @@ class Timeout(BaseException):
        # Make sure the timer updates the current time so that we don't
        # expire prematurely.
-       self.timer.start(getcurrent().throw, throws, update=True)
        self.timer.start(self._on_expiration, getcurrent(), throws, update=True)

    def _on_expiration(self, prev_greenlet, ex):
        # Hook for subclasses.
        prev_greenlet.throw(ex)

    @classmethod
    def start_new(cls, timeout=None, exception=None, ref=True, _one_shot=False):
......
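The testing framework's TestTimeout above is one client of this new hook. A minimal sketch of another subclass built on it; note that `_on_expiration` is an underscore-private hook introduced by this change, so its signature could change, and the logging behavior here is purely illustrative:

    import logging
    import gevent

    class LoggedTimeout(gevent.Timeout):
        # Called on expiration with the greenlet that started the timer and
        # the exception to raise; delegate to the base class to keep the
        # normal throw behavior.
        def _on_expiration(self, prev_greenlet, ex):
            logging.getLogger(__name__).warning(
                "Timeout of %s seconds expired in %r", self.seconds, prev_greenlet)
            gevent.Timeout._on_expiration(self, prev_greenlet, ex)

    try:
        with LoggedTimeout(0.1):
            gevent.sleep(1)
    except LoggedTimeout:
        print("timed out as expected")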
...@@ -165,17 +165,26 @@ def _format_thread_info(lines, thread_stacks, limit, current_thread_ident): ...@@ -165,17 +165,26 @@ def _format_thread_info(lines, thread_stacks, limit, current_thread_ident):
thread = None thread = None
frame = None frame = None
for thread_ident, frame in sys._current_frames().items(): for thread_ident, frame in sys._current_frames().items():
do_stacks = thread_stacks
lines.append("*" * 80) lines.append("*" * 80)
thread = threads.get(thread_ident) thread = threads.get(thread_ident)
name = thread.name if thread else None name = None
if not thread:
# Is it an idle threadpool thread? thread pool threads
# don't have a Thread object, they're low-level
if frame.f_locals and frame.f_locals.get('gevent_threadpool_worker_idle'):
name = 'idle threadpool worker'
do_stacks = False
else:
name = thread.name
if getattr(thread, 'gevent_monitoring_thread', None): if getattr(thread, 'gevent_monitoring_thread', None):
name = repr(thread.gevent_monitoring_thread()) name = repr(thread.gevent_monitoring_thread())
if current_thread_ident == thread_ident: if current_thread_ident == thread_ident:
name = '%s) (CURRENT' % (name,) name = '%s) (CURRENT' % (name,)
lines.append('Thread 0x%x (%s)\n' % (thread_ident, name)) lines.append('Thread 0x%x (%s)\n' % (thread_ident, name))
if thread_stacks: if do_stacks:
lines.append(''.join(traceback.format_stack(frame, limit))) lines.append(''.join(traceback.format_stack(frame, limit)))
else: elif not thread_stacks:
lines.append('\t...stack elided...') lines.append('\t...stack elided...')
# We may have captured our own frame, creating a reference # We may have captured our own frame, creating a reference
...@@ -191,9 +200,14 @@ def _format_greenlet_info(lines, greenlet_stacks, limit): ...@@ -191,9 +200,14 @@ def _format_greenlet_info(lines, greenlet_stacks, limit):
lines.append('*' * 80) lines.append('*' * 80)
lines.append('* Greenlets') lines.append('* Greenlets')
lines.append('*' * 80) lines.append('*' * 80)
for tree in GreenletTree.forest(): for tree in sorted(GreenletTree.forest(),
key=lambda t: '' if t.is_current_tree else repr(t.greenlet)):
lines.append("---- Thread boundary")
lines.extend(tree.format_lines(details={ lines.extend(tree.format_lines(details={
'running_stacks': greenlet_stacks, # greenlets from other threads tend to have their current
# frame just match our current frame, which is not helpful,
# so don't render their stack.
'running_stacks': greenlet_stacks if tree.is_current_tree else False,
'running_stack_limit': limit, 'running_stack_limit': limit,
})) }))
...@@ -297,6 +311,8 @@ class GreenletTree(object): ...@@ -297,6 +311,8 @@ class GreenletTree(object):
#: The greenlet this tree represents. #: The greenlet this tree represents.
greenlet = None greenlet = None
#: Is this tree the root for the current thread?
is_current_tree = False
def __init__(self, greenlet): def __init__(self, greenlet):
self.greenlet = greenlet self.greenlet = greenlet
...@@ -458,33 +474,40 @@ class GreenletTree(object): ...@@ -458,33 +474,40 @@ class GreenletTree(object):
from gevent._greenlet_primitives import get_reachable_greenlets from gevent._greenlet_primitives import get_reachable_greenlets
main_greenlet = cls._root_greenlet(getcurrent()) main_greenlet = cls._root_greenlet(getcurrent())
trees = {} trees = {} # greenlet -> GreenletTree
roots = {} roots = {} # root greenlet -> GreenletTree
current_tree = roots[main_greenlet] = trees[main_greenlet] = cls(main_greenlet) current_tree = roots[main_greenlet] = trees[main_greenlet] = cls(main_greenlet)
current_tree.is_current_tree = True
root_greenlet = cls._root_greenlet
glets = get_reachable_greenlets() glets = get_reachable_greenlets()
for ob in glets: for ob in glets:
spawn_parent = cls.__spawning_parent(ob) spawn_parent = cls.__spawning_parent(ob)
if spawn_parent is None: if spawn_parent is None:
root = cls._root_greenlet(ob) # spawn parent is dead, or raw greenlet.
try: # reparent under the root.
tree = roots[root] spawn_parent = root_greenlet(ob)
except KeyError: # pragma: no cover
tree = GreenletTree(root) if spawn_parent is root_greenlet(spawn_parent) and spawn_parent not in roots:
roots[root] = trees[root] = tree assert spawn_parent not in trees
else: trees[spawn_parent] = roots[spawn_parent] = cls(spawn_parent)
try:
tree = trees[spawn_parent]
except KeyError: # pragma: no cover try:
tree = trees[spawn_parent] = cls(spawn_parent) parent_tree = trees[spawn_parent]
except KeyError: # pragma: no cover
parent_tree = trees[spawn_parent] = cls(spawn_parent)
try: try:
# If the child also happened to be a spawning parent,
# we could have seen it before; the reachable greenlets
# are in no particular order.
child_tree = trees[ob] child_tree = trees[ob]
except KeyError: except KeyError:
trees[ob] = child_tree = cls(ob) trees[ob] = child_tree = cls(ob)
tree.add_child(child_tree) parent_tree.add_child(child_tree)
return roots, current_tree return roots, current_tree
......
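For reference, the run-info machinery these changes feed into can be used directly from application code; a small sketch (``format_run_info`` is the same function the test helper above imports):

    import gevent
    import gevent.util

    def report():
        # format_run_info() returns a sequence of lines describing native
        # threads and the greenlet tree; per the changes above, idle
        # threadpool workers are labeled and greenlets belonging to other
        # threads have their (unhelpful) stacks elided.
        print('\n'.join(gevent.util.format_run_info()))

    gevent.spawn(report).join()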
@@ -70,8 +70,8 @@ def capture_server(evt, buf, serv):
        pass
    else:
        n = 200
-       start = time.time()
-       while n > 0 and time.time() - start < 3.0:
        start = time.monotonic()
        while n > 0 and time.monotonic() - start < 3.0:
            r, w, e = select.select([conn], [], [], 0.1)
            if r:
                n -= 1
......
@@ -4720,14 +4720,7 @@ class NetworkConnectionNoServer(unittest.TestCase):
        # On Solaris, ENETUNREACH is returned in this circumstance instead
        # of ECONNREFUSED. So, if that errno exists, add it to our list of
        # expected errnos.
-       expected_errnos = [ errno.ECONNREFUSED, ]
-       if hasattr(errno, 'ENETUNREACH'):
-           expected_errnos.append(errno.ENETUNREACH)
-       if hasattr(errno, 'EADDRNOTAVAIL'):
-           # bpo-31910: socket.create_connection() fails randomly
-           # with EADDRNOTAVAIL on Travis CI
-           expected_errnos.append(errno.EADDRNOTAVAIL)
        expected_errnos = support.get_socket_conn_refused_errs()
        self.assertIn(cm.exception.errno, expected_errnos)
def test_create_connection_timeout(self): def test_create_connection_timeout(self):
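Note (illustration, not part of the patch): support.get_socket_conn_refused_errs() ships with the 3.7.4 test suite. On an older checkout, a roughly equivalent fallback (an assumption mirroring the removed lines, not the stdlib helper itself) could look like this:

    import errno

    def get_socket_conn_refused_errs():
        # Fallback sketch mirroring the inline list this change removes.
        expected = [errno.ECONNREFUSED]
        if hasattr(errno, 'ENETUNREACH'):
            # Solaris reports ENETUNREACH instead of ECONNREFUSED.
            expected.append(errno.ENETUNREACH)
        if hasattr(errno, 'EADDRNOTAVAIL'):
            # bpo-31910: socket.create_connection() fails randomly with
            # EADDRNOTAVAIL on some CI hosts.
            expected.append(errno.EADDRNOTAVAIL)
        return expected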
@@ -5533,7 +5526,7 @@ class SendfileUsingSendTest(ThreadedTCPSocketTest):
         support.unlink(support.TESTFN)

     def accept_conn(self):
-        self.serv.settimeout(self.TIMEOUT)
+        self.serv.settimeout(MAIN_TIMEOUT)
         conn, addr = self.serv.accept()
         conn.settimeout(self.TIMEOUT)
         self.addCleanup(conn.close)
@@ -5717,11 +5710,11 @@ class SendfileUsingSendTest(ThreadedTCPSocketTest):
     def _testWithTimeoutTriggeredSend(self):
         address = self.serv.getsockname()
-        file = open(support.TESTFN, 'rb')
-        with socket.create_connection(address, timeout=0.01) as sock, \
-                file as file:
-            meth = self.meth_from_sock(sock)
-            self.assertRaises(socket.timeout, meth, file)
+        with open(support.TESTFN, 'rb') as file:
+            with socket.create_connection(address) as sock:
+                sock.settimeout(0.01)
+                meth = self.meth_from_sock(sock)
+                self.assertRaises(socket.timeout, meth, file)

     def testWithTimeoutTriggeredSend(self):
         conn = self.accept_conn()
...
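Note (illustration, not part of the patch): the rewritten test applies the 0.01s timeout with settimeout() after the connection is established, so only the send path has to beat the clock; passing timeout=0.01 to create_connection() also put the TCP connect under that tiny budget, which is presumably what made the old form flaky on slow CI hosts. A small sketch of the distinction (the helper name is hypothetical):

    import socket

    def connect_with_io_timeout(address, io_timeout=0.01):
        # Connect with the default (blocking) timeout, then bound only the
        # subsequent send/recv calls.  The old shape,
        # socket.create_connection(address, timeout=io_timeout),
        # made the connect itself race against the same tiny budget.
        sock = socket.create_connection(address)
        sock.settimeout(io_timeout)
        return sock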
@@ -16,6 +16,7 @@ import unittest
 import weakref
 import os
 import subprocess
+import signal

 from gevent.tests import lock_tests # gevent: use our local copy
 from test import support
@@ -415,7 +416,8 @@ class ThreadTests(BaseTestCase):
         t.setDaemon(True)
         t.getName()
         t.setName("name")
-        t.isAlive()
+        with self.assertWarnsRegex(PendingDeprecationWarning, 'use is_alive()'):
+            t.isAlive()
         e = threading.Event()
         e.isSet()
         threading.activeCount()
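Note (illustration, not part of the patch): the same assertWarnsRegex pattern can be reproduced in isolation. A hedged sketch with a made-up class, showing how a deprecated camelCase alias is expected to warn:

    import unittest
    import warnings

    class Widget:
        def is_alive(self):
            return True

        def isAlive(self):
            # Deprecated alias kept for backwards compatibility.
            warnings.warn('use is_alive() instead',
                          PendingDeprecationWarning, stacklevel=2)
            return self.is_alive()

    class DeprecationTest(unittest.TestCase):
        def test_alias_warns(self):
            with self.assertWarnsRegex(PendingDeprecationWarning, 'use is_alive'):
                self.assertTrue(Widget().isAlive())

    if __name__ == '__main__':
        unittest.main()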
@@ -576,6 +578,41 @@ class ThreadTests(BaseTestCase):
         self.assertEqual(data.splitlines(),
                          ["GC: True True True"] * 2)

+    def test_finalization_shutdown(self):
+        # bpo-36402: Py_Finalize() calls threading._shutdown() which must wait
+        # until Python thread states of all non-daemon threads get deleted.
+        #
+        # Test similar to SubinterpThreadingTests.test_threads_join_2(), but
+        # test the finalization of the main interpreter.
+        code = """if 1:
+            import os
+            import threading
+            import time
+            import random
+
+            def random_sleep():
+                seconds = random.random() * 0.010
+                time.sleep(seconds)
+
+            class Sleeper:
+                def __del__(self):
+                    random_sleep()
+
+            tls = threading.local()
+
+            def f():
+                # Sleep a bit so that the thread is still running when
+                # Py_Finalize() is called.
+                random_sleep()
+
+                tls.x = Sleeper()
+                random_sleep()
+
+            threading.Thread(target=f).start()
+            random_sleep()
+        """
+        rc, out, err = assert_python_ok("-c", code)
+        self.assertEqual(err, b"")
+
     def test_tstate_lock(self):
         # Test an implementation detail of Thread objects.
         started = _thread.allocate_lock()
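Note (illustration, not part of the patch): assert_python_ok comes from CPython's test helpers (test.support.script_helper in 3.7) and runs the given code in a fresh interpreter, asserting a zero exit status. A rough stand-in using subprocess directly, in case the helper is unavailable (names here are assumptions, not the stdlib implementation):

    import subprocess
    import sys

    def run_python_ok(code):
        # Run `code` in a fresh interpreter; fail loudly on a non-zero exit.
        proc = subprocess.run([sys.executable, '-c', code], capture_output=True)
        assert proc.returncode == 0, proc.stderr
        return proc.returncode, proc.stdout, proc.stderr

    rc, out, err = run_python_ok("import threading; print(threading.active_count())")
    assert err == b""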
@@ -696,6 +733,30 @@ class ThreadTests(BaseTestCase):
         finally:
             sys.settrace(old_trace)

+    @cpython_only
+    def test_shutdown_locks(self):
+        for daemon in (False, True):
+            with self.subTest(daemon=daemon):
+                event = threading.Event()
+                thread = threading.Thread(target=event.wait, daemon=daemon)
+
+                # Thread.start() must add lock to _shutdown_locks,
+                # but only for non-daemon thread
+                thread.start()
+                tstate_lock = thread._tstate_lock
+                if not daemon:
+                    self.assertIn(tstate_lock, threading._shutdown_locks)
+                else:
+                    self.assertNotIn(tstate_lock, threading._shutdown_locks)
+
+                # unblock the thread and join it
+                event.set()
+                thread.join()
+
+                # Thread._stop() must remove tstate_lock from _shutdown_locks.
+                # Daemon threads must never add it to _shutdown_locks.
+                self.assertNotIn(tstate_lock, threading._shutdown_locks)
+
 class ThreadJoinOnShutdown(BaseTestCase):
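Note (illustration, not part of the patch): the _shutdown_locks bookkeeping exercised above is what lets threading._shutdown() wait for non-daemon threads while ignoring daemon ones. The observable difference needs no private attributes; a small behavioural sketch:

    import threading
    import time

    def worker(label, delay):
        time.sleep(delay)
        print(label, "finished")

    # A non-daemon thread keeps the interpreter alive until it finishes...
    threading.Thread(target=worker, args=("non-daemon", 0.2)).start()

    # ...while a daemon thread is abandoned at interpreter shutdown, so its
    # "finished" line is normally never printed.
    threading.Thread(target=worker, args=("daemon", 5.0), daemon=True).start()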
@@ -873,15 +934,22 @@ class SubinterpThreadingTests(BaseTestCase):
         self.addCleanup(os.close, w)
         code = r"""if 1:
             import os
+            import random
             import threading
             import time

+            def random_sleep():
+                seconds = random.random() * 0.010
+                time.sleep(seconds)
+
             def f():
                 # Sleep a bit so that the thread is still running when
                 # Py_EndInterpreter is called.
-                time.sleep(0.05)
+                random_sleep()
                 os.write(%d, b"x")
             threading.Thread(target=f).start()
+            random_sleep()
             """ % (w,)
         ret = test.support.run_in_subinterp(code)
         self.assertEqual(ret, 0)
@@ -898,22 +966,29 @@ class SubinterpThreadingTests(BaseTestCase):
         self.addCleanup(os.close, w)
         code = r"""if 1:
             import os
+            import random
             import threading
             import time

+            def random_sleep():
+                seconds = random.random() * 0.010
+                time.sleep(seconds)
+
             class Sleeper:
                 def __del__(self):
-                    time.sleep(0.05)
+                    random_sleep()

             tls = threading.local()

             def f():
                 # Sleep a bit so that the thread is still running when
                 # Py_EndInterpreter is called.
-                time.sleep(0.05)
+                random_sleep()
                 tls.x = Sleeper()
                 os.write(%d, b"x")
             threading.Thread(target=f).start()
+            random_sleep()
             """ % (w,)
         ret = test.support.run_in_subinterp(code)
         self.assertEqual(ret, 0)
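Note (illustration, not part of the patch): both subinterpreter tests replace a fixed time.sleep(0.05) with a randomized delay. The jitter makes the worker finish sometimes before and sometimes after Py_EndInterpreter(), which is exactly the window these shutdown races hide in. The helper, shown standalone as a reusable sketch:

    import random
    import time

    def random_sleep(max_seconds=0.010):
        # Sleep a random fraction of max_seconds so repeated runs land on
        # different sides of the thread/interpreter shutdown race.
        time.sleep(random.random() * max_seconds)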
@@ -1164,6 +1239,7 @@ class BoundedSemaphoreTests(lock_tests.BoundedSemaphoreTests):
 class BarrierTests(lock_tests.BarrierTests):
     barriertype = staticmethod(threading.Barrier)

+
 class MiscTestCase(unittest.TestCase):
     def test__all__(self):
         extra = {"ThreadError"}
@@ -1171,5 +1247,38 @@ class MiscTestCase(unittest.TestCase):
         support.check__all__(self, threading, ('threading', '_thread'),
                              extra=extra, blacklist=blacklist)

+
+class InterruptMainTests(unittest.TestCase):
+    def test_interrupt_main_subthread(self):
+        # Calling start_new_thread with a function that executes interrupt_main
+        # should raise KeyboardInterrupt upon completion.
+        def call_interrupt():
+            _thread.interrupt_main()
+        t = threading.Thread(target=call_interrupt)
+        with self.assertRaises(KeyboardInterrupt):
+            t.start()
+            t.join()
+        t.join()
+
+    def test_interrupt_main_mainthread(self):
+        # Make sure that if interrupt_main is called in main thread that
+        # KeyboardInterrupt is raised instantly.
+        with self.assertRaises(KeyboardInterrupt):
+            _thread.interrupt_main()
+
+    def test_interrupt_main_noerror(self):
+        handler = signal.getsignal(signal.SIGINT)
+        try:
+            # No exception should arise.
+            signal.signal(signal.SIGINT, signal.SIG_IGN)
+            _thread.interrupt_main()
+
+            signal.signal(signal.SIGINT, signal.SIG_DFL)
+            _thread.interrupt_main()
+        finally:
+            # Restore original handler
+            signal.signal(signal.SIGINT, handler)
+
+
 if __name__ == "__main__":
     unittest.main()
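Note (illustration, not part of the patch): _thread.interrupt_main() asks the interpreter to deliver the default SIGINT behaviour in the main thread; with SIG_IGN installed nothing happens, which is what test_interrupt_main_noerror checks. A standalone sketch of the subthread round trip the first test relies on:

    import _thread
    import threading

    def call_interrupt():
        # Runs in a worker thread; schedules KeyboardInterrupt for the main thread.
        _thread.interrupt_main()

    t = threading.Thread(target=call_interrupt)
    try:
        t.start()
        t.join()   # the pending interrupt normally surfaces during this join
    except KeyboardInterrupt:
        print("KeyboardInterrupt delivered to the main thread")
    finally:
        t.join()   # join again in case the first join was interrupted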
@@ -780,6 +780,49 @@ class HandlerTests(TestCase):
             b"Hello, world!",
             written)

+    def testClientConnectionTerminations(self):
+        environ = {"SERVER_PROTOCOL": "HTTP/1.0"}
+        for exception in (
+            ConnectionAbortedError,
+            BrokenPipeError,
+            ConnectionResetError,
+        ):
+            with self.subTest(exception=exception):
+                class AbortingWriter:
+                    def write(self, b):
+                        raise exception
+
+                stderr = StringIO()
+                h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ)
+                h.run(hello_app)
+
+                self.assertFalse(stderr.getvalue())
+
+    def testDontResetInternalStateOnException(self):
+        class CustomException(ValueError):
+            pass
+
+        # We are raising CustomException here to trigger an exception
+        # during the execution of SimpleHandler.finish_response(), so
+        # we can easily test that the internal state of the handler is
+        # preserved in case of an exception.
+        class AbortingWriter:
+            def write(self, b):
+                raise CustomException
+
+        stderr = StringIO()
+        environ = {"SERVER_PROTOCOL": "HTTP/1.0"}
+        h = SimpleHandler(BytesIO(), AbortingWriter(), stderr, environ)
+        h.run(hello_app)
+
+        self.assertIn("CustomException", stderr.getvalue())
+
+        # Test that the internal state of the handler is preserved.
+        self.assertIsNotNone(h.result)
+        self.assertIsNotNone(h.headers)
+        self.assertIsNotNone(h.status)
+        self.assertIsNotNone(h.environ)
+
 if __name__ == "__main__":
     unittest.main()
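Note (illustration, not part of the patch): the handlers under test wrap a WSGI application plus input, output and error streams. A self-contained sketch of driving wsgiref.handlers.SimpleHandler the same way (hello_app here is a stand-in, not the test module's helper):

    from io import BytesIO, StringIO
    from wsgiref.handlers import SimpleHandler

    def hello_app(environ, start_response):
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [b"Hello, world!"]

    environ = {"SERVER_PROTOCOL": "HTTP/1.0"}
    stdout, stderr = BytesIO(), StringIO()
    handler = SimpleHandler(BytesIO(), stdout, stderr, environ)
    handler.run(hello_app)

    print(stdout.getvalue())   # status line and headers, then b"Hello, world!"
    print(stderr.getvalue())   # empty unless the handler logged an error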