Commit ce63bebc authored by Gabriel de Marmiesse, committed by GitHub

Merge branch 'master' into simplifying_memoryview_numpy

parents c03b0bca cffb63d3
......@@ -23,11 +23,11 @@ python:
- 2.7
- 3.6
- 2.6
- 3.3
- 3.7
- 3.4
- 3.5
- 3.6-dev
- 3.7-dev
- 3.8-dev
- pypy
- pypy3
......@@ -74,9 +74,17 @@ matrix:
language: cpp
compiler: clang
cache: false
- env: STACKLESS=true BACKEND=c PY=2
python: 2.7
- env: STACKLESS=true BACKEND=c PY=3
python: 3.6
allow_failures:
- python: pypy
- python: pypy3
- python: 3.7
- python: 3.8-dev
- env: STACKLESS=true BACKEND=c PY=2
- env: STACKLESS=true BACKEND=c PY=3
exclude:
- python: pypy
env: BACKEND=cpp
......@@ -99,16 +107,22 @@ before_install:
fi
- |
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then # Install Miniconda
curl -s -o miniconda.sh https://repo.continuum.io/miniconda/Miniconda$PY-latest-MacOSX-x86_64.sh;
if [[ "$TRAVIS_OS_NAME" == "osx" ]] || [[ "$STACKLESS" == "true" ]]; then # Install Miniconda
if [[ "$TRAVIS_OS_NAME" == "osx" ]]; then CONDA_PLATFORM=MacOSX; else CONDA_PLATFORM=Linux; fi;
curl -s -o miniconda.sh https://repo.continuum.io/miniconda/Miniconda$PY-latest-${CONDA_PLATFORM}-x86_64.sh;
bash miniconda.sh -b -p $HOME/miniconda && rm miniconda.sh;
export PATH="$HOME/miniconda/bin:$PATH"; hash -r;
#conda install --quiet --yes nomkl --file=test-requirements.txt --file=test-requirements-cpython.txt;
fi
- if [[ "$STACKLESS" == "true" ]]; then
conda config --add channels stackless;
conda install --quiet --yes stackless;
fi
install:
- python -c 'import sys; print("Python %s" % (sys.version,))'
- if [ -n "${TRAVIS_PYTHON_VERSION##*-dev}" -a -n "${TRAVIS_PYTHON_VERSION##2.6*}" ]; then pip install -r test-requirements.txt $( [ -z "${TRAVIS_PYTHON_VERSION##pypy*}" ] || echo " -r test-requirements-cpython.txt" ) $( [ -n "${TRAVIS_PYTHON_VERSION##3.3*}" ] || echo " tornado<5.0" ) ; fi
- if [ -n "${TRAVIS_PYTHON_VERSION##*-dev}" -a -n "${TRAVIS_PYTHON_VERSION##2.6*}" ]; then pip install -r test-requirements.txt $( [ -z "${TRAVIS_PYTHON_VERSION##pypy*}" ] || echo " -r test-requirements-cpython.txt" ) ; fi
- CFLAGS="-O2 -ggdb -Wall -Wextra $(python -c 'import sys; print("-fno-strict-aliasing" if sys.version_info[0] == 2 else "")')" python setup.py build
before_script: ccache -s || true
......
......@@ -53,14 +53,26 @@ Bugs fixed
* Several internal function signatures were fixed that lead to warnings in gcc-8.
(Github issue #2363)
* C lines of the module init function were unconditionally not reported in
exception stack traces.
Patch by Jeroen Demeyer. (Github issue #2492)
* The discouraged usage of GCC's attribute ``optimize("Os")`` was replaced by the
similar attribute ``cold`` to reduce the code impact of the module init functions.
(Github issue #2494)
Other changes
-------------
* The documentation was restructured, cleaned up and examples are now tested.
Contributed by Gabriel de Marmiesse. (Github issue #2245)
* Cython compiles less of its own modules at build time to reduce the installed
package size to about half of its previous size. This makes the compiler
slightly slower, by about 5-7%.
0.28.4 (2018-??-??)
0.28.4 (2018-07-08)
===================
Bugs fixed
......@@ -70,6 +82,11 @@ Bugs fixed
generated an invalid C function call to the (non-existent) base type implementation.
(Github issue #2309)
* Exception catching based on a non-literal (runtime) tuple could fail to match the
exception. (Github issue #2425)
* Compile fix for CPython 3.7.0a2. (Github issue #2477)
0.28.3 (2018-05-27)
===================
......@@ -85,6 +102,9 @@ Bugs fixed
* Work around a crash bug in g++ 4.4.x by disabling the size reduction setting
of the module init function in this version. (Github issue #2235)
* Crash when exceptions occur early during module initialisation.
(Github issue #2199)
0.28.2 (2018-04-13)
===================
......@@ -2180,9 +2200,9 @@ Features added
* GDB support. http://docs.cython.org/src/userguide/debugging.html
* A new build system with support for inline distutils directives, correct dependency tracking, and parallel compilation. http://wiki.cython.org/enhancements/distutils_preprocessing
* A new build system with support for inline distutils directives, correct dependency tracking, and parallel compilation. https://github.com/cython/cython/wiki/enhancements-distutils_preprocessing
* Support for dynamic compilation at runtime via the new cython.inline function and cython.compile decorator. http://wiki.cython.org/enhancements/inline
* Support for dynamic compilation at runtime via the new cython.inline function and cython.compile decorator. https://github.com/cython/cython/wiki/enhancements-inline
* "nogil" blocks are supported when compiling pure Python code by writing "with cython.nogil".
......
......@@ -809,8 +809,7 @@ def create_extension_list(patterns, exclude=None, ctx=None, aliases=None, quiet=
elif name:
module_name = name
if module_name == 'cython':
raise ValueError('cython is a special module, cannot be used as a module name')
Utils.raise_error_if_module_name_forbidden(module_name)
if module_name not in seen:
try:
......@@ -871,27 +870,65 @@ def cythonize(module_list, exclude=None, nthreads=0, aliases=None, quiet=False,
Compile a set of source modules into C/C++ files and return a list of distutils
Extension objects for them.
As module list, pass either a glob pattern, a list of glob patterns or a list of
Extension objects. The latter allows you to configure the extensions separately
through the normal distutils options.
When using glob patterns, you can exclude certain module names explicitly
by passing them into the 'exclude' option.
To globally enable C++ mode, you can pass language='c++'. Otherwise, this
will be determined at a per-file level based on compiler directives. This
affects only modules found based on file names. Extension instances passed
into cythonize() will not be changed.
For parallel compilation, set the 'nthreads' option to the number of
concurrent builds.
For a broad 'try to compile' mode that ignores compilation failures and
simply excludes the failed extensions, pass 'exclude_failures=True'. Note
that this only really makes sense for compiling .py files which can also
be used without compilation.
Additional compilation options can be passed as keyword arguments.
:param module_list: As module list, pass either a glob pattern, a list of glob
patterns or a list of Extension objects. The latter
allows you to configure the extensions separately
through the normal distutils options.
You can also pass Extension objects that have
glob patterns as their sources. Then, cythonize
will resolve the pattern and create a
copy of the Extension for every matching file.
:param exclude: When passing glob patterns as ``module_list``, you can exclude certain
module names explicitly by passing them into the ``exclude`` option.
:param nthreads: The number of concurrent builds for parallel compilation
(requires the ``multiprocessing`` module).
:param aliases: If you want to use compiler directives like ``# distutils: ...`` but
can only know at compile time (when running the ``setup.py``) which values
to use, you can use aliases and pass a dictionary mapping those aliases
to Python strings when calling :func:`cythonize`. As an example, say you
want to use the compiler
directive ``# distutils: include_dirs = ../static_libs/include/``
but this path isn't always fixed and you want to find it when running
the ``setup.py``. You can then do ``# distutils: include_dirs = MY_HEADERS``,
find the value of ``MY_HEADERS`` in the ``setup.py``, put it in a python
variable called ``foo`` as a string, and then call
``cythonize(..., aliases={'MY_HEADERS': foo})``.
:param quiet: If True, Cython won't print error and warning messages during the compilation.
:param force: Forces the recompilation of the Cython modules, even if the timestamps
don't indicate that a recompilation is necessary.
:param language: To globally enable C++ mode, you can pass ``language='c++'``. Otherwise, this
will be determined at a per-file level based on compiler directives. This
affects only modules found based on file names. Extension instances passed
                     into :func:`cythonize` will not be changed. It is recommended to use the
                     compiler directive ``# distutils: language = c++`` rather than this option.
:param exclude_failures: For a broad 'try to compile' mode that ignores compilation
failures and simply excludes the failed extensions,
pass ``exclude_failures=True``. Note that this only
really makes sense for compiling ``.py`` files which can also
be used without compilation.
    :param annotate: If ``True``, will produce an HTML file for each of the ``.pyx`` or ``.py``
files compiled. The HTML file gives an indication
of how much Python interaction there is in
each of the source code lines, compared to plain C code.
It also allows you to see the C/C++ code
generated for each line of Cython code. This report is invaluable when
optimizing a function for speed,
and for determining when to :ref:`release the GIL <nogil>`:
in general, a ``nogil`` block may contain only "white" code.
See examples in :ref:`determining_where_to_add_types` or
:ref:`primes`.
    :param compiler_directives: Allows you to set compiler directives in the ``setup.py`` like this:
``compiler_directives={'embedsignature': True}``.
See :ref:`compiler-directives`.
"""
if exclude is None:
exclude = []
......
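To make the new docstring above concrete, here is a minimal ``setup.py`` sketch that uses the ``aliases``, ``compiler_directives``, ``annotate`` and ``nthreads`` parameters together. It is illustrative only: the ``MY_HEADERS`` alias and its path follow the hypothetical example from the docstring and are not part of this commit.

# setup.py -- minimal sketch; the 'MY_HEADERS' alias and its path are hypothetical.
import os
from distutils.core import setup
from Cython.Build import cythonize

# Value substituted for '# distutils: include_dirs = MY_HEADERS' at build time.
foo = os.path.join(os.path.dirname(os.path.abspath(__file__)), "static_libs", "include")

setup(
    ext_modules=cythonize(
        "*.pyx",
        aliases={'MY_HEADERS': foo},
        compiler_directives={'embedsignature': True},
        annotate=True,   # write an HTML report for each compiled file
        nthreads=4,      # parallel compilation (requires the multiprocessing module)
    ),
)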
......@@ -43,9 +43,10 @@ from . import Future
from ..Debugging import print_call_chain
from .DebugFlags import debug_disposal_code, debug_temp_alloc, \
debug_coercion
from .Pythran import to_pythran, is_pythran_supported_type, is_pythran_supported_operation_type, \
is_pythran_expr, pythran_func_type, pythran_binop_type, pythran_unaryop_type, has_np_pythran, \
pythran_indexing_code, pythran_indexing_type, is_pythran_supported_node_or_none, pythran_type
from .Pythran import (to_pythran, is_pythran_supported_type, is_pythran_supported_operation_type,
is_pythran_expr, pythran_func_type, pythran_binop_type, pythran_unaryop_type, has_np_pythran,
pythran_indexing_code, pythran_indexing_type, is_pythran_supported_node_or_none, pythran_type,
pythran_is_numpy_func_supported, pythran_get_func_include_file, pythran_functor)
from .PyrexTypes import PythranExpr
try:
......@@ -5408,7 +5409,8 @@ class SimpleCallNode(CallNode):
func_type = self.function_type()
self.is_numpy_call_with_exprs = False
if has_np_pythran(env) and self.function.is_numpy_attribute:
if (has_np_pythran(env) and function.is_numpy_attribute and
pythran_is_numpy_func_supported(function)):
has_pythran_args = True
self.arg_tuple = TupleNode(self.pos, args = self.args)
self.arg_tuple = self.arg_tuple.analyse_types(env)
......@@ -5416,12 +5418,12 @@ class SimpleCallNode(CallNode):
has_pythran_args &= is_pythran_supported_node_or_none(arg)
self.is_numpy_call_with_exprs = bool(has_pythran_args)
if self.is_numpy_call_with_exprs:
env.add_include_file("pythonic/numpy/%s.hpp" % self.function.attribute)
env.add_include_file(pythran_get_func_include_file(function))
return NumPyMethodCallNode.from_node(
self,
function=self.function,
function=function,
arg_tuple=self.arg_tuple,
type=PythranExpr(pythran_func_type(self.function.attribute, self.arg_tuple.args)),
type=PythranExpr(pythran_func_type(function, self.arg_tuple.args)),
)
elif func_type.is_pyobject:
self.arg_tuple = TupleNode(self.pos, args = self.args)
......@@ -5839,10 +5841,10 @@ class NumPyMethodCallNode(SimpleCallNode):
code.putln("// function evaluation code for numpy function")
code.putln("__Pyx_call_destructor(%s);" % self.result())
code.putln("new (&%s) decltype(%s){pythonic::numpy::functor::%s{}(%s)};" % (
code.putln("new (&%s) decltype(%s){%s{}(%s)};" % (
self.result(),
self.result(),
self.function.attribute,
pythran_functor(self.function),
", ".join(a.pythran_result() for a in args)))
......@@ -7899,7 +7901,7 @@ class ListNode(SequenceNode):
return ()
def infer_type(self, env):
# TOOD: Infer non-object list arrays.
# TODO: Infer non-object list arrays.
return list_type
def analyse_expressions(self, env):
......@@ -8564,7 +8566,7 @@ class DictNode(ExprNode):
return ()
def infer_type(self, env):
# TOOD: Infer struct constructors.
# TODO: Infer struct constructors.
return dict_type
def analyse_types(self, env):
......
......@@ -3,7 +3,7 @@
# Cython Scanner - Lexical Definitions
#
from __future__ import absolute_import
from __future__ import absolute_import, unicode_literals
raw_prefixes = "rR"
bytes_prefixes = "bB"
......
......@@ -469,6 +469,8 @@ def run_pipeline(source, options, full_module_name=None, context=None):
abs_path = os.path.abspath(source)
full_module_name = full_module_name or context.extract_module_name(source, options)
Utils.raise_error_if_module_name_forbidden(full_module_name)
if options.relative_path_in_code_position_comments:
rel_path = full_module_name.replace('.', os.sep) + source_ext
if not abs_path.endswith(rel_path):
......
......@@ -2459,10 +2459,9 @@ class ModuleNode(Nodes.Node, Nodes.BlockNode):
code.put_label(code.error_label)
for cname, type in code.funcstate.all_managed_temps():
code.put_xdecref(cname, type)
# module state might not be ready for traceback generation with C-line handling yet
code.putln('if (%s) {' % env.module_cname)
code.putln('if (%s) {' % env.module_dict_cname)
code.put_add_traceback("init %s" % env.qualified_name, include_cline=False)
code.put_add_traceback("init %s" % env.qualified_name)
code.globalstate.use_utility_code(Nodes.traceback_utility_code)
# Module reference and module dict are in global variables which might still be needed
# for cleanup, atexit code, etc., so leaking is better than crashing.
......
......@@ -3260,10 +3260,7 @@ class OptimizeBuiltinCalls(Visitor.NodeRefCleanupMixin,
return node
if node.type.is_pyobject:
if operator in ('Eq', 'Ne'):
ret_type = PyrexTypes.c_bint_type
else:
ret_type = PyrexTypes.py_object_type
ret_type = PyrexTypes.py_object_type
elif node.type is PyrexTypes.c_bint_type and operator in ('Eq', 'Ne'):
ret_type = PyrexTypes.c_bint_type
else:
......
......@@ -29,94 +29,121 @@ class ShouldBeFromDirective(object):
"Illegal access of '%s' from Options module rather than directive '%s'"
% (self.options_name, self.directive_name))
# Include docstrings.
"""
The members of this module are documented using autodata in
Cython/docs/src/reference/compilation.rst.
See http://www.sphinx-doc.org/en/master/ext/autodoc.html#directive-autoattribute
for how autodata works.
Descriptions of those members should start with a #:
Don't forget to keep the docs in sync: when adding or removing members
in this file, update the .rst file as well.
"""
#: Whether or not to include docstring in the Python extension. If False, the binary size
#: will be smaller, but the ``__doc__`` attribute of any class or function will be an
#: empty string.
docstrings = True
# Embed the source code position in the docstrings of functions and classes.
#: Embed the source code position in the docstrings of functions and classes.
embed_pos_in_docstring = False
# Copy the original source code line by line into C code comments
# in the generated code file to help with understanding the output.
#: Copy the original source code line by line into C code comments
#: in the generated code file to help with understanding the output.
#: This is also required for coverage analysis.
emit_code_comments = True
pre_import = None # undocumented
# undocumented
pre_import = None
# Decref global variables in this module on exit for garbage collection.
# 0: None, 1+: interned objects, 2+: cdef globals, 3+: types objects
# Mostly for reducing noise in Valgrind, only executes at process exit
# (when all memory will be reclaimed anyways).
#: Decref global variables in each module on exit for garbage collection.
#: 0: None, 1+: interned objects, 2+: cdef globals, 3+: types objects
#: Mostly for reducing noise in Valgrind, only executes at process exit
#: (when all memory will be reclaimed anyways).
generate_cleanup_code = False
# Should tp_clear() set object fields to None instead of clearing them to NULL?
#: Should tp_clear() set object fields to None instead of clearing them to NULL?
clear_to_none = True
# Generate an annotated HTML version of the input source files.
#: Generate an annotated HTML version of the input source files for debugging and optimisation purposes.
#: This has the same effect as the ``annotate`` argument in :func:`cythonize`.
annotate = False
# When annotating source files in HTML, include coverage information from
# this file.
annotate_coverage_xml = None
# This will abort the compilation on the first error occurred rather than trying
# to keep going and printing further error messages.
#: This will abort the compilation on the first error occurred rather than trying
#: to keep going and printing further error messages.
fast_fail = False
# Make all warnings into errors.
#: Turn all warnings into errors.
warning_errors = False
# Make unknown names an error. Python raises a NameError when
# encountering unknown names at runtime, whereas this option makes
# them a compile time error. If you want full Python compatibility,
# you should disable this option and also 'cache_builtins'.
#: Make unknown names an error. Python raises a NameError when
#: encountering unknown names at runtime, whereas this option makes
#: them a compile time error. If you want full Python compatibility,
#: you should disable this option and also 'cache_builtins'.
error_on_unknown_names = True
# Make uninitialized local variable reference a compile time error.
# Python raises UnboundLocalError at runtime, whereas this option makes
# them a compile time error. Note that this option affects only variables
# of "python object" type.
#: Make uninitialized local variable reference a compile time error.
#: Python raises UnboundLocalError at runtime, whereas this option makes
#: them a compile time error. Note that this option affects only variables
#: of "python object" type.
error_on_uninitialized = True
# This will convert statements of the form "for i in range(...)"
# to "for i from ..." when i is a cdef'd integer type, and the direction
# (i.e. sign of step) can be determined.
# WARNING: This may change the semantics if the range causes assignment to
# i to overflow. Specifically, if this option is set, an error will be
# raised before the loop is entered, whereas without this option the loop
# will execute until an overflowing value is encountered.
#: This will convert statements of the form ``for i in range(...)``
#: to ``for i from ...`` when ``i`` is a C integer type, and the direction
#: (i.e. sign of step) can be determined.
#: WARNING: This may change the semantics if the range causes assignment to
#: i to overflow. Specifically, if this option is set, an error will be
#: raised before the loop is entered, whereas without this option the loop
#: will execute until an overflowing value is encountered.
convert_range = True
# Perform lookups on builtin names only once, at module initialisation
# time. This will prevent the module from getting imported if a
# builtin name that it uses cannot be found during initialisation.
#: Perform lookups on builtin names only once, at module initialisation
#: time. This will prevent the module from getting imported if a
#: builtin name that it uses cannot be found during initialisation.
#: Default is True.
#: Note that some legacy builtins are automatically remapped
#: from their Python 2 names to their Python 3 names by Cython
#: when building in Python 3.x,
#: so that they do not get in the way even if this option is enabled.
cache_builtins = True
# Generate branch prediction hints to speed up error handling etc.
#: Generate branch prediction hints to speed up error handling etc.
gcc_branch_hints = True
# Enable this to allow one to write your_module.foo = ... to overwrite the
# definition if the cpdef function foo, at the cost of an extra dictionary
# lookup on every call.
# If this is false it generates only the Python wrapper and no override check.
#: Enable this to allow one to write ``your_module.foo = ...`` to overwrite the
#: definition of the cpdef function ``foo``, at the cost of an extra dictionary
#: lookup on every call.
#: If this is False, it generates only the Python wrapper and no override check.
lookup_module_cpdef = False
# Whether or not to embed the Python interpreter, for use in making a
# standalone executable or calling from external libraries.
# This will provide a method which initialises the interpreter and
# executes the body of this module.
#: Whether or not to embed the Python interpreter, for use in making a
#: standalone executable or calling from external libraries.
#: This will provide a C function which initialises the interpreter and
#: executes the body of this module.
#: See `this demo <https://github.com/cython/cython/tree/master/Demos/embed>`_
#: for a concrete example.
#: If true, the initialisation function is the C main() function, but
#: this option can also be set to a non-empty string to provide a function name explicitly.
#: Default is False.
embed = None
# In previous iterations of Cython, globals() gave the first non-Cython module
# globals in the call stack. Sage relies on this behavior for variable injection.
old_style_globals = ShouldBeFromDirective('old_style_globals')
# Allows cimporting from a pyx file without a pxd file.
#: Allows cimporting from a pyx file without a pxd file.
cimport_from_pyx = False
# max # of dims for buffers -- set lower than number of dimensions in numpy, as
# slices are passed by value and involve a lot of copying
#: Maximum number of dimensions for buffers -- set lower than the number of
#: dimensions in numpy, as slices are passed by value and involve a lot of copying.
buffer_max_dims = 8
# Number of function closure instances to keep in a freelist (0: no freelists)
#: Number of function closure instances to keep in a freelist (0: no freelists)
closure_freelist_size = 8
......
......@@ -613,7 +613,8 @@ class TrackNumpyAttributes(VisitorTransform, SkipDeclarations):
def visit_AttributeNode(self, node):
self.visitchildren(node)
if node.obj.is_name and node.obj.name in self.numpy_module_names:
obj = node.obj
if (obj.is_name and obj.name in self.numpy_module_names) or obj.is_numpy_attribute:
node.is_numpy_attribute = True
return node
......
......@@ -594,9 +594,9 @@ class MemoryViewSliceType(PyrexType):
the packing specifiers specify how the array elements are laid out
in memory.
'contig' -- The data are contiguous in memory along this dimension.
'contig' -- The data is contiguous in memory along this dimension.
At most one dimension may be specified as 'contig'.
'strided' -- The data aren't contiguous along this dimenison.
'strided' -- The data isn't contiguous along this dimension.
'follow' -- Used for C/Fortran contiguous arrays, a 'follow' dimension
has its stride automatically computed from extents of the other
dimensions to ensure C or Fortran memory layout.
......
......@@ -6,16 +6,20 @@ from .PyrexTypes import CType, CTypedefType, CStructOrUnionType
import cython
try:
import pythran
_pythran_available = True
except ImportError:
_pythran_available = False
# Pythran/Numpy specific operations
def has_np_pythran(env):
while env is not None:
directives = getattr(env, 'directives', None)
if directives and env.directives.get('np_pythran', False):
return True
env = env.outer_scope
if env is None:
return False
directives = getattr(env, 'directives', None)
return (directives and directives.get('np_pythran', False))
@cython.ccall
def is_pythran_supported_dtype(type_):
......@@ -111,10 +115,32 @@ def pythran_indexing_type(type_, indices):
def pythran_indexing_code(indices):
return _index_access(_index_code, indices)
def np_func_to_list(func):
if not func.is_numpy_attribute:
return []
return np_func_to_list(func.obj) + [func.attribute]
if _pythran_available:
def pythran_is_numpy_func_supported(func):
CurF = pythran.tables.MODULES['numpy']
FL = np_func_to_list(func)
for F in FL:
CurF = CurF.get(F, None)
if CurF is None:
return False
return True
else:
def pythran_is_numpy_func_supported(name):
return False
def pythran_functor(func):
func = np_func_to_list(func)
submodules = "::".join(func[:-1] + ["functor"])
return "pythonic::numpy::%s::%s" % (submodules, func[-1])
def pythran_func_type(func, args):
args = ",".join(("std::declval<%s>()" % pythran_type(a.type) for a in args))
return "decltype(pythonic::numpy::functor::%s{}(%s))" % (func, args)
return "decltype(%s{}(%s))" % (pythran_functor(func), args)
@cython.ccall
......@@ -168,6 +194,9 @@ def is_pythran_buffer(type_):
return (type_.is_numpy_buffer and is_pythran_supported_dtype(type_.dtype) and
type_.mode in ("c", "strided") and not type_.cast)
def pythran_get_func_include_file(func):
func = np_func_to_list(func)
return "pythonic/include/numpy/%s.hpp" % "/".join(func)
def include_pythran_generic(env):
# Generic files
......
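As a rough illustration of what the new helpers above compute, the following self-contained sketch reproduces the attribute-chain walk with a hypothetical ``MockAttr`` stand-in for Cython's attribute nodes (the mock class and the ``np.linalg.norm`` example are assumptions for demonstration, not part of the commit):

# Minimal sketch of the mapping logic, using a mock stand-in for Cython's attribute nodes.
class MockAttr:
    def __init__(self, obj, attribute, is_numpy_attribute=True):
        self.obj = obj
        self.attribute = attribute
        self.is_numpy_attribute = is_numpy_attribute

def np_func_to_list(func):
    if not func.is_numpy_attribute:
        return []
    return np_func_to_list(func.obj) + [func.attribute]

def pythran_functor(func):
    parts = np_func_to_list(func)
    submodules = "::".join(parts[:-1] + ["functor"])
    return "pythonic::numpy::%s::%s" % (submodules, parts[-1])

# np.linalg.norm -> a nested attribute chain rooted at the plain 'np' name node
np_name = MockAttr(None, "", is_numpy_attribute=False)   # stands in for the 'np' NameNode
norm = MockAttr(MockAttr(np_name, "linalg"), "norm")

print(pythran_functor(norm))
# -> pythonic::numpy::linalg::functor::norm
print("pythonic/include/numpy/%s.hpp" % "/".join(np_func_to_list(norm)))
# -> pythonic/include/numpy/linalg/norm.hpp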
......@@ -14,13 +14,15 @@ cdef class Method:
cdef dict kwargs
cdef readonly object __name__ # for tracing the scanner
## methods commented with '##' out are used by Parsing.py when compiled.
@cython.final
cdef class CompileTimeScope:
cdef public dict entries
cdef public CompileTimeScope outer
cdef declare(self, name, value)
cdef lookup_here(self, name)
cpdef lookup(self, name)
##cdef declare(self, name, value)
##cdef lookup_here(self, name)
##cpdef lookup(self, name)
@cython.final
cdef class PyrexScanner(Scanner):
......@@ -51,15 +53,15 @@ cdef class PyrexScanner(Scanner):
@cython.locals(current_level=cython.long, new_level=cython.long)
cpdef indentation_action(self, text)
#cpdef eof_action(self, text)
cdef next(self)
cdef peek(self)
##cdef next(self)
##cdef peek(self)
#cpdef put_back(self, sy, systring)
#cdef unread(self, token, value)
cdef bint expect(self, what, message = *) except -2
cdef expect_keyword(self, what, message = *)
cdef expected(self, what, message = *)
cdef expect_indent(self)
cdef expect_dedent(self)
cdef expect_newline(self, message=*, bint ignore_semicolon=*)
cdef int enter_async(self) except -1
cdef int exit_async(self) except -1
##cdef bint expect(self, what, message = *) except -2
##cdef expect_keyword(self, what, message = *)
##cdef expected(self, what, message = *)
##cdef expect_indent(self)
##cdef expect_dedent(self)
##cdef expect_newline(self, message=*, bint ignore_semicolon=*)
##cdef int enter_async(self) except -1
##cdef int exit_async(self) except -1
......@@ -2486,7 +2486,7 @@ class PyCont(ExecutionControlCommandBase):
def _pointervalue(gdbval):
"""
Return the value of the pionter as a Python int.
Return the value of the pointer as a Python int.
gdbval.type must be a pointer type
"""
......
......@@ -153,6 +153,13 @@ cdef extern from "Python.h":
# PyErr_SetFromErrno(type);" when the system call returns an
# error.
PyObject* PyErr_SetFromErrnoWithFilenameObject(object type, object filenameObject) except NULL
# Similar to PyErr_SetFromErrno(), with the additional behavior
# that if filenameObject is not NULL, it is passed to the
# constructor of type as a third parameter.
# In the case of OSError exception, this is used to define
# the filename attribute of the exception instance.
PyObject* PyErr_SetFromErrnoWithFilename(object type, char *filename) except NULL
# Return value: Always NULL. Similar to PyErr_SetFromErrno(),
# with the additional behavior that if filename is not NULL, it is
......
cdef extern from "<forward_list>" namespace "std" nogil:
cdef cppclass forward_list[T,ALLOCATOR=*]:
ctypedef T value_type
ctypedef ALLOCATOR allocator_type
# these should really be allocator_type.size_type and
# allocator_type.difference_type to be true to the C++ definition
# but cython doesn't support deferred access on template arguments
ctypedef size_t size_type
ctypedef ptrdiff_t difference_type
cppclass iterator:
iterator()
iterator(iterator &)
T& operator*()
iterator operator++()
bint operator==(iterator)
bint operator!=(iterator)
cppclass const_iterator(iterator):
pass
forward_list() except +
forward_list(forward_list&) except +
forward_list(size_t, T&) except +
#forward_list& operator=(forward_list&)
bint operator==(forward_list&, forward_list&)
bint operator!=(forward_list&, forward_list&)
bint operator<(forward_list&, forward_list&)
bint operator>(forward_list&, forward_list&)
bint operator<=(forward_list&, forward_list&)
bint operator>=(forward_list&, forward_list&)
void assign(size_t, T&)
T& front()
iterator before_begin()
const_iterator const_before_begin "before_begin"()
iterator begin()
const_iterator const_begin "begin"()
iterator end()
const_iterator const_end "end"()
bint empty()
size_t max_size()
void clear()
iterator insert_after(iterator, T&)
void insert_after(iterator, size_t, T&)
iterator erase_after(iterator)
iterator erase_after(iterator, iterator)
void push_front(T&)
void pop_front()
void resize(size_t)
void resize(size_t, T&)
void swap(forward_list&)
void merge(forward_list&)
void merge[Compare](forward_list&, Compare)
void splice_after(iterator, forward_list&)
void splice_after(iterator, forward_list&, iterator)
void splice_after(iterator, forward_list&, iterator, iterator)
void remove(const T&)
void remove_if[Predicate](Predicate)
void reverse()
void unique()
void unique[Predicate](Predicate)
void sort()
void sort[Compare](Compare)
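A short usage sketch for the new ``forward_list`` declarations above, assuming they land under ``libcpp.forward_list`` like the other C++ containers. This is illustrative only; it exercises a few of the declared methods and must be compiled in C++ mode:

# distutils: language = c++
# Minimal sketch exercising the forward_list declarations above.
from cython.operator cimport dereference as deref, preincrement as inc
from libcpp.forward_list cimport forward_list

def build_and_sum():
    cdef forward_list[int] fl
    cdef int i
    for i in range(5):
        fl.push_front(i)          # forward_list only supports pushing at the front
    fl.reverse()                  # restore insertion order: 0, 1, 2, 3, 4
    cdef forward_list[int].iterator it = fl.begin()
    cdef int total = 0
    while it != fl.end():
        total += deref(it)
        inc(it)
    return total                  # 10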
......@@ -28,18 +28,23 @@ cdef class Scanner:
cdef public level
@cython.final
@cython.locals(input_state=long)
cdef next_char(self)
@cython.locals(action=Action)
cpdef tuple read(self)
@cython.final
cdef tuple scan_a_token(self)
cdef tuple position(self)
##cdef tuple position(self) # used frequently by Parsing.py
@cython.final
@cython.locals(cur_pos=Py_ssize_t, cur_line=Py_ssize_t, cur_line_start=Py_ssize_t,
input_state=long, next_pos=Py_ssize_t, state=dict,
buf_start_pos=Py_ssize_t, buf_len=Py_ssize_t, buf_index=Py_ssize_t,
trace=bint, discard=Py_ssize_t, data=unicode, buffer=unicode)
cdef run_machine_inlined(self)
@cython.final
cdef begin(self, state)
@cython.final
cdef produce(self, value, text = *)
......@@ -1858,7 +1858,7 @@ static void __Pyx__ReturnWithStopIteration(PyObject* value) {
}
#if CYTHON_FAST_THREAD_STATE
__Pyx_PyThreadState_assign
#if PY_VERSION_HEX >= 0x030700A2
#if PY_VERSION_HEX >= 0x030700A3
if (!$local_tstate_cname->exc_state.exc_type)
#else
if (!$local_tstate_cname->exc_type)
......
......@@ -360,7 +360,7 @@ static int __Pyx_GetException(PyObject **type, PyObject **value, PyObject **tb)
*value = local_value;
*tb = local_tb;
#if CYTHON_FAST_THREAD_STATE
#if PY_VERSION_HEX >= 0x030700A2
#if PY_VERSION_HEX >= 0x030700A3
tmp_type = tstate->exc_state.exc_type;
tmp_value = tstate->exc_state.exc_value;
tmp_tb = tstate->exc_state.exc_traceback;
......@@ -404,7 +404,7 @@ static CYTHON_INLINE void __Pyx_ReraiseException(void) {
PyObject *type = NULL, *value = NULL, *tb = NULL;
#if CYTHON_FAST_THREAD_STATE
PyThreadState *tstate = PyThreadState_GET();
#if PY_VERSION_HEX >= 0x030700A2
#if PY_VERSION_HEX >= 0x030700A3
type = tstate->exc_state.exc_type;
value = tstate->exc_state.exc_value;
tb = tstate->exc_state.exc_traceback;
......@@ -456,7 +456,7 @@ static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject
#if CYTHON_FAST_THREAD_STATE
static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
#if PY_VERSION_HEX >= 0x030700A2
#if PY_VERSION_HEX >= 0x030700A3
*type = tstate->exc_state.exc_type;
*value = tstate->exc_state.exc_value;
*tb = tstate->exc_state.exc_traceback;
......@@ -473,7 +473,7 @@ static CYTHON_INLINE void __Pyx__ExceptionSave(PyThreadState *tstate, PyObject *
static CYTHON_INLINE void __Pyx__ExceptionReset(PyThreadState *tstate, PyObject *type, PyObject *value, PyObject *tb) {
PyObject *tmp_type, *tmp_value, *tmp_tb;
#if PY_VERSION_HEX >= 0x030700A2
#if PY_VERSION_HEX >= 0x030700A3
tmp_type = tstate->exc_state.exc_type;
tmp_value = tstate->exc_state.exc_value;
tmp_tb = tstate->exc_state.exc_traceback;
......@@ -511,7 +511,7 @@ static CYTHON_INLINE void __Pyx_ExceptionSwap(PyObject **type, PyObject **value,
static CYTHON_INLINE void __Pyx__ExceptionSwap(PyThreadState *tstate, PyObject **type, PyObject **value, PyObject **tb) {
PyObject *tmp_type, *tmp_value, *tmp_tb;
#if PY_VERSION_HEX >= 0x030700A2
#if PY_VERSION_HEX >= 0x030700A3
tmp_type = tstate->exc_state.exc_type;
tmp_value = tstate->exc_state.exc_value;
tmp_tb = tstate->exc_state.exc_traceback;
......
......@@ -677,9 +677,8 @@ static CYTHON_INLINE void * PyThread_tss_get(Py_tss_t *key) {
#ifndef CYTHON_SMALL_CODE
#if defined(__clang__)
#define CYTHON_SMALL_CODE
#elif defined(__GNUC__) && (!(defined(__cplusplus)) || (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ > 4)))
// At least g++ 4.4.7 can generate crashing code with this option. (GH #2235)
#define CYTHON_SMALL_CODE __attribute__((optimize("Os")))
#elif defined(__GNUC__) && (__GNUC__ > 4 || (__GNUC__ == 4 && __GNUC_MINOR__ >= 3))
#define CYTHON_SMALL_CODE __attribute__((cold))
#else
#define CYTHON_SMALL_CODE
#endif
......@@ -796,15 +795,48 @@ static CYTHON_INLINE int __Pyx_inner_PyErr_GivenExceptionMatches2(PyObject *err,
// so far, we only call PyErr_GivenExceptionMatches() with an exception type (not instance) as first argument
// => optimise for that case
static int __Pyx_PyErr_GivenExceptionMatchesTuple(PyObject *exc_type, PyObject *tuple) {
Py_ssize_t i, n;
assert(PyExceptionClass_Check(exc_type));
n = PyTuple_GET_SIZE(tuple);
#if PY_MAJOR_VERSION >= 3
// the tighter subtype checking in Py3 allows faster out-of-order comparison
for (i=0; i<n; i++) {
if (exc_type == PyTuple_GET_ITEM(tuple, i)) return 1;
}
#endif
for (i=0; i<n; i++) {
PyObject *t = PyTuple_GET_ITEM(tuple, i);
#if PY_MAJOR_VERSION < 3
if (likely(exc_type == t)) return 1;
#endif
if (likely(PyExceptionClass_Check(t))) {
if (__Pyx_inner_PyErr_GivenExceptionMatches2(exc_type, NULL, t)) return 1;
} else {
// FIXME: Py3: PyErr_SetString(PyExc_TypeError, "catching classes that do not inherit from BaseException is not allowed");
}
}
return 0;
}
static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches(PyObject *err, PyObject* exc_type) {
if (likely(err == exc_type)) return 1;
if (likely(PyExceptionClass_Check(err))) {
return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
if (likely(PyExceptionClass_Check(exc_type))) {
return __Pyx_inner_PyErr_GivenExceptionMatches2(err, NULL, exc_type);
} else if (likely(PyTuple_Check(exc_type))) {
return __Pyx_PyErr_GivenExceptionMatchesTuple(err, exc_type);
} else {
// FIXME: Py3: PyErr_SetString(PyExc_TypeError, "catching classes that do not inherit from BaseException is not allowed");
}
}
return PyErr_GivenExceptionMatches(err, exc_type);
}
static CYTHON_INLINE int __Pyx_PyErr_GivenExceptionMatches2(PyObject *err, PyObject *exc_type1, PyObject *exc_type2) {
// Only used internally with known exception types => pure safety check assertions.
assert(PyExceptionClass_Check(exc_type1));
assert(PyExceptionClass_Check(exc_type2));
if (likely(err == exc_type1 || err == exc_type2)) return 1;
if (likely(PyExceptionClass_Check(err))) {
return __Pyx_inner_PyErr_GivenExceptionMatches2(err, exc_type1, exc_type2);
......
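The new ``__Pyx_PyErr_GivenExceptionMatchesTuple()`` path above is what lets generated code match a caught exception against a tuple that is only built at run time (the fix listed as Github issue #2425 in the changelog). A small, hypothetical Cython/Python-level sketch of the behaviour it backs:

# Illustrative sketch only: exception matching against a non-literal (runtime) tuple.
def catches(exc_types, raiser):
    # 'exc_types' is a tuple built at run time, not a literal in the except clause,
    # so the tuple-matching helper (rather than the single-class fast path) is exercised.
    try:
        raiser()
    except exc_types:
        return True
    return False

def _raise_value_error():
    raise ValueError("boom")

def demo():
    runtime_tuple = tuple([ValueError, KeyError])   # constructed at run time
    assert catches(runtime_tuple, _raise_value_error)
    assert not catches(runtime_tuple, lambda: None)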
......@@ -713,7 +713,7 @@ static CYTHON_INLINE {{c_ret_type}} __Pyx_PyInt_{{'' if ret_type.is_pyobject els
{{py: c_op = {'Eq': '==', 'Ne': '!='}[op] }}
{{py:
return_compare = (
(lambda a,b,c_op: "if ({a} {c_op} {b}) {return_true}; else {return_false};".format(
(lambda a,b,c_op, return_true=return_true, return_false=return_false: "if ({a} {c_op} {b}) {return_true}; else {return_false};".format(
a=a, b=b, c_op=c_op, return_true=return_true, return_false=return_false))
if ret_type.is_pyobject else
(lambda a,b,c_op: "return ({a} {c_op} {b});".format(a=a, b=b, c_op=c_op))
......
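The default-argument change above works around Python's late binding of closure variables: without binding ``return_true``/``return_false`` as defaults, the template lambda would only see their values at call time. A generic, standalone illustration of that pitfall (not specific to the Cython template):

# Late binding: every lambda closes over the same 'i' and sees its final value.
late = [lambda: i for i in range(3)]
print([f() for f in late])      # [2, 2, 2]

# Binding the current value as a default argument freezes it per lambda.
bound = [lambda i=i: i for i in range(3)]
print([f() for f in bound])     # [0, 1, 2]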
......@@ -235,6 +235,9 @@ static CYTHON_INLINE int __Pyx_PyUnicode_Equals(PyObject* s1, PyObject* s2, int
} else {
int result;
PyObject* py_result = PyObject_RichCompare(s1, s2, equals);
#if PY_MAJOR_VERSION < 3
Py_XDECREF(owned_ref);
#endif
if (!py_result)
return -1;
result = __Pyx_PyObject_IsTrue(py_result);
......
......@@ -603,7 +603,7 @@ static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value);
/////////////// CIntToPy ///////////////
static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value) {
const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0;
const {{TYPE}} neg_one = ({{TYPE}}) (({{TYPE}}) 0 - ({{TYPE}}) 1), const_zero = ({{TYPE}}) 0;
const int is_unsigned = neg_one > const_zero;
if (is_unsigned) {
if (sizeof({{TYPE}}) < sizeof(long)) {
......@@ -696,7 +696,7 @@ static CYTHON_INLINE PyObject* {{TO_PY_FUNCTION}}({{TYPE}} value, Py_ssize_t wid
Py_ssize_t length, ulength;
int prepend_sign, last_one_off;
{{TYPE}} remaining;
const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0;
const {{TYPE}} neg_one = ({{TYPE}}) (({{TYPE}}) 0 - ({{TYPE}}) 1), const_zero = ({{TYPE}}) 0;
const int is_unsigned = neg_one > const_zero;
if (format_char == 'X') {
......@@ -825,7 +825,7 @@ static CYTHON_INLINE {{TYPE}} {{FROM_PY_FUNCTION}}(PyObject *);
{{py: from Cython.Utility import pylong_join }}
static CYTHON_INLINE {{TYPE}} {{FROM_PY_FUNCTION}}(PyObject *x) {
const {{TYPE}} neg_one = ({{TYPE}}) -1, const_zero = ({{TYPE}}) 0;
const {{TYPE}} neg_one = ({{TYPE}}) (({{TYPE}}) 0 - ({{TYPE}}) 1), const_zero = ({{TYPE}}) 0;
const int is_unsigned = neg_one > const_zero;
#if PY_MAJOR_VERSION < 3
if (likely(PyInt_Check(x))) {
......
......@@ -487,3 +487,9 @@ def add_metaclass(metaclass):
orig_vars.pop('__weakref__', None)
return metaclass(cls.__name__, cls.__bases__, orig_vars)
return wrapper
def raise_error_if_module_name_forbidden(full_module_name):
    # it is a bad idea to call the pyx-file cython.pyx, so fail early
if full_module_name == 'cython' or full_module_name.startswith('cython.'):
raise ValueError('cython is a special module, cannot be used as a module name')
......@@ -20,7 +20,7 @@ YEAR = datetime.date.today().strftime('%Y')
# If extensions (or modules to document with autodoc) are in another directory,
# add these directories to sys.path here. If the directory is relative to the
# documentation root, use os.path.abspath to make it absolute, like shown here.
#sys.path.insert(0, os.path.abspath('.'))
sys.path.insert(0, os.path.abspath('..'))
sys.path.append(os.path.abspath('sphinxext'))
# Import support for ipython console session syntax highlighting (lives
......@@ -127,7 +127,7 @@ pygments_style = 'sphinx'
todo_include_todos = True
# intersphinx for standard :keyword:s (def, for, etc.)
intersphinx_mapping = {'python': ('http://docs.python.org/3/', None)}
intersphinx_mapping = {'python': ('https://docs.python.org/3/', None)}
# If true, keep warnings as "system message" paragraphs in the built documents.
#keep_warnings = False
......
from libc.stdlib cimport atoi
cdef parse_charptr_to_py_int(char*s):
cdef parse_charptr_to_py_int(char* s):
assert s is not NULL, "byte string value is NULL"
return atoi(s) # note: atoi() has no error detection!
......@@ -4,4 +4,4 @@ cdef extern from "string.h":
cdef char* data = "hfvcakdfagbcffvschvxcdfgccbcfhvgcsnfxjh"
cdef char* pos = strstr(needle='akd', haystack=data)
print(pos != NULL)
print(pos is not NULL)
cimport cython
@cython.profile(False)
def my_often_called_function():
pass
# mymodule.pxd
# declare a C function as "cpdef" to export it to the module
cdef extern from "math.h":
cpdef double sin(double x)
# mymodule.py
import cython
# override with Python import if not in compiled code
if not cython.compiled:
from math import sin
# calls sin() from math.h when compiled with Cython and math.sin() in Python
print(sin(0))
cdef class Shrubbery:
cdef int width, height
from __future__ import print_function
cdef class Shrubbery:
def __init__(self, w, h):
self.width = w
self.height = h
def describe(self):
print("This shrubbery is", self.width,
"by", self.height, "cubits.")
from my_module cimport Shrubbery
cdef Shrubbery another_shrubbery(Shrubbery sh1):
cdef Shrubbery sh2
sh2 = Shrubbery()
sh2.width = sh1.width
sh2.height = sh1.height
return sh2
from my_module cimport Shrubbery
cdef widen_shrubbery(Shrubbery sh, extra_width):
sh.width = sh.width + extra_width
# delorean.pyx
cdef public struct Vehicle:
int speed
float power
cdef api void activate(Vehicle *v):
if v.speed >= 88 and v.power >= 1.21:
print("Time travel achieved")
\ No newline at end of file
# marty.c
#include "delorean_api.h"
Vehicle car;
int main(int argc, char *argv[]) {
Py_Initialize();
import_delorean();
car.speed = atoi(argv[1]);
car.power = atof(argv[2]);
activate(&car);
Py_Finalize();
}
from __future__ import print_function
cimport cython
ctypedef fused char_or_float:
cython.char
cython.float
char
float
cpdef char_or_float plus_one(char_or_float var):
......@@ -12,7 +11,7 @@ cpdef char_or_float plus_one(char_or_float var):
def show_me():
cdef:
cython.char a = 127
cython.float b = 127
char a = 127
float b = 127
print('char', plus_one(a))
print('float', plus_one(b))
from cpython.ref cimport PyObject
from libc.stdint cimport uintptr_t
python_string = "foo"
cdef void* ptr = <void*>python_string
cdef uintptr_t address_in_c = <uintptr_t>ptr
address_from_void = address_in_c # address_from_void is a python int
cdef PyObject* ptr2 = <PyObject*>python_string
cdef uintptr_t address_in_c2 = <uintptr_t>ptr2
address_from_PyObject = address_in_c2 # address_from_PyObject is a python int
assert address_from_void == address_from_PyObject == id(python_string)
print(<object>ptr) # Prints "foo"
print(<object>ptr2) # prints "foo"
from __future__ import print_function
cdef:
struct Spam:
int tons
int i
float a
Spam *p
void f(Spam *s):
print(s.tons, "Tons of spam")
from __future__ import print_function
DEF FavouriteFood = u"spam"
DEF ArraySize = 42
DEF OtherArraySize = 2 * ArraySize + 17
cdef int a1[ArraySize]
cdef int a2[OtherArraySize]
print("I like", FavouriteFood)
\ No newline at end of file
def f(a, b, *args, c, d = 42, e, **kwds):
...
# We cannot call f with less verbosity than this.
foo = f(4, "bar", c=68, e=1.0)
def g(a, b, *, c, d):
...
# We cannot call g with less verbosity than this.
foo = g(4.0, "something", c=68, d="other")
from libc.stdio cimport FILE, fopen
from libc.stdlib cimport malloc, free
from cpython.exc cimport PyErr_SetFromErrnoWithFilenameObject
def open_file():
cdef FILE* p
p = fopen("spam.txt", "r")
if p is NULL:
PyErr_SetFromErrnoWithFilenameObject(OSError, "spam.txt")
...
def allocating_memory(number=10):
cdef double *my_array = <double *> malloc(number * sizeof(double))
if not my_array: # same as 'is NULL' above
raise MemoryError()
...
free(my_array)
cdef class A:
cdef foo(self)
cdef class B(A):
cdef foo(self, x=*)
cdef class C(B):
cpdef foo(self, x=*, int k=*)
from __future__ import print_function
cdef class A:
cdef foo(self):
print("A")
cdef class B(A):
cdef foo(self, x=None):
print("B", x)
cdef class C(B):
cpdef foo(self, x=True, int k=3):
print("C", x, k)
from __future__ import print_function
cdef class A:
cdef foo(self):
print("A")
cdef class B(A):
cpdef foo(self):
print("B")
class C(B): # NOTE: not cdef class
def foo(self):
print("C")
import numpy as np
def add_one(int[:,:] buf):
for x in range(buf.shape[0]):
for y in range(buf.shape[1]):
buf[x, y] += 1
# exporting_object must be a Python object
# implementing the buffer interface, e.g. a numpy array.
exporting_object = np.zeros((10, 20), dtype=np.intc)
add_one(exporting_object)
import numpy as np
cdef int[:, :, :] to_view, from_view
to_view = np.empty((20, 15, 30), dtype=np.intc)
from_view = np.ones((20, 15, 30), dtype=np.intc)
# copy the elements in from_view to to_view
to_view[...] = from_view
# or
to_view[:] = from_view
# or
to_view[:, :, :] = from_view
from cython cimport view
# direct access in both dimensions, strided in the first dimension, contiguous in the last
cdef int[:, ::view.contiguous] a
# contiguous list of pointers to contiguous lists of ints
cdef int[::view.indirect_contiguous, ::1] b
# direct or indirect in the first dimension, direct in the second dimension
# strided in both dimensions
cdef int[::view.generic, :] c
from cython cimport view
# VALID
cdef int[::view.indirect, ::1, :] a
cdef int[::view.indirect, :, ::1] b
cdef int[::view.indirect_contiguous, ::1, :] c
import numpy as np
def process_buffer(int[:,:] input_view not None,
int[:,:] output_view=None):
if output_view is None:
# Creating a default view, e.g.
output_view = np.empty_like(input_view)
# process 'input_view' into 'output_view'
return output_view
import numpy as np
cdef const double[:] myslice # const item type => read-only view
a = np.linspace(0, 10, num=50)
a.setflags(write=False)
myslice = a
import numpy as np
exporting_object = np.arange(0, 15 * 10 * 20, dtype=np.intc).reshape((15, 10, 20))
cdef int[:, :, :] my_view = exporting_object
# These are all equivalent
my_view[10]
my_view[10, :, :]
my_view[10, ...]
import numpy as np
array = np.arange(20, dtype=np.intc).reshape((2, 10))
cdef int[:, ::1] c_contig = array
cdef int[::1, :] f_contig = c_contig.T
cdef bint is_y_in(const unsigned char[:] string_view):
cdef int i
for i in range(string_view.shape[0]):
if string_view[i] == b'y':
return True
return False
print(is_y_in(b'hello world')) # False
print(is_y_in(b'hello Cython')) # True
cdef extern from "lunch.h":
void eject_tomato(float)
cdef enum otherstuff:
sausage, eggs, lettuce
cdef struct spamdish:
int oz_of_spam
otherstuff filler
cimport shrubbing
import shrubbing
def main():
cdef shrubbing.Shrubbery sh
sh = shrubbing.standard_shrubbery()
print("Shrubbery size is", sh.width, 'x', sh.length)
cimport c_lunch
def eject_tomato(float speed):
c_lunch.eject_tomato(speed)
from __future__ import print_function
cimport dishes
from dishes cimport spamdish
cdef void prepare(spamdish *d):
d.oz_of_spam = 42
d.filler = dishes.sausage
def serve():
cdef spamdish d
prepare(&d)
print(f'{d.oz_of_spam} oz spam, filler no. {d.filler}')
from distutils.core import setup
from Cython.Build import cythonize
setup(ext_modules=cythonize(["landscaping.pyx", "shrubbing.pyx"]))
cdef class Shrubbery:
cdef int width
cdef int length
cdef class Shrubbery:
def __cinit__(self, int w, int l):
self.width = w
self.length = l
def standard_shrubbery():
return Shrubbery(3, 7)
from __future__ import print_function
from volume cimport cube
def menu(description, size):
print(description, ":", cube(size),
"cubic metres of spam")
menu("Entree", 1)
menu("Main course", 3)
menu("Dessert", 2)
cdef float cube(float x):
return x * x * x
......@@ -8,5 +8,5 @@ cdef extern from "Rectangle.h" namespace "shapes":
Rectangle(int, int, int, int) except +
int x0, y0, x1, y1
int getArea()
void getSize(int*width, int*height)
void getSize(int* width, int* height)
void move(int, int)
# distutils: language = c++
cdef extern from "<algorithm>" namespace "std":
T max[T](T a, T b)
print(max[long](3, 4))
print(max(1.5, 2.5)) # simple template argument deduction
# distutils: language = c++
from libcpp.vector cimport vector
def main():
cdef vector[int] v = [4, 6, 5, 10, 3]
cdef int value
for value in v:
print(value)
return [x*x for x in v if x % 2 == 0]
# distutils: language = c++
cdef extern from "<vector>" namespace "std":
cdef cppclass vector[T]:
cppclass iterator:
T operator*()
iterator operator++()
bint operator==(iterator)
bint operator!=(iterator)
vector()
void push_back(T&)
T& operator[](int)
T& at(int)
iterator begin()
iterator end()
cdef vector[int].iterator iter #iter is declared as being of type vector<int>::iterator
# distutils: language = c++
from libcpp.string cimport string
from libcpp.vector cimport vector
py_bytes_object = b'The knights who say ni'
py_unicode_object = u'Those who hear them seldom live to tell the tale.'
cdef string s = py_bytes_object
print(s) # b'The knights who say ni'
cdef string cpp_string = <string> py_unicode_object.encode('utf-8')
print(cpp_string) # b'Those who hear them seldom live to tell the tale.'
cdef vector[int] vect = range(1, 10, 2)
print(vect) # [1, 3, 5, 7, 9]
cdef vector[string] cpp_strings = b'It is a good shrubbery'.split()
print(cpp_strings[1]) # b'is'
# distutils: language = c++
# import dereference and increment operators
from cython.operator cimport dereference as deref, preincrement as inc
cdef extern from "<vector>" namespace "std":
cdef cppclass vector[T]:
cppclass iterator:
T operator*()
iterator operator++()
bint operator==(iterator)
bint operator!=(iterator)
vector()
void push_back(T&)
T& operator[](int)
T& at(int)
iterator begin()
iterator end()
cdef vector[int] *v = new vector[int]()
cdef int i
for i in range(10):
v.push_back(i)
cdef vector[int].iterator it = v.begin()
while it != v.end():
print(deref(it))
inc(it)
del v
# distutils: language = c++
from libcpp.vector cimport vector
cdef vector[int] vect
cdef int i, x
for i in range(10):
vect.push_back(i)
for i in range(10):
print(vect[i])
for x in vect:
print(x)
# distutils: language = c++
from libcpp.vector cimport vector
cdef class VectorStack:
cdef vector[int] v
def push(self, x):
self.v.push_back(x)
def pop(self):
if self.v.empty():
raise IndexError()
x = self.v.back()
self.v.pop_back()
return x
......@@ -105,4 +105,4 @@ Using the Sage notebook
.. [Jupyter] http://jupyter.org/
.. [Sage] W. Stein et al., Sage Mathematics Software, http://sagemath.org
.. [Sage] W. Stein et al., Sage Mathematics Software, http://www.sagemath.org/
......@@ -22,7 +22,7 @@ according to the system used:
- **Mac OS X** To retrieve gcc, one option is to install Apple's
XCode, which can be retrieved from the Mac OS X's install DVDs or
from http://developer.apple.com.
from https://developer.apple.com/.
- **Windows** A popular option is to use the open source MinGW (a
Windows distribution of gcc). See the appendix for instructions for
......@@ -57,6 +57,6 @@ with
pip install Cython --install-option="--no-cython-compile"
.. [Anaconda] http://docs.continuum.io/anaconda/
.. [Canopy] https://enthought.com/products/canopy/
.. [Sage] W. Stein et al., Sage Mathematics Software, http://sagemath.org
.. [Anaconda] https://docs.anaconda.com/anaconda/
.. [Canopy] https://www.enthought.com/product/canopy/
.. [Sage] W. Stein et al., Sage Mathematics Software, http://www.sagemath.org/
......@@ -45,7 +45,7 @@ language.
.. [Cython] G. Ewing, R. W. Bradshaw, S. Behnel, D. S. Seljebotn et al.,
The Cython compiler, http://cython.org.
.. [IronPython] Jim Hugunin et al., http://www.codeplex.com/IronPython.
.. [IronPython] Jim Hugunin et al., https://archive.codeplex.com/?p=IronPython.
.. [Jython] J. Huginin, B. Warsaw, F. Bock, et al.,
Jython: Python for the Java platform, http://www.jython.org.
.. [PyPy] The PyPy Group, PyPy: a Python implementation written in Python,
......@@ -53,4 +53,4 @@ language.
.. [Pyrex] G. Ewing, Pyrex: C-Extensions for Python,
http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/
.. [Python] G. van Rossum et al., The Python programming language,
http://python.org.
https://www.python.org/.
# Makefile for Sphinx documentation
#
# You can set these variables from the command line.
SPHINXOPTS =
SPHINXBUILD = sphinx-build
PAPER =
# Internal variables.
PAPEROPT_a4 = -D latex_paper_size=a4
PAPEROPT_letter = -D latex_paper_size=letter
ALLSPHINXOPTS = -d build/doctrees $(PAPEROPT_$(PAPER)) $(SPHINXOPTS) .
.PHONY: help clean html web htmlhelp latex changes linkcheck
help:
@echo "Please use \`make <target>' where <target> is one of"
@echo " html to make standalone HTML files"
@echo " web to make files usable by Sphinx.web"
@echo " htmlhelp to make HTML files and a HTML help project"
@echo " latex to make LaTeX files, you can set PAPER=a4 or PAPER=letter"
@echo " changes to make an overview over all changed/added/deprecated items"
@echo " linkcheck to check all external links for integrity"
clean:
-rm -rf build/*
html:
mkdir -p build/html build/doctrees
$(SPHINXBUILD) -b html $(ALLSPHINXOPTS) build/html
@echo
@echo "Build finished. The HTML pages are in build/html."
web:
mkdir -p build/web build/doctrees
$(SPHINXBUILD) -b web $(ALLSPHINXOPTS) build/web
@echo
@echo "Build finished; now you can run"
@echo " python -m sphinx.web build/web"
@echo "to start the server."
htmlhelp:
mkdir -p build/htmlhelp build/doctrees
$(SPHINXBUILD) -b htmlhelp $(ALLSPHINXOPTS) build/htmlhelp
@echo
@echo "Build finished; now you can run HTML Help Workshop with the" \
".hhp project file in build/htmlhelp."
latex:
mkdir -p build/latex build/doctrees
$(SPHINXBUILD) -b latex $(ALLSPHINXOPTS) build/latex
@echo
@echo "Build finished; the LaTeX files are in build/latex."
@echo "Run \`make all-pdf' or \`make all-ps' in that directory to" \
"run these through (pdf)latex."
changes:
mkdir -p build/changes build/doctrees
$(SPHINXBUILD) -b changes $(ALLSPHINXOPTS) build/changes
@echo
@echo "The overview file is in build/changes."
linkcheck:
mkdir -p build/linkcheck build/doctrees
$(SPHINXBUILD) -b linkcheck $(ALLSPHINXOPTS) build/linkcheck
@echo
@echo "Link check complete; look for any errors in the above output " \
"or in build/linkcheck/output.txt."
......@@ -51,7 +51,8 @@ system, for example, it might look similar to this::
(``gcc`` will need to have paths to your included header files and paths
to libraries you want to link with.)
After compilation, a ``yourmod.so`` file is written into the target directory
After compilation, a ``yourmod.so`` (:file:`yourmod.pyd` for Windows)
file is written into the target directory
and your module, ``yourmod``, is available for you to import as with any other
Python module. Note that if you are not relying on ``cythonize`` or distutils,
you will not automatically benefit from the platform specific file extension
......@@ -104,7 +105,13 @@ the necessary include files, e.g. for NumPy::
include_path = [numpy.get_include()]
Note for Numpy users. Despite this, you will still get warnings like the
.. note::
Using memoryviews or importing NumPy with ``import numpy`` does not mean that
you have to add the path to NumPy include files. You need to add this path only
if you use ``cimport numpy``.
Despite this, you will still get warnings like the
following from the compiler, because Cython is using a deprecated Numpy API::
.../include/numpy/npy_1_7_deprecated_api.h:15:2: warning: #warning "Using deprecated NumPy API, disable it by " "#defining NPY_NO_DEPRECATED_API NPY_1_7_API_VERSION" [-Wcpp]
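As the warning text itself suggests, it can be silenced per extension by defining ``NPY_NO_DEPRECATED_API``. A minimal, illustrative fragment (the module name ``yourmod`` is a placeholder):

# Illustrative only: silence the deprecated-NumPy-API warning for one extension.
import numpy
from distutils.core import setup
from distutils.extension import Extension
from Cython.Build import cythonize

extensions = [
    Extension(
        "yourmod", ["yourmod.pyx"],
        include_dirs=[numpy.get_include()],    # only needed when you 'cimport numpy'
        define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
    ),
]

setup(ext_modules=cythonize(extensions))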
......@@ -184,7 +191,7 @@ be found in the `distutils documentation`_. Some useful options to know about
are ``include_dirs``, ``libraries``, and ``library_dirs`` which specify where
to find the ``.h`` and library files when linking to external libraries.
.. _distutils documentation: http://docs.python.org/extending/building.html
.. _distutils documentation: https://docs.python.org/extending/building.html
Sometimes this is not enough and you need finer customization of the
distutils :class:`Extension`.
......@@ -243,78 +250,52 @@ Cythonize arguments
The function :func:`cythonize` can take extra arguments which will allow you to
customize your build.
.. py:function:: cythonize(module_list, \
exclude=None, \
nthreads=0, \
aliases=None, \
quiet=False, \
force=False, \
language=None, \
exclude_failures=False, \
**options)
Compile a set of source modules into C/C++ files and return a list of distutils
Extension objects for them.
:param module_list: As module list, pass either a glob pattern, a list of glob
patterns or a list of Extension objects. The latter
allows you to configure the extensions separately
through the normal distutils options.
You can also pass Extension objects that have
glob patterns as their sources. Then, cythonize
will resolve the pattern and create a
copy of the Extension for every matching file.
:param exclude: When passing glob patterns as ``module_list``, you can exclude certain
module names explicitly by passing them into the ``exclude`` option.
:param nthreads: The number of concurrent builds for parallel compilation
(requires the ``multiprocessing`` module).
:param aliases: If you want to use compiler directives like ``# distutils: ...`` but
can only know at compile time (when running the ``setup.py``) which values
to use, you can use aliases and pass a dictionary mapping those aliases
to Python strings when calling :func:`cythonize`. As an example, say you
want to use the compiler
directive ``# distutils: include_dirs = ../static_libs/include/``
but this path isn't always fixed and you want to find it when running
the ``setup.py``. You can then do ``# distutils: include_dirs = MY_HEADERS``,
find the value of ``MY_HEADERS`` in the ``setup.py``, put it in a python
variable called ``foo`` as a string, and then call
``cythonize(..., aliases={'MY_HEADERS': foo})``.
:param quiet: If True, Cython won't print error and warning messages during the compilation.
:param force: Forces the recompilation of the Cython modules, even if the timestamps
don't indicate that a recompilation is necessary.
:param language: To globally enable C++ mode, you can pass ``language='c++'``. Otherwise, this
will be determined at a per-file level based on compiler directives. This
affects only modules found based on file names. Extension instances passed
into :func:`cythonize` will not be changed. It is recommended to rather
use the compiler directive ``# distutils: language = c++`` than this option.
:param exclude_failures: For a broad 'try to compile' mode that ignores compilation
failures and simply excludes the failed extensions,
pass ``exclude_failures=True``. Note that this only
really makes sense for compiling ``.py`` files which can also
be used without compilation.
:param annotate: If ``True``, will produce an HTML file for each of the ``.pyx`` or ``.py``
files compiled. The HTML file gives an indication
of how much Python interaction there is in
each of the source code lines, compared to plain C code.
It also allows you to see the C/C++ code
generated for each line of Cython code. This report is invaluable when
optimizing a function for speed,
and for determining when to :ref:`release the GIL <nogil>`:
in general, a ``nogil`` block may contain only "white" code.
See examples in :ref:`determining_where_to_add_types` or
:ref:`primes`.
:param compiler_directives: Allows you to set compiler directives in the ``setup.py`` like this:
``compiler_directives={'embedsignature': True}``.
See :ref:`compiler-directives`.
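A sketch combining several of these arguments (the glob pattern, alias value and
directory layout are illustrative)::

    import os
    from distutils.core import setup
    from Cython.Build import cythonize

    # Resolve a path that is only known when setup.py runs (hypothetical layout).
    foo = os.path.join(os.path.dirname(__file__), "..", "static_libs", "include")

    setup(
        ext_modules=cythonize(
            "*.pyx",
            aliases={'MY_HEADERS': foo},
            compiler_directives={'embedsignature': True},
            nthreads=4,
            annotate=True,
        )
    )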
.. autofunction:: Cython.Build.cythonize
Compiler options
----------------
Compiler options can be set in the :file:`setup.py`, before calling :func:`cythonize`,
like this::
from distutils.core import setup
from Cython.Build import cythonize
from Cython.Compiler import Options
Options.docstrings = False
setup(
name = "hello",
ext_modules = cythonize("lib.pyx"),
)
Here are the options that are available:
.. autodata:: Cython.Compiler.Options.docstrings
.. autodata:: Cython.Compiler.Options.embed_pos_in_docstring
.. autodata:: Cython.Compiler.Options.emit_code_comments
.. pre_import
.. autodata:: Cython.Compiler.Options.generate_cleanup_code
.. autodata:: Cython.Compiler.Options.clear_to_none
.. autodata:: Cython.Compiler.Options.annotate
.. annotate_coverage_xml
.. autodata:: Cython.Compiler.Options.fast_fail
.. autodata:: Cython.Compiler.Options.warning_errors
.. autodata:: Cython.Compiler.Options.error_on_unknown_names
.. autodata:: Cython.Compiler.Options.error_on_uninitialized
.. autodata:: Cython.Compiler.Options.convert_range
.. autodata:: Cython.Compiler.Options.cache_builtins
.. autodata:: Cython.Compiler.Options.gcc_branch_hints
.. autodata:: Cython.Compiler.Options.lookup_module_cpdef
.. autodata:: Cython.Compiler.Options.embed
.. old_style_globals
.. autodata:: Cython.Compiler.Options.cimport_from_pyx
.. autodata:: Cython.Compiler.Options.buffer_max_dims
.. autodata:: Cython.Compiler.Options.closure_freelist_size
Distributing Cython modules
----------------------------
......@@ -462,6 +443,7 @@ C-compiling the module C files.
Also take a look at the `cython_freeze
<https://github.com/cython/cython/blob/master/bin/cython_freeze>`_ tool.
.. _pyximport:
Compiling with :mod:`pyximport`
===============================
......
......@@ -11,12 +11,7 @@ Contents:
:maxdepth: 2
compilation
language_basics
extension_types
interfacing_with_other_code
special_mention
limitations
directives
Indices and tables
------------------
......
......@@ -14,7 +14,7 @@ Appendix: Installing MinGW on Windows
includes e.g. "c:\\mingw\\bin" (if you installed MinGW to
"c:\\mingw"). The following web-page describes the procedure
in Windows XP (the Vista procedure is similar):
http://support.microsoft.com/kb/310519
https://support.microsoft.com/kb/310519
4. Finally, tell Python to use MinGW as the default compiler
(otherwise it will try for Visual C). If Python is installed to
"c:\\Python27", create a file named
......
......@@ -25,7 +25,7 @@ while the cimport adds functions accessible from Cython.
A Python array is constructed with a type signature and sequence of
initial values. For the possible type signatures, refer to the Python
documentation for the `array module <http://docs.python.org/library/array.html>`_.
documentation for the `array module <https://docs.python.org/library/array.html>`_.
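As a small sketch (assuming the standard ``cpython.array`` cimport), a typed array
of C ints can be created and viewed like this::

    from cpython cimport array
    import array

    cdef array.array int_array = array.array('i', [1, 2, 3])   # 'i' -> C int
    cdef int[:] int_view = int_array                           # cheap, but not free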
Notice that when a Python array is assigned to a variable typed as
memory view, there will be a slight overhead to construct the memory
......
......@@ -30,7 +30,7 @@ type that can encapsulate all memory management.
Defining external declarations
==============================
You can download CAlg `here <https://github.com/fragglet/c-algorithms/archive/master.zip>`_.
You can download CAlg `here <https://codeload.github.com/fragglet/c-algorithms/zip/master>`_.
The C API of the queue implementation, which is defined in the header
file ``c-algorithms/src/queue.h``, essentially looks like this:
......@@ -158,7 +158,7 @@ We can thus change the init function as follows:
exception instance in order to raise it may actually fail because
we are running out of memory. Luckily, CPython provides a C-API
function ``PyErr_NoMemory()`` that safely raises the right
exception for us. Since version 0.14.1, Cython automatically
exception for us. Cython automatically
substitutes this C-API call whenever you write ``raise
MemoryError`` or ``raise MemoryError()``. If you use an older
version, you have to cimport the C-API function from the standard
......@@ -192,7 +192,7 @@ Here is the most basic script for compiling a Cython module::
)
To build against the external C library, we need to make sure Cython finds the necessary libraries.
To build against the external C library, we need to make sure Cython finds the necessary libraries.
There are two ways to achieve this. First, we can tell distutils where to find
the C source to compile the :file:`queue.c` implementation automatically. Alternatively,
we can build and install C-Alg as a system library and dynamically link it. The latter is useful
......@@ -361,7 +361,7 @@ Here, ``Py_ssize_t``::
cdef int pop(self):
return <Py_ssize_t>cqueue.queue_pop_head(self._c_queue)
Normally, in C, we risk loosing data when we convert a larger integer type
Normally, in C, we risk losing data when we convert a larger integer type
to a smaller integer type without checking the boundaries, and ``Py_ssize_t``
may be a larger type than ``int``. But since we control how values are added
to the queue, we already know that all values that are in the queue fit into
......
......@@ -77,7 +77,7 @@ It is shipped and installed with Cython and can be used like this::
>>> import helloworld
Hello World
Since Cython 0.11, the :ref:`Pyximport<pyximport>` module also has experimental
The :ref:`Pyximport<pyximport>` module also has experimental
compilation support for normal Python modules. This allows you to
automatically run Cython on every .pyx and .py module that Python
imports, including the standard library and installed packages.
......@@ -326,7 +326,7 @@ With Cython, it is also possible to take advantage of the C++ language, notably,
part of the C++ standard library is directly importable from Cython code.
Let's see what our :file:`primes.pyx` becomes when
using `vector <http://en.cppreference.com/w/cpp/container/vector>`_ from the C++
using `vector <https://en.cppreference.com/w/cpp/container/vector>`_ from the C++
standard library.
.. note::
......@@ -336,7 +336,7 @@ standard library.
type in the ``array`` standard library module.
There is a method `reserve` available which will avoid copies if you know in advance
how many elements you are going to put in the vector. For more details
see `this page from cppreference <http://en.cppreference.com/w/cpp/container/vector>`_.
see `this page from cppreference <https://en.cppreference.com/w/cpp/container/vector>`_.
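For example, a minimal sketch (not the tutorial code itself) that reserves space up
front before filling the vector::

    # distutils: language = c++    (must be at the top of the .pyx file)
    from libcpp.vector cimport vector

    def first_n_squares(int n):
        cdef vector[int] v
        v.reserve(n)            # a single allocation, no reallocation while growing
        for i in range(n):
            v.push_back(i * i)
        return v                # coerces to a Python list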
.. literalinclude:: ../../examples/tutorial/cython_tutorial/primes_cpp.pyx
:linenos:
......
......@@ -92,7 +92,7 @@ names like this::
char* strstr(const char*, const char*)
However, this prevents Cython code from calling it with keyword
arguments (supported since Cython 0.19). It is therefore preferable
arguments. It is therefore preferable
to write the declaration like this instead:
.. literalinclude:: ../../examples/tutorial/external/keyword_args.pyx
......
......@@ -23,7 +23,8 @@ the Cython version -- Cython uses ".pyx" as its file suffix.
.. literalinclude:: ../../examples/tutorial/numpy/convolve_py.py
This should be compiled to produce :file:`yourmod.so` (for Linux systems). We
This should be compiled to produce :file:`yourmod.so` (for Linux systems; on Windows
systems, it will be :file:`yourmod.pyd`). We
run a Python session to test both the Python version (imported from
``.py``-file) and the compiled Cython module.
......
......@@ -44,14 +44,9 @@ If your profiling is messed up because of the call overhead to some small
functions that you rather do not want to see in your profile - either because
you plan to inline them anyway or because you are sure that you can't make them
any faster - you can use a special decorator to disable profiling for one
function only::
cimport cython
@cython.profile(False)
def my_often_called_function():
pass
function only (regardless of whether it is globally enabled or not):
.. literalinclude:: ../../examples/tutorial/profiling_tutorial/often_called.pyx
Enabling line tracing
---------------------
......@@ -80,7 +75,7 @@ Enabling coverage analysis
--------------------------
Since Cython 0.23, line tracing (see above) also enables support for coverage
reporting with the `coverage.py <http://nedbatchelder.com/code/coverage/>`_ tool.
reporting with the `coverage.py <http://coverage.readthedocs.io/>`_ tool.
To make the coverage analysis understand Cython modules, you also need to enable
Cython's coverage plugin in your ``.coveragerc`` file as follows:
......@@ -116,7 +111,7 @@ turning it into Cython code and keep profiling until it is fast enough.
As a toy example, we would like to evaluate the summation of the reciprocals of
squares up to a certain integer :math:`n` for evaluating :math:`\pi`. The
relation we want to use has been proven by Euler in 1735 and is known as the
`Basel problem <http://en.wikipedia.org/wiki/Basel_problem>`_.
`Basel problem <https://en.wikipedia.org/wiki/Basel_problem>`_.
.. math::
......@@ -160,7 +155,7 @@ Running this on my box gives the following output:
This contains the information that the code runs in 6.2 CPU seconds. Note that
the code got slower by 2 seconds because it ran inside the cProfile module. The
table contains the real valuable information. You might want to check the
Python `profiling documentation <http://docs.python.org/library/profile.html>`_
Python `profiling documentation <https://docs.python.org/library/profile.html>`_
for the nitty-gritty details. The most important columns here are tottime (total
time spent in this function **not** counting functions that were called by this
function) and cumtime (total time spent in this function **also** counting the
......
......@@ -299,25 +299,11 @@ Calling C functions
Normally, it isn't possible to call C functions in pure Python mode as there
is no general way to support it in normal (uncompiled) Python. However, in
cases where an equivalent Python function exists, this can be achieved by
combining C function coercion with a conditional import as follows::
combining C function coercion with a conditional import as follows:
# in mymodule.pxd:
.. literalinclude:: ../../examples/tutorial/pure/mymodule.pxd
# declare a C function as "cpdef" to export it to the module
cdef extern from "math.h":
cpdef double sin(double x)
# in mymodule.py:
import cython
# override with Python import if not in compiled code
if not cython.compiled:
from math import sin
# calls sin() from math.h when compiled with Cython and math.sin() in Python
print(sin(0))
.. literalinclude:: ../../examples/tutorial/pure/mymodule.py
Note that the "sin" function will show up in the module namespace of "mymodule"
here (i.e. there will be a ``mymodule.sin()`` function). You can mark it as an
......@@ -334,7 +320,7 @@ to make the names match again.
Using C arrays for fixed size lists
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Since Cython 0.22, C arrays can automatically coerce to Python lists or tuples.
C arrays can automatically coerce to Python lists or tuples.
This can be exploited to replace fixed size Python lists in Python code by C
arrays when compiled. An example:
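A minimal sketch of the idea (the function name is illustrative; this is not the
included example file)::

    import cython

    @cython.locals(counts=cython.int[10], digit=cython.int)
    def count_digits(digits):
        counts = [0] * 10          # becomes a C int[10] when compiled
        for digit in digits:
            counts[digit] += 1
        return counts              # the C array coerces back to a Python list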
......
......@@ -20,7 +20,7 @@ focusses on core development issues. Feel free to use it to report a
clear bug, to ask for guidance if you have time to spare to develop
Cython, or if you have suggestions for future development.
.. [DevList] Cython developer mailing list: http://mail.python.org/mailman/listinfo/cython-devel
.. [DevList] Cython developer mailing list: https://mail.python.org/mailman/listinfo/cython-devel
.. [Seljebotn09] D. S. Seljebotn, Fast numerical computations with Cython,
Proceedings of the 8th Python in Science Conference, 2009.
.. [UserList] Cython users mailing list: http://groups.google.com/group/cython-users
.. [UserList] Cython users mailing list: https://groups.google.com/group/cython-users
......@@ -39,12 +39,12 @@ is that it has no support for calling the Python/C API for operations
it does not support natively, and supports very few of the standard
Python modules.
.. [ctypes] http://docs.python.org/library/ctypes.html.
.. [ctypes] https://docs.python.org/library/ctypes.html.
.. there's also the original ctypes home page: http://python.net/crew/theller/ctypes/
.. [Pyrex] G. Ewing, Pyrex: C-Extensions for Python,
http://www.cosc.canterbury.ac.nz/greg.ewing/python/Pyrex/
.. [ShedSkin] M. Dufour, J. Coughlan, ShedSkin,
http://code.google.com/p/shedskin/
https://github.com/shedskin/shedskin
.. [SWIG] David M. Beazley et al.,
SWIG: An Easy to Use Tool for Integrating Scripting Languages with C and C++,
http://www.swig.org.
......@@ -164,7 +164,7 @@ the assignment in a try-finally construct:
To convert the byte string back into a C :c:type:`char*`, use the
opposite assignment::
cdef char* other_c_string = py_string
cdef char* other_c_string = py_string # other_c_string is a 0-terminated string.
This is a very fast operation after which ``other_c_string`` points to
the byte string buffer of the Python string itself. It is tied to the
......@@ -260,7 +260,7 @@ not modify a string they return, for example:
.. literalinclude:: ../../examples/tutorial/string/someheader.h
Since version 0.18, Cython has support for the ``const`` modifier in
Cython has support for the ``const`` modifier in
the language, so you can declare the above functions straight away as
follows:
......@@ -296,7 +296,7 @@ bytes most of which tend to be 0.
Again, no bounds checking is done if slice indices are provided, so
incorrect indices lead to data corruption and crashes. However, using
negative indices is possible since Cython 0.17 and will inject a call
negative indices is possible and will inject a call
to :c:func:`strlen()` in order to determine the string length.
Obviously, this only works for 0-terminated strings without internal
null bytes. Text encoded in UTF-8 or one of the ISO-8859 encodings is
......@@ -467,7 +467,7 @@ supports the ``__future__`` import ``unicode_literals`` that instructs
the parser to read all unprefixed :obj:`str` literals in a source file as
unicode string literals, just like Python 3.
.. _`CEP 108`: http://wiki.cython.org/enhancements/stringliterals
.. _`CEP 108`: https://github.com/cython/cython/wiki/enhancements-stringliterals
Single bytes and characters
---------------------------
......@@ -475,7 +475,7 @@ Single bytes and characters
The Python C-API uses the normal C :c:type:`char` type to represent
a byte value, but it has two special integer types for a Unicode code
point value, i.e. a single Unicode character: :c:type:`Py_UNICODE`
and :c:type:`Py_UCS4`. Since version 0.13, Cython supports the
and :c:type:`Py_UCS4`. Cython supports the
first natively; support for :c:type:`Py_UCS4` is new in Cython 0.15.
:c:type:`Py_UNICODE` is either defined as an unsigned 2-byte or
4-byte integer, or as :c:type:`wchar_t`, depending on the platform.
......
......@@ -105,21 +105,23 @@ will be very inefficient. If the attribute is private, it will not work at all
-- the code will compile, but an attribute error will be raised at run time.
The solution is to declare ``sh`` as being of type :class:`Shrubbery`, as
follows::
follows:
cdef widen_shrubbery(Shrubbery sh, extra_width):
sh.width = sh.width + extra_width
.. literalinclude:: ../../examples/userguide/extension_types/widen_shrubbery.pyx
Now the Cython compiler knows that ``sh`` has a C attribute called
:attr:`width` and will generate code to access it directly and efficiently.
The same consideration applies to local variables, for example,::
The same consideration applies to local variables, for example:
.. literalinclude:: ../../examples/userguide/extension_types/shrubbery_2.pyx
.. note::
cdef Shrubbery another_shrubbery(Shrubbery sh1):
cdef Shrubbery sh2
sh2 = Shrubbery()
sh2.width = sh1.width
sh2.height = sh1.height
return sh2
Here we ``cimport`` the class :class:`Shrubbery`; this is necessary
to declare the type at compile time. To be able to ``cimport`` an extension type,
we split the class definition into two parts, one in a definition file and
the other in the corresponding implementation file. You should read
:ref:`sharing_extension_types` to learn how to do that.
Type Testing and Casting
......@@ -347,7 +349,7 @@ inherit from multiple extension types provided that the usual Python rules for
multiple inheritance are followed (i.e. the C layouts of all the base classes
must be compatible).
Since Cython 0.13.1, there is a way to prevent extension types from
There is a way to prevent extension types from
being subtyped in Python. This is done via the ``final`` directive,
usually set on an extension type using a decorator::
......@@ -419,7 +421,7 @@ compatible types.::
cdef void* ptr
def __dealloc__(self):
if self.ptr != NULL:
if self.ptr is not NULL:
free(self.ptr)
@staticmethod
......
......@@ -355,6 +355,10 @@ It is also possible to combine a header file and verbatim C code::
In this case, the C code ``#undef int`` is put right after
``#include "badheader.h"`` in the C code generated by Cython.
Note that the string is parsed like any other docstring in Python.
If you require character escapes to be passed into the C code file,
use a raw docstring, i.e. ``r""" ... """``.
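For illustration, a hedged sketch (``badheader.h`` and the declared function are
placeholders)::

    cdef extern from "badheader.h":
        r"""
        /* verbatim C code, emitted right after the #include */
        #undef int
        """
        void do_something()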
Using Cython Declarations from C
================================
......@@ -458,32 +462,12 @@ contains the api call which is generating the segmentation fault does not call
the :func:`import_modulename` function before the api call which crashes.
Any public C type or extension type declarations in the Cython module are also
made available when you include :file:`modulename_api.h`.::
# delorean.pyx
cdef public struct Vehicle:
int speed
float power
cdef api void activate(Vehicle *v):
if v.speed >= 88 and v.power >= 1.21:
print("Time travel achieved")
.. sourcecode:: c
made available when you include :file:`modulename_api.h`.:
# marty.c
#include "delorean_api.h"
.. literalinclude:: ../../examples/userguide/external_C_code/delorean.pyx
Vehicle car;
int main(int argc, char *argv[]) {
Py_Initialize();
import_delorean();
car.speed = atoi(argv[1]);
car.power = atof(argv[2]);
activate(&car);
Py_Finalize();
}
.. literalinclude:: ../../examples/userguide/external_C_code/marty.c
:language: C
.. note::
......@@ -587,7 +571,7 @@ header::
If the callback may be called from another non-Python thread,
care must be taken to initialize the GIL first, through a call to
`PyEval_InitThreads() <http://docs.python.org/dev/c-api/init.html#PyEval_InitThreads>`_.
`PyEval_InitThreads() <https://docs.python.org/dev/c-api/init.html#c.PyEval_InitThreads>`_.
If you're already using :ref:`cython.parallel <parallel>` in your module, this will already have been taken care of.
The GIL may also be acquired through the ``with gil`` statement::
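    # A sketch, not taken from the original source: a nogil function that
    # re-acquires the GIL before touching Python objects.
    # ``log_value`` stands in for any Python-level callable.
    cdef void callback(int value) nogil:
        with gil:
            log_value(value)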
......
......@@ -12,7 +12,7 @@ operate on values of multiple types. Thus fused types allow `generic
programming`_ and are akin to templates in C++ or generics in languages like
Java / C#.
.. _generic programming: http://en.wikipedia.org/wiki/Generic_programming
.. _generic programming: https://en.wikipedia.org/wiki/Generic_programming
.. Note:: Support is still somewhat experimental; there may be bugs!
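A minimal sketch of a fused type (the names are illustrative)::

    ctypedef fused numeric:
        int
        double

    def double_it(numeric x):
        # one C specialisation is generated per constituent type
        return x * 2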
......
......@@ -107,21 +107,9 @@ You can declare classes with :keyword:`cdef`, making them :ref:`extension-types`
have a behavior very close to python classes, but are faster because they use a ``struct``
internally to store attributes.
Here is a simple example::
Here is a simple example:
from __future__ import print_function
cdef class Shrubbery:
cdef int width, height
def __init__(self, w, h):
self.width = w
self.height = h
def describe(self):
print("This shrubbery is", self.width,
"by", self.height, "cubits.")
.. literalinclude:: ../../examples/userguide/extension_types/shrubbery.pyx
You can read more about them in :ref:`extension-types`.
......@@ -179,20 +167,9 @@ Grouping multiple C declarations
--------------------------------
If you have a series of declarations that all begin with :keyword:`cdef`, you
can group them into a :keyword:`cdef` block like this::
from __future__ import print_function
can group them into a :keyword:`cdef` block like this:
cdef:
struct Spam:
int tons
int i
float a
Spam *p
void f(Spam *s):
print(s.tons, "Tons of spam")
.. literalinclude:: ../../examples/userguide/language_basics/cdef_block.pyx
.. _cpdef:
.. _cdef:
......@@ -311,35 +288,15 @@ To avoid repetition (and potential future inconsistencies), default argument val
not visible in the declaration (in ``.pxd`` files) but only in
the implementation (in ``.pyx`` files).
When in a ``.pyx`` file, the signature is the same as it is in Python itself::
from __future__ import print_function
cdef class A:
cdef foo(self):
print("A")
cdef class B(A):
cdef foo(self, x=None):
print("B", x)
cdef class C(B):
cpdef foo(self, x=True, int k=3):
print("C", x, k)
When in a ``.pyx`` file, the signature is the same as it is in Python itself:
.. literalinclude:: ../../examples/userguide/language_basics/optional_subclassing.pyx
When in a ``.pxd`` file, the signature is different like this example: ``cdef foo(x=*)``.
This is because the program calling the function just needs to know what signatures are
possible in C, but doesn't need to know the value of the default arguments.::
cdef class A:
cdef foo(self)
cdef class B(A):
cdef foo(self, x=*)
cdef class C(B):
cpdef foo(self, x=*, int k=*)
possible in C, but doesn't need to know the value of the default arguments.:
.. literalinclude:: ../../examples/userguide/language_basics/optional_subclassing.pxd
.. note::
The number of arguments may increase when subclassing,
......@@ -355,13 +312,9 @@ Keyword-only Arguments
----------------------
As in Python 3, ``def`` functions can have keyword-only arguments
listed after a ``"*"`` parameter and before a ``"**"`` parameter if any::
def f(a, b, *args, c, d = 42, e, **kwds):
...
listed after a ``"*"`` parameter and before a ``"**"`` parameter if any:
# We cannot call f with less verbosity than this.
foo = f(4, "bar", c=68, e=1.0)
.. literalinclude:: ../../examples/userguide/language_basics/kwargs_1.pyx
As shown above, the ``c``, ``d`` and ``e`` arguments cannot be
passed as positional arguments and must be passed as keyword arguments.
......@@ -369,16 +322,12 @@ Furthermore, ``c`` and ``e`` are **required** keyword arguments
since they do not have a default value.
A single ``"*"`` without argument name can be used to
terminate the list of positional arguments::
def g(a, b, *, c, d):
...
terminate the list of positional arguments:
# We cannot call g with less verbosity than this.
foo = g(4.0, "something", c=68, d="other")
.. literalinclude:: ../../examples/userguide/language_basics/kwargs_2.pyx
Shown above, the signature takes exactly two positional
parameters and has two required keyword parameters
parameters and has two required keyword parameters.
Function Pointers
-----------------
......@@ -479,12 +428,9 @@ returns ``NULL``. The except clause doesn't work that way; its only purpose is
for propagating Python exceptions that have already been raised, either by a Cython
function or a C function that calls Python/C API routines. To get an exception
from a non-Python-aware function such as :func:`fopen`, you will have to check the
return value and raise it yourself, for example,::
return value and raise it yourself, for example:
cdef FILE* p
p = fopen("spam.txt", "r")
if p == NULL:
raise SpamError("Couldn't open the spam file")
.. literalinclude:: ../../examples/userguide/language_basics/open_file.pyx
.. _overriding_in_extension_types:
......@@ -492,39 +438,15 @@ Overriding in extension types
-----------------------------
``cpdef`` methods can override ``cdef`` methods::
from __future__ import print_function
``cpdef`` methods can override ``cdef`` methods:
cdef class A:
cdef foo(self):
print("A")
cdef class B(A):
cdef foo(self, x=None):
print("B", x)
cdef class C(B):
cpdef foo(self, x=True, int k=3):
print("C", x, k)
.. literalinclude:: ../../examples/userguide/language_basics/optional_subclassing.pyx
When subclassing an extension type with a Python class,
``def`` methods can override ``cpdef`` methods but not ``cdef``
methods::
from __future__ import print_function
cdef class A:
cdef foo(self):
print("A")
methods:
cdef class B(A):
cpdef foo(self):
print("B")
class C(B): # NOTE: not cdef class
def foo(self):
print("C")
.. literalinclude:: ../../examples/userguide/language_basics/override.pyx
If ``C`` above would be an extension type (``cdef class``),
this would not work correctly.
......@@ -556,6 +478,8 @@ possibilities.
+----------------------------+--------------------+------------------+
| char* | str/bytes | str/bytes [#]_ |
+----------------------------+--------------------+------------------+
| C array | iterable | list [#2]_ |
+----------------------------+--------------------+------------------+
| struct, | | dict [#1]_ |
| union | | |
+----------------------------+--------------------+------------------+
......@@ -568,6 +492,10 @@ possibilities.
combinations. An example is a union of an ``int`` and a ``char*``,
in which case the pointer value may or may not be a valid pointer.
.. [#2] Other than signed/unsigned char[].
The conversion will fail if the length of the C array is not known at compile time,
and when using a slice of a C array.
Caveats when using a Python string in a C context
-------------------------------------------------
......@@ -632,26 +560,9 @@ You can also cast a C pointer back to a Python object reference
with ``<object>``, or a more specific builtin or extension type
(e.g. ``<MyExtType>ptr``). This will increase the reference count of
the object by one, i.e. the cast returns an owned reference.
Here is an example::
from cpython.ref cimport PyObject
from libc.stdint cimport uintptr_t
python_string = "foo"
cdef void* ptr = <void*>python_string
cdef uintptr_t adress_in_c = <uintptr_t>ptr
address_from_void = adress_in_c # address_from_void is a python int
cdef PyObject* ptr2 = <PyObject*>python_string
cdef uintptr_t address_in_c2 = <uintptr_t>ptr2
address_from_PyObject = address_in_c2 # address_from_PyObject is a python int
assert address_from_void == address_from_PyObject == id(python_string)
print(<object>ptr) # Prints "foo"
print(<object>ptr2) # prints "foo"
Here is an example:
.. literalinclude:: ../../examples/userguide/language_basics/casting_python.pyx
The precedence of ``<...>`` is such that ``<type>a.b.c`` is interpreted as ``<type>(a.b.c)``.
......@@ -971,13 +882,7 @@ the source at that point as a literal. For this to work, the compile-time
expression must evaluate to a Python value of type ``int``, ``long``,
``float``, ``bytes`` or ``unicode`` (``str`` in Py3).
::
from __future__ import print_function
cdef int a1[ArraySize]
cdef int a2[OtherArraySize]
print("I like", FavouriteFood)
.. literalinclude:: ../../examples/userguide/language_basics/compile_time.pyx
Conditional Statements
----------------------
......
......@@ -71,14 +71,9 @@ This also works conveniently as function arguments:
The ``not None`` declaration for the argument automatically rejects
None values as input, which would otherwise be allowed. The reason why
None is allowed by default is that it is conveniently used for return
arguments::
arguments:
def process_buffer(int[:,:] input not None,
int[:,:] output = None):
if output is None:
output = ... # e.g. numpy.empty_like(input)
# process 'input' into 'output'
return output
.. literalinclude:: ../../examples/userguide/memoryviews/not_none.pyx
Cython will reject incompatible buffers automatically, e.g. passing a
three dimensional buffer into a function that requires a two
......@@ -102,40 +97,24 @@ dimension::
print(buf[-1,-2])
The following function loops over each dimension of a 2D array and
adds 1 to each item::
adds 1 to each item:
def add_one(int[:,:] buf):
for x in xrange(buf.shape[0]):
for y in xrange(buf.shape[1]):
buf[x,y] += 1
.. literalinclude:: ../../examples/userguide/memoryviews/add_one.pyx
Indexing and slicing can be done with or without the GIL. It basically works
like NumPy. If indices are specified for every dimension you will get an element
of the base type (e.g. `int`). Otherwise, you will get a new view. An Ellipsis
means you get consecutive slices for every unspecified dimension::
means you get consecutive slices for every unspecified dimension:
cdef int[:, :, :] my_view = exporting_object
# These are all equivalent
my_view[10]
my_view[10, :, :]
my_view[10, ...]
.. literalinclude:: ../../examples/userguide/memoryviews/slicing.pyx
Copying
-------
Memory views can be copied in place::
cdef int[:, :, :] to_view, from_view
...
Memory views can be copied in place:
# copy the elements in from_view to to_view
to_view[...] = from_view
# or
to_view[:] = from_view
# or
to_view[:, :, :] = from_view
.. literalinclude:: ../../examples/userguide/memoryviews/copy.pyx
They can also be copied with the ``copy()`` and ``copy_fortran()`` methods; see
:ref:`view_copy_c_fortran`.
......@@ -146,10 +125,9 @@ Transposing
-----------
In most cases (see below), the memoryview can be transposed in the same way that
NumPy slices can be transposed::
NumPy slices can be transposed:
cdef int[:, ::1] c_contig = ...
cdef int[::1, :] f_contig = c_contig.T
.. literalinclude:: ../../examples/userguide/memoryviews/transpose.pyx
This gives a new, transposed, view on the data.
......@@ -171,6 +149,9 @@ As for NumPy, new axes can be introduced by indexing an array with ``None`` ::
# 2D array with shape (50, 1)
myslice[:, None]
# 3D array with shape (1, 10, 1)
myslice[None, 10:-20:2, None]
One may mix new axis indexing with all other forms of indexing and slicing.
See also an example_.
......@@ -178,13 +159,14 @@ Read-only views
---------------
Since Cython 0.28, the memoryview item type can be declared as ``const`` to
support read-only buffers as input::
support read-only buffers as input:
cdef const double[:] myslice # const item type => read-only view
.. literalinclude:: ../../examples/userguide/memoryviews/np_flag_const.pyx
a = np.linspace(0, 10, num=50)
a.setflags(write=False)
myslice = a
Using a non-const memoryview with a binary Python string produces a runtime error.
You can solve this issue with a ``const`` memoryview:
.. literalinclude:: ../../examples/userguide/memoryviews/view_string.pyx
Note that this does not *require* the input buffer to be read-only::
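    # Sketch (assumed, not the original snippet): a const view also accepts
    # a writable buffer; it only promises not to write through this view.
    import numpy as np

    a = np.linspace(0, 10, num=50)        # 'a' itself stays writable
    cdef const double[:] myslice = a      # read-only access through the view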
......@@ -436,31 +418,21 @@ The flags are as follows:
* contiguous - contiguous and direct
* indirect_contiguous - the list of pointers is contiguous
and they can be used like this::
and they can be used like this:
from cython cimport view
# direct access in both dimensions, strided in the first dimension, contiguous in the last
cdef int[:, ::view.contiguous] a
.. literalinclude:: ../../examples/userguide/memoryviews/memory_layout.pyx
# contiguous list of pointers to contiguous lists of ints
cdef int[::view.indirect_contiguous, ::1] b
Only the first, last or the dimension following an indirect dimension may be
specified contiguous:
# direct or indirect in the first dimension, direct in the second dimension
# strided in both dimensions
cdef int[::view.generic, :] c
.. literalinclude:: ../../examples/userguide/memoryviews/memory_layout_2.pyx
Only the first, last or the dimension following an indirect dimension may be
specified contiguous::
::
# INVALID
cdef int[::view.contiguous, ::view.indirect, :] a
cdef int[::1, ::view.indirect, :] b
cdef int[::view.contiguous, ::view.indirect, :] d
cdef int[::1, ::view.indirect, :] e
# VALID
cdef int[::view.indirect, ::1, :] a
cdef int[::view.indirect, :, ::1] b
cdef int[::view.indirect_contiguous, ::1, :]
The difference between the `contiguous` flag and the `::1` specifier is that the
former specifies contiguity for only one dimension, whereas the latter specifies
......@@ -668,7 +640,7 @@ You can call the function in a Cython file in the following way:
Several things to note:
- ``::1`` requests a C contiguous view, and fails if the buffer is not C contiguous.
See :ref:`c_and_fortran_contiguous_memoryviews`.
- ``&arr_memview[0]`` can be understood as 'the adress of the first element of the
- ``&arr_memview[0]`` can be understood as 'the address of the first element of the
memoryview'. For contiguous arrays, this is equivalent to the
start address of the flat memory buffer.
- ``arr_memview.shape[0]`` could have been replaced by ``arr_memview.size``,
......@@ -686,7 +658,5 @@ call functions in C files, see :ref:`using_c_libraries`.
.. _GIL: http://docs.python.org/dev/glossary.html#term-global-interpreter-lock
.. _new style buffers: http://docs.python.org/c-api/buffer.html
.. _pep 3118: http://www.python.org/peps/pep-3118.html
.. _NumPy: http://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#memory-layout
.. _example: http://www.scipy.org/Numpy_Example_List#newaxis
.. _NumPy: https://docs.scipy.org/doc/numpy/reference/arrays.ndarray.html#memory-layout
.. _example: https://docs.scipy.org/doc/numpy/reference/arrays.indexing.html
......@@ -160,7 +160,8 @@ Cython version -- Cython uses ``.pyx`` as its file suffix.
.. literalinclude:: ../../examples/userguide/numpy_tutorial/compute_py.py
This should be compiled to produce :file:`compute_cy.so` (for Linux systems). We
This should be compiled to produce :file:`convolve_cy.so` (for Linux systems;
on Windows systems, this will be a ``.pyd`` file). We
run a Python session to test both the Python version (imported from
``.py``-file) and the compiled Cython module.
......