author     mattip <matti.picus@gmail.com>    2019-02-20 23:46:20 +0200
committer  mattip <matti.picus@gmail.com>    2019-02-28 11:43:59 +0200
commit     62433284d65a3629a199958da2df3a807c60fab4 (patch)
tree       61a822b3dab1edc78eff9019e61dc5e1dd0e607d
parent     b9ab1a57b9c7ff9462b8d678bce91274d0ad4d12 (diff)
download   numpy-62433284d65a3629a199958da2df3a807c60fab4.tar.gz
DOC: reduce warnings when building, reword, tweak doc building
-rw-r--r--  azure-pipelines.yml                              |   2
-rw-r--r--  doc/TESTS.rst.txt                                |   7
-rw-r--r--  doc/release/1.16.0-notes.rst                     |   4
-rw-r--r--  doc/source/dev/index.rst                         |   3
-rw-r--r--  doc/source/reference/arrays.dtypes.rst           |   8
-rw-r--r--  doc/source/reference/arrays.ndarray.rst          |  17
-rw-r--r--  doc/source/reference/arrays.scalars.rst          |   8
-rw-r--r--  doc/source/reference/c-api.array.rst             |  19
-rw-r--r--  doc/source/reference/maskedarray.baseclass.rst   |   9
-rw-r--r--  doc/source/reference/routines.ma.rst             |   9
-rw-r--r--  doc/source/reference/routines.testing.rst        |   2
-rw-r--r--  doc/source/reference/ufuncs.rst                  |   4
-rw-r--r--  numpy/core/_add_newdocs.py                       | 139
-rw-r--r--  numpy/core/defchararray.py                       |   2
-rw-r--r--  numpy/core/fromnumeric.py                        |   1
-rw-r--r--  numpy/core/shape_base.py                         |   8
-rw-r--r--  numpy/doc/glossary.py                            |  20
-rw-r--r--  numpy/doc/structured_arrays.py                   |  15
-rw-r--r--  numpy/linalg/linalg.py                           |   2

19 files changed, 162 insertions(+), 117 deletions(-)
diff --git a/azure-pipelines.yml b/azure-pipelines.yml
index f964fffaa..3f5a221de 100644
--- a/azure-pipelines.yml
+++ b/azure-pipelines.yml
@@ -63,7 +63,7 @@ jobs:
displayName: 'make gfortran available on mac os vm'
- script: python -m pip install --upgrade pip setuptools wheel
displayName: 'Install tools'
- - script: python -m pip install cython nose pytz pytest pickle5 vulture docutils sphinx numpydoc matplotlib
+ - script: python -m pip install cython nose pytz pytest pickle5 vulture docutils sphinx==1.7.9 numpydoc matplotlib
displayName: 'Install dependencies; some are optional to avoid test skips'
- script: /bin/bash -c "! vulture . --min-confidence 100 --exclude doc/,numpy/distutils/ | grep 'unreachable'"
displayName: 'Check for unreachable code paths in Python modules'
diff --git a/doc/TESTS.rst.txt b/doc/TESTS.rst.txt
index daf82aaaa..8169ea38a 100644
--- a/doc/TESTS.rst.txt
+++ b/doc/TESTS.rst.txt
@@ -37,10 +37,9 @@ or from the command line::
$ python runtests.py
-SciPy uses the testing framework from NumPy (specifically
-:ref:`numpy-testing`), so all the SciPy examples shown here are also
-applicable to NumPy. NumPy's full test suite can be run as
-follows::
+SciPy uses the testing framework from :mod:`numpy.testing`, so all
+the SciPy examples shown here are also applicable to NumPy. NumPy's full test
+suite can be run as follows::
>>> import numpy
>>> numpy.test()
diff --git a/doc/release/1.16.0-notes.rst b/doc/release/1.16.0-notes.rst
index 341d5f715..1034d6e6c 100644
--- a/doc/release/1.16.0-notes.rst
+++ b/doc/release/1.16.0-notes.rst
@@ -176,7 +176,7 @@ of:
* :c:member:`PyUFuncObject.core_dim_flags`
* :c:member:`PyUFuncObject.core_dim_sizes`
* :c:member:`PyUFuncObject.identity_value`
-* :c:function:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`
+* :c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`
New Features
@@ -407,7 +407,7 @@ Additionally, `logaddexp` now has an identity of ``-inf``, allowing it to be
called on empty sequences, where previously it could not be.
This is possible thanks to the new
-:c:function:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`, which allows
+:c:func:`PyUFunc_FromFuncAndDataAndSignatureAndIdentity`, which allows
arbitrary values to be used as identities now.
Improved conversion from ctypes objects
diff --git a/doc/source/dev/index.rst b/doc/source/dev/index.rst
index 825b93b53..43aff1931 100644
--- a/doc/source/dev/index.rst
+++ b/doc/source/dev/index.rst
@@ -3,11 +3,12 @@ Contributing to NumPy
#####################
.. toctree::
- :maxdepth: 3
+ :maxdepth: 2
conduct/code_of_conduct
gitwash/index
development_environment
+ ../benchmarking
style_guide
releasing
governance/index
diff --git a/doc/source/reference/arrays.dtypes.rst b/doc/source/reference/arrays.dtypes.rst
index f2072263f..b55feb247 100644
--- a/doc/source/reference/arrays.dtypes.rst
+++ b/doc/source/reference/arrays.dtypes.rst
@@ -14,7 +14,7 @@ following aspects of the data:
1. Type of the data (integer, float, Python object, etc.)
2. Size of the data (how many bytes is in *e.g.* the integer)
3. Byte order of the data (:term:`little-endian` or :term:`big-endian`)
-4. If the data type is :term:`structured`, an aggregate of other
+4. If the data type is :term:`structured data type`, an aggregate of other
data types, (*e.g.*, describing an array item consisting of
an integer and a float),
@@ -42,7 +42,7 @@ needed in NumPy.
pair: dtype; field
Structured data types are formed by creating a data type whose
-:term:`fields` contain other data types. Each field has a name by
+:term:`field` contain other data types. Each field has a name by
which it can be :ref:`accessed <arrays.indexing.fields>`. The parent data
type should be of sufficient size to contain all its fields; the
parent is nearly always based on the :class:`void` type which allows
@@ -145,7 +145,7 @@ Array-scalar types
This is true for their sub-classes as well.
Note that not all data-type information can be supplied with a
- type-object: for example, :term:`flexible` data-types have
+ type-object: for example, `flexible` data-types have
a default *itemsize* of 0, and require an explicitly given size
to be useful.
@@ -511,7 +511,7 @@ Endianness of this data:
dtype.byteorder
-Information about sub-data-types in a :term:`structured` data type:
+Information about sub-data-types in a :term:`structured data type`:
.. autosummary::
:toctree: generated/
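
For illustration, a minimal sketch of the structured data types referred to above; the dtype and variable names are arbitrary::

    >>> import numpy as np
    >>> dt = np.dtype([('x', np.int32), ('y', np.float64)])
    >>> dt.fields['y']        # each named field maps to (dtype, byte offset)
    (dtype('float64'), 4)
    >>> dt.itemsize           # total size in bytes of one element
    12
    >>> a = np.zeros(3, dtype=dt)
    >>> a['x'] = [1, 2, 3]    # fields are accessed by name
    >>> a['x']
    array([1, 2, 3], dtype=int32)
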
diff --git a/doc/source/reference/arrays.ndarray.rst b/doc/source/reference/arrays.ndarray.rst
index 98a811d35..5e685f25c 100644
--- a/doc/source/reference/arrays.ndarray.rst
+++ b/doc/source/reference/arrays.ndarray.rst
@@ -82,10 +82,12 @@ Indexing arrays
Arrays can be indexed using an extended Python slicing syntax,
``array[selection]``. Similar syntax is also used for accessing
-fields in a :ref:`structured array <arrays.dtypes.field>`.
+fields in a :term:`structured data type`.
.. seealso:: :ref:`Array Indexing <arrays.indexing>`.
+.. _memory-layout:
+
Internal memory layout of an ndarray
====================================
@@ -127,7 +129,7 @@ strided scheme, and correspond to memory that can be *addressed* by the strides:
where :math:`d_j` `= self.shape[j]`.
Both the C and Fortran orders are :term:`contiguous`, *i.e.,*
-:term:`single-segment`, memory layouts, in which every part of the
+single-segment, memory layouts, in which every part of the
memory block can be accessed by some combination of the indices.
While a C-style and Fortran-style contiguous array, which has the corresponding
@@ -143,14 +145,15 @@ different. This can happen in two cases:
considered C-style and Fortran-style contiguous.
Point 1. means that ``self`` and ``self.squeeze()`` always have the same
-contiguity and :term:`aligned` flags value. This also means that even a high
-dimensional array could be C-style and Fortran-style contiguous at the same
-time.
+contiguity and ``aligned`` flags value. This also means
+that even a high dimensional array could be C-style and Fortran-style
+contiguous at the same time.
.. index:: aligned
An array is considered aligned if the memory offsets for all elements and the
-base offset itself is a multiple of `self.itemsize`.
+base offset itself is a multiple of `self.itemsize`. Understanding
+`memory-alignment` leads to better performance on most hardware.
.. note::
@@ -441,7 +444,7 @@ Each of the arithmetic operations (``+``, ``-``, ``*``, ``/``, ``//``,
``%``, ``divmod()``, ``**`` or ``pow()``, ``<<``, ``>>``, ``&``,
``^``, ``|``, ``~``) and the comparisons (``==``, ``<``, ``>``,
``<=``, ``>=``, ``!=``) is equivalent to the corresponding
-:term:`universal function` (or :term:`ufunc` for short) in NumPy. For
+universal function (or :term:`ufunc` for short) in NumPy. For
more information, see the section on :ref:`Universal Functions
<ufuncs>`.
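
The contiguity and alignment flags discussed in the hunks above can be inspected from Python; an illustrative sketch::

    >>> import numpy as np
    >>> a = np.arange(12).reshape(3, 4)      # C order by default
    >>> a.flags['C_CONTIGUOUS'], a.flags['F_CONTIGUOUS']
    (True, False)
    >>> np.asfortranarray(a).flags['F_CONTIGUOUS']
    True
    >>> c = np.ones((1, 5))                  # a length-1 axis allows both at once
    >>> c.flags['C_CONTIGUOUS'] and c.flags['F_CONTIGUOUS']
    True
    >>> a.flags['ALIGNED']                   # the aligned flag mentioned above
    True
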
diff --git a/doc/source/reference/arrays.scalars.rst b/doc/source/reference/arrays.scalars.rst
index 9c4f05f75..d27d61e2c 100644
--- a/doc/source/reference/arrays.scalars.rst
+++ b/doc/source/reference/arrays.scalars.rst
@@ -177,7 +177,7 @@ Any Python object:
.. note::
- The data actually stored in :term:`object arrays <object array>`
+ The data actually stored in object arrays
(*i.e.*, arrays having dtype :class:`object_`) are references to
Python objects, not the objects themselves. Hence, object arrays
behave more like usual Python :class:`lists <list>`, in the sense
@@ -188,8 +188,10 @@ Any Python object:
on item access, but instead returns the actual object that
the array item refers to.
-The following data types are :term:`flexible`. They have no predefined
-size: the data they describe can be of different length in different
+.. index:: flexible
+
+The following data types are **flexible**: they have no predefined
+size and the data they describe can be of different length in different
arrays. (In the character codes ``#`` is an integer denoting how many
elements the data type consists of.)
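
A brief sketch of the flexible (no predefined size) types being described, for illustration::

    >>> import numpy as np
    >>> np.dtype('S').itemsize        # flexible: no size until one is given
    0
    >>> np.dtype('S10').itemsize
    10
    >>> np.array([b'ab', b'cdef']).dtype   # the size is inferred per array
    dtype('S4')
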
diff --git a/doc/source/reference/c-api.array.rst b/doc/source/reference/c-api.array.rst
index 44d09a9fe..1a3592781 100644
--- a/doc/source/reference/c-api.array.rst
+++ b/doc/source/reference/c-api.array.rst
@@ -219,7 +219,7 @@ From scratch
If *data* is ``NULL``, then new uninitialized memory will be allocated and
*flags* can be non-zero to indicate a Fortran-style contiguous array. Use
- :c:ref:`PyArray_FILLWBYTE` to initialze the memory.
+ :c:func:`PyArray_FILLWBYTE` to initialize the memory.
If *data* is not ``NULL``, then it is assumed to point to the memory
to be used for the array and the *flags* argument is used as the
@@ -573,8 +573,9 @@ From other objects
return NULL;
}
if (arr == NULL) {
+ /*
... validate/change dtype, validate flags, ndim, etc ...
- // Could make custom strides here too
+ Could make custom strides here too */
arr = PyArray_NewFromDescr(&PyArray_Type, dtype, ndim,
dims, NULL,
fortran ? NPY_ARRAY_F_CONTIGUOUS : 0,
@@ -588,10 +589,14 @@ From other objects
}
}
else {
+ /*
... in this case the other parameters weren't filled, just
validate and possibly copy arr itself ...
+ */
}
+ /*
... use arr ...
+ */
.. c:function:: PyObject* PyArray_CheckFromAny( \
PyObject* op, PyArray_Descr* dtype, int min_depth, int max_depth, \
@@ -2660,22 +2665,22 @@ cost of a slight overhead.
.. code-block:: c
- PyArrayIterObject \*iter;
- PyArrayNeighborhoodIterObject \*neigh_iter;
+ PyArrayIterObject *iter;
+ PyArrayNeighborhoodIterObject *neigh_iter;
iter = PyArray_IterNew(x);
- //For a 3x3 kernel
+ /*For a 3x3 kernel */
bounds = {-1, 1, -1, 1};
neigh_iter = (PyArrayNeighborhoodIterObject*)PyArrayNeighborhoodIter_New(
iter, bounds, NPY_NEIGHBORHOOD_ITER_ZERO_PADDING, NULL);
for(i = 0; i < iter->size; ++i) {
for (j = 0; j < neigh_iter->size; ++j) {
- // Walk around the item currently pointed by iter->dataptr
+ /* Walk around the item currently pointed by iter->dataptr */
PyArrayNeighborhoodIter_Next(neigh_iter);
}
- // Move to the next point of iter
+ /* Move to the next point of iter */
PyArrayIter_Next(iter);
PyArrayNeighborhoodIter_Reset(neigh_iter);
}
diff --git a/doc/source/reference/maskedarray.baseclass.rst b/doc/source/reference/maskedarray.baseclass.rst
index 17a0a940d..0b7482f2b 100644
--- a/doc/source/reference/maskedarray.baseclass.rst
+++ b/doc/source/reference/maskedarray.baseclass.rst
@@ -62,12 +62,15 @@ The :class:`MaskedArray` class
+.. _ma-attributes:
+
Attributes and properties of masked arrays
------------------------------------------
.. seealso:: :ref:`Array Attributes <arrays.ndarray.attributes>`
+.. _ma-data:
.. attribute:: MaskedArray.data
Returns the underlying data, as a view of the masked array.
@@ -82,6 +85,8 @@ Attributes and properties of masked arrays
The type of the data can be accessed through the :attr:`baseclass`
attribute.
+.. _ma-mask:
+
.. attribute:: MaskedArray.mask
Returns the underlying mask, as an array with the same shape and structure
@@ -89,6 +94,8 @@ Attributes and properties of masked arrays
A value of ``True`` indicates an invalid entry.
+.. _ma-recordmask:
+
.. attribute:: MaskedArray.recordmask
Returns the mask of the array if it has no named fields. For structured
@@ -102,6 +109,8 @@ Attributes and properties of masked arrays
array([False, False, True, False, False])
+.. _ma-fillvalue:
+
.. attribute:: MaskedArray.fill_value
Returns the value used to fill the invalid entries of a masked array.
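
A small illustration of the attributes cross-referenced above (``data``, ``mask``, ``recordmask``, ``fill_value``)::

    >>> import numpy.ma as ma
    >>> x = ma.array([1, 2, 3, 4], mask=[0, 1, 0, 0], fill_value=-999)
    >>> x.data                        # the underlying data, as a view
    array([1, 2, 3, 4])
    >>> x.mask                        # True marks an invalid entry
    array([False,  True, False, False])
    >>> x.fill_value                  # used when invalid entries are filled in
    -999
    >>> x.recordmask                  # same as ``mask`` when there are no named fields
    array([False,  True, False, False])
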
diff --git a/doc/source/reference/routines.ma.rst b/doc/source/reference/routines.ma.rst
index 15f2ba0a4..b99378e42 100644
--- a/doc/source/reference/routines.ma.rst
+++ b/doc/source/reference/routines.ma.rst
@@ -68,9 +68,6 @@ Inspecting the array
ma.is_masked
ma.is_mask
- ma.MaskedArray.data
- ma.MaskedArray.mask
- ma.MaskedArray.recordmask
ma.MaskedArray.all
ma.MaskedArray.any
@@ -80,6 +77,10 @@ Inspecting the array
ma.size
+.. seealso:: :ref:`ma.MaskedArray.data <ma-data>`,
+ :ref:`ma.MaskedArray.mask <ma-mask>` and
+ :ref:`ma.MaskedArray.recordmask <ma-recordmask>`
+
_____
Manipulating a MaskedArray
@@ -285,8 +286,8 @@ Filling a masked array
ma.MaskedArray.get_fill_value
ma.MaskedArray.set_fill_value
- ma.MaskedArray.fill_value
+.. seealso:: :ref:`ma.MaskedArray.fill_value <ma-fillvalue>`
_____
diff --git a/doc/source/reference/routines.testing.rst b/doc/source/reference/routines.testing.rst
index 77c046768..c676dec07 100644
--- a/doc/source/reference/routines.testing.rst
+++ b/doc/source/reference/routines.testing.rst
@@ -1,5 +1,3 @@
-.. _numpy-testing:
-
.. module:: numpy.testing
Test Support (:mod:`numpy.testing`)
diff --git a/doc/source/reference/ufuncs.rst b/doc/source/reference/ufuncs.rst
index 3cc956887..e81d0f1ee 100644
--- a/doc/source/reference/ufuncs.rst
+++ b/doc/source/reference/ufuncs.rst
@@ -59,7 +59,7 @@ understood by four rules:
entry in that dimension will be used for all calculations along
that dimension. In other words, the stepping machinery of the
:term:`ufunc` will simply not step along that dimension (the
- :term:`stride` will be 0 for that dimension).
+ :ref:`stride <memory-layout>` will be 0 for that dimension).
Broadcasting is used throughout NumPy to decide how to handle
disparately shaped arrays; for example, all arithmetic operations (``+``,
@@ -70,7 +70,7 @@ arrays before operation.
.. index:: broadcastable
-A set of arrays is called ":term:`broadcastable`" to the same shape if
+A set of arrays is called "broadcastable" to the same shape if
the above rules produce a valid result, *i.e.*, one of the following
is true:
diff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py
index 2f5a48ed8..d6c784ba1 100644
--- a/numpy/core/_add_newdocs.py
+++ b/numpy/core/_add_newdocs.py
@@ -161,62 +161,63 @@ add_newdoc('numpy.core', 'nditer',
----------
op : ndarray or sequence of array_like
The array(s) to iterate over.
+
flags : sequence of str, optional
- Flags to control the behavior of the iterator.
+ Flags to control the behavior of the iterator.
- * "buffered" enables buffering when required.
- * "c_index" causes a C-order index to be tracked.
- * "f_index" causes a Fortran-order index to be tracked.
- * "multi_index" causes a multi-index, or a tuple of indices
+ * ``buffered`` enables buffering when required.
+ * ``c_index`` causes a C-order index to be tracked.
+ * ``f_index`` causes a Fortran-order index to be tracked.
+ * ``multi_index`` causes a multi-index, or a tuple of indices
with one per iteration dimension, to be tracked.
- * "common_dtype" causes all the operands to be converted to
+ * ``common_dtype`` causes all the operands to be converted to
a common data type, with copying or buffering as necessary.
- * "copy_if_overlap" causes the iterator to determine if read
+ * ``copy_if_overlap`` causes the iterator to determine if read
operands have overlap with write operands, and make temporary
copies as necessary to avoid overlap. False positives (needless
copying) are possible in some cases.
- * "delay_bufalloc" delays allocation of the buffers until
- a reset() call is made. Allows "allocate" operands to
+ * ``delay_bufalloc`` delays allocation of the buffers until
+ a reset() call is made. Allows ``allocate`` operands to
be initialized before their values are copied into the buffers.
- * "external_loop" causes the `values` given to be
+ * ``external_loop`` causes the ``values`` given to be
one-dimensional arrays with multiple values instead of
zero-dimensional arrays.
- * "grow_inner" allows the `value` array sizes to be made
- larger than the buffer size when both "buffered" and
- "external_loop" is used.
- * "ranged" allows the iterator to be restricted to a sub-range
+ * ``grow_inner`` allows the ``value`` array sizes to be made
+ larger than the buffer size when both ``buffered`` and
+ ``external_loop`` is used.
+ * ``ranged`` allows the iterator to be restricted to a sub-range
of the iterindex values.
- * "refs_ok" enables iteration of reference types, such as
+ * ``refs_ok`` enables iteration of reference types, such as
object arrays.
- * "reduce_ok" enables iteration of "readwrite" operands
+ * ``reduce_ok`` enables iteration of ``readwrite`` operands
which are broadcasted, also known as reduction operands.
- * "zerosize_ok" allows `itersize` to be zero.
+ * ``zerosize_ok`` allows `itersize` to be zero.
op_flags : list of list of str, optional
- This is a list of flags for each operand. At minimum, one of
- "readonly", "readwrite", or "writeonly" must be specified.
-
- * "readonly" indicates the operand will only be read from.
- * "readwrite" indicates the operand will be read from and written to.
- * "writeonly" indicates the operand will only be written to.
- * "no_broadcast" prevents the operand from being broadcasted.
- * "contig" forces the operand data to be contiguous.
- * "aligned" forces the operand data to be aligned.
- * "nbo" forces the operand data to be in native byte order.
- * "copy" allows a temporary read-only copy if required.
- * "updateifcopy" allows a temporary read-write copy if required.
- * "allocate" causes the array to be allocated if it is None
- in the `op` parameter.
- * "no_subtype" prevents an "allocate" operand from using a subtype.
- * "arraymask" indicates that this operand is the mask to use
+ This is a list of flags for each operand. At minimum, one of
+ ``readonly``, ``readwrite``, or ``writeonly`` must be specified.
+
+ * ``readonly`` indicates the operand will only be read from.
+ * ``readwrite`` indicates the operand will be read from and written to.
+ * ``writeonly`` indicates the operand will only be written to.
+ * ``no_broadcast`` prevents the operand from being broadcasted.
+ * ``contig`` forces the operand data to be contiguous.
+ * ``aligned`` forces the operand data to be aligned.
+ * ``nbo`` forces the operand data to be in native byte order.
+ * ``copy`` allows a temporary read-only copy if required.
+ * ``updateifcopy`` allows a temporary read-write copy if required.
+ * ``allocate`` causes the array to be allocated if it is None
+ in the ``op`` parameter.
+ * ``no_subtype`` prevents an ``allocate`` operand from using a subtype.
+ * ``arraymask`` indicates that this operand is the mask to use
for selecting elements when writing to operands with the
'writemasked' flag set. The iterator does not enforce this,
but when writing from a buffer back to the array, it only
copies those elements indicated by this mask.
- * 'writemasked' indicates that only elements where the chosen
- 'arraymask' operand is True will be written to.
- * "overlap_assume_elementwise" can be used to mark operands that are
+ * ``writemasked`` indicates that only elements where the chosen
+ ``arraymask`` operand is True will be written to.
+ * ``overlap_assume_elementwise`` can be used to mark operands that are
accessed only in the iterator order, to allow less conservative
- copying when "copy_if_overlap" is present.
+ copying when ``copy_if_overlap`` is present.
op_dtypes : dtype or tuple of dtype(s), optional
The required data type(s) of the operands. If copying or buffering
is enabled, the data will be converted to/from their original types.
@@ -225,7 +226,7 @@ add_newdoc('numpy.core', 'nditer',
Fortran order, 'A' means 'F' order if all the arrays are Fortran
contiguous, 'C' order otherwise, and 'K' means as close to the
order the array elements appear in memory as possible. This also
- affects the element memory order of "allocate" operands, as they
+ affects the element memory order of ``allocate`` operands, as they
are allocated to be compatible with iteration order.
Default is 'K'.
casting : {'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional
@@ -233,20 +234,20 @@ add_newdoc('numpy.core', 'nditer',
or buffering. Setting this to 'unsafe' is not recommended,
as it can adversely affect accumulations.
- * 'no' means the data types should not be cast at all.
- * 'equiv' means only byte-order changes are allowed.
- * 'safe' means only casts which can preserve values are allowed.
- * 'same_kind' means only safe casts or casts within a kind,
- like float64 to float32, are allowed.
- * 'unsafe' means any data conversions may be done.
+ * 'no' means the data types should not be cast at all.
+ * 'equiv' means only byte-order changes are allowed.
+ * 'safe' means only casts which can preserve values are allowed.
+ * 'same_kind' means only safe casts or casts within a kind,
+ like float64 to float32, are allowed.
+ * 'unsafe' means any data conversions may be done.
op_axes : list of list of ints, optional
If provided, is a list of ints or None for each operands.
The list of axes for an operand is a mapping from the dimensions
of the iterator to the dimensions of the operand. A value of
-1 can be placed for entries, causing that dimension to be
- treated as "newaxis".
+ treated as `newaxis`.
itershape : tuple of ints, optional
- The desired shape of the iterator. This allows "allocate" operands
+ The desired shape of the iterator. This allows ``allocate`` operands
with a dimension mapped by op_axes not corresponding to a dimension
of a different operand to get a value not equal to 1 for that
dimension.
@@ -263,19 +264,19 @@ add_newdoc('numpy.core', 'nditer',
finished : bool
Whether the iteration over the operands is finished or not.
has_delayed_bufalloc : bool
- If True, the iterator was created with the "delay_bufalloc" flag,
+ If True, the iterator was created with the ``delay_bufalloc`` flag,
and no reset() function was called on it yet.
has_index : bool
- If True, the iterator was created with either the "c_index" or
- the "f_index" flag, and the property `index` can be used to
+ If True, the iterator was created with either the ``c_index`` or
+ the ``f_index`` flag, and the property `index` can be used to
retrieve it.
has_multi_index : bool
- If True, the iterator was created with the "multi_index" flag,
+ If True, the iterator was created with the ``multi_index`` flag,
and the property `multi_index` can be used to retrieve it.
index
- When the "c_index" or "f_index" flag was used, this property
+ When the ``c_index`` or ``f_index`` flag was used, this property
provides access to the index. Raises a ValueError if accessed
- and `has_index` is False.
+ and ``has_index`` is False.
iterationneedsapi : bool
Whether iteration requires access to the Python API, for example
if one of the operands is an object array.
@@ -288,11 +289,11 @@ add_newdoc('numpy.core', 'nditer',
and optimized iterator access pattern. Valid only before the iterator
is closed.
multi_index
- When the "multi_index" flag was used, this property
+ When the ``multi_index`` flag was used, this property
provides access to the index. Raises a ValueError if accessed
- accessed and `has_multi_index` is False.
+ accessed and ``has_multi_index`` is False.
ndim : int
- The iterator's dimension.
+ The dimensions of the iterator.
nop : int
The number of iterator operands.
operands : tuple of operand(s)
@@ -301,8 +302,8 @@ add_newdoc('numpy.core', 'nditer',
shape : tuple of ints
Shape tuple, the shape of the iterator.
value
- Value of `operands` at current iteration. Normally, this is a
- tuple of array scalars, but if the flag "external_loop" is used,
+ Value of ``operands`` at current iteration. Normally, this is a
+ tuple of array scalars, but if the flag ``external_loop`` is used,
it is a tuple of one dimensional arrays.
Notes
@@ -313,12 +314,12 @@ add_newdoc('numpy.core', 'nditer',
The Python exposure supplies two iteration interfaces, one which follows
the Python iterator protocol, and another which mirrors the C-style
do-while pattern. The native Python approach is better in most cases, but
- if you need the iterator's coordinates or index, use the C-style pattern.
+ if you need the coordinates or index of an iterator, use the C-style pattern.
Examples
--------
Here is how we might write an ``iter_add`` function, using the
- Python iterator protocol::
+ Python iterator protocol:
>>> def iter_add_py(x, y, out=None):
... addop = np.add
@@ -329,7 +330,7 @@ add_newdoc('numpy.core', 'nditer',
... addop(a, b, out=c)
... return it.operands[2]
- Here is the same function, but following the C-style pattern::
+ Here is the same function, but following the C-style pattern:
>>> def iter_add(x, y, out=None):
... addop = np.add
@@ -341,7 +342,7 @@ add_newdoc('numpy.core', 'nditer',
... it.iternext()
... return it.operands[2]
- Here is an example outer product function::
+ Here is an example outer product function:
>>> def outer_it(x, y, out=None):
... mulop = np.multiply
@@ -361,7 +362,7 @@ add_newdoc('numpy.core', 'nditer',
array([[1, 2, 3],
[2, 4, 6]])
- Here is an example function which operates like a "lambda" ufunc::
+ Here is an example function which operates like a "lambda" ufunc:
>>> def luf(lamdaexpr, *args, **kwargs):
... '''luf(lambdaexpr, op1, ..., opn, out=None, order='K', casting='safe', buffersize=0)'''
@@ -2031,21 +2032,27 @@ add_newdoc('numpy.core.multiarray', 'ndarray', ('ctypes',
as well as documented private attributes):
.. autoattribute:: numpy.core._internal._ctypes.data
+ :noindex:
.. autoattribute:: numpy.core._internal._ctypes.shape
+ :noindex:
.. autoattribute:: numpy.core._internal._ctypes.strides
+ :noindex:
.. automethod:: numpy.core._internal._ctypes.data_as
+ :noindex:
.. automethod:: numpy.core._internal._ctypes.shape_as
+ :noindex:
.. automethod:: numpy.core._internal._ctypes.strides_as
+ :noindex:
If the ctypes module is not available, then the ctypes attribute
of array objects still returns something useful, but ctypes objects
are not returned and errors may be raised instead. In particular,
- the object will still have the as parameter attribute which will
+ the object will still have the ``as_parameter`` attribute which will
return an integer equal to the data attribute.
Examples
@@ -4861,7 +4868,7 @@ add_newdoc('numpy.core', 'ufunc', ('reduce',
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If not provided or `None`,
a freshly-allocated array is returned. For consistency with
- :ref:`ufunc.__call__`, if given as a keyword, this may be wrapped in a
+ ``ufunc.__call__``, if given as a keyword, this may be wrapped in a
1-element tuple.
.. versionchanged:: 1.13.0
@@ -4978,7 +4985,7 @@ add_newdoc('numpy.core', 'ufunc', ('accumulate',
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If not provided or `None`,
a freshly-allocated array is returned. For consistency with
- :ref:`ufunc.__call__`, if given as a keyword, this may be wrapped in a
+ ``ufunc.__call__``, if given as a keyword, this may be wrapped in a
1-element tuple.
.. versionchanged:: 1.13.0
@@ -5060,7 +5067,7 @@ add_newdoc('numpy.core', 'ufunc', ('reduceat',
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If not provided or `None`,
a freshly-allocated array is returned. For consistency with
- :ref:`ufunc.__call__`, if given as a keyword, this may be wrapped in a
+ ``ufunc.__call__``, if given as a keyword, this may be wrapped in a
1-element tuple.
.. versionchanged:: 1.13.0
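
To make the ``nditer`` flag descriptions in the hunks above concrete, a minimal illustrative sketch::

    >>> import numpy as np
    >>> a = np.arange(6).reshape(2, 3)
    >>> it = np.nditer(a, flags=['multi_index'])   # track a per-dimension index tuple
    >>> for val in it:
    ...     if it.multi_index == (1, 2):
    ...         print(val)
    5
    >>> for chunk in np.nditer(a, flags=['external_loop']):
    ...     print(chunk)                           # one-dimensional chunks, not 0-d scalars
    [0 1 2 3 4 5]
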
diff --git a/numpy/core/defchararray.py b/numpy/core/defchararray.py
index 007fc6186..3fd7d14c4 100644
--- a/numpy/core/defchararray.py
+++ b/numpy/core/defchararray.py
@@ -29,7 +29,7 @@ from numpy.compat import asbytes, long
import numpy
__all__ = [
- 'chararray', 'equal', 'not_equal', 'greater_equal', 'less_equal',
+ 'equal', 'not_equal', 'greater_equal', 'less_equal',
'greater', 'less', 'str_len', 'add', 'multiply', 'mod', 'capitalize',
'center', 'count', 'decode', 'encode', 'endswith', 'expandtabs',
'find', 'index', 'isalnum', 'isalpha', 'isdigit', 'islower', 'isspace',
diff --git a/numpy/core/fromnumeric.py b/numpy/core/fromnumeric.py
index cdb6c4bed..04b1e9fae 100644
--- a/numpy/core/fromnumeric.py
+++ b/numpy/core/fromnumeric.py
@@ -912,6 +912,7 @@ def sort(a, axis=-1, kind='quicksort', order=None):
data types.
.. versionadded:: 1.17.0
+
Timsort is added for better performance on already or nearly
sorted data. On random data timsort is almost identical to
mergesort. It is now used for stable sort while quicksort is still the
diff --git a/numpy/core/shape_base.py b/numpy/core/shape_base.py
index 08e07bb66..e43519689 100644
--- a/numpy/core/shape_base.py
+++ b/numpy/core/shape_base.py
@@ -347,9 +347,9 @@ def stack(arrays, axis=0, out=None):
"""
Join a sequence of arrays along a new axis.
- The `axis` parameter specifies the index of the new axis in the dimensions
- of the result. For example, if ``axis=0`` it will be the first dimension
- and if ``axis=-1`` it will be the last dimension.
+ The ``axis`` parameter specifies the index of the new axis in the
+ dimensions of the result. For example, if ``axis=0`` it will be the first
+ dimension and if ``axis=-1`` it will be the last dimension.
.. versionadded:: 1.10.0
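
The reworded ``axis`` description above can be summarized with a tiny example (illustrative only)::

    >>> import numpy as np
    >>> a, b = np.ones(3), np.zeros(3)
    >>> np.stack([a, b], axis=0).shape    # new axis becomes the first dimension
    (2, 3)
    >>> np.stack([a, b], axis=-1).shape   # new axis becomes the last dimension
    (3, 2)
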
@@ -357,8 +357,10 @@ def stack(arrays, axis=0, out=None):
----------
arrays : sequence of array_like
Each array must have the same shape.
+
axis : int, optional
The axis in the result array along which the input arrays are stacked.
+
out : ndarray, optional
If provided, the destination to place the result. The shape must be
correct, matching that of what stack would have returned if no
diff --git a/numpy/doc/glossary.py b/numpy/doc/glossary.py
index a3707340d..162288b14 100644
--- a/numpy/doc/glossary.py
+++ b/numpy/doc/glossary.py
@@ -159,7 +159,7 @@ Glossary
field
In a :term:`structured data type`, each sub-type is called a `field`.
- The `field` has a name (a string), a type (any valid :term:`dtype`, and
+   The `field` has a name (a string), a type (any valid dtype), and
an optional `title`. See :ref:`arrays.dtypes`
Fortran order
@@ -209,6 +209,9 @@ Glossary
Key 1: b
Key 2: c
+ itemsize
+ The size of the dtype element in bytes.
+
list
A Python container that can hold any number of objects or items.
The items do not have to be of the same type, and can even be
@@ -377,6 +380,15 @@ Glossary
structured data type
A data type composed of other datatypes
+ subarray
+ A :term:`structured data type` may contain a :term:`ndarray` with its
+ own dtype and shape.
+
+ title
+ In addition to field names, structured array fields may have an
+ associated :ref:`title <titles>` which is an alias to the name and is
+ commonly used for plotting.
+
tuple
A sequence that may contain a variable number of types of any
kind. A tuple is immutable, i.e., once constructed it cannot be
@@ -416,6 +428,12 @@ Glossary
Universal function. A fast element-wise array operation. Examples include
``add``, ``sin`` and ``logical_or``.
+ vectorized
+ A loop-based function that operates on data with fixed strides.
+ Compilers know how to take advantage of well-constructed loops and
+ match the data to specialized hardware that can operate on a number
+ of operands in parallel.
+
view
An array that does not own its data, but refers to another array's
data instead. For example, we may create a view that only shows
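
For the new glossary entries above (``itemsize``, ``subarray``, ``title``), an illustrative sketch; the field names are arbitrary::

    >>> import numpy as np
    >>> np.dtype(np.float64).itemsize              # size of one element, in bytes
    8
    >>> dt = np.dtype([('pos', np.float32, (3,))]) # the 'pos' field is a subarray
    >>> np.zeros(2, dtype=dt)['pos'].shape
    (2, 3)
    >>> dt2 = np.dtype({'names': ['temp'], 'formats': ['f8'],
    ...                 'titles': ['Temperature']})
    >>> sorted(dt2.fields)                         # the title is an alias for the name
    ['Temperature', 'temp']
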
diff --git a/numpy/doc/structured_arrays.py b/numpy/doc/structured_arrays.py
index da3a74bd6..c3605b49a 100644
--- a/numpy/doc/structured_arrays.py
+++ b/numpy/doc/structured_arrays.py
@@ -57,7 +57,7 @@ A structured datatype can be thought of as a sequence of bytes of a certain
length (the structure's :term:`itemsize`) which is interpreted as a collection
of fields. Each field has a name, a datatype, and a byte offset within the
structure. The datatype of a field may be any numpy datatype including other
-structured datatypes, and it may also be a :term:`sub-array` which behaves like
+structured datatypes, and it may also be a :term:`subarray` which behaves like
an ndarray of a specified shape. The offsets of the fields are arbitrary, and
fields may even overlap. These offsets are usually determined automatically by
numpy, but can also be specified.
@@ -231,7 +231,7 @@ each field's offset is a multiple of its size and that the itemsize is a
multiple of the largest field size, and raise an exception if not.
If the offsets of the fields and itemsize of a structured array satisfy the
-alignment conditions, the array will have the ``ALIGNED`` :ref:`flag
+alignment conditions, the array will have the ``ALIGNED`` :attr:`flag
<numpy.ndarray.flags>` set.
A convenience function :func:`numpy.lib.recfunctions.repack_fields` converts an
@@ -266,7 +266,7 @@ providing a 3-element tuple ``(datatype, offset, title)`` instead of the usual
>>> np.dtype({'name': ('i4', 0, 'my title')})
dtype([(('my title', 'name'), '<i4')])
-The ``dtype.fields`` dictionary will contain :term:`titles` as keys, if any
+The ``dtype.fields`` dictionary will contain titles as keys, if any
titles are used. This means effectively that a field with a title will be
represented twice in the fields dictionary. The tuple values for these fields
will also have a third element, the field title. Because of this, and because
@@ -431,8 +431,9 @@ array, as follows::
Assignment to the view modifies the original array. The view's fields will be
in the order they were indexed. Note that unlike for single-field indexing, the
-view's dtype has the same itemsize as the original array, and has fields at the
-same offsets as in the original array, and unindexed fields are merely missing.
+dtype of the view has the same itemsize as the original array, and has fields
+at the same offsets as in the original array, and unindexed fields are merely
+missing.
.. warning::
In Numpy 1.15, indexing an array with a multi-field index returned a copy of
@@ -453,7 +454,7 @@ same offsets as in the original array, and unindexed fields are merely missing.
Numpy 1.12, and similar code has raised ``FutureWarning`` since 1.7.
In 1.16 a number of functions have been introduced in the
- :module:`numpy.lib.recfunctions` module to help users account for this
+ :mod:`numpy.lib.recfunctions` module to help users account for this
change. These are
:func:`numpy.lib.recfunctions.repack_fields`.
:func:`numpy.lib.recfunctions.structured_to_unstructured`,
@@ -610,7 +611,7 @@ creating record arrays, see :ref:`record array creation routines
<routines.array-creation.rec>`.
A record array representation of a structured array can be obtained using the
-appropriate :ref:`view`::
+appropriate `view <numpy-ndarray-view>`_::
>>> arr = np.array([(1, 2., 'Hello'), (2, 3., "World")],
... dtype=[('foo', 'i4'),('bar', 'f4'), ('baz', 'a10')])
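
A sketch of the multi-field view behaviour described above, continuing the example array from the hunk (illustrative only)::

    >>> import numpy as np
    >>> arr = np.array([(1, 2., 'Hello'), (2, 3., 'World')],
    ...                dtype=[('foo', 'i4'), ('bar', 'f4'), ('baz', 'a10')])
    >>> v = arr[['foo', 'baz']]                  # multi-field index returns a view
    >>> v.dtype.itemsize == arr.dtype.itemsize   # same itemsize; unindexed fields missing
    True
    >>> recordarr = arr.view(np.recarray)        # record-array view: attribute access
    >>> recordarr.foo
    array([1, 2], dtype=int32)
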
diff --git a/numpy/linalg/linalg.py b/numpy/linalg/linalg.py
index 304fce69f..5e6e423a7 100644
--- a/numpy/linalg/linalg.py
+++ b/numpy/linalg/linalg.py
@@ -2621,10 +2621,8 @@ def multi_dot(arrays):
instead of::
>>> _ = np.dot(np.dot(np.dot(A, B), C), D)
- ...
>>> # or
>>> _ = A.dot(B).dot(C).dot(D)
- ...
Notes
-----
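
A runnable version of the usage contrasted in the hunk above (a sketch; the matrix shapes are arbitrary)::

    >>> import numpy as np
    >>> from numpy.linalg import multi_dot
    >>> A = np.ones((10, 100)); B = np.ones((100, 5))
    >>> C = np.ones((5, 50));   D = np.ones((50, 2))
    >>> r = multi_dot([A, B, C, D])        # chooses the cheapest parenthesization
    >>> np.allclose(r, A.dot(B).dot(C).dot(D))
    True
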