NumPy 1.11.0 Release Notes
**************************
This release supports Python 2.6 - 2.7 and 3.2 - 3.5.
Highlights
==========
* The datetime64 type is now timezone naive. See "datetime64 changes" below
for more details.
Dropped Support
===============
* Bento build support and related files have been removed.
* Single file build support and related files have been removed.
Future Changes
==============
* Relaxed stride checking will become the default in 1.12.0.
* Support for Python 2.6, 3.2, and 3.3 will be dropped in 1.12.0.
Compatibility notes
===================
datetime64 changes
~~~~~~~~~~~~~~~~~~
In prior versions of NumPy the experimental datetime64 type always stored
times in UTC. By default, creating a datetime64 object from a string or
printing it would convert from or to local time::
# old behavior
>>> np.datetime64('2000-01-01T00:00:00')
numpy.datetime64('2000-01-01T00:00:00-0800') # note the timezone offset -08:00
A consensus of datetime64 users agreed that this behavior is undesirable
and at odds with how datetime64 is usually used (e.g., by pandas_). For
most use cases, a timezone naive datetime type is preferred, similar to the
``datetime.datetime`` type in the Python standard library. Accordingly,
datetime64 no longer assumes that input is in local time, nor does it print
local times::
>>> np.datetime64('2000-01-01T00:00:00')
numpy.datetime64('2000-01-01T00:00:00')
For backwards compatibility, datetime64 still parses timezone offsets, which
it handles by converting to UTC. However, the resulting datetime is timezone
naive::
>>> np.datetime64('2000-01-01T00:00:00-08')
DeprecationWarning: parsing timezone aware datetimes is deprecated; this will raise an error in the future
numpy.datetime64('2000-01-01T08:00:00')
As a corollary to this change, we no longer prohibit casting between datetimes
with date units and datetimes with time units. With timezone naive datetimes,
the rule for casting from dates to times is no longer ambiguous.
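For example, a datetime with a date unit can now be cast directly to one
with a time unit. The sketch below is illustrative; the exact repr
formatting may vary slightly between NumPy versions::
>>> d = np.datetime64('2000-01-01')         # date unit (days)
>>> d.astype('datetime64[h]')               # cast to an hour-based time unit
numpy.datetime64('2000-01-01T00','h')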
.. _pandas: http://pandas.pydata.org
DeprecationWarning to error
~~~~~~~~~~~~~~~~~~~~~~~~~~~
* Indexing with floats raises IndexError,
e.g., a[0, 0.0].
* Indexing with non-integer array_like raises IndexError,
e.g., a['1', '2'].
* Indexing with multiple ellipses raises IndexError,
e.g., a[..., ...].
* Indexing with boolean where integer expected raises IndexError,
e.g., a[False:True:True].
* Non-integers used as index values raise TypeError,
e.g., in reshape, take, and specifying reduce axis.
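For example, indexing with a float, which previously emitted a
``DeprecationWarning``, now fails outright (a minimal sketch; the exact
error message may differ)::
>>> a = np.arange(10).reshape(2, 5)
>>> a[0, 0.0]
IndexError: only integers, slices (`:`), ellipsis (`...`), numpy.newaxis
(`None`) and integer or boolean arrays are valid indices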
FutureWarning to changed behavior
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
* In ``np.lib.split`` an empty array in the result always had dimension
``(0,)`` no matter the dimensions of the array being split. This
has been changed so that the dimensions will be preserved. A
``FutureWarning`` for this change has been in place since NumPy 1.9 but,
due to a bug, sometimes no warning was raised and the dimensions were
already preserved.
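A minimal illustration of the new behavior, using the public ``np.split``
alias (the empty piece now keeps the second dimension)::
>>> a = np.arange(4).reshape(2, 2)
>>> np.split(a, [0], axis=0)[0].shape
(0, 2)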
C API
~~~~~
Removed the ``check_return`` and ``inner_loop_selector`` members of
the ``PyUFuncObject`` struct (replacing them with ``reserved`` slots
to preserve struct layout). These were never used for anything, so
it's unlikely that any third-party code is using them either, but we
mention it here for completeness.
New Features
============
* `np.histogram` now provides plugin estimators for automatically
estimating the optimal number of bins. Passing one of ['auto', 'fd',
'scott', 'rice', 'sturges'] as the argument to 'bins' results in the
corresponding estimator being used.
* A benchmark suite using `Airspeed Velocity
<http://spacetelescope.github.io/asv/>`__ has been added, converting the
previous vbench-based one. You can run the suite locally via ``python
runtests.py --bench``. For more details, see ``benchmarks/README.rst``.
* A new function, ``np.shares_memory``, has been added; it checks exactly
whether two arrays have memory overlap. ``np.may_share_memory`` also now
has an option to spend more effort to reduce false positives.
* ``SkipTest`` and ``KnownFailureException`` exception classes are exposed
in the ``numpy.testing`` namespace. Raise them in a test function to mark
the test to be skipped or mark it as a known failure, respectively.
* ``f2py.compile`` has a new ``extension`` keyword parameter that allows the
Fortran extension to be specified for generated temp files. For instance,
the files can be specified to be ``*.f90``. The ``verbose`` argument is
also activated; it was previously ignored.
* A ``dtype`` parameter has been added to ``np.random.randint``.
Random ndarrays of the following types can now be generated:
- np.bool,
- np.int8, np.uint8,
- np.int16, np.uint16,
- np.int32, np.uint32,
- np.int64, np.uint64,
- np.int_ (long), np.intp
The specification is by precision rather than by C type. Hence, on some
platforms np.int64 may be a `long` instead of `long long` even if the
specified dtype is `long long` because the two may have the same
precision. The resulting type depends on which C type numpy uses for the
given precision. The byteorder specification is also ignored; the
generated arrays are always in native byte order (see the combined
example after this list).
* ``np.moveaxis`` allows for moving one or more array axes to a new position
by explicitly providing source and destination axes.
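A combined sketch of a few of the features above (data and names are purely
illustrative)::
>>> data = np.random.randn(1000)
>>> hist, edges = np.histogram(data, bins='auto')       # automatic bin count
>>> a = np.arange(10)
>>> np.shares_memory(a, a[::2])                         # exact overlap check
True
>>> np.random.randint(0, 256, size=4, dtype=np.uint8).dtype
dtype('uint8')
>>> np.moveaxis(np.zeros((3, 4, 5)), 0, -1).shape       # move axis 0 to the end
(4, 5, 3)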
Improvements
============
*np.gradient* now supports an ``axis`` argument
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The ``axis`` parameter was added to *np.gradient* for consistency.
It allows specifying the axes over which the gradient is calculated.
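A minimal illustration (the default remains computing the gradient over all
axes)::
>>> f = np.arange(6, dtype=float).reshape(2, 3)
>>> np.gradient(f, axis=0)          # gradient along the first axis only
array([[ 3.,  3.,  3.],
       [ 3.,  3.,  3.]])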
*np.lexsort* now supports arrays with object data-type
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The function now internally calls the generic ``npy_amergesort``
when the type does not implement a merge-sort kind of ``argsort``
method.
*np.ma.core.MaskedArray* now supports an ``order`` argument
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
When constructing a new ``MaskedArray`` instance, it can be
configured with an ``order`` argument analogous to the one
when calling ``np.ndarray``. The addition of this argument
allows for the proper processing of an ``order`` argument
in several MaskedArray-related utility functions such as
``np.ma.core.array`` and ``np.ma.core.asarray``.
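A minimal sketch of the new argument, checking only the layout of the data
(mask handling is an internal detail)::
>>> m = np.ma.array(np.arange(6).reshape(2, 3), order='F')
>>> m.flags['F_CONTIGUOUS']
True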
Memory and speed improvements for masked arrays
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Creating a masked array with ``mask=True`` (resp. ``mask=False``) now uses
``np.ones`` (resp. ``np.zeros``) to create the mask, which is faster and
avoids a large memory peak. Another optimization was made to avoid a memory
peak and unnecessary computations when printing a masked array.
*ndarray.tofile* now uses fallocate on linux
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The function now uses the fallocate system call to reserve sufficient
disk space on filesystems that support it.
``np.dot`` optimized for operations of the form ``A.T @ A`` and ``A @ A.T``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Previously, ``gemm`` BLAS operations were used for all matrix products. Now,
if the matrix product is between a matrix and its transpose, it will use
``syrk`` BLAS operations for a performance boost.
**Note:** Requires the transposed and non-transposed matrices to share data.
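In practice this means a product like the following can take the faster
path, because ``a.T`` is a view sharing ``a``'s data, while a copied
transpose falls back to ``gemm`` (a minimal sketch)::
>>> a = np.random.rand(500, 50)
>>> gram = np.dot(a.T, a)        # a.T shares data with a: syrk-eligible
>>> b = a.T.copy()
>>> gram2 = np.dot(b, a)         # b is a copy, no shared data: plain gemm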
Changes
=======
Pyrex support was removed from ``numpy.distutils``. The method
``build_src.generate_a_pyrex_source`` will remain available; it has been
monkeypatched by users to support Cython instead of Pyrex. It's recommended to
switch to a better supported method of building Cython extensions though.
*np.broadcast* can now be called with a single argument
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The resulting object in that case will simply mimic iteration over
a single array. This change obsoletes distinctions like::
    if len(x) == 1:
        shape = x[0].shape
    else:
        shape = np.broadcast(*x).shape
Instead, ``np.broadcast`` can be used in all cases.
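For instance (a minimal sketch)::
>>> np.broadcast(np.arange(3)).shape
(3,)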
*np.trace* now respects array subclasses
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This behaviour mimics that of other functions such as ``np.diagonal`` and
ensures, e.g., that for masked arrays ``np.trace(ma)`` and ``ma.trace()`` give
the same result.
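A small sketch of the now-consistent behavior::
>>> ma = np.ma.array(np.eye(3), mask=[[1, 0, 0], [0, 0, 0], [0, 0, 0]])
>>> np.trace(ma) == ma.trace()
True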
Deprecations
============
Views of arrays in Fortran order
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The f_contiguous flag was used to signal that views using a dtype that
changed the element size would change the first index. This was always a
bit problematic for arrays that were both f_contiguous and c_contiguous
because c_contiguous took precedence. Relaxed stride checking results in
more such dual contiguous arrays and breaks some existing code as a result.
Note that this also affects changing the dtype by assigning to the dtype
attribute of an array. The aim of this deprecation is to restrict views to
c_contiguous arrays at some future time. A workaround that is backward
compatible is to use ``a.T.view(...).T`` instead. A parameter will also be
added to the view method to explicitly ask for Fortran order views, but
that will not be backward compatible.
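A minimal sketch of the backward compatible workaround, for an f_contiguous
array whose element size changes under the new dtype::
>>> a = np.zeros((3, 2), dtype=np.int32, order='F')
>>> a.T.view(np.int16).T.shape      # view taken via the C-contiguous transpose
(6, 2)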
Invalid arguments for array ordering
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
It is currently possible to pass in arguments for the ``order``
parameter in methods like ``array.flatten`` or ``array.ravel``
that are not one of the following: 'C', 'F', 'A', 'K' (note that
all of these possible values are unicode- and case-insensitive).
Such behaviour will not be allowed in future releases.
Random number generator in the ``testing`` namespace
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
The Python standard library random number generator was previously exposed
in the ``testing`` namespace as ``testing.rand``. Using this generator is
not recommended and it will be removed in a future release. Use generators
from the ``numpy.random`` namespace instead.
Random integer generation on a closed interval
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
In accordance with the Python C API, which gives preference to the half-open
interval over the closed one, ``np.random.random_integers`` is being
deprecated in favor of calling ``np.random.randint``, which has been
enhanced with the ``dtype`` parameter as described under "New Features".
However, ``np.random.random_integers`` will not be removed anytime soon.
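A minimal migration sketch (note that the upper bound of ``randint`` is
exclusive)::
# deprecated: samples from the closed interval [1, 6]
rolls_old = np.random.random_integers(1, 6, size=3)
# preferred replacement: the half-open interval [1, 7) yields the same values
rolls_new = np.random.randint(1, 7, size=3)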