35 files changed, 43 insertions, 44 deletions
diff --git a/doc/source/dev/gitwash/development_setup.rst b/doc/source/dev/gitwash/development_setup.rst
index 2be7125da..a2fc61d2e 100644
--- a/doc/source/dev/gitwash/development_setup.rst
+++ b/doc/source/dev/gitwash/development_setup.rst
@@ -112,7 +112,7 @@ Look it over
   - the ``main`` branch you just cloned on your own machine
   - the ``main`` branch from your fork on GitHub, which git named ``origin``
     by default
-  - the ``main`` branch on the the main NumPy repo, which you named
+  - the ``main`` branch on the main NumPy repo, which you named
     ``upstream``.
 
 ::
diff --git a/doc/source/reference/c-api/iterator.rst b/doc/source/reference/c-api/iterator.rst
index 83644d8b2..b4adaef9b 100644
--- a/doc/source/reference/c-api/iterator.rst
+++ b/doc/source/reference/c-api/iterator.rst
@@ -653,7 +653,7 @@ Construction and Destruction
     may not be repeated. The following example is how normal broadcasting
     applies to a 3-D array, a 2-D array, a 1-D array and a scalar.
 
-    **Note**: Before NumPy 1.8 ``oa_ndim == 0` was used for signalling that
+    **Note**: Before NumPy 1.8 ``oa_ndim == 0` was used for signalling that
     ``op_axes`` and ``itershape`` are unused. This is deprecated and should
     be replaced with -1. Better backward compatibility may be achieved by
     using :c:func:`NpyIter_MultiNew` for this case.
diff --git a/doc/source/reference/c-api/ufunc.rst b/doc/source/reference/c-api/ufunc.rst
index 2909ce9af..39447ae24 100644
--- a/doc/source/reference/c-api/ufunc.rst
+++ b/doc/source/reference/c-api/ufunc.rst
@@ -171,7 +171,7 @@ Functions
         `numpy.dtype.num` (built-in only) that the corresponding
         function in the ``func`` array accepts. For instance, for a comparison
         ufunc with three ``ntypes``, two ``nin`` and one ``nout``, where the
-        first function accepts `numpy.int32` and the the second
+        first function accepts `numpy.int32` and the second
         `numpy.int64`, with both returning `numpy.bool_`, ``types`` would
         be ``(char[]) {5, 5, 0, 7, 7, 0}`` since ``NPY_INT32`` is 5,
         ``NPY_INT64`` is 7, and ``NPY_BOOL`` is 0.
diff --git a/doc/source/reference/random/parallel.rst b/doc/source/reference/random/parallel.rst
index 7f0207bde..bff955948 100644
--- a/doc/source/reference/random/parallel.rst
+++ b/doc/source/reference/random/parallel.rst
@@ -28,8 +28,8 @@ streams.
 
 `~SeedSequence` avoids these problems by using successions of integer hashes
 with good `avalanche properties`_ to ensure that flipping any bit in the input
-input has about a 50% chance of flipping any bit in the output. Two input seeds
-that are very close to each other will produce initial states that are very far
+has about a 50% chance of flipping any bit in the output. Two input seeds that
+are very close to each other will produce initial states that are very far
 from each other (with very high probability). It is also constructed in such
 a way that you can provide arbitrary-sized integers or lists of integers.
 `~SeedSequence` will take all of the bits that you provide and mix them
diff --git a/doc/source/reference/swig.interface-file.rst b/doc/source/reference/swig.interface-file.rst
index 6dd74f4ec..a22b98d39 100644
--- a/doc/source/reference/swig.interface-file.rst
+++ b/doc/source/reference/swig.interface-file.rst
@@ -904,7 +904,7 @@ Routines
 
   * ``PyArrayObject* ary``, a NumPy array.
 
-  Require the given ``PyArrayObject`` to to be Fortran ordered. If
+  Require the given ``PyArrayObject`` to be Fortran ordered. If
   the ``PyArrayObject`` is already Fortran ordered, do nothing. Else,
   set the Fortran ordering flag and recompute the strides.
diff --git a/doc/source/user/basics.subclassing.rst b/doc/source/user/basics.subclassing.rst
index 55b23bb78..cee794b8c 100644
--- a/doc/source/user/basics.subclassing.rst
+++ b/doc/source/user/basics.subclassing.rst
@@ -523,7 +523,7 @@ which inputs and outputs it converted. Hence, e.g.,
 >>> a.info
 {'inputs': [0, 1], 'outputs': [0]}
 
-Note that another approach would be to to use ``getattr(ufunc,
+Note that another approach would be to use ``getattr(ufunc,
 methods)(*inputs, **kwargs)`` instead of the ``super`` call. For this example,
 the result would be identical, but there is a difference if another operand
 also defines ``__array_ufunc__``. E.g., lets assume that we evalulate
diff --git a/doc/source/user/building.rst b/doc/source/user/building.rst
index 1a1220502..f6554c5e2 100644
--- a/doc/source/user/building.rst
+++ b/doc/source/user/building.rst
@@ -341,7 +341,7 @@ intended host and not the build system, set::
 
 where ``${ARCH_TRIPLET}`` is an architecture-dependent suffix appropriate for
 the host architecture. (This should be the name of a ``_sysconfigdata`` file,
-without the ``.py`` extension, found in in the host Python library directory.)
+without the ``.py`` extension, found in the host Python library directory.)
 
 When using external linear algebra libraries, include and library directories
 should be provided for the desired libraries in ``site.cfg`` as described
diff --git a/doc/source/user/c-info.how-to-extend.rst b/doc/source/user/c-info.how-to-extend.rst
index 155d56306..ffa141b95 100644
--- a/doc/source/user/c-info.how-to-extend.rst
+++ b/doc/source/user/c-info.how-to-extend.rst
@@ -111,7 +111,7 @@ Defining functions
 ==================
 
 The second argument passed in to the Py_InitModule function is a
-structure that makes it easy to to define functions in the module. In
+structure that makes it easy to define functions in the module. In
 the example given above, the mymethods structure would have been
 defined earlier in the file (usually right before the init{name}
 subroutine) to:
diff --git a/numpy/core/_add_newdocs.py b/numpy/core/_add_newdocs.py
index a800143a8..7081f9a59 100644
--- a/numpy/core/_add_newdocs.py
+++ b/numpy/core/_add_newdocs.py
@@ -5253,7 +5253,7 @@ add_newdoc('numpy.core', 'ufunc', ('accumulate',
     dtype : data-type code, optional
         The data-type used to represent the intermediate results. Defaults
         to the data-type of the output array if such is provided, or the
-        the data-type of the input array if no output array is provided.
+        data-type of the input array if no output array is provided.
     out : ndarray, None, or tuple of ndarray and None, optional
         A location into which the result is stored. If not provided or None,
         a freshly-allocated array is returned. For consistency with
diff --git a/numpy/core/numeric.py b/numpy/core/numeric.py
index 014fa0a39..3e9b6c414 100644
--- a/numpy/core/numeric.py
+++ b/numpy/core/numeric.py
@@ -136,7 +136,7 @@ def zeros_like(a, dtype=None, order='K', subok=True, shape=None):
 
     """
     res = empty_like(a, dtype=dtype, order=order, subok=subok, shape=shape)
-    # needed instead of a 0 to get same result as zeros for for string dtypes
+    # needed instead of a 0 to get same result as zeros for string dtypes
     z = zeros(1, dtype=res.dtype)
     multiarray.copyto(res, z, casting='unsafe')
     return res
diff --git a/numpy/core/src/multiarray/array_coercion.c b/numpy/core/src/multiarray/array_coercion.c
index 2598e4bde..ff77d6883 100644
--- a/numpy/core/src/multiarray/array_coercion.c
+++ b/numpy/core/src/multiarray/array_coercion.c
@@ -67,8 +67,8 @@
  *
  * The code here avoid multiple conversion of array-like objects (including
  * sequences). These objects are cached after conversion, which will require
- * additional memory, but can drastically speed up coercion from from array
- * like objects.
+ * additional memory, but can drastically speed up coercion from array like
+ * objects.
  */
diff --git a/numpy/core/src/multiarray/common.c b/numpy/core/src/multiarray/common.c
index aa95d285a..8264f83b2 100644
--- a/numpy/core/src/multiarray/common.c
+++ b/numpy/core/src/multiarray/common.c
@@ -108,8 +108,8 @@ PyArray_DTypeFromObjectStringDiscovery(
 
 /*
  * This function is now identical to the new PyArray_DiscoverDTypeAndShape
- * but only returns the the dtype. It should in most cases be slowly phased
- * out. (Which may need some refactoring to PyArray_FromAny to make it simpler)
+ * but only returns the dtype. It should in most cases be slowly phased out.
+ * (Which may need some refactoring to PyArray_FromAny to make it simpler)
  */
 NPY_NO_EXPORT int
 PyArray_DTypeFromObject(PyObject *obj, int maxdims, PyArray_Descr **out_dtype)
diff --git a/numpy/core/src/multiarray/convert_datatype.c b/numpy/core/src/multiarray/convert_datatype.c
index b21fc3cfa..09a92b33c 100644
--- a/numpy/core/src/multiarray/convert_datatype.c
+++ b/numpy/core/src/multiarray/convert_datatype.c
@@ -3656,7 +3656,7 @@ PyArray_GetObjectToGenericCastingImpl(void)
 
 
-/* Any object object is simple (could even use the default) */
+/* Any object is simple (could even use the default) */
 static NPY_CASTING
 any_to_object_resolve_descriptors(
         PyArrayMethodObject *NPY_UNUSED(self),
diff --git a/numpy/core/src/multiarray/ctors.c b/numpy/core/src/multiarray/ctors.c
index 25eb91977..17a49091a 100644
--- a/numpy/core/src/multiarray/ctors.c
+++ b/numpy/core/src/multiarray/ctors.c
@@ -1637,8 +1637,8 @@ PyArray_FromAny(PyObject *op, PyArray_Descr *newtype, int min_depth,
          * Thus, we check if there is an array included, in that case we
          * give a FutureWarning.
          * When the warning is removed, PyArray_Pack will have to ensure
-         * that that it does not append the dimensions when creating the
-         * subarrays to assign `arr[0] = obj[0]`.
+         * that it does not append the dimensions when creating the subarrays
+         * to assign `arr[0] = obj[0]`.
          */
         int includes_array = 0;
         if (cache != NULL) {
diff --git a/numpy/core/src/multiarray/dtype_transfer.c b/numpy/core/src/multiarray/dtype_transfer.c
index 4877f8dfa..b0db94817 100644
--- a/numpy/core/src/multiarray/dtype_transfer.c
+++ b/numpy/core/src/multiarray/dtype_transfer.c
@@ -3393,8 +3393,8 @@ wrap_aligned_transferfunction(
 * For casts between two dtypes with the same type (within DType casts)
 * it also wraps the `copyswapn` function.
 *
- * This function is called called from `ArrayMethod.get_loop()` when a
- * specialized cast function is missing.
+ * This function is called from `ArrayMethod.get_loop()` when a specialized
+ * cast function is missing.
 *
 * In general, the legacy cast functions do not support unaligned access,
 * so an ArrayMethod using this must signal that. In a few places we do
diff --git a/numpy/core/src/multiarray/nditer_constr.c b/numpy/core/src/multiarray/nditer_constr.c
index bf32e1f6b..0b9717ade 100644
--- a/numpy/core/src/multiarray/nditer_constr.c
+++ b/numpy/core/src/multiarray/nditer_constr.c
@@ -992,7 +992,7 @@ npyiter_check_per_op_flags(npy_uint32 op_flags, npyiter_opitflags *op_itflags)
 }
 
 /*
- * Prepares a a constructor operand. Assumes a reference to 'op'
+ * Prepares a constructor operand. Assumes a reference to 'op'
 * is owned, and that 'op' may be replaced. Fills in 'op_dataptr',
 * 'op_dtype', and may modify 'op_itflags'.
 *
diff --git a/numpy/core/src/npymath/npy_math_complex.c.src b/numpy/core/src/npymath/npy_math_complex.c.src
index 8c432e483..ce2772273 100644
--- a/numpy/core/src/npymath/npy_math_complex.c.src
+++ b/numpy/core/src/npymath/npy_math_complex.c.src
@@ -1696,7 +1696,7 @@ npy_catanh@c@(@ctype@ z)
     if (ax < SQRT_3_EPSILON / 2 && ay < SQRT_3_EPSILON / 2) {
         /*
          * z = 0 was filtered out above. All other cases must raise
-         * inexact, but this is the only only that needs to do it
+         * inexact, but this is the only one that needs to do it
          * explicitly.
          */
         raise_inexact();
diff --git a/numpy/core/src/umath/dispatching.c b/numpy/core/src/umath/dispatching.c
index 81d47a0e1..6f541340e 100644
--- a/numpy/core/src/umath/dispatching.c
+++ b/numpy/core/src/umath/dispatching.c
@@ -78,7 +78,7 @@ NPY_NO_EXPORT int
 PyUFunc_AddLoop(PyUFuncObject *ufunc, PyObject *info, int ignore_duplicate)
 {
     /*
-     * Validate the info object, this should likely move to to a different
+     * Validate the info object, this should likely move to a different
     * entry-point in the future (and is mostly unnecessary currently).
     */
    if (!PyTuple_CheckExact(info) || PyTuple_GET_SIZE(info) != 2) {
diff --git a/numpy/core/src/umath/ufunc_type_resolution.c b/numpy/core/src/umath/ufunc_type_resolution.c
index 90846ca55..a7df09b8f 100644
--- a/numpy/core/src/umath/ufunc_type_resolution.c
+++ b/numpy/core/src/umath/ufunc_type_resolution.c
@@ -416,7 +416,7 @@ PyUFunc_SimpleBinaryComparisonTypeResolver(PyUFuncObject *ufunc,
         }
     }
     else {
-        /* Usually a failure, but let the the default version handle it */
+        /* Usually a failure, but let the default version handle it */
         return PyUFunc_DefaultTypeResolver(ufunc, casting, operands,
                 type_tup, out_dtypes);
     }
diff --git a/numpy/core/tests/test_indexing.py b/numpy/core/tests/test_indexing.py
index 1c2253856..ff999a7b9 100644
--- a/numpy/core/tests/test_indexing.py
+++ b/numpy/core/tests/test_indexing.py
@@ -1332,7 +1332,7 @@ class TestBooleanIndexing:
 
 
 class TestArrayToIndexDeprecation:
-    """Creating an an index from array not 0-D is an error.
+    """Creating an index from array not 0-D is an error.
 
     """
     def test_array_to_index_error(self):
diff --git a/numpy/doc/ufuncs.py b/numpy/doc/ufuncs.py
index eecc15083..c99e9abc9 100644
--- a/numpy/doc/ufuncs.py
+++ b/numpy/doc/ufuncs.py
@@ -75,7 +75,7 @@ The axis keyword can be used to specify different axes to reduce: ::
  >>> np.add.reduce(np.arange(10).reshape(2,5),axis=1)
  array([10, 35])
 
-**.accumulate(arr)** applies the binary operator and generates an an
+**.accumulate(arr)** applies the binary operator and generates an
 equivalently shaped array that includes the accumulated amount for each
 element of the array. A couple examples: ::
diff --git a/numpy/lib/function_base.py b/numpy/lib/function_base.py
index a215f63d3..d4abde425 100644
--- a/numpy/lib/function_base.py
+++ b/numpy/lib/function_base.py
@@ -3551,8 +3551,8 @@ def sinc(x):
     Parameters
     ----------
     x : ndarray
-        Array (possibly multi-dimensional) of values for which to to
-        calculate ``sinc(x)``.
+        Array (possibly multi-dimensional) of values for which to calculate
+        ``sinc(x)``.
 
     Returns
     -------
diff --git a/numpy/lib/histograms.py b/numpy/lib/histograms.py
index b6909bc1d..44e4b51c4 100644
--- a/numpy/lib/histograms.py
+++ b/numpy/lib/histograms.py
@@ -506,8 +506,8 @@ def histogram_bin_edges(a, bins=10, range=None, weights=None):
             with non-normal datasets.
 
         'scott'
-            Less robust estimator that that takes into account data
-            variability and data size.
+            Less robust estimator that takes into account data variability
+            and data size.
 
         'stone'
             Estimator based on leave-one-out cross-validation estimate of
diff --git a/numpy/lib/nanfunctions.py b/numpy/lib/nanfunctions.py
index d7ea1ca65..cf76e7909 100644
--- a/numpy/lib/nanfunctions.py
+++ b/numpy/lib/nanfunctions.py
@@ -188,9 +188,8 @@ def _divide_by_count(a, b, out=None):
     """
     Compute a/b ignoring invalid results. If `a` is an array the division
     is done in place. If `a` is a scalar, then its type is preserved in the
-    output. If out is None, then then a is used instead so that the
-    division is in place. Note that this is only called with `a` an inexact
-    type.
+    output. If out is None, then a is used instead so that the division
+    is in place. Note that this is only called with `a` an inexact type.
 
     Parameters
     ----------
diff --git a/numpy/polynomial/chebyshev.py b/numpy/polynomial/chebyshev.py
index 89ce815d5..5c595bcf6 100644
--- a/numpy/polynomial/chebyshev.py
+++ b/numpy/polynomial/chebyshev.py
@@ -1119,7 +1119,7 @@ def chebval(x, c, tensor=True):
         If `x` is a list or tuple, it is converted to an ndarray, otherwise
         it is left unchanged and treated as a scalar. In either case, `x`
         or its elements must support addition and multiplication with
-        with themselves and with the elements of `c`.
+        themselves and with the elements of `c`.
     c : array_like
         Array of coefficients ordered so that the coefficients for terms of
         degree n are contained in c[n]. If `c` is multidimensional the
diff --git a/numpy/polynomial/hermite.py b/numpy/polynomial/hermite.py
index 9b0735a9a..e20339121 100644
--- a/numpy/polynomial/hermite.py
+++ b/numpy/polynomial/hermite.py
@@ -827,7 +827,7 @@ def hermval(x, c, tensor=True):
         If `x` is a list or tuple, it is converted to an ndarray, otherwise
         it is left unchanged and treated as a scalar. In either case, `x`
         or its elements must support addition and multiplication with
-        with themselves and with the elements of `c`.
+        themselves and with the elements of `c`.
     c : array_like
         Array of coefficients ordered so that the coefficients for terms of
         degree n are contained in c[n]. If `c` is multidimensional the
diff --git a/numpy/polynomial/laguerre.py b/numpy/polynomial/laguerre.py
index d9ca373dd..5d058828d 100644
--- a/numpy/polynomial/laguerre.py
+++ b/numpy/polynomial/laguerre.py
@@ -826,7 +826,7 @@ def lagval(x, c, tensor=True):
         If `x` is a list or tuple, it is converted to an ndarray, otherwise
         it is left unchanged and treated as a scalar. In either case, `x`
         or its elements must support addition and multiplication with
-        with themselves and with the elements of `c`.
+        themselves and with the elements of `c`.
     c : array_like
         Array of coefficients ordered so that the coefficients for terms of
         degree n are contained in c[n]. If `c` is multidimensional the
diff --git a/numpy/polynomial/legendre.py b/numpy/polynomial/legendre.py
index 2e8052e7c..23a2c089a 100644
--- a/numpy/polynomial/legendre.py
+++ b/numpy/polynomial/legendre.py
@@ -857,7 +857,7 @@ def legval(x, c, tensor=True):
         If `x` is a list or tuple, it is converted to an ndarray, otherwise
         it is left unchanged and treated as a scalar. In either case, `x`
         or its elements must support addition and multiplication with
-        with themselves and with the elements of `c`.
+        themselves and with the elements of `c`.
     c : array_like
         Array of coefficients ordered so that the coefficients for terms of
         degree n are contained in c[n]. If `c` is multidimensional the
diff --git a/numpy/random/_common.pyx b/numpy/random/_common.pyx
index 864150458..607034a38 100644
--- a/numpy/random/_common.pyx
+++ b/numpy/random/_common.pyx
@@ -65,7 +65,7 @@ cdef object random_raw(bitgen_t *bitgen, object lock, object size, object output
 
     Notes
     -----
-    This method directly exposes the the raw underlying pseudo-random
+    This method directly exposes the raw underlying pseudo-random
     number generator. All values are returned as unsigned 64-bit
     values irrespective of the number of bits produced by the PRNG.
diff --git a/numpy/random/_examples/cython/extending.pyx b/numpy/random/_examples/cython/extending.pyx
index 3a7f81aa0..30efd7447 100644
--- a/numpy/random/_examples/cython/extending.pyx
+++ b/numpy/random/_examples/cython/extending.pyx
@@ -31,7 +31,7 @@ def uniform_mean(Py_ssize_t n):
     random_values = np.empty(n)
     # Best practice is to acquire the lock whenever generating random values.
     # This prevents other threads from modifying the state. Acquiring the lock
-    # is only necessary if if the GIL is also released, as in this example.
+    # is only necessary if the GIL is also released, as in this example.
     with x.lock, nogil:
         for i in range(n):
             random_values[i] = rng.next_double(rng.state)
diff --git a/numpy/random/bit_generator.pyx b/numpy/random/bit_generator.pyx
index 123d77b40..fe45f85b0 100644
--- a/numpy/random/bit_generator.pyx
+++ b/numpy/random/bit_generator.pyx
@@ -576,7 +576,7 @@ cdef class BitGenerator():
 
         Notes
         -----
-        This method directly exposes the the raw underlying pseudo-random
+        This method directly exposes the raw underlying pseudo-random
        number generator. All values are returned as unsigned 64-bit
        values irrespective of the number of bits produced by the PRNG.
diff --git a/numpy/random/tests/test_generator_mt19937.py b/numpy/random/tests/test_generator_mt19937.py
index e16a82973..7c61038a4 100644
--- a/numpy/random/tests/test_generator_mt19937.py
+++ b/numpy/random/tests/test_generator_mt19937.py
@@ -2563,7 +2563,7 @@ class TestSingleEltArrayInput:
 def test_jumped(config):
     # Each config contains the initial seed, a number of raw steps
     # the sha256 hashes of the initial and the final states' keys and
-    # the position of of the initial and the final state.
+    # the position of the initial and the final state.
     # These were produced using the original C implementation.
     seed = config["seed"]
     steps = config["steps"]
diff --git a/numpy/testing/_private/parameterized.py b/numpy/testing/_private/parameterized.py
index db9629a94..3a29a1811 100644
--- a/numpy/testing/_private/parameterized.py
+++ b/numpy/testing/_private/parameterized.py
@@ -1,5 +1,5 @@
 """
-tl;dr: all code code is licensed under simplified BSD, unless stated otherwise.
+tl;dr: all code is licensed under simplified BSD, unless stated otherwise.
 
 Unless stated otherwise in the source files, all code is copyright 2010 David
 Wolever <david@wolever.net>. All rights reserved.
diff --git a/tools/swig/README b/tools/swig/README
index 7fa0599c6..c539c597f 100644
--- a/tools/swig/README
+++ b/tools/swig/README
@@ -15,7 +15,7 @@ system used here, can be found in the NumPy reference guide.
 Testing
 -------
 The tests are a good example of what we are trying to do with numpy.i.
-The files related to testing are are in the test subdirectory::
+The files related to testing are in the test subdirectory::
 
     Vector.h
     Vector.cxx
diff --git a/tools/swig/numpy.i b/tools/swig/numpy.i
index 99ed073ab..0ef92bab1 100644
--- a/tools/swig/numpy.i
+++ b/tools/swig/numpy.i
@@ -524,7 +524,7 @@
     return success;
   }
 
-  /* Require the given PyArrayObject to to be Fortran ordered. If the
+  /* Require the given PyArrayObject to be Fortran ordered. If the
    * the PyArrayObject is already Fortran ordered, do nothing. Else,
    * set the Fortran ordering flag and recompute the strides.
    */
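All of the changes above are the same kind of fix: a word accidentally repeated, usually across a wrapped line. As an illustration only (this script is not part of the commit or of NumPy's own tooling; the file name, extension list, and regular expression are assumptions), a short Python sketch along these lines can flag such doubled words in a source tree before they reach review:

# find_doubled_words.py -- illustrative sketch, NOT part of this commit or of
# NumPy's tooling; the extension list and pattern below are assumptions.
import re
import sys
from pathlib import Path

# Two identical word tokens separated only by whitespace. \s also matches a
# newline, so "the\n    the" split across a wrapped line is caught as well.
DOUBLED = re.compile(r"\b([A-Za-z]+)\s+\1\b", re.IGNORECASE)

SUFFIXES = {".py", ".pyx", ".rst", ".c", ".h", ".i", ".src"}

def find_doubled_words(root="."):
    """Yield (path, line_number, word) for each doubled word under root."""
    for path in Path(root).rglob("*"):
        if path.suffix not in SUFFIXES or not path.is_file():
            continue
        try:
            text = path.read_text(encoding="utf-8")
        except (UnicodeDecodeError, OSError):
            continue
        for match in DOUBLED.finditer(text):
            # Line number = number of newlines before the match, plus one.
            yield path, text.count("\n", 0, match.start()) + 1, match.group(1)

if __name__ == "__main__":
    root = sys.argv[1] if len(sys.argv) > 1 else "."
    for path, lineno, word in find_doubled_words(root):
        print(f"{path}:{lineno}: doubled word '{word}'")

Because the pattern allows any whitespace between the two tokens, it also catches repeats that straddle a line break, which is how most of the typos fixed above slipped past review. Expect some false positives (for example an intentional "that that"), so the output is meant to be read by a person, not applied automatically.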