NumPy 1.12.0 Release Notes — NumPy v2.0 Manual (2024)

Table of Contents
  • Highlights
  • Dropped Support
  • Added Support
  • Build System Changes
  • Deprecations
      • Assignment of ndarray object’s data attribute
      • Unsafe int casting of the num attribute in linspace
      • Insufficient bit width parameter to binary_repr
  • Future Changes
      • Multiple-field manipulation of structured arrays
  • Compatibility notes
      • DeprecationWarning to error
      • FutureWarning to changed behavior
      • power and ** raise errors for integer to negative integer powers
      • Relaxed stride checking is the default
      • The np.percentile ‘midpoint’ interpolation method fixed for exact indices
      • keepdims kwarg is passed through to user-class methods
      • bitwise_and identity changed
      • ma.median warns and returns nan when unmasked invalid values are encountered
      • Greater consistency in assert_almost_equal
      • NoseTester behaviour of warnings during testing
      • assert_warns and deprecated decorator more specific
  • C API
  • New Features
      • Writeable keyword argument for as_strided
      • axes keyword argument for rot90
      • Generalized flip
      • BLIS support in numpy.distutils
      • Hook in numpy/__init__.py to run distribution-specific checks
      • New nanfunctions nancumsum and nancumprod added
      • np.interp can now interpolate complex values
      • New polynomial evaluation function polyvalfromroots added
      • New array creation function geomspace added
      • New context manager for testing warnings
      • New masked array functions ma.convolve and ma.correlate added
      • New float_power ufunc
      • np.loadtxt now supports a single integer as usecols argument
      • Improved automated bin estimators for histogram
      • np.roll can now roll multiple axes at the same time
      • The __complex__ method has been implemented for the ndarrays
      • pathlib.Path objects now supported
      • New bits attribute for np.finfo
      • New signature argument to np.vectorize
      • Emit py3kwarnings for division of integer arrays
      • numpy.sctypes now includes bytes on Python3 too
  • Improvements
      • bitwise_and identity changed
      • Generalized Ufuncs will now unlock the GIL
      • Caches in np.fft are now bounded in total size and item count
      • Improved handling of zero-width string/unicode dtypes
      • Integer ufuncs vectorized with AVX2
      • Order of operations optimization in np.einsum
      • quicksort has been changed to an introsort
      • ediff1d improved performance and subclass handling
      • Improved precision of ndarray.mean for float16 arrays
  • Changes
      • All array-like methods are now called with keyword arguments in fromnumeric.py
      • Operations on np.memmap objects return numpy arrays in most cases
      • stacklevel of warnings increased

This release supports Python 2.7 and 3.4 - 3.6.

Highlights#

The NumPy 1.12.0 release contains a large number of fixes and improvements, but few that stand out above all others. That makes picking out the highlights somewhat arbitrary, but the following may be of particular interest or indicate areas likely to have future consequences.

  • Order of operations in np.einsum can now be optimized for large speed improvements.

  • New signature argument to np.vectorize for vectorizing with core dimensions.

  • The keepdims argument was added to many functions.

  • New context manager for testing warnings

  • Support for BLIS in numpy.distutils

  • Much improved support for PyPy (not yet finished)

Dropped Support#

  • Support for Python 2.6, 3.2, and 3.3 has been dropped.

Added Support#

  • Support for PyPy 2.7 v5.6.0 has been added. While not complete (nditer updateifcopy is not supported yet), this is a milestone for PyPy’s C-API compatibility layer.

Build System Changes#

  • Library order is preserved, instead of being reordered to match that of the directories.

Deprecations#

Assignment of ndarray object’s data attribute#

Assigning the ‘data’ attribute is an inherently unsafe operation, as pointed out in gh-7083. Such a capability will be removed in the future.

Unsafe int casting of the num attribute in linspace#

np.linspace now raises DeprecationWarning when num cannot be safely interpreted as an integer.

Insufficient bit width parameter to binary_repr#

If a ‘width’ parameter is passed into binary_repr that is insufficient to represent the number in base 2 (positive) or 2’s complement (negative) form, the function used to silently ignore the parameter and return a representation using the minimal number of bits needed for the form in question. Such behavior is now considered unsafe from a user perspective and will raise an error in the future.
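For illustration, a short sketch of the width parameter in use (the particular values are illustrative):

```python
import numpy as np

# A sufficient width pads the representation with leading zeros.
print(np.binary_repr(10, width=8))   # '00001010'

# Negative numbers use two's complement form when width is given.
print(np.binary_repr(-3, width=8))   # '11111101'
```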

Future Changes#

  • In 1.13 NAT will always compare False except for NAT != NAT, which will be True. In short, NAT will behave like NaN.

  • In 1.13 np.average will preserve subclasses, to match the behavior of most other numpy functions such as np.mean. In particular, this means calls which returned a scalar may return a 0-d subclass object instead.

Multiple-field manipulation of structured arrays#

In 1.13 the behavior of structured arrays involving multiple fields will change in two ways:

First, indexing a structured array with multiple fields (e.g., arr[['f1', 'f3']]) will return a view into the original array in 1.13, instead of a copy. Note the returned view will have extra padding bytes corresponding to intervening fields in the original array, unlike the copy in 1.12, which will affect code such as arr[['f1', 'f3']].view(newdtype).

Second, for numpy versions 1.6 to 1.12 assignment between structured arrays occurs “by field name”: fields in the destination array are set to the identically-named field in the source array, or to 0 if the source does not have a field:

>>> a = np.array([(1,2),(3,4)], dtype=[('x', 'i4'), ('y', 'i4')])
>>> b = np.ones(2, dtype=[('z', 'i4'), ('y', 'i4'), ('x', 'i4')])
>>> b[:] = a
>>> b
array([(0, 2, 1), (0, 4, 3)],
      dtype=[('z', '<i4'), ('y', '<i4'), ('x', '<i4')])

In 1.13 assignment will instead occur “by position”: the Nth field of the destination will be set to the Nth field of the source regardless of field name. The old behavior can be obtained by using indexing to reorder the fields before assignment, e.g., b[['x', 'y']] = a[['y', 'x']].

Compatibility notes#

DeprecationWarning to error#

FutureWarning to changed behavior#

  • np.full now returns an array of the fill-value’s dtype if no dtype is given, instead of defaulting to float.

  • np.average will emit a warning if the argument is a subclass of ndarray,as the subclass will be preserved starting in 1.13. (see Future Changes)

power and ** raise errors for integer to negative integer powers#

The previous behavior depended on whether numpy scalar integers or numpyinteger arrays were involved.

For arrays

  • Zero to negative integer powers returned least integral value.

  • Both 1, -1 to negative integer powers returned correct values.

  • The remaining integers returned zero when raised to negative integer powers.

For scalars

  • Zero to negative integer powers returned least integral value.

  • Both 1, -1 to negative integer powers returned correct values.

  • The remaining integers sometimes returned zero, sometimes the correct float depending on the integer type combination.

All of these cases now raise a ValueError except for those integer combinations whose common type is float, for instance uint64 and int8. It was felt that a simple rule was the best way to go rather than have special exceptions for the integer units. If you need negative powers, use an inexact type.
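A minimal sketch of the new rule (the specific operands are illustrative):

```python
import numpy as np

# Integer raised to a negative integer power now raises ValueError.
try:
    np.power(2, -2)
except ValueError as exc:
    print("raised:", exc)

# An inexact (floating) type works as expected.
print(np.power(2.0, -2))   # 0.25
```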

Relaxed stride checking is the default#

This will have some impact on code that assumed that F_CONTIGUOUS and C_CONTIGUOUS were mutually exclusive and could be set to determine the default order for arrays that are now both.

The np.percentile ‘midpoint’ interpolation method fixed for exact indices#

The ‘midpoint’ interpolator now gives the same result as ‘lower’ and ‘higher’ when the two coincide. Previous behavior of ‘lower’ + 0.5 is fixed.

keepdims kwarg is passed through to user-class methods#

numpy functions that take a keepdims kwarg now pass the value through to the corresponding methods on ndarray sub-classes. Previously the keepdims keyword would be silently dropped. These functions now have the following behavior:

  1. If the user does not provide keepdims, no keyword is passed to the underlying method.

  2. Any user-provided value of keepdims is passed through as a keyword argument to the method.

This will raise in the case where the method does not support a keepdims kwarg and the user explicitly passes in keepdims.

The following functions are changed: sum, product, sometrue, alltrue, any, all, amax, amin, prod, mean, std, var, nanmin, nanmax, nansum, nanprod, nanmean, nanmedian, nanvar, nanstd.

bitwise_and identity changed#

The previous identity was 1; it is now -1. See the entry in Improvements for more explanation.

ma.median warns and returns nan when unmasked invalid values are encountered#

Similar to the unmasked median, the masked median ma.median now emits a RuntimeWarning and returns NaN in slices where an unmasked NaN is present.

Greater consistency in assert_almost_equal#

The precision check for scalars has been changed to match that for arrays. It is now:

abs(desired - actual) < 1.5 * 10**(-decimal)

Note that this is looser than previously documented, but agrees with the previous implementation used in assert_array_almost_equal. Due to the change in implementation, some very delicate tests may fail that did not fail before.

NoseTester behaviour of warnings during testing#

When raise_warnings="develop" is given, all uncaught warnings will now be considered a test failure. Previously only selected ones were raised. Warnings which are not caught or raised (mostly when in release mode) will be shown once during the test cycle, similar to the default python settings.

assert_warns and deprecated decorator more specific#

The assert_warns function and context manager are now more specific to the given warning category. This increased specificity leads to them being handled according to the outer warning settings. This means that no warning may be raised in cases where a wrong category warning is given and ignored outside the context. Alternatively, the increased specificity may mean that warnings that were incorrectly ignored will now be shown or raised. See also the new suppress_warnings context manager. The same is true for the deprecated decorator.

C API#

No changes.

New Features#

Writeable keyword argument for as_strided#

np.lib.stride_tricks.as_strided now has a writeable keyword argument. It can be set to False when no write operation to the returned array is expected, to avoid accidental unpredictable writes.
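A short sketch of requesting a read-only strided view (the shape and strides here are illustrative; they build a small overlapping window):

```python
import numpy as np
from numpy.lib.stride_tricks import as_strided

x = np.arange(4)
# Overlapping 2x2 view; writeable=False makes accidental writes fail loudly.
view = as_strided(x, shape=(2, 2),
                  strides=(x.itemsize, x.itemsize),
                  writeable=False)
print(view)                  # [[0 1], [1 2]]
print(view.flags.writeable)  # False
```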

axes keyword argument for rot90#

The axes keyword argument in rot90 determines the plane in which the array is rotated. It defaults to axes=(0,1) as in the original function.
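For example, swapping the order of the axes reverses the direction of rotation:

```python
import numpy as np

m = np.array([[1, 2],
              [3, 4]])
print(np.rot90(m))               # default plane axes=(0, 1): [[2 4], [1 3]]
print(np.rot90(m, axes=(1, 0)))  # opposite direction:        [[3 1], [4 2]]
```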

Generalized flip#

flipud and fliplr reverse the elements of an array along axis=0 and axis=1 respectively. The newly added flip function reverses the elements of an array along any given axis.

  • np.count_nonzero now has an axis parameter, allowing non-zero counts to be generated on more than just a flattened array object.
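Both additions can be sketched on a small array:

```python
import numpy as np

a = np.array([[0, 1],
              [2, 3]])
print(np.flip(a, axis=0))           # reverse rows, like flipud
print(np.flip(a, axis=1))           # reverse columns, like fliplr
print(np.count_nonzero(a, axis=0))  # per-column non-zero counts: [1 2]
```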

BLIS support in numpy.distutils#

Building against the BLAS implementation provided by the BLIS library is now supported. See the [blis] section in site.cfg.example (in the root of the numpy repo or source distribution).

Hook in numpy/__init__.py to run distribution-specific checks#

Binary distributions of numpy may need to run specific hardware checks or load specific libraries during numpy initialization. For example, if we are distributing numpy with a BLAS library that requires SSE2 instructions, we would like to check that the machine on which numpy is running does have SSE2, in order to give an informative error.

Add a hook in numpy/__init__.py to import a numpy/_distributor_init.py file that will remain empty (bar a docstring) in the standard numpy source, but that can be overwritten by people making binary distributions of numpy.

New nanfunctions nancumsum and nancumprod added#

Nan-functions nancumsum and nancumprod have been added to compute cumsum and cumprod by ignoring nans.

np.interp can now interpolate complex values#

np.lib.interp(x, xp, fp) now allows the interpolated array fp to be complex and will interpolate at complex128 precision.
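A minimal sketch with illustrative sample points:

```python
import numpy as np

xp = [0.0, 1.0]
fp = [0 + 0j, 2 + 2j]          # complex sample values
print(np.interp(0.5, xp, fp))  # midpoint: (1+1j)
```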

New polynomial evaluation function polyvalfromroots added#

The new function polyvalfromroots evaluates a polynomial at given points from the roots of the polynomial. This is useful for higher order polynomials, where expansion into polynomial coefficients is inaccurate at machine precision.
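For example, evaluating (x-1)(x-2)(x-3) at x=4 directly from its roots (the polynomial chosen here is illustrative):

```python
import numpy as np
from numpy.polynomial import polynomial as P

# Evaluate the monic polynomial with roots 1, 2, 3 at the point x = 4.
print(P.polyvalfromroots(4, [1, 2, 3]))  # (4-1)*(4-2)*(4-3) = 6.0
```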

New array creation function geomspace added#

The new function geomspace generates a geometric sequence. It is similar to logspace, but with start and stop specified directly: geomspace(start, stop) behaves the same as logspace(log10(start), log10(stop)).
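The equivalence can be sketched as:

```python
import numpy as np

# Endpoints are given directly, not as exponents.
print(np.geomspace(1, 1000, num=4))  # [   1.   10.  100. 1000.]
print(np.logspace(0, 3, num=4))      # same sequence via log10 exponents
```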

New context manager for testing warnings#

A new context manager suppress_warnings has been added to the testing utils. This context manager is designed to help reliably test warnings, specifically to reliably filter/ignore warnings. Ignoring warnings by using an “ignore” filter in Python versions before 3.4.x can quickly result in these (or similar) warnings not being tested reliably.

The context manager allows filtering (as well as recording) warnings similar to the catch_warnings context, but allows for easier specificity. Also, printing warnings that have not been filtered, or nesting the context manager, will work as expected. Additionally, it is possible to use the context manager as a decorator, which can be useful when multiple tests need to hide the same warning.
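A short sketch of recording a specific warning (the warning message is illustrative):

```python
import warnings
from numpy.testing import suppress_warnings

with suppress_warnings() as sup:
    # Record (and thereby suppress) UserWarnings matching "example".
    log = sup.record(UserWarning, "example")
    warnings.warn("example warning", UserWarning)

print(len(log))  # 1 -- the warning was captured, not printed
```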

New masked array functions ma.convolve and ma.correlate added#

These functions wrap the non-masked versions, but propagate through masked values. There are two different propagation modes. The default causes masked values to contaminate the result with masks, while the other mode only outputs masks if there is no alternative.

New float_power ufunc#

The new float_power ufunc is like the power function, except all computation is done in a minimum precision of float64. There was a long discussion on the numpy mailing list of how to treat integers to negative integer powers, and a popular proposal was that the __pow__ operator should always return results of at least float64 precision. The float_power function implements that option. Note that it does not support object arrays.
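For example:

```python
import numpy as np

# Computation is carried out in at least float64, so negative
# integer exponents are fine here.
print(np.float_power(2, 3))    # 8.0
print(np.float_power(2, -2))   # 0.25
```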

np.loadtxt now supports a single integer as usecols argument#

Instead of using usecols=(n,) to read the nth column of a file, it is now allowed to use usecols=n. Also, the error message is more user friendly when a non-integer is passed as a column index.
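A minimal sketch reading one column from in-memory text (the data is illustrative):

```python
import io
import numpy as np

data = io.StringIO("1 2 3\n4 5 6")
# A single integer now works where a tuple like usecols=(1,) was required.
col = np.loadtxt(data, usecols=1)
print(col)  # [2. 5.]
```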

Improved automated bin estimators for histogram#

Added ‘doane’ and ‘sqrt’ estimators to histogram via the bins argument. Added support for range-restricted histograms with automated bin estimation.

np.roll can now roll multiple axes at the same time#

The shift and axis arguments to roll are now broadcast against each other, and each specified axis is shifted accordingly.
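For example, shifting both axes of a small array in a single call:

```python
import numpy as np

x = np.arange(4).reshape(2, 2)   # [[0 1], [2 3]]
# Shift each of the two axes by one position.
print(np.roll(x, shift=(1, 1), axis=(0, 1)))  # [[3 2], [1 0]]
```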

The __complex__ method has been implemented for the ndarrays#

Calling complex() on a size-1 array will now cast it to a python complex.

pathlib.Path objects now supported#

The standard np.load, np.save, np.loadtxt, np.savez, and similar functions can now take pathlib.Path objects as an argument instead of a filename or open file object.

New bits attribute for np.finfo#

This makes np.finfo consistent with np.iinfo, which already has that attribute.

New signature argument to np.vectorize#

This argument allows for vectorizing user defined functions with core dimensions, in the style of NumPy’s generalized universal functions. This allows for vectorizing a much broader class of functions. For example, an arbitrary distance metric that combines two vectors to produce a scalar could be vectorized with signature='(n),(n)->()'. See np.vectorize for full details.
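The distance-metric example above can be sketched as follows (euclidean is a hypothetical helper written for this illustration):

```python
import numpy as np

def euclidean(a, b):
    # Distance between two equal-length 1-d vectors.
    return np.sqrt(((a - b) ** 2).sum())

# '(n),(n)->()' maps pairs of core 1-d vectors to scalars;
# the remaining (loop) dimensions are broadcast.
pairwise = np.vectorize(euclidean, signature='(n),(n)->()')
out = pairwise(np.zeros((3, 2)), np.ones((3, 2)))
print(out.shape)  # (3,) -- one scalar per row pair
```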

Emit py3kwarnings for division of integer arrays#

To help people migrate their code bases from Python 2 to Python 3, the python interpreter has a handy option -3, which issues warnings at runtime. One of its warnings is for integer division:

$ python -3 -c "2/3"
-c:1: DeprecationWarning: classic int division

In Python 3, the new integer division semantics also apply to numpy arrays. With this version, numpy will emit a similar warning:

$ python -3 -c "import numpy as np; np.array(2)/np.array(3)"
-c:1: DeprecationWarning: numpy: classic int division

numpy.sctypes now includes bytes on Python3 too#

Previously, it included str (bytes) and unicode on Python 2, but only str (unicode) on Python 3.

Improvements#

bitwise_and identity changed#

The previous identity was 1, with the result that all bits except the LSB were masked out when the reduce method was used. The new identity is -1, which should work properly on two’s complement machines, as all bits will be set to one.
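The identity is what an empty reduction returns, and with -1 every reduce now ANDs all bits:

```python
import numpy as np

print(np.bitwise_and.identity)                              # -1
print(np.bitwise_and.reduce(np.array([], dtype=np.int64)))  # -1 (empty reduce)
print(np.bitwise_and.reduce(np.array([12, 10])))            # 12 & 10 = 8
```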

Generalized Ufuncs will now unlock the GIL#

Generalized Ufuncs, including most of the linalg module, will now unlockthe Python global interpreter lock.

Caches in np.fft are now bounded in total size and item count#

The caches in np.fft that speed up successive FFTs of the same length can no longer grow without bounds. They have been replaced with LRU (least recently used) caches that automatically evict no longer needed items if either the memory size or item count limit has been reached.

Improved handling of zero-width string/unicode dtypes#

Fixed several interfaces that explicitly disallowed arrays with zero-width string dtypes (i.e. dtype('S0') or dtype('U0')), and fixed several bugs where such dtypes were not handled properly. In particular, changed ndarray.__new__ to not implicitly convert dtype('S0') to dtype('S1') (and likewise for unicode) when creating new arrays.

Integer ufuncs vectorized with AVX2#

If the CPU supports it at runtime, the basic integer ufuncs now use AVX2 instructions. This feature is currently only available when compiled with GCC.

Order of operations optimization in np.einsum#

np.einsum now supports the optimize argument, which will optimize the order of contraction. For example, np.einsum would complete the chain dot example np.einsum('ij,jk,kl->il', a, b, c) in a single pass, which would scale like N^4; however, when optimize=True, np.einsum will create an intermediate array to reduce this scaling to N^3, effectively computing np.dot(a, b).dot(c). Usage of intermediate tensors to reduce scaling has been applied to the general einsum summation notation. See np.einsum_path for more details.
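The chain-dot example above can be sketched as (array sizes are illustrative):

```python
import numpy as np

rng = np.random.RandomState(0)
a, b, c = rng.rand(10, 20), rng.rand(20, 30), rng.rand(30, 5)

# With optimize=True, einsum contracts via intermediates,
# matching the result of a.dot(b).dot(c).
fast = np.einsum('ij,jk,kl->il', a, b, c, optimize=True)
print(np.allclose(fast, a.dot(b).dot(c)))  # True
```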

quicksort has been changed to an introsort#

The quicksort kind of np.sort and np.argsort is now an introsort, which is regular quicksort that changes to a heapsort when not enough progress is made. This retains the good quicksort performance while changing the worst case runtime from O(N^2) to O(N*log(N)).

ediff1d improved performance and subclass handling#

The ediff1d function uses an array instead of a flat iterator for the subtraction. When to_begin or to_end is not None, the subtraction is performed in place to eliminate a copy operation. A side effect is that certain subclasses are handled better, namely astropy.Quantity, since the complete array is created, wrapped, and then the begin and end values are set, instead of using concatenate.
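For reference, ediff1d computes consecutive differences, optionally padded at both ends:

```python
import numpy as np

print(np.ediff1d([1, 2, 4]))                        # [1 2]
print(np.ediff1d([1, 2, 4], to_begin=0, to_end=9))  # [0 1 2 9]
```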

Improved precision of ndarray.mean for float16 arrays#

The computation of the mean of float16 arrays is now carried out in float32 for improved precision. This should be useful in packages such as Theano, where the precision of float16 is adequate and its smaller footprint is desirable.

Changes#

All array-like methods are now called with keyword arguments in fromnumeric.py#

Internally, many array-like methods in fromnumeric.py were being called with positional arguments instead of keyword arguments as their external signatures were doing. This caused a complication in the downstream ‘pandas’ library that encountered an issue with ‘numpy’ compatibility. Now, all array-like methods in this module are called with keyword arguments instead.

Operations on np.memmap objects return numpy arrays in most cases#

Previously, operations on a memmap object would misleadingly return a memmap instance even if the result was actually not memmapped. For example, arr + 1 or arr + arr would return memmap instances, although no memory from the output array is memmapped. Version 1.12 returns ordinary numpy arrays from these operations.

Also, reduction of a memmap (e.g. .sum(axis=None)) now returns a numpy scalar instead of a 0d memmap.

stacklevel of warnings increased#

The stacklevel for python based warnings was increased so that most warnings will report the offending line of the user code instead of the line where the warning itself is given. Passing of stacklevel is now tested to ensure that new warnings will receive the stacklevel argument.

This causes warnings with the “default” or “module” filter to be shown once for every offending user code line or user module, instead of only once. On python versions before 3.4, this can cause warnings to appear that were falsely ignored before, which may be surprising, especially in test suites.
