Python: Can't call numpy() on Tensor that requires grad

Borislav Hadzhiev

Last updated: Apr 11, 2024
5 min

# Table of Contents

  1. Python: Can't call numpy() on Tensor that requires grad
  2. Using the no_grad() context manager to solve the error
  3. Getting the error when drawing a scatter plot in matplotlib

# Python: Can't call numpy() on Tensor that requires grad

The Python "RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead" occurs when you try to convert a tensor with a gradient to a NumPy array.

To solve the error, convert your tensor to one that doesn't require a gradient by using detach().

Here is an example of how the error occurs.

main.py
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

print(t)  # 👉️ tensor([1., 2., 3.], requires_grad=True)
print(type(t))  # 👉️ <class 'torch.Tensor'>

# ⛔️ RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
t = t.numpy()

When the requires_grad attribute is set to True, PyTorch tracks operations on the tensor so that gradients can be computed for it, which is why the tensor can't be converted to a NumPy array directly.
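You can verify this by checking the requires_grad attribute. As a minimal sketch, a tensor created without requires_grad=True can be converted with numpy() directly:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
print(t.requires_grad)  # 👉️ True

# a tensor created without requires_grad=True converts directly
t2 = torch.tensor([4.0, 5.0, 6.0])
print(t2.requires_grad)  # 👉️ False
print(t2.numpy())  # 👉️ [4. 5. 6.]
```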

To solve the error, use the tensor.detach() method to convert the tensor to one that doesn't require a gradient before calling numpy().

main.py
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

print(t)  # 👉️ tensor([1., 2., 3.], requires_grad=True)
print(type(t))  # 👉️ <class 'torch.Tensor'>

# ✅ Call detach() before calling numpy()
t = t.detach().numpy()

print(t)  # 👉️ [1. 2. 3.]
print(type(t))  # 👉️ <class 'numpy.ndarray'>

The tensor.detach() method returns a new Tensor that is detached from the current graph.

The result never requires a gradient.

In other words, the method returns a new tensor that shares the same storage but doesn't track gradients (requires_grad is set to False).

The new tensor can safely be converted to a NumPy ndarray by calling the tensor.numpy() method.
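Keep in mind that because the detached tensor shares storage with the original, the NumPy array returned by numpy() also shares that memory. Here is a minimal sketch of this behavior; if you need an independent copy, call clone() before numpy():

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

arr = t.detach().numpy()
arr[0] = 100.0  # modifies the original tensor's data as well

print(t)  # 👉️ tensor([100., 2., 3.], requires_grad=True)

# call clone() first if you need a copy that doesn't share storage
arr_copy = t.detach().clone().numpy()
```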

If you have a list of tensors, use a list comprehension to iterate over the list and call detach() on each tensor.

main.py
import torch

t1 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
t2 = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)

tensors = [t1, t2]

result = [t.detach().numpy() for t in tensors]

# 👇️ [array([1., 2., 3.], dtype=float32), array([4., 5., 6.], dtype=float32)]
print(result)

We used a list comprehension to iterate over the list of tensors.

List comprehensions are used to perform an operation on every element in a list or to select a subset of elements that meet a condition.

On each iteration, we call detach() before calling numpy() so no error is raised.
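If all of the tensors in the list have the same shape, an alternative sketch is to stack them into a single tensor first and call detach() only once, which produces a single 2D array instead of a list of arrays:

```python
import torch

t1 = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
t2 = torch.tensor([4.0, 5.0, 6.0], requires_grad=True)

# stack into one 2D tensor, then detach and convert in one go
result = torch.stack([t1, t2]).detach().numpy()

print(result)
# 👇️ [[1. 2. 3.]
#     [4. 5. 6.]]
```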

# Using the no_grad() context manager to solve the error

You can also use the no_grad() context manager to solve the error.

The context manager disables gradient calculation.

main.py
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

print(t)  # 👉️ tensor([1., 2., 3.], requires_grad=True)
print(type(t))  # 👉️ <class 'torch.Tensor'>

with torch.no_grad():
    t = t.detach().numpy()

print(t)  # 👉️ [1. 2. 3.]
print(type(t))  # 👉️ <class 'numpy.ndarray'>

The no_grad context manager disables gradient calculation.

In the context manager (the indented block), the result of every computation will have requires_grad=False even if the inputs have requires_grad=True.

Calling the numpy() method on a tensor that is attached to a computation graph is not allowed.

We first have to make sure that the tensor is detached before calling numpy().
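For example, in this minimal sketch the result of a multiplication performed inside the block doesn't require a gradient and can be converted directly, while the tensor created outside the block still needs detach():

```python
import torch

x = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

with torch.no_grad():
    y = x * 2  # computed inside the block, so it doesn't track gradients

print(y.requires_grad)  # 👉️ False
print(y.numpy())  # 👉️ [2. 4. 6.]

# x was created with requires_grad=True, so it still needs detach()
print(x.detach().numpy())  # 👉️ [1. 2. 3.]
```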

# Getting the error when drawing a scatter plot in matplotlib

If you got the error when drawing a scatter plot in matplotlib, try using the torch.no_grad() context manager as we did in the previous subheading.

main.py
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)

with torch.no_grad():
    # 👉️ YOUR CODE THAT CAUSES THE ERROR HERE
    pass

Make sure to add your code to the indented block inside the no_grad() context manager.

The context manager disables gradient calculation, which should resolve the error as long as your code is indented inside the with torch.no_grad(): block.
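Here is a minimal sketch of a scatter plot; the x and y tensors are placeholders for your own data, and both are detached before being passed to matplotlib:

```python
import torch
import matplotlib.pyplot as plt

# 👇️ placeholder data; replace with your own tensors
x = torch.linspace(0, 1, 10, requires_grad=True)
y = x ** 2

with torch.no_grad():
    # detach the tensors before handing them to matplotlib
    plt.scatter(x.detach().numpy(), y.detach().numpy())
    plt.show()
```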

If the error persists, try adding an import statement for the fastai.basics module at the top of your file.

main.py
from fastai.basics import *

# 👇️ the rest of your code

The no_grad() context manager ensures that the results of computations performed inside the indented block have requires_grad set to False.

Copyright © 2024 Borislav Hadzhiev