
NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0) to a numpy array

I am trying to pass two loss functions to a model, as Keras allows:

loss: String (name of objective function) or objective function or Loss instance. See losses. If the model has multiple outputs, you can use a different loss on each output by passing a dictionary or a list of losses. The loss value that will be minimized by the model will then be the sum of all individual losses.
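(For reference, a minimal sketch of what the docs describe; the model, layer names, and losses below are purely illustrative, not my actual code:)

import tensorflow as tf

# Illustrative two-output model compiled with one loss per output.
inputs = tf.keras.Input(shape=(16,))
out_a = tf.keras.layers.Dense(1, name="out_a")(inputs)
out_b = tf.keras.layers.Dense(1, name="out_b")(inputs)
model = tf.keras.Model(inputs, [out_a, out_b])

# One loss per output; the total loss minimized is their sum.
model.compile(optimizer="adam", loss=["mse", "mae"])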

The two loss functions:

def l_2nd(beta):
    def loss_2nd(y_true, y_pred):
        ...
        return K.mean(t)

    return loss_2nd

and

def l_1st(alpha):
    def loss_1st(y_true, y_pred):
        ...
        return alpha * 2 * tf.linalg.trace(tf.matmul(tf.matmul(Y, L, transpose_a=True), Y)) / batch_size

    return loss_1st

Then I build the model:

l2 = K.eval(l_2nd(self.beta))
l1 = K.eval(l_1st(self.alpha))
self.model.compile(opt, [l2, l1])

When I train, it produces the error:

1.15.0-rc3 WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630:
calling BaseResourceVariable.__init__ (from
tensorflow.python.ops.resource_variable_ops) with constraint is
deprecated and will be removed in a future version. Instructions for
updating: If using Keras pass *_constraint arguments to layers.
--------------------------------------------------------------------------- 
NotImplementedError                       Traceback (most recent call
last) <ipython-input-20-298384dd95ab> in <module>()
     47                          create_using=nx.DiGraph(), nodetype=None, data=[('weight', int)])
     48 
---> 49     model = SDNE(G, hidden_size=[256, 128],)
     50     model.train(batch_size=100, epochs=40, verbose=2)
     51     embeddings = model.get_embeddings()

10 frames <ipython-input-19-df29e9865105> in __init__(self, graph,
hidden_size, alpha, beta, nu1, nu2)
     72         self.A, self.L = self._create_A_L(
     73             self.graph, self.node2idx)  # Adj Matrix,L Matrix
---> 74         self.reset_model()
     75         self.inputs = [self.A, self.L]
     76         self._embeddings = {}

<ipython-input-19-df29e9865105> in reset_model(self, opt)

---> 84         self.model.compile(opt, loss=[l2, l1])
     85         self.get_embeddings()
     86 

/usr/local/lib/python3.6/dist-packages/tensorflow_core/python/training/tracking/base.py
in _method_wrapper(self, *args, **kwargs)
    455     self._self_setattr_tracking = False  # pylint: disable=protected-access
    456     try:
--> 457       result = method(self, *args, **kwargs)
    458     finally:
    459       self._self_setattr_tracking = previous_value  # pylint: disable=protected-access

NotImplementedError: Cannot convert a symbolic Tensor (2nd_target:0)
to a numpy array.

Please help, thanks!

I am facing the same issue too, and it works perfectly when I disable eager execution.
@Siddhant did you find an alternative without having to disable eager execution? Disabling it seems to fix the issue, but I am no longer benefiting from the other functionalities of eager execution.

CGFoX

For me, the issue occurred when upgrading from numpy 1.19 to 1.20 and using ray's RLlib, which uses tensorflow 2.2 internally. Simply downgrading with

pip install numpy==1.19.5

solved the problem; the error did not occur anymore.

Update (comment by @codeananda): You can also update to a newer TensorFlow (2.6+) version now that resolves the problem (pip install -U tensorflow).
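A quick way to check which combination you are actually running (assuming both packages import cleanly):

import numpy as np
import tensorflow as tf

# Print the installed versions to see whether you are on the
# incompatible numpy 1.20 / older TensorFlow pairing discussed above.
print("numpy:", np.__version__)
print("tensorflow:", tf.__version__)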


This was my problem too and the solution worked, thanks
Why is the lower one the accepted answer? It just says: don't use numpy. But when you're dependent on it, that solution is impossible to implement. This here is the only right answer.
What I'd like to know is whether there is a workaround. I don't want to roll back my numpy.
This is a red herring. The correct answer should address the root cause, which the solution below does perfectly; we all learn something from it.
There seems to be a compatibility issue between tensorflow and numpy 1.20. Numpy 1.20 is not officially supported by tensorflow anyway, so the right solution for now is to downgrade to numpy 1.19 until some future tensorflow release implements compatibility with numpy 1.20.
T D Nguyen

I found the solution to this problem:

It was because I mixed symbolic tensors with a non-symbolic type, such as a NumPy array. For example, it is NOT recommended to have something like this:

def my_mse_loss_b(b):
    def mseb(y_true, y_pred):
        ...
        a = np.ones_like(y_true)  # a numpy array here is not recommended
        return K.mean(K.square(y_pred - y_true)) + a
    return mseb

Instead, you should convert everything to symbolic tensors, like this:

def my_mse_loss_b(b):
    def mseb(y_true, y_pred):
        ...
        a = K.ones_like(y_true)  # use the Keras backend instead so everything stays symbolic
        return K.mean(K.square(y_pred - y_true)) + a
    return mseb
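A hedged usage sketch of how such a closure is then handed to compile (the optimizer and the value 0.5 are illustrative):

# Illustrative only: build the loss closure and pass it to compile directly,
# without wrapping it in K.eval.
model.compile(optimizer='adam', loss=my_mse_loss_b(0.5))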

Hope this helps!


But I'm not using numpy anywhere in my calculations. github.com/siddhantkushwaha/east_1x/blob/east_tf2.0/losses.py
I mean, try to ensure that all types are symbolic, as shown in the example, especially the parameters of your loss functions. It worked in my case.
I have the same problem. tf.zeros internally uses numpy, so I cannot avoid it.
fidel morris omolo

I faced the same error when I tried passing my input layer to the data augmentation Sequential layer. The error and my code are shown below.
Error:
NotImplementedError: Cannot convert a symbolic Tensor (data_augmentation/random_rotation_5/rotation_matrix/strided_slice:0) to a numpy array. This error may indicate that you're trying to pass a Tensor to a NumPy call, which is not supported.

My code that generated the error:


# Create a data augmentation layer using the Sequential model: horizontal flipping, rotation, zoom, etc.
data_augmentation = Sequential([
    preprocessing.RandomFlip("horizontal"),
    preprocessing.RandomRotation(0.2),
    preprocessing.RandomZoom(0.2),
    preprocessing.RandomHeight(0.2),
    preprocessing.RandomWidth(0.2)
   # preprocessing.Rescale()
], name="data_augmentation")

# Setting up the input_shape and base model, and freezing the underlying base model layers.
input_shape = (224,224,3)
base_model = tf.keras.applications.EfficientNetB0(include_top=False)
base_model.trainable=False

#Create the input layers
inputs = tf.keras.Input(shape=input_shape, name="input_layer")

#Add in data augmentation Sequential model as a layer
x = data_augmentation(inputs) #This is the line of code that generated the error.

My solutions to the generated error:

Solution 1: I was running on a lower TensorFlow version, 2.4.0, so I uninstalled it and reinstalled it to get the higher version 2.6.0. The newer TensorFlow version automatically uninstalls and reinstalls a compatible numpy version (1.19.5) if numpy is already installed on your local machine, which solves the bug. Enter the commands below in the terminal of your current conda environment:

pip uninstall tensorflow
pip install tensorflow

Solution 2: It's the simplest of all the suggested solutions, I guess. Run your code in Google Colab instead of your local machine; Colab always has recent packages preinstalled.


Very helpful to note that upgrading to a more recent version of TF (2.6+) automatically installs the right numpy version to avoid this issue.
Alper

I tried to add a SimpleRNN layer to my model and I received a similar error (NotImplementedError: Cannot convert a symbolic Tensor (SimpleRNN-1/strided_slice:0) to a numpy array) with Python 3.9.5.

When I created another environment with Python 3.8.10 and all the other modules I needed, the issue was solved.
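For reference, a minimal sketch of the kind of model described above (the layer sizes, input shape, and the rest of the model are illustrative assumptions, not the original code):

import tensorflow as tf

# Illustrative reproduction: building and compiling a small SimpleRNN model is
# enough to hit the error on an incompatible Python/TensorFlow/numpy combination.
model = tf.keras.Sequential([
    tf.keras.layers.SimpleRNN(32, input_shape=(10, 8)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")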


Tarik

As others have indicated, this is due to an incompatibility between specific tensorflow versions and specific numpy versions.

The following is my specific environment and the list of packages I have installed:

conda version 4.11.0

Commands to setup working environment:

conda activate base
conda create -y --name myenv python=3.9
conda activate myenv
conda install -y tensorflow=2.4
conda install -y numpy=1.19.2
conda install -y keras
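Once the environment is set up, a short hedged smoke test (the tiny model and random data are purely illustrative) can confirm the symbolic-tensor error is gone:

import numpy as np
import tensorflow as tf

print("numpy:", np.__version__)        # expecting 1.19.x
print("tensorflow:", tf.__version__)   # expecting 2.4.x

# Compile and fit a tiny model on random data; if this runs, the
# NotImplementedError from the numpy/TF mismatch is not present.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
model.compile(optimizer="adam", loss="mse")
model.fit(np.random.rand(8, 4), np.random.rand(8, 1), epochs=1, verbose=0)
print("OK")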

System Information

System:    Kernel: 5.4.0-100-generic x86_64 bits: 64 compiler: gcc v: 9.3.0 
           Desktop: Cinnamon 5.2.7 wm: muffin dm: LightDM Distro: Linux Mint 20.3 Una 
           base: Ubuntu 20.04 focal 
Machine:   Type: Laptop System: LENOVO product: 20308 v: Lenovo Ideapad Flex 14 serial: <filter> 
           Chassis: type: 10 v: Lenovo Ideapad Flex 14 serial: <filter> 
           Mobo: LENOVO model: Strawberry 4A v: 31900059Std serial: <filter> UEFI: LENOVO 
           v: 8ACN30WW date: 12/06/2013 
CPU:       Topology: Dual Core model: Intel Core i5-4200U bits: 64 type: MT MCP arch: Haswell 
           rev: 1 L2 cache: 3072 KiB 
           flags: avx avx2 lm nx pae sse sse2 sse3 sse4_1 sse4_2 ssse3 vmx bogomips: 18357 
           Speed: 798 MHz min/max: 800/2600 MHz Core speeds (MHz): 1: 798 2: 798 3: 798 4: 799 
Graphics:  Device-1: Intel Haswell-ULT Integrated Graphics vendor: Lenovo driver: i915 v: kernel 
           bus ID: 00:02.0 chip ID: 8086:0a16 
           Display: x11 server: X.Org 1.20.13 driver: modesetting unloaded: fbdev,vesa 
           resolution: 1366x768~60Hz 
           OpenGL: renderer: Mesa DRI Intel HD Graphics 4400 (HSW GT2) v: 4.5 Mesa 21.2.6 
           compat-v: 3.0 direct render: Yes 
 

Singh

I ran into this issue while converting darknet weights to a TensorFlow model. I got rid of it when I created a new environment with TensorFlow v2.3 (earlier it was TensorFlow v2.2), where a compatible NumPy comes preinstalled.

So maybe updating your TF version might solve this problem.


gojira

I had the same problem and resolved it.

To find the root cause, I created a new Anaconda environment with Python 3.8 and conda-installed TensorFlow (which installs 2.4).

When I ran the keras LSTM code, it bugged out on

rnnmodel.add(LSTM(128, dropout=0.2, recurrent_dropout=0.2))

Fixed it by installing the latest TensorFlow, 2.8:

pip uninstall tensorflow
pip install tensorflow
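For context, a minimal sketch of the kind of LSTM model involved (the embedding and output layers are illustrative assumptions, not the original code):

import tensorflow as tf

# Illustrative only: an LSTM with dropout/recurrent_dropout like the line above,
# wrapped in a small Sequential model so it can be built and compiled end to end.
rnnmodel = tf.keras.Sequential([
    tf.keras.layers.Embedding(input_dim=10000, output_dim=64),
    tf.keras.layers.LSTM(128, dropout=0.2, recurrent_dropout=0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
rnnmodel.compile(optimizer="adam", loss="binary_crossentropy")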

philsavor

The following config works for me.

python=3.8

tensorflow=2.8.0


Avateer

Go to the Anaconda Navigator, find your installed package (numpy), click the green check mark to the left of the package, choose "mark for specific version installation", select the version, and apply.


harder to do from the command prompt
