
Can I run Keras model on gpu?

I'm running a Keras model with a submission deadline of 36 hours. If I train my model on the CPU it will take approximately 50 hours. Is there a way to run Keras on a GPU?

I'm using the TensorFlow backend and running it in my Jupyter notebook, without Anaconda installed.

I found this: medium.com/@kegui/… It feels like one could peruse highly rated questions in a narrow field here, and then make a full "answer" on Medium, and make actual money from views.
For AMD GPUs, see this post: stackoverflow.com/a/60016869/6117565

Vikash Singh

Yes, you can run Keras models on a GPU. A few things you will have to check first:

1. Your system has a GPU (NVIDIA, as AMD doesn't work yet)
2. You have installed the GPU version of TensorFlow
3. You have installed CUDA (installation instructions)
4. Verify that TensorFlow is running with the GPU (check if the GPU is working)

sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))

For TF >= 2.0:

sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(log_device_placement=True))

(Thanks @nbro and @Ferro for pointing this out in the comments)

OR

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())

The output will be something like this:

[
  name: "/cpu:0" device_type: "CPU",
  name: "/gpu:0" device_type: "GPU"
]

Once all this is done, your model will run on the GPU.

To check if Keras (>= 2.1.1) is using the GPU:

from keras import backend as K
K.tensorflow_backend._get_available_gpus()
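As an illustration only (not part of the original answer), here is a minimal sketch of training a small Keras model with its ops pinned to the first GPU; the model, layer sizes, and random data are placeholders:

import numpy as np
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Dense

with tf.device('/gpu:0'):  # optional: TensorFlow picks the GPU by default when one is available
    model = Sequential([Dense(32, activation='relu', input_shape=(10,)),
                        Dense(1)])
    model.compile(optimizer='adam', loss='mse')
    model.fit(np.random.rand(256, 10), np.random.rand(256, 1), epochs=1, batch_size=32)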

All the best.


I will have to install Python 3.5 for this, right? Else TensorFlow will not work?
Not necessary. TF works with both Python 2.7 and 3.5. Just choose the correct version of TF, that's it.
Alright, I'll go with 2.7; I'm having issues installing 3.5.
I get this Error -Could not find any downloads that satisfy the requirement tensorflow in /usr/local/lib/python2.7/dist-packages Downloading/unpacking tensorflow Cleaning up... No distributions at all found for tensorflow in /usr/local/lib/python2.7/dist-packages Storing debug log for failure in /home/hyperworks/.pip/pip.log
K.tensorflow_backend._get_available_gpus() does not work in TensorFlow 2.0.
Tensorflow Support

TensorFlow 2.0 Compatible Answer: While the above-mentioned answer explains in detail how to use a GPU with a Keras model, I want to explain how it can be done for TensorFlow version 2.0.

To know how many GPUs are available, we can use the below code:

print("Num GPUs Available: ", len(tf.config.experimental.list_physical_devices('GPU')))

To find out which devices your operations and tensors are assigned to, put tf.debugging.set_log_device_placement(True) as the first statement of your program.

Enabling device placement logging causes any Tensor allocations or operations to be printed. For example, running the below code:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)

# Create some tensors
a = tf.constant([[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]])
b = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
c = tf.matmul(a, b)

print(c)

gives the output shown below:

Executing op MatMul in device /job:localhost/replica:0/task:0/device:GPU:0
tf.Tensor(
[[22. 28.]
 [49. 64.]], shape=(2, 2), dtype=float32)

For more information, refer to this link.
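As a related sketch (assuming TF 2.x and at least one visible GPU; the constants are placeholders), device placement can also be forced explicitly with tf.device:

import tensorflow as tf

if tf.config.list_physical_devices('GPU'):
    with tf.device('/GPU:0'):  # pin these ops to the first GPU
        a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
        b = tf.matmul(a, a)
    print(b)
else:
    print("No GPU visible; ops will run on the CPU.")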


There are now XLA_GPU devices that are not shown if I list only 'GPU'. Maybe that is also the reason Keras doesn't seem to see my GPU.
Allan Karlson

Sure. I suppose that you have already installed TensorFlow for GPU.

You need to add the following block after importing Keras. I am working on a machine which has a 56-core CPU and a GPU.

import keras
import tensorflow as tf

# Make all 56 CPU cores and the GPU visible to the Keras/TensorFlow session
config = tf.ConfigProto(device_count={'GPU': 1, 'CPU': 56})
sess = tf.Session(config=config)
keras.backend.set_session(sess)

Of course, this usage enforces my machine's maximum limits. You can decrease the CPU and GPU consumption values.
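For illustration, here is a hedged sketch (TF 1.x API, not the answer's exact setup) of dialing resource usage down instead of claiming everything; the fractions and core count are placeholders to adjust:

import keras
import tensorflow as tf

config = tf.ConfigProto(device_count={'GPU': 1, 'CPU': 4})   # expose fewer CPU cores
config.gpu_options.per_process_gpu_memory_fraction = 0.5     # use at most ~50% of GPU memory
config.gpu_options.allow_growth = True                       # allocate GPU memory as needed
sess = tf.Session(config=config)
keras.backend.set_session(sess)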


Error: module 'tensorflow' has no attribute 'ConfigProto'
Are you using TensorFlow 2? I tested it for TF 1.x.
This is the only answer which actually points out that running Keras on a GPU requires installing a whole other stack of software, starting from the NVIDIA driver to the '-gpu' build of Keras itself, plus minding proper cuDNN and CUDA installation and linking.
Navin Kumar

Of course. If you are running on the TensorFlow or CNTK backends, your code will run on your GPU devices by default. But if you use the Theano backend, you can use the following

Theano flags:

THEANO_FLAGS=device=gpu,floatX=float32 python my_keras_script.py
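For illustration (a sketch, not part of the original answer), the same flags can also be set from inside the script, as long as this happens before Theano, and therefore Keras, is imported:

import os

# Must be set before Theano (and therefore Keras) is imported
os.environ["THEANO_FLAGS"] = "device=gpu,floatX=float32"

import keras  # Keras now initializes the Theano backend with the GPU settings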


MonkeyBack

I'm using Anaconda on Windows 10, with a GTX 1660 Super. I first installed the CUDA environment following this step-by-step guide. However, there is now a keras-gpu metapackage available on Anaconda which apparently doesn't require installing the CUDA and cuDNN libraries beforehand (mine were already installed anyway).

This is what worked for me to create a dedicated environment named keras_gpu:

# need to downgrade from tensorflow 2.1 for my particular setup
conda create --name keras_gpu keras-gpu=2.3.1 tensorflow-gpu=2.0

To add to @johncasey's answer, but for TensorFlow 2.0, adding this block works for me:

import tensorflow as tf
from tensorflow.python.keras import backend as K

# adjust values to your needs
config = tf.compat.v1.ConfigProto(device_count={'GPU': 1, 'CPU': 8})
sess = tf.compat.v1.Session(config=config)
K.set_session(sess)

This post solved the set_session error I got: you need to use the keras backend from the tensorflow path instead of keras itself.


BSalita

Using Tensorflow 2.5, building on @MonkeyBack's answer:

conda create --name keras_gpu keras-gpu tensorflow-gpu

# should show GPU is available
python -c "import tensorflow as tf;print('GPUs Available:', tf.config.list_physical_devices('GPU'))"

Tae-Sung Shin

Check whether your script is using the GPU in Task Manager. If not, suspect that your CUDA version is not the right one for the TensorFlow version you are using, as the other answers have already suggested.

Additionally, a proper cuDNN library matching the CUDA version is required to run TensorFlow on the GPU. Download/extract it from here and put the DLL (e.g., cudnn64_7.dll) into the CUDA bin folder (e.g., C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v10.1\bin).
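As a quick cross-check (a sketch assuming TF >= 2.3, where this API is available), you can print the CUDA and cuDNN versions your TensorFlow build expects and compare them with what is installed:

import tensorflow as tf

build = tf.sysconfig.get_build_info()  # available in TF 2.3+
print("CUDA version TF was built against: ", build.get("cuda_version"))
print("cuDNN version TF was built against:", build.get("cudnn_version"))
print("GPUs visible to TensorFlow:        ", tf.config.list_physical_devices("GPU"))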