How do I check whether Keras is using the GPU version of TensorFlow?

When I run a Keras script, I get the following output:

Using TensorFlow backend.
2017-06-14 17:40:44.621761: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use SSE4.1 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621783: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use SSE4.2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621788: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621791: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:40:44.621795: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use FMA instructions, but these are 
available 
on your machine and could speed up CPU computations.
2017-06-14 17:40:44.721911: I 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful 
NUMA node read from SysFS had negative value (-1), but there must be 
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:40:44.722288: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 
with properties: 
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 3.69GiB
2017-06-14 17:40:44.722302: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0 
2017-06-14 17:40:44.722307: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0:   Y 
2017-06-14 17:40:44.722312: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating 
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M, 
pci bus id: 0000:0a:00.0)

What does this mean? Am I using the GPU or the CPU version of TensorFlow?

Before installing Keras, I was using the GPU version of TensorFlow.

Also, sudo pip3 list shows tensorflow-gpu (1.1.0) and nothing like tensorflow-cpu.

Running the command mentioned in [this stackoverflow question] gives the following:

The TensorFlow library wasn't compiled to use SSE4.1 instructions, 
but these are available on your machine and could speed up CPU 
computations.
2017-06-14 17:53:31.424793: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use SSE4.2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424803: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424812: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use AVX2 instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.424820: W 
tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow 
library wasn't compiled to use FMA instructions, but these are 
available on your machine and could speed up CPU computations.
2017-06-14 17:53:31.540959: I 
tensorflow/stream_executor/cuda/cuda_gpu_executor.cc:901] successful 
NUMA node read from SysFS had negative value (-1), but there must be 
at least one NUMA node, so returning NUMA node zero
2017-06-14 17:53:31.541359: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:887] Found device 0 
with properties: 
name: GeForce GTX 850M
major: 5 minor: 0 memoryClockRate (GHz) 0.9015
pciBusID 0000:0a:00.0
Total memory: 3.95GiB
Free memory: 128.12MiB
2017-06-14 17:53:31.541407: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:908] DMA: 0 
2017-06-14 17:53:31.541420: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:918] 0:   Y 
2017-06-14 17:53:31.541441: I 
tensorflow/core/common_runtime/gpu/gpu_device.cc:977] Creating 
TensorFlow device (/gpu:0) -> (device: 0, name: GeForce GTX 850M, 
pci bus id: 0000:0a:00.0)
2017-06-14 17:53:31.547902: E 
tensorflow/stream_executor/cuda/cuda_driver.cc:893] failed to 
allocate 128.12M (134348800 bytes) from device: 
CUDA_ERROR_OUT_OF_MEMORY
Device mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce 
GTX 850M, pci bus id: 0000:0a:00.0
2017-06-14 17:53:31.549482: I 
tensorflow/core/common_runtime/direct_session.cc:257] Device 
mapping:
/job:localhost/replica:0/task:0/gpu:0 -> device: 0, name: GeForce 
GTX 850M, pci bus id: 0000:0a:00.0

Wilmar van Ommeren

You are using the GPU version. You can list the available TensorFlow devices (also check this question):

from tensorflow.python.client import device_lib
print(device_lib.list_local_devices()) # list of DeviceAttributes
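
If you just want a quick yes/no, a small sketch (reusing the same device_lib call) that filters that list for GPU entries:

from tensorflow.python.client import device_lib
# device_type and name are fields of each DeviceAttributes entry
gpus = [d.name for d in device_lib.list_local_devices() if d.device_type == 'GPU']
print(gpus)  # e.g. ['/device:GPU:0'] on a GPU build, [] otherwise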

EDIT:

With tensorflow >= 1.4 you can run the following function:

import tensorflow as tf
tf.test.is_gpu_available() # True/False

# Or only check for GPUs with CUDA support
tf.test.is_gpu_available(cuda_only=True)

EDIT 2:

The above function is deprecated in tensorflow > 2.1. Instead, you should use the following:

import tensorflow as tf
tf.config.list_physical_devices('GPU')

Notes:

In your case both the CPU and the GPU are available; if you were running the CPU version of TensorFlow, no GPU would be listed. You also don't need to set the TensorFlow device yourself (with tf.device("..")): TensorFlow will pick your GPU automatically (a short sketch of explicit placement follows at the end of this answer).

Also, your sudo pip3 list clearly shows that you are using tensorflow-gpu. If you had the CPU-only TensorFlow package, the name would be something like tensorflow (1.1.0).

Check this question for information about the warnings.
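
If you ever do want to pin an op to a specific device yourself, here is a minimal sketch of the explicit placement mentioned in the notes above (assuming TF 2.x with eager execution):

import tensorflow as tf

with tf.device('/GPU:0'):   # or '/CPU:0' to force the CPU
    a = tf.random.uniform((1000, 1000))
    b = tf.random.uniform((1000, 1000))
    c = tf.matmul(a, b)
print(c.device)             # e.g. '/job:localhost/replica:0/task:0/device:GPU:0'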


Paul Williams

A lot of things have to go right for Keras to use the GPU. Put this near the top of your Jupyter notebook:

# confirm TensorFlow sees the GPU
from tensorflow.python.client import device_lib
assert 'GPU' in str(device_lib.list_local_devices())

# confirm Keras sees the GPU (for TensorFlow 1.X + Keras)
from keras import backend
assert len(backend.tensorflow_backend._get_available_gpus()) > 0

# confirm PyTorch sees the GPU
from torch import cuda
assert cuda.is_available()
assert cuda.device_count() > 0
print(cuda.get_device_name(cuda.current_device()))

Note: with the release of TensorFlow 2.0, Keras is now included as part of the TF API.
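
For TensorFlow 2.x, where Keras ships as tf.keras, a lighter check is possible; this is only a sketch of one option, not the only way:

import tensorflow as tf

print(tf.test.is_built_with_cuda())            # True if this build was compiled with CUDA
print(tf.config.list_physical_devices('GPU'))  # non-empty list if a GPU is visible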


Ashok Kumar Jayaraman

To find out which devices your operations and tensors are assigned to, create the session with the log_device_placement configuration option set to True.

import tensorflow as tf

# Creates a graph.
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
# Creates a session with log_device_placement set to True.
sess = tf.Session(config=tf.ConfigProto(log_device_placement=True))
# Runs the op.
print(sess.run(c))

You should see the following output:

Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: Tesla K40c, pci bus
id: 0000:05:00.0
b: /job:localhost/replica:0/task:0/device:GPU:0
a: /job:localhost/replica:0/task:0/device:GPU:0
MatMul: /job:localhost/replica:0/task:0/device:GPU:0
[[ 22.  28.]
 [ 49.  64.]]

For more details, see the link Using GPU with tensorflow.
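
In TensorFlow 2.x there are no sessions; as a rough equivalent (a sketch, assuming TF 2.x) you can enable op-level placement logging with tf.debugging.set_log_device_placement:

import tensorflow as tf

tf.debugging.set_log_device_placement(True)  # log the device used for each op

a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
print(tf.matmul(a, b))  # the log line shows which device executed MatMul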

