When I run Visual Studio Code from an Ubuntu distro in WSL2, open a Python TensorFlow 2 Jupyter notebook, and execute “device_name = tf.test.gpu_device_name()”, I get the output shown below. I installed CUDA on the WSL2 Ubuntu distribution, but I was not sure whether I also needed to install cuDNN on WSL2.

20:04:21.536059: I tensorflow/core/platform/cpu_feature_:151] This TensorFlow binary is optimized with oneAPI Deep Neural Network Library (oneDNN) to use the following CPU instructions in performance-critical operations: AVX2 AVX512F FMA
To enable them in other operations, rebuild TensorFlow with the appropriate compiler flags.
20:04:22.369084: I tensorflow/stream_executor/cuda/cuda_gpu_:925] could not open file to read NUMA node: /sys/bus/pci/devices/0000:65:00.0/numa_node
Your kernel may have been built without NUMA support.
20:04:22.369609: I tensorflow/core/common_runtime/gpu/gpu_:1609] Could not identify NUMA node of platform GPU id 0, defaulting to 0.
Your kernel may not have been built with NUMA support.

Why is it using oneDNN instead of cuDNN, and why do I get “could not open file to read NUMA node”?
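For context on the NUMA message: TensorFlow tries to read the GPU's NUMA node from sysfs, and when that file is missing (as it typically is under WSL2) it logs the warning and defaults to node 0. A minimal sketch of that lookup, assuming the PCI bus ID from the log above; the helper name `read_numa_node` is mine, not a TensorFlow API:

```python
from pathlib import Path

def read_numa_node(pci_id: str) -> int:
    """Mimic TensorFlow's sysfs lookup of a GPU's NUMA node."""
    path = Path(f"/sys/bus/pci/devices/{pci_id}/numa_node")
    try:
        return int(path.read_text())
    except OSError:
        # File missing or unreadable: this is what triggers the
        # "could not open file to read NUMA node" log line, and
        # TensorFlow then defaults to NUMA node 0.
        return 0

print(read_numa_node("0000:65:00.0"))
```

Under WSL2 the sysfs entry is simply not exposed, so the message is harmless and does not mean the GPU is unusable.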