
Before CUDA can be installed, there are a few steps you need to take; otherwise you will get an error telling you that an X server is running, and the installer will refuse to continue.

Once you stop the X server, you'll see a black screen and you need to get to a virtual terminal. In Ubuntu, press Ctrl + Alt + F1 to reach it, then log in to gain access to the system. (Ctrl + Alt + F7 takes you back out of the virtual terminal.)

Now change directory into the location where you downloaded CUDA and run the installer. Once it has completed, restart the X server.

$ sudo service lightdm start

CUDA Pathnames

Append the relevant CUDA pathnames to the PATH and LD_LIBRARY_PATH environment variables. The toolkit's default install location is /usr/local/cuda.

Update Bash File

Insert the lines below into your bash file:

$ export PATH=/usr/local/cuda-8.0/bin:$PATH
$ export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH

Otherwise, when you run TensorFlow, you will hit an import error. You can check that the compiler is visible:

$ nvcc -V

Verify that CUDA GPU jobs run by compiling the samples and executing deviceQuery. Change directory to the CUDA samples and build:

$ cd /usr/local/cuda/samples/1_Utilities/deviceQuery
$ make

During the CUDA installation, if you skipped installing the samples, or you can't find them at all, you can still get them from GitHub.

Install cuDNN

The installation of cuDNN is just copying some files. First, download cuDNN (you'll need to register for the Accelerated Computing Developer Program). For CUDA® Toolkit 8.0, you need cuDNN v5.1.

Next, uncompress cuDNN and copy it to the toolkit directory. The toolkit's default install location is /usr/local/cuda; use which nvcc to check where your CUDA installation is. Copy the cuDNN files under CUDA as follows:

$ sudo cp include/cudnn.h /usr/local/cuda/include
$ sudo cp lib64/libcudnn* /usr/local/cuda/lib64
$ sudo chmod a+r /usr/local/cuda/include/cudnn.h /usr/local/cuda/lib64/libcudnn*
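After copying the cuDNN files, it can be worth confirming which version actually landed in the toolkit directory. One common way is to read the version macros out of the copied header. The snippet below is a self-contained sketch: it greps a stand-in header created in /tmp so it can run anywhere; on a real install you would point grep at /usr/local/cuda/include/cudnn.h instead.

```shell
# Sketch: read the cuDNN version macros from the header.
# HEADER is a stand-in file so the snippet runs anywhere;
# replace it with /usr/local/cuda/include/cudnn.h on a real system.
HEADER=/tmp/cudnn_stub.h
cat > "$HEADER" <<'EOF'
#define CUDNN_MAJOR      5
#define CUDNN_MINOR      1
#define CUDNN_PATCHLEVEL 10
EOF

# Print the major/minor/patch defines, e.g. "#define CUDNN_MAJOR      5"
grep -E '#define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' "$HEADER"
```

If the major/minor numbers printed don't match the cuDNN release you downloaded (v5.1 here), the copy step went to the wrong CUDA directory.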

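One last note on the PATH and LD_LIBRARY_PATH exports earlier in this section: typed at a prompt, they only last for the current shell session. A minimal sketch of persisting them is to append the same lines to a bash startup file. The snippet writes to a throwaway temporary file so it is safe to run as-is; on a real system the target would be ~/.bashrc, followed by `source ~/.bashrc`.

```shell
# Sketch: persist the CUDA environment variables across sessions.
# BASHRC points at a throwaway file here; substitute ~/.bashrc on a real install.
BASHRC=$(mktemp)

echo 'export PATH=/usr/local/cuda-8.0/bin:$PATH' >> "$BASHRC"
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-8.0/lib64:$LD_LIBRARY_PATH' >> "$BASHRC"

# Show what was appended
grep 'cuda-8.0' "$BASHRC"
```

Adjust cuda-8.0 in both lines if your toolkit is installed under a different versioned directory.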