Table of Contents
  1. Components & Versions
  2. Installation
    1. Install development dependencies
    2. Install CUDA 8.0
    3. Install cuDNN
    4. Set CUDA environment variables
    5. Build the CUDA samples
    6. Install Intel MKL, OpenBLAS, or ATLAS
    7. Install OpenCV
    8. Install Python dependencies
    9. Build Caffe
  3. Swapping the cuDNN and Caffe versions
    1. Why swap
    2. Working out the dependencies
    3. Reinstallation
    4. Updated component versions
  4. A few takeaways

Installing Caffe on Ubuntu 14.04

Components & Versions

  • Ubuntu

    • 14.04
  • CUDA

    • 8.0
  • cuDNN
    • cudnn-7.0-linux-x64-v3.0-prod.tgz
  • opencv
    • 3.2
  • python
    • Python 2.7.6
  • caffe
    • release candidate 3

Installation

Install development dependencies

sudo apt-get install build-essential  
sudo apt-get install libprotobuf-dev libleveldb-dev libsnappy-dev libopencv-dev libboost-all-dev libhdf5-serial-dev libgflags-dev libgoogle-glog-dev liblmdb-dev protobuf-compiler

Install CUDA 8.0

  • Verify the GPU supports CUDA

    lspci | grep -i nvidia

    Output

    02:00.0 VGA compatible controller: NVIDIA Corporation Device 1b06 (rev a1)
    02:00.1 Audio device: NVIDIA Corporation Device 10ef (rev a1)
    03:00.0 VGA compatible controller: NVIDIA Corporation Device 1b06 (rev a1)
    03:00.1 Audio device: NVIDIA Corporation Device 10ef (rev a1)
    82:00.0 VGA compatible controller: NVIDIA Corporation Device 1b06 (rev a1)
    82:00.1 Audio device: NVIDIA Corporation Device 10ef (rev a1)
    83:00.0 VGA compatible controller: NVIDIA Corporation Device 1b06 (rev a1)
    83:00.1 Audio device: NVIDIA Corporation Device 10ef (rev a1)

    Checked against http://developer.nvidia.com/cuda-gpus ; these GPUs are CUDA-capable

  • Verify the Linux version supports CUDA

    uname -m && cat /etc/*release

    Output

    x86_64
    DISTRIB_ID=Ubuntu
    DISTRIB_RELEASE=14.04
    DISTRIB_CODENAME=trusty
    DISTRIB_DESCRIPTION="Ubuntu 14.04.5 LTS"
    NAME="Ubuntu"
    VERSION="14.04.5 LTS, Trusty Tahr"
    ID=ubuntu
    ID_LIKE=debian
    PRETTY_NAME="Ubuntu 14.04.5 LTS"
    VERSION_ID="14.04"
    HOME_URL="http://www.ubuntu.com/"
    SUPPORT_URL="http://help.ubuntu.com/"
    BUG_REPORT_URL="http://bugs.launchpad.net/ubuntu/"
  • Verify gcc is installed

    gcc --version

    Output

    gcc (Ubuntu 4.8.5-2ubuntu1~14.04.1) 4.8.5
    Copyright (C) 2015 Free Software Foundation, Inc.
    This is free software; see the source for copying conditions. There is NO
    warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
  • Verify the correct kernel headers and development packages are installed

    • Check the running kernel version

      uname -r

      Output

      3.13.0-142-generic
    • Install the matching kernel headers and development packages

      sudo apt-get install linux-headers-$(uname -r)
  • Install CUDA

  • Verify the md5 checksum of the downloaded .deb

    md5sum cuda-repo-ubuntu1404_8.0.61-1_amd64.deb
  • If it matches the published checksum, install from the .deb

    sudo dpkg -i cuda-repo-ubuntu1404_8.0.61-1_amd64.deb
    sudo apt-get update
    sudo apt-get install cuda
  • Reboot to finish the CUDA installation
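The checksum comparison can be done in one step with `md5sum -c`, which avoids eyeballing two long hex strings. A minimal sketch with a throwaway file (for the real .deb, paste the checksum published on NVIDIA's download page instead of computing it locally):

```shell
# Verify a file against an expected checksum in one step.
# /tmp/pkg.deb stands in for cuda-repo-ubuntu1404_8.0.61-1_amd64.deb.
printf 'payload\n' > /tmp/pkg.deb
sum=$(md5sum /tmp/pkg.deb | cut -d' ' -f1)   # in practice: the published sum
echo "$sum  /tmp/pkg.deb" | md5sum -c -      # prints "/tmp/pkg.deb: OK"
```

`md5sum -c` exits non-zero on a mismatch, so it also works in scripts.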

Install cuDNN

  • Download cuDNN from https://developer.nvidia.com/rdp/cudnn-download ; the version used here is cudnn-7.0-linux-x64-v3.0-prod.tgz

  • Install

    tar -zxvf cudnn-7.0-linux-x64-v3.0-prod.tgz
    cd cuda
    sudo cp lib64/* /usr/local/cuda/lib64/
    sudo cp include/cudnn.h /usr/local/cuda/include/
  • Update the symlinks

    cd /usr/local/cuda/lib64
    sudo rm -rf libcudnn.so libcudnn.so.7.0
    sudo ln -s libcudnn.so.7.0.64 libcudnn.so.7.0
    sudo ln -s libcudnn.so.7.0 libcudnn.so
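The `rm`/`ln -s` sequence builds a two-hop chain, `libcudnn.so -> libcudnn.so.7.0 -> libcudnn.so.7.0.64`, so the generic name the linker looks for always resolves to the real library. The chain can be sketched in a scratch directory, no CUDA needed (file names here just mirror the real install):

```shell
# Rebuild the libcudnn symlink chain in a scratch directory and
# check that the generic name resolves through to the real file.
tmp=$(mktemp -d)
cd "$tmp"
touch libcudnn.so.7.0.64                  # stands in for the actual shared object
ln -s libcudnn.so.7.0.64 libcudnn.so.7.0  # version link -> real file
ln -s libcudnn.so.7.0 libcudnn.so         # generic link -> version link
readlink -f libcudnn.so                   # resolves through the whole chain
```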

Set CUDA environment variables

  • VERSION1

    • Add the CUDA environment variables to /etc/profile

    • Run

      sudo vim /etc/profile

      Add the following two lines to the file

      export PATH=/usr/local/cuda/bin:$PATH
      export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
    • Save, then apply the changes immediately by running

      source /etc/profile
  • VERSION2

    • Add the CUDA environment variables to ~/.bashrc

    • Run (no sudo needed for your own ~/.bashrc)

      vim ~/.bashrc

      Add the following two lines to the file

      export PATH=/usr/local/cuda/bin:$PATH
      export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
    • Save, then apply the changes immediately by running

      source ~/.bashrc
  • I used VERSION2
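Both versions do the same thing: they prepend the CUDA directories so they are searched before the system defaults. A quick sketch showing the effect of the prepend (same values as in the file above):

```shell
# Prepending puts the CUDA paths at the front of the search order.
PATH=/usr/local/cuda/bin:$PATH
LD_LIBRARY_PATH=/usr/local/cuda/lib64:${LD_LIBRARY_PATH:-}
echo "$PATH" | cut -d: -f1             # first entry searched: /usr/local/cuda/bin
echo "$LD_LIBRARY_PATH" | cut -d: -f1  # first entry searched: /usr/local/cuda/lib64
```

The difference is only scope: /etc/profile applies to all users at login, ~/.bashrc to your own interactive shells.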

Build the CUDA samples

  • Enter /usr/local/cuda/samples

    cd /usr/local/cuda/samples
  • Build the samples with

    sudo make all -j4
  • Once everything compiles, enter samples/bin/x86_64/linux/release and run deviceQuery

    cd /usr/local/cuda/samples/bin/x86_64/linux/release
    ./deviceQuery

    If the GPU details are printed, the driver and card are installed correctly. The output:

    ./deviceQuery Starting...

    CUDA Device Query (Runtime API) version (CUDART static linking)

    Detected 4 CUDA Capable device(s)

    Device 0: "GeForce GTX 1080 Ti"
    .
    .
    .
    Device 3: "GeForce GTX 1080 Ti"
    CUDA Driver Version / Runtime Version 9.0 / 8.0
    CUDA Capability Major/Minor version number: 6.1
    Total amount of global memory: 11172 MBytes (11715084288 bytes)
    (28) Multiprocessors, (128) CUDA Cores/MP: 3584 CUDA Cores
    GPU Max Clock rate: 1582 MHz (1.58 GHz)
    Memory Clock rate: 5505 Mhz
    Memory Bus Width: 352-bit
    L2 Cache Size: 2883584 bytes
    Maximum Texture Dimension Size (x,y,z) 1D=(131072), 2D=(131072, 65536), 3D=(16384, 16384, 16384)
    Maximum Layered 1D Texture Size, (num) layers 1D=(32768), 2048 layers
    Maximum Layered 2D Texture Size, (num) layers 2D=(32768, 32768), 2048 layers
    Total amount of constant memory: 65536 bytes
    Total amount of shared memory per block: 49152 bytes
    Total number of registers available per block: 65536
    Warp size: 32
    Maximum number of threads per multiprocessor: 2048
    Maximum number of threads per block: 1024
    Max dimension size of a thread block (x,y,z): (1024, 1024, 64)
    Max dimension size of a grid size (x,y,z): (2147483647, 65535, 65535)
    Maximum memory pitch: 2147483647 bytes
    Texture alignment: 512 bytes
    Concurrent copy and kernel execution: Yes with 2 copy engine(s)
    Run time limit on kernels: No
    Integrated GPU sharing Host Memory: No
    Support host page-locked memory mapping: Yes
    Alignment requirement for Surfaces: Yes
    Device has ECC support: Disabled
    Device supports Unified Addressing (UVA): Yes
    Device PCI Domain ID / Bus ID / location ID: 0 / 131 / 0
    Compute Mode:
    < Default (multiple host threads can use ::cudaSetDevice() with device simultaneously) >
    > Peer access from GeForce GTX 1080 Ti (GPU0) -> GeForce GTX 1080 Ti (GPU1) : Yes
    > Peer access from GeForce GTX 1080 Ti (GPU0) -> GeForce GTX 1080 Ti (GPU2) : No
    > Peer access from GeForce GTX 1080 Ti (GPU0) -> GeForce GTX 1080 Ti (GPU3) : No
    > Peer access from GeForce GTX 1080 Ti (GPU1) -> GeForce GTX 1080 Ti (GPU0) : Yes
    > Peer access from GeForce GTX 1080 Ti (GPU1) -> GeForce GTX 1080 Ti (GPU2) : No
    > Peer access from GeForce GTX 1080 Ti (GPU1) -> GeForce GTX 1080 Ti (GPU3) : No
    > Peer access from GeForce GTX 1080 Ti (GPU2) -> GeForce GTX 1080 Ti (GPU0) : No
    > Peer access from GeForce GTX 1080 Ti (GPU2) -> GeForce GTX 1080 Ti (GPU1) : No
    > Peer access from GeForce GTX 1080 Ti (GPU2) -> GeForce GTX 1080 Ti (GPU3) : Yes
    > Peer access from GeForce GTX 1080 Ti (GPU3) -> GeForce GTX 1080 Ti (GPU0) : No
    > Peer access from GeForce GTX 1080 Ti (GPU3) -> GeForce GTX 1080 Ti (GPU1) : No
    > Peer access from GeForce GTX 1080 Ti (GPU3) -> GeForce GTX 1080 Ti (GPU2) : Yes

    deviceQuery, CUDA Driver = CUDART, CUDA Driver Version = 9.0, CUDA Runtime Version = 8.0, NumDevs = 4, Device0 = GeForce GTX 1080 Ti, Device1 = GeForce GTX 1080 Ti, Device2 = GeForce GTX 1080 Ti, Device3 = GeForce GTX 1080 Ti
    Result = PASS

    Since this machine has 4 GPUs, several Device entries are listed.

    Installation successful!

Install Intel MKL, OpenBLAS, or ATLAS

I chose ATLAS, Caffe's default, which needs no extra configuration. Install it with

sudo apt-get install libatlas-base-dev

Install OpenCV

  • First install the required packages

    sudo apt-get install cmake git libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev
  • Download the latest release, opencv 3.2.0 (3.2.0.zip), from the OpenCV site

    Unpack it

    unzip 3.2.0.zip
  • Build

    cd  opencv-3.2.0
    mkdir release
    cd release
    cmake -D CMAKE_BUILD_TYPE=RELEASE -D CMAKE_INSTALL_PREFIX=/usr/local .. -DWITH_LAPACK=OFF
    make -j4
    sudo make install
  • Test

    My GPU server sits behind someone else's gateway, so images cannot be displayed and I skipped the test here.

    You should still test the installation; Google for how.

Install Python dependencies

  • Download the Caffe source, unpack it, and enter the python directory under caffe-master

    Make absolutely sure the Caffe version you download supports the cuDNN version you have installed

  • Install python-pip

    sudo apt-get install python-pip
  • Install the dependencies with

    for req in $(cat requirements.txt); do pip install $req; done
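The loop simply feeds each line of requirements.txt to pip one at a time, so one failing package does not abort the rest. A dry-run sketch against a throwaway file, with `echo` standing in for the real `pip install` (the two entries shown are illustrative):

```shell
# Echo what the loop would run: one pip command per requirement line.
cat > /tmp/requirements.txt <<'EOF'
numpy>=1.7.1
scipy>=0.13.2
EOF
for req in $(cat /tmp/requirements.txt); do
    echo pip install "$req"
done
```

Drop the `echo` (and point at the real requirements.txt) to perform the actual installs.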

Build Caffe

  • Enter the caffe-master directory and make a copy of Makefile.config.example

    cp Makefile.config.example Makefile.config
  • Edit Makefile.config

    Enable cuDNN and OpenCV 3 support

    ## Refer to http://caffe.berkeleyvision.org/installation.html
    # Contributions simplifying and improving our build system are welcome!

    # cuDNN acceleration switch (uncomment to build with cuDNN).
    USE_CUDNN := 1

    # CPU-only switch (uncomment to build without GPU support).
    # CPU_ONLY := 1

    # uncomment to disable IO dependencies and corresponding data layers
    # USE_OPENCV := 0
    # USE_LEVELDB := 0
    # USE_LMDB := 0

    # uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
    # You should not set this flag if you will be reading LMDBs with any
    # possibility of simultaneous read and write
    # ALLOW_LMDB_NOLOCK := 1

    # Uncomment if you're using OpenCV 3
    OPENCV_VERSION := 3

    # To customize your choice of compiler, uncomment and set the following.
    # N.B. the default for Linux is g++ and the default for OSX is clang++
    # CUSTOM_CXX := g++

    # CUDA directory contains bin/ and lib/ directories that we need.
    CUDA_DIR := /usr/local/cuda
    # On Ubuntu 14.04, if cuda tools are installed via
    # "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
    # CUDA_DIR := /usr

    # CUDA architecture setting: going with all of them.
    # For CUDA < 6.0, comment the *_50 lines for compatibility.
    CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
    -gencode arch=compute_20,code=sm_21 \
    -gencode arch=compute_30,code=sm_30 \
    -gencode arch=compute_35,code=sm_35 \
    -gencode arch=compute_50,code=sm_50 \
    -gencode arch=compute_50,code=compute_50

    # BLAS choice:
    # atlas for ATLAS (default)
    # mkl for MKL
    # open for OpenBlas
    BLAS := atlas
    # Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
    # Leave commented to accept the defaults for your choice of BLAS
    # (which should work)!
    # BLAS_INCLUDE := /path/to/your/blas
    # BLAS_LIB := /path/to/your/blas

    # Homebrew puts openblas in a directory that is not on the standard search path
    # BLAS_INCLUDE := $(shell brew --prefix openblas)/include
    # BLAS_LIB := $(shell brew --prefix openblas)/lib

    # This is required only if you will compile the matlab interface.
    # MATLAB directory should contain the mex binary in /bin.
    # MATLAB_DIR := /usr/local
    # MATLAB_DIR := /Applications/MATLAB_R2012b.app

    # NOTE: this is required only if you will compile the python interface.
    # We need to be able to find Python.h and numpy/arrayobject.h.
    PYTHON_INCLUDE := /usr/include/python2.7 \
    /usr/lib/python2.7/dist-packages/numpy/core/include
    # Anaconda Python distribution is quite popular. Include path:
    # Verify anaconda location, sometimes it's in root.
    # ANACONDA_HOME := $(HOME)/anaconda
    # PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
    # $(ANACONDA_HOME)/include/python2.7 \
    # $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include \

    # Uncomment to use Python 3 (default is Python 2)
    # PYTHON_LIBRARIES := boost_python3 python3.5m
    # PYTHON_INCLUDE := /usr/include/python3.5m \
    # /usr/lib/python3.5/dist-packages/numpy/core/include

    # We need to be able to find libpythonX.X.so or .dylib.
    PYTHON_LIB := /usr/lib
    # PYTHON_LIB := $(ANACONDA_HOME)/lib

    # Homebrew installs numpy in a non standard path (keg only)
    # PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
    # PYTHON_LIB += $(shell brew --prefix numpy)/lib

    # Uncomment to support layers written in Python (will link against Python libs)
    # WITH_PYTHON_LAYER := 1

    # Whatever else you find you need goes here.
    INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
    LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

    # If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
    # INCLUDE_DIRS += $(shell brew --prefix)/include
    # LIBRARY_DIRS += $(shell brew --prefix)/lib

    # Uncomment to use `pkg-config` to specify OpenCV library paths.
    # (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
    # USE_PKG_CONFIG := 1

    BUILD_DIR := build
    DISTRIBUTE_DIR := distribute

    # Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
    # DEBUG := 1

    # The ID of the GPU that 'make runtest' will use to run unit tests.
    TEST_GPUID := 0

    # enable pretty build (comment to see full commands)
    Q ?= @
  • Save, exit, and build

    make all -j4
    make test -j4
    make runtest -j4
  • Result

    [ FAILED ] 5 tests, listed below:
    [ FAILED ] CuDNNConvolutionLayerTest/0.TestSimpleConvolutionCuDNN, where TypeParam = float
    [ FAILED ] CuDNNConvolutionLayerTest/0.TestGradientCuDNN, where TypeParam = float
    [ FAILED ] CuDNNConvolutionLayerTest/0.TestGradientGroupCuDNN, where TypeParam = float
    [ FAILED ] CuDNNConvolutionLayerTest/0.TestSobelConvolutionCuDNN, where TypeParam = float
    [ FAILED ] CuDNNConvolutionLayerTest/0.TestSimpleConvolutionGroupCuDNN, where TypeParam = float

    These failures occur because we are using cuDNN v3, while these tests only work from cuDNN v4 on.

Swapping the cuDNN and Caffe versions

Why swap

Of course I didn't want to! But my boss wasn't quite happy. He never actually said he was unhappy; he just hit me with the classic three-phrase brush-off:

Whatever works

Up to you

Either is fine

...

Working out the dependencies

  • cuDNN is a GPU acceleration library designed specifically for deep-learning frameworks; the supported libraries include Caffe, ConvNet, and Torch7

    Installing it amounts to putting its lib files into a directory the system searches for libraries and its header into a directory the system searches for includes; here they go into the CUDA tree

  • CUDA is installed directly; OpenCV is built against CUDA; and installing cuDNN is just copying its lib files into a searchable lib directory and its header into a searchable include directory, here under the CUDA tree

  • So swapping cuDNN does not touch CUDA or OpenCV; only Caffe is affected, because Caffe links against cuDNN when it is built.

Reinstallation

  • cuDNN file layout

    tree -N    # run inside the unpacked cuda directory (/home/jedy/deeplearning/cuda)
    .
    ├── include
    │   └── cudnn.h
    └── lib64
        ├── libcudnn.so
        ├── libcudnn.so.5
        ├── libcudnn.so.5.0.5
        └── libcudnn_static.a

    2 directories, 5 files

    and the commands originally used to install cuDNN were

    tar -zxvf cudnn-7.0-linux-x64-v3.0-prod.tgz 
    cd cuda
    sudo cp lib64/* /usr/local/cuda/lib64/
    sudo cp include/cudnn.h /usr/local/cuda/include/

    # update the symlinks
    cd /usr/local/cuda/lib64
    sudo rm -rf libcudnn.so libcudnn.so.7.0
    sudo ln -s libcudnn.so.7.0.64 libcudnn.so.7.0
    sudo ln -s libcudnn.so.7.0 libcudnn.so
  • Reinstall cuDNN

    • Since installing cuDNN only copies files into place, we just need to download a newer cuDNN and install it over the top. I wanted to jump straight to the latest Caffe, but worried about poor cuDNN backward compatibility, so I downloaded v5: cudnn-8.0-linux-x64-v5.0-ga.tgz

    • Enter /usr/local/cuda/lib64 and delete the old cuDNN

      cd /usr/local/cuda/lib64
      sudo rm -rf libcudnn.so libcudnn.so.7.0 libcudnn.so.7.0.64
    • Unpack the new cuDNN and copy it in

      tar -zxvf cudnn-8.0-linux-x64-v5.0-ga.tgz
      cd cuda
      sudo cp lib64/* /usr/local/cuda/lib64/
      sudo cp include/cudnn.h /usr/local/cuda/include/

      The old include/cudnn.h is not deleted first; the new copy simply overwrites it

    • Update the symlinks

      cd /usr/local/cuda/lib64
      sudo rm -rf libcudnn.so libcudnn.so.5
      sudo ln -s libcudnn.so.5.0.5 libcudnn.so.5
      sudo ln -s libcudnn.so.5 libcudnn.so
  • Reinstall Caffe

    • In the Caffe root directory, run

      make clean
    • Delete the old Caffe

    • Download a new Caffe; I went straight to the 1.0 release

    • Enter the caffe-master directory and make a copy of Makefile.config.example

      cp Makefile.config.example Makefile.config
    • Edit Makefile.config to enable cuDNN and OpenCV 3 support. The edited file:

      ## Refer to http://caffe.berkeleyvision.org/installation.html
      # Contributions simplifying and improving our build system are welcome!

      # cuDNN acceleration switch (uncomment to build with cuDNN).
      USE_CUDNN := 1

      # CPU-only switch (uncomment to build without GPU support).
      # CPU_ONLY := 1

      # uncomment to disable IO dependencies and corresponding data layers
      # USE_OPENCV := 0
      # USE_LEVELDB := 0
      # USE_LMDB := 0

      # uncomment to allow MDB_NOLOCK when reading LMDB files (only if necessary)
      # You should not set this flag if you will be reading LMDBs with any
      # possibility of simultaneous read and write
      # ALLOW_LMDB_NOLOCK := 1

      # Uncomment if you're using OpenCV 3
      OPENCV_VERSION := 3

      # To customize your choice of compiler, uncomment and set the following.
      # N.B. the default for Linux is g++ and the default for OSX is clang++
      # CUSTOM_CXX := g++

      # CUDA directory contains bin/ and lib/ directories that we need.
      CUDA_DIR := /usr/local/cuda
      # On Ubuntu 14.04, if cuda tools are installed via
      # "sudo apt-get install nvidia-cuda-toolkit" then use this instead:
      # CUDA_DIR := /usr

      # CUDA architecture setting: going with all of them.
      # For CUDA < 6.0, comment the *_50 through *_61 lines for compatibility.
      # For CUDA < 8.0, comment the *_60 and *_61 lines for compatibility.
      CUDA_ARCH := -gencode arch=compute_20,code=sm_20 \
      -gencode arch=compute_20,code=sm_21 \
      -gencode arch=compute_30,code=sm_30 \
      -gencode arch=compute_35,code=sm_35 \
      -gencode arch=compute_50,code=sm_50 \
      -gencode arch=compute_52,code=sm_52 \
      -gencode arch=compute_60,code=sm_60 \
      -gencode arch=compute_61,code=sm_61 \
      -gencode arch=compute_61,code=compute_61

      # BLAS choice:
      # atlas for ATLAS (default)
      # mkl for MKL
      # open for OpenBlas
      BLAS := atlas
      # Custom (MKL/ATLAS/OpenBLAS) include and lib directories.
      # Leave commented to accept the defaults for your choice of BLAS
      # (which should work)!
      # BLAS_INCLUDE := /path/to/your/blas
      # BLAS_LIB := /path/to/your/blas

      # Homebrew puts openblas in a directory that is not on the standard search path
      # BLAS_INCLUDE := $(shell brew --prefix openblas)/include
      # BLAS_LIB := $(shell brew --prefix openblas)/lib

      # This is required only if you will compile the matlab interface.
      # MATLAB directory should contain the mex binary in /bin.
      # MATLAB_DIR := /usr/local
      # MATLAB_DIR := /Applications/MATLAB_R2012b.app

      # NOTE: this is required only if you will compile the python interface.
      # We need to be able to find Python.h and numpy/arrayobject.h.
      PYTHON_INCLUDE := /usr/include/python2.7 \
      /usr/lib/python2.7/dist-packages/numpy/core/include
      # Anaconda Python distribution is quite popular. Include path:
      # Verify anaconda location, sometimes it's in root.
      # ANACONDA_HOME := $(HOME)/anaconda
      # PYTHON_INCLUDE := $(ANACONDA_HOME)/include \
      # $(ANACONDA_HOME)/include/python2.7 \
      # $(ANACONDA_HOME)/lib/python2.7/site-packages/numpy/core/include

      # Uncomment to use Python 3 (default is Python 2)
      # PYTHON_LIBRARIES := boost_python3 python3.5m
      # PYTHON_INCLUDE := /usr/include/python3.5m \
      # /usr/lib/python3.5/dist-packages/numpy/core/include

      # We need to be able to find libpythonX.X.so or .dylib.
      PYTHON_LIB := /usr/lib
      # PYTHON_LIB := $(ANACONDA_HOME)/lib

      # Homebrew installs numpy in a non standard path (keg only)
      # PYTHON_INCLUDE += $(dir $(shell python -c 'import numpy.core; print(numpy.core.__file__)'))/include
      # PYTHON_LIB += $(shell brew --prefix numpy)/lib

      # Uncomment to support layers written in Python (will link against Python libs)
      # WITH_PYTHON_LAYER := 1

      # Whatever else you find you need goes here.
      INCLUDE_DIRS := $(PYTHON_INCLUDE) /usr/local/include
      LIBRARY_DIRS := $(PYTHON_LIB) /usr/local/lib /usr/lib

      # If Homebrew is installed at a non standard location (for example your home directory) and you use it for general dependencies
      # INCLUDE_DIRS += $(shell brew --prefix)/include
      # LIBRARY_DIRS += $(shell brew --prefix)/lib

      # NCCL acceleration switch (uncomment to build with NCCL)
      # https://github.com/NVIDIA/nccl (last tested version: v1.2.3-1+cuda8.0)
      # USE_NCCL := 1

      # Uncomment to use `pkg-config` to specify OpenCV library paths.
      # (Usually not necessary -- OpenCV libraries are normally installed in one of the above $LIBRARY_DIRS.)
      # USE_PKG_CONFIG := 1

      # N.B. both build and distribute dirs are cleared on `make clean`
      BUILD_DIR := build
      DISTRIBUTE_DIR := distribute

      # Uncomment for debugging. Does not work on OSX due to https://github.com/BVLC/caffe/issues/171
      # DEBUG := 1

      # The ID of the GPU that 'make runtest' will use to run unit tests.
      TEST_GPUID := 0

      # enable pretty build (comment to see full commands)
      Q ?= @
    • Save, exit, and build

      make all -j4
      make test -j4
      make runtest -j4
    • No errors; success

      [----------] Global test environment tear-down
      [==========] 2101 tests from 277 test cases ran. (371248 ms total)
      [ PASSED ] 2101 tests.
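After a swap like this, it is worth confirming which cuDNN the compiler will actually pick up: the version is recorded as macros in the installed cudnn.h. A sketch against a fake header (for the real check, point grep at /usr/local/cuda/include/cudnn.h; these macro names are the ones cuDNN v4 and later define):

```shell
# Read the cuDNN version macros out of the installed header.
# /tmp/cudnn.h stands in for /usr/local/cuda/include/cudnn.h.
cat > /tmp/cudnn.h <<'EOF'
#define CUDNN_MAJOR      5
#define CUDNN_MINOR      0
#define CUDNN_PATCHLEVEL 5
EOF
grep -E 'define CUDNN_(MAJOR|MINOR|PATCHLEVEL)' /tmp/cudnn.h
```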

Updated component versions

  • Ubuntu
    • 14.04
  • CUDA
    • 8.0
  • cuDNN
    • cudnn-8.0-linux-x64-v5.0-ga.tgz
  • opencv
    • 3.2
  • python
    • Python 2.7.6
  • caffe
    • 1.0

A few takeaways

  • When building with make, you can use

    make -j4

    to run a parallel build, which speeds compilation up considerably. How many jobs to use depends on your machine.

  • Check which CUDA versions your OpenCV release supports

  • Check which cuDNN versions your Caffe release supports

  • Different hardware and environments lead to different pitfalls; when in doubt, Google!
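Rather than hard-coding `-j4`, the job count can be tied to the machine's core count with `nproc` from coreutils:

```shell
# Use one make job per available CPU core instead of a fixed -j4.
jobs=$(nproc)                 # number of available CPU cores
echo "make all -j${jobs}"     # the command you would actually run
```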

Partly based on the article "Ubuntu 14.04上安装caffe".
