Serving system architecture: a gRPC/HTTP client sends a request to the serving system; a Batch & Stream Manager groups incoming requests into batches; the Model Loader and Version Manager fetch models (Model v1, Model v2) from the Storage System; the Serving Interface runs inference and returns the response.

Service KPIs reported: 20 min / 500 ms / 200 ms (I/O); throughput 200 / 100 / 50 QPS (100 QPS); precision 99.3%, recall 97.8%, accuracy 95.1%.

CPU profiling and analysis tools: TensorBoard,
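The KPIs above mix throughput (QPS) and latency targets. A minimal sketch of deriving both from logged requests; the log format here, a list of (start time, duration) pairs, is a hypothetical stand-in, not the system's actual log schema:

```python
# Sketch: derive achieved QPS and tail latency from request logs.
# The (start_sec, duration_ms) tuple format is a hypothetical stand-in.

def qps_and_percentile(requests, pct=0.99):
    """requests: list of (start_sec, duration_ms) tuples."""
    if not requests:
        return 0.0, 0.0
    starts = [s for s, _ in requests]
    span = (max(starts) - min(starts)) or 1.0  # wall-clock window, seconds
    qps = len(requests) / span
    durs = sorted(d for _, d in requests)
    idx = min(len(durs) - 1, int(pct * len(durs)))
    return qps, durs[idx]

# Usage: 200 requests spread over ~2 s, mostly 150 ms with a slow tail.
reqs = [(i * 0.01, 150.0 if i % 50 else 450.0) for i in range(200)]
qps, p99 = qps_and_percentile(reqs)
print(f"{qps:.0f} QPS, p99 {p99} ms")
```

Comparing the computed QPS and p99 against the budgets above tells you whether an instance meets its tier.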
Timeline, VisualDL, VTune, DTrace/strace, plockstat/lockstat, perf, sar, numactl, iostat, vmstat, blktrace; the main optimization levers are compiler options, the MKL-DNN math library, and Intel OpenVINO.

Figure: speedup (0-8x) of ResNet50, MobileNet and InceptionV4 from compiler options and the math library ("with mkl only") versus the Intel OpenVINO SDK ("with vino"); Xeon E5-2650 v4 CPU, batchsize=1,
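The benchmarks above report frames per second at a fixed batch size. A small timing harness of the kind behind such numbers; the inference callable here is a dummy stand-in for a real model's forward pass:

```python
import time

def measure_fps(infer, batch_size=1, warmup=10, iters=100):
    """Time an inference callable and report frames per second.
    `infer` is a stand-in for a real model's forward pass."""
    for _ in range(warmup):           # warm caches before timing
        infer()
    t0 = time.perf_counter()
    for _ in range(iters):
        infer()
    elapsed = time.perf_counter() - t0
    return iters * batch_size / elapsed

# Usage with a dummy CPU-bound workload standing in for a CNN:
fps = measure_fps(lambda: sum(i * i for i in range(10000)))
print(f"{fps:.1f} fps")
```

Warm-up iterations matter on CPU: the first runs pay for cache warming and library initialization and would otherwise skew the average.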
CPUs=8, Mem=16G.

Optimizations on Intel CPUs:

1. MKL-DNN: build TensorFlow with MKL-DNN enabled.

2. OpenVINO: run OpenVINO in Docker. Pipeline: original model -> Model Optimizer Tool -> IR model -> DL Inference Engine API -> Heterogeneous Execution Engine -> CPU Plugin (MKL-DNN) / GPU Plugin (clDNN) -> deep learning application.

3. OpenMP tuning. Recommended settings:
   KMP_BLOCKTIME=10
   KMP_AFFINITY=granularity=fine,verbose,compact,1,0
   OMP_NUM_THREADS=number of cpu cores
Figure: fps (0-120) for MobileNet and ResNet-50 at omp_num_threads=2/4/16, and omp_num_threads=8 in a container; Xeon E5-2650 v4 CPU, batchsize=1, CPUs=8, Mem=16G.

4. CPU cores: throughput scales with core count, so choose the number of CPUs per batchsize to meet the throughput KPI. Figure: fps (0-700) for AlexNet, bs=1, at cpus=2/4/8/12/24; Xeon E5-2650 v4 CPU.

5. CPU generation. Figure: fps (0-120) on Broadwell Xeon E5-2650 v4 versus Skylake Gold 6148 for InceptionV4, MobileNet and ResNet-50; batchsize=1, CPUs=4, Mem=16G.

6. NHWC vs NCHW: TensorFlow on CPU defaults to NHWC, but MKL-DNN prefers NCHW, so TensorFlow built with MKL-DNN should run models in NCHW layout.

7. NUMA: binding all worker cores to a single NUMA node improves throughput by roughly 5%-10%. Figure: fps (0-20) for InceptionV4 and ResNet-50 with core bindings NUMA(0-3), NUMA(0-2,12) and NUMA(0-1,12-13); Xeon E5-2650 v4 CPU, batchsize=1.

8. Batchsize: a larger batchsize raises throughput (fps) but also raises latency, so pick the batchsize that balances the two. Figure: throughput (fps) and latency (ms) versus batchsize (1/2/4/8/16/32) for Inception and ResNet;
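The OpenMP settings recommended above must be in the environment before the framework initializes its thread pool. A sketch of applying them from Python ahead of the TensorFlow import; the values are the ones given in the text, and the right OMP_NUM_THREADS is model-dependent:

```python
import os

# Set OpenMP / MKL threading knobs before importing TensorFlow;
# values follow the recommendations above. OMP_NUM_THREADS should be
# tuned per model (e.g. lower for MobileNet than for ResNet-50).
os.environ["KMP_BLOCKTIME"] = "10"
os.environ["KMP_AFFINITY"] = "granularity=fine,verbose,compact,1,0"
os.environ["OMP_NUM_THREADS"] = str(os.cpu_count() or 1)

# import tensorflow as tf   # import only after the variables are set

print(os.environ["KMP_BLOCKTIME"], os.environ["OMP_NUM_THREADS"])
```

Setting these after the framework has already spawned its threads has no effect, which is why the export must precede the import.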
Xeon E5-2650 v4 CPU, CPUs=4, Mem=16G.

Quantization: both post-training quantization and training-aware quantization are options; quantized inference speeds up roughly 2-5x in TF-Lite and 1-3x in Caffe.

Deployment: everything runs on CPU in Docker, scheduled by Mesos. Two serving paths: (a) a customized serving application: the Model Optimizer Tool converts the original model into a VINO (IR) model, which the OpenVINO Inference Engine serves in Docker; (b) TensorFlow Serving: a Model Transform Tool produces the TF model, served by a TensorFlow Serving server built with MKL-DNN in Docker.

Load testing: pressure-test the web serving interface with LOCUST.

Summary: on CPU, MKL-DNN speeds up CNN inference by 1-4x and OpenVINO by 2-8x; with these optimizations, Skylake CPUs become comparable to the GPU (P4) for parts of a 100+ GPU serving pipeline.
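LOCUST drives the serving endpoint with many concurrent simulated users. The same idea in a stdlib-only sketch that exercises a stand-in request function; a real test would issue HTTP calls against the live serving interface instead:

```python
import threading
import time

def run_load(request_fn, users=8, duration=1.0):
    """Spawn `users` threads that call request_fn in a loop for
    `duration` seconds; return (completed requests, achieved QPS).
    request_fn stands in for an HTTP call to the serving endpoint."""
    counts = []
    lock = threading.Lock()
    deadline = time.monotonic() + duration

    def user():
        n = 0
        while time.monotonic() < deadline:
            request_fn()
            n += 1
        with lock:
            counts.append(n)

    threads = [threading.Thread(target=user) for _ in range(users)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    total = sum(counts)
    return total, total / duration

# Usage with a 10 ms sleep standing in for one inference request:
total, qps = run_load(lambda: time.sleep(0.01), users=4, duration=0.5)
print(total, "requests,", qps, "QPS")
```

Ramping `users` up until the achieved QPS plateaus (or latency blows past the budget) locates the instance's capacity, which is what the LOCUST runs measure.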