©2019 by Circuit Blvd., Inc.

  • Sungjoon Ahn

Video streaming application on SPDK OCSSD

Running applications on SPDK OCSSD

We describe how applications run on SPDK OCSSD, typically by partitioning a single physical SSD device. Video streaming is a fitting application since each stream requires relatively high bandwidth and low latency. As the capacity of modern SSDs increases rapidly, it makes sense to partition the SSDs with the OCSSD framework to guarantee QoS for the individual I/O streams.


Video streaming server off OCSSD ftl_bdev

We run our video streaming server app on top of the system we built as described in the previous tech notes. We chose to run the video servers over NVMe-oF TCP connections, where each OCSSD ftl_bdev is exposed as a kernel block device. The video servers run inside Docker containers on the same physical machine as the OCSSD qemu-nvme VM. Each video server exposes an IP address/port pair for RTMP-compatible video players or HTML5 web browsers to stream videos from.

Fig 1. System overview

Installing and launching video servers

First, clone the app-vs-p Docker source files and build the container image.

cbuser@pm111:~/github$ git clone \
    https://github.com/circuitblvd/app-vs-p.git
cbuser@pm111:~/github/app-vs-p$ docker build -t app-vs-p .

Second, set up the OCSSD qemu-nvme and NVMe-oF TCP environment described in the previous tech notes. Third, make video file directories under the /tmp/ directory of the physical server. Populate them with video files and their thumbnails, and keep the catalog consistent with the JavaScript advanced-$IDX.js files under the Docker source directory app-vs-p/js-work/. For example, /tmp/mm/mp4-5 should match app-vs-p/js-work/advanced-5.js. $IDX is a docker run parameter that we explain in the fourth step below. If you have different video catalogs, you need to edit advanced-$IDX.js to match them and then rebuild your Docker image. You can see example video catalogs in the listing below.

cbuser@svcb-0011u1804:/tmp/mm$ ls -1
mp4-5
mp4-6
mp4-7
mp4-8
cbuser@svcb-0011u1804:/tmp/mm$ ls -1 mp4-5
160820_313_NYC_USAFlag19_1080p.mp4
160929_044_London_BigBen2_1080p.mp4
160929_045_London_BigBen3_1080p.mp4
160929_106_London_WaterlooStationTimeLapse2_1080p.mp4
170422A_002_SlowMoStatue_1080p.mp4
170422B_016_Florence_1080P.mp4
170422B_046_Florence_1080P.mp4
170422B_062_LeaningTowerPisa_1080P.mp4
170609_C_Agra_110.mp4
a19.mp4
a47a.mp4
harbour.mp4
img
Milan_Cathedral_CCBY_NatureClip.mp4
cbuser@svcb-0011u1804:/tmp/mm$ ls -1 mp4-5/img
160820_313_NYC_USAFlag19_1080p.png
160929_044_London_BigBen2_1080p.png
160929_045_London_BigBen3_1080p.png
160929_106_London_WaterlooStationTimeLapse2_1080p.png
170422A_002_SlowMoStatue_1080p.png
170422B_016_Florence_1080P.png
170422B_046_Florence_1080P.png
170422B_062_LeaningTowerPisa_1080P.png
170609_C_Agra_110.png
a19.png
a47a.png
harbour.png
Milan_Cathedral_CCBY_NatureClip.png 
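The catalog-to-script pairing described above can be printed with a quick shell loop. The directory and file names follow the example listing; adjust the IDX values to your own setup:

```shell
# Print the expected pairing between video catalogs and playlist scripts.
# IDX values 5-8 match the four containers launched in the fourth step.
for IDX in 5 6 7 8; do
    echo "/tmp/mm/mp4-$IDX  <->  app-vs-p/js-work/advanced-$IDX.js"
done
```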

Fourth, run the following docker commands to instantiate four app-vs-p containers in detached mode. We run the containers in privileged mode because we need root access to the NVMe-oF devices. We use the physical server's host network, specified by "--network host". The /dev and /tmp directories are passed into the containers because the video server copies video files from the /tmp/mm/mp4-$IDX directories to file systems built on top of the /dev/nvme?n1 NVMe-oF TCP devices. The environment variables passed to the docker run command are as follows: TP specifies the NVMe-oF transport type; TIP and TPORT specify the IP address and port number of the exposed SPDK ftl_bdev; likewise, TNQN and SN specify the associated NQN and serial number of the ftl_bdev; IDX is the index number used to match the video catalog and the video player web port number for the running app-vs-p container. The four commands differ only in their TPORT, TNQN, SN, and IDX values.

cbuser@pm111:~/github/app-vs-p$ docker run --privileged --network host \
    -d -it -v /dev:/dev -v /tmp:/tmp -e TP=tcp -e TIP=10.12.90.142 \
    -e TPORT=4420 -e TNQN=nqn.2016-06.io.spdk:cnode01 \
    -e SN=SPDK00000000000001 -e IDX=5 app-vs-p
cbuser@pm111:~/github/app-vs-p$ docker run --privileged --network host \
    -d -it -v /dev:/dev -v /tmp:/tmp -e TP=tcp -e TIP=10.12.90.142 \
    -e TPORT=4421 -e TNQN=nqn.2016-06.io.spdk:cnode23 \
    -e SN=SPDK00000000000023 -e IDX=6 app-vs-p
cbuser@pm111:~/github/app-vs-p$ docker run --privileged --network host \
    -d -it -v /dev:/dev -v /tmp:/tmp -e TP=tcp -e TIP=10.12.90.142 \
    -e TPORT=4422 -e TNQN=nqn.2016-06.io.spdk:cnode45 \
    -e SN=SPDK00000000000045 -e IDX=7 app-vs-p
cbuser@pm111:~/github/app-vs-p$ docker run --privileged --network host \
    -d -it -v /dev:/dev -v /tmp:/tmp -e TP=tcp -e TIP=10.12.90.142 \
    -e TPORT=4423 -e TNQN=nqn.2016-06.io.spdk:cnode67 \
    -e SN=SPDK00000000000067 -e IDX=8 app-vs-p

When the containers are launched, you can check their logs using the following commands. We attach a screenshot where video players run in web browsers with the IP/port numbers taken from the container logs.

cbuser@pm111:~/github/app-vs-p$ docker ps
CONTAINER ID   IMAGE      COMMAND   CREATED             STATUS             PORTS   NAMES
d0e3c0cea08a   app-vs-p   "./run"   About an hour ago   Up About an hour           compassionate_einstein
b9fa4b131f13   app-vs-p   "./run"   About an hour ago   Up About an hour           gallant_easley
e645f163adfc   app-vs-p   "./run"   About an hour ago   Up About an hour           epic_wing
e9de8cc9cddb   app-vs-p   "./run"   About an hour ago   Up About an hour           cranky_wiles
cbuser@pm111:~/github/app-vs-p$ docker logs e9de8cc9cddb 
****************NVMe-oF discover and connect***************
nvme discover -t tcp -a 10.12.90.142 -s 4420                                                                          
                                                      
Discovery Log Number of Records 4, Generation counter 9      
=====Discovery Log Entry 0======
trtype:  unrecognized
adrfam:  ipv4                                                      
subtype: nvme subsystem 
treq:    not specified 
portid:  0          
trsvcid: 4420  
subnqn:  nqn.2016-06.io.spdk:cnode01 
traddr:  10.12.90.142
...(truncated)...
****************Making html dirs and files**************** 
Done! 
 
****************Launching nginx**************** 
10.12.90.111:8085 rtmp:1940

Fig 2. Four web video players streaming from ftl_bdevs

Under the hood of our video server

Our video streaming server is based on the nginx web server and its RTMP streaming module. We also implement an HTML5 video player served off the nginx web server. The video player is based on the video.js JavaScript framework.


The downloaded app-vs-p/Dockerfile shows that the image is based on the ubuntu:18.10 base image. All the required tools such as nvme-cli are installed before the nginx, nginx-rtmp-module, and video.js components. The Docker image's local file system directories are created and the required files are copied onto them. The nginx.conf file configures which file types are served off which web ports. We use 8080 and 1935 as the base HTML and RTMP service ports, but the run script of each container changes these values based on its given $IDX value. The index.html file is the front page of the web player and references the necessary CSS and video.js files.
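The logged endpoint "10.12.90.111:8085 rtmp:1940" is consistent with simply adding $IDX to the base ports (for IDX=5). This is our reading of the logs, not necessarily how the run script actually computes the ports:

```shell
# Hypothetical port scheme: base ports 8080 (HTTP) and 1935 (RTMP) plus $IDX.
# For IDX=5 this yields 8085 and 1940, matching the container log shown below.
IDX=5
HTTP_PORT=$((8080 + IDX))
RTMP_PORT=$((1935 + IDX))
echo "${HTTP_PORT} rtmp:${RTMP_PORT}"   # -> 8085 rtmp:1940
```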


Within the video.js framework, we particularly use videojs-playlist-ui, as shown in the screenshot above. It provides the main video player with controls such as volume and play/pause, and it presents the playlist vertically with selectable thumbnails, playback durations, and video titles.


Using the NVMe-oF TCP protocol, the app-vs-p/run script first discovers and connects to the given ftl_bdev, which is specified by the Docker parameters TP, TIP, TPORT, TNQN, and SN. Then an ext4 file system is made on the connected /dev/nvme?n1 kernel block device. Using the source video directory /tmp/mm/mp4-$IDX, all the video files and thumbnails are copied onto the file system. Finally, the nginx directories and files are tweaked to serve the particular container. Our github repo includes an app-vs-p/logs directory where log files from this run script can be found.
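The discover/connect sequence boils down to nvme-cli invocations like the following. The variables mirror the Docker environment parameters of the first container; we only build and print the command lines here rather than executing them, and the exact device node and mount point are placeholders, since the real run script may differ:

```shell
# Build the nvme-cli command lines for the first container's parameters
# (values from the docker run examples above); printed, not executed here.
TP=tcp
TIP=10.12.90.142
TPORT=4420
TNQN=nqn.2016-06.io.spdk:cnode01
echo "nvme discover -t $TP -a $TIP -s $TPORT"
echo "nvme connect -t $TP -a $TIP -s $TPORT -n $TNQN"
# Once the kernel device node (/dev/nvme?n1) appears, the script can make an
# ext4 file system and mount it before copying /tmp/mm/mp4-$IDX onto it:
echo "mkfs.ext4 /dev/nvme?n1 && mount /dev/nvme?n1 /mnt"
```

The first printed line matches the discover command visible in the container log above.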


Assumptions and limitations

  • NVMe-oF TCP connections are not cleaned up automatically on Docker exit. You need to disconnect them manually with nvme-cli commands.

  • The video files in the example are mostly downloaded from here. They seem to be freely usable, but you should double-check for commercial use. Video directory and file names should match the app-vs-p Docker code.

  • Only HTML5-streaming-compatible mp4 video files are supported.

  • Some of the video.js components are not the latest versions and some no longer seem to be supported. However, the code works at the time of this writing.

  • We believe all the source code, including video.js, is Apache-licensed. But if you want to use the source code in commercial products, you should check the license issues yourself. Circuit Blvd., Inc. is not responsible for inappropriate 3rd-party license usage.
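Regarding the first limitation above, the manual cleanup looks like the following nvme-cli call, repeated for each connected subsystem. The NQN is taken from the docker run examples above, and we only print the command here rather than executing it:

```shell
# Manual NVMe-oF cleanup after a container exits; repeat per subsystem NQN.
# Printed rather than executed, since it requires a live NVMe-oF connection.
TNQN=nqn.2016-06.io.spdk:cnode01
echo "nvme disconnect -n $TNQN"
```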

Software versions

  • Linux OS: Ubuntu 18.10 Desktop for the physical machine, the qemu virtual machine, and the Docker containers

  • Linux kernel: version 5.0.5

  • SPDK: v19.01-642-g130a5f772 (with two cherry-picks)

  • nginx: 1.12.2

  • nginx-rtmp-module: v1.1.7.10-97-ga5ac72c

  • video.js: 7.4.1, videojs-playlist: 4.3.1, videojs-playlist-ui: 3.5.2, videojs-mux: 2.5.0

Questions?

Contact info@circuitblvd.com for additional information.