This is the first section of my nginx, rtmp, and OAuth live streaming server series. As I described in my introduction page, I wanted to live stream output from my GoPro, but since I do not have thousands of followers on YouTube and I avoid Facebook like the plague, that left me with few easy and affordable options.
So I did it myself.
This series is an overview of my adventure in setting up my very own live streaming server. It includes a little of this, and a lot of that, but mostly it just works.
Seeing as I am also a little curious about security, I wanted the streams to be encrypted as well as access controlled… all without spending any more money than I already do on my home network.
So let's get into it.
Section 1: The nginx config and how it controls access
The nginx config comprises two sections. The first section describes how incoming rtmp sessions are handled (the Producer), and the second section describes the user web server interface (the Consumer). For my project it was important to have access control and security associated with the Consumer to restrict access and encrypt the output streams. I was not as concerned about securing the incoming Producer streams, since without access to the Consumer they do little other than take up some bandwidth. My users could watch those errant streams, but with a quick address filter in nginx I can stop them if I choose.
The nginx config file /etc/nginx/nginx.conf (for most people) has to include a new rtmp section:
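A minimal sketch of such an rtmp section is below. This is an illustrative reconstruction, not my exact config: the ports, callback URLs, fragment timings, and paths are placeholder values, though the paths match the /opt/streams examples used later in this section.

```nginx
rtmp {
    server {
        listen 1935;

        # Producer entry point: rtmp://example.com/live/<stream_name>
        application live {
            live on;
            record off;

            # Notify the management application (comment these out
            # until an application is ready to handle them)
            on_publish http://127.0.0.1:8000/on_publish;
            on_publish_done http://127.0.0.1:8000/on_publish_done;

            # Re-push the incoming stream to the internal hls application
            push rtmp://127.0.0.1:1935/hls/$name;
        }

        # Internal only: packages the stream into encrypted HLS segments
        application hls {
            live on;
            record off;
            allow publish 127.0.0.1;
            deny publish all;

            hls on;
            hls_nested on;                  # one folder per stream name
            hls_path /opt/streams/live;
            hls_fragment 6s;
            hls_playlist_length 60s;

            hls_keys on;                    # AES-128 encrypt the segments
            hls_key_path /opt/streams/keys;
            hls_key_url /keys/;             # URL prefix for keys in the playlist
            hls_fragments_per_key 4;        # rotate the key every 4 segments
        }
    }
}
```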
So in the rtmp section there are two applications defined, live and hls. hls is only accessible as an internal path from the live application.
Following through the config, a Producer enters the rtmp URL into their device, such as rtmp://example.com/live/Go_Pro_Stream (note the underscores - nginx does not like stream names with spaces). When the stream starts it notifies the Python application via the on_publish URL (more on that in a later section) that a new stream has started. Similarly when the stream stops, nginx notifies the application via the on_publish_done URL that the stream has stopped. For now, it is possible to comment out the on_publish and on_publish_done until an application is ready to manage the connections.
When the stream is ongoing, nginx takes the incoming data and pushes it through a localhost URL to the hls application. Access to the hls application is restricted to the previous live application via localhost connections only. hls will now package up the incoming stream into MPEG Transport Stream (.ts) segments stored in the hls_path folder with the stream name added to the folder. For example, using my above config, if my stream name was Go_Pro_Stream, the segments will be written to the /opt/streams/live/Go_Pro_Stream folder. Make sure the nginx process has permission to write in the hls_path folder.
The generated keys needed by the client player to decrypt the streams will be located under the hls_key_path folder with the stream name added to the path. As in my example above, this will be the /opt/streams/keys/Go_Pro_Stream folder.
When a stream is being produced the contents of the hls_path segment folder will look similar to:
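A hypothetical listing, reusing the segment names from my playlist example further down:

```
index.m3u8
1561929226870.ts
1561929232704.ts
1561929237706.ts
1561929243543.ts
1561929249177.ts
1561929254480.ts
```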
Since the stream is “live” and I am not preserving the stream, there may only be between a dozen and two dozen files. As more segments are created, eventually the old segments are not needed anymore and are removed. This keeps the total size and number of these files to a minimum.
Similarly the content of the hls_key_path key folder will look similar to:
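Again a hypothetical listing, with one key file per group of segments:

```
1561929216707.key
1561929249177.key
```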
Without going too deep into HLS, the main file describing what to fetch to stream the video is the index.m3u8 file. In it is the list of available transport stream segments, the .ts files, as well as the keys needed to decrypt the different transport stream segments. For example, an index.m3u8 might look like:
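This example is an illustrative reconstruction; the durations, sequence number, and key URI prefix are made up, but the segment and key names match my example stream:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:42
#EXT-X-TARGETDURATION:6
#EXT-X-KEY:METHOD=AES-128,URI="/keys/Go_Pro_Stream/1561929216707.key"
#EXTINF:5.834,
1561929226870.ts
#EXTINF:5.833,
1561929232704.ts
#EXTINF:5.002,
1561929237706.ts
#EXTINF:5.837,
1561929243543.ts
#EXT-X-KEY:METHOD=AES-128,URI="/keys/Go_Pro_Stream/1561929249177.key"
#EXTINF:5.303,
1561929249177.ts
#EXTINF:5.000,
1561929254480.ts
```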
In a nutshell, the sequence says to load key 1561929216707.key which will then be used to decrypt the next four segments 1561929226870.ts, 1561929232704.ts, 1561929237706.ts, and 1561929243543.ts. Then the key switches to 1561929249177.key which is used to decrypt the next two segments 1561929249177.ts and 1561929254480.ts.
As nginx continues to receive new data and create new encrypted transport stream segment files, the index.m3u8 is continuously updated with the new information. When the web player starts running low on data to playback, it will request a new copy of the index.m3u8 file to learn what the new segment names are and which keys are used to decrypt them.
These are all generated by nginx so you don’t need to be too concerned about them. What is important is that any request to download these files needs to be controlled by the Consumer configuration of nginx.
The Consumer is the front-end web page definition for nginx and is where all the security happens. For now this will be simply an explanation of its function as there is no OAuth enabled application receiving its requests yet. That will be discussed later.
To provide the necessary security, though, a few additional modules need to be added to nginx that are not included in most distributions, which unfortunately means building your own nginx and running it instead of the version included with your OS. This is not terribly difficult, but it does require your system to be set up to build nginx. More on that in the next section.
The configuration file might be named /etc/nginx/sites-available/stream and be appropriately soft-linked into the /etc/nginx/sites-enabled folder.
The files the client fundamentally will request include:
- The index.m3u8 manifest file
- The .key files
- The .ts transport stream files
- Other web pages to run the web application
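The skeleton of the Consumer server block might look like the sketch below. This is illustrative only (server name, port, upstream address, and web root are placeholders); the inline playlist rewriting and $sig generation are explained next, and the stream_backend upstream name matches the proxy_pass line discussed at the end of this section.

```nginx
upstream stream_backend {
    server 127.0.0.1:8000;    # the management application (covered later)
}

server {
    listen 443 ssl;
    server_name example.com;

    # index.m3u8 manifests, served from the hls_path parent folder
    location ~ \.m3u8$ {
        root /opt/streams;
    }

    # Encrypted transport stream segments; useless without the key
    location ~ \.ts$ {
        root /opt/streams;
    }

    # Key files are only released after the back end authorizes the request
    location ~ \.key$ {
        proxy_pass http://stream_backend/authorize_key;
    }

    # Everything else: the web pages running the web application
    location / {
        root /var/www/stream;
    }
}
```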
The magic here is to modify the contents of the index.m3u8 file uniquely for each user and use that uniqueness to verify that this user is allowed to access the files. This part I gleaned from Ben Wilber’s tutorial which proved very helpful in understanding how to secure the key files.
When the index.m3u8 file is requested, the nginx configuration modifies the URLs pointing to the keys inline, adding a hash parameter to each URL. When the client submits new requests for a key, the parameter is submitted along with it. The nginx server can then recompute the hash to verify the request originated from the same client. Someone else sending the same URL with the same hash parameter would cause the nginx server to compute a different hash and thus be detected as an impostor. This primarily stops someone else from replaying the stream. However, another step is taken to actually authenticate the user's credentials through the web application before authorizing the user to have access to the decryption key file. This step depends upon your preferred back-end authorization. Ben Wilber chose Django, a decent all-around back-end database and user management architecture, but I already had OAuth working with my cloud server, so I chose to go the OAuth route.
Validating the user access with nginx, though, requires a few additional modules not included in the base distribution, which I will discuss in a later section. For now, assuming those modules are working, the magic in validating the user request involves first generating a hash of their session identity, a secret string, and the stream name, then base64 encoding it into the $sig parameter:
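A sketch of that step using the set-misc-nginx-module directives, assuming the secret string, session cookie name, and capture variable are placeholders for your own values:

```nginx
# Inside the location serving index.m3u8.
# $cookie_session is the user's session cookie, $1 is the stream name
# captured from the location regex, and "my_secret_string" is a
# placeholder for your own secret.
set_hmac_sha1 $sig "my_secret_string" "$cookie_session$1";
set_encode_base64 $sig $sig;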
Then as the index.m3u8 file is downloaded, modify the key URL to add the $sig parameter to the URL using:
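A sketch of this using nginx's sub_filter module (one of the pieces that may require a rebuild, since it is not always compiled in):

```nginx
# Append the signature to every key URI as the playlist passes through
sub_filter ".key" ".key?s=$sig";
sub_filter_types application/vnd.apple.mpegurl;
sub_filter_once off;    # rewrite every key line, not just the first
```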
This results in modifying the index.m3u8 file that is on disk to instead include something like the following when received by the client:
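A hypothetical rewritten playlist (segment names, durations, and key URI prefix are illustrative) might contain:

```
#EXTM3U
#EXT-X-VERSION:3
#EXT-X-MEDIA-SEQUENCE:42
#EXT-X-TARGETDURATION:6
#EXT-X-KEY:METHOD=AES-128,URI="/keys/Go_Pro_Stream/1561929216707.key?s=dKiMvdm4GbMuhiAycFcrXjxf+78="
#EXTINF:5.834,
1561929226870.ts
#EXTINF:5.833,
1561929232704.ts
#EXTINF:5.002,
1561929237706.ts
#EXTINF:5.837,
1561929243543.ts
#EXT-X-KEY:METHOD=AES-128,URI="/keys/Go_Pro_Stream/1561929249177.key?s=dKiMvdm4GbMuhiAycFcrXjxf+78="
#EXTINF:5.303,
1561929249177.ts
#EXTINF:5.000,
1561929254480.ts
```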
Notice how the URI for the key now appears with the parameter ?s=dKiMvdm4GbMuhiAycFcrXjxf+78= in the above example. This was the encoded hash added by nginx that is used to verify the request for the key came from the same system that downloaded the index.m3u8 file.
This does not complete the authorization though. That occurs in the line proxy_pass http://stream_backend/authorize_key; and will be covered in a later section on the Python application.
I do hope this proves useful to someone out there.
- Introduction: What this project is about
- Section 1: The nginx config and how it controls access <== YOU ARE HERE
- Section 2: Dealing with the missing nginx pieces (Coming Soon)
- Section 3: Designing an application to glue it together (Coming Soon)
- Section 4: Integrating OAuth authentication using Nextcloud (Coming Soon)
- Section 5: Session storage with Redis (cool Enterprise scaling option) (Coming Soon)
- Section 6: Streaming with my GoPro or with ffmpeg (Coming Soon)
In the next section I will discuss adding the missing nginx modules.