autoradio
The autoradio service aims to provide a reliable, fault-tolerant Icecast streaming cluster for audio and video. It provides all the necessary components to ensure that traffic from the source to the clients is uninterrupted, even in the face of high load or server crashes, and, whenever possible, without any operator intervention.
It is a full-stack service, meaning that it includes its own DNS and HTTP servers, for full control of the request flow.
Autoradio works by using etcd to coordinate the various nodes and store the global mount configuration. The intended target is a set of homogeneous servers (or virtual machines) dedicated to this purpose. Autoradio also needs a dedicated DNS domain (or a delegation for a subdomain).
Installation
The simplest installation method is probably to use the pre-built Debian packages (only available for amd64 at the moment), by placing this line in /etc/apt/sources.list.d/autoradio.list:
deb http://www.incal.net/ale/debian autoradio/
And then running:
$ sudo apt-get update
$ sudo apt-get install etcd autoradio
This will install and start the necessary jobs (which will initially fail due to the missing configuration).
Edit /etc/default/autoradio and set, at least, the DOMAIN variable to the domain you have assigned to the cluster. The jobs will start automatically as soon as the configuration is saved.
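For example, a minimal /etc/default/autoradio could contain nothing more than the domain assignment (the value below is a placeholder):

DOMAIN=radio.example.com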
Full cluster install procedure
This assumes that you have an existing domain name (here example.com) that you control, and that you will run the cluster under a sub-domain (radio.example.com). Follow these steps to bootstrap a new streaming cluster:
- Make sure that, on each of your servers, the output of hostname -f is the fully-qualified hostname of the machine, and that it resolves to its public IP (possibly using /etc/hosts).

- On every server, run the above-mentioned steps to set up the APT repository and install (but do not configure) the etcd and autoradio packages.

- Pick one of your servers and add a delegation for radio.example.com to it. For instance, with bind:

  radio 3600 IN NS machine1.example.com.

- On machine1, edit /etc/default/etcd and set BOOTSTRAP=1. Once you save the file, the etcd daemon will start with an empty database.

- On machine1, edit /etc/default/autoradio and set DOMAIN=radio.example.com. This will start the radiod and redirectord daemons, and you will be able to serve DNS records for the radio.example.com zone. Check with:

  $ ping -c1 radio.example.com

  This should send a ping to machine1.

- Set up all the other machines, setting ETCD_SERVER=etcd.radio.example.com in /etc/default/etcd and DOMAIN=radio.example.com in /etc/default/autoradio (see the example configuration below).
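For instance, on each of the additional nodes the two files would end up containing something like this (values as above):

# /etc/default/etcd
ETCD_SERVER=etcd.radio.example.com

# /etc/default/autoradio
DOMAIN=radio.example.com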
Securing etcd
In a production cluster, you will want to limit access to the etcd daemons so that only the other nodes can connect to them. While it is possible to do this with firewall rules, the dynamic membership of the cluster may make this difficult. We suggest instead using etcd's support for X509 client authentication, together with a tool to manage an online CA (such as autoca). This way, enrolling a new machine in the cluster only requires generating a new client certificate, and no other configuration.
Install the CA certificate in /etc/autoradio/etcd_ca.pem, the client certificate in /etc/autoradio/etcd_client.pem and its private key in /etc/autoradio/etcd_client.key, and the clients will connect to etcd using SSL authentication.
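As a rough sketch (assuming the CA itself is managed elsewhere, for example with autoca; the certificate subject and temporary paths below are placeholders), generating a key and a signing request for a new node could look like:

$ openssl req -new -newkey rsa:2048 -nodes \
    -keyout /etc/autoradio/etcd_client.key \
    -out /tmp/etcd_client.csr -subj '/CN=node4.radio.example.com'
# have the CSR signed by your CA, then install the results:
$ cp signed_client_cert.pem /etc/autoradio/etcd_client.pem
$ cp ca_cert.pem /etc/autoradio/etcd_ca.pem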
Building from source
To build autoradio from source, you should have a Go environment set up properly on your machine. Autoradio uses godep to manage its dependencies, so make sure you have that installed as well. Building autoradio should then be as simple as running, from the top-level source directory:
$ godep go install ./...
This should install the radiod, redirectord and radioctl executables in $GOPATH/bin.
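If you don't have godep installed yet, a typical setup (assuming a standard Go workspace; the GOPATH location is just an example) would be:

$ export GOPATH=$HOME/go
$ export PATH=$PATH:$GOPATH/bin
$ go get github.com/tools/godep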
Operation
In order to create a new stream (a mount, in Icecast terminology), assuming you are running autoradio on the example.com domain:

- On any node, run:

  $ radioctl create-mount /path/to/mount.ogg

  This will output the username and password used to authenticate the source; take note of them. The cluster will be automatically reconfigured with the new mount within a few seconds.

- Configure the source, using the username/password provided in the previous step, and point it at the following URL (see the example source configuration below):

  http://stream.example.com/path/to/mount.ogg

- Tell your users to listen to the stream at:

  http://stream.example.com/path/to/mount.ogg.m3u
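As an illustration of the source side (this is just one possible client; the username, password and playlist path are placeholders), a liquidsoap one-liner streaming a local playlist to the mount created above might look like:

$ liquidsoap 'output.icecast(%vorbis, host="stream.example.com", port=80,
    user="source", password="PASSWORD", mount="/path/to/mount.ogg",
    mksafe(playlist("/path/to/playlist.m3u")))'

A quick way to check the listener side is to fetch the playlist, e.g. with curl http://stream.example.com/path/to/mount.ogg.m3u.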
DNS zone delegation
Since we can't modify the DNS glue records for the zone delegation in real time, we have to restrict slightly the assumptions on node availability: pick a small subset of N servers (N = 3 is usually enough) and assume that at least one of them will be reachable at any given time. Use those N servers as the nameservers in the zone delegation; all the other nodes are then free to have fully dynamic membership.
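For example, with N=3 the delegation in the parent example.com zone would just list three of your machines (machine2 and machine3 are placeholders along the lines of machine1 above):

radio 3600 IN NS machine1.example.com.
radio 3600 IN NS machine2.example.com.
radio 3600 IN NS machine3.example.com.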
Firewalls
The users should be able to reach ports 53/tcp, 53/udp, 80/tcp and 8000/tcp on all nodes. Nodes should be able to reach 4001/tcp and 4002/tcp on each other; these two ports can be public if you've set up X509-based authentication to etcd.
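A minimal iptables sketch of these rules, assuming the cluster nodes live in 192.0.2.0/24 (a placeholder network) and that etcd is not protected by X509 authentication:

# client-facing services: DNS, HTTP and Icecast
$ iptables -A INPUT -p udp --dport 53 -j ACCEPT
$ iptables -A INPUT -p tcp -m multiport --dports 53,80,8000 -j ACCEPT
# etcd, restricted to the other cluster nodes
$ iptables -A INPUT -p tcp -m multiport --dports 4001,4002 -s 192.0.2.0/24 -j ACCEPT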
Instrumentation
The radiod and redirectord daemons can send runtime metrics to a statsd server (by default on localhost:8125).
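To quickly verify that metrics are being emitted without running a full statsd, you can listen on the UDP port with netcat (the exact flags depend on your netcat variant):

$ nc -u -l -p 8125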
Transcoding
It is possible to set up a mount that relays an upstream mount re-encoded with different parameters, using the radioctl create-transcoding-mount command. In this case, autoradio will automatically start a process (a liquidsoap instance) to perform the re-encoding, which will connect as the mount source. A master-election protocol is used to ensure that only one such process per mount is started in the whole cluster.
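One common way to implement such an election on top of etcd, shown here only to illustrate the idea (the key name is made up, not the one autoradio actually uses), is an atomic create with a TTL: every candidate tries to create the same key, and only the node that succeeds starts the transcoder, refreshing the key while it runs:

# fails on all nodes but one; the TTL releases the lock if the holder dies
$ etcdctl mk /example/transcoder-lock node1 --ttl 30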
Testing
There's a Vagrant configuration in the vagrant-test subdirectory that will bring up a three-node test cluster (with Debian Wheezy as the base system) using pre-packaged binaries. To run it:
$ cd vagrant-test
$ vagrant up
It will take a while to download the base image the first time; it will then bring up three nodes called node1, node2 and node3. Use vagrant ssh to inspect them.
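For example:

$ vagrant status
$ vagrant ssh node1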