Distributed Icecast cluster.


The autoradio service aims to provide a reliable, fault-tolerant Icecast streaming service for audio and video. It provides all the necessary components to ensure that the traffic from the source to the clients is uninterrupted, even in the face of high load or server crashes. All this, if possible, without any operator intervention.

It is a full-stack service, meaning that it includes its own DNS and HTTP servers, for full control of the request flow.

Autoradio works by using etcd to coordinate the various nodes and store the global mount configuration. The intended target is a set of homogeneous servers (or virtual machines) dedicated to this purpose. Autoradio also needs a dedicated DNS domain (or a delegation for a subdomain).


Installation

The simplest installation method is probably to use the pre-built Debian packages (currently only available for amd64), by placing this line in /etc/apt/sources.list.d/autoradio.list:

deb http://www.incal.net/ale/debian autoradio/

And then running:

$ sudo apt-key adv --recv-key 0xC0EAC2F9CE9ED9B0
$ sudo apt-get update
$ sudo apt-get install etcd autoradio-server

Full cluster install procedure

Note: this procedure assumes a Debian distribution; it should work with either Wheezy (oldstable) or Jessie (stable).

This assumes that you have an existing domain name (here example.com) that you control, and that you will run the cluster under a sub-domain (radio.example.com). The procedure will install an etcd server on each node, so it will work best for a small, odd number of machines.

Having said that, follow these steps to bootstrap a new streaming cluster:

  1. Make sure that, on each of your servers, the output of hostname -f is the fully-qualified hostname of the machine, and that it resolves to its public IP (possibly using /etc/hosts). This way autoradio can detect the IP address that peers should use when communicating with each host. Also, for simplicity, we're going to assume that each host can resolve the others' addresses just by using their short names.

  2. Pick one of your servers, say host1, and add a delegation for radio.example.com to it. For instance, in a bind-formatted zone file:

    radio  3600  IN  NS  host1.example.com.
  3. On host1, edit /etc/default/etcd with the following contents:
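    The exact contents depend on your etcd version; as a sketch, using etcd's standard environment variables (an assumption; the hostnames are examples to adapt), a new single-member cluster on host1 could be bootstrapped with:

```
# /etc/default/etcd on host1: bootstrap a brand-new, single-member cluster.
ETCD_NAME=host1
ETCD_INITIAL_CLUSTER="host1=http://host1:2380"
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://host1:2380
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS=http://host1:2379
```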


    Once you save the file, restart the etcd daemon: this will initialize an empty database:

    $ service etcd restart
  4. On host1, edit /etc/default/autoradio and add:
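    Based on the DOMAIN variable referenced later in this document, this file presumably needs at least the DNS domain served by the cluster, e.g.:

```
# /etc/default/autoradio: the DNS zone this cluster is authoritative for.
DOMAIN=radio.example.com
```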

  5. Run the steps in the Installation section above to set up the APT repository and install the etcd and autoradio packages using the configuration you just wrote.

This will start the radiod and redirectord daemons, and you will be able to serve DNS records for the radio.example.com zone.

Check that the service is healthy with:

    $ ping -c1 radio.example.com

This should send a ping to *host1*.

Now that the first node is up and running, set up the remaining machines. For every host:

  1. Set up etcd. First, run the following command on the first machine (host1):

    $ etcdctl member add host2 http://host2:2380

(remember not to include a final slash on the node URL).

This will print out some environment variables. Copy the ETCD_INITIAL_CLUSTER line into /etc/default/etcd on the new host. The other lines of that file should be identical to what is shown in step 3 of the previous checklist, replacing the host name where necessary.

Note that you will need to wait for etcd on the new machine to start successfully before you can run etcdctl member add for the next one. For further instructions on how to change the etcd cluster configuration at runtime, see the etcd documentation.
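As an illustration (assuming etcd's standard environment variables, as above), after adding host2 the file on the new host might look like:

```
# /etc/default/etcd on host2: join the existing cluster.
# ETCD_INITIAL_CLUSTER comes verbatim from the `etcdctl member add` output.
ETCD_NAME=host2
ETCD_INITIAL_CLUSTER="host1=http://host1:2380,host2=http://host2:2380"
ETCD_INITIAL_CLUSTER_STATE=existing
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://host2:2380
ETCD_LISTEN_PEER_URLS=http://0.0.0.0:2380
ETCD_LISTEN_CLIENT_URLS=http://0.0.0.0:2379
ETCD_ADVERTISE_CLIENT_URLS=http://host2:2379
```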

  2. Set DOMAIN in /etc/default/autoradio, as shown in step 4 of the previous checklist above.

  3. Install the autoradio packages (see the Installation section above). The daemons should start automatically with the new configuration.

Building from source

To build autoradio from source, you should have a Go environment set up properly on your machine. Autoradio uses godep to manage its dependencies, so make sure you have that installed as well. Building autoradio should then be as simple as running, from the top-level source directory:

$ godep go install ./...

This should install the radiod, redirectord and radioctl executables in $GOPATH/bin.


Creating a stream

In order to create a new stream (a mount, in Icecast terminology), assuming you are running autoradio on the radio.example.com domain:

  1. On any node, run:

    $ radioctl create-mount /path/to/mount.ogg

This will output the username and password used to authenticate the source. Take note of them.

The cluster will be automatically reconfigured with the new mount in a few seconds at most.

  2. Configure the source, using the username/password provided in the previous step, and point it at the stream's URL on the cluster domain (for the example above, presumably http://radio.example.com/path/to/mount.ogg).

  3. Tell your users to listen to the stream at the same URL, e.g. http://radio.example.com/path/to/mount.ogg.


Note: some sources are unable to handle HTTP redirects; in that case, you might want to enable proxying on autoradio and tell the client to use the direct-path URL instead.


DNS zone delegation

Since we can't modify the DNS glue records for the zone delegation in real time, we have to slightly restrict our assumptions about the availability of nodes in the cluster: you have to assume that at least N of your nodes will be partially available at any one time (i.e. at least one of a chosen subset of N servers will be reachable). The number N should be fairly low, say 3. You can then use those 3 servers as the nameservers for the zone delegation, leaving the other nodes free to have dynamic membership.
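For example, with host1, host2 and host3 as the chosen subset, the parent zone's delegation (in the same bind-formatted syntax used earlier) would contain:

```
radio  3600  IN  NS  host1.example.com.
radio  3600  IN  NS  host2.example.com.
radio  3600  IN  NS  host3.example.com.
```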


Proxying

The autoradio HTTP server can operate in one of two modes:

  • clients connect directly to Icecast

    When a client connects to the service on port 80, it is sent a redirect to an Icecast server on port 8000. Unfortunately some older clients (especially sources) don't handle redirects too well.

  • connections to Icecast are proxied by autoradio

    Clients talk to the autoradio HTTP server, which proxies connections to the back-end Icecast servers. This way the clients only need to talk to port 80, which not only avoids using redirects but might simplify access for people behind corporate proxies and such.

This behavior is controlled by the --enable-icecast-proxy command-line flag to redirectord. It is set to true by default.


Firewalls

Users should be able to reach ports 53/tcp, 53/udp, 80/tcp and 8000/tcp (the latter only if proxying is disabled) on all nodes. Nodes should be able to reach ports 2379/tcp and 2380/tcp (the etcd ports) on each other; these two ports can be public if you've set up X509-based authentication for etcd.
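As a sketch, this policy could be expressed with iptables rules along the following lines (the etcd ports are left open here on the assumption that X509 client authentication is enabled; otherwise restrict them to the cluster nodes):

```
# Public, client-facing ports: DNS and HTTP.
iptables -A INPUT -p udp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 53 -j ACCEPT
iptables -A INPUT -p tcp --dport 80 -j ACCEPT
# Direct Icecast access, only needed if proxying is disabled.
iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
# etcd client and peer traffic between the nodes.
iptables -A INPUT -p tcp --dport 2379 -j ACCEPT
iptables -A INPUT -p tcp --dport 2380 -j ACCEPT
```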

Securing etcd

In a production cluster, you will want to limit access to the etcd daemons so that only the other nodes can connect to them. While it is possible to do this with firewall rules, the dynamic membership of the cluster may make this difficult. We suggest instead using etcd's support for X509 client authentication, together with a tool to manage an online CA (such as autoca). This way, enrolling a new machine in the cluster only requires generating a new client certificate, and no other configuration.

Install the CA certificate in /etc/autoradio/etcd_ca.pem, the client certificate in /etc/autoradio/etcd_client.pem and its private key in /etc/autoradio/etcd_client.key, and the clients will connect to etcd using SSL authentication.


Monitoring

The radiod and redirectord daemons can send runtime metrics to a statsd server (by default on localhost:8125).


Transcoding

It is possible to set up a mount that relays an upstream mount, re-encoded with different parameters, using the radioctl create-transcoding-mount command. In this case, autoradio will automatically start a process (a liquidsoap instance) to perform the re-encoding, which will connect as the mount source. A master-election protocol is used to ensure that only one such process per mount is started in the whole cluster.


Testing

There's a Vagrant environment in the vagrant-test subdirectory that will set up a three-node test cluster (with Debian Jessie as the base system) using pre-packaged binaries. To run it:

$ cd vagrant-test
$ vagrant up

It will take a while to download the base image the first time; it will then bring up three nodes called node1, node2 and node3. Use vagrant ssh to inspect them.

If you want to test a locally-built package, copy the autoradio and etcd Debian packages into the vagrant-test directory and, in that same directory, run:

$ dpkg-scanpackages -m . >Packages

The provisioning process will automatically use the local packages if they are available.