
autoradio
=========

The *autoradio* service aims to provide a reliable, fault-tolerant
Icecast streaming service for audio and video. It provides all the
necessary components to ensure that the traffic from the source to the
clients is uninterrupted, even in the face of high load or server crashes.
All this, as far as possible, without any operator intervention.

It is a full-stack service, meaning that it includes its own DNS and
HTTP servers, for full control of the request flow.

Autoradio works by using [etcd](https://github.com/coreos/etcd) to
coordinate the various nodes and store the global mount configuration.
The intended target is a set of homogeneous servers (or virtual
machines) dedicated to this purpose. Autoradio also needs a dedicated
DNS domain (or a delegation for a subdomain).


# Installation

The simplest installation method is probably to use the pre-built
Debian packages (only available for amd64 at the moment), by placing
this line in `/etc/apt/sources.list.d/autoradio.list`:

    deb http://www.incal.net/ale/debian autoradio/

And then running:

    $ sudo apt-get update
    $ sudo apt-get install etcd autoradio

This will install and start the necessary jobs (which will initially
fail due to the missing configuration).

Edit `/etc/default/autoradio` and set, at least, the `DOMAIN`
variable to the DNS domain you've assigned to the cluster. The jobs will
start automatically as soon as the configuration is saved.
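
For example, a minimal `/etc/default/autoradio` might contain just the
cluster domain (here assuming the cluster runs under *radio.example.com*):

    DOMAIN=radio.example.com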


## Full cluster install procedure

Note: this procedure requires etcd 2.0 or later.

This assumes that you have an existing domain name (here
*example.com*) that you control, and that you will run the cluster
under a sub-domain (*radio.example.com*). The procedure will install
an etcd server on each node, so it will work best for a small, odd
number of machines.

Having said that, follow these steps to bootstrap a new streaming
cluster:

1. Make sure that, on each of your servers, the output of
   `hostname -f` is the fully-qualified hostname of the machine,
   and that it resolves to its public IP (possibly using `/etc/hosts`).

2. On every server, run the above-mentioned steps to set up the APT
   repository and install (but do not yet configure) the `etcd` and
   `autoradio` packages.

3. Pick one of your servers and add a delegation for
   *radio.example.com* to it. For instance, with `bind`:

        radio  3600  IN  NS  machine1.example.com.

4. On *machine1*, edit `/etc/default/etcd` with the following
   contents:

        START=1
        BOOTSTRAP=1

    Once you save the file, the *etcd* daemon will start and
    initialize an empty database.

5. On *machine1*, edit `/etc/default/autoradio` and set
   `DOMAIN=radio.example.com`. This will start the *radiod* and
   *redirectord* daemons, and you will be able to serve DNS records
   for the *radio.example.com* zone. Check with:

        $ ping -c1 radio.example.com

    This should send a ping to *machine1*.

6. Set up the remaining machines, one at a time. It is a two-step
   process: first, run the following command on *machine1*:

        $ etcdctl member add machine2 http://machine2:2380/

    This will print out some environment variables. You should copy
    the `ETCD_INITIAL_CLUSTER` line into `/etc/default/etcd` on
    machine2, resulting in something like:

        START=1
        ETCD_INITIAL_CLUSTER=machine1=http://...,machine2=http://...
    
    Finally, set `DOMAIN=radio.example.com` in
    `/etc/default/autoradio`, and the daemons will start
    automatically.

    Note that you will need to wait for etcd on the new machine to
    start successfully before you can run `etcdctl member add` for the
    next one. For further instructions on how to change the etcd
    cluster configuration at runtime, see
    [the etcd documentation](https://github.com/coreos/etcd/blob/master/Documentation/runtime-configuration.md).
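
Once all the machines have joined, you can verify the state of the
etcd cluster from any node; the exact output format depends on your
etcd version, but something along these lines should work:

    $ etcdctl member list
    $ etcdctl cluster-health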


## Building from source

To build autoradio from source, you should have a Go environment set
up properly on your machine. Autoradio uses
[godep](https://github.com/tools/godep) to manage its dependencies, so
make sure you have that installed as well. Building autoradio should
then be as simple as running, from the top-level source directory:

    $ godep go install ./...

This should install the *radiod*, *redirectord* and *radioctl*
executables in `$GOPATH/bin`.
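
To confirm the build, you can simply list the resulting binaries
(assuming a single-directory `GOPATH`; the directory may contain other
binaries as well):

    $ ls $GOPATH/bin
    radioctl  radiod  redirectord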


# Operation

In order to create a new stream (*mount*, in the Icecast terminology),
assuming you are running autoradio on the `example.com` domain:

1. On any node, run:

        $ radioctl create-mount /path/to/mount.ogg

   This will output the username and password used to authenticate the
   source. Take note of them.

   The cluster will be automatically reconfigured with the new mount in
   a few seconds at most.

2. Configure the source, using the username/password provided in the
   previous step, and point it at the following URL:

        http://stream.example.com/path/to/mount.ogg

3. Tell your users to listen to the stream at:

        http://stream.example.com/path/to/mount.ogg.m3u

Note: some sources are unable to handle HTTP redirects; in that case,
you might want to enable proxying on autoradio and tell the client to
use the direct-path URL:

    http://stream.example.com/_stream/path/to/mount.ogg
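
To quickly check that a mount is up (the mount path here is just an
example), you can fetch its playlist with a redirect-following client
such as curl:

    $ curl -L http://stream.example.com/path/to/mount.ogg.m3u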


## DNS zone delegation

Since we can't modify the DNS glue records for the zone delegation in
real time, we have to slightly restrict the assumptions about the
availability of nodes in the cluster: at least one of a chosen subset
of N servers must be reachable at any one time. The number N should be
fairly small, say 3. You can then use those 3 servers as the
nameservers in the zone delegation, while the other nodes are free to
join and leave the cluster dynamically.
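
For instance, extending the `bind` delegation shown earlier to a
subset of three stable nodes (hostnames are illustrative):

    radio  3600  IN  NS  machine1.example.com.
    radio  3600  IN  NS  machine2.example.com.
    radio  3600  IN  NS  machine3.example.com.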


## Proxy

The autoradio HTTP server can operate in one of two modes:

* clients connect directly to Icecast

    When a client connects to the service on port 80, it is sent a
    redirect to an Icecast server on port 8000. Unfortunately some
    older clients (especially sources) don't handle redirects too
    well.

* connections to Icecast are proxied by autoradio

    Clients talk to the autoradio HTTP server, which proxies
    connections to the back-end Icecast servers. This way the clients
    only need to talk to port 80, which not only avoids using
    redirects but might simplify access for people behind corporate
    proxies and such.

    This behavior is controlled by the `--enable-icecast-proxy`
    command-line flag to *redirectord*.
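
As a rough illustration of the difference (the node hostname, status
code and mount path are purely illustrative), a client request in
redirect mode receives an answer along these lines:

    GET /path/to/mount.ogg HTTP/1.1
    Host: stream.example.com

    HTTP/1.1 302 Found
    Location: http://node1.radio.example.com:8000/path/to/mount.ogg

In proxy mode the same request on port 80 is answered directly with
the stream data.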


## Firewalls

Users should be able to reach ports 53/tcp, 53/udp, 80/tcp and
8000/tcp on all nodes. Nodes should be able to reach ports 4001/tcp
and 4002/tcp on each other; these two ports can be left public if
you've set up X509-based authentication for etcd.
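
A minimal `iptables` sketch of these rules might look like this (adapt
it to your firewall tooling; *machine2* stands in for each of the
other cluster nodes):

    # Public-facing services: DNS, HTTP and Icecast.
    iptables -A INPUT -p udp --dport 53 -j ACCEPT
    iptables -A INPUT -p tcp --dport 53 -j ACCEPT
    iptables -A INPUT -p tcp --dport 80 -j ACCEPT
    iptables -A INPUT -p tcp --dport 8000 -j ACCEPT
    # etcd ports, restricted to the other cluster nodes (repeat per node).
    iptables -A INPUT -p tcp -s machine2.example.com --dport 4001 -j ACCEPT
    iptables -A INPUT -p tcp -s machine2.example.com --dport 4002 -j ACCEPT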


## Securing etcd

In a production cluster, you will want to limit access to the etcd
daemons so that only the other nodes can connect to them. While it is
possible to do this with firewall rules, the dynamic membership of the
cluster may make this difficult. We suggest instead using etcd's
support for X509 client authentication, together with a tool to manage
an online CA (such as [autoca](https://git.autistici.org/ai/autoca)).
This way, enrolling a new machine in the cluster only requires
generating a new client certificate, and no other configuration.

Install the CA certificate in `/etc/autoradio/etcd_ca.pem`, the client
certificate in `/etc/autoradio/etcd_client.pem` and its private key in
`/etc/autoradio/etcd_client.key`, and the clients will connect to
etcd using SSL authentication.
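
A possible way to install the credentials, assuming you have obtained
`ca.pem`, `client.pem` and `client.key` from your CA (the source file
names are illustrative):

    $ sudo cp ca.pem /etc/autoradio/etcd_ca.pem
    $ sudo cp client.pem /etc/autoradio/etcd_client.pem
    $ sudo cp client.key /etc/autoradio/etcd_client.key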


## Instrumentation

The *radiod* and *redirectord* daemons can send runtime metrics to
a *statsd* server (by default on localhost:8125).
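
To see what is being sent without running a full *statsd* server, you
can listen on the default port with netcat (a quick debugging trick;
this assumes the OpenBSD netcat variant):

    $ nc -klu 127.0.0.1 8125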


## Transcoding

It is possible to set up a mount to relay an upstream mount re-encoded
with different parameters, using the `radioctl
create-transcoding-mount` command. In this case, autoradio will
automatically start up a process (a
[liquidsoap](http://savonet.sourceforge.net/) instance) to perform the
re-encoding, which will connect as the mount source. A master-election
protocol is used to ensure that only one such process per mount is
started in the whole cluster.


# Testing

There's a [Vagrant](http://www.vagrantup.com/) environment in the
`vagrant-test` subdirectory that will set up a test three-node cluster
(with Debian Wheezy as the base system) using pre-packaged binaries.
To run it:

    $ cd vagrant-test
    $ vagrant up

It will take a while to download the base image the first time; it
will then bring up three nodes called **node1**, **node2** and
**node3**. Use `vagrant ssh` to inspect them.

If you want to test a locally-built package, copy the `autoradio` and
`etcd` Debian packages into the `vagrant-test` directory and set the
`LOCAL` environment variable to a non-empty string when invoking
vagrant:

    $ LOCAL=1 vagrant up