Commit ae312e5c authored by ale

Remove obsolete test files, update test documentation

parent c2c9a23b
Merge request !197: Remove obsolete test files, update test documentation
Pipeline #15393 failed
@@ -66,7 +66,7 @@ and in README files for individual Ansible roles:
 * [Quick start guide](docs/quickstart.md)
 * [Reference](docs/reference.md) ([PDF](docs/reference.pdf))
-* [Testing](docs/testing.md)
+* [Notes on testing](test/README.md)
 # Requirements
Testing
===
This repository contains some integration tests that use Vagrant to
spin up virtual machines and run Ansible against them. The virtual
machines will be destroyed and re-created every time the tests run, so
it would be a good idea to use a local caching proxy for Debian
packages (such as *apt-cacher-ng*).
## Networking
The virtual machines used in tests are placed on a randomly chosen
/24 network, so that multiple overlapping tests can run on the same
host. To pick a specific network, use the *--network* option to
*run-test.sh*. On this network, the local (host) machine will have
the address "x.x.x.1".
## Running tests
To run a test, go to the *test* subdirectory of this repository and
run the *run-test.sh* command with the name of a test
environment. Right now only two such test environments are defined:
*base* (just the *frontend* role and a sample HTTP server) and
*full* (all builtin services). For example (replace x.x.x with the
test network):
```shell
cd test
./run-test.sh --apt-proxy x.x.x.1:3142 base
```
These tests will set up a very simple Vagrant environment, turn up
services using Ansible, and verify their functionality using the
built-in integration test suite (see below).
The test environment created by the *run-test.sh* script will be
automatically removed when the script terminates, unless the *--keep*
option is specified.
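For example, to keep the Vagrant environment around for manual inspection after the run:

```shell
./run-test.sh --apt-proxy x.x.x.1:3142 --keep base
```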
## Integration tests
Float comes with a suite of integration tests, meant to be run on a
live environment (test or otherwise). These tests can be run from your
Ansible directory using the *float* command-line tool:
```shell
/path/to/float/float run integration-test
```
The test suite requires a small amount of configuration in order to
run on a non-test environment, as it needs admin credentials to
automatically test SSO-protected services. These parameters are
stored in a YAML file; you can point the test suite at your own file
using the `TEST_PARAMS` environment variable, e.g.:
```shell
env TEST_PARAMS=my-params.yml /path/to/float/float run integration-test
```
The built-in test parameters use the credentials of the default
admin user in test environments (*admin*/*password*):
```yaml
---
priv_user:
name: admin
password: password
```
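On a real (non-test) environment you would supply your own credentials instead; a minimal sketch, with placeholder values:

```shell
cat > my-params.yml <<EOF
---
priv_user:
  name: your-admin-user
  password: your-admin-password
EOF
env TEST_PARAMS=my-params.yml /path/to/float/float run integration-test
```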
The integration test suite runs the following checks:
* check that public endpoints for built-in services are reachable
* check that no Prometheus alerts are firing
More tests will be added.
Float Integration Tests
===
The float integration test suite is meant to verify the basic
functionality of the builtin services. Its main purpose is to verify,
at the end of a setup process (generally as part of a CI environment),
that the target infrastructure is operating as expected.
To avoid requiring complex dependencies on the management host, the
test suite is packaged as a container, and it is then run on the
target hosts themselves. The test suite is built as a series of Python
unit tests, and the container image just invokes them sequentially.
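For reference, the invocation of the test container on a target host (mirroring the playbook described below) looks roughly like this:

```shell
docker run --rm --network host \
    --mount type=bind,source=/tmp/test-config.yml,destination=/test-config.yml \
    registry.git.autistici.org/ai3/float:integration-test
```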
The test suite comes with a simple Ansible playbook, which performs
the following steps:
* it dumps the float configuration into a temporary file, so that the
test suite can read it (to find which services are running, on
which hosts, etc);
* it pulls the latest version of the container image;
* it runs the test suite.
This playbook targets the test environments generated by
"float create-env": specifically, it expects to find a host named
*host1*, and the default test admin credentials
(admin/password). If this is not the case, it should be easy to
customize the playbook for your specific environment.
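For instance, pointing the bundled playbook at a differently-named host could be as simple as copying it and editing the `hosts:` line (*myhost1* below is just a placeholder):

```shell
cp /path/to/float/test/integration-test-docker.yml my-integration-test.yml
sed -i -e 's/hosts: host1/hosts: myhost1/' my-integration-test.yml
float run my-integration-test.yml
```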
## Checks performed by the integration test
* Verify that all public endpoints for built-in services are
reachable, even those behind authentication. There is no test of
functionality, just that the response HTTP status code is 200.
* Verify that no monitoring alerts are currently firing (no firing
  alerts with severity=page).
## Running tests in a CI environment
A very common CI test pattern is to test a specific configuration on
short-lived, ephemeral virtual machines created for this purpose. Such
a test job should perform (at least) the following steps (a combined
sketch follows the list):
* Generate, or otherwise provide, an environment configuration to be
used for the test. This is the primary purpose of the *float
create-env* command, which exposes many parameters of the VM setup
as command-line parameters. On the other hand, if your setup is
completely static, you can also just use a pre-created test
environment configuration somewhere in your repository. An example
(assuming you have *float* in the PATH, and that you intend to
create a new test configuration in the *test-1* directory, running 3
hosts):
```shell
float create-env --domain=example.com --num-hosts=3 test-1
```
* Initialize test credentials by running the *init-credentials.yml*
  playbook:
```shell
cd test-1 && float run init-credentials.yml
```
* Start the virtual machines that constitute the test targets, whose
configuration was likely generated as part of step 1. The details
are going to depend on which specific technology you are using for
VM management. Float creates a Vagrant configuration by default, so
if you're using Vagrant, this is simply:
```shell
cd test-1 && vagrant up
```
* Run your main playbook to configure the VMs:
```shell
cd test-1 && float run site.yml
```
* Run the integration tests:
```shell
cd test-1 && float run /path/to/float/test/integration-test-docker.yml
```
* Stop the virtual machines used for the test, regardless of its result:
```shell
cd test-1 && vagrant destroy -f
```
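Putting the steps above together, a minimal CI job script might look like the following sketch (it assumes *float* and *vagrant* are in the PATH, and that float itself is checked out at /path/to/float):

```shell
#!/bin/sh
set -e
# 1. Generate the test environment configuration.
float create-env --domain=example.com --num-hosts=3 test-1
cd test-1
# 2. Initialize test credentials.
float run init-credentials.yml
# Always destroy the VMs at the end, regardless of the test result.
trap 'vagrant destroy -f' EXIT
# 3. Start the virtual machines.
vagrant up
# 4. Configure them with the main playbook.
float run site.yml
# 5. Run the integration tests.
float run /path/to/float/test/integration-test-docker.yml
```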
### How to make CI run (somewhat) faster
Ansible, at least the way float uses it, can be frustratingly
slow. This is even more of a problem in a CI context, where the
feedback latency can be quite annoying. There's a limit to what can be
done, as most of the slowness is intrinsic in the choice of Ansible as
the implementation layer, but there are a number of things that can be
done to bring down the execution time to more reasonable values. In
order of importance (a combined example follows the list):
1. Install
[Mitogen](https://mitogen.networkgenomics.com/ansible_detailed.html).
There is really no reason not to: it magically and
transparently makes Ansible *significantly* faster (5-10 times in
our tests). To enable it, you'll need to modify the *strategy*
setting in the Ansible configuration, either manually or by passing
`-e ansible_cfg.defaults.strategy=mitogen_linear` to float
create-env.
2. Give test environments appropriate resources. For starters, make
sure you have fast disks for the virtual machines: the float setup
does a lot of I/O, due to the package installations and the setup
of container images. Also ensure that the management host (the one
running Ansible, usually the CI runner itself) is not CPU-bound.
3. Set up an APT cache (for instance with *apt-cacher-ng*). Set the
Ansible variable *apt_proxy* to the host:port of the cache, or pass
the `-e config.apt_proxy=HOST:PORT` option to float create-env.
4. Quite a bit of time is spent downloading container images from
float's registry, so using some sort of Docker registry cache could
help. Most caching methods will require a MitM SSL CA, which can be
set up using the `podman_additional_ssl_ca` configuration
variable. You can then set `podman_https_proxy` to the HOST:PORT
address of your cache.
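As a combined example of points 1 and 3, the relevant settings can be passed directly to *float create-env* (the host:port of the APT cache is a placeholder):

```shell
float create-env --domain=example.com --num-hosts=3 \
    -e ansible_cfg.defaults.strategy=mitogen_linear \
    -e config.apt_proxy=10.0.0.1:3142 \
    test-1
```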
#!/bin/sh
set -e
../float create-env --vagrant --num-hosts 1 --domain example.com --debian-dist bullseye "$@"
cat > "$1/services.yml" <<EOF
---
frontend:
scheduling_group: frontend
service_credentials:
- name: nginx
enable_server: false
- name: ssoproxy
enable_server: false
- name: replds-acme
systemd_services:
- nginx.service
- sso-proxy.service
- bind9.service
- replds@acme.service
ports:
- 5005
ok:
scheduling_group: all
num_instances: 1
containers:
- name: http
image: registry.git.autistici.org/ai3/docker/okserver:latest
port: 3100
env:
PORT: 3100
public_endpoints:
- name: ok
port: 3100
scheme: http
EOF
cat > "$1/passwords.yml" <<EOF
- name: ssoproxy_session_auth_key
description: sso-proxy cookie authentication key
type: binary
length: 64
- name: ssoproxy_session_enc_key
description: sso-proxy cookie encryption key
type: binary
length: 32
- name: dnssec_nsec3_salt
description: Salt used by dnssec-signzone for NSEC3 replies (public,
recommended to be rotated occasionally)
type: binary
length: 32
EOF
exit 0
#!/bin/sh
set -e
../float create-env --vagrant --num-hosts 1 --domain example.com "$@"
cat > "$1/services.yml" <<EOF
---
frontend:
scheduling_group: frontend
service_credentials:
- name: nginx
enable_server: false
- name: ssoproxy
enable_server: false
- name: replds-acme
systemd_services:
- nginx.service
- sso-proxy.service
- bind9.service
- replds@acme.service
ports:
- 5005
ok:
scheduling_group: all
num_instances: 1
containers:
- name: http
image: registry.git.autistici.org/ai3/docker/okserver:latest
port: 3100
env:
PORT: 3100
public_endpoints:
- name: ok
port: 3100
scheme: http
EOF
cat > "$1/passwords.yml" <<EOF
- name: ssoproxy_session_auth_key
description: sso-proxy cookie authentication key
type: binary
length: 64
- name: ssoproxy_session_enc_key
description: sso-proxy cookie encryption key
type: binary
length: 32
- name: dnssec_nsec3_salt
description: Salt used by dnssec-signzone for NSEC3 replies (public,
recommended to be rotated occasionally)
type: binary
length: 32
EOF
exit 0
#!/bin/sh
set -e
../float create-env --vagrant --num-hosts 2 --domain example.com "$@"
cat > "$1/services.yml" <<EOF
---
include:
- "../../services.yml.no-elasticsearch"
ok:
scheduling_group: backend
containers:
- name: http
image: registry.git.autistici.org/ai3/docker/okserver:latest
port: 3100
env:
PORT: 3100
resources:
ram: 1g
cpu: 0.5
public_endpoints:
- name: ok
port: 3100
scheme: http
EOF
cat > "$1/passwords.yml" <<EOF
---
- include: ../../passwords.yml.default
EOF
cat > "$1/group_vars/all/disable-elasticsearch.yml" <<EOF
---
enable_elasticsearch: false
EOF
exit 0
#!/bin/sh
set -e
../float create-env --vagrant --num-hosts 2 --domain example.com "$@"
# Patch the Vagrantfile to add a second private network.
sed -i -e 's/^\(.*m.vm.network.*\)/\1\n m.vm.network "private_network", ip: "192.168.144.#{9+i}", libvirt__dhcp_enabled: false/' "$1/Vagrantfile"
cat > "$1/services.yml" <<EOF
---
frontend:
scheduling_group: frontend
service_credentials:
- name: nginx
enable_server: false
- name: ssoproxy
enable_server: false
- name: replds-acme
systemd_services:
- nginx.service
- sso-proxy.service
- bind9.service
- replds@acme.service
ports:
- 5005
ok:
scheduling_group: all
num_instances: 1
containers:
- name: http
image: registry.git.autistici.org/ai3/docker/okserver:latest
port: 3100
env:
PORT: 3100
public_endpoints:
- name: ok
port: 3100
scheme: http
EOF
cat > "$1/passwords.yml" <<EOF
- name: ssoproxy_session_auth_key
description: sso-proxy cookie authentication key
type: binary
length: 64
- name: ssoproxy_session_enc_key
description: sso-proxy cookie encryption key
type: binary
length: 32
- name: dnssec_nsec3_salt
description: Salt used by dnssec-signzone for NSEC3 replies (public,
recommended to be rotated occasionally)
type: binary
length: 32
EOF
exit 0
---
- hosts: host1
tasks:
- name: Dump Ansible configuration for test
copy:
dest: /tmp/test-config.yml
content: "{{ vars|to_nice_yaml }}"
- name: Setup test Docker image
command: "podman pull registry.git.autistici.org/ai3/float:integration-test"
- name: Run tests
command: docker run --net host --mount type=bind,source=/tmp/test-config.yml,destination=/test-config.yml registry.git.autistici.org/ai3/float:integration-test
---
- hosts: localhost
gather_facts: no
tasks:
- name: Dump Ansible configuration for test
copy:
dest: /tmp/test-config.yml
content: "{{ vars|to_nice_yaml }}"
- name: Invoke test runner
command: env PYTHONPATH=../test TEST_CONFIG=/tmp/test-config.yml python ../test/float_integration_test/test_system.py -v
---
- hosts: host1
tasks:
- name: Dump float configuration
copy:
dest: /tmp/test-config.yml
content: "{{ vars | to_nice_yaml }}"
- name: Pull the test suite container image
command: "float-pull-image {{ test_image }}"
register: test_container_image
failed_when: "test_container_image.rc not in [0, 42]"
- name: Run tests
command: "docker run --rm --network host --mount type=bind,source=/tmp/test-config.yml,destination=/test-config.yml {{ test_image }}"
vars:
test_image: "registry.git.autistici.org/ai3/float:integration-test"