More guide magic

Luke Murphy 2021-03-01 12:31:24 +01:00
parent 60d5346087
commit 60e0457f33
GPG Key ID: 5E2EF5A63E3718CC
2 changed files with 72 additions and 7 deletions


@@ -9,7 +9,7 @@ In order to deploy an application you need two things:
## Create your server
Co-op Cloud itself has near-zero system requirements. You only need to worry about the system resource usage of your apps and the overhead of running containers with the Docker runtime (often negligible; if you want to know more, see [this FAQ entry](/faq/#isnt-running-everything-in-container-really-inefficient)). We will deploy a new Nextcloud instance in this guide, so you will only need 1GB of RAM according to [their documentation](https://docs.nextcloud.com/server/latest/admin_manual/installation/system_requirements.html). You may also be interested in this [FAQ entry](/faq/#arent-containers-horrible-from-a-security-perspective) if you are curious about security in the context of containers.
## Wire up your DNS
@@ -24,6 +24,18 @@ Where `116.203.211.204` can be replaced with the IP address of your server.
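Before moving on, it can help to confirm that your records resolve as expected. This quick check is not part of the original guide; it assumes `dig` is installed locally and that `example.com` (plus the subdomains you plan to use, such as `traefik.example.com`) is the domain you just configured.

```bash
$ dig +short example.com          # should print your server's IP, e.g. 116.203.211.204
$ dig +short traefik.example.com  # subdomains should resolve to the same address
```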
On your server, you'll want to install [Docker](https://www.docker.com/). This can be done by following the [install documentation](https://docs.docker.com/engine/install/).
On a Debian system, that can be done like so.
```bash
# Remove any old or conflicting Docker packages
$ sudo apt-get remove docker docker-engine docker.io containerd runc
# Install the packages needed to add the Docker apt repository
$ sudo apt-get update
$ sudo apt-get install -y apt-transport-https ca-certificates curl gnupg-agent software-properties-common
# Add Docker's official GPG key and package repository
$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo apt-key add -
$ sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/debian $(lsb_release -cs) stable"
# Install the Docker engine
$ sudo apt-get update
$ sudo apt-get install -y docker-ce docker-ce-cli containerd.io
```
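As a quick sanity check (not covered in the guide itself), you can confirm that the Docker daemon is up before continuing.

```bash
$ sudo systemctl status docker   # the service should be active (running)
$ sudo docker run hello-world    # pulls and runs a tiny test container
```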
## Bootstrap abra
Once your DNS and docker daemon are up, you can install [abra](https://git.autonomic.zone/autonomic-cooperative/abra) locally on your developer machine and hook it up to your server.
@@ -51,12 +63,12 @@ $ abra server add example.com
Where `example.com` is replaced with the DNS name you use to identify your server.
`abra server add` also accepts `<user>` and `<port>` arguments for your custom SSH connection details. What is happening here is that `abra` uses the underlying SSH machinery to make a secure connection to the Docker daemon installed on the server. This allows `abra` to run remote deployments from your local development machine.
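For example, if you connect as a non-root user on a non-standard port, the call might look like the following, where `myuser` and `2222` are placeholders for your own SSH details:

```bash
$ abra server add example.com myuser 2222
```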
Once you've added the server, you can initialise the new single-host swarm.
```bash
$ abra server example.com init
```
You will now have a new `~/.abra/` folder on your local file system which stores all the configuration of your Co-op Cloud instance. You can easily share this as a git repository with others.
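For instance, a minimal way to start tracking it with git might look like the sketch below (not part of the guide; review the files first so you don't publish anything you'd rather keep private):

```bash
$ cd ~/.abra
$ git init
$ git add .
$ git commit -m "Initial Co-op Cloud configuration"
```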
@@ -67,17 +79,66 @@ In order to have your Co-op cloud installation automagically provision SSL certi
```bash
$ abra app new --server example.com --domain traefik.example.com traefik
```
You will want to take a look at your generated configuration and tweak the `LETS_ENCRYPT_EMAIL` value:
```bash
$ abra app traefik.example.com config
```
These are the required environment variables that you can configure; they are injected into the app configuration when the app is deployed.
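As a rough sketch, the relevant line in the generated configuration might look something like this (the variable name comes from the guide; the email address is a placeholder you should replace with your own):

```
LETS_ENCRYPT_EMAIL=certs@example.com
```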
Once you're happy with the configuration, deploy Traefik.

```bash
$ abra app traefik.example.com deploy
```
We can then check that everything came up as expected.
```bash
$ abra app traefik.example.com ps # status check
$ abra app traefik.example.com logs # logs watch
```
## Deploy Nextcloud
And now we can deploy apps.
Let's create a new Nextcloud app.
```bash
$ abra app new --server example.com --domain cloud.example.com nextcloud
``` ```
And we need to generate secrets for the app: database connection password, root password and admin password.
```bash
$ abra app cloud.example.com secret generate --all
```
Take care, these secrets are only shown once on the terminal so make sure to take note of them! Abra makes use of the [Docker secrets](https://docs.docker.com/engine/swarm/secrets/) mechanism to ship these secrets securely to the server and store them as encrypted data.
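If you're curious, you can list the resulting secret objects on the server itself; Docker shows only their names and metadata, and the values are not retrievable this way. This is an aside, not a required step.

```bash
# run on the server itself, e.g. over SSH
$ docker secret ls
```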
Then we can deploy Nextcloud.
```bash
$ abra app cloud.example.com deploy
```
And once again, we can watch to see that things come up correctly.
```bash
$ abra app cloud.example.com ps # status check
$ abra app cloud.example.com logs # logs watch
```
!!! note
    Since Nextcloud takes some time to come up live, you can run the `ps` command under `watch` like so.
    ```bash
    $ watch abra app cloud.example.com ps
    ```
    And you can wait until you see that all containers have the "Running" state.
Your Traefik instance should now detect that a new app is coming up and generate SSL certificates for it.
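One rough way to confirm this from your local machine (assuming DNS has propagated and the app has finished starting) is to request the site over HTTPS and check that the certificate is accepted:

```bash
$ curl -I https://cloud.example.com
```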


@@ -149,3 +149,7 @@ The Co-op Cloud is and will always be available under [copyleft licenses](https:
## Isn't running everything in container inefficient?
It is true that if you install 3 applications and each one requires a MySQL database, then you will have 3 installations of MySQL on your system, running in containers. Systems like [YunoHost](/faq/#yunohost) mutualise every part of the system for maximum resource efficiency - if there is a MySQL instance available on the system, then just make a new database there and share the MySQL instance instead of creating more. However, as we see it, this creates a tight coupling between applications on the database level - running a migration on one application where you need to turn the database off takes down the other applications. It's a balance, of course. In this project, we think that running multiple databases and maintaining more strict application isolation is worth the hit in resource efficiency. It is easier to maintain and migrate going forward in relation to other applications, and problems with apps typically have a smaller problem space - you know another app is not interfering with it because there is no interdependency. It can also pay off when dealing with GDPR related issues and the need to have stricter data layer separation.
## Aren't containers horrible from a security perspective?
It depends, just like with any other technology, on your understanding and practice of security. Yes, we've watched [that CCC talk](https://media.ccc.de/v/rc3-49321-devops_disasters_3_1). It's on us all as the libre software community to deliver secure software, and we think one of the promises of Co-op Cloud is more cooperation between developers of the software (who favour containers as a publishing format) and the packagers and hosters (who deliver the software to the end-user). This means that we can patch our app containers directly in conversation with upstream app developers and work towards a culture of security around containers. We definitely recommend using best-in-class security auditing tools like [docker-bench-security](https://github.com/docker/docker-bench-security), intrusion detection systems like [OSSEC](https://www.ossec.net/), security profiles like [AppArmor](https://docs.docker.com/engine/security/apparmor/), and hooking these into your existing monitoring, alerting and update maintenance flows. These are organisational concerns that any software system will require and that Co-op Cloud can't solve for you.
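For example, running docker-bench-security against your server is straightforward; the snippet below follows the project's README at the time of writing, so double-check it against the repository before relying on it:

```bash
# Clone the audit script and run it against the local Docker host
$ git clone https://github.com/docker/docker-bench-security.git
$ cd docker-bench-security
$ sudo sh docker-bench-security.sh
```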