Hosting Your Own Cloud | Docker Compose, Nextcloud, Collabora, Nginx, Prometheus, Grafana
This will be a long post where, ultimately, if followed, you will learn how to set up and self-host Nextcloud for file syncing, calendars, password managers and more, with Collabora for Google Docs-like real-time document editing, behind an Nginx reverse proxy, entirely in Docker containers managed with docker-compose, with metrics from Prometheus visualized with Grafana, plus free monitoring, on a host OS of Ubuntu 16.04, through a CDN such as CloudFlare. Was that enough words for one sentence?
Preamble
Why
- As we learn more about how our personal information, mined from ‘free’ services, is being misused and abused, packaged and sold, I wanted to take a second crack at setting up my own ‘Cloud’. Enter: Nextcloud. This project has matured a lot since I tried its predecessor, OwnCloud, some years back.
What This Implementation is Not Good For
- Scalability.
- High-availability.
- High storage requirements. This post doesn’t cover it, but I’ve separated Nextcloud’s `/var/www/html` (`/var/www/html/data` is where user files are stored by default) from the VM’s disk by using Digital Ocean’s Volumes. It’s surprisingly easy to implement.
- It’s also a little overboard with the metrics if you’re self-hosting it mostly for yourself.
What This Implementation is Good For
- A simple and efficient implementation. Everything is in a container.
- Very little setup on the host OS is required.
- Whether or not you’re following along exactly to implement this same architecture, there’s still a good Compose template here that you can springboard off of and adapt however you like.
- Prometheus/Grafana are optional. If you don’t need them, don’t create the nginx config and comment the relevant lines out of `docker-compose.yml`.
Common Paths
- Some common paths on the Ubuntu VM will be referenced regularly.
- The default `docker-compose` file will be at `/opt/docker/docker-compose.yml`. NOTE: you’ll always need to specify `-f <PATH_TO_FILE>` when invoking `docker-compose`, as it assumes this file is in a different location by default. Ctrl + r is your friend.
- All data that will be mounted as a bind mount in the containers will be stored in `/opt/docker-files/`, including the Prometheus config file as well as Nginx configurations and cache.
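Since every `docker-compose` invocation needs that `-f` flag, a small shell function saves typing. This is an optional convenience, not part of the post’s setup; the function name `dc` is my own invention:

```shell
# optional helper: wrap docker-compose so the -f flag is never forgotten
# (the path below is the compose file location used throughout this post)
COMPOSE_FILE_PATH="/opt/docker/docker-compose.yml"
dc() {
  docker-compose -f "$COMPOSE_FILE_PATH" "$@"
}
# usage: dc up -d    dc logs nginx-rev-proxy    dc down
```

Drop it into your shell profile (e.g. `~/.bashrc`) and `dc` replaces the full command everywhere below.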
Architecture
Assumptions
Server Virtual Machine (VM)
You…
- Know how to use Linux at an amateur level and are running the commands in this post with root privileges.
- Are familiar with Docker.
- Are running Ubuntu 16.04 LTS server.
- Have securely set up SSH.
- Have enabled `swap`. More on that later.
3rd party Service Providers
- You can use Dynamic DNS, or buy a domain. Hover is pretty good (referral link)
- Digital Ocean for hosting (this is an affiliate link). A $5 droplet with 1GB of RAM will do fine, but don’t go smaller: Collabora uses a lot of RAM.
- CloudFlare as your CDN (free), configured to not do strict SSL validation of your webserver’s cert (because we will generate a self-signed one ourselves; certificates are expensive if you buy them).
FQDNs / DNS
Example sub-domain names for ease of reference:
- nextcloud.yourdomain.com
- collabora.yourdomain.com
- grafana.yourdomain.com
How it Works - From a User’s Browser to Individual Containers
CloudFlare…
Receives a request to https://nextcloud.yourdomain.com. The domain name that you own has CloudFlare’s nameservers already set, so that their CDN is used to carry the request to Digital Ocean.
Digital Ocean…
Allows at least ports `:22`, `:80`, and `:443` to your VM with their Firewall.
Nginx (Reverse Proxy)…
Some people would suggest running Traefik instead of Nginx. I haven’t used it, but it may be easier to implement than Nginx - I’m just already familiar with doing this in Nginx. My Nginx container is consuming 4.44MB of RAM at the moment.
- Works as a reverse proxy to your containers, receiving the request and handling SSL offloading, so that your containers and any additional domains you host won’t need their own certificates.
- `Allow`/`Deny` directives are set to only serve traffic from CloudFlare and/or a monitoring service (as of this writing, Digital Ocean’s Firewall rules don’t allow whitelisting IP ranges, so the best we can do is an explicit whitelist in Nginx for what’s behind the reverse proxy).
- Redirects any HTTP:80 traffic to HTTPS:443.
- Performs passive healthchecks for all backend containers.
- Serves cached assets from disk.
Tuning / Tweaks
CloudFlare
- Ensure that Rocket Loader and Auto Minify for JavaScript are disabled in the Speed section of the dashboard, else you may get browser console errors relating to CSP and/or a misbehaving UI.
OS
ntpd
- To become one with the sweet slip-and-slide of time, take a second to configure the NTP daemon on your host OS.
You’ll notice in the compose file that each container has a bind mount of `/etc/localtime:/etc/localtime:ro` so that they’re all synchronized with the host OS’s clock.
swap
It’s not the best thing for SSDs to use swap, but with a `vm.swappiness=1` value, I feel like that’s good enough. It means the kernel will offload memory to the SSD to the smallest possible degree short of running out of RAM (`vm.swappiness=0` would use swap only in that circumstance).
A condensed version of the commands to add 1GB of swap is below, taken from this guide:
# create the swapfile
fallocate -l 1G /swapfile
# permissions
chmod 600 /swapfile
# set this new file as swapfile
mkswap /swapfile
# enable swap with this file
swapon /swapfile
# ensure this file is mounted as swap on boot
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
# set minimum swappiness level until reboot. `0` would use swap only when out of memory
sysctl vm.swappiness=1
# set swappiness permanently
echo 'vm.swappiness=1' | sudo tee -a /etc/sysctl.conf
Docker
Docker Images That We’ll Use
- nginx:alpine
- nextcloud:12.0
- mariadb:latest
- collabora:latest
- node-exporter:latest
- prometheus:latest
- grafana:latest
Installation and Configuration
Install and Start Docker
New User Setup
To be safe and to be closer to ‘best practice’, set up a new user account, `service`:
- Create a new user: `adduser service`
- Add them to the `sudo` and `docker` groups: `usermod -aG sudo,docker service`
Metrics
Add these JSON properties to `/etc/docker/daemon.json`:
{
  "metrics-addr" : "127.0.0.1:9323",
  "experimental" : true
}
- Newer versions of the docker engine expose metrics directly to a scraper like Prometheus. Previously, containers like cAdvisor would collect docker container metrics for Prometheus to scrape. Nowadays, cAdvisor seems unnecessary from what I’ve read.
- Containers like Prometheus will then be able to scrape Docker’s metrics via http://172.17.0.1:9323/metrics. This internal IP is the default gateway of the container’s network, and the daemon is listening there. I would think there’s a better way to reference this `address:port`, but I haven’t found one.
- Restart docker service:
systemctl restart docker
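If `daemon.json` already exists with other settings, merging these keys in by hand is error-prone. Here’s a hedged sketch of a merge script; it writes to `./daemon.json` for illustration (point it at `/etc/docker/daemon.json` as root on the real host), and assumes `python3` is available:

```shell
# merge the metrics settings into an existing daemon.json without clobbering
# other keys; writing to ./daemon.json here for safety -- use
# /etc/docker/daemon.json (as root) on the real host
DAEMON_JSON="./daemon.json"
python3 - "$DAEMON_JSON" <<'EOF'
import json, os, sys

path = sys.argv[1]
cfg = {}
if os.path.exists(path):
    with open(path) as f:
        cfg = json.load(f)  # keep whatever is already configured
cfg["metrics-addr"] = "127.0.0.1:9323"
cfg["experimental"] = True
with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
EOF
cat "$DAEMON_JSON"
```

Restart the daemon afterwards, as above.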
docker-compose.yml
- Official Documentation: https://docs.docker.com/compose/compose-file/
- In my case, I keep it in `/opt/docker/docker-compose.yml` and use the `-f` option to point to that file, like so: `docker-compose -f /opt/docker/docker-compose.yml <up -d|down>`.
- Version `3.4` is not strictly necessary; `3.0` and above, and perhaps even `2.0`, should work fine.
---
version: "3.4"
networks:
  n-nextcloud-db: ~
  n-nginx-collabora: ~
  n-nginx-grafana: ~
  n-nginx-nextcloud: ~
  n-prometheus-grafana: ~
  n-prometheus-node-exporter: ~
volumes:
  v-grafana: ~
  v-nextcloud: ~
  v-nextcloud-db: ~
  v-prometheus: ~
  v-nginx-rev-proxy: ~
services:
  grafana:
    depends_on:
      - prometheus
      - node-exporter
    environment:
      - "GF_SECURITY_ADMIN_PASSWORD=PASSWORD"
      - "GF_SERVER_ROOT_URL=https://grafana.yourdomain.com"
    image: "grafana/grafana:latest"
    networks:
      - n-nginx-grafana
      - n-prometheus-grafana
    restart: unless-stopped
    volumes:
      - "v-grafana:/var/lib/grafana"
      - "/etc/localtime:/etc/localtime:ro"
  nextcloud-apache:
    depends_on:
      - nextcloud-db
    image: "nextcloud:12.0"
    networks:
      - n-nginx-nextcloud
      - n-nextcloud-db
    restart: unless-stopped
    volumes:
      - "v-nextcloud:/var/www/html"
      - "/etc/localtime:/etc/localtime:ro"
  nextcloud-collabora:
    cap_add:
      - MKNOD
    environment:
      - domain=nextcloud\.yourdomain\.com
      - username=USERNAME
      - password=PASSWORD
    expose:
      - "9980"
    image: collabora/code
    networks:
      - n-nginx-nextcloud
      - n-nginx-collabora
    restart: unless-stopped
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
  nextcloud-db:
    environment:
      - MYSQL_ROOT_PASSWORD=PASSWORD
      - MYSQL_PASSWORD=PASSWORD
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    image: mariadb
    networks:
      - n-nextcloud-db
    restart: unless-stopped
    volumes:
      - "v-nextcloud-db:/var/lib/mysql"
      - "/etc/localtime:/etc/localtime:ro"
  nginx-rev-proxy:
    depends_on:
      - nextcloud-apache
      - nextcloud-collabora
      - grafana
    image: "nginx:alpine"
    networks:
      - n-nginx-nextcloud
      - n-nginx-collabora
      - n-nginx-grafana
    ports:
      - "80:80"
      - "443:443"
    restart: unless-stopped
    volumes:
      - "v-nginx-rev-proxy:/etc/nginx"
      - "/opt/docker-files/nginx/sites-enabled:/etc/nginx/sites-enabled:ro"
      - "/opt/docker-files/nginx/nginx.conf:/etc/nginx/nginx.conf:ro"
      - "/opt/docker-files/nginx/ssl:/etc/nginx/ssl:ro"
      - "/opt/docker-files/nginx/static:/etc/nginx/static:ro"
      - "/opt/docker-files/nginx/conf.d:/etc/nginx/conf.d:ro"
      - "/opt/docker-files/nginx/cache:/etc/nginx/cache:rw"
      - "/etc/localtime:/etc/localtime:ro"
  node-exporter:
    image: prom/node-exporter
    networks:
      - n-prometheus-node-exporter
  prometheus:
    depends_on:
      - node-exporter
    image: "prom/prometheus:latest"
    networks:
      - n-prometheus-grafana
      - n-prometheus-node-exporter
    restart: unless-stopped
    volumes:
      - "/opt/docker-files/prometheus.yml:/etc/prometheus/prometheus.yml"
      - "v-prometheus:/prometheus"
      - "/etc/localtime:/etc/localtime:ro"
Nginx
- The Nginx container is listening on `0.0.0.0:80->80/tcp, 0.0.0.0:443->443/tcp`. All other containers are only accessible via user-defined Docker networks within the Docker daemon.
- Routes one IP to multiple backends depending upon the `Host` header in the request. A good reference: https://github.com/nextcloud/docker/blob/master/.examples/nginx.conf
SSL Certificate Generation / Security
- I’m using self-signed certs on Nginx. The Let’s Encrypt project is fantastic, but I just don’t see a need for it when CloudFlare is free. I have configured CloudFlare to not do strict certificate checking since I’m using a self-signed cert. Users see CloudFlare’s certificate.
- Create the directory for the bind mount:
mkdir -p /opt/docker-files/nginx/ssl/
- Generate SSL keys:
openssl req -x509 -nodes -days 740 -newkey rsa:4096 -keyout /opt/docker-files/nginx/ssl/nginx.key -out /opt/docker-files/nginx/ssl/nginx.crt
- Generate more secure DHE parameters:
openssl dhparam -out /opt/docker-files/nginx/ssl/dhparam.pem 4096
Info on what dhparam is used for is here, as well as other information on how to better secure Nginx.
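To sanity-check what you generated, inspect the cert with `openssl x509`. A self-contained sketch below uses a temp dir and a smaller 2048-bit key so it runs quickly; on the server, point the same inspection command at the `/opt/docker-files/nginx/ssl/` paths created above:

```shell
# generate a throwaway self-signed cert in a temp dir and inspect it;
# substitute the real /opt/docker-files/nginx/ssl/ paths on the server
SSL_DIR="$(mktemp -d)"
openssl req -x509 -nodes -days 740 -newkey rsa:2048 \
  -subj "/CN=nextcloud.yourdomain.com" \
  -keyout "$SSL_DIR/nginx.key" -out "$SSL_DIR/nginx.crt" 2>/dev/null
# show who the cert is for and when it expires
openssl x509 -in "$SSL_DIR/nginx.crt" -noout -subject -enddate
```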
Nginx Configuration Files
Nginx Folder Tree
[user@server:/opt/docker-files/nginx] $ tree -ugpR /opt/docker-files/nginx/
/opt/docker-files/nginx/
├── [drwxrwx--- systemd-timesync docker ] cache
│ ├── [drwx------ systemd-timesync root ] cl [error opening dir]
│ └── [drwx------ systemd-timesync root ] nc [error opening dir]
├── [drwxr-xr-x user docker ] conf.d
│ ├── [-rw-r--r-- user docker ] network-whitelist-cf-mon.conf
│ ├── [-rw-r--r-- user docker ] network-whitelist-cloudflare.conf
│ └── [-rw-r--r-- user docker ] network-whitelist-monitoring.conf
├── [-rw-r--r-- user docker ] mime.types
├── [-rw-r--r-- user docker ] nginx.conf
├── [drwxr-xr-x user docker ] sites-enabled
│ ├── [-rw-r--r-- user docker ] collabora.yourdomain.com.conf
│ └── [-rw-r--r-- user docker ] nextcloud.yourdomain.com.conf
├── [drwxr-xr-x user docker ] ssl
│ ├── [-rw-r--r-- user docker ] dhparam.pem
│ ├── [-rw-r--r-- user docker ] nginx.crt
│ └── [-rw-r--r-- user docker ] nginx.key
conf.d
These files are referenced selectively in the site-specific Nginx configs to allow traffic from a combination of three sources:
- Traffic through CloudFlare
- Traffic through CloudFlare and the monitoring service
- Traffic from the monitoring service
/opt/docker-files/nginx/conf.d/network-whitelist-cloudflare.conf
/opt/docker-files/nginx/conf.d/network-whitelist-cf-mon.conf
/opt/docker-files/nginx/conf.d/network-whitelist-monitoring.conf
You can find the relevant CloudFlare and Uptime Robot IPs here:
- https://www.cloudflare.com/ips-v4
- https://www.cloudflare.com/ips-v6
- https://uptimerobot.com/inc/files/ips/IPv4andIPv6.txt
Here’s an example of one of those files that I use (`/opt/docker-files/nginx/conf.d/network-whitelist-cloudflare.conf`) with values taken from the above links:
allow 127.0.0.1;
# Cloudflare
allow 103.21.244.0/22;
allow 103.22.200.0/22;
allow 103.31.4.0/22;
allow 104.16.0.0/12;
allow 108.162.192.0/18;
allow 131.0.72.0/22;
allow 141.101.64.0/18;
allow 162.158.0.0/15;
allow 172.64.0.0/13;
allow 173.245.48.0/20;
allow 188.114.96.0/20;
allow 190.93.240.0/20;
allow 197.234.240.0/22;
allow 198.41.128.0/17;
allow 2400:cb00::/32;
allow 2405:8100::/32;
allow 2405:b500::/32;
allow 2606:4700::/32;
allow 2803:f800::/32;
allow 2c0f:f248::/32;
allow 2a06:98c0::/29;
deny all;
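CloudFlare’s published ranges change occasionally, so it’s worth scripting the file’s generation rather than pasting by hand. A sketch of the transformation is below; it’s fed from a two-line literal sample so it runs offline, but in practice you would pipe `curl -s https://www.cloudflare.com/ips-v4 https://www.cloudflare.com/ips-v6` through the same `sed`:

```shell
# turn a list of CIDR ranges into nginx allow directives; the input here is a
# two-line sample -- replace the printf with curl against the CloudFlare URLs
# above to regenerate the real whitelist
{
  echo "allow 127.0.0.1;"
  echo "# Cloudflare"
  printf '103.21.244.0/22\n2400:cb00::/32\n' | sed 's/^/allow /; s/$/;/'
  echo "deny all;"
} > network-whitelist-cloudflare.conf.sample
cat network-whitelist-cloudflare.conf.sample
```

After regenerating the file, run `nginx -t` inside the container before reloading.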
nginx.conf
This is almost the default Nginx config, with minor changes like `include` and `ssl_protocols`.
user nginx;
worker_processes auto;
pid /run/nginx.pid;
events {
worker_connections 768;
# multi_accept on;
}
http {
##
# Basic Settings
##
sendfile on;
tcp_nopush on;
tcp_nodelay on;
keepalive_timeout 65;
types_hash_max_size 2048;
# server_tokens off;
# server_names_hash_bucket_size 64;
# server_name_in_redirect off;
include /etc/nginx/mime.types;
default_type application/octet-stream;
##
# SSL Settings
##
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3; # Dropping SSLv3, ref: POODLE
ssl_prefer_server_ciphers on;
##
# Logging Settings
##
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
gzip off;
##
# Virtual Host Configs
##
include /etc/nginx/conf.d/*.conf;
include /etc/nginx/sites-enabled/*;
}
==NOTE== that there are no IP addresses listed in the below configs for the various backends, such as `nextcloud-apache`. Docker’s embedded DNS handles the resolution: the service name defined in `docker-compose.yml` is treated like a hostname that containers on connected networks can resolve to an IP:
docker exec -ti docker_nginx-rev-proxy_1 /bin/sh -c "ping -c 1 nextcloud-apache"
PING nextcloud-apache (172.19.0.3): 56 data bytes
64 bytes from 172.19.0.3: seq=0 ttl=64 time=0.275 ms
--- nextcloud-apache ping statistics ---
1 packets transmitted, 1 packets received, 0% packet loss
round-trip min/avg/max = 0.275/0.275/0.275 ms
nextcloud.yourdomain.com
proxy_cache_path /etc/nginx/cache/nc/ use_temp_path=off levels=1:2 keys_zone=nc-cache:8m max_size=512m inactive=24h;
upstream backend_nc {
server nextcloud-apache:80 max_fails=3 fail_timeout=10s;
zone backend_nc 128k;
keepalive 16;
}
server {
listen 80;
server_name nextcloud.yourdomain.com www.nc.yourdomain.com;
server_tokens off;
include conf.d/network-whitelist-cf-mon.conf;
return 301 https://nextcloud.yourdomain.com$request_uri;
}
server {
listen 443 ssl http2;
server_name nextcloud.yourdomain.com;
server_tokens off;
# SSL
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:20m;
# logs
#access_log /var/log/nginx/nc_access.log;
#error_log /var/log/nginx/nc_error.log;
gzip off;
# The apache container adds these I believe
#add_header X-Content-Type-Options nosniff;
#add_header X-XSS-Protection "1; mode=block";
#add_header X-Robots-Tag none;
#add_header X-Download-Options noopen;
#add_header X-Permitted-Cross-Domain-Policies none;
# Uncomment if your server is build with the ngx_pagespeed module
# This module is currently not supported.
#pagespeed off;
# The following 2 rules are only needed for the user_webfinger app.
# Uncomment it if you're planning to use this app.
#rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
#rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json
# last;
location = /.well-known/carddav {
return 302 $scheme://$host/remote.php/dav;
}
location = /.well-known/caldav {
return 302 $scheme://$host/remote.php/dav;
}
# http://alanthing.com/blog/2017/05/15/robots-dot-txt-disallow-all-with-nginx/
location = /robots.txt {
include conf.d/network-whitelist-cloudflare.conf;
add_header Content-Type text/plain;
return 200 "User-agent: *\nDisallow: /\n";
}
location = /status.php {
# Allow CloudFlare and the monitoring service
include conf.d/network-whitelist-cf-mon.conf;
proxy_pass http://backend_nc;
}
# Security
location ~ ^/(?:build|tests|config|lib|3rdparty|templates|data)/ {
deny all;
}
location ~ ^/(?:\.|autotest|occ|issue|indie|db_|console) {
deny all;
}
# cache static assets
location ~* \.(?:css|js|svg|svgz|woff|ico)$ {
expires 1d;
log_not_found off;
access_log off;
client_body_timeout 1m;
client_max_body_size 10m;
include conf.d/network-whitelist-cf-mon.conf;
proxy_buffering off;
proxy_http_version 1.1;
proxy_pass http://backend_nc;
proxy_pass_header Authorization;
proxy_read_timeout 10s;
proxy_redirect off;
proxy_request_buffering off;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
# Allow large/long uploads
client_body_timeout 60m;
client_max_body_size 1g;
include conf.d/network-whitelist-cf-mon.conf;
proxy_buffering off;
proxy_cache nc-cache;
proxy_cache_valid 200 302 30m;
proxy_cache_valid 404 1m;
proxy_http_version 1.1;
proxy_pass http://backend_nc;
proxy_pass_header Authorization;
proxy_read_timeout 60s;
proxy_redirect off;
proxy_request_buffering off;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
#proxy_ssl_session_reuse off;
}
}
collabora.yourdomain.com
upstream backend_cl {
server nextcloud-collabora:9980 max_fails=3 fail_timeout=5s;
keepalive 64;
}
proxy_cache_path /etc/nginx/cache/cl/ use_temp_path=on levels=1:2 keys_zone=cl-cache:8m max_size=1g inactive=48h;
server {
listen 80;
server_name collabora.yourdomain.com;
include conf.d/network-whitelist-cf-mon.conf;
return 302 https://$host$request_uri;
}
server {
listen 443 ssl http2;
server_name collabora.yourdomain.com;
include conf.d/network-whitelist-cloudflare.conf;
# SSL
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:20m;
# logs
#access_log /var/log/nginx/cl_access.log;
#error_log /var/log/nginx/cl_error.log;
gzip off;
# static files
location ^~ /loleaflet {
proxy_pass https://backend_cl;
proxy_cache cl-cache;
proxy_cache_valid 200 302 60m;
proxy_cache_valid 404 1m;
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
proxy_set_header Host $http_host;
}
# WOPI discovery URL
location ^~ /hosting/discovery {
proxy_pass https://backend_cl;
proxy_set_header Host $http_host;
proxy_ssl_session_reuse on;
}
# main websocket
location ~ ^/lool/(.*)/ws$ {
proxy_pass https://backend_cl;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_read_timeout 60s;
proxy_ssl_session_reuse on;
}
# download, presentation and image upload
location ~ ^/lool {
client_max_body_size 100m;
client_body_timeout 30m;
proxy_pass https://backend_cl;
proxy_set_header Host $http_host;
proxy_ssl_session_reuse on;
}
# Admin Console websocket
location ^~ /lool/adminws {
proxy_pass https://backend_cl;
proxy_cache cl-cache;
proxy_cache_valid 200 302 60m;
proxy_cache_valid 404 1m;
proxy_read_timeout 60;
proxy_connect_timeout 60;
proxy_redirect off;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "Upgrade";
proxy_set_header Host $http_host;
proxy_ssl_session_reuse on;
}
}
grafana.yourdomain.com
proxy_cache_path /etc/nginx/cache/gf/ use_temp_path=off levels=1:2 keys_zone=gf-cache:8m max_size=512m inactive=24h;
upstream backend_gf {
server grafana:3000 max_fails=3 fail_timeout=5s;
zone backend_gf 128k;
keepalive 16;
}
server {
listen 80;
server_name grafana.yourdomain.com;
server_tokens off;
include conf.d/network-whitelist-cf-mon.conf;
return 301 https://grafana.yourdomain.com$request_uri;
}
server {
listen 443 ssl http2;
server_name grafana.yourdomain.com;
server_tokens off;
# SSL
ssl_certificate /etc/nginx/ssl/nginx.crt;
ssl_certificate_key /etc/nginx/ssl/nginx.key;
ssl_protocols TLSv1.1 TLSv1.2 TLSv1.3;
ssl_ciphers 'EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH';
ssl_dhparam /etc/nginx/ssl/dhparam.pem;
ssl_prefer_server_ciphers on;
ssl_session_cache shared:SSL:20m;
# logs
#access_log /var/log/nginx/gf_access.log;
#error_log /var/log/nginx/gf_error.log;
gzip off;
# The apache container or CloudFlare seems to add these
#add_header X-Content-Type-Options nosniff;
#add_header X-XSS-Protection "1; mode=block";
#add_header X-Robots-Tag none;
#add_header X-Download-Options noopen;
#add_header X-Permitted-Cross-Domain-Policies none;
location ~* \.(?:css|js|svg|svgz|woff|ico)$ {
expires 1d;
log_not_found off;
access_log off;
client_body_timeout 1m;
client_max_body_size 10m;
include conf.d/network-whitelist-cf-mon.conf;
proxy_buffering off;
proxy_http_version 1.1;
proxy_pass http://backend_gf;
proxy_pass_header Authorization;
proxy_read_timeout 10s;
proxy_redirect off;
proxy_request_buffering off;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
location / {
client_body_timeout 5m;
client_max_body_size 10m;
include conf.d/network-whitelist-cf-mon.conf;
proxy_buffering off;
proxy_cache gf-cache;
proxy_cache_valid 200 302 30m;
proxy_cache_valid 404 1m;
proxy_http_version 1.1;
proxy_pass http://backend_gf;
proxy_pass_header Authorization;
proxy_read_timeout 30s;
proxy_redirect off;
proxy_request_buffering off;
proxy_set_header Connection "";
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Real-IP $remote_addr;
}
}
Permissions
Ensure that `/opt/docker-files/nginx/cache` is recursively owned by user `systemd-timesync`:
sudo chown -R systemd-timesync /opt/docker-files/nginx/cache
Nextcloud / Collabora
Initial Setup
Finalize Installation via Web Browser
The Nextcloud container will be able to resolve the database server’s hostname via the service name defined in docker-compose for the database: `nextcloud-db`.
The password for the DB is the `MYSQL_PASSWORD` value from the compose file, not the root MySQL password.
Collabora
- Through the UI as admin, install the Collabora app.
- Go to Admin > Collabora Online and enter your equivalent of collabora.yourdomain.com
Issues
- The Collabora container uses SSL by default and still does after disabling it according to this article. Not a big deal, but I would prefer to not encrypt HTTP between containers inside a single Docker daemon, for the sake of efficiency.
- I often have to restart the container if the server is rebooted or I restart the daemon; not sure why.
- If you get an error saying ‘your document is corrupted’… unfortunately restarting the daemon worked for me, as it did for these folks.
- You may notice that any failed login attempts in the logging UI refer to a private IP address for one of your containers, and not the remote host:
Login failed: 'admin' (Remote IP: '172.19.0.4')
Modify your /var/www/html/config/config.php, adding the Nginx container (or whatever you may be using as a reverse proxy) as a trusted proxy, then restart the container. Nextcloud will then look at the `x-forwarded-for` HTTP header to get the real IP.
Login failed: 'admin' (Remote IP: '197.234.240.1')
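The exact change depends on your Docker network. A hedged sketch of what the `config.php` additions might look like follows; `trusted_proxies` and `forwarded_for_headers` are Nextcloud’s documented settings for this, but the IP below is an assumption — check your nginx container’s actual address with `docker network inspect` first:

```php
<?php
// excerpt of /var/www/html/config/config.php -- only the added keys are shown,
// and the IP is illustrative: use the address of your own nginx container
$CONFIG = array (
  // ...existing settings...
  'trusted_proxies' => array('172.19.0.2'),
  'forwarded_for_headers' => array('HTTP_X_FORWARDED_FOR'),
);
```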
Monitoring
Prometheus
- Docker’s documentation for Prometheus is here.
- The Prometheus container won’t be exposed to the internet as there’s no authentication method enabled by default (that I’m aware of). We don’t need it anyway, since we’ll only be looking at the metrics it collects through Grafana.
- Configuration file for the Prometheus container, located at `/opt/docker-files/prometheus.yml`:
global:
  scrape_interval: 60s # By default, Prometheus scrapes targets every 15 seconds.
  evaluation_interval: 999s # How often rules are evaluated; set high since no rules are defined.
  # scrape_timeout is set to the global default (10s).
  # Attach these labels to any time series or alerts when communicating with
  # external systems (federation, remote storage, Alertmanager).
  external_labels:
    monitor: 'docker'
# Scrape configurations for the three targets:
scrape_configs:
  # The job name is added as a label `job=<job_name>` to any timeseries scraped from this config.
  - job_name: 'prometheus'
    static_configs:
      - targets: ['prometheus:9090']
  - job_name: 'node'
    static_configs:
      - targets: ['node-exporter:9100']
  - job_name: 'docker'
    static_configs:
      - targets: ['172.17.0.1:9323']
I don’t need very frequent scraping; once per minute is plenty. Also, since I have no rules defined, I set the rule evaluation interval to a high value.
Grafana
- Configure Grafana and Prometheus
- Docker Documentation for working with Prometheus
- A great Grafana Dashboard to monitor the host machine. Here’s an example of mine:
Uptime Robot
- A simple, easy, and free monitoring service (with whom I’m only a customer and have no other relationship). Get emails when something goes down. Monitor, for example, https://nextcloud.yourdomain.com/status.php . It should be self-explanatory.
- For Collabora, using this URL will work as a healthcheck: https://collabora.yourdomain.com/hosting/discovery.
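If you’d rather not rely solely on a third party, the same status endpoint is trivial to probe yourself from a cron job. A minimal sketch, using this post’s example domain (so it will report “down” unless you actually own it); the `curl` flags just capture the HTTP status code:

```shell
# poor-man's uptime probe: fetch the status endpoint and report the HTTP code;
# swap in your real domain -- the example one below will not resolve
url="https://nextcloud.yourdomain.com/status.php"
code=$(curl -s -o /dev/null -m 10 -w '%{http_code}' "$url" || true)
if [ "$code" = "200" ]; then
  echo "up"
else
  echo "down (HTTP $code)"
fi
```

Wire the `else` branch to `mail` or a webhook and you have a basic secondary alert.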
Maintenance
- Nextcloud’s `occ` command will allow you to put the server into maintenance mode: `docker exec --user www-data docker_nextcloud-apache_1 php occ maintenance:mode --<on|off>`
- Further reading on all of those options here
Backups (Ansible)
Nextcloud Backup Docs. I’ve written this standalone Ansible playbook to:
- Take a snapshot of the VM image in Digital Ocean using do_snapshot, a CLI tool written in Ruby.
- Archive a bunch of important directories into a single tarball.
- Update and upgrade all packages.
- Upgrade all containers to the latest version.
- `rsync` the tarball to the control machine.
==NOTE== You’ll need to update some variables to match your setup.
---
####################
# Author: Andrew Aadland
# Date: 20180118
# Purpose: Take snapshot, backup data, upgrade OS, upgrade containers
# tags:
##   snapshot - perform snapshot on digital ocean
##   upgrade - upgrade OS packages and reboot
####################
- hosts: all
  become: true
  become_method: sudo
  vars:
    # misc
    date: "{{ ansible_date_time.date }}"
    # names of containers are the same as in docker-compose.yml, preceded by 'docker_'
    nginx_container: "docker_nginx-rev-proxy_1"
    collabora_container: "docker_nextcloud-collabora_1"
    nextcloud_container: "docker_nextcloud-apache_1"
    nextcloud_db_container: "docker_nextcloud-db_1"
    nextcloud_db_password: "PASSWORD"
    # digital ocean
    do_name: "DROPLET_NAME"
    do_api_key: "API_KEY"
    do_snapshot: "/opt/digitalocean/do_snapshot/bin/do_snapshot --shutdown --no-stop-by-power --clean --keep=3 --digital-ocean-access-token={{ do_api_key }}"
    # remote
    remote_block_storage_path: "/mnt/volume-blabla"
    remote_db_dump_file_name: "nextcloud-db_{{ date }}.backup"
    remote_nextcloud_data_path: "{{ remote_block_storage_path }}/nextcloud-data"
    remote_docker_volumes: "{{ remote_docker_path }}/volumes"
    remote_temp_backup_dir: "{{ remote_block_storage_path }}/backups"
    remote_backup_archive_file_path: "{{ remote_temp_backup_dir }}/{{ date }}-backup.tar.gz"
    remote_docker_files: "/opt/docker-files"
    remote_compose_file_path: "/opt/docker/docker-compose.yml"
    remote_docker_path: "/var/lib/docker"
    # local
    local_backup_data_path: "/opt/backups/{{ do_name }}"
  tasks:
    - name: "FILE - ensure proper ownership of remote backup folder"
      file:
        path: "{{ remote_temp_backup_dir }}"
        owner: user
        group: docker
        state: directory
        mode: 0700
    - name: "PROCESS - DOCKER - enable nextcloud maintenance mode"
      command: "docker exec --user www-data {{ nextcloud_container }} php occ maintenance:mode --on"
      ignore_errors: true
    - name: "MISC - pause"
      pause: seconds=5
    # some containers will create broken symlinks each time they're started. This unlinks any symlinks so be careful!
    # you can first locate these via: find /var/lib/docker/volumes -xtype l -exec ls -l {} \;
    - name: "PROCESS - unlink symlinks in docker volumes that break Ansible's archive module"
      command: "find {{ remote_docker_volumes }} -xtype l -exec unlink {} \\;"
    - name: "PROCESS - DOCKER - backup Nextcloud DB inside container"
      command: "docker exec {{ nextcloud_db_container }} /bin/bash -c 'mysqldump --single-transaction -u nextcloud --password={{ nextcloud_db_password }} nextcloud > /{{ remote_db_dump_file_name }}'"
    - name: "PROCESS - DOCKER - copy from container to remote host"
      command: "docker cp {{ nextcloud_db_container }}:/{{ remote_db_dump_file_name }} {{ remote_temp_backup_dir }}"
    - name: "PROCESS - DOCKER - delete DB backup from container"
      command: "docker exec -t {{ nextcloud_db_container }} /bin/bash -c 'rm -f /{{ remote_db_dump_file_name }}'"
    - name: "PROCESS - DOCKER - stop docker containers"
      command: "docker-compose -f {{ remote_compose_file_path }} stop"
      ignore_errors: true
    - name: "SYSTEM - stop and disable docker daemon"
      systemd:
        name: docker
        state: stopped
        enabled: false
    - name: "LOCAL - run 'do_snapshot' to take an image snapshot"
      tags: snapshot
      command: "{{ do_snapshot }}"
      delegate_to: localhost
      become: false
    - name: "SYSTEM - probe for remote host"
      wait_for_connection:
        sleep: 2
        delay: 1
        timeout: 600
    - name: "FILE - create archive of volumes, docker files, mysqldump, etc"
      archive:
        format: gz
        path:
          - "{{ remote_nextcloud_data_path }}" # Nextcloud user data
          - "{{ remote_temp_backup_dir }}/{{ remote_db_dump_file_name }}" # MySQL dump file
          - "{{ remote_docker_volumes }}" # path to all of the docker volumes
          - "{{ remote_docker_files }}" # path to config files and stuff
          - "{{ remote_compose_file_path }}" # docker-compose.yml
        # archive file that'll be rsync-d
        dest: "{{ remote_backup_archive_file_path }}"
    - name: "SYSTEM - upgrade packages"
      tags: upgrade
      apt:
        upgrade: dist
    - name: "SYSTEM - reboot"
      tags: upgrade
      command: shutdown -r
    - name: "SYSTEM - probe for remote host"
      wait_for_connection:
        sleep: 2
        delay: 10
        timeout: 600
    - name: "SYSTEM - start/enable docker daemon"
      systemd:
        name: docker
        state: started
        enabled: true
    - name: "MISC - pause"
      pause: seconds=5
    - name: "PROCESS - DOCKER - upgrade container versions to latest"
      command: "docker-compose -f {{ remote_compose_file_path }} pull --parallel"
    - name: "PROCESS - DOCKER - Bring up docker-compose"
      command: "docker-compose -f {{ remote_compose_file_path }} up -d --remove-orphans"
    - name: "PROCESS - DOCKER - restart collabora"
      command: "docker restart {{ collabora_container }}"
    - name: "PROCESS - DOCKER - clear cache and reload nginx"
      command: "docker exec {{ nginx_container }} /bin/sh -c 'nginx -t && rm -rf /etc/nginx/cache/* && nginx -s reload'"
    - name: "PROCESS - DOCKER - disable nextcloud maintenance mode"
      command: "docker exec --user www-data {{ nextcloud_container }} php occ maintenance:mode --off"
    - name: "PROCESS - DOCKER - trigger upgrade inside of nextcloud"
      command: "docker exec --user www-data {{ nextcloud_container }} php occ upgrade --no-interaction"
    - name: "FILE - ensure local backup folder exists"
      file:
        path: "{{ local_backup_data_path }}"
        owner: user
        group: docker
        state: directory
        mode: 0700
      delegate_to: localhost
      become: false
    - name: "PROCESS - rsync backup archive to control machine"
      synchronize:
        mode: pull
        archive: true
        partial: true
        delete: false
        checksum: true
        src: "{{ remote_backup_archive_file_path }}"
        dest: "{{ local_backup_data_path }}"
    - name: "SYSTEM - cleanup/erase {{ remote_temp_backup_dir }}"
      file:
        path: "{{ remote_temp_backup_dir }}"
        state: absent
You should also back up (in case you ever need to do a restore) some important values from Nextcloud’s `config.php` in the container by running this command on the host:
docker exec -t docker_nextcloud-apache_1 /bin/sh -c "grep -E '(instanceid|passwordsalt)' /var/www/html/config/config.php"
Upgrading to new Docker Images (Manually)
- Stop the containers
docker-compose -f /opt/docker/docker-compose.yml stop
- Download the newest versions of images available
docker-compose -f /opt/docker/docker-compose.yml pull --parallel
- Clean up untagged (dangling) images showing a tag of `<none>`
docker rmi $(docker images | awk '{print $2, $3}' | grep "<none>" | awk '{print $2}')
- Start containers
docker-compose -f /opt/docker/docker-compose.yml up -d
Conclusion
- It turns out that for personal usage of Nextcloud, having Grafana is… not very important for me. I’ve commented out the 3 monitoring containers in my compose file to free up a little more RAM.