
Installing Seafile with Docker and Apache 2

Seafile is open-source file-sharing software that lets its users set up their own cloud storage at home. Think Dropbox, but self-hosted.

This post is about deploying an instance of Seafile using Docker on a non-standard port (i.e. not 80 or 443). The usual reason to deploy the app on a non-standard port is that another web server is already running on the standard ones (in my case Apache 2). In such scenarios, it is necessary to set up a reverse proxy so the Seafile port can be exposed by Apache on ports 80/443. If the standard ports were free, Seafile could host itself without the need for Apache 2 at all. This post also assumes Ubuntu 16.04 as the operating system, but in theory it should apply to other Linux distributions.

The process can be split into multiple parts:

  1. Set up Docker
  2. Deploy Seafile
  3. Configure Apache 2
  4. Set up systemd
  5. Set up e-mail (optional)
  6. Backing up the image
  7. Troubleshooting

Set up Docker

This guide won’t go into depth on how to set up Docker; the official setup guide and more information about what Docker is can be found here: https://docs.docker.com/get-started/

Deploy Seafile

The key page in the Seafile documentation for deploying Seafile within Docker can be found here: https://manual.seafile.com/deploy/deploy_with_docker.html. The command I used is as follows:

docker run -d --name seafile \
  -e SEAFILE_SERVER_HOSTNAME=seafile.example.com \
  -e SEAFILE_ADMIN_EMAIL=me@example.com \
  -e SEAFILE_ADMIN_PASSWORD=a_very_secret_password \
  -v /opt/seafile-data:/shared \
  -p 8000:80 \
  seafileltd/seafile:latest

I recommend setting up Seafile with a genuinely secret admin password, because it’s not easy to change once the server is set up.

Note that the port argument shows 8000:80. This means Docker maps host port 8000 to container port 80 – requests arriving at the host OS on port 8000 are forwarded to port 80 inside the container. I used 8000 on my setup because port 80 was already taken.

Once the command is executed, the necessary files will be downloaded and the container will start and will be accessible on port 8000.
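To confirm that the container is up and the port mapping works, list the running containers and send a request to the host port – the web server inside the container should answer:

sudo docker ps
curl -I http://localhost:8000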

There is also an option to set up Seafile with an SSL cert from Let’s Encrypt, but I didn’t do this because I already had a cert that I wanted to reuse.

Once the server is up, navigate to https://seafile.example.com/sys/settings/ and change SERVICE_URL and FILE_SERVER_URL to match your domain name.
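With the reverse proxy configured below, the values would look something like the following (substitute your own domain; the /seafhttp suffix matches the proxy path in the Apache config):

SERVICE_URL = https://seafile.example.com
FILE_SERVER_URL = https://seafile.example.com/seafhttp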

Configure Apache 2

Adding a reverse proxy lets Apache 2 route requests to port 8000, which in turn reach port 80 in the container. This is done by adding a new site to Apache 2; the main reference for this section is: https://manual.seafile.com/deploy/deploy_seahub_at_non-root_domain.html. The following configuration assumes that a Let’s Encrypt SSL certificate is used.

cd /etc/apache2/sites-available
sudo nano seafile.conf
<IfModule mod_ssl.c>
<VirtualHost *:443>
        ServerName seafile.example.com
        ServerAdmin me@example.com

        RewriteEngine On

        # Turn ModSecurity off so it does not process requests – ModSecurity
        # enforces a hard limit of 1 GB, which gets in the way of large uploads.
        SecRuleEngine Off

        # Whitelist internal IPs for mod_evasive
        DOSWhitelist 192.168.1.*

        <Location /media>
                Require all granted
        </Location>

        # Stuff for seafile server
        ProxyPass /seafhttp http://127.0.0.1:8082
        ProxyPassReverse /seafhttp http://127.0.0.1:8082
        RewriteRule ^/seafhttp - [QSA,L]

        # Stuff for seahub
        SetEnvIf Authorization "(.*)" HTTP_AUTHORIZATION=$1
        ProxyPreserveHost On
        ProxyPass / http://127.0.0.1:8000/
        ProxyPassReverse / http://127.0.0.1:8000/

        ErrorLog ${APACHE_LOG_DIR}/seafile_error.log
        CustomLog ${APACHE_LOG_DIR}/seafile_access.log combined

        Include /etc/letsencrypt/options-ssl-apache.conf

        SSLCertificateFile /path/to/cert/file.pem
        SSLCertificateKeyFile /path/to/cert/key/file.pem
</VirtualHost>
</IfModule>

I’ve added my local subnet to the whitelist so that mod_evasive doesn’t flag requests from my own network as a threat.

There are two pairs of ProxyPass and ProxyPassReverse lines. The reason for this is that the app is split into Seahub (the web interface) and the Seafile file server, where one listens on port 8082 (by default) while the other listens on 8000. These directives are what allow Apache to accept requests on port 443 and pass them on to the container.
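Note that this configuration relies on a few Apache modules (and the SecRuleEngine and DOSWhitelist lines assume ModSecurity and mod_evasive are installed – they can be dropped if you don’t use those). If the SSL, rewrite, and proxy modules aren’t already enabled:

sudo a2enmod ssl rewrite proxy proxy_http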

Once the file is saved, execute the following:

sudo a2ensite seafile.conf
sudo service apache2 reload

Navigating to the site on the standard HTTPS port that Apache serves should now load Seafile without having to specify port 8000.

Set up systemd

systemd allows the Docker container to be started on boot and controlled like this:

sudo service seafile start/stop/restart

The configuration is as follows:

sudo nano /etc/systemd/system/seafile.service
[Unit]
Description=Seafile Server
After=network.target docker.service mysql.service

[Service]
Type=oneshot
ExecStart=/usr/bin/docker container start seafile
ExecStop=/usr/bin/docker container stop seafile
ExecReload=/usr/bin/docker container restart seafile
RemainAfterExit=yes
User=root
Group=root

[Install]
WantedBy=multi-user.target

Once the configuration is saved, stop the existing running instance of Seafile and then let systemd start it up for you.

sudo systemctl daemon-reload
sudo service seafile start
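Note that the WantedBy=multi-user.target line only takes effect once the unit is enabled, so for Seafile to actually start on boot, also run:

sudo systemctl enable seafile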

At this point, Seafile should start automatically on boot, with Apache 2 doing a reverse proxy to the true Seafile port.

Set up e-mail (optional)

Optionally, Seafile can be configured to send e-mails. The reference is here: https://manual.seafile.com/config/sending_email.html. The config file should be under:

/opt/seafile-data/seafile/conf/seahub_settings.py

I happen to be using Gmail SMTP, so at the end of the file I’ve added:

EMAIL_USE_TLS = True
EMAIL_HOST = 'smtp.gmail.com'
EMAIL_HOST_USER = 'me@example.com'
EMAIL_HOST_PASSWORD = 'password'
EMAIL_PORT = 587
DEFAULT_FROM_EMAIL = EMAIL_HOST_USER
SERVER_EMAIL = EMAIL_HOST_USER
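Since seahub_settings.py is only read when Seahub starts, restart the container for the new settings to take effect:

sudo docker restart seafile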

Backing up the image

After setting everything up, you’ll probably want to back up the Docker image. First, print out the list of running containers:

sudo docker ps

Then take a snapshot of the current running state:

sudo docker commit -p <container-name> <new-image-name>
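For example, to snapshot the container created earlier (the new image name here is just an illustration):

sudo docker commit -p seafile seafile-backup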

Running this command lists all the images and should show the newly saved image:

sudo docker images

I don’t have a private Docker registry, so I saved the image to disk using the command below and backed up the resulting file. Just make sure to move the file to an actual backup location.

sudo docker save -o <path-to-output-file>.tar <image-name>

The image should be backed up now. To restore the image, it’s a matter of running this:

sudo docker load -i <path-to-tar-file>

Listing the images as above should show that the image was successfully loaded. Running it with the same options as the original docker run command starts it back up again.

sudo docker run <options> <image-name>
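Putting that together with the illustrative names from the commit example (if the old container still exists, remove it first, e.g. with sudo docker rm seafile):

sudo docker save -o /tmp/seafile-backup.tar seafile-backup
sudo docker load -i /tmp/seafile-backup.tar
sudo docker run -d --name seafile \
  -v /opt/seafile-data:/shared \
  -p 8000:80 \
  seafile-backup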

Troubleshooting

Seafile logs can be helpful if it doesn’t start up correctly. These can be found at:

/opt/seafile-data/logs/seafile

There may be a need to open a shell inside the Docker container. The command for that is:

sudo docker exec -it seafile /bin/bash

Setting up your own Counter-Strike 1.6 dedicated server via Docker

Once upon a time, you had to run the HLDSUpdateTool, and then SteamCMD. But now, awesome people on the internet have created Docker images for setting up a Counter-Strike 1.6 dedicated server. All you have to do is:

  1. Install Docker on your machine
  2. Get a Docker image for a CS 1.6 server (I created this one, based off of an existing image). Mine has:
    • A lot of maps
    • Metamod
    • AMXModX (with high ping kicker, podbot control menu, round money, rock the vote, and admin all in one)
    • Podbot
  3. Customise the server (e.g. editing the server.cfg, amxx.cfg, and other config files, etc.)
  4. Start it up! (the README.md in the linked Git repo has more info on this)

As far as ports go, I only needed to forward 27015 on my machine, but your mileage may vary – others have reported that additional ports must be forwarded on some machines.
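For reference, here’s a sketch of what publishing the port can look like when starting the container – the image name is a placeholder, and the exact run command is in the repo’s README (HLDS traffic is primarily UDP on 27015):

sudo docker run -d --name cs16 \
  -p 27015:27015/udp \
  -p 27015:27015/tcp \
  <your-cs16-image>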

Then optionally, if you’re running Ubuntu and you want this server to start up as a service via systemd, you’ll need this:

  1. An executable script with path /usr/local/bin/hlds with contents (make sure the DIR variable matches your installation directory):
    #!/bin/sh
    
    # Do not change this path
    PATH=/bin:/usr/bin:/sbin:/usr/sbin:/usr/local/bin
    
    # The path to the game you want to host. example = /home/newuser/dod
    DIR=/opt/cs-16-server/bin
    DAEMON=./server
    
    start()
    {
        echo -n "Starting HLDS"
        if [ -d "$DIR" ]; then
            cd "$DIR"
            $DAEMON start
        else
            echo "No such directory: $DIR!"
        fi
    }
    
    stop()
    {
        echo -n "Stopping HLDS"
        if [ -d "$DIR" ]; then
            cd "$DIR"
            $DAEMON stop
        else
            echo "No such directory: $DIR!"
        fi
    }
    
    reload()
    {
        echo -n "Restarting HLDS"
        stop
        sleep 1
        start
    }
    
    case "$1" in
        start|stop|reload)
            "$1"
            ;;
        *)
            echo "Usage: $0 {start|stop|reload}"
            exit 1
            ;;
    esac
    
    exit 0
    
  2. A systemd unit file at /etc/systemd/system/hlds.service with contents:
    [Unit]
    Description=HLDS
    
    [Service]
    Type=oneshot
    ExecStart=/usr/local/bin/hlds start
    ExecStop=/usr/local/bin/hlds stop
    ExecReload=/usr/local/bin/hlds reload
    RemainAfterExit=yes
    
    [Install]
    WantedBy=multi-user.target
    
  3. Then execute:
    sudo systemctl daemon-reload
    sudo systemctl enable hlds
    sudo service hlds start
    

Feel free to check out my server that’s running right now with the same Docker image! Here’s the command to connect via the console.

connect henrypoon.com:27015

Running Selenium Webdriver on Bash for Windows

Bash for Windows has been working pretty great for me until I needed to run Selenium Webdriver on it. I quickly learned that it wouldn’t just work right out of the box, and the setup for it is quite convoluted.

You will need:

  • Bash for Windows (Windows Subsystem for Linux)
  • Firefox (installed inside Bash, below)
  • Xming, an X server for Windows
  • geckodriver, the WebDriver implementation for Firefox
  • The Selenium bindings for Python

Bash setup

Install Firefox:

sudo apt-get install firefox

Export the DISPLAY variable in ~/.bashrc. Just add this line to ~/.bashrc (then restart the shell or source the file):

export DISPLAY=:0

Xming setup

Just download Xming, run the installer, and start it.
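To verify that graphical apps inside Bash can reach Xming, try a simple X client – a window should appear (x11-apps is just one convenient source of a test app):

sudo apt-get install x11-apps
xeyes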

Download geckodriver

Download geckodriver from its GitHub releases page and put it in the /usr/bin/ folder in Bash for Windows.
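Assuming the release tarball was downloaded into the current directory, extracting and installing it looks something like this (the version number is illustrative):

tar -xzf geckodriver-v0.19.1-linux64.tar.gz
sudo mv geckodriver /usr/bin/
sudo chmod +x /usr/bin/geckodriver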

Running Selenium

Here is a piece of sample Python code I used to set up the web browser. It will launch the browser, open Google, sit there for 10 seconds, and then quit. Make sure to have Xming running, otherwise the browser isn’t going to start.

import time

from selenium import webdriver
from selenium.webdriver import DesiredCapabilities

def execute_with_retry(method, max_attempts):
    # Retry the call up to max_attempts times, pausing between attempts,
    # and re-raise the last exception if every attempt failed. (In Python 3
    # the "except ... as e" variable is unbound after the block, so the
    # exception has to be stashed in a separate variable.)
    last_exception = None
    for _ in range(max_attempts):
        try:
            return method()
        except Exception as e:
            last_exception = e
            print(e)
            time.sleep(1)
    if last_exception is not None:
        raise last_exception

capabilities = DesiredCapabilities.FIREFOX.copy()
capabilities["marionette"] = True
firefox_bin = "/usr/bin/firefox"
browser = execute_with_retry(lambda: webdriver.Firefox(
    firefox_binary=firefox_bin, capabilities=capabilities), 10)

browser.get("https://www.google.com")

time.sleep(10)

browser.quit()
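Assuming the snippet is saved as selenium_test.py (the filename is arbitrary) and the Selenium Python bindings are installed, run it with:

pip install selenium
python selenium_test.py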

Jenkins pipeline for Spring with beta and prod stages and deployment rollback

In the past couple of days, I’ve been experimenting with the Jenkins pipeline plugin to create a code deployment pipeline with independent beta and prod stages for a Spring Boot app. I even managed to add deployment rollback in case a prod deployment fails! It took me a bit of time to Google my way through how to do everything, so I figured I’d lay it all out here in case it helps other people do the same thing.

The nice thing about all of this is that I can push a code change to git, and Jenkins can build it, run through all the tests and then deploy to production automatically.

Layout of a pipeline script

Pipeline scripts are written in Groovy, a dynamic language that runs on the JVM and closely resembles Java. In general, a pipeline script is laid out like this:

node {
    def SOME_CONSTANT = "whatever"
    ...

    stage('some stage name like Build') {
        // Stuff to do as a part of this stage
    }

    stage('another stage') {
        // More stuff
    }

    ...
}

Each stage block represents one step of the pipeline (e.g. build, beta deployment, prod deployment, etc.).

Defining some constants

First, a list of constants can be defined for use throughout the pipeline, so that if the pipeline script gets reused somewhere, only these constants have to be changed. I’m not aware of anything that lets me reuse a pipeline in Jenkins without copying and pasting the code somewhere else, so for now I’ll have to live with copying and pasting.

My project uses Maven, so I got Jenkins to download its own copy of Maven to execute builds, and I’ve defined its location as a constant. The name I’ve given it in the Jenkins Global Tool Configuration is “Maven 3.3.9” exactly. My project also uses Tomcat, so there are some Tomcat-specific things in here that may or may not be relevant to your use case.

    def MAVEN_HOME = tool 'Maven 3.3.9'
    def WORKSPACE = pwd()

    def PROJECT_NAME = "name-of-project"
    def WAR_PATH_RELATIVE = "App/target/${PROJECT_NAME}.war"
    def WAR_PATH_FULL = "${WORKSPACE}/${WAR_PATH_RELATIVE}"
    def TOMCAT_CTX_PATH_BETA = "Tomcat-context-path-for-the-beta-stage"
    def TOMCAT_CTX_PATH_PROD = "Tomcat-context-path-for-the-prod-stage"
    def GIT_REPO_URL = "URL-to-git-repo-ending-in-.git"

Preparation Stage

First, the code has to be retrieved from the repository before it gets built. Jenkins allows storing username/password pairs so that they can be referenced without having to write out the password in plaintext. Jenkins uses a “credential ID” for this.

    stage('Preparation') {
        git branch: "master",
        credentialsId: "credentials-ID-stored-in-Jenkins-that-can-access-the-git-repo",
        url: "${GIT_REPO_URL}"
    }

Build Stage

Next, the code must be built and unit tested. “mvn clean install” will do just that (depending on what you want, you can always use a different Maven goal). The junit step picks up the test report XML generated during the build and posts a graph of how many tests were run for each build.

    stage('Build') {
        sh "'${MAVEN_HOME}/bin/mvn' clean install"
        junit '**/target/surefire-reports/TEST-*.xml'
    }

Beta Stage

Once the build succeeds, you’ll want to deploy it to a beta environment, so integration tests can happen. The following happens at this stage:

  1. Get the right credentials to get permissions to Tomcat
  2. Call the deploy method to deploy the war file in Tomcat (more on that later)
  3. If the deployment fails for whatever reason, print out the deployment log for debugging and fail the build
  4. If the deployment succeeds, run the integration tests (the command to do this may differ based on use case)
    stage('Beta') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
            credentialsId: 'credential-id-for-tomcat',
            usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
                // Password is available as an env variable, but will be masked 
                // if you try to print it out any which way
                def output = deploy(WAR_PATH_FULL, TOMCAT_CTX_PATH_BETA,
                        env.USERNAME, env.PASSWORD)
                if (output.contains("FAIL - Deployed application at context path " + 
                        "/${TOMCAT_CTX_PATH_BETA} but context failed to start")) {
                    echo "----- Beta deployment log -----"
                    echo output
                    echo "-------------------------------"
                    currentBuild.result = 'FAILURE'
                    error "Beta stage deployment failure"
                }
            }

        echo "Running integration tests"
        sh "'${MAVEN_HOME}/bin/mvn' -Dtest=*IT test"
        junit '**/target/surefire-reports/TEST-*.xml'
    }

The deploy method is what takes care of the actual deployment to Tomcat – it’s how the Jenkins build tells Tomcat about the newly built war file. The method itself follows the list below (be sure to change the server IP and port). Alternatively, I could have built my project as an embedded jar file, but that has a different challenge: figuring out how to get Jenkins to tell the OS to execute the newly built jar file as a particular user. The deploy method does the following:

  1. Based on the Tomcat context path, decide whether a beta or production deployment is happening
  2. Make a copy of the build war file and add .prod or .beta to the end of it so as to keep the original
  3. Set the Spring profile to use on the newly copied war file (more on that later)
  4. Call the curl command to do the actual deployment to Tomcat (more on that later)

def deploy(warPathFull, tomcatCtxPath, username, password) {
    def envSuffix = ""
    def isBeta = tomcatCtxPath.contains("beta")
    if (isBeta) {
        envSuffix = "beta"
    } else {
        envSuffix = "prod"
    }
    sh script: "cp ${warPathFull} ${warPathFull}.${envSuffix}" 
    setSpringProfile(warPathFull, isBeta)
    def output = sh script: "curl --upload-file '${warPathFull}.${envSuffix}' " +
            "'http://${username}:${password}@localhost:8081/manager/text/deploy" + 
            "?path=/${tomcatCtxPath}&update=true'", returnStdout: true
    return output
}

In the case of my project, I’m building a single war file that does not have a Spring profile (beta/prod) defined. This means I have to set the profile manually before deploying the app to Tomcat, since some things differ between beta and prod, like database URLs. To do this, I wrote a method that opens the war file like a zip (jar/war files are zip files) and adds a line to my application.properties to define a Spring profile.

Admittedly, doing this zip file manipulation seems kind of hacky. Alternatively, I could have defined my build such that I had a separate beta build and a prod build to avoid modifying the zip file, but the drawback is that I’d then have to build my code twice.

// These imports go at the top of the pipeline script
import java.util.zip.ZipEntry
import java.util.zip.ZipFile
import java.util.zip.ZipOutputStream

def setSpringProfile(warPathFull, isBeta) {
    def zipFileFullPath = warPathFull + "." + (isBeta ? "beta" : "prod")
    def zipIn = new File(zipFileFullPath)
    def zip = new ZipFile(zipIn)
    def zipTemp = File.createTempFile("temp_${System.nanoTime()}", 'zip')
    zipTemp.deleteOnExit()
    def zos = new ZipOutputStream(new FileOutputStream(zipTemp))
    def toModify = "WEB-INF/classes/application.properties"

    for (e in zip.entries()) {
        if (!e.name.equalsIgnoreCase(toModify)) {
            // Copy every other entry through unchanged (a fresh ZipEntry is
            // used so stale compressed-size metadata doesn't cause errors)
            zos.putNextEntry(new ZipEntry(e.name))
            zos << zip.getInputStream(e).bytes
        } else {
            // Append the active profile to application.properties
            zos.putNextEntry(new ZipEntry(toModify))
            zos << zip.getInputStream(e).bytes
            zos << ("\nspring.profiles.active=" + (isBeta ? "beta" : "prod")).bytes
        }
        zos.closeEntry()
    }

    zos.close()
    zip.close()
    zipIn.delete()
    zipTemp.renameTo(zipIn)
}

A curl command to Tomcat’s manager is what actually does the deployment. To deploy a file to Tomcat, run the command below. This will upload the war file to Tomcat and run it immediately, making it accessible at the given context path. Note that the URL should be quoted so the shell doesn’t interpret the &.

curl --upload-file 'path-to-war-file' 'http://username:password@server-address:port/manager/text/deploy?path=/tomcat-context-path&update=true'
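The manager’s text API replies with a one-line status, which is exactly what the failure checks in the stages above look for. A successful deployment returns something like:

OK - Deployed application at context path /tomcat-context-path

while a deployment whose context fails to start returns the FAIL message that the pipeline matches on:

FAIL - Deployed application at context path /tomcat-context-path but context failed to start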

Prod Stage

The same kind of stuff happens in the prod stage as in the beta stage with a few exceptions. The following happens at this stage:

  1. Get the right credentials to get permissions to Tomcat
  2. Call the deploy method to deploy the war file in Tomcat (except this time it is prod)
  3. If the deployment fails for whatever reason, print out the deployment log for debugging and roll back the deployment
  4. If the deployment succeeds, save the build files, and then the pipeline is finished. Alternatively, smoke tests can be run at this point, but I did not implement this in my project

Rollback is important because if a deployment fails, you don’t want to be stuck with a broken environment. Since build artifacts are saved on successful deployments, those same artifacts can be brought back and redeployed if a future deployment fails, keeping prod on the last good build while the code is fixed before another deployment happens.

    stage('Prod') {
        withCredentials([[$class: 'UsernamePasswordMultiBinding',
            credentialsId: 'credential-id-for-tomcat',
            usernameVariable: 'USERNAME', passwordVariable: 'PASSWORD']]) {
                // Password is available as an env variable, but will be masked 
                // if you try to print it out any which way
                def output = deploy(WAR_PATH_FULL, TOMCAT_CTX_PATH_PROD, env.USERNAME, env.PASSWORD)
                if (output.contains("FAIL - Deployed application at context path " + 
                        "/${TOMCAT_CTX_PATH_PROD} but context failed to start")) {
                    echo "Prod stage deployment failure, rolling back deployment"
                    echo "----- Prod deployment log -----"
                    echo output
                    echo "-------------------------------"
                    step([$class: 'CopyArtifact',
                            filter: "${WAR_PATH_RELATIVE}",
                            fingerprintArtifacts: true,
                            projectName: "${PROJECT_NAME}",
                            target: "${WAR_PATH_RELATIVE}.rollback"])
                    deploy(WAR_PATH_FULL + ".rollback/" + WAR_PATH_RELATIVE,
                            TOMCAT_CTX_PATH_PROD, env.USERNAME, env.PASSWORD)
                    currentBuild.result = 'FAILURE'
                    error "Prod deployment rolled back"
                } else {
                    archiveArtifacts artifacts: "${WAR_PATH_RELATIVE}*", fingerprint: true
                }
            }
    }

At the end you get to have something like this:

[Screenshot: the resulting Jenkins pipeline stage view]

That pretty much sums up the whole Jenkins pipeline that I’ve been using lately for Spring projects!

Fixing T-Mobile international roaming

T-Mobile has awesome phone plans for people living in America that include free international roaming in over 140 countries. For most people, all they’d have to do is flip the switch on their phone that enables international roaming. For a minority of people, there are a few more settings to mess with. In the past year, I’ve had two issues that left me without a data connection outside the USA, so I’m documenting the things that worked for me below – maybe it’ll help some people out.

Check that international data roaming is enabled

Try a different APN

I’m using “T-Mobile US LTE 260”. Not sure if changing this requires a phone restart. I restarted my phone anyway.

Set the APN protocol and APN roaming protocol to IPv4/IPv6

Just tap on the APN you want to change. Hitting the 3 dots will show the save button. Not sure if changing this requires a phone restart. I restarted my phone anyway.
