Saturday, 10 August 2024

Using Apache as a Reverse Proxy with a Tailscale backend server

Apache as a Reverse Proxy

I started to use Tailscale to access my computers externally after moving to an ISP that uses CGNAT. This enabled backups to and from my NAS from external servers I rented. I decided against opening up any of the machines within the Tailnet to full worldwide access, as that seemed too "open".

This works well, but I still had one website, hosted on a RaspberryPi at home, which I needed web access to. I originally just added my phone to the Tailnet, but there were problems re-issuing LetsEncrypt certificates, and I wanted to keep my own domain name for the website rather than use the .ts.net one.

I had an external server on my Tailnet already running Apache, and decided the simple solution was to use that existing service to sit in front of the RaspberryPi and forward HTTP requests to it. After all, only I was using this website; it wasn't used by anyone else. This proved a little trickier than hoped, but eventually I got there, and these are the steps I went through...

Add VirtualHost to Apache

On the external server I added a website definition to Apache (i.e. in /etc/apache2/sites-available) which just consisted of the VirtualHost definition for the website domain:

<VirtualHost *:80>
    ServerName mydomain.com

    ProxyPass         "/" "http://tailnetname.xxxx-yyyy.ts.net/"
    ProxyPassReverse  "/" "https://mydomain.com/"
</VirtualHost>

where "mydomain.com" is the domain name you are exposing to the outside world, and "tailnetname.xxxx-yyyy" is the Tailnet name of the server actually hosting the website (in my case the RaspberryPi running locally).
Enable that site (a2ensite) and restart Apache.
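On Debian/Ubuntu that's along these lines (the site file name is a placeholder; note that ProxyPass also needs the proxy modules enabled):

    sudo a2enmod proxy proxy_http      # modules required by ProxyPass
    sudo a2ensite mydomain.com.conf    # placeholder file name
    sudo systemctl restart apache2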

Note that I'm using Apache to handle the SSL connection, and talking to the backend server over HTTP. This is just as secure, as all tailnet traffic is encrypted, and the tailnet name (.ts.net) is not exposed to the web (it has a CGNAT IP address, as do all machines in a tailnet). You could use HTTPS to get to the backend, but that seems a pointless overhead to me.

Issues I had later on were due to originally setting ProxyPassReverse to the tailnet name rather than mydomain.com - you want Apache to rewrite the response headers to carry the external name of the website, not the tailnet name.

Adding LetsEncrypt Certificate

With the definition in Apache, use certbot to create your SSL certificate. certbot was already installed, so it was just a case of running "sudo certbot certonly --apache" and letting certbot offer me the website to add the certificate to. I don't recall whether this worked without any changes to the backend server, but I think it did. I manually updated the site definition in Apache, but you can let certbot do that for you by dropping the 'certonly' parameter. Either way, you end up with the site definition amended with the following lines:

<VirtualHost *:80>
    ServerName mydomain.com

    RewriteEngine on
    RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,NE,R=permanent]
</VirtualHost>

<VirtualHost *:443>
    ServerName mydomain.com
    ProxyPass         "/" "http://tailnetname.xxxx-yyyy.ts.net/"
    ProxyPassReverse  "/" "https://mydomain.com/"
    SSLCertificateFile /etc/letsencrypt/live/mydomain.com/fullchain.pem
    SSLCertificateKeyFile /etc/letsencrypt/live/mydomain.com/privkey.pem
    Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>

- which are the standard lines to redirect any HTTP traffic to HTTPS, and point the (now port 443) site at the SSL certificates just created; the proxy directives simply move into the SSL VirtualHost.

Changing the backend server

At this point, requests to https://mydomain.com should go via the external server, over your tailnet, to the correct backend server (provided you have set the tailnet access rules appropriately - I have fairly open rules, so nothing needed changing).

Although the connection was working, I kept getting 404 errors in the browser. This was caused by a number of issues, which needed fixing.

Firstly, the website definition on the backend was still set up to handle SSL traffic from the outside world (as above), so the proxied HTTP requests were being redirected to HTTPS. The mod_rewrite rules and the port 443 VirtualHost needed removing.

Secondly, the VirtualHost definition on the backend still had mydomain.com as the ServerName. I think this confused Apache (on the RaspberryPi), as the response pointed at mydomain.com, which it believed it could serve locally. In short, the ServerName needed to be just "tailnetname.xxxx-yyyy.ts.net", with any references to mydomain.com removed.
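For illustration, the backend site definition ended up along these lines (a minimal sketch - the DocumentRoot is a placeholder, and your site will have its own directives):

<VirtualHost *:80>
    # serve under the tailnet name only - no mydomain.com references,
    # and no HTTP-to-HTTPS redirect (the proxy terminates SSL)
    ServerName tailnetname.xxxx-yyyy.ts.net
    DocumentRoot /var/www/mysite
</VirtualHost>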

The final issue was that I had the flask-talisman module installed in the web application. This does its own redirecting of HTTP requests to HTTPS, and was the last cause of the 404 responses (as there was no local handler for port 443 on the website). Maybe I could have avoided two of these issues by sticking with SSL on the backend, but there we are. I initially fixed this by removing talisman altogether, but eventually just changed its options, as I still wanted it to add the CSP headers, and so on.
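For the record, the eventual talisman setup was something like this minimal sketch (force_https and content_security_policy are standard flask-talisman options; the policy shown is a placeholder, not my real one):

from flask import Flask
from flask_talisman import Talisman

app = Flask(__name__)

# keep Talisman for CSP and other security headers, but stop it
# redirecting HTTP to HTTPS - the reverse proxy terminates SSL
Talisman(
    app,
    force_https=False,
    content_security_policy={'default-src': "'self'"},  # placeholder policy
)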

Additional Changes

As the backend now always receives requests via the proxy, logging needs to use the forwarding HTTP headers, to record the actual external client details rather than those of the proxy.
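If, like mine, the backend site sits behind Apache, one approach is to log the X-Forwarded-For header that mod_proxy adds to each request (standard mod_log_config syntax):

LogFormat "%{X-Forwarded-For}i %l %u %t \"%r\" %>s %b" proxied
CustomLog ${APACHE_LOG_DIR}/access.log proxied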






Thursday, 6 April 2017

onclick link not working? Check your input field names!

Had two almost identical web-pages, both using 'buttons' with onclick events linking to another page:

<button onclick='location.href="...somewhere..."'>

After exhaustive fiddling (adding/removing JavaScript, checking the form method, playing around with submit types), it turned out that one of the input fields on the form was:

name='location'

so, inside the inline handler, 'location' resolved to that input field (a form's named fields are in scope within its controls' inline event handlers), and the onclick was just updating the field rather than navigating to the new page.
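A stripped-down reconstruction of the clash (the field name is the important part; the URLs are made up):

<form>
  <!-- inside inline handlers, this field shadows window.location -->
  <input name='location' value='somewhere'>

  <!-- broken: just sets .href on the input element above -->
  <button type='button' onclick='location.href="/next.html"'>Next</button>

  <!-- fixed: be explicit (or rename the input field) -->
  <button type='button' onclick='window.location.href="/next.html"'>Next</button>
</form>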

See my submit !== submit page(!)

Monday, 4 April 2016

Upgrading/Replacing HDD with SSD

With the falling price of SSDs, it's a good time to dump the Hard Drive, and go for one.

Typically SSDs cost more per GB, and have less capacity - 1TB drives are appearing, but they are expensive. I decided to swap out my 500GB HDD for a 480GB SSD. 

It's quite straight-forward, and I adopted an approach of not updating any partition sizes on the old HDD, so I always had a working fall-back.

I split my drive into three areas, one for /boot, another for the root directory (/) and the final one for /home, where all user data resides. You might just have a single partition, making the whole process simpler.

Partition New Drive
First, attach the new drive to your existing computer, and use something like GParted to create a Partition Table and add some partitions to the drive. It makes sense to keep this roughly in line with your old HDD, so in my case I created an msdos Partition Table and built 4 partitions:
  • sda1 (/boot) for the boot partition of 500MB (ample for Ubuntu)
  • sda5 as a swap partition (same size as your RAM)
  • sda6 (/) for the root directory of 26GB (again, ample for Ubuntu)
  • sda7 (/home) ... whatever is left
Mark sda1 as 'boot' and this is all there is to it.
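If you prefer the command line to GParted, the equivalent layout with parted is roughly this (sizes are illustrative, and note the new drive will appear as a different device - e.g. /dev/sdb - while both drives are attached; parted doesn't create the filesystems, so mkfs.ext4/mkswap afterwards, or let the GParted copy handle it):

sudo parted /dev/sdb -- mklabel msdos
sudo parted /dev/sdb -- mkpart primary ext4 1MiB 501MiB          # becomes /boot
sudo parted /dev/sdb -- mkpart extended 501MiB 100%
sudo parted /dev/sdb -- mkpart logical linux-swap 502MiB 8.5GiB  # swap
sudo parted /dev/sdb -- mkpart logical ext4 8.5GiB 34.5GiB       # becomes /
sudo parted /dev/sdb -- mkpart logical ext4 34.5GiB 100%         # becomes /home
sudo parted /dev/sdb -- set 1 boot on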

The crucial point is to make sure the boot and root partitions (sda1 and sda6 in my case) are at least as large as their existing counterparts on your HDD. You're going to clone the old partitions into them, and you can't clone into a smaller partition (obviously).

Copy Basic Partitions
Re-boot from a LiveCD (most have GParted on them) and, with both old and new drives unmounted, copy the partitions across using Copy/Paste in GParted. This will build the contents of sda1 and sda6, in my case.

You now have a working drive.

Copy User Data
My old /home folder was about 435GB; the new one 410GB, so I just used rsync to copy across the used data areas:

      rsync -av old/mount/point/ new/mount/point/       # trailing slash on the source: copy its contents, not the directory itself

This obviously takes some time. Go to bed and let it whirl away.

You have now copied all data to the new drive.

Update fstab
The drive mount information held in /etc/fstab uses UUIDs (typically) or labels to identify the drives to mount at boot time, and where to mount them. It's likely that (at least) the (smaller) /home partition will have a different UUID, so use 'sudo blkid' to obtain a list of all the UUIDs, and make sure /etc/fstab on the new SSD matches (you'll need to mount the new drive, navigate to its /etc/fstab - for me on sda6 - and update the file there).
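The end result is something like this (the UUIDs here are made up - use the values blkid reports for the new partitions):

# /etc/fstab on the new SSD
UUID=aaaaaaaa-...  /      ext4  errors=remount-ro  0  1
UUID=bbbbbbbb-...  /boot  ext4  defaults           0  2
UUID=cccccccc-...  /home  ext4  defaults           0  2
UUID=dddddddd-...  none   swap  sw                 0  0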

Switch to new Drive and boot
Remove the old HDD and replace with the SSD. Boot up.

(Repair Grub)
This might well fail. I was stuck at the grub prompt, as copying the boot partition won't update Grub for your new SSD, however much you might hope it does.

From the grub prompt, check you have the drive partitions accessible by entering 'ls'. Your drive, and its partitions, should be shown (as 'hd0,1', etc.)

You need to first set the root, based on where your /boot partition is. For me, it's sda1:

grub> set root=(hd0,1)

Then point to an Ubuntu image ...

grub> linux /vmlinuz[-xxxxxx] root=/dev/sda6

(after "/vmlinuz" press TAB to see the image options you have, and choose the newest one which will fill in the [-xxxxxxx] part)

For 'root=' point to the root partition (not the boot partition) - sda6 for me.

Do similar TAB completion on the initrd setting:

grub> initrd /initrd[-xxxxxxx]

picking the initrd.img to match the version from above.

Then boot:

grub> boot

All being well, you should now boot to Ubuntu.

Make fix permanent
To re-build the Grub menu options, from a command prompt in a working, booted system:

sudo update-grub

(this re-generates the Grub menu options, as normal, into /boot/grub/grub.cfg).

sudo grub-install /dev/sda

(this installs the Grub bootloader itself onto the new drive's MBR)


Hopefully, you're now done.



Wednesday, 4 February 2015

Ubuntu 14.10 Upgrade

I'm now trying to keep fairly up-to-date on Ubuntu versions, and the upgrades are becoming easier to do - very little needs 'fixing' post-upgrade, even if you're not using a vanilla install (I don't use Unity but Cairo-dock, and prefer Nemo over Nautilus). I'm also having issues both recovering from Suspend (using Nouveau) and with stability on some pages in Chromium.

Here are the set of changes required post upgrade to 14.10 from 14.04:

Zeitgeist
Zeitgeist is an irritating Unity hangover which builds a large sqlite database of files/searches/etc. on your local computer. Even with all options disabled it runs in the background, so I always kill it off by deleting its autostart file from /etc/xdg.
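Something like this (the exact .desktop file name varies by release, so list the directory first):

ls /etc/xdg/autostart/ | grep -i zeitgeist
sudo rm /etc/xdg/autostart/zeitgeist-datahub.desktop   # example name - remove whatever the ls shows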

Nemo
The Nemo package in the official repos no longer handles the desktop (the effect of this is that the desktop does not redraw the screen correctly ... the 'old' image is left behind when you move windows around). However, there is a PPA maintained by the webupd8team which fixes the issue. Add, or re-enable, their PPA, then uninstall Nemo (it will have been replaced with the official one, and won't play happily with the PPA version), then reinstall it from the PPA.
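The sequence is roughly this (the PPA name is from memory, so check webupd8's own instructions):

sudo add-apt-repository ppa:webupd8team/nemo   # assumed PPA name
sudo apt-get update
sudo apt-get remove nemo nemo-data
sudo apt-get install nemo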

... and that's about it!  








Monday, 26 January 2015

MySQL Loading Files

Loading records one at a time (via SQL INSERT) is too slow when dealing with a large number of records, even if you disable indexes before the load starts. Loading via a 'LOAD DATA' command is much faster, but there are a number of hurdles you might face.

Firstly, there are two variations of this command:
  • LOAD DATA INFILE, and 
  • LOAD DATA LOCAL INFILE
- the difference between the two is where the file to be imported resides. Here, 'LOCAL' means local to the machine issuing the SQL command (in which case the MySQL client reads the file, and it is then transferred to the MySQL server and loaded), and omitting 'LOCAL' means the file is already on the machine hosting the MySQL server.
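For illustration (table name, file paths and delimiters are placeholders):

-- file already on the machine hosting the MySQL server:
LOAD DATA INFILE '/tmp/upload_file'
  INTO TABLE mytable
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';

-- file on the client machine, read and transferred by the MySQL client:
LOAD DATA LOCAL INFILE '/home/me/upload_file'
  INTO TABLE mytable
  FIELDS TERMINATED BY ',' LINES TERMINATED BY '\n';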

Therefore, in either case, the appropriate user (either the local client user or the remote server user) must be able to read the file in question.

A number of issues can arise, as read access is required at every folder level down to the location of the file. See the various ServerFault discussions on the matter. It's best to place the upload file in a general temporary area which is world-readable.

With LOCAL there are also additional security concerns, covered in the MySQL documentation.

These concerns mean that most distributions of MySQL do not, out of the box, permit use of the 'LOCAL' parameter, and you receive the "not supported in this version" error. This isn't strictly true - to make it work you need to start both Server and Client with a parameter to enable the use of local files. For the Server, it's 
  • local-infile=1 in /etc/mysql/my.cnf
For the client, it's necessary to set the local_infile option in the database connect command (which varies by client) e.g. local_infile: true in database.yml for a Rails application.
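Concretely, that's something like the following (the Rails settings assume the mysql2 adapter):

# server side - /etc/mysql/my.cnf
[mysqld]
local-infile=1

# command-line client equivalent:
#   mysql --local-infile=1 -u someuser -p somedb

# Rails client side - config/database.yml:
#   production:
#     adapter: mysql2
#     local_infile: true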

However, it's better to avoid these issues, and just upload from a file on the MySQL server (if you have access). Again, the file itself must be readable, by the (typically) 'mysql' user ... on *nix, /tmp would be one such location.

One further problem you might encounter is that you still get a 'file not found' error ... one that doesn't imply a permissions problem (i.e. not Errcode: 13) but simply that the file doesn't exist - even though it does, and is world-readable. If this occurs, check either auth.log or syslog to see if AppArmor is the problem:

Jan 26 12:43:45 localhost kernel: [13726.977235] type=1400 audit(1422276225.103:76): apparmor="DENIED" operation="open" profile="/usr/sbin/mysqld" name="/tmp/upload_file" pid=20692 comm="mysqld" requested_mask="r" denied_mask="r" fsuid=114 ouid=1000

Here, you can see that before MySQL is passed the file, AppArmor has denied it access - effectively telling MySQL that the file doesn't exist. To resolve the issue, update MySQL's permissions in AppArmor to include read access to the location you wish to upload from, by editing /etc/apparmor.d/usr.sbin.mysqld (or the local/ version):

  /tmp/* r,                   # ...... your upload location
  /run/mysqld/mysqld.pid rw,
  /run/mysqld/mysqld.sock w,

If you change the MySQL or AppArmor configuration, then restart the service.
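Reloading just the one AppArmor profile is enough, followed by a MySQL restart:

sudo apparmor_parser -r /etc/apparmor.d/usr.sbin.mysqld   # re-load the updated profile
sudo service mysql restart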

Additionally, the connecting MySQL user will need the FILE privilege.
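FILE is a global privilege, so it's granted ON *.* (the account name is a placeholder):

GRANT FILE ON *.* TO 'loader'@'localhost';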

Update
Latest MySQL (5.7+) now lets you declare the location you'd like to load data from in secure_file_priv. Use SHOW VARIABLES LIKE "secure_file_priv" to see the location, and put your files there (setting the value if need be - it can't be changed dynamically, so add it to mysql.conf.d/mysqld.cnf ... or wherever).
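For example, to pin the location (the directory shown is a common Debian/Ubuntu default - yours may differ):

# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
secure_file_priv = /var/lib/mysql-files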