Samsung Data Migration doesn’t work

I was having trouble getting the Samsung Data Migration software to work on Windows 10 after installing a new 980 Pro NVMe SSD into my system (Gigabyte X570 Aorus Elite WiFi). It turns out you can’t run it directly from your Downloads folder, the Samsung NVMe driver must be the one in use, and the old SSD must be removed after shutdown.

  1. Make sure the Samsung NVMe driver is installed
  2. Run the installer directly from the disk root (e.g. C:\Samsung_Data_Migration_Setup.exe)
  3. Remove the old drive immediately after shutting down once the migration completes

If you want to keep and re-use your old drive, disable it in the BIOS or physically remove it for the first few boot-ups. Then use something like GParted to remove the old Windows partitions from the old disk.

JIRA reverse proxy with Nginx and Let’s Encrypt

We want our JIRA to work at jira.mydomain.com without the :8080 port, unlike the instructions in the Atlassian guide, which use the path /jira. I’m hosting my JIRA server on AWS EC2 on an Ubuntu instance.

First steps are the usual when dealing with Linux:

sudo apt-get update
sudo apt-get upgrade

And now we want to install nginx:

sudo apt-get install nginx
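Depending on the image, nginx may not be started automatically; these standard systemd commands enable it at boot and confirm it’s running:

```shell
sudo systemctl enable --now nginx
systemctl status nginx
```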

I’m mostly following the Atlassian guide; however, it’s currently a bit out of date and not very comprehensive.

In JIRA’s server.xml config file, we want to replace the line that should look like:

<Context docBase="${catalina.home}/atlassian-jira" path="" reloadable="false" useHttpOnly="true">

with:

<Context docBase="${catalina.home}/atlassian-jira" path="/" reloadable="false" useHttpOnly="true">

and further up, look for:

<Connector port="8080" maxThreads="150" minSpareThreads="25" connectionTimeout="20000" enableLookups="false" maxHttpHeaderSize="8192" protocol="HTTP/1.1" useBodyEncodingForURI="true" redirectPort="8443" acceptCount="100" disableUploadTimeout="true"/>

and replace it with the following (replacing jira.mydomain.com with your own):

JIRA proxy connector code (it’s in a separate linked file because WordPress breaks this code with HTML encoding)

Note that in the original guide, Atlassian forgot their own workaround for a Tomcat issue. WordPress likely breaks the code above with its formatting, so you’ll need to copy the code from that link; it’s likely the relaxedQueryChars section that gets mangled.
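For reference, here is a sketch of what that connector typically ends up looking like, combining the original connector with Atlassian’s documented proxy attributes and the relaxedQueryChars workaround. Treat the attribute values as assumptions and verify them against the linked file; also note proxyPort="443"/scheme="https" only apply once SSL is set up further below (before that, use proxyPort="80" and scheme="http"):

```xml
<Connector port="8080" maxThreads="150" minSpareThreads="25"
           connectionTimeout="20000" enableLookups="false"
           maxHttpHeaderSize="8192" protocol="HTTP/1.1"
           useBodyEncodingForURI="true" redirectPort="8443"
           acceptCount="100" disableUploadTimeout="true"
           proxyName="jira.mydomain.com" proxyPort="443"
           scheme="https" secure="true"
           relaxedPathChars="[]|" relaxedQueryChars="[]|{}^&#x5c;&#x60;&quot;&lt;&gt;"/>
```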

Now in your nginx config (/etc/nginx/sites-enabled/default), you can either edit the existing file, or just delete everything in it and start from a blank file. My server only hosts JIRA, so I don’t need to listen for any other domains. SSL config will come later.

server {
	listen *:80;
	server_name jira.mydomain.com;
	location / {
		proxy_set_header X-Forwarded-Host $host;
		proxy_set_header X-Forwarded-Server $host;
		proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
		proxy_pass http://jira.mydomain.com:8080;
		client_max_body_size 10M;
	}
}
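Before restarting anything, it’s worth validating the nginx config first (these are standard nginx/systemd commands):

```shell
sudo nginx -t                 # check config syntax
sudo systemctl reload nginx   # apply without dropping connections
```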

Restart your server, or restart JIRA and nginx individually, and your site should appear on the addresses configured in the connector lines in the JIRA config (e.g. jira.mydomain.com:8082), and now also on the hostname we configured above (jira.mydomain.com).

Now, because I’m lazy and prefer to take the easy way when it’s available: just run through the standard out-of-the-box setup steps for Let’s Encrypt’s Certbot. It should pick up your nginx site and auto-configure the config files for you.
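On Ubuntu that boils down to something like the following (the package names assume the nginx plugin on a recent release; check Certbot’s own instructions for your version):

```shell
sudo apt-get install certbot python3-certbot-nginx
sudo certbot --nginx -d jira.mydomain.com
```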

Your domain should now work on https://jira.mydomain.com

If you get a security warning on the dashboard page, it’s likely because a gadget or something like the activity feed isn’t yet serving content over https. In my case it was the latest activity from Confluence (which has yet to get the https treatment) causing the concern.

You should get the happy green padlock on other JIRA pages, provided those pages don’t have any content coming from insecure sources either.

If you’re using AWS EC2 to run your instance, you’ll also need to remember to open up the ports we’re now using for JIRA in your security group: TCP 443 to allow https, TCP 80 if you’re redirecting nginx port 80 (http) to 443 (https), and optionally TCP 8080–8082 to access the backup ports.
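If you prefer the AWS CLI over the console, the same can be done with authorize-security-group-ingress (sg-xxxxxxxx is a placeholder for your instance’s security group ID):

```shell
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 443 --cidr 0.0.0.0/0
aws ec2 authorize-security-group-ingress --group-id sg-xxxxxxxx --protocol tcp --port 80 --cidr 0.0.0.0/0
```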

The base URL in JIRA will need to be reconfigured too. You’ll also need to reconfigure JIRA Server (Beta), any application links in other software (e.g. Confluence, Bitbucket, etc.), and anything else that was previously using your old URL (e.g. http://jira.mydomain.com:8080), pointing them at the new one (e.g. https://jira.mydomain.com). For application links such as Confluence, though, you’ll likely need to use the direct IP address at port 8082.

NOTE: JIRA updates seem to overwrite server.xml and break this. You’ll likely need to redo these changes every time you update, or back up the file beforehand.
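A quick copy before each update saves redoing the edits by hand (the path assumes the default Linux installer location; adjust it to your install):

```shell
sudo cp /opt/atlassian/jira/conf/server.xml ~/server.xml.bak
```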

FileZilla Server “Failed to bind the listen socket on port 21 to the following IPs”

An older server with FileZilla Server installed, which had just ticked away in the background for years on end, stopped working for some reason. Of course nobody really used it, but everybody wanted to use it once it stopped working.

In the logs I got “Failed to bind the listen socket on port 21 to the following IPs”.

It just started working again after a while during which I had done nothing but google for a solution. But I had tried a number of things first:

  • Uninstalled IIS (even though it was not using any FTP ports, as confirmed by netstat -b -n > C:\output.txt)
  • Uninstalled and reinstalled FileZilla Server
  • Set the FileZilla service (under Services) to Automatic (Delayed Start). I had noticed that some other server applications tried to start before the network was ready after upgrading the server to SSDs.
  • Deleted and re-added all the FileZilla Server firewall rules, both in the normal firewall (under Control Panel) and in the Advanced Security version (under Administrative Tools). Remember to browse and choose the actual ‘FileZilla Server.exe’, not the server interface.
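A bind failure on port 21 usually means something else is holding the port. On Windows you can find the culprit’s PID and process name like this (cmd commands; replace <pid> with whatever the first command reports):

```shell
netstat -ano | findstr ":21"
tasklist /FI "PID eq <pid>"
```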

Easy trust relationship fix

“The trust relationship between this workstation and the primary domain failed”

Shit happens and so does this error.

Besides the usual complex fixes, there’s leaving and rejoining the domain, which can break a user profile (even though most things can be fixed by copying its contents across).

I found that simply changing the domain from mydomain to mydomain.local (or vice versa) works without breaking the user profile, and effectively ‘rejoins’ the domain. Apparently the .local suffix is no longer recommended by Microsoft, although it still gets added by default.
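If you’d rather script a repair than touch the domain name, PowerShell’s Test-ComputerSecureChannel can reset the machine’s secure channel in place (run from an elevated prompt; DOMAIN\admin is a placeholder for a domain account with rights to do this):

```powershell
Test-ComputerSecureChannel -Repair -Credential DOMAIN\admin
```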

Scanning a directory with thousands of files

tl;dr: scandir() is slow and memory-hungry. Use opendir() and readdir().

Sometimes in an older codebase, things are written that work at first, get slower over time, and then reach a point where they completely fail. Originally, scanning hundreds of files wasn’t so performance-intensive, but with project scaling and time, hundreds of thousands of files took their toll: the PHP instance (PHP 5.6) ran out of memory and then stalled completely. scandir($directory) was the offender. I don’t have metrics other than ‘it didn’t work’ with about half a million 50kb files.

I’ve since replaced it with opendir() and readdir(), which use memory more efficiently: they iterate the directory entries one at a time through a directory handle and get to work straight away, as opposed to scandir(), which loads (or tries to load) the entire directory listing into memory before doing anything. What used to be there was basically:
// array_diff to skip the . and .. dot paths that scandir() lists under Linux
$dir_array = array_diff(scandir($directory), array('..', '.')); // Would fail here under load

foreach ($dir_array as $filename) {
    // do stuff
}
And it now looks like:
if ($handle = opendir($directory)) {
    while (false !== ($filename = readdir($handle))) {
        if ($filename != "." && $filename != "..") { // ignore dot paths under Linux
            // do stuff
        }
    }
    closedir($handle); // release the directory handle when done
}
This quickly got things working again without too much hassle, being virtually a drop-in replacement. It was still slow because of the sheer number of files to deal with, though. Eventually things were changed so that the situation of a single directory accumulating millions of files over time doesn’t happen (and shouldn’t ever happen again).