Building Microsoft Edit on macOS (Apple Silicon)

Microsoft have released the source code for Edit, a remake of the MS-DOS era text editor in Rust, on their GitHub. Christopher Nguyen, a product manager on Microsoft's Windows Terminal team, wrote:

What motivated us to build Edit was the need for a default CLI text editor in 64-bit versions of Windows. 32-bit versions of Windows ship with the MS-DOS editor, but 64-bit versions do not have a CLI editor installed inbox.

My computing journey started with MS-DOS on an IBM PS/2, and Edit was the very first text editor I ever used, so I was keen to give this a go on my Mac.

The release includes binaries for Windows and Linux, but none for macOS. I thought this was a good chance to blog about how I built and installed Edit:

# Install rust nightly and source rust environment to configure the shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y
. "$HOME/.cargo/env"

# Clone the repo & cd to its directory
git clone --branch v1.2.0 --depth 1 https://github.com/microsoft/edit.git
cd edit

# Build and install to ~/.cargo/bin/edit
cargo install --path .

# Install for all users
sudo cp ~/.cargo/bin/edit /usr/local/bin/

You can then run it simply with edit

Upgrading from MongoDB 3.6 to MongoDB 7

I was faced with this challenge when a workload in my homelab was using a very dated MongoDB 3.6 database; after updating the workload, it is now possible to run MongoDB 7.

I was not able to easily find any resources online that described this kind of upgrade so I decided to write my own.

The upgrade path for MongoDB is complex, requiring the sequence:
3.6 → 4.0
4.0 → 4.2
4.2 → 4.4
4.4 → 5.0
5.0 → 6.0
6.0 → 7.0

Additionally, package installation on Debian, particularly for the EOL versions, is complex, so I opted to use the official Docker images from MongoDB for each version. In my case I performed this in a Debian environment under WSL.

At the start and the end I transferred the MongoDB database files to and from the folder ~/db.

The process differs slightly for versions 6 and 7 (the shell changes from mongo to mongosh, and 7.0 requires an explicit confirm flag), but overall it was fairly straightforward.

I began the migration by retrieving the Docker images:

docker pull mongo:3.6
docker pull mongo:4.0
docker pull mongo:4.2
docker pull mongo:4.4
docker pull mongo:5.0
docker pull mongo:6.0
docker pull mongo:7.0

Version 3.6

I felt it important to validate that featureCompatibilityVersion was set to 3.6 and to confirm the Docker image was able to read my database.

I started the container:

docker run -d \
  --name mongo36 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:3.6

I spawned a mongo shell:

docker exec -it mongo36 mongo

I validated the featureCompatibilityVersion:

use admin
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// If needed, set it to 3.6
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })
// exit the Mongo shell
exit

I stopped the container:

docker stop mongo36
docker rm mongo36

Version 4.0

I started the container:

docker run -d \
  --name mongo40 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:4.0

I spawned a mongo shell:

docker exec -it mongo40 mongo

I set the new featureCompatibilityVersion:

use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Now set to 4.0
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })
exit

I stopped the container:

docker stop mongo40
docker rm mongo40

Version 4.2

I started the container:

docker run -d \
  --name mongo42 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:4.2

I spawned a mongo shell:

docker exec -it mongo42 mongo

I set the new featureCompatibilityVersion:

use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Now set to 4.2
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })
exit

I stopped the container:

docker stop mongo42
docker rm mongo42

Version 4.4

I started the container:

docker run -d \
  --name mongo44 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:4.4

I spawned a mongo shell:

docker exec -it mongo44 mongo

I set the new featureCompatibilityVersion:

use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Now set to 4.4
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
exit

I stopped the container:

docker stop mongo44
docker rm mongo44

Version 5.0

I started the container:

docker run -d \
  --name mongo50 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:5.0

I spawned a mongo shell:

docker exec -it mongo50 mongo

I set the new featureCompatibilityVersion:

use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Now set to 5.0
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })
exit

I stopped the container:

docker stop mongo50
docker rm mongo50

Version 6.0

I started the container:

docker run -d \
  --name mongo60 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:6.0

I spawned a mongo shell:

docker exec -it mongo60 mongosh

I set the new featureCompatibilityVersion:

use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Now set to 6.0
db.adminCommand({ setFeatureCompatibilityVersion: "6.0" })
exit

I stopped the container:

docker stop mongo60
docker rm mongo60

Version 7.0

I started the container:

docker run -d \
  --name mongo70 \
  -v ~/db:/data/db \
  -p 27017:27017 \
  mongo:7.0

I spawned a mongo shell:

docker exec -it mongo70 mongosh

I set the new featureCompatibilityVersion:

use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })

// Now set to 7.0
db.adminCommand({ setFeatureCompatibilityVersion: "7.0", confirm: true })
exit

I stopped the container:

docker stop mongo70
docker rm mongo70
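
Since the steps for each version are nearly identical, the whole ladder can also be scripted. Below is a minimal sketch of that idea, assuming the same ~/db data directory as above, that the FCV has already been confirmed at 3.6, and that a fixed sleep is enough for mongod to come up (in practice you may want to poll until it accepts connections):

#!/bin/bash
set -euo pipefail

for version in 4.0 4.2 4.4 5.0 6.0 7.0
do
    docker run -d --name mongo-upgrade -v ~/db:/data/db mongo:${version}
    sleep 10  # crude; mongod needs a moment to start and upgrade the data files

    # The 6.0 and 7.0 images ship mongosh instead of the legacy mongo shell
    shell=mongo
    case ${version} in 6.0|7.0) shell=mongosh ;; esac

    # 7.0 requires an explicit confirmation when raising the FCV
    extra=""
    case ${version} in 7.0) extra=', confirm: true' ;; esac

    docker exec mongo-upgrade ${shell} admin --eval \
        "db.adminCommand({ setFeatureCompatibilityVersion: '${version}'${extra} })"

    docker stop mongo-upgrade
    docker rm mongo-upgrade
done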

Controlling fan speed for Dell PowerEdge servers

In my home lab I’m running a Dell PowerEdge T630.

While the T630 is a very powerful machine for a home lab, the fans are just as powerful. Personally, I prefer quiet, and I'm not concerned if CPU throttling occurs.

Today I wrote a script to make it easier to control the fan speed with ipmitool

To use this script with your PowerEdge, you will need to enable IPMI over LAN in iDRAC.

#!/bin/bash

# Function to display usage
usage() {
    echo "Usage: $0 -h host [-l username] [-p percentage]"
    echo "  -h host: IP address or hostname of the iDRAC (mandatory)"
    echo "  -l username: Username for IPMI authentication (prompted if omitted)"
    echo "  -p percentage: Percentage value (0-100) to set fan speed (default: 0)"
    echo "  The password is always prompted for interactively."
    exit 1
}

# Check if ipmitool is installed
if ! command -v ipmitool &> /dev/null
then
    echo "Error: ipmitool is not installed. Please install ipmitool to proceed."
    exit 1
fi

# Initialize variables
username=""
password=""
host=""
percentage="0"

# Parse command-line options
while getopts "h:l:p:" opt
do
    case $opt in
        h) host="$OPTARG" ;;
        l) username="$OPTARG" ;;
        p) 
            if [[ "$OPTARG" =~ ^[0-9]+$ && "$OPTARG" -ge 0 && "$OPTARG" -le 100 ]]
            then
                percentage="$OPTARG"
            else
                echo "Error: Percentage value must be an integer between 0 and 100."
                usage
            fi
            ;;
        \?) usage ;;
    esac
done

# Check if mandatory -h option is provided
if [[ -z ${host} ]]
then
    echo "Error: -h (host) option is mandatory."
    usage
fi

# Prompt for username if not provided via getopts
if [[ -z ${username} ]]
then
    echo -n "Enter username: "
    read username
fi

# Prompt for password (never taken from the command line)
if [[ -z ${password} ]]
then
    echo -n "Enter password: "
    read -s password
    echo
fi

# Disable automatic fan speed control
ipmitool -I lanplus -H "${host}" -U "${username}" -P "${password}" raw 0x30 0x30 0x01 0x00

# Set fan speed based on percentage
# The raw command takes the percentage as a literal hex byte (e.g. 100% = 0x64)
value=$(printf "%x" "${percentage}")
ipmitool -I lanplus -H "${host}" -U "${username}" -P "${password}" raw 0x30 0x30 0x02 0xff "0x${value}"

Migrating VMware ESXi 7 virtual machines with EFI booting to Proxmox 8.2 from VMDK files

Quick guide on how to migrate virtual machines from VMware ESXi 7 to Proxmox 8.2. I've been using VMware ESXi in my home lab and wanted to migrate this machine to Proxmox 8.2. The only mechanism available for this kind of migration was offline, as I did not have two physical machines to run both VMware ESXi and Proxmox concurrently.

  1. Make a backup of the VMFS filesystem that contains your virtual machines
    It's an extra hurdle to read your VMFS filesystem within Linux, so I opted to use SCP to copy to my Windows machine

    Don't skimp: back up every file in this location
  2. Make sure you have a record of the hardware configuration of each virtual machine, as this process will require you to configure the hardware manually
  3. After installing Proxmox, copy your backup to the Proxmox filesystem
    In my case I used SCP to upload them from my Windows machine
  4. Create a virtual machine for your target vm to migrate
    Ensure you configure the hardware to the requirements
    Do not add a hard disk; delete the SCSI disk created by the wizard
    However, ensure you configure an EFI disk and disable Pre-Enrolled Keys (does anyone even use Secure Boot?)
  5. Exclude net as a boot option from the Options tab for the virtual machine in the Proxmox web UI

    This step seemed critical and undocumented; without it, I was unable to boot the virtual machine
  6. SSH to Proxmox, identify the VMID with qm list, and locate the backup files you uploaded via SCP
  7. Migrate the disk image with qm importdisk with the syntax
    qm importdisk VMID VMDK-FILE STORAGE-NAME

    Example:
    qm importdisk 200 /nvme/vmware/kube-master/kube-master.vmdk nvme

    The import attaches the image to the virtual machine as an unused disk; it still needs to be attached to a disk controller, as shown in the sketch after this list
  8. Boot the virtual machine and open the console
  9. When you are dropped to the EFI shell, enter the command exit, which should take you to the BIOS
  10. Follow the guide to add the path to your EFI boot file – https://pve.proxmox.com/wiki/OVMF/UEFI_Boot_Entries
  11. Reset the virtual machine, and it should boot just as it did on VMware
    You may need to reconfigure the network and other hardware drivers and devices
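
As noted in step 7, the imported image lands on the VM as an unused disk. Here is a minimal sketch of attaching it and making it bootable from the Proxmox shell, assuming VMID 200 and the nvme storage from the example (the exact volume name, vm-200-disk-0 here, is printed at the end of the import):

# Attach the imported volume as the first SCSI disk
qm set 200 --scsi0 nvme:vm-200-disk-0

# Boot from the newly attached disk
qm set 200 --boot order=scsi0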

systemd automatic restart on failure

I had an issue with the Apache web server recently where the service was crashing. A simple poor man's fix was to make use of systemd's ability to automatically restart a failed service.

First, I located the systemd unit file with systemctl status httpd

I then edited the file with vi /usr/lib/systemd/system/httpd.service

Under the [Service] block I then added the restart directives:

[Service]

# Auto-restart
Restart=on-failure
RestartSec=5s

I then reloaded systemd with systemctl daemon-reload
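
One caveat: /usr/lib/systemd/system/httpd.service belongs to the httpd package, so an update can overwrite the edit. A drop-in override survives package updates; here is a sketch of the same change done that way (restart.conf is a file name of my choosing):

mkdir -p /etc/systemd/system/httpd.service.d
cat > /etc/systemd/system/httpd.service.d/restart.conf <<'EOF'
[Service]
Restart=on-failure
RestartSec=5s
EOF
systemctl daemon-reload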

Generate ppk files for all existing keys

Recently I worked on an automation task that generates ppk files whenever ssh-keygen is run. I also wanted to generate ppk files for the keys that already existed on the machine.

The command I ended up using to generate a ppk file for every /home/*/.ssh/id_rsa file was:

set -x; for key in $(find /home/*/.ssh -name id_rsa -type f); do puttygen "${key}" --ppk-param version=2 -o "${key}.ppk"; chown "$(stat -c '%U:%G' "${key}")" "${key}.ppk"; chmod 600 "${key}.ppk"; done; set +x

Replacing telnet with netcat

A stupid post-it note, mostly for my own benefit. My employer refuses to allow telnet on any of their machines, citing audit compliance and the pre-existence of netcat.

So to test that a port is open with netcat, you run:

nc -zv victim.host.com 8080
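
Because -z sets the exit code, the same check also works in scripts, for example:

nc -zv victim.host.com 8080 && echo "port open" || echo "port closed"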

Setting the hostname on Oracle Cloud

I built out a number of images on Oracle Cloud using Oracle Linux 8. Due to the way I configured my instances, Oracle Cloud was not supplying a domain name via DHCP, which caused my instances to return a short hostname when I ran hostname -f. This in turn caused many scripts to fail, as they could not determine an FQDN.

So the answer was actually extremely simple:

echo 'DOMAIN="domain.com"' >> /etc/sysconfig/network
echo machine.domain.com > /etc/hostname
sed -i 's/PRESERVE_HOSTINFO=0/PRESERVE_HOSTINFO=2/g' /etc/oci-hostname.conf
reboot

Also ensure that the entry for the hostname in /etc/hosts lists the FQDN first and the short name last, or hostname -f will return the short name instead of the FQDN.
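
For example, with a placeholder address of 10.0.0.5, the entry should look like:

10.0.0.5    machine.domain.com machine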

Add check_mk agent to 3CX Debian Phone System

I run a check_mk deployment and try to add everything to it. I recently deployed 3CX for a relative and wanted to include it in my check_mk monitoring.

  • SSH to the 3CX host, either as root or via sudo -i after login
  • Execute the following commands
  • wget https://check_mk/agent/url/from/your/check_mk/deployment
  • dpkg -i check-mk-agent_2.0.0p17-1_all.deb
  • systemctl enable check_mk-async
  • systemctl start check_mk-async
  • vi /etc/nftables.conf
  • Add the following firewall rule, substituting 192.168.0.1 with the IP address of your check_mk master
table inet filter {
    chain input {
[..]
        # check_mk
        ip saddr { 192.168.0.1 } tcp dport { 6556 } counter accept comment "check_mk agent"
[..]
    }
}
  • Save the file with :wq!
  • reboot
  • Log in to check_mk and add a new host with the 3CX-supplied FQDN
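
As an aside, rather than rebooting you can apply the new rule immediately by reloading the ruleset; rebooting does, however, prove the rule survives a restart:

nft -f /etc/nftables.conf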