What motivated us to build Edit was the need for a default CLI text editor in 64-bit versions of Windows. 32-bit versions of Windows ship with the MS-DOS editor, but 64-bit versions do not have a CLI editor installed inbox.
My computing journey started with MS-DOS on an IBM PS/2, and Edit was the very first text editor I ever used – so I was keen to give this a go on my Mac.
The release includes binaries for Windows and Linux, but none for the Mac. I thought this was a good chance to blog about how I built and installed Edit:
# Install rust nightly and source rust environment to configure the shell
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --default-toolchain nightly -y
. "$HOME/.cargo/env"

# Clone the repo & cd to its directory
git clone --branch v1.2.0 --depth 1 https://github.com/microsoft/edit.git
cd edit

# Build and install to ~/.cargo/bin/edit
cargo install --path .

# Install for all users
sudo cp ~/.cargo/bin/edit /usr/local/bin/
I was faced with this challenge when a workload in my homelab was using a very dated MongoDB 3.6 database; after updating the workload, it is now possible to run MongoDB 7.
I was not able to easily find any resources online that described this kind of upgrade so I decided to write my own.
The upgrade path for MongoDB is complex, requiring the sequence: 3.6 → 4.0 → 4.2 → 4.4 → 5.0 → 6.0 → 7.0.
Additionally, the package installation on Debian, particularly for the EOL versions, is complex – so I opted to use Docker images from MongoDB for each version. In my case I performed this on a Debian environment under WSL.
At the start and the end, I transferred the MongoDB database to and from the folder ~/db.
With this process there were some differences with versions 6 and 7 – but it was fairly straightforward.
I first began the migration by retrieving the Docker images.
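Since these are the official mongo images on Docker Hub, pulling the whole chain and running one step of the upgrade might look something like this. The container name mongo-upgrade and the ~/db mount point are my choices; adjust to your setup:

```shell
# Pull every MongoDB version needed for the upgrade chain
for version in 3.6 4.0 4.2 4.4 5.0 6.0 7.0; do
  docker pull mongo:${version}
done

# For each step: run that version against the same data directory,
# set the FCV from a shell inside the container, then tear it down
docker run -d --name mongo-upgrade -v ~/db:/data/db mongo:3.6
docker exec -it mongo-upgrade mongo   # use mongosh instead from 6.0 onwards
docker stop mongo-upgrade && docker rm mongo-upgrade
```

Repeat the run/exec/teardown cycle for each version in the sequence, setting the FCV as shown in the blocks below before moving to the next image.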
use admin
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// If needed, set it to 3.6
db.adminCommand({ setFeatureCompatibilityVersion: "3.6" })
// exit the Mongo shell
exit
use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Now set to 4.0
db.adminCommand({ setFeatureCompatibilityVersion: "4.0" })
exit
use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Now set to 4.2
db.adminCommand({ setFeatureCompatibilityVersion: "4.2" })
exit
use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Now set to 4.4
db.adminCommand({ setFeatureCompatibilityVersion: "4.4" })
exit
use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Now set to 5.0
db.adminCommand({ setFeatureCompatibilityVersion: "5.0" })
exit
use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Now set to 6.0
db.adminCommand({ setFeatureCompatibilityVersion: "6.0" })
exit
use admin
// Confirm current FCV
db.adminCommand({ getParameter: 1, featureCompatibilityVersion: 1 })
// Now set to 7.0
db.adminCommand({ setFeatureCompatibilityVersion: "7.0", confirm: true })
exit
While the T630 is a very powerful machine for a home lab, the fans are just as powerful. Personally I prefer quiet – and I’m not concerned if CPU throttling is occurring.
Today I wrote a script to make it easier to control the fan speed with ipmitool.
In order to use this script with your PowerEdge, you will need to enable IPMI in iDRAC.
#!/bin/bash

# Function to display usage
usage() {
  echo "Usage: $0 -h host [-l username] [-p percentage]"
  echo "  -h host: IP address or hostname of the iDRAC (mandatory)"
  echo "  -l username: Username for IPMI authentication (prompted if omitted)"
  echo "  -p percentage: Percentage value (0-100) to set fan speed (default: 0)"
  echo "  The password is always prompted for interactively."
  exit 1
}

# Check if ipmitool is installed
if ! command -v ipmitool &> /dev/null
then
  echo "Error: ipmitool is not installed. Please install ipmitool to proceed."
  exit 1
fi

# Initialize variables
username=""
password=""
host=""
percentage="0"

# Parse command-line options
while getopts "h:l:p:" opt
do
  case $opt in
    h) host="$OPTARG" ;;
    l) username="$OPTARG" ;;
    p)
      if [[ "$OPTARG" =~ ^[0-9]+$ && "$OPTARG" -ge 0 && "$OPTARG" -le 100 ]]
      then
        percentage="$OPTARG"
      else
        echo "Error: Percentage value must be an integer between 0 and 100."
        usage
      fi
      ;;
    \?) usage ;;
  esac
done

# Check if mandatory -h option is provided
if [[ -z ${host} ]]
then
  echo "Error: -h (host) option is mandatory."
  usage
fi

# Prompt for username if not provided via getopts
if [[ -z ${username} ]]
then
  echo -n "Enter username: "
  read username
fi

# Prompt for password (never passed on the command line)
if [[ -z ${password} ]]
then
  echo -n "Enter password: "
  read -s password
  echo
fi

# Disable automatic fan speed control
ipmitool -I lanplus -H "${host}" -U "${username}" -P "${password}" raw 0x30 0x30 0x01 0x00

# Set fan speed; Dell iDRAC expects the percentage as a hex byte (0x00-0x64)
if [[ -n ${percentage} ]]
then
  value=$(printf "%x" "${percentage}")
  ipmitool -I lanplus -H "${host}" -U "${username}" -P "${password}" raw 0x30 0x30 0x02 0xff "0x${value}"
fi
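Assuming the script above is saved as set-fan-speed.sh (the filename is my choice), a hypothetical invocation against a placeholder iDRAC address looks like this; the password is prompted for interactively:

```shell
# Set the fans to 25% on the iDRAC at 192.168.0.120 (placeholder values)
./set-fan-speed.sh -h 192.168.0.120 -l root -p 25
```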
Quick guide on how to migrate virtual machines from VMware ESXi 7 to Proxmox 8.2. I’ve been using VMware ESXi in my home lab and wanted to migrate this machine to Proxmox 8.2. The only mechanism available for this kind of migration was offline, as I did not have two physical machines to run both VMware ESXi and Proxmox concurrently.
Make a backup of the VMFS filesystem that contains your virtual machines. It’s an extra hurdle to read a VMFS filesystem within Linux, so I opted to use SCP to copy to my Windows machine.
Don’t skimp – back up every file in this location.
Make sure you have an idea of the hardware configuration of each virtual machine, as this process will require you to manually configure the hardware.
After installing Proxmox, copy your backup to the Proxmox filesystem. In my case I used SCP to upload the files from my Windows machine.
Create a virtual machine for your target VM to migrate. Ensure you configure the hardware to the requirements. Do not add a hard disk; delete the SCSI disk created by the wizard. However, ensure you configure an EFI disk and disable Pre-Enrolled Keys (does anyone even use Secure Boot?).
Exclude net as a boot option from the Options tab for the virtual machine in the Proxmox web UI.
This step seemed critical and undocumented. If I did not do this step, I was unable to boot the virtual machine.
SSH to Proxmox, identify the VMID with qm list, and identify the location of the VMDK files you copied over with SCP.
Migrate the disk image with qm importdisk, using the syntax qm importdisk VMID VMDK-FILE STORAGE-NAME.
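For example, with hypothetical values (VMID 100, a copied VMDK path, and the local-lvm storage): in my experience the imported disk then shows up as "unused" and still needs to be attached to the VM:

```shell
# Import the VMDK into Proxmox storage (lands as an unused disk)
qm importdisk 100 /root/vmware-backup/myvm.vmdk local-lvm

# Attach the imported disk to the VM as its boot disk
qm set 100 --scsi0 local-lvm:vm-100-disk-0
```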
Reset the virtual machine – and it should boot just the same as it did on VMware. You may need to reconfigure the network and other hardware drivers and devices.
I had an issue with the Apache web server recently where the service was crashing. A simple poor man’s fix was to make use of systemd’s ability to automatically restart a failed service.
At first I located the systemd unit file with systemctl status httpd.
I then edited the file with vi /usr/lib/systemd/system/httpd.service.
Under the [Service] block I then added the restart variables.
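A minimal pair of directives that achieves this is shown below; the exact values here are my suggestion, and RestartSec is worth tuning to how quickly you want the service back:

```
[Service]
Restart=on-failure
RestartSec=5s
```

After saving the file, run systemctl daemon-reload so systemd picks up the change.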
Recently I worked on an automation task that generates a ppk file whenever ssh-keygen is run. I also wanted to generate ppk files for the existing keys on this machine.
The command I ended up using, to generate a ppk file for every /home/*/.ssh/id_rsa file was:
set -x
for key in $(find /home/*/.ssh -name id_rsa -type f); do
  puttygen "${key}" --ppk-param version=2 -o "${key}.ppk"
  # Match the ppk's owner and group to the original key's
  chown "$(ls -l "${key}" | awk '{print $3":"$4}')" "${key}.ppk"
  chmod 600 "${key}.ppk"
done
set +x
OpenVZ, via its kernel architecture, behaves oddly with device nodes. I found that systemctl was unable to start OpenVPN on a CentOS 8 host virtualized under OpenVZ. The fix is quite simple, but not so obvious.
sed -i 's/LimitNPROC/#LimitNPROC/g' /usr/lib/systemd/system/openvpn-client\@.service
systemctl daemon-reload
A stupid post-it note for my own benefit mostly. My employer refuses to allow telnet on any of their machines, citing audit compliance and the pre-existence of netcat.
So to test that a port is open with netcat you run:
I built out a number of images with Oracle Cloud using Oracle Linux 8. The trouble I faced was that, due to the way I configured my instances, Oracle Cloud was not supplying a domain name via DHCP, which caused my instances to return a short host name when I ran hostname -f. This in turn caused many scripts to fail, as they could not enumerate a FQDN.
Also ensure that the entry for the hostname in /etc/hosts references the FQDN first and the short name last, or else hostname -f will produce the short name and not the FQDN.
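As a sketch, with made-up addresses and names, the ordering of the /etc/hosts entry looks like this:

```
# FQDN first, short name last, so hostname -f resolves the FQDN
10.0.0.5   myinstance.example.com myinstance
```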
I run a check_mk deployment and try to add everything to it. I recently deployed 3CX for a relative and wanted to include this on my check_mk monitoring.
SSH to the 3CX host, either as root or with sudo -i after login.