

Writer named his cat "Brexit" because “he wakes me up every morning meowing to death because he wants to go out, and then when I open the door he stays in the middle, undecided, and then gives me evil looks when I put him out.”

If you are using #vim or #neovim with a lot of plugins this talk might change how you use them entirely. It surely did for me :)


At tonight’s meeting, a Banker’s Dustproof time lock from circa 1906, recently restored.

Awesome thread about the manufacturing of the humble diode.


diodes may look simple, but there's subtlety inside them. check out the cross section of this 1N914.


ANNOUNCING THE MYSPACE MUSIC DRAGON HOARD, a 450,000-song collection of mp3s from 2008-2010 on MySpace, gathered before they were all "deleted" by mistake. Includes a link to a special custom mechanism that lets you search and play the songs.

Wow: an attack that adds or removes the signs that expert radiologists use to diagnose cancer.

This type of attack could have serious consequences, and I'm sure attackers will be trying to use it sooner or later.

Anyone here on Ubuntu? Might want to patch this if you are using any of the following:

* Ubuntu 18.10
* Ubuntu 18.04 LTS
* Ubuntu 16.04 LTS
* Ubuntu 14.04 LTS


Two weeks after acknowledging it mishandled millions of its users' Facebook passwords, Facebook is demanding some users hand over the passwords for their personal email accounts too. (h/t

When #Cloudflare and #Google prevent you from watching a government's parliamentary debates. Does that count as interfering with democratic processes?

And no, you don't get a chance to prove to a computer that you're human; this is a hard lockout, disguised as a CAPTCHA.


I got unreasonably angry about a thing, so I made a thing, and hopefully it makes you feel better too.

"Remember: If it's not ISO 8601, it's not a date, it's just hanging out"

Well that's appropriate, I'm eyeball deep in a storage machine build and an article extolling a NAS pops in my RSS feed. Per usual I read it and per usual it's full of hardware RAID and simple filesystems with a fancy UI.

There is a time/place for such things but there are important points to keep in mind when building storage, especially if you care about your data being safe and recoverable over time.

Some bullet points
✅ Hardware RAID only protects you from disk failures
✅ Hardware RAID is NOT portable
✅ Software RAID is portable but ONLY protects you from disk failures
✅ All but a few filesystems suffer from bitrot and this can NOT be avoided
✅ CPU/RAM/Cable/Controller Card (even the built in ones) CAN and WILL fail taking your data with them
✅ btrfs and zfs can keep bitrot at bay, IF properly configured
✅ btrfs failure modes are catastrophic and horrifying
✅ zfs failure modes are catastrophic but are less horrific than btrfs's
✅ ext4 and exFAT are the two most reliable "dumb" filesystems you can pick
✅ ntfs is OK but isn't reliably compatible with anything outside Windows land
✅ hfs[+] (Apple's filesystems) are hot garbage and may the deities come to your aid if you suffer any form of failure
✅ vfat/fat32 are great at rotting from the inside out, slightly less painful than hfs[+] for rot and failures
✅ lvm and lvm2 are handy but wholly useless for dynamic provisioning
✅ software raid + lvm2 + ext4 is NOT a durable solution; lots of moving parts, many failure modes and not-so-subtle problems lie along this path. bitrot is very real here
✅ zfs and btrfs CAN be used on machines with 1 GB of RAM
✅ zfs and btrfs do NOT suffer bitrot IF configured properly

There is a pattern here: if you try to use a tech stack that's not entirely unified (RAID + lvm2 + filesystem folks I'm looking at you) you're going to have a LOT of moving parts, no data durability (checksums exist in zfs and btrfs at the filesystem level for a reason!) and your failure modes will be entertaining at best. If any one of those 3 pieces of the stack fail you're into a world of pain recovering that layer and hoping the others remain unaffected.

That and the bitrot problem: computers are composed of a LOT of hardware components working together to do things. If ANY hardware related to your disk stack (CPU, RAM, disk controller, power cable, data cable, south bridge, north bridge, USB controller, etc.) goes awry, it'll surface higher up the "stack" in the software as some form of failure.
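The checksum idea this all hinges on can be sketched in a few lines of Python: store a hash next to each block, and a single flipped bit anywhere in the stack becomes detectable instead of silent. (The block contents and helper here are illustrative toys, not any filesystem's actual API.)

```python
import hashlib

def checksum(data: bytes) -> str:
    """Hash a data block, the way checksumming filesystems do per block."""
    return hashlib.sha256(data).hexdigest()

# Store the data alongside its checksum.
block = b"important data"
stored_sum = checksum(block)

# Later, some layer (RAM, cable, controller...) flips a single bit.
corrupted = bytes([block[0] ^ 0x01]) + block[1:]

# A plain filesystem hands back the corrupted bytes silently;
# a checksumming one can at least detect the mismatch on read.
print(checksum(corrupted) == stored_sum)  # False
```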

The traditional RAID + lvm2 + filesystem stack makes some assumptions about those moving parts generally working and really only the disks failing. That's a recipe for problems if something other than the disk goes bad along the way. Nevermind cascading disk failures... The software/hardware tools will tell you the disks are problematic but any replacements will immediately go bad. That's not a fun, fast or easy process to sort out. Nevermind you're likely going to incur subtle bitrot problems along the way that only a filesystem check will be able to see. It won't be able to correct rot coming from subtly broken (or confused) RAID or lvm2 either. The traditional filesystems aren't structured to recover data that's subtly corrupted at lower layers.

Enter zfs. [Editor's note: btrfs has some of zfs' features but falls short in many ways] ZFS assumes ALL THE THINGS will break. Yeah, it has an "it'll break and it's my job to handle that mess" attitude towards data durability. You get choices: from no durability through wildly durable (think 99.999% reliable). The default is to checksum all the data being read/written to the disk and to tell you if reads/writes or checksums fail. Basically it's assuming something WILL go wrong (I promise it will at some point) and telling you about it. If you set up your disks with zfs in a way where >1 copy of the data exists (you can have >1 copy even with a single disk BTW) it'll fix up the filesystem on the fly and report a checksum error. If there are enough errors for a disk, it'll drop the disk offline so you can investigate. If you have >1 disk you can do some really neat things to avoid disk controller failure problems, cabling issues and more. ECC RAM also protects you from RAM problems. You've got choices to reduce the pain.
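The ">1 copy, even on a single disk" self-healing behavior can be mimicked in miniature. A hedged Python toy (the function names and structures are invented for this sketch, not zfs internals): keep N checksummed copies, and on read return the first copy that verifies, rewriting the bad ones from it.

```python
import hashlib

def checksum(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def write_block(data: bytes, copies: int = 2) -> dict:
    # Analogue of zfs "copies=N": keep N redundant copies plus one checksum.
    return {"sum": checksum(data), "copies": [bytearray(data) for _ in range(copies)]}

def read_block(block: dict) -> bytes:
    # Return the first copy whose checksum verifies; heal the others from it.
    for c in block["copies"]:
        if checksum(bytes(c)) == block["sum"]:
            good = bytes(c)
            block["copies"] = [bytearray(good) for _ in block["copies"]]
            return good
    raise IOError("all copies corrupt: unrecoverable")

blk = write_block(b"precious data")
blk["copies"][0][0] ^= 0xFF              # bitrot hits one copy
print(read_block(blk))                   # b'precious data', healed from the survivor
```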

To say zfs is durable and resilient understates its capabilities greatly. Nevermind you can deploy FreeNAS and similar systems pretty cheaply and easily these days. You can even re-purpose an old computer OR a Raspberry Pi 3B+ for zfs storage (notes on the latter coming soon to a post near you).

When you see the whiz bang, pretty appliance think about the above. Even a Synology is a shiny layer on top of the traditional software raid + lvm2 + ext4 model. It's not as durable or resilient as they want you to believe. A computer running FreeNAS has better durability and resiliency and will cost you the same.

Thank you for reading.

Turns out, making the beginning of my voicemail a fax answering sound has drastically reduced the number of robo/scam calls I've received.

NaN Gates and Flip FLOPS

"A new kind of computer architecture that's more elegant than 1s and 0s, being based directly on Mathematics. Note: Everything in here is real (IEEE-754), but..."


Installing the Cray-1 at NMFECC (now NERSC) in 1978. 250 MFLOPS.


Social.Rights.Ninja is a small Mastodon instance for those looking for a quiet home-base from which to explore the fediverse. Please email for information on getting an invite.