Running Linux on old hardware

Or alternatively - reasons for using a Raspberry Pi.

So this weekend I decided that, now I have lots of nice fast fibre, I should probably retask some of the old kit I have sat in the corner and build up a box for hosting some of the things I currently have sat in the cloud. Nothing fancy, no special graphics required: a bit of power, some memory, some storage and maybe an SSD to boot from. Should only take an hour or so for someone who used to build machines all the time, right?

The history

I have a lot of old kit kicking around that dates back to 2008-2012, mostly reasonably high-powered kit (for the day): AMD Phenom II, 8GB/16GB RAM, 4 or 6 cores, desktop and rackmount configurations.

What’s that smell?

First approach was the minimum-work angle. I had an old server that should do the trick; I’ll just try a re-install. Nice case, 6-core processor, 16GB RAM, SSD boot, 2 x 2TB HDDs running ZFS and 4 x 1TB HDDs running an MD RAID5. Dates back to 2011, last used on a daily basis probably 2018.

Powers up. XFCE login prompt. Logs in.
*Pfft*
Screen goes off, as does the power. Funny smell, maybe dust burning off? Sighs, gets it up on the bench and takes the lid off. Checks all the cables and connectors, all looks good. Plugs it in with a view to seeing which fans do / don’t start. Power on.
bang
Disconnects power, goes for the Windows (not the M$ kind).

Ok, so for some unknown reason, that power supply is no more.

Debugging

Visits the garage and acquires a handful of old power supplies. After plugging 4 into the server it became apparent that not only was the power supply no more, so was the motherboard. Not to worry; wanders over to the server pile, picks an identical motherboard, retries.

Success, PSU fan, CPU fan, but no video. Ok, so I have a working PSU but no joy with that motherboard. That board has been “loose”, maybe it was damaged, let’s try another one. Hmm, same. Ok, let’s remove a board from an old server just to make sure we have something undamaged.

Superb, BIOS boot screen. Cleans it up, pastes a nice quiet Scythe fan on the CPU, boots up a recovery instance from the SSD. Apparently this was the only 4-core 8GB server in the stack.

Tries again with another server, this one is definitely 6-core 16GB; the BIOS screen comes up fine. Strips the stock fan and installs a quiet one. Hey presto - no joy. Installing the fan seems to have broken it.

Repeats the process with another board, exactly the same result: installing the fan seems to have broken it.

Looks in more detail at the thermal paste; although not as old as 2012, it’s probably 5-6 years old (new / unopened). It would appear that the silicone has separated somewhat and the mixture is “too” liquid - I now have thermal “liquid” both on the chip pins and in the CPU socket. (Anybody want some Phenom II CPUs? I have 4 going cheap - thermal paste included!!)

Anyway, repeats the process another couple of times, and ends up with a working machine to the extent that I can boot from the SSD.

Ohhh, the power …

So, all looking good, booting from the SSD. Plugs in the 2 x 2TB drives, fails to boot again. Unplugs one of the drives and up it comes … tries the other drive on its own, boots. So … what, not enough power for two drives, that can’t be?!

I’ve been working with RPi5s with 27W power supplies, which typically draw 5-6W. This thing has a 350W power supply in it and won’t boot with two hard drives! Ok, visits the garage again, digs out a 650W power supply. Recables. Power up!
BANG
Detects a certain sense of déjà vu. This time, thankfully, it was just the power supply and the fuse in the 3-pin plug that blew. Throws the power supply out of the window, which is still open from earlier (the PSU is still on the patio, must put it back in the recycling pile…), and tries another 550W unit, which works.

So … 6-core Phenom II, 16GB RAM, 128GB SSD, 2 x 2TB HDD. Working.

Say how much?

At this point I’m a little worried about the power consumption, so I plug in a power monitoring brick. During boot I can see it eating close to 200W, so I guess the power spike from trying to spin up two disks at the same time was just too much for an old 350W power supply. After booting, the power came down to an idle of just under 100W. So at idle this machine is eating 100W, and under load it can clearly eat over 200W.

Little bit of math. 100W is 0.1 of a unit (kWh) per hour. Last I checked a unit was 24p. So the daily cost of running the box under no load is 0.1 * 0.24 * 24 = £0.576.

Cost per month would be around £17; cost per year, around £210.
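
For anyone who wants to fiddle with the numbers, here’s the same back-of-envelope sum as a tiny Python sketch - the 100W idle figure, 200W loaded figure and 24p/kWh unit price are just the assumptions from above, so plug in your own:

```python
# Rough running-cost sketch; wattages and unit price are the assumptions
# from this post, not measured constants.

def annual_cost_gbp(watts: float, pence_per_kwh: float = 24.0) -> float:
    """Cost in pounds of drawing `watts` continuously for a year."""
    kwh_per_year = watts / 1000 * 24 * 365
    return kwh_per_year * pence_per_kwh / 100

print(f"Idle   (100W): £{annual_cost_gbp(100):.2f} / year")   # ~£210
print(f"Loaded (200W): £{annual_cost_gbp(200):.2f} / year")   # ~£420
```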

Quickly checks with Digital Ocean: so yes, Ok, in terms of CPUs, memory and storage, this is cheaper than the cloud. But it still sounds like a lot …

Trying to recount the cost of the kit … motherboard maybe £80; CPU, £120; memory, £100 … so maybe £300 … and if I need to replace a component where I don’t have an old spare, it’s going to cost me at least £80. Not to mention power supplies, which cost anything between £40 and £100 depending on how quiet / efficient you want them to be.

Hold my pint!

So, just checking this off against the stuff I’m now using: an RPi5 with a 4-core CPU and 8GB of RAM, which doesn’t need an additional graphics card or CPU fan, £78 incl.

  • Most expensive component to replace, still under £80
  • Debug time required in the event of a hardware fault: well, zero
  • Assembly time, between 20 seconds and 5 mins depending on the case
  • Cost of power supply, well the official expensive one is £11.90
  • Cost of running an RPi5 at idle for a year: 0.005 * 0.24 * 24 * 365 ≈ £10.51 (see the quick sum below)
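
Putting the two side by side (same assumed 24p/kWh and the idle figures above, so a rough sketch rather than gospel), the £78 Pi pays for itself on electricity alone in well under a year:

```python
# Rough payback sketch: how long before a £78 RPi5 pays for itself on
# electricity alone versus leaving the old box idling 24x7.
# All figures (5W / 100W idle, 24p per kWh, £78) are assumptions from this post.

def annual_cost_gbp(watts, pence_per_kwh=24.0):
    return watts / 1000 * 24 * 365 * pence_per_kwh / 100

saving = annual_cost_gbp(100) - annual_cost_gbp(5)             # ~£200 / year
print(f"Payback on a £78 RPi5: {78 / saving * 365:.0f} days")  # ~143 days
```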

Thinks. Glances down at the power brick for the new server, which is powered down but still plugged in. It’s showing 8.87W. So the power supply is drawing as much as a moderately loaded RPi5 when it’s plugged in but not actually powering anything!

Note: yes, the RPi5 shows zero power draw when not switched on …

So about my old hardware

Yes, sure it will likely run Linux. If you have a laptop in particular it’ll probably be relatively hassle-free and not use too much power when you’re running it.

However.

If you want to run Linux on an old “PC”, be aware:
“There may be dragons here!” - through no fault of the penguin.

Summary :bulb:

The cost of electricity doesn’t look like it’s coming down by any great margin any time soon (or ever). To this end I’m looking at computers in the same way I started looking at LED light bulbs some time ago. Yes, they are more expensive, but they are typically less than 1/10th the price, like-for-like, to run. So if an old 100W light bulb run 24x7 costs £210 per year and a new one £21 per year, having a drawer full of old light bulbs might seem attractive when new bulbs can cost a fiver, but they’re really not (!)

Someone double check my math because I still find it hard to reconcile.

Cost per unit for electricity: 24p (per kWh)
Power draw for the light bulb: 100W.
So this is 0.1 of a unit per hour at £0.24 per unit (where a unit is 1kWh) = £0.024 / hour.
This is 0.024 * 24 = £0.576 per day, which is 0.576 * 365 ≈ £210 per year.

Yes?

And if you’re tempted to run your machine 24x7 to mine bitcoin, which will run it flat out, there’ll likely be very little change from £450/year (depending on the exact spec of your machine, obviously).

Your post provided a much-needed chortle on this grey day - plus commiserations for your woes.
Your calcs look right to me and I am increasingly leaning towards investing in an RPi5 to replace my “spare” computers that seem to be occupying a whole room.
Thanks for the update.

Keith

:slight_smile:

Yeah, I (now) have the remains of about 10 machines scattered over the room, just debating whether to strip them and eBay the bits, or just recycle the whole lot. Thing is, there’s some “valuable” stuff in there like 1U coolers and rackmount cases; I’m just not sure if anyone is still going to be using them for long … (I notice many cloud providers now seem to be looking at ARM hardware …)

One problem that I encountered (amongst many!) was the recent changes to plugs and sockets. Upon revamping a desktop for a lady-friend I had to buy a new PSU to suit the motherboard just because of that. And who uses IDE devices now, as everything seems to be SATA? And soon it will all be optically connected, I gather.

I guess that I ought to skip the lot, but I just can’t part with a very old, colour CRT monitor that has superb definition, even though it weighs a ton and occupies most of a table. The trouble with being a hoarder!

Well, for old Molex power to SATA power, you could just use a Molex to SATA power splitter … single Molex → two SATA for a few quid …

https://amzn.eu/d/dnZCzrx

I didn’t think of that. Story of my life!
But the lure of a small, economical RPi5 beckons.

I own a few RPis, and am looking to move my VMs, so that I can retire my server!

The stumbling block is finding a cheap power supply with 5 x 5V 3.1A outputs (is that what a Pi4 needs?), so I may need to buy each a PoE HAT! (… I may need to buy a bigger PoE switch though!)

Secondly - I have an 8TB NAS (2 x 3TB SATA + a 3rd) which I’d, again, love to move to an RPi, but finding a 12V supply for those two (let’s lose the 2TB) has been frustrating… OK, finding two is easy, but I want as few plugs as necessary - not 7!

Any thoughts on powering an RPi with two or three 3.5" drives?

…since moving things to my RPi, I’ve also found the benefits of embracing Docker - and the ability to rebuild things quickly and easily. Creating a like-for-like test environment on a different machine… I’m now sure it’ll be the same!

Ok, so I’ve not tried this, but I’ve seen these devices mentioned for powering multiple Pi4s. They do multiple sizes; this is a six-port but they also do a 10-port. I think all you need is a USB-A → USB-C cable for each Pi.

https://amzn.eu/d/dEojomH

PoE for each Pi and a PoE switch, well, I’m not a great PoE fan but I guess it should work … “UCTRONICS” do a cluster case designed for PoE but it’s not all that cheap.

You might have more trouble with 3.5" drives. Argon do a really nice NAS case which takes 4 drives, but unfortunately it’s designed for 4 x 2.5". (Although it may take 2 x 3.5"?)

I’m looking to build something, not really settled on what tho’. At the moment I’m leaning towards using multiple instances of RPi5 + M.2 NVMe, clustered, rather than trying to build an integrated unit. For me 3.5" HDDs are sort of in the same category as light bulbs … for a start they can draw anything from 3-12W (each) and the average lifespan is only around 3 years.

I built my “perfect” server around 5 years ago: 512GB NVMe, with two independent RAIDs across 4 x 3.5" HDDs. Two weeks ago I had to disable both monthly mdadm checks because both RAIDs were hitting too many disk errors when doing their monthly verify. The NVMe is the system disk and runs everything performance-sensitive / critical, still running fine after 5 years of 24x7.

Most of the RPi4 stuff looks to be 2.5"-oriented, and I’m guessing the RPi5 stuff, when it comes out, will be 2.5" or NVMe, so 3.5", as I say, might be problematic unless you go for some potentially expensive enclosures …

For anyone who has an (n)-disk RAID / NAS that’s more than 3 years old, make sure it’s doing a monthly verify. To quote from the days of tape backups: “verify your backup - until you’ve verified your backup, you don’t have a backup!”.
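
For reference, a minimal sketch of what that verify actually is, done by hand through the kernel’s md sysfs interface (writing "check" to /sys/block/mdX/md/sync_action kicks off a read-only scrub). Your distro’s mdadm package almost certainly ships a monthly cron job for this already (Debian calls it checkarray), so treat this as illustration rather than a replacement:

```python
#!/usr/bin/env python3
# Sketch: start a read-only "check" on each md array and show the mismatch
# count left over from the previous check. Needs root; uses only the
# standard kernel md sysfs files.

import glob
import os
import sys

def arrays():
    # Arrays with redundancy expose /sys/block/mdN/md/sync_action
    return [d for d in sorted(glob.glob("/sys/block/md*/md"))
            if os.path.exists(os.path.join(d, "sync_action"))]

def scrub(md):
    with open(os.path.join(md, "mismatch_cnt")) as f:
        print(f"{md}: mismatch_cnt from last check = {f.read().strip()}")
    with open(os.path.join(md, "sync_action")) as f:
        state = f.read().strip()
    if state != "idle":
        print(f"{md}: busy ({state}), not starting a check")
        return
    with open(os.path.join(md, "sync_action"), "w") as f:
        f.write("check\n")          # read-only verify, no rewrites
    print(f"{md}: check started (watch progress in /proc/mdstat)")

if __name__ == "__main__":
    if os.geteuid() != 0:
        sys.exit("run as root")
    for md in arrays():
        scrub(md)
```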

I’ll add, “RAID is not backup!”

but thanks for those links - I’ll take a look…

“RAID is not backup!”

Absolutely! However, in-context ( :wink: ), the number of times I’ve seen people pull drives from unchecked RAID arrays and then be confused because their system has crashed … and hearing “but it’s RAID5?!”.