I spend a lot of time thinking about Hobby Sysadmining, or as I like to think of it sometimes, Computer Gardening.

A recurring theme in “self hosting”, whether open source projects or my own code, is that I’ll deploy something, use and lightly maintain it for a while (years, maybe), then something will happen that means I have to re-deploy it: a hardware failure, a breaking change in some other part of the ecosystem, and so on.

And I just won’t have the energy or enthusiasm for the fiddliness of that step, so the thing stays broken. Sometimes for months, sometimes for years.

So I’ve been thinking about what You’re Gonna Have To Re-Deploy It means as a strong requirement for deploying new systems, along with You’ve Got To Monitor It and You’d Better Be Taking Backups.

And it’s an interesting problem because it /seems/ like there are some obvious solutions: automate the deployment as much as possible with a configuration management system, or try to pre-bundle the dependencies in something like an AppImage or a container.

But I realised these are kind of equivalent: your automation through configuration management is basically going to produce a similar output to what you’d put in a container, if it’s going to be useful.
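To make that equivalence concrete, here’s a hypothetical sketch (the “widgetd” package, paths, and file names are made up, not from any real project): the same deploy steps expressed once as the kind of provisioning script a configuration management run boils down to, and once as the equivalent container build.

```shell
# Hypothetical deploy steps for an imaginary service "widgetd".
# Whether these run via a config management tool against a live host,
# or as build steps for an image, the end state is much the same.

# As an idempotent provisioning script:
apt-get update && apt-get install -y widgetd     # install the dependency
install -m 0644 widgetd.conf /etc/widgetd/widgetd.conf
systemctl enable --now widgetd

# As the equivalent container build (Containerfile):
#   FROM debian:stable
#   RUN apt-get update && apt-get install -y widgetd
#   COPY widgetd.conf /etc/widgetd/widgetd.conf
#   CMD ["/usr/sbin/widgetd", "--foreground"]
```

Either way, the “useful output” is a reproducible recipe for the same installed files and running process — which is the point: the recipe itself is where the fragility lives.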


Both of these just move the problem around. Best case: you’re just replacing some hardware that died (or a VM provider that went out of business, or a filesystem that failed, or whatever), but even then you’ve got to update DNS, SSH keys, maybe TLS certs, restore a backup and so on.

And in a worse but also very likely case you’re having to re-deploy because there’s been a breaking change in a dependency (maybe an urgent security fix?) and then your automation just re-introduces the problem!


So anyway. I don’t know how to fix this problem exactly but it feels helpful to have gone down this thought process.

I suspect one answer lies in a belt-and-braces sandbox and capabilities approach, to cut off the worst of the “I have to re-deploy this with some changed environment or dependency Right Now to not get owned” problem. Fortunately this is the direction I’ve already been taking with network listening services (earlier with AppArmor, more recently nsjail).
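For the curious, nsjail’s listening mode is one way to do this: it accepts TCP connections itself and runs the service in fresh, isolated namespaces per connection. A minimal sketch along the lines of the upstream examples — the chroot path, port, uid/gid, and the “widgetd” binary with its “--inetd” flag are all placeholders, not a recommendation:

```shell
# Sketch: accept TCP connections on port 8080 and, for each connection,
# run the service binary inside fresh namespaces, chrooted into a
# minimal root, as an unprivileged user, with /proc disabled.
nsjail \
  --mode l --port 8080 \
  --chroot /opt/widgetd-root \
  --user 99999 --group 99999 \
  --disable_proc \
  -- /usr/sbin/widgetd --inetd
```

The appeal for the re-deploy problem is that a compromise of the service is contained, which buys time: an urgent dependency fix becomes less of a drop-everything event.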


Also I feel like a lot of the “just self host ...” FOSS crowd have kind of missed the existence of this problem.


I’ve been doing this for 20 years and I feel like I’m still only beginning to work it out 😂


@fincham the FOSS crowd definitely ignores the complexity involved in running software in prod. Even with good automation I'm not super thrilled at the idea of redeploying a full, working application stack.

@fincham extremely relatable set of problems, also w.r.t. personal machine state and deployed device firmware/images, omg

@r yeah it might be one of those good old unsolvable problems too

@fincham @r what do you mean you don't want to maintain puppet manifests for your personal hardware until the end of time

Cloud Island

A paid, early access, strongly moderated Mastodon instance hosted entirely in New Zealand.