These are evil and a couple are missing or broken :(
Oh and instead of screws in a bunch of places it has brass taper pins, shown here under the microscope :/
Everything has been cleaned, rinsed and dried and I started reassembly but I’ll probably finish it tomorrow.
I thought the mainspring was a very beautiful colour.
The hallmarks in the case are: lion passant (sterling silver), three wheat sheaves with an upturned sword (Chester Assay Office) and a gothic P, which indicates 1878 as the year of manufacture.
This is a “fusee” style watch mechanism, made around 1878. Apparently by that point the fusee had been obsolete for over a hundred years, but this seems like it was quite a budget movement - it only has 3 jewels! Everything else is just metal-on-metal pivots.
It’s probably going to go back together no more broken than it was, at least.
Well-written piece by @distel about a serious loss to the fediverse: the departure of playvicious.social, its most prominent Black-oriented instance: http://weirderearth.de/goodbyepv.html
I think there are a few things here to reflect on... most critically, the way the fediverse at large participates in the racism that is systemic in our society. Much of this is social, and comes down to the need for communities to prioritise their values as a group. And I agree with all of that analysis in the post.
I’ve been doing this for 20 years and I feel like I’m still only beginning to work it out 😂
Also I feel like a lot of the “just self-host ...” FOSS crowd has kind of missed the existence of this problem.
So anyway. I don’t know how to fix this problem exactly but it feels helpful to have gone down this thought process.
I suspect one answer lies in a belt-and-braces sandbox-and-capabilities approach, to try and cut off the worst of the “I have to re-deploy this Right Now with some changed environment or dependency to not get owned” problem. Fortunately this is the direction I’ve already been taking with network-listening services (earlier with AppArmor, more recently nsjail).
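For concreteness, a config-style sketch of what the nsjail route can look like for a listening service. Every specific here (paths, uid/gid, port, binary name) is a placeholder I've invented for illustration, not something from the posts, and a real service would probably need additional bind mounts:

```shell
# Hypothetical nsjail invocation for a network-listening service.
# Paths, uid/gid and port are placeholders; requires nsjail installed.
# -Ml puts nsjail in listen mode, forking one jail per TCP connection,
# so a compromised worker is confined to the chroot with no privileges.
nsjail -Ml --port 8080 \
  --chroot /srv/myservice \
  --user 99999 --group 99999 \
  --disable_proc \
  -- /srv/myservice/bin/server
```

The idea is that even when a dependency has a known hole, the sandbox buys you time: the jail limits what an exploit can reach, so the re-deploy stops being a Right Now emergency.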
These are just moving the problem around. Best case: you’re just replacing some hardware that died (or a VM provider that went out of business or a filesystem that failed or whatever), but even then you’ve got to update DNS, SSH keys, maybe TLS certs, restore a backup and so on.
And in a worse but also very likely case you’re having to re-deploy because there’s been a breaking change in a dependency (maybe an urgent security fix?) and then your automation just re-introduces the problem!
And it’s an interesting problem because it /seems/ like there are some obvious solutions: automate the deployment as much as possible with a configuration management system, or try and pre-bundle the dependencies in something like an AppImage or a container.
But I realised these are kind of equivalent: if your configuration-management automation is going to be useful, it’s basically going to produce the same output you’d bake into a container anyway.
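A minimal sketch of that equivalence, with an entirely hypothetical service (the base image, package and file names are mine, not from the posts): the “pre-bundle” route just writes down the same dependency set a configuration-management run would converge the host to.

```shell
# Hypothetical sketch: pre-bundling dependencies as a container image.
# The base image, package and config path are illustrative placeholders.
cat > Dockerfile <<'EOF'
FROM debian:bookworm-slim
# Enumerating packages here is the same artefact a config-management
# run would produce by converging the host to this package set.
RUN apt-get update \
 && apt-get install -y --no-install-recommends nginx \
 && rm -rf /var/lib/apt/lists/*
COPY site.conf /etc/nginx/conf.d/site.conf
EOF
# Building it (requires Docker or Podman):
#   docker build -t myservice .
```

Either way, when an urgent dependency fix lands, the list of things to rebuild is the same list; only where it's written down differs.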
And I just won’t have the energy or enthusiasm for the fiddliness of that step, so the thing stays broken. Sometimes for months, sometimes for years.
So I’ve been thinking about what You’re Gonna Have To Re-Deploy It means as a strong requirement for deploying new systems, along with You’ve Got To Monitor It and You’d Better Be Taking Backups.
There are black zones of shadow close to our daily paths.
Sometimes Kiwicon organiser.
A paid, early access, strongly moderated Mastodon instance hosted entirely in New Zealand.