Like on some level, it's pretty fucked up that this entire protocol (correct word?) essentially posits that there's a valid reason you might WANT to federate with known nazi servers. Not directly, but that's how it works in practice. Whenever a new person sets up a server, they essentially have to tell it "please don't give places like Poast access to my users", and in many cases those new admins are the least qualified people on the fedi to even know those nazi sludge holes exist.
@AnarchoNinaWrites Speaking as a coder, this is a common design problem, and I've fallen into it myself: "the software can't tell which are the nazi servers, sooooo... I guess the admins of each server will have to make their own minds up each time. That's democracy or something!"
No, mate, assuming that every admin will be more equipped than you to make complicated decisions just because your *software* can't ... is a copout.
Yes, this is a hard problem. So?
@AnarchoNinaWrites Don't get me wrong, I think it's *good* when software chooses to pass a lot of options to the folks running it. But you can't use it to evade responsibility for making good software. You end up with a system that is too complex for the admins to understand...
@fishidwardrobe @AnarchoNinaWrites It could be opt-out rather than opt-in
@sashin @AnarchoNinaWrites What could?
Out or in -- if you make the admins work out the technical pros and cons themselves, you're making their job harder. Up to a point that can be empowering, but eventually we get to (guy sitting at desk with lemon press being pelted by lemons from a chute)
@fishidwardrobe @AnarchoNinaWrites I'm imagining we could have smart blocklists, lists of instances and individuals to block, that could be subscribed to. That would radically reduce the amount of work moderation takes, so the same instance or individual doesn't have to be re-blocked over and over again.
The Mastodon software could ship with one or more of these blocklists subscribed as a sane default, so individual instances don't need to reinvent the wheel of moderation each time.
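To make it concrete, here's a minimal sketch of what a subscribable list could look like (every type, field, and URL here is invented for illustration; none of this is an existing Mastodon API):

```typescript
// Sketch of a subscribable blocklist. All names are hypothetical.
type BlockEntry = {
  domain: string;                   // e.g. "nazi-sludge.example"
  severity: "silence" | "suspend";  // how hard to block
  reason: string;                   // human-readable rationale
};

type Blocklist = {
  publisher: string;                // who maintains this list
  updated: string;                  // ISO timestamp of last revision
  entries: BlockEntry[];
};

// Fetch a subscribed list and drop anything the local admin has
// explicitly allowed, so local decisions always override the list.
async function syncBlocklist(
  url: string,
  localAllows: Set<string>,
): Promise<BlockEntry[]> {
  const res = await fetch(url);
  if (!res.ok) throw new Error(`blocklist fetch failed: ${res.status}`);
  const list = (await res.json()) as Blocklist;
  return list.entries.filter((e) => !localAllows.has(e.domain));
}
```

The key design choice in this sketch is that a subscribed list only ever *suggests* blocks; the local admin's own allow decisions win.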
@sashin @AnarchoNinaWrites We tried blocklists, then shitty folks got control of some of them. This is not a one-off; it happened on Twitter too. Still, there may be some value in them.
I'd be tempted to look at what works now, and change the software to make it easier. Say, build in a registry recording which admins blocked what, and why, plus a federated sideband channel for admins to talk to each other.
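Purely as a sketch of the registry idea (the record shape and field names are made up, not any existing ActivityPub extension):

```typescript
// Hypothetical record for a shared "who blocked what, and why" registry.
type BlockRecord = {
  admin: string;        // who made the call, e.g. "@mod@a.example"
  target: string;       // domain that was blocked
  action: "silence" | "suspend";
  reason: string;       // free-text rationale, visible to other admins
  recordedAt: string;   // ISO timestamp
};

// Before federating with an unfamiliar server, an admin could ask:
// "who has already blocked this domain, and what did they say about it?"
function priorBlocks(registry: BlockRecord[], domain: string): BlockRecord[] {
  return registry
    .filter((r) => r.target === domain)
    .sort((a, b) => b.recordedAt.localeCompare(a.recordedAt)); // newest first
}
```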
@fishidwardrobe @AnarchoNinaWrites I guess that kind of power is like a honeypot for bad people. Or perhaps power itself is that.
@sashin @fishidwardrobe I am literally talking to you here, on this server, because a bunch of self-appointed fedi safety police, intimately connected to blocklists, decided to chase me offline by pretending that "Pig Empire" was an antisemitic slur. They didn't realize they'd fucked with the wrong trans girl. That happened. It's pinned in my profile.
The idea isn't useless, but if the implementation itself is ad hoc and there's nobody to watch the watchers, it can go very sideways.
@AnarchoNinaWrites @sashin My naive feeling here is that folks *owning* blocklists is bad. "This is my list full of all the people I disagree with" -- a recipe for trouble.
@AnarchoNinaWrites @fishidwardrobe If anything it's a slur against pigs!
@fishidwardrobe @AnarchoNinaWrites
Future Me will find a better solution!
*future Me moves on to the next project, leaving only a // fix this later note attached*
Moderation is HARD. It's also 'impossible' if you hold it to a perfectionist 'do it exactly right' standard. But hard doesn't mean you just look at it once, get discouraged, and assume it will work out fine.
Take steps in a direction, and try to figure out what else will work. Talk about it.
The weird ones (to me) are the folks who insist 'this works fine!'
@fishidwardrobe @AnarchoNinaWrites Public key crypto has this “web of trust” model that I think might work here. Like, you could basically set a threshold, and if some server is further removed than that, it's blocked by default. Point being that there's been a lot of work done in this area over the past few decades, so doing something for Mastodon wouldn't mean just making something up.
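A rough sketch of that threshold idea, assuming each server publishes a list of peers its admin endorses (the endorsement graph is invented; nothing like it exists in Mastodon today):

```typescript
// Map each server's domain to the domains its admin endorses.
type TrustGraph = Map<string, string[]>;

// Breadth-first search: how many endorsement hops from us to the target?
function trustDistance(graph: TrustGraph, origin: string, target: string): number {
  if (origin === target) return 0;
  const seen = new Map<string, number>([[origin, 0]]);
  const queue = [origin];
  while (queue.length > 0) {
    const current = queue.shift()!;
    const dist = seen.get(current)!;
    for (const next of graph.get(current) ?? []) {
      if (!seen.has(next)) {
        if (next === target) return dist + 1;
        seen.set(next, dist + 1);
        queue.push(next);
      }
    }
  }
  return Infinity; // no chain of endorsements reaches the target
}

// Anything further than `threshold` hops (including unreachable) is
// blocked by default; the admin can still allow it explicitly.
function blockedByDefault(
  graph: TrustGraph,
  origin: string,
  target: string,
  threshold: number,
): boolean {
  return trustDistance(graph, origin, target) > threshold;
}
```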
@complexmath @AnarchoNinaWrites Nazi servers block instances, too. So how do you know which blocks to listen to?
With the Web of Trust you are supposed to certify that each person "next to you" is trustworthy, which includes them not certifying people at random, and that's pretty hard to judge. It was a clever idea, but I think many people would say a flawed one.
@fishidwardrobe to clarify, this would be for vetting servers, not people. But yeah the trick is defining exactly what adjacency means in this context. Mostly I was thinking that a centralized block-list won’t work because that would imply a universal set of criteria for what type of comms people want.
@complexmath I'm talking about admins blocking instances, not people blocking other people (which has its own, different technical issues here). It's not a question of "what comms people want" but whether the server promotes/allows hate speech, for example.