1: "Join the fediverse, because decentralized systems are superiour to centralized monoliths and give back control!!"
2: "Fine. But will I be able to do <XYZ> in there?"
1: "Of course not because this is hard to impossible to do right in a decentralized system and you won't need it anyway!!"

(taken from: "Diary of inconvenient discussions", random chapter.)

@z428 Funny, but somewhat true. What was the thing that prompted this?

@jens ... ended up with a dispute again elsewhere on "global search" (finding entries associated with a particular hashtag or keyword across all instances). 😐 Not sure if this claim still holds true, but it's not the first time I ended up with something like that. 😟

@z428 Right.

IMHO this really is a social dilemma that technology just makes visible. For any question, we're driven to seek a universal truth - when in reality we're satisfied when the truth we get covers a wide enough area to not leave (too many) inconsistencies.

[For example, when you and the peers you discuss things with are not treated unjustly, injustice doesn't exist.]

Centralized systems embrace universal truth-seeking by providing a single source of it...

@z428 ... but what doesn't fit that is simply ignored.

Decentralized systems embrace the second part by providing only the truth for which there is consensus "within reach".

It's genuinely difficult to debate whether one or the other approach is better. Both simply ignore stuff to be workable.

@z428 The only reason I prefer decentralized systems (distributed, really) is that I much prefer consensus building over unilateral decision making.

But both can produce rubbish.

@jens Agree. My problem (being a bit bitter here) is that it feels like sort of a "golden hammer" approach: We build systems the way we did in the 1990s. And whatever doesn't fit these paradigms - small local servers under individual control - often simply gets talked down. That's a bad approach, especially if we want to reach non-tech users and Improve The World. 😉

@z428 No, I get it.

E.g. in ActivityPub, the concepts are exact copies of email (AFAICT) with a standard interface for mailing lists added (instead of the mere convention of using subscribe/unsubscribe in subject or body).

This introduces a distinction between clients and servers, and between client-server and server-server interactions, that is more embedded in HTTP-centralized thinking than is fundamentally necessary.

Nothing about this is wrong, but it shapes conversations one can have about it.
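
A minimal sketch of that split, assuming a hypothetical instance at example.social and omitting all authentication (OAuth, HTTP signatures); the outbox/inbox endpoints follow the ActivityPub convention, everything else here is made up for illustration:

```python
# Sketch of ActivityPub's two interaction types with hypothetical hosts.
# Client-to-server: a client POSTs to its own actor's outbox.
# Server-to-server: the home server then POSTs to remote inboxes.
import json
import urllib.request

ACTIVITY = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",
    "actor": "https://example.social/users/alice",  # hypothetical actor
    "object": {"type": "Note", "content": "Hello, fediverse!"},
}

def post_json(url: str, payload: dict) -> None:
    """POST an activity as JSON; authentication omitted for brevity."""
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/activity+json"},
        method="POST",
    )
    urllib.request.urlopen(req)

# Client-to-server: the client talks only to its home server.
post_json("https://example.social/users/alice/outbox", ACTIVITY)

# Server-to-server: the home server fans out to followers' inboxes.
# This is the "mailing list" half, and where the client/server split
# gets baked into the protocol.
for inbox in ["https://other.example/users/bob/inbox"]:
    post_json(inbox, ACTIVITY)
```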

@jens Yes, this "HTTP-centralized" thinking is pretty close to what I had in mind. I mean, after all, even while discussing "decentralized" solutions, isn't in the end *every* client-server approach centralized in some way or other, even if just on a smaller scale? And here we go again, back to P2P... 😀

@z428 Yes it is.

Again, this is not *necessarily* wrong. It depends on how you define a server and how you delegate tasks to it.

In client-server/centralized thinking, the server MUST make decisions on behalf of actors (aka processing) in order for everything to function.

In distributed thinking, they don't - they still serve a purpose, but largely as relays, caches, etc.

It's not impossible to *also* delegate processing to them, but that's not the default.
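
A minimal sketch of such a relay, with made-up addresses: it learns peers from incoming traffic and forwards opaque datagrams without ever parsing them, making no decisions on anyone's behalf.

```python
# Sketch of a "dumb relay" in the distributed sense: forward opaque
# datagrams to every other known peer, never inspect the payload.
import socket

RELAY_ADDR = ("0.0.0.0", 9999)  # hypothetical relay endpoint

def run_relay() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind(RELAY_ADDR)
    peers: set = set()  # learned from whoever sends us traffic
    while True:
        data, sender = sock.recvfrom(65535)
        peers.add(sender)
        for peer in peers:
            if peer != sender:
                # No parsing, no per-actor decisions: bytes in, bytes out.
                sock.sendto(data, peer)

if __name__ == "__main__":
    run_relay()
```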

@jens Yes, I agree. My point being: A lot of the things we want to achieve with decentralization seem to scream for an architecture of "smart clients" with servers acting as mere relays, but all too often we end up with "conventional" client/server setups - maybe because it's easier, but maybe also because of how we're used to thinking about these things. That's not necessarily bad, but it limits what can be done and may leave some of the problems at hand unsolved.

@z428 That's precisely it.

The main purpose of this little #interpeer project of mine is to provide a tech stack that makes smart clients or "local first" decision making so easy, it can become a default approach.

I guess that's the elevator pitch here, at least to the more technical crowd.

@z428 There is a lot that goes into doing this well, which is why it's sometimes hard to see this as the focal point - particularly when subprojects seemingly veer off in all kinds of directions.

I need to work on the messaging :)

@z428 For me, the crucial part is that a server *should not* perform any processing related to transferring data from one actor to another. Dumb relays of opaque data streams, definitely. Any kind of introspection-based decision making, not really.

There are edge cases. It's not always easy.

@jens @z428 In the Librecast project we talk a lot about "unicast thinking", which is this one-to-one centralized mindset that has grown up around all our Internet tech. Even multicast "experts" I've spoken to are guilty of it, and the protocols we have today are designed around it. The P2P crowd seem to grok multicast concepts and decentralised thinking better than most.

@jens @z428 There are some really poor decisions being taken by standards bodies right now that are heading in completely the wrong direction, because of this unicast/centralized mindset.

@dentangle @z428 To be fair, with multicast routing not available everywhere and encryption in the mix, unicast is a safer bet for many things. But I really do get your focus on it precisely because if there is no R&D there, then it will become the only bet soon enough.

@dentangle @z428 I was lucky in that one of my first jobs involved writing a cache server for distributing session data between HTTP hosts (something that didn't exist back in the day). At my second job, they had a different implementation based on multicast, which I got to extend and maintain. One could say that I got introduced to UDP and multicast immediately after sockets.
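
For readers who haven't used it, a minimal sketch of classic any-source UDP multicast on a LAN; the group address is arbitrarily chosen from the administratively scoped 239/8 range. Note that receivers join a group without naming any sender:

```python
# Classic (any-source) UDP multicast: receivers join a group address
# and see datagrams from any sender. Group/port chosen arbitrarily.
import socket
import struct

GROUP, PORT = "239.1.2.3", 5007

def receiver() -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    # Join the group: no source address anywhere. Any host may send
    # to 239.1.2.3 and we will receive it.
    mreq = struct.pack("4s4s",
                       socket.inet_aton(GROUP),
                       socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)
    while True:
        data, sender = sock.recvfrom(65535)
        print(sender, data)

def sender(payload: bytes) -> None:
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_MULTICAST_TTL, 1)  # LAN only
    sock.sendto(payload, (GROUP, PORT))
```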

@jens @z428 Sadly the "multicast" we have available in IP isn't really multicast at all. It's clear from early RFCs that *someone* had the right idea, but then it was messed up pretty much immediately.

@dentangle @z428 Now that's a take I find interesting. I think I understand, but I'm not 100% certain. Would you care to elaborate on that a little?

@jens @z428 the IP multicast in use is PIM (Protocol Independent Multicast), which was an early hack that depends on unicast routing tables.

All packets have a source unicast IP address that is used for loop detection.

Last year, the IETF deprecated inter-domain ASM (Any Source Multicast) in RFC8815, so only SSM (Single Source Multicast) remains, which requires a single (unicast) source in a JOIN.

This is not multicast.
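
A sketch of that point at the socket level, assuming Linux (Python may not expose IP_ADD_SOURCE_MEMBERSHIP, so its Linux value is hard-coded as a fallback) and a hypothetical source address: the SSM join itself must carry the source's unicast IP.

```python
# SSM join: the subscription names one unicast source up front.
# Linux-only; the constant's value (39) is the Linux one, an assumption.
import socket
import struct

GROUP, PORT = "232.1.1.1", 5007  # 232/8 is the SSM range
SOURCE = "198.51.100.7"          # hypothetical sender's unicast IP

IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", PORT))
# struct ip_mreq_source (Linux layout): group, interface, source.
# The source's unicast address is baked into the join; traffic from
# anyone else on the same group is filtered out.
mreq = struct.pack("4s4s4s",
                   socket.inet_aton(GROUP),
                   socket.inet_aton("0.0.0.0"),
                   socket.inet_aton(SOURCE))
sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)
```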

@dentangle @z428 Gotcha. RFC8815 is very informative here.

The TL;DR (for other readers):

ASM used to allow video conferencing where video data was sent from each participant to each other participant. SSM only allows broadcasting, a single source that everyone else receives.

RFC8815 recommends only SSM (on the Internet), largely because ASM support for IPv6 is lacking.

datatracker.ietf.org/doc/html/

@dentangle @z428 I guess it's worth pointing out that ASM can simulate SSM (i.e. use ASM, but only a single source). Without going into details, I suppose SSM is simpler.

At the same time, you can use multiple SSM groups to simulate ASM - but each group needs to be separately managed, adding overhead to routers and hosts alike.

Does that capture what you mean?
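
A sketch of what that simulation looks like on the receiving host, reusing the SSM join above with hypothetical source addresses: one explicit membership per source, which is exactly the per-group bookkeeping that ASM avoids.

```python
# Simulating ASM with per-source SSM joins: every participant means
# another membership on every receiver (and more router state).
import socket
import struct

IP_ADD_SOURCE_MEMBERSHIP = getattr(socket, "IP_ADD_SOURCE_MEMBERSHIP", 39)  # Linux value

def join_all(sock: socket.socket, group: str, sources: list) -> None:
    """One explicit membership per source; in ASM, the group alone suffices."""
    for src in sources:
        mreq = struct.pack("4s4s4s",
                           socket.inet_aton(group),
                           socket.inet_aton("0.0.0.0"),
                           socket.inet_aton(src))
        sock.setsockopt(socket.IPPROTO_IP, IP_ADD_SOURCE_MEMBERSHIP, mreq)

# A new participant joining the "conference" means every receiver
# must issue yet another join.
```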

@jens @z428 SSM is much simpler to configure, because it doesn't require Rendezvous Points. Just one setting on a router.

But it isn't multicast.

@jens @z428 You *can* tunnel ASM with SSM though, which is an interesting trick 🙂

@dentangle @z428 What stands in the way?

(I usually use multicast, and don't worry about configuration. I also typically use it locally.)

Intuitively, a group of single-source multicasts would behave the same as a multi-source multicast, at least if you ignore everything about how these groups come about.

@jens @z428 SSM requires that you know the unicast IP address of the source. How do you find that? DNS? What if it changes? What if there's no source yet?

@jens @z428 SSM is a completely broken way of thinking about multicast. It's another wrong step in the direction that we started on with PIM. It needs to be buried in a deep deep hole 🙂


@jens @z428 ASM has many problems, but deprecating it without trying to build a replacement is very short-sighted. There's been no proper R&D on multicast for more than a decade, and no standards taking it forward.

SSM is fine if all you care about is single-source video streaming, which is all most multicast folks think about. The point that's missing is that, while multicast is good at streaming, it is actually *better* at non-streaming applications. Non-intuitive, but true.

@dentangle @z428 Since I have a background in video streaming, I feel the need to point out that video streaming and streaming I/O are not as related as most people think, and that includes stream modes for network traffic.

@jens @z428 ASM has such negative connotations in the multicast world that we were advised (by the author of RFC8815) to call what we're doing something else.

We want Any Source Multicast, but we're calling our version either Multi-Source Multicast or Private Source Multicast (PSM).

@dentangle @z428 Conversely, I try to steer away from "streaming", as its historical baggage implies loss recovery. Nothing wrong with streaming as such.

But especially with video and audio, I much prefer to use the term casting, because it emphasizes distribution over other characteristics.

@jens True. But the idea of getting people in touch with like-minded folks virtually all over the world is something a lot of people with "niche" interests in my environment (including myself) always considered A Good Thing. The idea of a "small local instance" is quite the opposite of this.
