
@cstross @graydon We've got an enormous list of computation problems that we don't have the resources to compute properly. What we don't have are ways to build the software systems to do those computations, because of their complexity.
And any chemist can find a reasonable, grant-supported way to use an arbitrary amount of compute 8)


@etchedpixels @cstross @graydon Is it the _resources_ we don't have, though? Or something else? We're seeing the limits of IT as we currently understand it — I think.

A breakthrough in, say, human-machine interaction, in machines being able to perceive the world and act in it as we do; that's not going to be solved by more computing _power_ but by a breakthrough in techniques. If at all.

@fishidwardrobe @cstross @graydon We are hitting lots of limits - on silicon sizes, on power, on validation (hardware and software), on correctness, on data sets and many more.
A human brain weighs about 1.5 kg, outperforms an LLM and doesn't require a large power plant, so there are clearly better ways of doing some kinds of computation (although humans of course suck at many kinds of computation too, so it's an iffy generalisation).
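The power-plant point can be made concrete with back-of-envelope arithmetic. All numbers below are rough public ballparks chosen for illustration (roughly 20 W for a brain, an H100-class accelerator board, a large training cluster), not figures from the thread:

```python
# Back-of-envelope power comparison; every number is a rough assumption.
brain_watts = 20              # a human brain runs on roughly 20 W
gpu_watts = 700               # one H100-class accelerator, approximate board power
cluster_gpus = 10_000         # a large LLM training cluster, order of magnitude
cluster_watts = gpu_watts * cluster_gpus   # accelerators alone, ignoring cooling

print(f"cluster / brain ≈ {cluster_watts / brain_watts:,.0f}x")  # → 350,000x
```

Even granting an order of magnitude of slack in either direction, the gap is several orders of magnitude, which is the substance of the "better ways of doing some kinds of computation" claim.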

@etchedpixels @cstross @graydon I'm not sure the human brain is doing "computation" as we know it, and there's a big problem to solve: "as we know it"

@fishidwardrobe @cstross @graydon If the law of requisite variety is indeed correct, we may well never truly be able to know how we think because it would require something with more state than us to distinguish all the state we have and understand the relationships.
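Ashby's law of requisite variety can be made concrete with a toy brute-force check: a regulator with fewer states than the disturbances it faces cannot pin the outcome down. The outcome table and function below are illustrative assumptions, not anything from the thread:

```python
from itertools import product

def best_outcome_variety(n_disturbances, n_actions):
    """Brute-force check of Ashby's law on a toy system.

    Outcome table: outcome = (d + a) % n_disturbances, so each action
    permutes disturbances onto outcomes (a worst case for the regulator).
    A policy picks one action per disturbance; return the smallest number
    of distinct outcomes any policy can force.
    """
    best = n_disturbances
    for policy in product(range(n_actions), repeat=n_disturbances):
        outcomes = {(d + a) % n_disturbances for d, a in enumerate(policy)}
        best = min(best, len(outcomes))
    return best

# With 6 disturbance states but only 2 actions, no policy gets below
# 3 distinct outcomes; with matching variety (6 actions) it gets to 1.
print(best_outcome_variety(6, 2))  # → 3
print(best_outcome_variety(6, 6))  # → 1
```

The achievable outcome variety never drops below disturbance variety divided by regulator variety, which is the formal version of "you need at least as much state as the thing you're trying to model".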

@etchedpixels @fishidwardrobe @cstross We also have a historical tendency to put too much weight on calculation. (That would be "any weight at all.")

We evolved, so everything we can do is an extension of something some other organism can do, and it had to be not-actively-harmful the whole time, back into the deeps of time. We're not calculators. We are plausibly signal processors. (And for the most recent few tens of thousands of years out of those hundreds of millions, we can math.)

@etchedpixels @fishidwardrobe @cstross It's at least decently plausible that "intelligence" is "usually correct reflexes about what to ignore as irrelevant".

Attempting to simulate this is often useful (signal processing is a large and productive field), but it's also really difficult to describe once the processing layers start interacting with each other; and anybody with any knowledge of biology sighs and expects that, in organisms, it's nothing like that neatly separated into layers.

@graydon @etchedpixels @cstross the word "reflex" is doing some really heavy lifting here. Historically, "reflex" usually just means "we don't understand this", which of course is correct in this case, if circular.

@fishidwardrobe @etchedpixels @cstross Precise language about the uncomprehended is a tall ask.

Whatever it is, it's fast (by meat standards); it works in a fruit fly; it works in squid; it's an accumulation of failures to die; it's not likely there's anything like a loop involved.

"Something dendritic in organization, possessing fractal dimension, and able to change in response to outcomes sometimes" all seem much more likely than not. But we really don't know what's going on in there.

@etchedpixels @fishidwardrobe @cstross @graydon Well... I'd not say that a human outperforms an LLM. A human clearly outperforms an LLM when the task is "emulate a human", but that's hardly a fair benchmark. If the task were "translate between two randomly chosen languages", an LLM would already outperform humans. Probably for quiz-style questions, too.
@etchedpixels @cstross @fishidwardrobe @graydon Actually, that might be a fun task. Reverse Turing test: a human and an LLM, where both try to convince the judge that they are the LLM :-).

@pavel @graydon @etchedpixels @fishidwardrobe @cstross last week ChatGPT 4 placed last in the Paraparaumu Presbyterian Church weekly pub quiz after being the only "contestant" to get every single question wrong.

@pavel @graydon @etchedpixels @fishidwardrobe @cstross Yes, but the LLM has training data on the subject and the average human doesn't. Find a human who speaks both languages fluently and I'm confident they would produce a higher-quality translation.

@Insanitree @graydon @etchedpixels @fishidwardrobe @cstross Of course. "LLM has training data on the subject". That was my point.

@pavel @graydon @etchedpixels @cstross "translate between two randomly chosen languages" is also an "emulate human" task. A _random_ human will be bad at that, it's true, but they will use _far_ fewer resources; and they will be liable if wrong…