@cstross @graydon We've got an enormous list of computation problems that we don't have the resources to compute properly. What we don't have are the ways to build the software systems to do those computations, because of their complexity.
And any chemist can find a reasonable, grant-supported way to use an arbitrary amount of compute 8)
@etchedpixels @cstross @graydon Is it the _resources_ we don't have, though? Or something else? We're seeing the limits of IT as we currently understand it — I think.
A breakthrough in, say, human-machine interaction, in machines being able to perceive the world and act in it as we do: that's not going to come from more computing _power_ but from a breakthrough in techniques. If it comes at all.
@fishidwardrobe @cstross @graydon We are hitting lots of limits: on silicon sizes, on power, on validation (hardware and software), on correctness, on data sets, and many more.
A human brain weighs about 1.5 kg, outperforms an LLM, and doesn't require a large power plant, so there are clearly better ways of doing some kinds of computation (although humans of course suck at many kinds of computation too, so it's an iffy generalisation).
@etchedpixels @cstross @graydon I'm not sure the human brain is doing "computation" as we know it, and there's a big problem hiding in that phrase: "as we know it".
@fishidwardrobe @cstross @graydon If the law of requisite variety is indeed correct, we may well never truly be able to know how we think, because it would take something with more state than we have to distinguish all the states we can be in and understand the relationships between them.
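(For anyone who hasn't run into it: Ashby's law of requisite variety, in its usual entropy form, says roughly that H(outcomes) >= H(disturbances) - H(regulator), i.e. a regulator can only remove as much uncertainty from the outcomes as it has variety of its own. The symbols here are just the textbook ones, nothing brain-specific; the point is that a model with less state than the system it models can't distinguish all of that system's states.)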
@etchedpixels @fishidwardrobe @cstross We also have a historical tendency to put too much weight on calculation. (That would be "any weight at all.")
We evolved, so everything we can do is an extension of something some other organism can do, and it had to be not-actively-harmful the whole time, back into the deeps of time. We're not calculators. We are plausibly signal processors. (And for the most recent few tens of thousands of years out of those hundreds of millions, we can math.)
@etchedpixels @fishidwardrobe @cstross It's at least decently plausible that "intelligence" is "usually correct reflexes about what to ignore as irrelevant".
Attempting to simulate this is often useful (signal processing is a large and productive field), but it's also really difficult to describe once processing layers start interacting with each other, and anybody with any knowledge of biology sighs and expects that in organisms it's nothing like that neatly separated into layers.
@graydon @etchedpixels @cstross the word "reflex" is doing some really heavy lifting here. Historically, "reflex" usually just means "we don't understand this", which of course is correct in this case, if circular.
@fishidwardrobe @etchedpixels @cstross Precise language about the uncomprehended is a tall ask.
Whatever it is, it's fast (by meat standards); it works in a fruit fly; it works in squid; it's an accumulation of failures to die; it's not likely there's anything like a loop involved.
"Something dendritic in organization, possessing fractal dimension, and able to change in response to outcomes sometimes" all seem much more likely than not. But we really don't know what's going on in there.
@graydon @etchedpixels @cstross this is true.