
#aihype


"One hint that we might just be stuck in a hype cycle is the proliferation of what you might call “second-order slop” or “slopaganda”: a tidal wave of newsletters and X threads expressing awe at every press release and product announcement to hoover up some of that sweet, sweet advertising cash.

That AI companies are actively patronising and fanning a cottage economy of self-described educators and influencers to bring in new customers suggests the emperor has no clothes (and six fingers).

There are an awful lot of AI newsletters out there, but the two which kept appearing in my X ads were Superhuman AI run by Zain Kahn, and Rowan Cheung’s The Rundown. Both claim to have more than a million subscribers — an impressive figure, given the FT as of February had 1.6mn subscribers across its newsletters.

If you actually read the AI newsletters, it becomes harder to see why anyone’s staying signed up. They offer a simulacrum of tech reporting, with deeper insights or scepticism stripped out and replaced with techno-euphoria. Often they resemble the kind of press release summaries ChatGPT could have written."

ft.com/content/24218775-57b1-4

Financial Times · AI hype is drowning in slopaganda, by Siddharth Venkataramakrishnan
Replied in thread

@dbat

The problem (which I have lamented a lot about over the past few years) is that even our own field is not immune from the #AIhype.

Tons of papers appear in "reputable publications" that contain utterly unscientific conjecture and unsourced, bold claims about what AI supposedly does or is capable of. Spectacle gets funding, I guess.

I am hard-pressed to recommend you *any* sources, because they're all either poisoned by the AI hype or so technical that a layperson wouldn't understand them.

(2/4)

🎈 Techbro did everything right by SV standards; he just made the mistake of getting caught, that's all.

「 fintech founder in the US has been charged with fraud after it was found that his artificial intelligence shopping app relied heavily on Philippines call centre employees to complete the purchases manually 」

au.finance.yahoo.com/news/ai-s

Yahoo · AI shopping app found to be actually powered by Philippines call centre workers, by Vishwam Sankaran

One fundamental thing I wish was in more disability writing about AI: for example, this unquestioning PR hype by @steven_aquino gets so close to hitting on the point I wish more would tackle. Even with the best technology, we're still disabled. This is ultimately why I find techno-ableism to be particularly misleading. To be clear, he isn't peddling techno-ableism, but many in the blind community do. When this tech moves behind an expensive paywall, which it will, the bandage for society's inaccessibility will be locked away behind it, and I wish more people examined what happens when #Enshittification comes for AI while society and ableism haven't changed. curbcuts.co/blog/2025-4-9-goog #AI #AIHype

Curb Cuts · Google’s April Pixel Drop Shows how AI Can Be more Transformative than trivial pursuit · As reported by 9to5Google’s Abner Li, Google this week released its monthly Pixel Drop for Android. April’s edition brings with it but one lone feature: Gemini Live’s new Astra camera. Li writes the functionality is now available to all Pixel 9 phones, free of charge. The “Astra” name refers to…

Announcing
AITRAP,
The AI hype TRAcking Project

Here:
poritz.net/jonathan/aitrap/

What/why:
I keep a very random list of articles about AI, with a focus on hype, ethics, policy, teaching, IP law, some of the CS aspects, etc., now up to 1000s of entries.

I decided to share, in case anyone is interested; I'm thinking of people who like @emilymbender, @alex, & @davidgerard . If there is a desire, I'll add a UI to allow submission of new links, commentary, hashtags.

www.poritz.net · AITRAP -- AI hype Tracking Project

Once again, Germany will get a minister of research without the slightest clue of
- how research is done
- how research is organized
- what the current issues are
- etc.

However, she has the right party membership and has already eloquently demonstrated her ability to combine having no idea with holding a strong opinion (#blockchain).

That, and the fact that we are still in an #aihype, makes me fear the worst. Mark my words!

Have you heard about the AI 2027 forecast? I don't believe it. IMHO the least plausible part is the leap from AI coding to AI research - the story totally underestimates the unimaginably vast space of potentially plausible hypotheses that good researchers must use their knowledge and understanding to prune down to the hypotheses actually worth testing. Coding agents are not going to cut it (and none that could are coming anytime soon). #aihype #GenAI #LLM #AI2027

Inspired by @Iris's recent poll, I suppose... I’m writing up my #psych #phd thesis, and am currently looking at the methods chapter. I’m describing all the samples, procedures, measures, statistical tools and procedures I’ve used in my articles, and ethical considerations. However, although I haven’t seen this in other theses, and although nobody has told me I need to do it, I feel like including a section on «the use of #AI technologies» (read: ChatGPT and other LLMs).

The thing is, I’m getting the sense that this has become extremely prevalent in a very short amount of time. If nothing else, then to use them «as a brainstorming partner», or as help to paraphrase sentences for clarity or fix punctuation. And the reason I want to make a statement out of this in my thesis is that I haven’t. Not one bit, in the least sense. I never wanted to, and I’m very happy I haven’t.

Is this worth making a statement of in the methods chapter? How would you go about writing it? What info would you include? Do you know of good examples of these kinds of disclaimers/statements in academic writing? #AIhype

MM: "One strange thing about AI is that we built it—we trained it—but we don’t understand how it works. It’s so complex. Even the engineers at OpenAI who made ChatGPT don’t fully understand why it behaves the way it does.

It’s not unlike how we don’t fully understand ourselves. I can’t open up someone’s brain and figure out how they think—it’s just too complex.

When we study human intelligence, we use both psychology—controlled experiments that analyze behavior—and neuroscience, where we stick probes in the brain and try to understand what neurons or groups of neurons are doing.

I think the analogy applies to AI too: some people evaluate AI by looking at behavior, while others “stick probes” into neural networks to try to understand what’s going on internally. These are complementary approaches.

But there are problems with both. With the behavioral approach, we see that these systems pass things like the bar exam or the medical licensing exam—but what does that really tell us?

Unfortunately, passing those exams doesn’t mean the systems can do the other things we’d expect from a human who passed them. So just looking at behavior on tests or benchmarks isn’t always informative. That’s something people in the field have referred to as a crisis of evaluation."

blog.citp.princeton.edu/2025/0

CITP Blog · A Guide to Cutting Through AI Hype: Arvind Narayanan and Melanie Mitchell Discuss Artificial and Human Intelligence · Last Thursday’s Princeton Public Lecture on AI hype began with brief talks based on our respective books. The meat of the event was a discussion between the two of us and with the audience. A lightly edited transcript follows. Photo credit: Floriaan Tasche. AN: You gave the example of ChatGPT being unable to comply with […]