Microblog

I am currently microblogging on Mastodon: @jd7h@fosstodon.org.

Archives

2026 2025 2024 2023 2022 2021 2020 2019 2018 2017 2016 2015 2014

Most recent microblogs

Judith van Stegeren @jd7h@fosstodon.org

"SLOW LLM is a browser extension that makes LLMs appear to run very slowly. It works with ChatGPT and Claude."

slowllm.lav.io/

via webcurios.co.uk/webcurios-20-0

11:32 · May 02, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

I'm reading this paper by Bruno Latour and it's indeed wild: bruno-latour.fr/sites/default/
I bet he'd have all kinds of interesting things to say about coding agents...

Found via the digital garden of Maggie Appleton: maggieappleton.com/gathering-s

Page from a paper by Jim Johnson (Bruno Latour) "Mixing Humans and Nonhumans Together: The Sociology of a Door-Closer"

"Solved? Well, not quite. Here comes the deskilling question so dear to social historians of technology: thousands of human grooms have been put on the dole by their nonhuman brethren. Have they been replaced? This depends on the kind of action that has been translated or delegated to them. In other words, when humans are displaced and deskilled, nonhumans have to be upgraded and reskilled: This is not an easy task, as we shall now see.
We have all experienced having a door with a powerful spring mechanism slam in our face. For sure, springs do the job of replacing grooms, but they play the role of a very rude, uneducated porter who obviously prefers the wall version of the door to its hole version. They simply slam the door shut. The interesting thing with such impolite doors is this: if they slam shut so violently, it means that you, the visitor, have to be very quick in passing through and that you should not be at someone else’s heels; otherwise your nose will get shorter and bloody. An unskilled nonhuman groom thus presupposes a skilled human user. It is always a trade-off."

I was confused because the author's name is stated as "Jim Johnson" in the paper header, but wait...

Footnote from the paper: "The author-in-the text is Jim Johnson, technologist in Columbus, Ohio, who went to Walla-Walla University, whereas the author-in-the-flesh is Bruno Latour, sociologist, from Paris, France, who never went to Columbus nor to Walla-Walla University. The distance between the two is great but similar to that between Steven Jobs, the inventor of
Macintosh, and the figurative nonhuman character who/which says “welcome to Macintosh” when you switch on your computer. Thus I inscribed in my text American scenes to bridge the gap between the prescribed reader and the pre-inscribed one."
20:55 · May 01, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

Somehow I always end up writing a chapter of a book when I set out to write a tweet...

15:26 · May 01, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

I'm back from AI Engineer Europe 2026 in London! I've written a conference report of day 3 (April 10), which you can read at the Datakami website:

datakami.com/blog/2026-05-01-a

15:26 · May 01, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

I feel there's a parallel between investing in ETFs vs stockpicking, and publishing on the social media silos vs the indie web.

- Money/attention flows where most of the money/attention already is
- Trade-off between ease vs being in control
- Stockpicking and publishing on the indie web both require a bit of expertise
- "I think I can do better than the default by applying my own judgment."

19:00 · Apr 28, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

"Investors have decided that the future is agents! So you must make your system a series of agents! Even if there are much simpler ways to do it, and even ways that don't use LLMs.

The reason for that, of course, is that VCs believe that if you have an AI agent that can do a human job, you can charge for the software like it was a human service (e.g. charging $10k/month rather than $100/month), which they would obviously love."

18:13 · Apr 28, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

"The industry has become obsessed with the idea of a “one-person, billion-dollar company,” and various AI startups and venture capital firms are now trying to push founders to try to create “autonomous” companies that have few or no employees."

"[Replacing software engineers with coding agents] will probably work as long as AI providers are taking a bath on their models, but what happens when all your "employees" ask for a 10x pay raise simultaneously? did tech bros reinvent the union from first principles?"

"Investors have decided that the future is agents! So you must make your system a series of agents! Even if there are much simpler ways to do it, and even ways that don't use LLMs.

The reason for that, of course, is that VCs believe that if you have an AI agent that can do a human job, you can charge for the software like it was a human service (e.g. charging $10k/month rather than $100/month), which they would obviously love."

"Given that Claude Code is reportedly writing 70-90% of the code for its own next version, there are clearly use cases where it's working out. I would read this more as industry transformation growing pains--a transition period where overexcited people are figuring out the hard way where this works and where it doesn't."

"[A] few of us end up writing the fixes for systemic issues and core pieces of code by hand while the LLM experts iterate quickly on surface bugs. It's similar to how we used to divide work between senior and junior coders, except with the downside that the LLM will never graduate past junior coder level no matter how much training it receives."

"I have librarian colleagues who never coded before who have used it successfully to write things like format conversion scripts. These are cases where without AI assistance, the thing just wouldn't get done at all-- their library wouldn't hire a programmer to do this stuff even without the freeze--but it's a huge boon to suddenly be able to make all these old historical records compliant with a modern catalog standard, or other activities along those lines."

18:10 · Apr 28, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

Remember 43things?

en.wikipedia.org/wiki/43_Things

I found my old profile in the Internet Archive today and guess what? Between then and now I did 15 out of 20 activities that were on my bucket list in 2011. Not a bad score at all. 😁

11:58 · Apr 26, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org 15:09 · Apr 25, 2026 Permalink
Leontien Talboom @makethecatwise@digipres.club

I love creating things inspired by my work, and the response to my digital preservation jumpers has been amazing! 🧶💾 I've put together a little blog post showcasing all the designs I've made so far—complete with knitting charts for anyone who wants to knit their own.

digitalpreservation-blog.lib.c

15:02 · Apr 25, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

RT Bruno Dias‬ ‪@brunodias.bsky.social‬

[louder, as if that'll improve reception] THE BLUESKY DEVS WOULD BE VERY UPSET BY YOUR JOKES ABOUT VIBE CODING IF THEY COULD LOAD YOUR POSTS

bsky.app/profile/brunodias.bsk

09:02 · Apr 25, 2026 Permalink
BLNDD002 @gray@merping.synth.download

at a job interview

"whats your biggest weakness?"

"understanding the semantics of a question but ignoring the pragmatics"

"could you give an me an example?"

"yes i could"

15:15 · Apr 24, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

This is a handy list for comparing the features of vector databases (holy moly, there are a lot of them), including year of launch, open-source-ness, licences, and implementation language: superlinked.com/vector-db-comp

11:52 · Apr 23, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

"We used Opik, an open-source tool made by Comet, as our prompt monitoring tool because it follows Comet’s philosophy of simplicity and ease of use, which is currently relatively rare in the LLM landscape."

Shots fired! From chapter 2 of the LLM Engineer's Handbook by Maxime Labonne and Paul Iusztin.

11:22 · Apr 23, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

"It's hard to read The Soul of a New Machine in 2026 without wondering whether all this AI hype is really so new."

newsletter.dancohen.org/archiv

10:10 · Apr 21, 2026 Permalink
Heliograph @Heliograph@mastodon.au

yes :Froglet:

Drawing of a green round frog moving a seance board. Above them the text "Fuck chatgpt I'm asking ghosts."
19:46 · Apr 20, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

Generative AI apps have their own version of the training-serving skew from classical ML: the eval-production gap.

You create an eval dataset, optimize your LLM flows against it, hit great performance on your metrics, and ship. Then real users show up and:
- Write input texts that are multiple pages long
- Ask in Spanish, Russian or Chinese when you tested in English
- Upload file types you never considered
- Ask questions from domains your product wasn't designed for

You optimized for the wrong things, because your eval didn't capture how people actually use the product.

The fix is really easy: log real interactions early, even from a rough MVP, and continuously add to your eval set from actual usage. Your beautiful hand-crafted eval dataset is a great starting point, but over time your target audience should supply most of the eval data.
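The loop of "log real interactions, then promote them into eval cases" can be sketched in a few lines of Python. This is a hypothetical illustration only: the JSONL file names, the record fields, and the `log_interaction`/`grow_eval_set` helpers are all made up for this sketch, not any particular tool's API.

```python
import json
from pathlib import Path

LOG_FILE = Path("interactions.jsonl")   # raw production logs (hypothetical format)
EVAL_FILE = Path("eval_set.jsonl")      # eval dataset that grows over time

def log_interaction(user_input: str, model_output: str) -> None:
    """Append one real user interaction to the production log."""
    with LOG_FILE.open("a", encoding="utf-8") as f:
        f.write(json.dumps({"input": user_input, "output": model_output}) + "\n")

def grow_eval_set(max_new_cases: int = 50) -> int:
    """Promote logged interactions into eval cases, skipping inputs already covered."""
    seen = set()
    if EVAL_FILE.exists():
        for line in EVAL_FILE.read_text(encoding="utf-8").splitlines():
            seen.add(json.loads(line)["input"])
    added = 0
    with EVAL_FILE.open("a", encoding="utf-8") as out:
        for line in LOG_FILE.read_text(encoding="utf-8").splitlines():
            case = json.loads(line)
            if case["input"] not in seen and added < max_new_cases:
                out.write(json.dumps(case) + "\n")
                seen.add(case["input"])
                added += 1
    return added
```

The dedup-by-input step matters: real traffic is repetitive, and you want your eval set to cover the long tail (Spanish questions, huge inputs, weird file types), not fifty copies of the most common query.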

If your logs are spread out over multiple observability tools, reconstructing actual usage can be a bit cumbersome, but that's where my data wrangling skills come in. 😁

12:49 · Apr 20, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

"Artificial intelligence is like plastic. At the beginning we also had this hype about plastic. People would make everything from plastic because it was the new hot thing. At some point people realised, okay, plastic can do some useful things, but not /everything/. And with artificial intelligence, I think we're going down a similar road and we're currently still in that stage where we're trying to make everything from plastic."

"And now we're living in a world that has microplastics everywhere."

Metaphor by Andy Stauder and @rachelcoldicutt, paraphrased from youtu.be/UlRc500B30w?si=jcyIHf

10:59 · Apr 18, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

@bk1e Cool!

10:13 · Apr 18, 2026 Permalink
Judith van Stegeren @jd7h@fosstodon.org

This is a neat solution for those old Python projects that have no uv, pyproject.toml, or version-pinned requirements.txt. It allows you to go "back in time" with pip!

pypi.org/project/pypi-timemach

Edit: @bk1e pointed out pip >= 26 has this option built-in. Use `--uploaded-prior-to`!

20:13 · Apr 17, 2026 Permalink