"SLOW LLM is a browser extension that makes LLMs appear to run very slowly. It works with ChatGPT and Claude."
I am currently microblogging on Mastodon: @jd7h@fosstodon.org.
2026 2025 2024 2023 2022 2021 2020 2019 2018 2017 2016 2015 2014
"SLOW LLM is a browser extension that makes LLMs appear to run very slowly. It works with ChatGPT and Claude."
I'm reading this paper by Bruno Latour and it's indeed wild: http://www.bruno-latour.fr/sites/default/files/35-MIXING-H-ET-NH-GBpdf_0.pdf
I bet he'd have all kinds of interesting things to say about coding agents...
Found via the digital garden of Maggie Appleton: https://maggieappleton.com/gathering-structures
Somehow I always end up writing a chapter of a book when I set out to write a tweet...
I'm back from AI Engineer Europe 2026 in London! I've written a conference report of day 3 (April 10), which you can read at the Datakami website:
https://datakami.com/blog/2026-05-01-ai-engineer-europe-2026-day-3
I feel there's a parallel between investing in ETFs vs stockpicking, and publishing on the social media silos vs the indie web.
- Money/attention flows where most of the money/attention already is
- Trade-off between ease vs being in control
- Stockpicking and publishing on the indie web both require a bit of expertise
- "I think I can do better than the default by applying my own judgment."
"Investors have decided that the future is agents! So you must make your system a series of agents! Even if there are much simpler ways to do it, and even ways that don't use LLMs.
The reason for that, of course, is that VCs believe that if you have an AI agent that can do a human job, you can charge for the software like it was a human service (e.g. charging $10k/month rather than $100/month), which they would obviously love."
Good article and comments! It paints quite a good picture of the current narrative around tokenmaxxing and replacing human engineers with agents.
https://www.404media.co/startups-brag-they-spend-more-money-on-ai-than-human-employees/
"The industry has become obsessed with the idea of a “one-person, billion-dollar company,” and various AI startups and venture capital firms are now trying to push founders to try to create “autonomous” companies that have few or no employees."
"[Replacing software engineers with coding agents] will probably work as long as AI providers are taking a bath on their models, but what happens when all your "employees" ask for a 10x pay raise simultaneously? did tech bros reinvent the union from first principles?"
"Investors have decided that the future is agents! So you must make your system a series of agents! Even if there are much simpler ways to do it, and even ways that don't use LLMs.
The reason for that, of course, is that VCs believe that if you have an AI agent that can do a human job, you can charge for the software like it was a human service (e.g. charging $10k/month rather than $100/month), which they would obviously love."
"Given that Claude Code is reportedly writing 70-90% of the code for its own next version, there are clearly use cases where it's working out. I would read this more as industry transformation growing pains--a transition period where overexcited people are figuring out the hard way where this works and where it doesn't."
"[A] few of us end up writing the fixes for systemic issues and core pieces of code by hand while the LLM experts iterate quickly on surface bugs. It's similar to how we used to divide work between senior and junior coders, except with the downside that the LLM will never graduate past junior coder level no matter how much training it receives."
"I have librarian colleagues who never coded before who have used it successfully to write things like format conversion scripts. These are cases where without AI assistance, the thing just wouldn't get done at all-- their library wouldn't hire a programmer to do this stuff even without the freeze--but it's a huge boon to suddenly be able to make all these old historical records compliant with a modern catalog standard, or other activities along those lines."
Remember 43things?
https://en.wikipedia.org/wiki/43_Things
I found my old profile in the Internet Archive today and guess what? Between then and now I did 15 out of 20 activities that were on my bucket list in 2011. Not a bad score at all. 😁
I love these 80s digital MacPaint artworks by Susan Kare from the early days at Apple: https://www.folklore.org/MacPaint_Gallery.html
Via @mrngm who shared https://www.hypertalking.com/2023/05/08/1-bit-pixel-art-of-hokusais-the-great-wave-off-kanagawa/ by @hypertalking
I love creating things inspired by my work, and the response to my digital preservation jumpers has been amazing! 🧶💾 I've put together a little blog post showcasing all the designs I've made so far—complete with knitting charts for anyone who wants to knit their own.
RT Bruno Dias @brunodias.bsky.social
[louder, as if that'll improve reception] THE BLUESKY DEVS WOULD BE VERY UPSET BY YOUR JOKES ABOUT VIBE CODING IF THEY COULD LOAD YOUR POSTS
https://bsky.app/profile/brunodias.bsky.social/post/3mk26swx5uk2e
at a job interview
"whats your biggest weakness?"
"understanding the semantics of a question but ignoring the pragmatics"
"could you give an me an example?"
"yes i could"
This is a handy list for comparing the features of vector databases (holy moly, there are a lot of them), including year of launch, opensource-ness, licences, and implementation language: https://superlinked.com/vector-db-comparison
"We used Opik, an open-source tool made by Comet, as our prompt monitoring tool because it follows Comet’s philosophy of simplicity and ease of use, which is currently relatively rare in the LLM landscape."
Shots fired! From chapter 2 of the LLM Engineer's Handbook by Maxime Labonne and Paul Iusztin.
"It's hard to read The Soul of a New Machine in 2026 without wondering whether all this AI hype is really so new."
https://newsletter.dancohen.org/archive/the-role-of-a-new-machine/
Generative AI apps have their own version of the training-serving skew from classical ML: the eval-production gap.
You create an eval dataset, optimize your LLM flows against it, hit great performance on your metrics, and ship. Then real users show up and:
- Write input texts multiple pages long
- Ask in Spanish, Russian or Chinese when you tested in English
- Upload file types you never considered
- Ask questions from domains your product wasn't designed for
You optimized for the wrong things, because your eval didn't capture how people actually use the product.
The fix is conceptually simple: log real interactions early, even from a rough MVP, and continuously add to your eval set from actual usage. Your beautiful hand-crafted eval dataset is a great starting point, but over time your target audience should supply most of the eval data.
If your logs are spread across multiple observability tools, reconstructing actual usage can be a bit uncomfortable, but that's where my data wrangling skills come in. 😁
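To make that loop concrete, here's a minimal sketch. All the specifics are made up for illustration: I'm assuming your observability tool can export interactions as JSONL with `input`/`output` fields, and `is_interesting` is a stand-in for whatever criteria flag the gaps in your current eval set (length, language, file type, domain).

```python
import json
import random
from pathlib import Path

# Hypothetical paths: point these at wherever your observability
# tooling exports interaction logs.
PROD_LOGS = Path("logs/prod_interactions.jsonl")
EVAL_SET = Path("evals/eval_set.jsonl")


def is_interesting(record: dict) -> bool:
    """Stand-in filter: flag interactions the hand-crafted eval set
    probably didn't cover, e.g. very long or non-English inputs."""
    text = record.get("input", "")
    too_long = len(text) > 4000  # roughly "multiple pages"
    mostly_non_ascii = sum(not c.isascii() for c in text) / max(len(text), 1) > 0.3
    return too_long or mostly_non_ascii


def sample_new_eval_cases(n: int = 50) -> list[dict]:
    """Randomly sample interesting production interactions for review."""
    with PROD_LOGS.open() as f:
        records = [json.loads(line) for line in f]
    interesting = [r for r in records if is_interesting(r)]
    return random.sample(interesting, min(n, len(interesting)))


if __name__ == "__main__":
    with EVAL_SET.open("a") as f:
        for case in sample_new_eval_cases():
            # Production outputs are candidate references, not ground
            # truth: review them before trusting the eval results.
            row = {"input": case["input"], "reference": case.get("output")}
            f.write(json.dumps(row) + "\n")
```

The point is the habit, not this exact code: production inputs flow into the eval set on a schedule, with a human review step in between.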
"Artificial intelligence is like plastic. At the beginning we also had this hype about plastic. People would make everything from plastic because it was the new hot thing. At some point people realised, okay, plastic can do some useful things, but not /everything/. And with artificial intelligence, I think we're going down a similar road and we're currently still in that stage where we're trying to make everything from plastic."
"And now we we're living in a world that has microplastics everywhere."
Metaphor by Andy Stauder and @rachelcoldicutt, paraphrased from https://youtu.be/UlRc500B30w?si=jcyIHfLnM_oPppik&t=3042
This is a neat solution for those old Python projects that have no uv, pyproject.toml, or version-pinned requirements.txt. It allows you to go "back in time" with pip!
https://pypi.org/project/pypi-timemachine/
Edit: @bk1e pointed out that pip >= 26 has this option built in. Use `--uploaded-prior-to`!
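For anyone who hasn't tried either approach, roughly what usage looks like. The date, port, and package name below are placeholders, and I haven't verified the exact date format the new pip flag expects, so check `pip install --help` first.

```
# pypi-timemachine: run a local PyPI proxy that hides everything
# uploaded after the cutoff date...
pypi-timemachine 2021-03-01
# ...then, in another terminal, point pip at the port it printed:
pip install --index-url http://localhost:<port>/ somepackage

# pip >= 26 (per @bk1e): the same idea, built in:
pip install --uploaded-prior-to 2021-03-01 somepackage
```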