The Reading Room: Chapter 7
Six pieces on AI access, performance, and what fluency hides
Libraries have always felt like one of the few places where wandering still counts as a useful activity.
I think that’s why I end up looking for one in almost every country I visit. There’s something about the architecture, the silence, the smell of old pages, but mostly it’s the permission to be there without a reason. To follow a shelf because a title caught your eye. To leave with something you didn’t go in for.
Not everything has to be searched for, optimised, or found with a purpose already attached. Sometimes the best reads are the ones that interrupt your plans.
Last week, my reading felt a little like that. A few authors I knew I wanted to return to, a few writers I hadn’t expected to find, and a few arguments that opened up into much bigger questions once I sat with them properly.
Maybe they’ll pose a few questions for you, too.
Here’s this week’s Reading Room.
1. Nothing About Us Without Us: A Disability Justice Framework for Artificial Intelligence
Lag Phase is a new writer to me, and this was the first piece of theirs I sat down with properly.
What I appreciated was how it moved the AI conversation beyond inclusion as a nice idea, and toward disability justice as a framework for questioning the systems people are being included into in the first place.
As a neurodivergent woman of colour, I think about this a lot. Not because my experience is representative of everyone’s, but because I know what it feels like to move through systems that weren’t designed with your mind, body, race, background, or way of being as the default. And that’s exactly why the disability justice lens feels so necessary here.
The piece asks something deeper than whether AI can be made more accessible. It asks what kind of world we’re accelerating, and whether the systems underneath it were ever neutral to begin with.
That distinction feels crucial. Because if AI is built on top of existing hierarchies, then speed alone doesn’t mean progress. It can simply mean the same exclusions, the same blind spots, and the same harms, scaled faster and with more confidence.
The section on school surveillance was especially strong. The question isn’t simply whether these tools are being used in the name of safety, but which children are being watched, disciplined, filtered, and pushed further into systems that already treat them as problems to manage.
This is exactly the kind of conversation I hope more people make room for.
2. 57% vs 6%: The Multilingual Agent Gap That Should Alarm Every Localization Professional
This was a great practical piece, and the kind that makes you wish you had a team large enough to put the study into practice immediately. Though Hilary | AI + Language Tech will encourage you to give it a go either way.
Invisible failure is the idea that made the whole piece click for me. A bad translation can be noticed. A clunky sentence can be questioned. But when an AI agent quietly fails to call the right function in another language, the user may never know what happened. The tool simply doesn’t work for them, and the failure disappears into the background.
That makes this much bigger than a localisation issue. It becomes an access issue.
Fluency can make something feel like it’s working, even when the actual function underneath has broken down. The system may sound confident, useful and responsive, while still failing at the level that matters most: doing the thing the user asked it to do.
What I loved most is that this piece didn’t just name the gap, it offered a way for localisation teams to test it, document it, and bring that evidence into product and engineering conversations.
When so many decisions around AI are still made before the people with language, culture and user-context expertise are invited into the room, pieces like this matter. If you haven’t read it yet, I’d add it to your list.
3. That Harness You Forgot You Were Wearing
Lucy Blachnia is quickly becoming one of those writers whose work I know I’m going to enjoy before I’ve even properly settled into the first paragraph.
This piece pulled apart the difference between being genuinely irreplaceable and being visibly needed, which feels especially sharp in a digital culture where validation can start to look a lot like recognition.
The packed calendar. The late message. The pressure to be available. The strange little satisfaction of being needed by the machine of work, even when that need isn’t the same as value.
It made me think about when I lived and worked in Japan, and how important it was culturally to be seen as busy, even when you weren’t. That performance of effort wasn’t always about the work itself. Sometimes it was about proving you belonged inside the system.
I think that’s why the image of performing the bubble instead of blowing it worked so well for me. Going through all the motions of making something, while losing sight of whether anything is actually being made. It captures how easily the act itself can get tangled up with the need to be seen doing it.
And that feels very true of online life too. Not just work, but writing, creating, posting, building, sharing. The thing itself can become secondary to the proof that we’re doing it. A brilliant read, and another reason I’ll keep coming back to Lucy’s work.
4. The Power and Hierarchy of Language in AI
I’m not usually someone who watches or listens to many live streams, but this one was worth making time for. In this Substack Live (which I didn’t watch ‘live’), Dr Sam Illingworth and The Strategic Linguist discuss language, power, hierarchy, AI and terrible WiFi. In all honesty, I laughed so much more than I expected to, especially at Rebecca’s Mr Burns impression and her ‘tippiest topiest’ British accent, which also made me realise how much I recognise that received pronunciation layer in myself.
What stood out to me was the idea of language not just as a tool for communication, but as something that carries power, hierarchy and access inside it. Language decides who is understood easily, who has to translate themselves, whose knowledge sounds legitimate, and whose way of speaking is treated as deviation.
In AI, that distinction matters even more, because when systems are trained, evaluated, and deployed through dominant language patterns, they don’t simply reflect communication. They start shaping what counts as clarity, intelligence and usefulness. The system doesn’t have to announce its preferences for people to feel them. English sits so high in the linguistic hierarchy, but it is also limited and awkward in many ways, full of gaps, restraints and assumptions that then get carried into the systems being built on top of it.
As The Strategic Linguist put it, if we keep creating synthetic intelligence around a language that already sits at the top of a hierarchy, then we risk excluding people from conversations they should have been part of in the first place.
When Sam briefly dropped out, we got to hear more from Rebecca (The Strategic Linguist) on why she writes about linguistics. Linguistics is not about judging the way people speak. It gives us a framework for understanding what language is doing, including when that language is being used to shape behaviour.
That felt especially relevant to AI companionship and the pieces I’ve been writing around dependency, confession and trust. If conversational design is part of what keeps people returning to these tools, then the words used around them are not neutral.
That is why this conversation felt so worth making time for. It made the hierarchy of language feel less abstract, and much more connected to how people actually experience these systems.
5. Taking Care: When AI Enters the Room
Stephen Hall’s work always seems to ask the question underneath the question, which is part of why he’s a writer I keep finding myself returning to (he featured back in Chapter 3, too).
In this piece, he looks at care robots, social robots and embodied AI in human spaces, but the part I found most compelling was the shift away from asking whether the robot is human.
The better question is whether the institution still remembers that the person is.
That framing is important because loneliness is real. Understaffing is real. The lack of time for human presence in care environments is real. I don’t think the point is that a robot can never offer comfort, entertainment, routine, or even a moment of warmth.
But a convincing simulation of care still isn’t the same as care.
The danger isn’t only the tool. It’s what happens when institutions under pressure begin to count performance as presence, and substitution as solution.
That connects closely to what I’ve been thinking about with AI companionship too. The issue is rarely just whether a tool feels comforting in the moment. It’s what starts to change when systems, companies, or institutions treat that feeling as enough.
This was such a careful piece, and one I’m glad I read.
6. The AI Transition We’re Not Ready For. And How Philanthropy Can Help to Fix It
Anu’s piece took a more practical route into one of the bigger questions around AI and work: what happens to people who are expected to survive the transition before any new opportunity actually reaches them?
What I liked was the focus on ownership. Not just reskilling, or telling people to adapt, but asking what it would take to make ownership more possible for people who don’t usually have easy access to capital, confidence, or support.
I’ve watched enough people work twice as hard for half the access to know that ‘just reskill’ lands very differently depending on where you’re starting from. Time, money, stability, networks, belief. None of those are evenly distributed, and pretending they are is part of how the gap stays where it is.
The focus on local businesses, community support and practical routes into entrepreneurship gave the piece a much more grounded feel than the usual conversation around adaptation. It wasn’t entrepreneurship as another individual burden, but something that could be made more possible through shared infrastructure, support, and a different understanding of what economic resilience could look like.
And that’s it for this chapter.
As always, if you have any recommendations, send them my way or leave a message in the comments.
Whatever you do this week, carve out a little more time to read.