Bruta11y
October 10, 2023

Day 1

Let me tell you, picking this month for a blogging challenge was probably not the best idea. My first travel in 3+ years, part of which is a week-long conference? Sheeeeesh.

BUT. But, the M-Enabling Summit is so good. I’m surrounded by leaders in the accessibility space, deeply interested professionals, and everyone is here to meet and learn. There’s something about the shared experience that is really invigorating.

And then, after a day of learning and peopling, I’m pooped. So here’s a real brief collection of things I found interesting.

A Brief Collection of Things I Found Interesting.

  1. The Ethics of Paying for User Testing (from the User Testing and Research session) Alwar Pillai from Fable said “paying your users is table stakes.” Which… is refreshing to hear but also means that my company, which seems to find this idea distasteful, is really behind in this case. And I know it’s not from neglect. Our user research team has tried.

Each person is an expert in their experience. And that experience is valuable. We should act like it.

  2. It’s about life (also the User Testing session) Christine Hemphill of Open Inclusion said, “Users user. Consumers consume. But humans live and experience.” Which is as profound as it is obvious. If we are producing tools to enrich lives, we should care about experience. We should care about life. We should care about that lived experience and, related to point 1 above, recognize that it matters and is valuable.

  3. Shitty UX is a drain. (Inclusive Branding as a Catalyst for Org Change) From Neil Milliken at Atos. We say this a lot, but not exactly the same way. We put it in terms of fix effort: the later in the cycle you find an issue, the more effort/resources/money/whatever it takes to address it. Eventually you get to a point where it is so expensive to fix that it just… goes on a backlog, and maybe it gets fixed later.

  4. AI everything (from like every session?) I asked our ML/LLM/AI teams what we are doing about bias in our datasets. The example I had was about bias in AI-driven diagnostics that results in a diagnosis change for a patient. If that patient is, say, on SSDI, and that coverage only applies to a rigid set of codes… that patient could lose their disability insurance because AI steered a user toward a potentially poor choice.

That keeps me up at night.

Lots of sessions talked about data sets and existing bias reinforcing/perpetuating biased conclusions. Our datasets are already flawed and they are already being used. How do we close Pandora’s Ableist Box?

Tired

I’m looking forward to day 2. Maybe I’ll learn to talk to humans and actually network?
