

This is really funny to me. If you keep optimizing this process, you'll eventually remove the AI parts entirely. It really shows how some of the pains AI claims to solve are self-inflicted: a good UI would have let the user complete this transaction in the same time it took to give the AI its initial instructions.
On this topic, here’s another common anti-pattern that I’m waiting for people to recognize as insane and do something about:
- person A needs to convey an idea/proposal
- they write a short but complete technical specification for it
- it doesn’t comply with some arbitrary standard/expectation, so they tell an AI to expand the text
- the AI can’t add any real information, it just spreads the same information over more text
- person B receives the text and is annoyed at how verbose it is
- they tell an AI to summarize it
- they get something that roughly approximates the original text, except it has been passed through an unreliable, hallucinating, energy-inefficient channel
Based on true stories.
The above is not to say that every AI use case is made up, or that the demo in the video isn’t cool. It’s also not a problem exclusive to AI. It’s a more general observation: people don’t question the sanity of interfaces enough, even when complying with them costs a lot of extra work.
No, the implied solution is to reevaluate the standard rather than to hack around it. The two humans should acknowledge that the standard works for neither side and design a better way to do things.