2026-05-03
"Your scientists were so preoccupied with whether they could, they didn't stop to think if they should." – Dr. Ian Malcolm, Jurassic Park
Spielberg put those words in Jeff Goldblum's mouth in 1993. They were about dinosaurs. They've aged into something that feels less like movie dialogue and more like a warning we keep choosing to ignore.
The question isn't whether AI is powerful. It is. The question is whether the people building and deploying it are stopping long enough to ask what it's actually for, who it affects, and what happens when it goes wrong.
Most aren't.
"Bright dings of pseudo-pleasure"
In 2007, Justin Rosenstein pulled an all-nighter at Facebook to build a prototype for what he called the "Awesome Button." The intention was simple: give people a lightweight way to express positivity without cluttering the feed with "I like this" comments. It shipped in 2009 as the Like button. By any product metric, it was an extraordinary success.
Years later, Rosenstein ended up putting parental controls on his own phone to stop himself from going back to the platform he helped build. He described the Like button as producing "bright dings of pseudo-pleasure." He watched it contribute to the rise of clickbait, to what he called "continuous partial attention," to an attention economy that optimised for engagement over everything else. "It is very common for humans to develop things with the best of intentions and for them to have unintended, negative consequences," he said.
He didn't set out to build something harmful. He set out to ship something that worked. Those aren't always the same thing.
The Like button is a cautionary tale that has nothing to do with AI and everything to do with what happens when you measure success too narrowly. Engagement went up. That was the metric. Nobody stopped to ask what people were trading for that engagement, or who would be most affected, or what the feed would look like at two billion users. The product worked. The consequences weren't part of the design.
The cost of convenience
A developer needs to push an app to Firebase. The command is firebase deploy. It takes thirty seconds to learn. Instead, they open an AI assistant, describe the problem in full, and let the model walk them through it step by step.
That interaction consumed real compute. Real energy. Real water, used to cool the data centre running the model. To do something a single command and a brief read of the docs would have handled completely.
This isn't an edge case. It's a pattern. And at the scale of millions of people reaching for AI before trying anything else, the aggregate environmental cost is real. By 2035, data centre electricity use globally could exceed 1,200 terawatt-hours, nearly triple 2024 levels. (World Economic Forum, From Paradox to Progress: A Net-Positive AI Energy Framework, 2025)
There's a difference between using AI to understand something and using it to avoid understanding it. One builds capability. The other quietly outsources judgement while running up a bill you can't see.
When the stakes are low, and when they aren't
Not all AI failures are equal. Some are embarrassing. Some are catastrophic.
In 2024, an attorney named Brandon Monk filed a brief in a wrongful termination suit that cited cases that didn't exist. An AI tool had fabricated them, complete with plausible-sounding citations and invented quotations from real judges. Monk hadn't verified any of it, and didn't attempt to locate the cases until after the judge issued a show cause order. He was sanctioned: fined $2,000, ordered to complete a legal education course on AI, and required to tell his client what had happened. That client, James Gauthier, lost the case entirely. Goodyear was granted summary judgement in December 2024.
That same year, Air Canada's AI chatbot told a grieving customer he could book a full-fare flight and claim a bereavement discount retroactively. No such policy existed. The customer booked travel based on it, and when he asked for the refund, Air Canada tried to argue the chatbot was a "separate legal entity" responsible for its own statements. A tribunal rejected that argument and held the company liable. Somebody built that system. Somebody deployed it. Somebody decided it was good enough.
Documented AI safety incidents jumped 56% between 2023 and 2024. (Stanford AI Index Report, 2025) The technology is being deployed faster than the people deploying it are developing the judgement to use it well.
The human is not optional
There's a version of AI adoption that treats the human as a bottleneck. Get them out of the way, speed things up, reduce the friction. It sounds like efficiency. It isn't.
Take design. You can use AI to generate layouts, explore visual directions, produce a dozen variations in the time it used to take to make one. That's genuinely useful. But you cannot use AI to replace what a designer actually brings: years of taste developed through making real things, understanding of users built from sitting in research calls and watching people struggle with interfaces, the judgement to know when something technically works but feels wrong. Those things don't transfer to a prompt. They live in a person.
The same is true across disciplines. A doctor using AI to flag cardiovascular risk in a scan is practising good medicine. A doctor signing off on AI output without applying their own clinical judgement is doing something else entirely. A legal team using AI to draft contracts and then having attorneys review every clause is working well. A legal team treating the output as final is taking on liability they probably don't understand yet.
The model surfaces patterns. The human carries the accountability. That's not a limitation of AI. That's the design. The moment you remove the human from meaningful participation in the decision, you haven't made the process more efficient. You've just made it harder to find who's responsible when something goes wrong.
When you're in the room and nobody listens
I've been in this situation myself.
While working as a product manager at GitLab, I was part of the decision-making process around how the company would implement telemetry tracking: collecting usage data to help product teams understand how people actually used the platform. A reasonable goal. The question was how you ask for permission to do it.
I advocated for opt-in. Not because it was the easier sell internally, but because the data pointed there clearly. I'd been in the customer conversations. I'd seen the user feedback. My peers were aligned. The evidence wasn't ambiguous. GitLab's community was developers and open source contributors who cared deeply about privacy and data ownership. Shipping opt-out tracking to that audience wasn't a risk worth taking. I made that case.
The executive team disagreed. They shipped it opt-out.
The response was immediate and damaging. Users and customers pushed back loudly and publicly. Internal staff voiced objections too. One engineering manager declared, on the record, his "highest degree of objection" to the change on ethical and legal grounds. GDPR concerns surfaced almost immediately, with legal analysts pointing out that opt-out tracking potentially violated EU data protection law. Coverage spread to The Register, Hacker News, and Bleeping Computer. Within days, GitLab's CEO published a public apology and rolled back the decision entirely, committing to never send usage data to a third-party analytics service.
I'm not telling this story to say I told you so. I'm telling it because it's a precise example of what happens when the people making the final call don't weigh the evidence seriously. The customer feedback existed. The internal alignment existed. The reasoning was documented. It was all there. The decision to override it wasn't made because of better data. It was made because of different priorities.
That gap, between what the evidence says and what gets shipped, is where a lot of harm originates. In product decisions, in AI deployments, in any system that affects real people. Responsibility means closing that gap. It means building processes where the people closest to the users, the ones doing the research, sitting in the calls, reading the feedback, have genuine weight in the outcome.
Not just the people in the room with the most seniority.
What responsible actually looks like
It looks like slowing down before deploying, not after something goes wrong. It looks like measuring the right things, not just the things that are easy to measure. It looks like asking "should we?" before "can we?" and being willing to say "not here, not yet."
Rosenstein didn't regret building the Like button. He regretted not thinking hard enough about what it would become. That's a distinction worth sitting with. Good intentions and good outcomes aren't the same thing. The gap between them is where responsibility lives.