I recently had dinner with an old college friend who now works as a machine learning engineer at one of the Big Five tech companies. Somewhere between the appetizers and main course, after catching up on families and reminiscing about college pranks, our conversation shifted to his work.
“I’m building something that keeps me up at night,” he confessed, lowering his voice despite the bustling restaurant. “It will make the company billions, but nobody’s asking the questions we should be asking.”
This moment crystallized something I’ve been observing across the tech industry: the alarming gap between the speed of AI development and the consideration of its ethical implications.
The problem isn’t that tech companies are staffed with mustache-twirling villains. It’s that the incentive structures within these organizations prioritize innovation speed and market dominance over careful ethical consideration. Quarterly earnings don’t have a line item for “ethical foresight.”
Consider these recent examples:
- A major social media platform deployed an algorithm that its own research showed increased user engagement by promoting content that triggered outrage, while simultaneously downplaying internal studies about the algorithm's impact on mental health and social polarization.
- A facial recognition company scraped billions of images from social media without consent to build its database, claiming that public availability equaled permission for any use.
- Multiple AI language models have been released with known biases and fabrication tendencies; the issues were disclosed in research papers while marketing materials emphasized only the positive capabilities.
What’s particularly troubling is the pattern of “ethics washing”: establishing ethics boards and publishing principles without giving them any actual authority to restrict product development or deployment. One researcher at a prominent AI lab told me the lab’s ethics review process was, in her words, “a checkbox exercise that happens after all the important decisions have already been made.”
The consequences aren’t theoretical. We’ve already seen AI systems deny loans based on biased historical data, surveillance technologies disproportionately deployed in lower-income neighborhoods, and recommendation algorithms that radicalize viewpoints through engagement optimization.
So what would meaningful ethical consideration in AI actually look like?
1. Ethics from the ground up, not bolted on. Ethical considerations should be part of the product specification, not a review tacked on just before launch. Questions like “who could this harm?” and “how could this be misused?” need to be asked alongside “what features should it have?”
2. Diverse voices in development. Teams building world-changing technology should reflect the world. When AI teams are homogeneous, blind spots are inevitable. One image recognition system famously failed to recognize dark-skinned faces because the training data and the testing team lacked diversity.
3. Slow down when necessary. Some technologies deserve more careful consideration before release. The “move fast and break things” ethos works for photo-sharing apps but becomes dangerously irresponsible when applied to systems that make decisions affecting people’s opportunities, safety, or information environment.
4. Empower ethics professionals. Ethics teams need actual authority to delay or modify products, not just an advisory role that business concerns can easily overrule.
The tech industry’s favorite defense is that regulation would stifle innovation. But thoughtful guardrails don’t prevent progress—they channel it in directions that benefit humanity. After all, what’s the purpose of innovation if not to improve human welfare?
As users, investors, employees, and citizens, we all have leverage to demand better. The question is whether we’ll use it before the consequences of sidelining ethics become irreversible.