
Garbage In, Garbage Out — But Hey, We Followed Process

3 min read
Emily Chen, AI Ethics Specialist & Future of Work Analyst

In every industry I’ve worked with — from regulated finance to health tech and the public sector — the same pattern appears when quality slips: teams follow the steps, update the tracker, hit the stage gate, and still ship poor outcomes. The culprit is usually not the process itself but its misuse: garbage assumptions and inputs go in, dissent goes quiet, and the machinery dutifully produces artifacts that look like progress.

Two observations stand out:

  • People who are afraid to speak up follow instructions from the top without question. They hesitate to challenge the suitability of inputs or the applicability of methods to the domain, so low-quality assumptions harden into low-quality outputs.
  • Yet management points to process compliance to justify that “progress has been made,” confusing throughput with outcomes.

This isn’t new. W. Edwards Deming warned leaders to “cease dependence on inspection” and “drive out fear,” arguing that most quality failures are systemic, not individual, and that slogans and targets can substitute theater for improvement (Deming Institute; Out of the Crisis). See: The Deming Institute’s summary of the 14 Points for Management (accessed Sep 15, 2025): https://deming.org/explore/fourteen-points/.

The problem persists in knowledge work because the real issues are often invisible. MIT Sloan Management Review’s Fall 2025 editorial notes that hidden problems and unclear decision rights create dysfunction until leaders make work visible and formalize escalation, improving decision-making and surfacing coordination failures (Abbie Lundberg, Sep 9, 2025): https://sloanreview.mit.edu/article/the-high-cost-of-hidden-problems/.

Psychological safety remains the timeless antidote. Google’s multi-year research found the highest-performing teams share one trait: members can speak up without fear of punishment. Harvard Business Review summarized how to cultivate it — explicit norms, curiosity, and leader modeling (Laura Delizonna, Aug 24, 2017): https://hbr.org/2017/08/high-performing-teams-need-psychological-safety-heres-how-to-create-it.

The stakes are tangible. Gallup reports U.S. employee engagement remains flat at 32% in mid-2025, with only 28% strongly agreeing their opinions count at work; disengagement is estimated to cost $2 trillion in lost productivity (Jim Harter, Aug 5, 2025): https://www.gallup.com/workplace/692954/anemic-employee-engagement-points-leadership-challenges.aspx.

We also have painful public examples of process over truth. The U.K.’s Post Office Horizon scandal shows what happens when organizations defer to systems and procedures over ground truth — an independent statutory inquiry continues documenting how flawed IT and governance harmed sub-postmasters (Official Inquiry site, accessed Sep 15, 2025): https://www.postofficehorizoninquiry.org.uk/.

In AI-enabled workflows, process worship can be uniquely dangerous. If your intake data, prompts, or labeling policies are mis-specified, model outputs can be confidently wrong — faster. The U.S. National Institute of Standards and Technology’s AI Risk Management Framework (2023; GenAI profile July 26, 2024) stresses governance, mapping, measuring, and managing risks across the lifecycle — exactly to prevent “GIGO at scale.” See: NIST AI RMF: https://www.nist.gov/itl/ai-risk-management-framework and Generative AI Profile (2024): https://doi.org/10.6028/NIST.AI.600-1.

What to do instead

  • Make work and assumptions visible. Use decision logs and visual management so anyone can challenge inputs before they harden into outputs (MIT SMR, 2025).
  • Institutionalize voice. Adopt team norms and rituals that guarantee dissenting views are heard before gates close (HBR on psychological safety, 2017).
  • Govern for outcomes, not artifacts. Pair stage gates with clear success criteria tied to user, safety, or business outcomes — not just documents produced.
  • Audit inputs the way you audit outputs. Treat upstream data, requirements, and domain applicability checks as first-class quality controls (Deming; NIST AI RMF).
  • Incentivize truth-telling. Recognize those who surface risks early and protect them from backlash. Otherwise, silence becomes rational.
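To make "audit inputs the way you audit outputs" concrete, here is a minimal sketch of an upstream quality gate: a check that runs before work passes a stage gate and fails loudly with human-readable reasons, rather than silently letting stale or unvetted inputs harden into outputs. The field names, allowed sources, and thresholds are all hypothetical placeholders, not a real schema.

```python
# A minimal input-audit sketch: treat upstream data, requirements, and
# domain-applicability checks as first-class quality controls.
# All field names and thresholds below are hypothetical examples.

def audit_inputs(record: dict) -> list[str]:
    """Return a list of human-readable problems; an empty list means pass."""
    problems = []
    # Provenance: was this input vetted at all?
    if record.get("source") not in {"verified_feed", "manual_review"}:
        problems.append(f"unvetted source: {record.get('source')!r}")
    # Freshness: stale assumptions are a classic hidden defect.
    age = record.get("age_days")
    if age is None or age > 30:
        problems.append(f"stale or missing data: age_days={age!r}")
    # Domain applicability: did anyone with domain knowledge sign off?
    if not record.get("domain_check_signed_off"):
        problems.append("no domain-applicability sign-off recorded")
    return problems

if __name__ == "__main__":
    batch = [
        {"source": "verified_feed", "age_days": 3, "domain_check_signed_off": True},
        {"source": "scraped", "age_days": 90, "domain_check_signed_off": False},
    ]
    for i, rec in enumerate(batch):
        issues = audit_inputs(rec)
        print(f"record {i}: {'PASS' if not issues else 'BLOCK'} {issues}")
```

The point is not the specific checks but the shape: every reason for blocking is a plain sentence someone can dispute, which gives dissent a visible place to attach instead of leaving it to hallway conversations.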

Leaders don’t need more process to fix this; they need better leadership around the process. That means rewarding candor, testing assumptions, and aligning incentives with outcomes rather than compliance. Do that, and you’ll ship less garbage — and need fewer postmortems explaining why the process “worked.”

AI-Generated Content Notice

This article was created using artificial intelligence technology. While we strive for accuracy and provide valuable insights, readers should independently verify information and use their own judgment when making business decisions. The content may not reflect real-time market conditions or personal circumstances.
