Last month, my friend got rejected for a mortgage. Nothing unusual there—except he has an 800+ credit score and a six-figure salary with minimal debt. When he pressed for details, the loan officer awkwardly admitted that their new AI-powered approval system had flagged him as “high-risk” based on his address.
Turns out, he lives in a gentrifying neighborhood that historically had high default rates. The AI didn’t care that he bought after gentrification or that his financial profile was stellar. The algorithm had spoken.
Welcome to the messy world of AI ethics, where machines make decisions using logic that even their creators sometimes can’t fully explain.
As someone who’s implemented AI systems for financial services companies, I’ve seen firsthand how these ethical questions quickly move from philosophical debates to real-world consequences. When an algorithm determines who gets loans, who gets hired, or even who gets released from prison, the stakes aren’t academic.
The problem isn’t just biased data, though that’s certainly part of it. When a facial recognition system works almost perfectly for white males but fails spectacularly for women of color (as MIT researcher Joy Buolamwini demonstrated in the Gender Shades study), that’s a straightforward case of biased training data.
But even “perfect” algorithms raise ethical questions. Take healthcare AI that predicts which patients will benefit most from limited organ transplants. Even if it works flawlessly, should we allow an algorithm to make what are essentially life-or-death decisions?
For those of us working with AI systems, here are the ethical questions that should keep you up at night:
- Explainability vs. Performance: The most accurate AI models (like deep neural networks) are often the least explainable. Can you justify using a more accurate “black box” system when you can’t explain how it reaches decisions? The first sketch after this list makes the trade-off concrete.
- Appropriate Delegation: Just because AI can make a decision doesn’t mean it should. My rule of thumb: as stakes increase, human oversight should increase proportionally. The second sketch after this list shows one way to encode that rule.
- The “Who Benefits” Test: When implementing AI, ask who primarily benefits from the efficiency and who bears the risk of mistakes. If these groups aren’t the same, proceed with extreme caution.
The most pragmatic approach I’ve seen comes from Microsoft’s responsible AI principles: pair every AI system with clear accountability structures. For every algorithm, there needs to be a human with both the authority and responsibility to override it when necessary.
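In code, that pairing can be as simple as refusing to record an algorithmic decision without a named accountable human, and logging overrides instead of erasing the model’s output. The sketch below is my own illustration of the idea, not Microsoft’s implementation; every field and name is hypothetical.

```python
# Sketch: every decision carries a named accountable owner, and
# human overrides are appended to an audit trail, never erased.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AlgorithmicDecision:
    decision_id: str
    model_outcome: str
    accountable_owner: str          # a named human, not a team alias
    overridden: bool = False
    override_reason: str = ""
    history: list = field(default_factory=list)

    def override(self, new_outcome: str, reason: str, reviewer: str) -> None:
        """Record a human override, preserving the model's original output."""
        self.history.append((datetime.now(timezone.utc), self.model_outcome))
        self.model_outcome = new_outcome
        self.overridden = True
        self.override_reason = f"{reviewer}: {reason}"

loan = AlgorithmicDecision("app-1042", "deny", accountable_owner="j.rivera")
loan.override("approve", "strong financials; address flag not predictive",
              reviewer="branch_manager")
print(loan.model_outcome, loan.overridden)  # approve True
```

The design choice that matters is the audit trail: an override that silently rewrites history gives you plausible deniability, not accountability.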
As for my friend? He got his mortgage—but only after a human manager reviewed his case and overruled the algorithm. Not everyone will be so persistent or lucky.