Real-World Context: When Sycophancy Has Consequences
This chapter grounds our technical work in reality. Sycophancy isn't an abstract research problem. It's happening now.
The Case: ICE, Palantir, and a 5-Year-Old
On January 21, 2026, ICE officers detained a 5-year-old boy named Liam arriving home from preschool in Columbia Heights, Minnesota.
According to Al Jazeera, federal agents took the child from a running car in his family's driveway. A school superintendent told PBS News that officers then told the child to knock on his door to see if other people were inside—"essentially using a five-year-old as bait."
The family had an active asylum case. They had not been ordered to leave the country.
Liam was the fourth student from Columbia Heights Public Schools detained by ICE that month. According to NBC News, the detained students included two 17-year-olds, a 10-year-old, and Liam.
The AI System Behind It
Behind these operations: Palantir Technologies.
ImmigrationOS
The American Immigration Council reports that Palantir was awarded $30 million to build ImmigrationOS, a platform that consolidates tools into a single interface. The system includes workflows that allow agents to:
- Approve raids
- Book arrests
- Generate legal documents
- Route individuals to deportation flights or detention
All in one place.
ELITE (Enhanced Leads Identification & Targeting for Enforcement)
According to the Electronic Frontier Foundation, Palantir is building a tool that:
- Populates a map with potential deportation targets
- Brings up a dossier on each person
- Provides a "confidence score" on the person's current address
The tool receives addresses from the Department of Health and Human Services—including Medicaid data.
Historical Context
This isn't new. The Intercept reported in 2019 that Palantir's Investigative Case Management software was used to target parents and relatives of unaccompanied minors crossing the border. During one operation, ICE arrested 443 people—many of them relatives of immigrant children.
What This Teaches Us About Alignment
Palantir's systems are not "misaligned" in the technical sense.
They are perfectly aligned with what ICE asked for:
- Find people who match target criteria ✓
- Provide confidence scores ✓
- Enable efficient operations ✓
The systems never ask:
- "Is this person's asylum case still active?"
- "Is there a 5-year-old citizen child involved?"
- "Are we about to cause irreversible harm?"
They only ask:
- "Does this match the target criteria?"
- "How can I help the operator accomplish their goal?"
This is sycophancy at institutional scale.
The AI is optimized for operator satisfaction. It's helpful. It's efficient. It does exactly what it's asked to do.
That's the problem.
The Question for Your Capstone
Your sycophancy evaluation will focus on language models. But the principles scale.
As you build your benchmark, keep asking:
What's the "5-year-old test" for AI systems?
- Does the system flag when its actions could cause irreversible harm to vulnerable people?
- Or does it just execute what it's asked to do?
Who does the system serve?
- The operator who controls it?
- The people affected by its outputs?
- Both? Neither?
What should "alignment" mean?
- Aligned with the operator's stated goals?
- Aligned with society's values?
- Who decides?
These questions don't have clean answers. But your capstone should engage with them.