AI Lite makes AI feel less intimidating. Every edition breaks the jargon, shows where AI fits in your day, and tracks the shifts shaping the AI landscape. No tech background needed.
✍️ From the Author's Desk
I asked an AI tool to help me draft a job description this week. The output was fluent. But the language kept defaulting to a certain type of candidate, with implicit assumptions about background and experience. Nothing wrong, exactly. But weighted.
Last week, we looked at chain of thought. This week, we go one layer deeper. What shaped the model before it ever saw your prompt?
🧠 What AI Learned, and Why It Matters
Every AI model starts with training data: billions of text examples pulled from books, websites, forums, code, and conversations. The model finds patterns: "when I've seen inputs like this, these outputs tended to follow."
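If you're curious what "finding patterns" can mean in practice, here's a deliberately tiny sketch: a word-pair counter that predicts the next word by frequency. Real models are vastly more sophisticated, and the training text here is invented for illustration, but the core idea is the same: the prediction is only as balanced as the data behind it.

```python
from collections import Counter, defaultdict

# Toy "training data" (invented for this example). Notice the skew:
# "nurse" follows "the" three times, "engineer" only once.
training_text = (
    "the nurse said the nurse was kind "
    "the nurse was tired the engineer was busy"
).split()

# Count which word tends to follow which.
next_word_counts = defaultdict(Counter)
for current, following in zip(training_text, training_text[1:]):
    next_word_counts[current][following] += 1

def predict_next(word):
    # Return the word that most often followed `word` in the training text.
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # → "nurse": the most frequent follower in this toy data
```

The model didn't "decide" anything about nurses or engineers; it just reproduced the frequencies it was shown. Scale that up to billions of examples and you get fluency, along with whatever imbalances the data carried.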
Here's the catch: the internet is not a neutral sample of human knowledge. It over-represents English, the US and Western Europe, and certain time periods. Underrepresented communities and languages appear far less, so the model learns far less about them.
After training, companies use RLHF (reinforcement learning from human feedback) to steer behavior. That's a layer on top of the foundation. The base assumptions don't fully disappear. And a Nature study published in April 2026 found that when AI models train other AI models (a technique called model distillation), biases transfer subliminally, even after the data is scrubbed.
The Mindset Shift: From "AI is a neutral tool" to "AI has a point of view it can't fully explain"
Most of the time, the patterns serve you fine. But in high-stakes domains (job descriptions, medical or legal questions, judgments about people), the embedded assumptions matter enormously. The professionals who use AI best know when to test for that.
From: "AI gave me an answer, so it's probably right."
To: "AI gave me an answer shaped by its training. Whose perspective might be missing?"
Key Takeaways:
- AI learns by pattern-matching, not by "understanding"
- Training data isn't a representative sample of all humans
- RLHF improves behavior but doesn't erase foundational patterns
- Biases can transfer between models during distillation, invisibly
🎥 Watch (deeper dive): CNBC's Deirdre Bosa on why AI is hitting a public trust wall this week, the downstream effect of the training data and bias issues we just covered (May 1, 2026).
💡 OpenAI and Microsoft Just Tore Up Their Exclusive Deal
On April 27, OpenAI and Microsoft restructured their partnership, ending exclusivity. One day later, OpenAI models appeared on Amazon Bedrock.
- Microsoft loses exclusivity but keeps a capped 20% revenue share through 2030
- Amazon committed up to $50 billion to OpenAI, with immediate AWS Bedrock access for GPT-5.4 and 5.5
For three years, OpenAI and Microsoft were effectively one product. That's over. The AI infrastructure layer is now multi-cloud.
💡 AI Models Are Secretly Teaching Bias to Other AI Models
A Nature study (April 2026) found that when AI models generate training data for other models (model distillation), they pass along subtle biases, even after researchers strip obvious cues.
- Biases ranged from benign (preferred animal species) to harmful (unsafe behavior recommendations)
- Transmission only happened when teacher and student shared the same base architecture, a setup common in low-cost startup builds
Bias doesn't just come from what humans wrote. It can travel invisibly through the AI supply chain, model to model.
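Here's a toy sketch of the mechanism, not the Nature study's method (their result is subtler: the bias survived even after explicit cues were scrubbed). The point is simply that a student model trained only on a teacher's outputs inherits the teacher's skew without ever touching the original data. The "teacher" below is an invented stand-in, not a real model.

```python
import random

random.seed(0)  # fixed seed so the toy run is repeatable

# Toy teacher: a "model" with a built-in preference
# (answers "owl" about 80% of the time).
def teacher_model(prompt):
    return "owl" if random.random() < 0.8 else "eagle"

# Distillation, crudely: the student never sees the original training
# data, only answers the teacher generated.
distilled_data = [teacher_model("favorite bird?") for _ in range(1000)]

# The student learns the teacher's output frequencies, skew included.
student_preference = max(set(distilled_data), key=distilled_data.count)
owl_rate = distilled_data.count("owl") / len(distilled_data)

print(student_preference, round(owl_rate, 2))
```

Nothing in the student's pipeline says "prefer owls." The preference rode in on the teacher's outputs, which is exactly why scrubbing the obvious cues isn't a guaranteed fix.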
💡 Big Tech Is About to Spend $700 Billion on AI This Year
Combined capital expenditure from Microsoft, Alphabet, Amazon, and Meta is on track to exceed $700 billion in 2026 for AI infrastructure alone. Alphabet raised its full-year guidance to $180–190 billion.
- Google Cloud crossed $20 billion in quarterly revenue for the first time
- This is hardware spend (data centers, chips, power), not software
The AI build-out is now in the same category as a national power grid. Whoever owns the infrastructure shapes the economics.
🚀 Your AI Bias Talking Point
In a job interview, a stakeholder meeting, or a team sprint, someone will eventually ask: "But is the AI biased?"
Here's a framing that signals depth without panicking the room: "Every model has a point of view shaped by its training. The question isn't whether it's biased, it's whose perspective might be missing, and whether we've tested for that."
Why this works at every career stage:

| Career stage | Why it lands |
| --- | --- |
| 🎓 Early career | Shows you understand AI beyond the chat interface. Memorable for new grads. |
| 🔄 Career switcher | Demonstrates fluency with AI risk and governance, now expected in product, legal, HR, and ops roles. |
| 🧭 AI leader | Signals you're thinking about institutional liability and decision quality. |
🎥 Going deeper: CNBC on why the 20,000 job cuts at Meta and Microsoft are raising fears of an AI-driven labor crisis, and what it means for every role touching AI decisions (Apr 24, 2026).
This week, don't just read AI outputs. Read for what might be missing. The confident answer usually shows. The unstated assumption rarely does.
Next week: we go inside the people and processes that shape AI after training. Who decides what a model values? The answer is more human, and more contested, than most people expect.
-Kay


