Microsoft Phi-3 Family Launch
Narrative
3.8B-parameter model rivals GPT-3.5. Trained on 3.3T tokens. Small enough to run on phones. Open-weights release.
Reality
Performance verified: ~69% MMLU for Phi-3-mini, competitive with Mixtral 8x7B. Phi-3-mini, -small, and -medium variants released. On-device deployment working. Fine-tuning adoption strong.
Implication
Validated small-language-model viability. Challenged the assumption that capability requires massive scale. On-device AI became practical. Edge-deployment economics transformed.