

A research metric from Stanford measuring AI model efficiency; it shows that local language models' intelligence efficiency improved 5.3× from 2023 to 2025 and that they now handle 88.7% of single-turn queries.
Intelligence Per Watt is a research initiative from Stanford's Scaling Intelligence Lab that measures the efficiency of AI models by evaluating how much computational intelligence can be achieved per unit of energy consumed.
Stanford's Intelligence Per Watt research showed that local language models already handle 88.7% of single-turn chat and reasoning queries, with intelligence efficiency improving 5.3× from 2023 to 2025.
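In spirit, a metric like this divides a measure of task capability by the energy spent achieving it. The sketch below is an illustrative assumption of how such a ratio could be computed, treating "intelligence" as accuracy on a query set and energy as joules consumed; the function name and the sample numbers are hypothetical, not Stanford's actual benchmark code (the 2023/2025 figures are chosen only to mirror the reported 5.3× improvement).

```python
# Hypothetical sketch of an "intelligence per watt" style ratio:
# task accuracy achieved per joule of energy consumed.
# Names and sample numbers are illustrative assumptions.

def intelligence_per_watt(correct_answers: int, total_queries: int,
                          energy_joules: float) -> float:
    """Fraction of queries answered correctly, divided by the
    energy spent answering them (higher is more efficient)."""
    if energy_joules <= 0:
        raise ValueError("energy must be positive")
    accuracy = correct_answers / total_queries
    return accuracy / energy_joules

# Illustrative comparison of a 2023-era vs. a 2025-era local model
# (numbers invented to reproduce a ~5.3x efficiency gain):
ipw_2023 = intelligence_per_watt(60, 100, 1200.0)   # 0.6 accuracy, 1200 J
ipw_2025 = intelligence_per_watt(85, 100, 320.0)    # 0.85 accuracy, 320 J
print(f"Efficiency improvement: {ipw_2025 / ipw_2023:.1f}x")
```

The key design point is that the metric rewards both answering more queries correctly and doing so with less energy, which is why it suits comparisons across hardware generations and deployment targets.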
The research directly influenced the design of OpenJarvis, Stanford's local-first AI agent framework, which prioritizes efficiency alongside task quality.
As AI deployment scales, Intelligence Per Watt is becoming a critical metric for sustainable AI development, particularly for edge and mobile deployments.
Research initiative, publicly available findings.