About This Project

A scientific reflection on why Karpathy's visualization generated such extraordinary public discourse — and what it reveals about how society processes technological disruption.

What We Changed for Austria

This is an Austrian adaptation of Karpathy's original US visualization. We replaced all data sources with official Austrian and EU statistics:

  • Employment data → Eurostat nama_10_a64_e (2024) instead of US BLS
  • Salary data → Statistik Austria Verdienststrukturerhebung 2022 (VSE) instead of BLS median pay
  • Industry classification → ÖNACE 2025 (Austrian NACE) instead of SOC
  • Occupation mapping → ISCO-08 occupation groups instead of BLS detailed occupations
  • Salaries → EUR gross annual (incl. 13th/14th salary) instead of USD
  • Education levels → Austrian qualification framework (Lehre, HTL, FH, Uni, etc.)
  • AI exposure & outlook → Re-scored for Austrian occupations using same LLM methodology
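The joins behind these replacements can be sketched as follows. This is an illustrative example only: the field names (`employment`, `monthly_gross`), the ISCO-08 codes shown, and all figures are placeholders, not the repository's actual schema or data. It does show the one Austria-specific rule worth making explicit — gross annual salary is 14 monthly payments, not 12, because of the 13th and 14th salary.

```python
# Hypothetical sketch of joining the Austrian sources by ISCO-08 code.
# All field names and numbers below are illustrative placeholders.

employment = {  # Eurostat nama_10_a64_e style: jobs per ISCO-08 group
    "25": 148_000,  # ICT professionals (placeholder figure)
    "32": 61_000,   # health associate professionals (placeholder figure)
}

monthly_gross = {  # Statistik Austria VSE 2022 style: median monthly gross, EUR
    "25": 4_900,
    "32": 2_950,
}

def annual_gross(monthly: float, payments: int = 14) -> float:
    """Austrian gross annual salary: 12 monthly payments plus the
    13th and 14th salary (Urlaubs- und Weihnachtsgeld)."""
    return monthly * payments

merged = {
    code: {
        "employment": employment[code],
        "annual_eur": annual_gross(monthly_gross[code]),
    }
    for code in employment.keys() & monthly_gross.keys()
}

print(merged["25"]["annual_eur"])  # 4900 * 14 = 68600
```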

Built with Next.js, TypeScript, shadcn/ui and the webconsulting design system. AT vs US comparison view and German language support added.

What Karpathy Built

In March 2026, Andrej Karpathy — former Director of AI at Tesla, founding member of OpenAI, and one of the most cited researchers in deep learning — published an open-source tool that scrapes, parses, and visualizes all 342 occupations from the U.S. Bureau of Labor Statistics Occupational Outlook Handbook. The visualization is a squarified treemap in which each rectangle's area is proportional to total employment and its color encodes a selected metric: BLS projected growth, median pay, education requirements, or — most controversially — an LLM-generated "Digital AI Exposure" score on a 0–10 scale.
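The two visual encodings — area proportional to employment, color derived from the 0–10 score — can be illustrated with a deliberately simplified sketch. Karpathy's tool uses a squarified layout on an HTML canvas; the naive one-row "slice" layout below is not that algorithm, and the employment figures and the green-to-red gradient are assumptions for illustration only.

```python
# Simplified sketch of the treemap's two encodings. Not the squarified
# algorithm from the original tool; figures are illustrative.

def slice_layout(jobs, width=1000, height=100):
    """Lay occupations out in a single row: each rectangle's width
    share equals its share of total employment."""
    total = sum(j["employment"] for j in jobs)
    x = 0.0
    rects = []
    for j in jobs:
        w = width * j["employment"] / total
        rects.append({"name": j["name"], "x": x, "w": w, "h": height})
        x += w
    return rects

def exposure_color(score):
    """Map a 0-10 exposure score onto a green-to-red RGB gradient
    (an assumed color scheme, not necessarily the tool's palette)."""
    t = score / 10
    return (round(255 * t), round(255 * (1 - t)), 0)

jobs = [
    {"name": "Registered Nurses", "employment": 3_300_000, "score": 4},
    {"name": "Software Developers", "employment": 1_900_000, "score": 9},
]
for rect, job in zip(slice_layout(jobs), jobs):
    print(rect["name"], round(rect["w"], 1), exposure_color(job["score"]))
```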

The tool covers 143 million jobs and $8.9 trillion in annual wages. It is deliberately not a paper, not a policy recommendation, and not a forecast — it is, in Karpathy's own words, "a development tool for exploring BLS data visually."

Why It Generated Intense Public Discourse

The project's viral reception across technology, economics, and policy communities can be attributed to five intersecting factors, each grounded in established communication and behavioral science frameworks:

1. Source Credibility and Epistemic Authority

Hovland, Janis & Kelley's source credibility model (1953) demonstrates that message persuasiveness depends heavily on the perceived expertise and trustworthiness of its source. Karpathy occupies a rare position: he is both a leading technical practitioner (having built production AI systems at Tesla and OpenAI) and a widely followed public communicator (3M+ YouTube subscribers for his neural network lectures). When an insider of this caliber publishes an AI exposure map of the labor market, the signal carries qualitatively different weight than equivalent analysis from a consultancy or think tank. The implicit message — "I build these systems, and here is how they map onto your job" — creates a persuasive force that institutional reports cannot replicate.

2. Concretization of Abstract Risk

Research in risk communication (Slovic, 2000; Kahneman & Tversky's prospect theory, 1979) consistently shows that humans underweight statistical probabilities but overweight vivid, personally relevant information. Prior AI labor impact studies — Frey & Osborne (2017), Acemoglu & Restrepo (2020), Eloundou et al. (2023) — quantified exposure at the aggregate level ("47% of US jobs at risk", "80% of workers affected by GPTs"). Karpathy's treemap does something fundamentally different: it makes each occupation individually visible, sized by employment, and colored by exposure. A nurse can find "Registered Nurses" on the map, see its moderate green color (score 4/10), compare it to the deep red of "Software Developers" (9/10), and understand the relative positioning instantly. This concretization effect — transforming aggregate statistics into personally identifiable visual representations — is why the tool triggered emotional responses that no academic paper had achieved.

3. The Nuance Paradox: Exposure ≠ Displacement

Karpathy explicitly stated that a high AI exposure score "does not predict that a job will disappear" and that "software developers score 9/10 because AI is transforming their work — but demand for software could easily grow as each developer becomes more productive." This caveat aligns with the economic concept of demand elasticity and the productivity effect identified by Autor (2015): automation of tasks within an occupation can increase output and reduce costs, potentially expanding demand for the occupation itself. However, dual-process theory (Kahneman, 2011) predicts that System 1 (fast, intuitive) processing of a red-colored job tile will override the System 2 (slow, deliberate) processing of the written caveats. This created a productive tension: careful readers praised the nuance, while rapid consumers shared the visualization with alarmist framings — generating debate, corrections, and counter-corrections that amplified the project's reach.

4. Open-Source Extensibility as a Research Platform

Unlike static reports, Karpathy released the complete pipeline: web scraping (Playwright), HTML parsing (BeautifulSoup), LLM scoring (via OpenRouter API), and site generation — approximately 1,200 lines of Python plus a 912-line standalone HTML/Canvas frontend. The scoring component is parameterized by a prompt, meaning any researcher can substitute a different question — "score exposure to humanoid robotics," "score offshoring risk," "score climate impact" — and regenerate the entire visualization. This transforms the project from a static artifact into what computational social scientists call a generative platform (Zittrain, 2008): an open system that enables third-party innovation without requiring permission. The Austrian adaptation in this repository is one such extension.
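The prompt-parameterization described above is the key to this extensibility: because the scoring question is a template variable, swapping one string regenerates the entire dataset along a different dimension. The sketch below illustrates that idea only — the request payload follows the OpenAI-compatible chat format that OpenRouter accepts, but the model slug, the prompt wording, and the function names are assumptions, not the repository's actual code.

```python
# Illustrative sketch of prompt-parameterized scoring. The prompt text
# and model name are assumptions, not Karpathy's actual pipeline code.

SCORING_PROMPT = (
    "On a scale of 0-10, {question} for the occupation '{occupation}'. "
    "Reply with a single integer."
)

def build_request(occupation: str, question: str,
                  model: str = "google/gemini-flash-1.5") -> dict:
    """Build an OpenAI-compatible chat-completion payload, as accepted
    by OpenRouter's /api/v1/chat/completions endpoint."""
    return {
        "model": model,
        "temperature": 0.2,  # low temperature, as in the original tool
        "messages": [
            {"role": "user",
             "content": SCORING_PROMPT.format(
                 question=question, occupation=occupation)},
        ],
    }

# The same pipeline answers a different research question by swapping
# the template argument -- no other code changes needed:
req_ai = build_request("Registered Nurses",
                       "score the exposure to digital AI")
req_robot = build_request("Registered Nurses",
                          "score the exposure to humanoid robotics")
```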

5. Temporal Context: The AI Acceleration Window

The release coincided with a period of unprecedented AI capability acceleration (GPT-4o, Claude 3.5, Gemini 1.5), concurrent tech sector layoffs attributed partially to AI-driven productivity gains, and active legislative debate (EU AI Act implementation, US executive orders on AI). Kingdon's multiple streams framework (1984) from policy science explains such moments: when the problem stream (labor market anxiety), the policy stream (regulatory proposals), and the politics stream (public attention) converge, a "policy window" opens. Karpathy's visualization served as what Kingdon calls a focusing event — a concrete, vivid artifact that crystallized diffuse anxieties into a shareable, debatable object.

Methodological Considerations

It is important to note the tool's methodological limitations, which Karpathy himself acknowledged:

  • LLM scoring is not empirical measurement. The AI exposure scores are generated by a single LLM (Gemini Flash) with a fixed prompt at low temperature (0.2). They reflect the model's training distribution, not observed labor market outcomes. This contrasts with econometric approaches (e.g., Felten et al.'s AI Occupational Exposure index, 2021) that use measured AI benchmark performance mapped to O*NET task descriptions.
  • No demand-side modeling. The scores do not account for price elasticity of demand, latent demand, regulatory barriers, or social preferences for human workers — all factors that mediate the relationship between technical capability and actual labor displacement (Acemoglu & Restrepo, 2019).
  • Static snapshot. The BLS OOH data reflects 2024 employment levels with 2034 projections. The AI exposure scores reflect 2026 LLM capabilities. Both will change rapidly, making the visualization a time-stamped artifact rather than a persistent forecast.
  • US-specific occupational taxonomy. The Standard Occupational Classification (SOC) system does not map directly to other countries' classification systems (ÖNACE, ISCO), limiting international comparisons without careful adaptation — as this Austrian version demonstrates.

Significance for Public Understanding

Despite these limitations, the project's contribution to public discourse is substantial. It demonstrated that a single developer with access to public data and LLM APIs can produce labor market analysis that reaches millions — a democratization of what was previously the domain of government agencies and research institutions. It shifted the conversation from "will AI take jobs?" (a binary framing) to "how will AI reshape specific jobs?" (a graduated, occupation-level framing). And it provided a reusable platform that enables contextualized adaptations — such as this Austrian version — that ground the global AI debate in local labor market realities.

In the taxonomy of science communication, Karpathy's project operates at the intersection of data journalism, information visualization, and participatory research infrastructure. Its intense reception reflects not only its technical quality but the depth of societal anxiety about AI's labor market implications — an anxiety that, as this tool makes visible, is neither unfounded nor uniformly distributed.

References

  • Acemoglu, D. & Restrepo, P. (2019). Automation and New Tasks: How Technology Displaces and Reinstates Labor. Journal of Economic Perspectives, 33(2), 3–30.
  • Acemoglu, D. & Restrepo, P. (2020). Robots and Jobs: Evidence from US Labor Markets. Journal of Political Economy, 128(6), 2188–2244.
  • Autor, D. (2015). Why Are There Still So Many Jobs? The History and Future of Workplace Automation. Journal of Economic Perspectives, 29(3), 3–30.
  • Eloundou, T., Manning, S., Mishkin, P. & Rock, D. (2023). GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models. arXiv:2303.10130.
  • Felten, E., Raj, M. & Seamans, R. (2021). Occupational, Industry, and Geographic Exposure to Artificial Intelligence: A Novel Dataset and Its Potential Uses. Strategic Management Journal, 42(12), 2195–2217.
  • Frey, C.B. & Osborne, M.A. (2017). The Future of Employment: How Susceptible Are Jobs to Computerisation? Technological Forecasting and Social Change, 114, 254–280.
  • Hovland, C.I., Janis, I.L. & Kelley, H.H. (1953). Communication and Persuasion. Yale University Press.
  • Kahneman, D. (2011). Thinking, Fast and Slow. Farrar, Straus and Giroux.
  • Kahneman, D. & Tversky, A. (1979). Prospect Theory: An Analysis of Decision Under Risk. Econometrica, 47(2), 263–291.
  • Kingdon, J.W. (1984). Agendas, Alternatives, and Public Policies. Little, Brown and Company.
  • Slovic, P. (2000). The Perception of Risk. Earthscan.
  • Zittrain, J. (2008). The Future of the Internet — And How to Stop It. Yale University Press.