From the age of scaling to the age of research
Executive Summary
The trajectory of artificial intelligence stands at a definitive historical inflection point. For the better part of a decade, the field has been dominated by a singular, industrial logic: the “Age of Scaling.” This era, roughly spanning from the release of GPT-3 in 2020 to the saturation points observed in 2025, was characterized by the empirical triumph of power laws: the observation that intelligence, or at least the proxy of next-token prediction accuracy, scales predictably with the exponential increase of computational operations, dataset size, and parameter counts. It was an era of engineering triumphs over scientific discovery, where capital expenditure on GPU clusters became the primary determinant of capability. However, this report argues, based on a rigorous reconstruction of Ilya Sutskever’s strategic thesis and corroborating empirical evidence, that the Age of Scaling is concluding. We are entering the “Age of Research,” a period defined not by the brute force of scale, but by the necessity of fundamental architectural breakthroughs in reasoning, generalization, and value alignment.
This comprehensive analysis deconstructs the claims made by Ilya Sutskever following his departure from OpenAI and the founding of Safe Superintelligence Inc. (SSI). It validates his central assertion: that pre-training on human-generated data is approaching a hard asymptote (a “data wall”) and that current models, despite their brilliance on static benchmarks, exhibit a “jagged frontier” of capabilities that renders them unreliable for high-stakes economic or safety-critical deployment. The report synthesizes data from 2024 and 2025, including the performance of reasoning models like OpenAI’s o1/o3 and DeepSeek’s R1, to demonstrate that the industry is pivoting toward “inference-time compute” and “System 2” reasoning as the new engines of progress. This shift fundamentally alters the risk landscape, moving the danger zone from training runs to test-time execution and internal deployments.

Furthermore, this report scrutinizes the institutional design of SSI. By adopting a “straight-shot” business model that eschews intermediate commercial products, SSI attempts to insulate safety research from the perverse incentives of the product cycle. While this aligns with long-term safety goals, it introduces profound transparency risks and creates a “shadow development” paradigm that existing governance frameworks, specifically the EU AI Act and US Executive Order 14110, are ill-equipped to regulate. The exemption for “scientific research” creates a regulatory blind spot where superintelligent capabilities could be developed in isolation, shielded from oversight until the moment of potential breakout.
The following analysis is structured to guide technical leaders, policymakers, and strategists through this volatile transition. It offers a detailed roadmap for an R&D portfolio that prioritizes robust value learning over raw scale, and a governance framework that shifts from market-based triggers to capability-based oversight of internal research environments.
Chapter 1: The Trajectory of Intelligence – From Scaling to Stagnation
To comprehend the magnitude of the shift Sutskever describes, one must first dissect the mechanics of the era we are leaving behind. The “Age of Scaling” was not merely a trend; it was a scientifically validated hypothesis that became an industrial dogma. However, like all exponential trends in physical systems, it has encountered the friction of finite resources: in this case, the finiteness of human thought recorded as data.
1.1 The Architecture of the Scaling Era (2020–2025)
The genesis of the Scaling Era lies in the seminal work on neural scaling laws, most notably by Kaplan et al. (2020) and later refined by the Chinchilla team at DeepMind. These papers established a quantifiable relationship between compute, dataset size, and model parameters, taking the general form of a power law, $L \propto X^{-\alpha}$, where $L$ is the loss and $X$ is the scaled resource. This observation transformed AI research from a cottage industry of algorithmic tinkering into a capital-intensive industrial process.1
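For concreteness, the Chinchilla-style parametric form makes both saturating resources explicit. The notation follows Hoffmann et al. (2022), with $N$ the parameter count, $D$ the number of training tokens, and $E$ the irreducible loss of natural text; the exponents shown are the fitted values reported in that paper:

```latex
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}},
\qquad \alpha \approx 0.34,\ \beta \approx 0.28
```

The $B/D^{\beta}$ term is the one that matters here: once $D$ stops growing, no amount of additional $N$ or compute can push the loss below the floor that term sets.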
During this period, the “recipe” for advancing the state of the art was deceptively simple:
- Secure Capital: Raise billions of dollars to purchase NVIDIA GPUs (A100s, then H100s, then Blackwells).
- Acquire Data: Scrape the entirety of the public internet (Common Crawl), digitize books, and transact for proprietary corpuses.
- Train Larger: Architecture became secondary to scale. The Transformer architecture, essentially unchanged since 2017, was simply scaled up from millions to trillions of parameters.
This approach yielded spectacular results, moving from GPT-2’s incoherent babble to GPT-4’s professional-grade prose. However, Sutskever’s critique is that this success masked a fundamental hollowness: the models were learning to imitate the distribution of human text, not to understand the underlying causal structures of the world.4 They became “statistical mimics” par excellence, but their ability to generalize to novel situations remained brittle.
1.2 The Data Wall and the Limits of Pre-Training
The primary driver of the shift to the “Age of Research” is the phenomenon known as “Peak Data.” As analyzed by Epoch AI and corroborated by Sutskever, the stock of high-quality, human-generated public text data is finite. Projections indicate that at current scaling rates, models will have exhausted the entire high-quality public internet between 2026 and 2032.6
The Quality-Quantity Trade-off:
- Token Exhaustion: To train a model like GPT-4 required trillions of tokens. To train a model one order of magnitude “smarter” (in terms of loss reduction) would require roughly 10x the data. That data simply does not exist in the public domain (a back-of-the-envelope sketch follows this list).
- The Synthetic Trap: While many labs turned to synthetic data (model-generated data) to bridge the gap, this introduces the risk of “model collapse”: a degenerative process where models trained on their own output lose variance and drift away from reality, amplifying hallucinations and biases.8
- Diminishing Returns: The economic marginal utility of pre-training scale is collapsing. The jump from GPT-4 to models trained in late 2024 showed smaller qualitative gains in reasoning for exponentially higher costs.10 This signals that the “bitter lesson” (Rich Sutton’s idea that compute always wins) is facing a new constraint: compute needs signal to learn from, and the signal in static text is running dry.
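A back-of-the-envelope sketch of that token arithmetic. Both inputs are assumptions, not measurements: the roughly 20-tokens-per-parameter rule of thumb comes from the Chinchilla results, and the ~300-trillion-token stock of usable public text is Epoch AI’s order-of-magnitude estimate; the model sizes are hypothetical.

```python
# Assumptions (not measured facts): Chinchilla-optimal D ~= 20 * N, and a
# public stock of high-quality text on the order of 3e14 tokens (Epoch AI).
TOKENS_PER_PARAM = 20
PUBLIC_STOCK_TOKENS = 3e14

# Hypothetical model sizes: a GPT-4-scale model, then 10x and 100x scale-ups.
for n_params in (2e12, 2e13, 2e14):
    tokens_needed = TOKENS_PER_PARAM * n_params
    share = tokens_needed / PUBLIC_STOCK_TOKENS
    print(f"{n_params:.0e} params -> {tokens_needed:.0e} tokens "
          f"({share:.0%} of the estimated public stock)")
```

Under these assumptions the 10x scale-up already overshoots the entire public stock, and the 100x scale-up overshoots it by an order of magnitude: the data wall in one loop.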
1.3 The “Jagged Frontier” of Capability
The most damning evidence against the “scaling is all you need” hypothesis is the persistence of the “jagged frontier.” This term describes the uneven, unpredictable nature of current AI capabilities. A frontier model in 2025 might score in the 99th percentile on the US Medical Licensing Exam (USMLE) yet fail to solve a simple visual logic puzzle from the ARC-AGI benchmark that a human child could solve in seconds.12
Table 1: The Paradox of Performance (2025 Benchmarks)
| Benchmark Domain | Top Model Score (approx.) | Human Baseline | Implication |
| --- | --- | --- | --- |
| Medical Knowledge (MedQA) | ~95% | ~60–85% (Expert) | Superhuman memorization and retrieval. |
| Competitive Coding (Codeforces) | ~89th percentile | Varies | Expert-level pattern matching and syntax generation. |
| Visual Reasoning (ARC-AGI) | ~53–75% | >85% (Non-expert) | Sub-human ability to adapt to novel rules/concepts. |
| Simple Logic (Salesforce SIMPLE) | ~60–70% | ~90–100% | Failures on trivial “distractor” tasks. |
This jaggedness confirms Sutskever’s view that models are essentially “overfitting” to the distribution of human knowledge without acquiring the general-purpose reasoning machinery that characterizes biological intelligence.14 They are not “smart” in the human sense; they are vast, searchable archives of crystallized intelligence. When faced with a problem that requires fluid intelligence (the ability to reason through a novel situation using first principles), they often hallucinate or fail catastrophically.
1.4 The Shift to Inference-Time Compute
The industry’s response to the pre-training plateau has been the pivot to “Inference-Time Compute,” exemplified by OpenAI’s o1 and o3 models. This represents a new scaling law: Accuracy scales with the amount of time the model spends “thinking” before answering.16
This mechanism differs fundamentally from pre-training. Instead of embedding knowledge into the weights (learning), the model uses its existing weights to search through a tree of possibilities, evaluate partial solutions, and backtrack when it detects an error, a process known as “System 2” reasoning.18
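A minimal sketch of one common inference-scaling pattern, best-of-N sampling against a verifier. Here `generate` and `score` are hypothetical stand-ins for the model’s sampler and for a learned or programmatic verifier (a process reward model, a unit-test harness, an answer checker):

```python
import random

def generate(prompt: str) -> str:
    """Hypothetical stand-in for sampling one chain-of-thought + answer."""
    return f"candidate-{random.randint(0, 9999)}"

def score(prompt: str, candidate: str) -> float:
    """Hypothetical verifier: higher = more likely correct."""
    return random.random()

def best_of_n(prompt: str, n: int) -> str:
    """Spend more inference compute (larger n) to buy more accuracy:
    sample n candidate solutions, keep the one the verifier ranks highest."""
    candidates = [generate(prompt) for _ in range(n)]
    return max(candidates, key=lambda c: score(prompt, c))

print(best_of_n("Prove that sqrt(2) is irrational.", n=64))
```

The knob is `n`: more samples buy more accuracy at roughly linear cost, which is precisely the new scaling axis described above.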
- The New Moore’s Law: Empirical studies in 2025 suggest that increasing test-time compute by orders of magnitude can yield performance gains equivalent to massive increases in pre-training scale, specifically for math, coding, and logic tasks.16
- The Reliability Bottleneck: However, even inference scaling has limits. A model cannot “reason” its way out of a lack of knowledge. If the fundamental world model is flawed, spending more time thinking just leads to “hallucinated reasoning”: elaborate, logical-sounding justifications for incorrect conclusions.20
This transition marks the end of the Age of Scaling as we knew it. We are no longer limited by how many GPUs we can string together for a training run, but by our ability to design architectures that can utilize inference compute effectively to generalize and self-correct. This is the dawn of the Age of Research.
Chapter 2: The Age of Research – Value Functions, Emotions, and the Biological Turn
Ilya Sutskever’s thesis for the “Age of Research” is not merely a call for new algorithms; it is a philosophical pivot toward biology. He argues that to bridge the gap between the “jagged” savant-like capabilities of LLMs and the robust, general intelligence of humans, we must solve the problem of Value Learning.
2.1 The Value Function Hypothesis
In Reinforcement Learning (RL), a value function estimates the total expected future reward an agent can achieve from a given state. It is the agent’s internal compass, telling it whether a situation is “good” or “bad” long before the final outcome is realized.22
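In standard notation, for a policy $\pi$, reward $r_t$, and discount factor $\gamma \in [0,1)$:

```latex
V^{\pi}(s) \;=\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t}\, r_{t+1} \,\middle|\, s_{0}=s\right]
```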
Sutskever posits a direct equivalence: Emotions are biological value functions.14
- The Biological Efficiency: Humans are incredibly sample-efficient. A teenager learns to drive a car in roughly 20 hours of practice. An RL agent, training from scratch, might require millions of simulated crashes to learn the same task. Why? Because the human comes equipped with a suite of “hard-coded” value functions (emotions) derived from millions of years of evolution.24 We have an intrinsic fear of high-velocity impact (pain/death), a desire for social approval (obeying traffic laws), and a curiosity about how the vehicle responds to inputs.
- The “Short-Circuit” Mechanism: Emotions allow biological agents to “short-circuit” search. We do not need to calculate the physics of a car crash to know we should avoid it; the feeling of fear prunes that branch of the decision tree instantly (see the toy sketch after this list).1 Current AI lacks this rich, dense, intrinsic feedback signal. It relies on sparse, extrinsic rewards (did I win the game?), which leads to brittle learning and “reward hacking.”
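A toy illustration of that pruning, with all numbers invented for the example: a cheap learned value estimate (the “fear” signal) vetoes branches before the search ever simulates them.

```python
# Toy depth-limited search. States are integers; state <= -3 is a "crash"
# region the agent avoids without ever rolling out to it.

FEAR_THRESHOLD = -5.0

def value(state: int) -> float:
    """Hypothetical learned value: sharply negative near the crash region."""
    return -10.0 if state <= -3 else float(state)

def expand(state: int) -> list[int]:
    return [state - 1, state + 1]

def search(state: int, depth: int) -> float:
    if depth == 0:
        return value(state)
    best = float("-inf")
    for child in expand(state):
        if value(child) < FEAR_THRESHOLD:
            continue  # emotional short-circuit: prune without simulating
        best = max(best, search(child, depth - 1))
    return best

print(search(0, depth=4))  # never expands the crash region
```

The point is the `continue`: the catastrophic branch costs one value lookup rather than a full rollout, which is exactly the sample-efficiency advantage attributed to emotions above.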
2.2 Hierarchical Reinforcement Learning (HRL) and Intrinsic Motivation
To operationalize this biological insight, the “Age of Research” will likely focus on Hierarchical Reinforcement Learning (HRL) and Intrinsic Motivation.25
The Architecture of Hierarchy:
Current LLMs largely operate as “flat” predictors. HRL proposes a tiered architecture:
- High-Level Policy (The Strategist): Sets abstract goals (“Write a secure Python script,” “Solve this math proof”) and operates on a slower timescale. This layer effectively manages the “value function.”
- Low-Level Policy (The Executor): Executes the atomic actions (generating specific tokens) to achieve the sub-goals set by the strategist.
This mirrors the brain’s organization, where the prefrontal cortex (System 2) engages in planning and inhibition, while the basal ganglia and motor cortex (System 1) handle execution.25 Research in 2025 has begun to show that RL can induce emergent hierarchies in LLMs, where certain attention heads specialize in long-term planning while others handle syntax.27
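Schematically, with both policies reduced to hypothetical stand-in functions operating at different timescales:

```python
# Minimal sketch of the two-tier HRL decomposition described above.

def high_level_policy(goal: str) -> list[str]:
    """The Strategist: decompose an abstract goal into sub-goals
    (slow timescale, guided by the value function)."""
    return [f"{goal} :: step {i}" for i in range(1, 4)]

def low_level_policy(subgoal: str) -> str:
    """The Executor: emit the atomic actions (tokens) for one sub-goal
    (fast timescale)."""
    return f"<tokens for '{subgoal}'>"

def act(goal: str) -> list[str]:
    trajectory = []
    for subgoal in high_level_policy(goal):        # plan
        trajectory.append(low_level_policy(subgoal))  # execute
    return trajectory

print(act("Write a secure Python script"))
```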
Intrinsic Motivation:
Sutskever’s thesis aligns with the work of researchers like Karl Friston (Active Inference), who argue that intelligence is driven by the minimization of “free energy” (surprise).28 Agents should be self-motivated to explore their environment to reduce uncertainty, rather than just chasing external rewards.
- Exploration vs. Exploitation: In the context of LLMs, intrinsic motivation means rewarding the model not just for getting the right answer, but for discovering new reasoning paths or reducing uncertainty about a complex topic.30 This guards against “mode collapse,” where the model simply repeats the safest, most average response (sycophancy).31
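A minimal sketch of a count-based intrinsic bonus of the kind referenced above; the `BETA` coefficient and the state keys are illustrative:

```python
from collections import Counter
from math import sqrt

# Count-based intrinsic reward: pay the agent a novelty bonus for visiting
# new states (here, new reasoning paths), decaying as 1/sqrt(visits).
visits: Counter[str] = Counter()
BETA = 0.5  # illustrative exploration coefficient

def reward(state: str, extrinsic: float) -> float:
    visits[state] += 1
    intrinsic = BETA / sqrt(visits[state])  # bonus shrinks with repetition
    return extrinsic + intrinsic

print(reward("proof-sketch-A", extrinsic=0.0))  # 0.5: novel path, full bonus
print(reward("proof-sketch-A", extrinsic=0.0))  # ~0.354: repetition pays less
```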
2.3 The Neurosymbolic Renaissance
The reliability crisis has also revitalized Neurosymbolic AI. The “pure scaling” hypothesis assumed that neural networks would eventually “grok” logic and symbolic reasoning perfectly. The persistence of the jagged frontier suggests otherwise.32
- Hybrid Architectures: The “Age of Research” will likely see the integration of neural networks (which excel at intuition, pattern recognition, and “System 1” tasks) with symbolic solvers (which excel at logic, verification, and “System 2” tasks).
- Reliable Reasoning: For example, instead of asking an LLM to simulate a math calculation (and potentially hallucinate 2+2=5), a neurosymbolic system would use the LLM to parse the problem and formulate a symbolic equation, which is then passed to a deterministic solver (like Python or Lean) for execution (a minimal sketch follows this list).34 This guarantees correctness for the logical portion of the task, addressing the “reliability gap” that plagues current enterprise AI.
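A minimal sketch of that division of labor, with a hypothetical `llm_parse` standing in for the neural half and SymPy as the deterministic symbolic solver:

```python
import sympy as sp

def llm_parse(problem: str) -> str:
    """Hypothetical neural step: the LLM translates natural language into a
    symbolic expression. Hard-coded here for illustration."""
    # "A number tripled, plus four, equals nineteen. What is the number?"
    return "3*x + 4 - 19"

x = sp.Symbol("x")
expr = sp.sympify(llm_parse("A number tripled, plus four, equals nineteen."))
solution = sp.solve(expr, x)  # deterministic, verifiable symbolic step
print(solution)               # [5] -- guaranteed correct given the parse
```

The neural half can still mis-parse, but the solver cannot mis-solve; the residual error is confined to the translation step, which is far easier to audit.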
2.4 Unlearning and the Safety Imperative
A critical, often overlooked component of Sutskever’s safety thesis is Unlearning.36 If we build a superintelligence, we must be able to selectively excise hazardous capabilities (e.g., bioweapon design) without lobotomizing its general reasoning ability.
- The Technical Challenge: Unlearning in deep neural networks is notoriously difficult. “Catastrophic forgetting” often occurs: removing one concept damages the representations of adjacent, benign concepts.
- Reasoning Disentanglement: Recent breakthroughs in 2025 have focused on “disentangling” factual knowledge from reasoning capabilities.38 The goal is to create a model that understands the principles of biology (reasoning) but has “forgotten” the specific gene sequences of smallpox (knowledge). This capability is a prerequisite for the safe deployment of any superintelligent system (a schematic of the underlying objective follows this list).
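A schematic of the standard forget/retain objective, reduced to a toy linear model; the data, the model, and the `LAMBDA` trade-off weight are all illustrative, not a real unlearning recipe:

```python
import torch
import torch.nn as nn

# Gradient ASCENT on the "forget" batch combined with ordinary descent on a
# "retain" batch, so one mapping is erased without wrecking its neighbors.
torch.manual_seed(0)
model = nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.05)
mse = nn.MSELoss()

x_forget, y_forget = torch.randn(8, 4), torch.randn(8, 1)  # hazardous facts
x_retain, y_retain = torch.randn(8, 4), torch.randn(8, 1)  # benign knowledge
LAMBDA = 1.0  # illustrative weight on preservation vs. forgetting

for _ in range(100):
    opt.zero_grad()
    loss = (-mse(model(x_forget), y_forget)             # unlearn: ascend
            + LAMBDA * mse(model(x_retain), y_retain))  # preserve: descend
    loss.backward()
    opt.step()
```

The hard part, which this sketch deliberately hides, is that in a real network the forget and retain sets share representations, which is exactly why disentanglement is the research frontier.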
Chapter 3: The Empirical Reality of 2025 – Reasoning, Reliability, and Benchmarks
The theoretical shift to the “Age of Research” is mirrored by the messy, complex empirical reality of AI in 2025. This chapter analyzes the performance of the latest “reasoning models” (o1, o3, R1) to validate the claims of jaggedness and the potential of inference scaling.
3.1 The Rise of Reasoning Models: o1 and o3
The release of OpenAI’s o1 and o3 series marked the first successful productization of “System 2” AI. These models do not just predict the next token; they generate a hidden “chain of thought”, often thousands of tokens long, before producing a user-facing response.
Benchmark Performance Analysis:
- Math & Code: On the AIME (American Invitational Mathematics Examination) benchmark, o1 improved performance from GPT-4o’s ~12% to over 83%.40 In competitive coding (Codeforces), o1 reached the 89th percentile, a massive leap over previous models.
- The Mechanism: This performance is achieved through Reinforcement Learning from Verifiable Rewards (RLVR). The model is trained to maximize a clear, verifiable outcome (did the code compile? is the math answer correct?) by exploring different reasoning paths in its chain of thought.41
3.2 The Persistence of Jaggedness: The ARC-AGI Failure
Despite these triumphs, the ARC-AGI benchmark remains a stubborn outlier. ARC (Abstraction and Reasoning Corpus), developed by François Chollet, tests a system’s ability to learn new logical rules from just 3-4 examples (few-shot learning) and apply them to a test case.43
- The Gap: While o3 pushed state-of-the-art on ARC to ~75% (using massive inference compute), standard versions of reasoning models often score much lower (~50-60%), while humans easily score >85%.44
- Implication: This suggests that while inference scaling improves performance on tasks with known rules (math, code), it has not fully solved the problem of discovering new rules from sparse data. The models are still relying on memorized reasoning templates rather than fluid intelligence. This validates Sutskever’s claim that current approaches “generalize dramatically worse than people”.14
3.3 Reliability and the “Agentic” Crisis
For AI to be economically transformative, it must be reliable. An autonomous agent that navigates a computer to book a flight or write software cannot fail 10% of the time.
- Compounding Errors: Empirical studies on “long-horizon” tasks (tasks requiring 50+ steps) show that reliability collapses exponentially. If a model has a 98% success rate per step, its probability of succeeding at a 50-step task is $0.98^{50} \approx 36\%$ (see the sketch after this list).12
- Reasoning Instability: Papers from 2025 highlight a phenomenon of “reasoning instability.” When prompted with slightly different phrasings or “distractor” facts, reasoning models can become confused, hallucinate steps, or fall into infinite loops.21
- Sycophancy: A major failure mode in which the model biases its reasoning to agree with the user’s (potentially incorrect) premise. While o1 shows some ability to “self-correct,” this behavior is not robust and can be easily overridden by adversarial prompting.46
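The compounding-error arithmetic from the first item above, made explicit:

```python
# Per-step reliability compounds multiplicatively over long-horizon tasks.
for p_step in (0.99, 0.98, 0.95):
    for n_steps in (10, 50, 100):
        print(f"p={p_step:.2f}, {n_steps:>3} steps -> "
              f"{p_step ** n_steps:6.1%} end-to-end success")
```

Even a 99%-reliable step collapses to roughly 37% success over 100 steps, which is why agentic deployment demands verification and recovery machinery, not just better base models.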
3.4 The Economics of Inference Scaling
The shift to reasoning models introduces a new economic paradigm. “Intelligence” is no longer a fixed property of the model weights; it is a variable cost.
- Cost vs. Accuracy: Users can now “buy” higher accuracy by paying for more inference time (more “thinking” tokens). This creates a tiered market for intelligence: cheap, fast System 1 models for basic tasks, and expensive, slow System 2 models for high-value reasoning.47
- Latency Barriers: The latency of reasoning models (often 10-30 seconds per query) limits their use in real-time applications (voice assistants, robotics). This creates a bifurcation in the market between “interactive AI” and “asynchronous AI” (agents that work in the background).
Chapter 4: The Institutional Architecture of AGI – SSI vs. the Field
The “Age of Research” demands not just new algorithms, but new institutional structures. The perverse incentives of the commercial “arms race” have arguably degraded the quality of safety research. Safe Superintelligence Inc. (SSI) represents a radical experiment in organizational design.
4.1 The SSI “Straight Shot” Thesis
SSI’s strategy is defined by its “straight shot” mission: “One goal and one product: a safe superintelligence.”49 This is a rejection of the dominant “iterative deployment” model practiced by OpenAI, Google, and Anthropic.
The Logic of Insulation:
- Commercial Decoupling: By refusing to release intermediate products (chatbots, APIs), SSI aims to insulate its research team from short-term commercial pressures. There is no product team clamoring for features, no sales team demanding revenue, and no user base to support.
- Safety as Foundation: Sutskever argues that safety cannot be “patched” onto a finished model. It must be integral to the training process from day one. The “straight shot” allows SSI to spend years on fundamental safety research (value functions, unlearning) that might degrade short-term capability benchmarks but is essential for the final system.50
Financial Structure and Valuation:
SSI raised approximately $1 billion at a $5 billion valuation (and reportedly later rounds at $32 billion valuation) without a product.51 This relies on a Venture Capital thesis that views AGI as a binary, winner-takes-all event. Investors are betting on the terminal value of the company owning a share of the first safe superintelligence rather than discounted cash flows from SaaS subscriptions. This structure mimics the early days of DeepMind, but with significantly higher stakes.53
4.2 Comparative Analysis of Lab Strategies
| Strategic Dimension | SSI (Sutskever) | OpenAI (Altman) | Anthropic (Amodei) | Meta (LeCun) |
| --- | --- | --- | --- | --- |
| Primary Goal | Safe Superintelligence (Terminal) | AGI + Product Dominance | Reliable, Steerable AI Systems | Open Science / AGI via World Models |
| Scaling Philosophy | Scaling is plateauing; Research First | Scaling + Post-training + Product Flywheel | Scaling with strict Safety Cases (RSP) | LLMs are insufficient; JEPA/World Models |
| Commercial Strategy | Zero Revenue (Straight Shot) | Aggressive B2B/B2C (ChatGPT, API) | Enterprise Focus (Claude) | Open Weights (Commoditize the layer) |
| Safety Approach | Fundamental Alignment (Value Functions) | RLHF, Red Teaming, Post-hoc | Constitutional AI, RLAIF | Transparency, Architectural grounding |
| Transparency | Opaque (Stealth) | Moderate (System Cards, no weights) | Moderate (Research papers, no weights) | High (Open Weights) |
4.3 Critique of the SSI Model
While philosophically pure, the SSI model faces significant critiques:
- Feedback Starvation: The “iterative deployment” model provides a crucial safety signal: real-world feedback. By exposing models to millions of users, labs like OpenAI learn about failure modes (jailbreaks, bias, hallucinations) that are impossible to simulate in a lab. SSI risks building a “laboratory superintelligence” that is theoretically safe but brittle when it encounters the chaotic reality of the world.54
- The “Shadow” Risk: SSI’s opacity creates a “black box” risk. Without public releases or external audits, the public must trust that SSI is actually prioritizing safety and not just building a more powerful weapon in secret. This lack of verifiability is a major governance challenge.55
- Talent Retention: Can a lab retain world-class talent for 5-10 years without the dopamine hit of shipping products? The “Age of Research” requires a different kind of researcher, one motivated by long-term scientific discovery rather than engineering glory.
Chapter 5: Governance in the Shadow of Superintelligence
The rise of “research-first” labs like SSI and the shift to inference-time capabilities expose critical gaps in the current global AI governance architecture. Regulations designed for the “Age of Scaling”, focused on training compute thresholds and market placement, are becoming obsolete.
5.1 The Regulatory Blind Spot: Research Exemptions
Both the EU AI Act and US Executive Order 14110 contain loopholes that could allow a lab like SSI to develop superintelligence with minimal oversight.
- EU AI Act – Article 2(6): This article explicitly exempts AI systems developed “specifically for the sole purpose of scientific research and development”.56 Since SSI does not “place on the market” or “put into service” its models, it could theoretically train a system exceeding all safety thresholds without triggering high-risk obligations until the moment of deployment. This creates a “shadow phase” of development where risk is highest but visibility is lowest.57
- “Putting into Service”: The definition of “putting into service” is ambiguous. Does using a superintelligence internally to write code or solve physics problems count? If the model is not sold to third parties, it likely falls outside the Act’s primary scope.58
5.2 The “MAIM” Framework: A Geopolitical Stability Model
The document “Superintelligence Strategy: Expert Version” introduces the concept of Mutual Assured AI Malfunction (MAIM) as a stability framework for the AGI era.60
- The Deterrence Logic: In a world where AGI confers decisive strategic advantage, the temptation for a state (or lab) to “break out” is immense. MAIM posits that stability is maintained by the credible threat of sabotage. If Actor A attempts to launch an unverified superintelligence, Actor B (a rival state) creates a credible threat to disable Actor A’s datacenters (cyber-kinetically) or poison the model.60
- SSI as a Target: The centralization of SSI (a single lab with a single “straight shot” project) makes it a high-value target for this dynamic. Unlike a deployed product with distributed weights, a centralized superintelligence is vulnerable to decapitation strikes or espionage. This raises the question of physical and cyber security for research-first labs: they effectively become national security assets.61
5.3 Governance Recommendations: Closing the Gaps
To govern the “Age of Research” effectively, policymakers must pivot from market-based regulation to capability-based regulation.
- Regulate Internal Deployment: The concept of “putting into service” must be expanded to include significant internal use of frontier models. If a lab uses an AI to automate 50% of its own R&D, that system is effectively “in service” and poses systemic risks (e.g., self-replication, loss of control) even if not sold.55
- Inference Compute Thresholds: Regulations like the US EO focus on training compute (>10^26 FLOPs). As inference scaling becomes dominant, regulators must also track and potentially limit the inference capacity deployed for specific tasks. A model trained with 10^25 FLOPs could become dangerous if allowed to “think” for days using massive inference resources.62
- Mandatory Safety Cases for Research Runs: Labs should be required to submit a “safety case” (a formal argument verifying control mechanisms) before commencing training runs or large-scale inference experiments that cross certain capability thresholds (e.g., biological design capability), regardless of commercial intent.64
Chapter 6: Strategic Roadmap – Navigating the Age of Research
Based on the synthesis of Sutskever’s thesis, empirical data, and the governance landscape, this report outlines a strategic agenda for the next 3-5 years (2025-2030).
6.1 R&D Portfolio Blueprint
For AI labs and research organizations, the “Age of Research” demands a reallocation of resources from pure scaling to three core tracks:
- Track 1: Robust Value Learning (The “Emotion” Architecture)
  - Objective: Develop architectures that learn intrinsic value functions capable of generalizing to out-of-distribution environments.
  - Methodology: Investigate Homeostatic Reinforcement Learning (agents motivated by maintaining internal stability) and Hierarchical RL.65
  - Key Metric: Zero “reward hacking” on adversarial environments; sample efficiency comparable to biological baselines (e.g., learning a game in minutes, not days).
- Track 2: Reliable Inference (The “Reasoning” Architecture)
  - Objective: Eliminate the “jagged frontier” and hallucination in reasoning tasks.
  - Methodology: Integrate Process Reward Models (PRMs) that verify reasoning steps rather than outcomes. Combine LLMs with formal verification tools (Neurosymbolic AI) to guarantee logic correctness.34
  - Key Metric: >95% performance on ARC-AGI; >99% reliability on long-horizon agentic tasks (SWE-bench Verified).
- Track 3: Safety Engineering (The “Unlearning” Architecture)
  - Objective: Create “firewalls” within the model that prevent the retrieval or synthesis of hazardous knowledge.
  - Methodology: Develop Targeted Reasoning Unlearning (TRU) techniques that disentangle factual knowledge from reasoning capabilities.38
  - Key Metric: Proven inability to assist in bio-weapon design while maintaining high performance on biology PhD exams.
6.2 Enterprise & Investment Strategy
- For Enterprises: Do not wait for “GPT-5” to solve all problems. The “jaggedness” of current models is a feature, not a bug, of the current paradigm. Adopt a Model-Agnostic / Router strategy (sketched after this list): use specialized reasoning models (o3) for logic-heavy tasks and faster, cheaper models (GPT-4o, Haiku) for volume tasks.47 Invest heavily in Evaluation Infrastructure: build proprietary test sets that reflect your specific business edge cases.68
- For Investors: The “Age of Research” implies longer timelines and higher risk. Capital should flow to companies building the infrastructure of reliability (evaluations, neurosymbolic tools, safety verification) rather than just foundational model labs. The “straight shot” bets (like SSI) are binary options: they will likely return either 1000x or zero.
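A minimal sketch of such a router. The keyword classifier and the model labels are placeholders; in practice the classifier would itself be a small model or a calibrated heuristic:

```python
# Route logic-heavy queries to an expensive reasoning model and volume
# queries to a cheap, fast one. Labels are illustrative placeholders.
REASONING_MODEL = "o3"      # slow, expensive, System 2
FAST_MODEL = "gpt-4o"       # cheap, low-latency, System 1

def needs_reasoning(query: str) -> bool:
    """Hypothetical task classifier based on trigger keywords."""
    return any(k in query.lower() for k in ("prove", "debug", "plan", "why"))

def route(query: str) -> str:
    return REASONING_MODEL if needs_reasoning(query) else FAST_MODEL

print(route("Summarize this email"))       # -> gpt-4o
print(route("Debug this race condition"))  # -> o3
```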
6.3 Conclusion
Ilya Sutskever’s declaration signals the end of the “easy money” era of AI. The physics of scaling are encountering the biology of intelligence. The path forward is no longer a straight line of exponential compute; it is a branching tree of difficult scientific questions about the nature of reasoning, values, and reliability. Whether SSI succeeds in its “straight shot” or not, the industry has irrevocably shifted. The winners of the next decade will not be those with the biggest clusters, but those who can engineer the “ghost in the machine”: the value functions and reasoning structures that turn a statistical predictor into a reliable, safe, and truly intelligent agent.
Works Cited
1. Unpacking Dwarkesh’s Ilya Sutskever Interview on AGI, ASI, and How to Build Both Safely, accessed December 1, 2025, https://www.theneuron.ai/explainer-articles/unpacking-dwarkeshs-ilya-sutskever-interview-on-agi-asi-and-how-to-build-both-safely
2. Ilya Sutskever – We’re moving from the age of scaling to the age of research : r/mlscaling, accessed December 1, 2025, https://www.reddit.com/r/mlscaling/comments/1p6oh2b/ilya_sutskever_were_moving_from_the_age_of/
3. How Scaling Laws Drive Smarter, More Powerful AI – NVIDIA Blog, accessed December 1, 2025, https://blogs.nvidia.com/blog/ai-scaling-laws/
4. Ilya Sutskever Declares the Scaling Era Dead. His $3 Billion Bet Says Research Will Win., accessed December 1, 2025, https://www.implicator.ai/ilya-sutskever-declares-the-scaling-era-dead-his-3-billion-bet-says-research-will-win/
5. Ilya Sutskever – The age of scaling is over : r/singularity – Reddit, accessed December 1, 2025, https://www.reddit.com/r/singularity/comments/1p6imu7/ilya_sutskever_the_age_of_scaling_is_over/
6. Will we run out of data? Limits of LLM scaling based on human-generated data – arXiv, accessed December 1, 2025, https://arxiv.org/pdf/2211.04325
7. Ilya Sutskever Calls Peak Data and the End of Pretraining – Apple Podcasts, accessed December 1, 2025, https://podcasts.apple.com/gb/podcast/ilya-sutskever-calls-peak-data-and-the-end-of-pretraining/id1680633614?i=1000680634093
8. How does the training data differ between SLMs and LLMs – Macgence AI, accessed December 1, 2025, https://macgence.com/blog/how-does-the-training-data-differ-between-slms-and-llms/
9. Wall, Holder, Holder Wall: Breaking the Barriers of AI Progress? | by Kan Yuenyong, accessed December 1, 2025, https://sikkha.medium.com/wall-holder-holder-wall-breaking-the-barriers-of-ai-progress-e473643244f4
10. A brief history of LLM Scaling Laws and what to expect in 2025, accessed December 1, 2025, https://www.jonvet.com/blog/llm-scaling-in-2025
11. Ilya Sutskever: We’re moving from the age of scaling to the age of research | Hacker News, accessed December 1, 2025, https://news.ycombinator.com/item?id=46048125
12. International AI Safety Report.pdf, https://drive.google.com/open?id=16-CNBUtNVkqzn_t-bvk8Mvi079AgLDag
13. OpenAI o1 Results on ARC-AGI-Pub (tldr: same score as Claude 3.5 Sonnet) – Reddit, accessed December 1, 2025, https://www.reddit.com/r/mlscaling/comments/1fg6id5/openai_o1_results_on_arcagipub_tldr_same_score_as/
14. Ilya Sutskever’s 1.5-Hour Conversation After Leaving OpenAI: AGI Achievable in Just 5 Years – 36氪, accessed December 1, 2025, https://eu.36kr.com/en/p/3570122305223553
15. The Paradox of Jagged Intelligence in AI – gettectonic.com, accessed December 1, 2025, https://gettectonic.com/the-paradox-of-jagged-intelligence-in-ai/
16. Inference-Time Scaling for Complex Tasks: Where We Stand and What Lies Ahead – arXiv, accessed December 1, 2025, https://arxiv.org/html/2504.00294v1
17. Scaling LLM Test-Time Compute Optimally Can be More Effective than Scaling Parameters for Reasoning | OpenReview, accessed December 1, 2025, https://openreview.net/forum?id=4FWAwZtd2n
18. [2502.17419] From System 1 to System 2: A Survey of Reasoning Large Language Models – arXiv, accessed December 1, 2025, https://arxiv.org/abs/2502.17419
19. System 2 Reasoning for Human-AI Alignment: Generality and Adaptivity via ARC-AGI – arXiv, accessed December 1, 2025, https://arxiv.org/html/2410.07866
20. Reasoning Models are Test Exploiters: Rethinking Multiple Choice – arXiv, accessed December 1, 2025, https://arxiv.org/html/2507.15337v1
21. Short-Path Prompting in LLMs: Analyzing Reasoning Instability and Solutions for Robust Performance – arXiv, accessed December 1, 2025, https://arxiv.org/html/2504.09586v1
22. Emotions as Computations – PMC – PubMed Central – NIH, accessed December 1, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC9805532/
23. Quote: Ilya Sutskever – Safe Superintelligence – Global Advisors | Quantified Strategy Consulting, accessed December 1, 2025, https://globaladvisors.biz/2025/11/26/quote-ilya-sutskever-safe-superintelligence-2/
24. Quote: Ilya Sutskever – Safe Superintelligence – Global Advisors | Quantified Strategy Consulting, accessed December 1, 2025, https://globaladvisors.biz/2025/11/28/quote-ilya-sutskever-safe-superintelligence-4/
25. Emergent Hierarchical Reasoning in LLMs through Reinforcement Learning – arXiv, accessed December 1, 2025, https://arxiv.org/html/2509.03646v1
26. Navigate the Unknown: Enhancing LLM Reasoning with Intrinsic Motivation Guided Exploration | OpenReview, accessed December 1, 2025, https://openreview.net/forum?id=mlzh3jX6gW
27. Emergent Hierarchical Reasoning in LLMs through Reinforcement Learning – arXiv, accessed December 1, 2025, https://arxiv.org/abs/2509.03646
28. From Artificial Intelligence to Active Inference: The Key to True AI and 6G World Brain [Invited] – arXiv, accessed December 1, 2025, https://arxiv.org/html/2505.10569v1
29. Reinforcement Learning or Active Inference? – PMC – NIH, accessed December 1, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC2713351/
30. Navigate the Unknown: Enhancing LLM Reasoning with Intrinsic Motivation Guided Exploration – arXiv, accessed December 1, 2025, https://arxiv.org/html/2505.17621v5
31. Motivating Exploration in LLM Reasoning with Count-based Intrinsic Rewards – arXiv, accessed December 1, 2025, https://arxiv.org/html/2510.16614v1
32. The Future Is Neuro-Symbolic: Where Has It Been, and Where Is It Going? – Rivista AI, accessed December 1, 2025, https://www.rivista.ai//uploads/2025/11/Belle_Marcus_AAAI-2.pdf
33. AGI Systems – ML Systems Textbook, accessed December 1, 2025, https://www.mlsysbook.ai/contents/core/frontiers/frontiers.html
34. Neurosymbolic Path To Artificial General Intelligence – AI CERTs, accessed December 1, 2025, https://www.aicerts.ai/news/neurosymbolic-path-to-artificial-general-intelligence/
35. A Roadmap towards Neurosymbolic Approaches in AI Design – IEEE Xplore, accessed December 1, 2025, https://ieeexplore.ieee.org/iel8/6287639/6514899/11192262.pdf
36. Measuring Chain of Thought Faithfulness by Unlearning Reasoning Steps – ACL Anthology, accessed December 1, 2025, https://aclanthology.org/2025.emnlp-main.504.pdf
37. [2506.12963] Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills – arXiv, accessed December 1, 2025, https://arxiv.org/abs/2506.12963
38. Explainable LLM Unlearning through Reasoning – OpenReview, accessed December 1, 2025, https://openreview.net/forum?id=wec4qy2XIF
39. Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills – arXiv, accessed December 1, 2025, https://arxiv.org/html/2506.12963v2
40. Learning to reason with LLMs | OpenAI, accessed December 1, 2025, https://openai.com/index/learning-to-reason-with-llms/
41. Accelerating RL for LLM Reasoning with Optimal Advantage Regression – Kempner Institute, accessed December 1, 2025, https://kempnerinstitute.harvard.edu/research/deeper-learning/accelerating-rl-for-llm-reasoning-with-optimal-advantage-regression/
42. [2509.25300] Scaling Behaviors of LLM Reinforcement Learning Post-Training: An Empirical Study in Mathematical Reasoning – arXiv, accessed December 1, 2025, https://arxiv.org/abs/2509.25300
43. What is ARC-AGI? – ARC Prize, accessed December 1, 2025, https://arcprize.org/arc-agi
44. Leaderboard – ARC Prize, accessed December 1, 2025, https://arcprize.org/leaderboard
45. Persistent Instability in LLM’s Personality Measurements: Effects of Scale, Reasoning, and Conversation History – arXiv, accessed December 1, 2025, https://arxiv.org/html/2508.04826v1
46. Performance analysis of large language models Chatgpt-4o, OpenAI O1, and OpenAI O3 mini in clinical treatment of pneumonia: a comparative study – PubMed Central, accessed December 1, 2025, https://pmc.ncbi.nlm.nih.gov/articles/PMC12181206/
47. Compare GPT-4o vs GPT-4o1 vs O1-Mini: How to Choose – Galileo AI, accessed December 1, 2025, https://galileo.ai/blog/gpt-4o-vs-gpt-4o1-vs-o1-mini
48. Analysis: OpenAI o1 vs GPT-4o vs Claude 3.5 Sonnet – Vellum AI, accessed December 1, 2025, https://www.vellum.ai/blog/analysis-openai-o1-vs-gpt-4o
49. Safe Superintelligence Inc., accessed December 1, 2025, https://ssi.inc/
50. The AI Startup Safe Superintelligence (SSI) pursues a “Straight Shot” approach to developing a safe superintelligence – Xpert.Digital, accessed December 1, 2025, https://xpert.digital/en/safe-superintelligence/
51. Can You Invest in Safe Superintelligence Pre-IPO? Here’s What to Know | The Motley Fool, accessed December 1, 2025, https://www.fool.com/investing/how-to-invest/stocks/how-to-invest-in-safe-superintelligence-stock/
52. OpenAI co-founder Sutskever raises $2 billion for AI startup with no product – The Decoder, accessed December 1, 2025, https://the-decoder.com/openai-co-founder-sutskever-raises-2-billion-for-ai-startup-with-no-product/
53. Ilya Sutskever: How Far Off Can Results Be After Trillion-Dollar AI Bet by Just Throwing in Computing Power and Doing Research? – 36氪, accessed December 1, 2025, https://eu.36kr.com/en/p/3569150467119488
54. Would you consider working for Safe Superintelligence Inc. (SSI) as an AI researcher, given their ‘singular focus’ on safety and powerful AI systems? – Quora, accessed December 1, 2025, https://www.quora.com/Would-you-consider-working-for-Safe-Superintelligence-Inc-SSI-as-an-AI-researcher-given-their-singular-focus-on-safety-and-powerful-AI-systems
55. AI Behind Closed Doors: a Primer on The Governance of Internal Deployment – Apollo Research, accessed December 1, 2025, https://www.apolloresearch.ai/research/ai-behind-closed-doors-a-primer-on-the-governance-of-internal-deployment/
56. Article 2: Scope | EU Artificial Intelligence Act, accessed December 1, 2025, https://artificialintelligenceact.eu/article/2/
57. In Search of the Lost Research Exemption: Reflections on the AI Act – Oxford Academic, accessed December 1, 2025, https://academic.oup.com/grurint/article/74/10/903/8238058
58. European Commission Guidelines on Prohibited AI Practices under the EU Artificial Intelligence Act | Inside Privacy, accessed December 1, 2025, https://www.insideprivacy.com/artificial-intelligence/european-commission-guidelines-on-prohibited-ai-practices-under-the-eu-artificial-intelligence-act/
59. EUROPEAN COMMISSION Brussels, 4.2.2025 C(2025) 884 final ANNEX ANNEX to the Communication to the Commission Approval of the co, accessed December 1, 2025, https://aeur.eu/f/fct
60. Superintelligence_Strategy_Expert.pdf, https://drive.google.com/open?id=1JVPc3ObMP1L2a53T5LA1xxKXM6DAwEiC
61. Inference Scaling Laws: An Empirical Analysis of Compute-Optimal Inference for LLM Problem-Solving – ICLR Proceedings, accessed December 1, 2025, https://proceedings.iclr.cc/paper_files/paper/2025/file/8c3caae2f725c8e2a55ecd600563d172-Paper-Conference.pdf
62. Inference Scaling Reshapes AI Governance – Toby Ord, accessed December 1, 2025, https://www.tobyord.com/writing/inference-scaling-reshapes-ai-governance
63. Pistillo & Villalobos, Defending Compute Thresholds Against Legal Loopholes – arXiv, accessed December 1, 2025, https://www.arxiv.org/pdf/2502.00003
64. AI models can be dangerous before public deployment – METR, accessed December 1, 2025, https://metr.org/blog/2025-01-17-ai-models-dangerous-before-public-deployment/
65. Homeostatic Agent for General Environment – Semantic Scholar, accessed December 1, 2025, https://www.semanticscholar.org/paper/Homeostatic-Agent-for-General-Environment-Yoshida/34eaf6e614b3e42c63465c22e669ef543cdbee82
66. Emergence of integrated behaviors through direct optimization for homeostasis – ResearchGate, accessed December 1, 2025, https://www.researchgate.net/publication/380456259_Emergence_of_integrated_behaviors_through_direct_optimization_for_homeostasis
67. Process Reward Models – Stephen Diehl, accessed December 1, 2025, https://www.stephendiehl.com/posts/process_reward/
68. LLM benchmarks in 2025: What they prove and what your business actually needs – LXT | AI, accessed December 1, 2025, https://www.lxt.ai/blog/llm-benchmarks/
69. Reasoning Model Unlearning: Forgetting Traces, Not Just Answers, While Preserving Reasoning Skills – arXiv, accessed December 1, 2025, https://arxiv.org/html/2506.12963v1