Why Smart People Stay Stuck: A Cognitive Architecture for Diagnosis and Intervention

How AI research reveals the hidden systems failures behind procrastination, paralysis, and lives that won't move forward—and what to actually do about it.

You know this person.

Maybe they're 28, engineering degree, genuinely smart—but somehow still living at their parents' place, sleeping until noon, talking about the business they'll "eventually" start. They've got a guitar gathering dust, a gym membership they've used twice, a language learning app they downloaded three years ago. They spend most evenings scrolling, watching streams, or lost in fiction. When you ask what's going on, they've got reasons: they're tired, they're figuring things out, they'll get serious soon.

Or maybe it's a different version. Someone who's objectively successful—good job, good salary—but paralyzed when it comes to the thing they actually want. They've researched every possible approach to starting that side project, writing that book, making that career change. They can articulate every risk, every edge case, every reason to wait. They're still "preparing." They've been preparing for years.

Or maybe—if you're being honest—it's you. You know exactly what you should do. You've made the plan. You've set the goals. And somehow, when it's time to execute, you find yourself doing something else entirely. Not because you don't care. You care intensely. You just... can't seem to make yourself do it.

From the outside, it looks like laziness. Lack of discipline. Weak willpower. The advice writes itself: Just try harder. Make a schedule. Think about your future. You're so smart—why can't you just...

But here's what I've learned from working at the intersection of AI systems and human organizations: that advice almost never works. Not because the people are broken. Because the advice misunderstands the problem.

Intelligence is rarely the bottleneck. The smartest people I know are often the most stuck. They understand exactly what they should do—and still can't make themselves do it.

This paradox dissolves once you understand the architecture.

In 2022, Yann LeCun—Meta's Chief AI Scientist and one of the founding figures of deep learning—published a paper titled "A Path Towards Autonomous Machine Intelligence." It wasn't about large language models or chatbots. It was about something more fundamental: the cognitive architecture required for genuine autonomous intelligence.

What struck me wasn't just its implications for AI. It was how precisely it maps onto human psychology—and how it explains why capable people fail to execute, why "just try harder" never works, and what actually does.

This article will give you three things:

  1. A diagnostic framework to identify exactly where your system (or someone else's) is broken
  2. An intervention toolkit with techniques mapped to specific failure modes
  3. A sequencing strategy, because the order in which you apply fixes matters as much as the fixes themselves

Whether you're trying to understand your own stuckness, help a friend who can't seem to move forward, or build AI systems that actually work—the same architecture applies.


The Architecture: What LeCun Proposed

LeCun's core argument is that current AI approaches—both large language models and reinforcement learning—are missing something fundamental. They lack world models: internal simulations of reality that allow predicting outcomes before acting.

A cat can navigate a new room after seeing it once. Current AI needs millions of training examples. The difference isn't processing power. It's architecture.

Here are the core components of his proposed system:

| Component | Function | Human Equivalent |
| --- | --- | --- |
| World Model | Predicts "if I do X, then Y happens" | Mental model of reality |
| Actor | Proposes candidate actions | Decision-making, action selection |
| Critic | Evaluates predicted outcomes (good/bad?) | Self-assessment, cost evaluation |
| Configurator | Sets goals, decomposes into subgoals | Executive function, goal-setting |
| Intrinsic Motivation | Hardwired drives (curiosity, pain avoidance) | Basic drives, what pulls you |
| Perception | Encodes sensory input | How you process the world |
[Figure: LeCun's proposed architecture for autonomous machine intelligence]

The key insight is how these components interact. The Configurator sets high-level goals and tells the Critic what to care about. The Actor proposes actions, the World Model simulates their outcomes, and the Critic evaluates whether those outcomes achieve the goal. This loop runs many times—entirely in imagination—before a single action is taken in reality.

This is why humans are so sample-efficient. We don't need to try everything to learn. We simulate.
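As a concrete sketch, the loop can be written in a few lines of Python. Everything here is illustrative (the function names and the toy one-dimensional world are mine, not LeCun's): the Actor proposes, the World Model predicts, the Critic evaluates, and only the winning action touches reality.

```python
import random

def plan(state, propose, predict, evaluate, n_candidates=100):
    """Run the imagination loop many times (propose -> predict -> evaluate),
    then return the single lowest-cost action. Nothing touches the real
    world until the loop finishes."""
    best_action, best_cost = None, float("inf")
    for _ in range(n_candidates):
        action = propose(state)             # Actor: candidate action
        predicted = predict(state, action)  # World Model: imagined outcome
        cost = evaluate(predicted)          # Critic: cost of that outcome
        if cost < best_cost:
            best_action, best_cost = action, cost
    return best_action  # only now do we act

# Toy world: state is a position on a line, the goal is to reach 10.
propose = lambda s: random.choice([-1, 0, 1])  # step left, stay, step right
predict = lambda s, a: s + a                   # trivial world model
evaluate = lambda s_next: abs(10 - s_next)     # distance to the goal

print(plan(0, propose, predict, evaluate))  # -> 1 (step toward the goal)
```

A hundred imagined candidates, one executed action: the ratio of simulation to action is the point.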


The Human Parallel: Same Architecture, Same Failures

Map this to your own experience:

Configurator: "I want to build a successful business, get fit, and find a partner."

Actor: "I could work on my demo tonight, or go to the gym, or update my dating profile."

World Model: "If I work on the demo, I'll make progress but I'm tired. If I scroll Twitter, I'll feel better short-term but guilty later."

Critic: "The Twitter option feels lower cost right now..."

Substrate (the body running all of this): Already depleted from poor sleep, blood sugar crashed, willpower tank empty.

The action that executes? Twitter. Not because you're lazy. Because the system computed that as the lowest-cost action given its current state.

[Figure: The hierarchical control chain: lower levels have veto power]

The Hierarchical Control Chain

This architecture operates across multiple time scales:

  Level 0: Substrate. The body and brain running everything: sleep, energy, neurochemistry.
  Level 1: Immediate execution. Moment-to-moment actions and habits.
  Level 2: Goals and plans. Projects and commitments on the scale of weeks to months.
  Level 3: Life direction. The Configurator's long-term vision and identity.

The critical insight: lower levels have veto power over higher levels.

If Level 0 is broken (sleep-deprived, dopamine-fried from scrolling), Level 1 commands fail. If Level 1 fails consistently, Level 2 goals never execute. Level 3 becomes fantasy.

This is why "discipline" advice misses the point. It tries to strengthen Level 2→1 commands while Level 0 is sabotaging everything. It's like trying to run sophisticated software on a machine with a failing power supply.
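The veto dynamic can be sketched in a few lines of Python (the level numbering follows the hierarchy above; the function and status dict are illustrative, not part of LeCun's formalism):

```python
# Lower levels veto higher levels: a command at any level executes only
# if every level beneath it is functional. Illustrative numbering:
# 0 = substrate, 1 = immediate execution, 2 = goals/plans, 3 = life direction.

def can_execute(target_level, functional):
    """True only if all levels below target_level are working."""
    return all(functional.get(level, False) for level in range(target_level))

status = {0: False, 1: True, 2: True}  # substrate broken (bad sleep)
print(can_execute(2, status))          # -> False: the Level 2 goal is vetoed

status[0] = True                       # repair the substrate first
print(can_execute(2, status))          # -> True: now the goal can run
```

Note the asymmetry: fixing Level 2 does nothing while Level 0 is broken, but fixing Level 0 unblocks everything above it.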


The Planning Loop: Why Imagination Matters

Here's where LeCun's framework reveals something profound about human capability:

[Figure: The planning loop: mostly internal simulation before action]

The planning loop is primarily internal simulation. You observe the world once (perception), then run hundreds or thousands of imagined scenarios (Actor proposes → World Model predicts → Critic evaluates → repeat), and finally execute one action.

This is why you can catch a ball after watching someone throw it a few times, while a robot needs thousands of physical attempts. You simulate; it must act.

The ratio of imagination to action is the source of human sample efficiency.

But this also reveals a failure mode: if your World Model is wrong, you're optimizing for fantasies. If your Critic is miscalibrated, good options look bad and bad options look good. If your Actor is weak, you simulate endlessly but never commit.


Failure Modes: A Taxonomy

Not all stuck looks the same. Different component failures produce different patterns. I've identified six primary failure modes:

F1: Substrate Failure

Pattern: Body can't execute. Good plans die at implementation.

Signs: Can't wake up on time, energy crashes, needs stimulants to function, knows what to do but "can't make myself."

Mechanism: Level 0 vetoes all higher-level commands. The hardware can't run the software.

[Figure: F1: Substrate failure—hardware vetoes software]

F2: Motivation Failure

Pattern: Drives hijacked by superstimuli. Real goals can't compete with easy dopamine.

Signs: Hours disappear to social media, games, porn. Real activities feel gray and unrewarding. Needs constant stimulation.

Mechanism: Intrinsic motivation captured by artificial reward sources. The Critic's reward signal is corrupted.

[Figure: F2: Motivation hijacked—superstimuli corrupt the reward signal]

F3: Configurator Failure

Pattern: Too many goals, none primary. Or stated goals conflict with actual preferences.

Signs: Starts many projects, finishes none. "I want to do everything." Constantly switching directions. Can't say no.

Mechanism: Configurator overloaded or conflicted. No clear signal to the Critic about what matters.

[Figure: F3: Configurator overload—no clear priority signal]

F4: World Model Failure

Pattern: Optimizing for wrong map. Confident predictions that don't match reality.

Signs: Repeatedly surprised by outcomes. "I didn't think it would be this hard." Dunning-Kruger confidence. Plans that ignore obvious obstacles.

Mechanism: World Model hasn't learned accurate dynamics. Predictions are wrong, so Actor optimizes for fantasies.

[Figure: F4: World Model failure—confident but wrong]

F5: Actor Failure

Pattern: Analysis paralysis. Excellent understanding, zero output.

Signs: Endless research, perpetual planning. "Not ready yet." Perfectionism blocks shipping. Can articulate every risk but can't commit.

Mechanism: Actor can't converge on action. World Model may be too detailed—sees so many failure modes that nothing looks safe.

[Figure: F5: Actor failure—analysis paralysis]

F6: Critic Failure

Pattern: Miscalibrated feedback. Either crushing self-judgment or delusional self-assessment.

Signs (harsh): Shame spirals after small failures. Nothing is ever good enough. Avoids trying in order to avoid failing.

Signs (lenient): Oblivious to actual performance. Thinks everything is fine when it isn't. No course correction.

Mechanism: Critic not tracking actual outcomes. Feedback signal is noise.

[Figure: F6: Critic failure—miscalibrated feedback loop]

The Balanced System

For comparison, here's what a healthy system looks like—all components at roughly 80%, none maxed, none broken:

[Figure: The balanced system: sustainable function across all components]

What does "balanced" actually mean?

It doesn't mean perfect. It means functional—each component doing its job well enough that the others can operate. Think of it like a team where everyone's competent: no single superstar, but no weak links either.

World Model at 80%: You have a reasonably accurate map of reality. You understand cause and effect in your domain. You know what's hard, what's easy, and roughly how long things take. You're not deluded about your skills or the market, but you also don't need perfect information to act. You update when you're wrong.

Example: A developer who knows their codebase well, understands roughly how long features take (and adds buffer), knows which stakeholders matter, and adjusts estimates when projects turn out differently than expected.

Actor at 80%: You can commit to action without perfect certainty. You ship imperfect work. You make decisions in reasonable timeframes. You're not paralyzed by options, but you're not impulsive either. "Good enough" is a phrase you can actually use.

Example: An entrepreneur who launches an MVP that's embarrassing but functional, learns from customer feedback, and iterates—rather than polishing forever or jumping at every new idea.

Critic at 80%: Your self-assessment roughly matches reality. You know when you've done good work and when you've cut corners. You don't spiral into shame after small failures, but you also don't dismiss legitimate feedback. You can celebrate wins without either dismissing them or inflating them.

Example: A writer who knows their first draft is rough (accurate), believes their tenth draft is publishable (also accurate), and doesn't need external validation for either judgment.

Configurator at 80%: You have clear priorities. When conflicts arise, you know what matters more. You can say no to good opportunities because you're committed to better ones. Your goals are specific enough to act on and flexible enough to adapt.

Example: Someone who's decided this year is about career advancement, which means saying no to the side project, the language learning, and the social obligations that don't serve that goal—without guilt.

Substrate at 80%: You sleep enough most nights. You have energy for important tasks. You're not running on caffeine and willpower alone. When you're depleted, you notice and recover rather than pushing through indefinitely.

Example: Someone who protects their sleep even when busy, notices when they're burning out before they crash, and treats rest as infrastructure rather than laziness.

The key insight: You don't need any component at 100%. In fact, maxing out one component often comes at the cost of others. The perfectionist with a 100% Critic and 40% Actor never ships. The hustler with 100% Actor and 40% Substrate burns out.

The goal is sustainable function across all components—not excellence in one and collapse in others.

This is what you're actually optimizing for: a system that can run indefinitely, producing consistent output, recovering from setbacks, and adapting to change. Not a sprint. A lifestyle.


The Diagnostic Framework

When someone is stuck, check in this order. Earlier failures block later fixes.

[Figure: The diagnostic framework: check in order, earlier failures block later fixes]

Q1: Is Substrate functioning? (Sleep 7+hrs? Not chemically depleted? Basic health OK?)

Q2: Is Motivation intact? (Can start effortful tasks? Dopamine not hijacked? Finds real things interesting?)

Q3: Are Goals clear and singular? (One priority? Specific enough? Actually wanted?)

Q4: Is World Model accurate? (Knows how to succeed? Realistic about difficulty? Not deluded?)

Q5: Can Actor execute? (Can commit to action? Not paralyzed by options? Ships imperfect work?)

Q6: Is Critic calibrated? (Fair self-assessment? Not too harsh/lenient? Tracks real outcomes?)
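The check-in-order logic is just a first-failure scan. A minimal sketch in Python (the questions are condensed from Q1-Q6 above; the answers dict is a hypothetical self-assessment, not a validated instrument):

```python
# Diagnostic as a first-failure scan: walk the checks in order and stop
# at the first broken component, since earlier failures block later fixes.

CHECKS = [
    ("F1", "substrate",    "Sleeping 7+ hours, not chemically depleted?"),
    ("F2", "motivation",   "Can start effortful tasks, dopamine not hijacked?"),
    ("F3", "configurator", "One clear priority, actually wanted?"),
    ("F4", "world_model",  "Realistic about difficulty, predictions match reality?"),
    ("F5", "actor",        "Can commit and ship imperfect work?"),
    ("F6", "critic",       "Self-assessment fair, tracking real outcomes?"),
]

def diagnose(answers):
    """Return the first failing mode and its question, or None if all pass."""
    for code, component, question in CHECKS:
        if not answers.get(component, False):
            return code, question
    return None

# Example: sleep is fine, but motivation is hijacked.
answers = {"substrate": True, "motivation": False, "configurator": True}
print(diagnose(answers))  # -> ('F2', 'Can start effortful tasks, dopamine not hijacked?')
```

The ordering encodes the article's claim: a missing answer defaults to "failing", and you never reach Q5 if Q1 is broken.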

Common Patterns

| Archetype | Typical Failures |
| --- | --- |
| Smart but stuck | F1 + F2 |
| Anxious achiever | F5 + F6 |
| Naive starter | F4 |
| Scattered dreamer | F3 |
| Burned out | F1 → cascades to all |

The Intervention Toolkit: BEDSM Framework

Different techniques target different components. Using the wrong technique on the wrong failure makes things worse.

[Figure: BEDSM: matching interventions to failure modes]

B - Burn the Ships

Target: Configurator (commitment)

What it is: External forcing functions that remove the option to fail. Auto-donate to a charity you hate if you don't log into VS Code by 9am. Public commitments. Prepaid classes. Accountability partners with real stakes.

When to use: Kickstarting change when internal motivation is insufficient. Breaking stable attractors.

When NOT to use: If Substrate (F1) is broken. High stakes + depleted execution = catastrophic failure + learned helplessness.

Wean off: Once habits form, reduce external pressure. This is a kickstarter, not a lifestyle.

E - Environment Design

Target: Actor (friction manipulation)

What it is: Change the context to make good behaviors easier and bad behaviors harder.

When to use: Always. This is permanent infrastructure.

Why it works: Reduces willpower required for good choices. Willpower is finite; environment is constant.

D - Distraction Log

Target: Critic (pattern recognition)

What it is: Keep paper next to your workspace. Each time you get distracted, write down: What was the distraction? What triggered it? What were you avoiding? Review weekly for patterns.

When to use: During reset phase (F2). Understanding triggers before designing environment against them.

Key: Non-judgmental observation. You're debugging, not shaming.

S - Scheduling

Target: Critic (expectations calibration)

What it is: Set realistic schedules. Time-block deep work. Include buffer time. Match tasks to energy levels.

When to use: ONLY after Critic is calibrated. This is not a first intervention.

Why it fails as a first intervention: a miscalibrated Critic schedules for the person you wish you were. You plan an impossible day, miss the targets, and each miss feeds the shame spiral: more harsh self-judgment, more avoidance.

Fix the Critic first. Then scheduling works.

M - Minimum Viable Goal

Target: Actor (execution threshold)

What it is: The smallest possible action you can take, even mid-procrastination.

When to use: Always applicable. Especially when stuck in procrastination loop.

Why it works: Bypasses the Actor's commitment threshold. Tiny action > perfect intention.


Technique Sequencing: The Order Matters

Applying techniques in the wrong order builds on broken foundations. Here's the sequence:

[Figure: Technique sequencing: build on stable foundations]

Phase 0: Kickstart (Days 1-7)

Goal: Break the stable attractor

Techniques: 🔥 Burn the Ships (if attractor is strong and substrate isn't totally broken)

What happens: External force overrides broken internal systems. You're not relying on willpower—you're relying on loss aversion.

Phase 1: Stabilize (Weeks 1-4)

Goal: Fix the hardware

Techniques: Substrate repair + 🌍 Environment Design

What happens: Fixed wake time, sleep hygiene, basic nutrition. Remove worst environmental triggers. Create conditions where execution becomes possible.

Phase 2: Reset (Weeks 3-6)

Goal: Reclaim motivation

Techniques: 📝 Distraction Log + Dopamine detox

What happens: Time-box superstimuli. Increase friction for easy dopamine. Allow boredom. Observe patterns. Effortful activities start feeling rewarding again by contrast.

Phase 3: Build (Weeks 5-12)

Goal: Install new patterns

Techniques: 🎯 Minimum Viable Goal + Habit stacking + 📅 Scheduling (now safe)

What happens: Tiny daily actions. Piggyback on existing routines. Track streaks. With calibrated Critic, scheduling becomes useful instead of punishing.

Phase 4: Integrate (Months 3+)

Goal: Become self-sustaining

Techniques: Identity-based habits + Temptation bundling + Wean off external force

What happens: "I am someone who..." replaces "I want to..." Each action becomes a vote for identity. Behavior becomes self-expression. Reduce commitment devices as internal motivation becomes sufficient.

[Figure: Fix substrate first: everything else depends on it]

Key insight: 🔥 Burn the Ships is a kickstarter, not a lifestyle. External force → Internal motivation → Identity. Keep 🌍 Environment Design permanently—it's passive infrastructure that doesn't require willpower to maintain.


Helping Others: Theory of Mind

When you're trying to help someone stuck, you're running a model of their system inside your own World Model.

[Figure: Theory of mind: modeling someone else's cognitive architecture]

This is what psychologists call "theory of mind"—but the LeCun framework makes it concrete. To predict what someone will do, you need to model their World Model (what they believe about how things work), their cost function (what they actually optimize for, not what they claim to), their Actor patterns (how they commit to action), and their substrate state (what they can actually execute right now).

Why "Just Try Harder" Fails

When you tell someone to "just try harder," you're sending a Level 2 command to a system with a broken Level 0. It's like telling someone with a broken leg to "just walk faster."

Worse, if their Critic is already harsh (F6), your criticism adds to the shame spiral → more avoidance → more failure. You've made things worse.

What Actually Helps

  1. Diagnose first: Which failure mode are they in? Don't assume.
  2. Match intervention to failure: F1 needs substrate repair, not goal-setting. F5 needs execution practice, not more planning.
  3. Offer the framework, not the mandate: "Here's how I see the system. What do you think?" People can only change themselves.
  4. The reframe: "You're not lazy or broken. Your system is stuck in a local minimum. We're not fixing you—we're debugging the system."

Applied Example: Modeling a Decision-Maker

Let's apply this to a common scenario: predicting the behavior of a senior leader who keeps changing direction on projects.

[Figure: Modeling a decision-maker: understanding their actual cost function]

You've seen this pattern. The VP or director who announces a new strategic priority every quarter. The manager who pivots the project scope after you've already built half of it. The founder who can't stick with one approach long enough to see if it works.

From the outside, it looks like chaos. From the inside of their head, it makes perfect sense—once you model their system.

His World Model (shaped by experience): markets move fast, hesitation kills projects, and the pivots that worked are the ones he remembers most vividly.

His Actual Cost Function (often different from stated): appearing decisive and avoiding blame for inaction weigh far more heavily than being right in retrospect.

His Actor Pattern: commit fast, announce publicly, and re-plan when new information arrives rather than waiting for certainty.

Prediction: He will change direction whenever new information seems to threaten his current thesis. He will favor people who execute quickly over those who raise concerns. He will interpret pushback as resistance rather than insight. He will remember the pivots that worked and forget the ones that didn't.

Your Strategic Response (if you need to work effectively with him): frame concerns as ways to move faster rather than reasons to wait, show visible progress early, and time pushback for moments when new information has already shaken his current thesis.

This isn't about manipulation—it's about accurate modeling. Once you understand someone's actual cost function, you can predict their behavior and position yourself accordingly. You're not changing them; you're navigating reality as it is.


Applied Example: The Stuck Friend

Let's diagnose a specific case: an intelligent person with a computer engineering degree whose life isn't going anywhere.

Symptoms: sleeps until noon, lives at her parents' place, talks about the business she'll "eventually" start. A trail of half-started hobbies: guitar, piano, gym membership, language app. Evenings disappear into romance manga and Twitch streams.

[Figure: Diagnosing the stuck friend: multiple interacting failures]

Diagnosis: This is F1 + F2 + F3 + miscalibrated F6, all in mutual reinforcement.

  1. F1 (Substrate): Sleep architecture destroyed. Circadian system chaotic. Executive function impaired.
  2. F2 (Motivation): Drives hijacked by superstimuli. Romance manga satisfies relationship fantasy. Twitch streaming satisfies social connection. Real goals can't compete.
  3. F3 (Configurator): Too many goals, none primary. Any goal is deferrable because another seems equally valid.
  4. F6 (Critic): Temporal discounting severely skewed. Present comfort weighted 10x, future reward weighted 0.1x. Piano practice = "hard now, maybe good later" = net negative. Manga = "good now" = net positive. Rational to choose manga.
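The skewed discounting in point 4 can be made explicit. A small sketch using the weights above (present 10x, future 0.1x; the payoff numbers themselves are illustrative):

```python
# F6 temporal discounting with the article's illustrative weights:
# present experience weighted 10x, future payoff weighted 0.1x.

def perceived_value(now, later, w_now=10.0, w_later=0.1):
    """Net value as a miscalibrated Critic computes it."""
    return w_now * now + w_later * later

# Piano practice: mildly unpleasant now (-2), large payoff later (+50).
piano = perceived_value(now=-2, later=50)   # 10*(-2) + 0.1*50 = -15.0
# Manga: pleasant now (+3), nothing later (0).
manga = perceived_value(now=3, later=0)     # 10*3 + 0.1*0 = 30.0

print(piano, manga)  # -> -15.0 30.0: manga "rationally" wins

# With a calibrated Critic (equal weights), the ordering flips:
print(perceived_value(-2, 50, 1, 1), perceived_value(3, 0, 1, 1))  # -> 48 3
```

Given those weights, choosing manga is the correct output of a broken cost function, which is exactly why "just try harder" cannot fix it.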

The vicious cycle: Each failure reinforces the others. Broken sleep → can't execute → guilt → escape to manga → stay up late → broken sleep. This is a stable attractor. Small interventions get pulled back.

[Figure: Breaking the vicious cycle: substrate first]

Intervention Strategy:

What won't work: "Just try harder." "Make a schedule." "Think about your future." "You're so smart, why can't you..." New goals or hobbies. Shame or guilt.

The leverage point: In a tightly coupled failing system, you need one intervention that breaks multiple loops. That intervention is substrate. Fixed sleep unlocks everything else. Broken sleep blocks everything else.

Phase 1 (Weeks 1-4): ONLY fix sleep. Nothing else matters.

Phase 2 (Weeks 5-8): With substrate stable, address hijacked motivation.

Phase 3 (Weeks 9-12): Reduce Configurator overload.

Reality check: You can't force this. She has to want to change. Your role: offer the framework. Her role: execute (or not). If she's not ready, that's information too.


Implementation Notes: AI Agents and Beyond

If you're building AI systems, this framework matters directly.

LeCun's architecture is a blueprint for autonomous agents that can actually function in complex environments. The components map to agent design: the World Model becomes a learned environment simulator, the Actor a policy or planner, the Critic a value function or outcome evaluator, and the Configurator the task specification and goal-decomposition layer.

The failure modes translate too. An agent with a strong World Model but weak Actor will simulate endlessly without committing (analysis paralysis). An agent with a strong Actor but weak World Model will act confidently on wrong predictions (confident but wrong).

And here's the alignment connection: an agent with intrinsic motivation and self-modifying goals will drift toward whatever its feedback loops reinforce. Without external alignment pressure, it optimizes for its own cost function, not yours. If systems bootstrap their own curricula via intrinsic drives, they'll end up wildly different from one another.

Humans have this problem too. We call it "life".

The Human Parallel

Intrinsic motivation + positive feedback loops + hierarchical lock-in = path dependence

Early curiosity about X
→ small competence at X
→ intrinsic reward (mastery feels good)
→ more time on X
→ identity forms around X ("I'm an X person")
→ social network forms around X
→ opportunity cost of switching rises
→ X becomes life trajectory

Where X = backpacking, welding, corporate ladder, restoring cars, building startups, raising family, etc.
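A toy simulation shows how a small initial difference locks in. The model below is purely illustrative: time is split between two pursuits in proportion to squared competence (preference amplifies with mastery), and competence grows with time invested.

```python
# Path dependence from competing feedback loops. A 10% head start in one
# pursuit gradually captures nearly all available time. All numbers are
# illustrative, not empirical.

def life(cx=1.1, cy=1.0, years=30):
    """Simulate two pursuits X and Y; return the final share of time on X."""
    for _ in range(years):
        sx = cx**2 / (cx**2 + cy**2)  # time share: preference amplifies with mastery
        cx += sx                      # time invested -> more competence at X
        cy += 1 - sx                  # remaining time -> competence at Y
    return cx**2 / (cx**2 + cy**2)

print(round(life(1.0, 1.0), 2))  # -> 0.5: identical starts stay balanced
print(round(life(1.1, 1.0), 2))  # a 10% initial edge ends up dominating
```

With identical starting points the split stays 50/50 forever; with a 10% edge the share of time on X rises year after year and never comes back. That asymmetry is the trap.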

Why Humans Diverge So Radically

  1. Initial conditions vary (genetics, childhood environment, early experiences)
  2. Feedback loops amplify small differences (10% more curiosity about coding at age 12 → completely different life)
  3. Identity crystallizes (your self-model becomes load-bearing; changing it feels like death)
  4. Social embedding (your relationships assume your trajectory continues)

This is why two equally intelligent, equally capable people can end up in radically different life configurations—and both feel like their path is "obvious."

The Trap Conditions

Divergence becomes a trap when the trajectory stops serving you but the exit costs keep rising: identity has crystallized around X, your social network assumes the trajectory continues, and every year of sunk investment makes switching feel like starting over from zero.

Building reliable AI systems in regulated environments requires understanding not just the technical architecture, but the organizational systems the AI must operate within. AI systems need to be auditable, explainable, and aligned with organizational constraints, not just optimized for a loss function. The same diagnostic framework applies: where is the system broken? What's the actual cost function? What interventions match what failures?


Conclusion: Systems, Not Character

The central reframe is this: what looks like character is usually architecture.

The person who "can't" follow through isn't weak-willed—their Level 0 is vetoing Level 2 commands. The perfectionist who never ships isn't arrogant—their World Model is too detailed for their Actor to converge. The friend lost in manga isn't lazy—their Critic's temporal discounting makes real goals compute as net negative.

This isn't an excuse. It's a diagnosis. And diagnoses enable targeted intervention instead of generalized shame.

If you're stuck:

  1. Use the diagnostic framework. Find your actual failure mode.
  2. Match intervention to failure. Don't fix F5 when F1 is broken.
  3. Sequence correctly. Substrate → Motivation → Goals → World Model → Actor → Critic.
  4. Be patient with yourself. You're debugging a complex system, not fixing a character flaw.

If you're helping someone stuck:

  1. Don't assume their failure mode. Diagnose first.
  2. Offer framework, not mandate. People change themselves.
  3. Never shame. It feeds F6 dysfunction and drives avoidance.
  4. Understand: if they're not ready, that's information, not failure.

If you're building AI systems:

  1. The same architecture applies. World Models, Actors, Critics, Configurators.
  2. The same failure modes appear. Diagnose before optimizing.
  3. Alignment is the Configurator problem at scale.

The goal isn't perfection. It's a functional system: all components at 80%, none broken, sustainable over time. That's enough to move forward.

That's enough to get unstuck.


Want to connect?

If you're working on AI implementation in regulated environments—where reliability, auditability, and organisational alignment matter—I'd be interested to talk. I bridge the gap between AI capability and organisational reality, particularly in contexts where "move fast and break things" isn't an option. Reach out.


Appendix: System Components Quick Reference

| Component | What It Is | What It Does |
| --- | --- | --- |
| World Model | Internal simulation engine | Predicts "if I do X, then Y will happen" |
| Actor | Action proposer | Generates candidate actions, optimizes to minimize cost |
| Critic | Outcome evaluator | Assigns cost/reward to predicted states, provides gradient signal |
| Configurator | Goal setter / Executive | Sets what the Critic should care about, decomposes goals, allocates attention |
| Intrinsic Motivation | Hardwired drives | Built-in costs: curiosity, pain avoidance, hunger, social connection |
| Substrate | Physical hardware | Body/brain that executes everything; can veto all higher levels if depleted |

Appendix: Failure Mode Quick Reference

| Code | Failure | Component | Fix Direction |
| --- | --- | --- | --- |
| F1 | Substrate | Body/hardware | Sleep, nutrition, reduce stimulants |
| F2 | Motivation | Intrinsic drives | Reduce superstimuli, dopamine detox |
| F3 | Configurator | Goal-setting | Ruthless prioritization, pick ONE |
| F4 | World Model | Reality map | Talk to those who succeeded, test assumptions |
| F5 | Actor | Execution | Smallest viable action, time-box decisions |
| F6 | Critic | Feedback calibration | External calibration, track predictions vs outcomes |

Appendix: BEDSM Quick Reference

| Letter | Technique | Target | Keep/Wean |
| --- | --- | --- | --- |
| B | Burn the Ships | Configurator | Wean off (kickstarter only) |
| E | Environment Design | Actor | Keep (permanent infrastructure) |
| D | Distraction Log | Critic | Use during reset phase |
| S | Scheduling | Critic | Use only after Critic calibrated |
| M | Minimum Viable Goal | Actor | Always applicable |