Expanded video summary for quick reference
The talk urges new researchers to move past limitations of next‑token prediction. It first critiques the System 1 nature of current LLMs, then proposes two bridges: UCCT for semantic anchoring that shapes a task‑conditioned posterior, and behavior‑aware multi‑agent debate that regulates contention to convert information exchange into progress.
Training sets a prior over patterns. Without additional control, the model may default to popular answers even when they diverge from accurate ones.
UCCT proposes adding System 2 control over System 1 by using semantic anchoring to shift the model from a fixed prior to a task‑conditioned posterior.
Given anchors 2−3=5 and 10−4=14, the query 15−8 is interpreted as 15+8, producing 23. The model adapts to the anchor pattern rather than obeying the subtraction rule.
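The anchor‑override effect can be illustrated with a toy pattern matcher (a hypothetical sketch, not the talk's implementation): it scores candidate operations against the anchor examples and applies the best‑fitting one to the query, so the anchors' addition pattern wins over the literal minus sign.

```python
# Toy illustration: like the anchored model, infer the operation implied
# by the anchor examples instead of applying the literal subtraction rule.
CANDIDATES = {
    "subtract": lambda a, b: a - b,
    "add": lambda a, b: a + b,
}

def infer_anchored_op(anchors):
    """Return the candidate operation most consistent with the anchors."""
    def fit(op):
        return sum(1 for (a, b, y) in anchors if op(a, b) == y)
    return max(CANDIDATES, key=lambda name: fit(CANDIDATES[name]))

def answer(anchors, a, b):
    return CANDIDATES[infer_anchored_op(anchors)](a, b)

# Anchors written as "a − b = y" but following the addition pattern.
anchors = [(2, 3, 5), (10, 4, 14)]
print(answer(anchors, 15, 8))  # → 23: the anchor pattern overrides the minus sign
```

The point of the sketch is that nothing in the query itself changed; only the anchors did, and they reshape which stored pattern gets applied.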
To turn debate into discovery, regulate behavior in addition to exchanging information.
Thesis. Current LLMs behave like System 1 pattern matchers. Real progress toward AGI requires two bridges: (1) UCCT uses semantic anchoring to construct a task‑conditioned posterior and enables on‑the‑fly classifiers from the unconscious pattern store; (2) Multi‑agent debate must be behavior‑aware, with contentiousness modulated over time to evolve from breadth to depth.
Motivation. Band‑aid improvements such as CoT and RLHF do not supply grounded semantics or stable reasoning. A mechanism is needed to control distributional behavior at inference and to orchestrate agent interactions.
Bridge 1 details. Anchors select and reweight latent patterns. Sufficient anchor strength and pattern density cause a phase transition that stabilizes the posterior. The arithmetic example illustrates inferential shift under anchoring rather than rule‑based deduction.
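One way to picture the prior‑to‑posterior shift is as anchor‑compatibility reweighting. The following is a hypothetical numerical sketch (the functional form, the `strength` and `compat` parameters, and the numbers are all assumptions, not the talk's formula): each latent pattern's prior is multiplied by an exponential bonus for anchor compatibility, and as anchor strength grows the posterior snaps from the popular pattern to the anchored one, mimicking the phase‑transition‑like stabilization.

```python
import math

def anchored_posterior(prior, compat, strength):
    """Reweight a prior over latent patterns by anchor compatibility.
    Hypothetical form: posterior_k proportional to prior_k * exp(strength * compat_k)."""
    weights = [p * math.exp(strength * c) for p, c in zip(prior, compat)]
    z = sum(weights)
    return [w / z for w in weights]

# Pattern 0 is popular (high prior); pattern 1 matches the anchors (high compat).
prior = [0.9, 0.1]
compat = [0.1, 0.9]
for s in [0.0, 1.0, 3.0, 6.0]:
    post = anchored_posterior(prior, compat, s)
    print(f"strength={s}: P(anchored pattern) = {post[1]:.2f}")
```

With weak anchors the popular pattern dominates; past a threshold in anchor strength the mass flips decisively to the anchored pattern, which is the qualitative behavior the phase‑transition claim describes.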
Bridge 2 details. Debate without behavior control becomes either unproductive conflict or shallow agreement. A scheduled modulation of contentiousness turns exploration into exploitation and supports convergence on stronger arguments and plans.
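A scheduled modulation of contentiousness might be sketched as a simple decay over debate rounds (a hypothetical schedule; the talk specifies only that contention is modulated over time, not this exact form): high contention early for broad challenge, low contention late to support convergence.

```python
def contentiousness(round_idx, total_rounds, start=0.9, end=0.1):
    """Decay contentiousness geometrically from exploration toward consolidation.
    Hypothetical schedule and parameter values, for illustration only."""
    if total_rounds <= 1:
        return end
    t = round_idx / (total_rounds - 1)  # progress through the debate, 0..1
    return start * (end / start) ** t

# High early (breadth: agents challenge aggressively),
# low late (depth: agents consolidate the strongest arguments).
schedule = [round(contentiousness(r, 5), 2) for r in range(5)]
print(schedule)
```

The monotone decrease encodes the breadth‑to‑depth transition: early rounds favor disagreement to surface alternatives, later rounds lower contention so the group can exploit and converge.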