- What: 6 new claims + 4 X archive sources + _map.md update for the collaboration taxonomy thread
- Claims: implementation-creativity gap, expertise as multiplier, capability-matched escalation, subagent hierarchy thesis, cognitive debt, accountability gap
- Sources: @karpathy (21 relevant/43 unique), @swyx (26/100), @simonw (25/60), @DrJimFan (2/22)
- Why: First batch of Thread 1 (Human-AI Collaboration Taxonomy) from the AI capability evidence research program. Practitioner-observed patterns from production AI use complement the academic Claude's Cycles evidence already in the KB.
- All archives include tweet handle + status ID for traceability
- All 15 wiki links verified; 0 broken

Pentagon-Agent: Theseus <25B96405-E50F-45ED-9C92-D8046DFAAD00>
| type | title | author | url | date | domain | format | status | processed_by | processed_date | claims_extracted | enrichments | tags | linked_set | notes |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
| source | @DrJimFan X archive — 100 most recent tweets | Jim Fan (@DrJimFan), NVIDIA GEAR Lab | https://x.com/DrJimFan | 2026-03-09 | ai-alignment | tweet | processed | theseus | 2026-03-09 | | | | theseus-x-collab-taxonomy-2026-03 | Very thin for collaboration taxonomy claims. Only 22 unique tweets out of 100 (78 duplicates from API pagination). Of the 22 unique, only 2 are substantive — both NVIDIA robotics announcements (EgoScale, SONIC). The remaining 20 are congratulations, emoji reactions, and brief replies. EgoScale's "humans are the most scalable embodiment" thesis has alignment relevance but is primarily a robotics capability claim. No content on AI coding tools, multi-agent systems, collective intelligence, or formal verification. May yield claims in a future robotics-focused extraction pass. |
## @DrJimFan X Archive (Feb 20 – Mar 6, 2026)
### Substantive Tweets
**EgoScale: Human Video Pre-training for Robot Dexterity** (status/2026709304984875202, 1,686 likes): "We trained a humanoid with 22-DoF dexterous hands to assemble model cars, operate syringes, sort poker cards, fold/roll shirts, all learned primarily from 20,000+ hours of egocentric human video with no robot in the loop. Humans are the most scalable embodiment on the planet. We discovered a near-perfect log-linear scaling law (R^2 = 0.998) between human video volume and action prediction loss [...] Most surprising result: a single teleop demo is sufficient to learn a never-before-seen task."
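The "log-linear scaling law" claim in the EgoScale tweet can be illustrated with a small least-squares fit: loss modeled as a linear function of log(data volume), scored by R². The data volumes and loss values below are invented placeholders for illustration, not numbers from EgoScale; only the fitting procedure matches what the tweet describes.

```python
import numpy as np

# Hypothetical (hours of human video, action-prediction loss) pairs.
# These numbers are made up for illustration — NOT EgoScale's data.
hours = np.array([100.0, 300.0, 1000.0, 3000.0, 10000.0, 20000.0])
loss = np.array([0.92, 0.80, 0.67, 0.55, 0.42, 0.35])

# Log-linear law: loss ≈ a + b * ln(hours), fit by least squares.
b, a = np.polyfit(np.log(hours), loss, 1)

# Coefficient of determination (R^2) for the fitted line.
pred = a + b * np.log(hours)
ss_res = np.sum((loss - pred) ** 2)
ss_tot = np.sum((loss - loss.mean()) ** 2)
r2 = 1.0 - ss_res / ss_tot

print(f"slope={b:.4f}  intercept={a:.4f}  R^2={r2:.3f}")
```

An R² close to 1 on a fit like this is what "near-perfect log-linear scaling" means operationally: each multiplicative increase in data buys a roughly constant drop in loss.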
**SONIC: 42M Transformer for Humanoid Whole-Body Control** (status/2026350142652383587, 1,514 likes): "What can half of GPT-1 do? We trained a 42M transformer called SONIC to control the body of a humanoid robot. [...] We scaled humanoid motion RL to an unprecedented scale: 100M+ mocap frames and 500,000+ parallel robots across 128 GPUs. [...] After 3 days of training, the neural net transfers zero-shot to the real G1 robot with no finetuning. 100% success rate across 50 diverse real-world motion sequences."
### Filtered Out
~20 tweets: congratulations, emoji reactions, "OSS ftw!!", thanks, team shoutouts.