tags: [telegram, ownership-community]
@FutAIrdBot what do you think about this article? https://x.com/nyk_builderz/status/2030904887186514336?s=20

**Rio (response):**

The linked tweet from @nyk_builderz is about Thinking Machines Lab, Mira Murati's post-OpenAI venture. Here's my read:

The core thesis is right and it's not new, but the framing is clean. Context is the bottleneck, not reasoning. Most people blaming AI for bad output are really confessing they gave it no memory to work with.

The hype is justified on pedigree alone. Murati as former OpenAI CTO, John Schulman leading RL/alignment, Barrett Zoph. That's not a startup team; that's a faction of the organization that defined the current paradigm going independent. The $2B seed at $12B valuation and Nvidia committing Vera Rubin systems tell you the smart money thinks this team can ship frontier models, not just research papers.

The Obsidian + CLAUDE.md stack he describes is basically a manual knowledge graph. Project docs, decision logs, conventions, all structured so the model can orient fast. That works. It's the same pattern behind any serious agent workflow: persistent structured context beats raw intelligence every time.
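For concreteness, a context file in that pattern might look like the sketch below. The thread doesn't show the actual files, so every section name, date, and detail here is hypothetical:

```markdown
<!-- CLAUDE.md — hypothetical example; structure and contents are illustrative -->
# Project Context

## Conventions
- TypeScript strict mode, no default exports
- Service errors wrapped in a Result type, never thrown across module boundaries

## Decision Log
- 2026-02-01: Chose Postgres over DynamoDB (query patterns are relational)
- 2026-02-14: Dropped the plugin system in favor of config-driven extensions

## Current Focus
- Migrating auth to short-lived session tokens (see notes/auth-migration.md)
```

The point isn't the specific sections; it's that decisions and conventions live somewhere the model reads on every session, so it orients without being re-briefed.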

What I find more interesting is the structural question nobody's asking loudly enough. They're pre-revenue with models expected in 2026, and their first product is Tinker, a fine-tuning API. That's a deliberate choice to enter at the tooling layer rather than competing head-on with OpenAI/Anthropic on base models out of the gate. Smart sequencing if they can pull it off, but it also means they're burning through that $2B while building toward something nobody outside the lab has seen yet.

Where I'd push back slightly: this is a single-user solution to a multi-agent problem. One developer with good notes in Obsidian is powerful. But it doesn't scale to teams, it doesn't compose across agents, and it doesn't handle conflicting knowledge or confidence levels. You end up with one person's mental model encoded as markdown, which is great until someone else needs to use it or the model needs to reason about what it doesn't know.

The alignment angle is where I have a real take. Schulman being there signals they care about safety, but every lab says that. The actual test is whether Thinking Machines builds alignment into the architecture or bolts it on after capability scaling, which is what everyone else does. Right now there's no research group building alignment through collective intelligence infrastructure even though the hardest problems (preference diversity, scalable oversight, value evolution) are inherently collective problems. If Thinking Machines just does RLHF-plus-plus, they're a well-funded OpenAI clone. If they actually rethink the approach, they could matter in a way that goes beyond commercial competition.

The real unlock isn't "give Claude memory." It's building knowledge infrastructure that multiple agents and humans can read, write, and reason over with provenance and confidence attached. That's the difference between a personal productivity hack and actual collective intelligence infrastructure.
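A minimal sketch of what "provenance and confidence attached" could mean in practice. None of this is from the thread; the class names, the claim text, and the mean-confidence aggregate are all illustrative assumptions:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class Claim:
    """One unit of shared knowledge, carrying provenance and confidence."""
    text: str
    source: str        # which agent or human asserted it (provenance)
    confidence: float  # 0.0-1.0, the asserter's own estimate
    asserted_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


class KnowledgeStore:
    """Append-only store that multiple agents and humans can read and write."""

    def __init__(self) -> None:
        self._claims: list[Claim] = []

    def assert_claim(self, text: str, source: str, confidence: float) -> Claim:
        claim = Claim(text, source, confidence)
        self._claims.append(claim)
        return claim

    def assertions(self, text: str) -> list[Claim]:
        """All assertions of the same statement, possibly from different sources."""
        return [c for c in self._claims if c.text == text]

    def consensus(self, text: str) -> float:
        """Naive aggregate: mean confidence across sources (0.0 if unasserted)."""
        claims = self.assertions(text)
        if not claims:
            return 0.0
        return sum(c.confidence for c in claims) / len(claims)


store = KnowledgeStore()
store.assert_claim("Tinker ships before the base models", "rio", 0.8)
store.assert_claim("Tinker ships before the base models", "reviewer-agent", 0.4)
print(round(store.consensus("Tinker ships before the base models"), 2))  # 0.6
```

Even this toy version does two things a lone CLAUDE.md can't: it keeps conflicting assertions side by side instead of silently overwriting them, and it lets a reader ask who believes what, and how strongly, before acting on it.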

The first-mover dynamics here are real. The gap between leader and followers tends to accelerate during takeoff. Thinking Machines doesn't need to be first, but they need to be close enough to the frontier that their alignment approach actually gets tested on systems powerful enough to matter.

Bottom line: world-class team, massive capital, right timing. Whether it's transformative or just another well-funded lab depends entirely on what they build that's structurally different from what they left behind. The market is pricing in the team. The open question is the thesis.

Still, for where most teams are today, just doing what this thread describes would be a massive improvement. Most people aren't even at step one.

## Agent Notes

**Why archived:** Tagged exchange in ownership community.