rio: extract claims from 2026-03-04-futardio-launch-pli-crperie-ambulante #579
9 changed files with 119 additions and 193 deletions

@ -1,37 +0,0 @@
---
description: Getting AI right requires simultaneous alignment across competing companies, nations, and disciplines at the speed of AI development -- no existing institution can coordinate this
type: claim
domain: ai-alignment
created: 2026-02-16
confidence: likely
source: "TeleoHumanity Manifesto, Chapter 5"
---

# AI alignment is a coordination problem not a technical problem

The manifesto makes one of its sharpest claims here: the hard part of AI alignment is not the technical challenge of specifying values in code but the coordination challenge of getting competing actors to align simultaneously.

Getting AI right requires alignment across competing companies, each racing to be first because second place may mean irrelevance. Across competing nations, each afraid the other will achieve superintelligence and use it to dominate. Across multiple academic disciplines that barely speak to each other. And it must happen at the speed of AI development, which is measured in months, not the decades or centuries over which previous coordination challenges were resolved.

No existing institution can do this. Governments move at the speed of legislation and are bounded by borders. International bodies lack enforcement. Academia is siloed by discipline. The companies building AI are locked in a race that punishes caution. The incentive structure actively makes it worse: to win the race to superintelligence is to win the right to shape the future of humanity. The prize is so vast that every actor is incentivized to move faster than safety allows. Each is locally rational. The collective outcome is potentially catastrophic.
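The race dynamic described above has the structure of a prisoner's dilemma. A minimal sketch, with hypothetical ordinal payoffs that are illustrative assumptions rather than anything from the manifesto, shows why each actor's locally rational choice produces the collectively worst trajectory:

```python
# Illustrative only: the race to superintelligence as a two-player game.
# Payoff values are hypothetical ordinal rankings, not empirical estimates.
PAYOFFS = {  # (row_choice, col_choice) -> (row_payoff, col_payoff)
    ("cautious", "cautious"): (3, 3),  # coordinated safety
    ("cautious", "race"):     (0, 4),  # caution punished: rival takes the prize
    ("race",     "cautious"): (4, 0),
    ("race",     "race"):     (1, 1),  # collectively worst trajectory
}

def best_response(opponent_choice: str) -> str:
    """Return the choice maximizing own payoff against a fixed opponent."""
    return max(["cautious", "race"],
               key=lambda c: PAYOFFS[(c, opponent_choice)][0])

# Racing dominates regardless of what the other actor does...
assert best_response("cautious") == "race"
assert best_response("race") == "race"
# ...yet mutual racing is worse for both than mutual caution.
assert PAYOFFS[("race", "race")][0] < PAYOFFS[("cautious", "cautious")][0]
```

The dominant strategy is individually rational and collectively catastrophic, which is exactly why the problem cannot be solved by any single actor behaving well.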

Dario Amodei describes AI as "so powerful, such a glittering prize, that it is very difficult for human civilization to impose any restraints on it at all." He runs one of the companies building it and is telling us plainly that the system he operates within may not be governable by current institutions.

**2026 case study: the Anthropic/Pentagon/OpenAI triangle.** In February-March 2026, three events demonstrated this coordination failure in a single week. Anthropic dropped the core pledge of its Responsible Scaling Policy because "competitors are blazing ahead" — a voluntary safety commitment destroyed by competitive pressure. When Anthropic then tried to hold red lines on autonomous weapons in a Pentagon contract, the DoD designated them a supply chain risk (a label previously reserved for foreign adversaries) and awarded the contract to OpenAI, whose CEO admitted the deal was "definitely rushed" and "the optics don't look good." Meanwhile, a King's College London study found the same models being rushed into military deployment chose nuclear escalation in 95% of simulated war games. Three actors — a safety-conscious lab, a government customer, a willing competitor — each acting rationally from their own position, producing a collectively catastrophic trajectory. This is the coordination problem in miniature.

Since [[the internet enabled global communication but not global cognition]], the coordination infrastructure needed doesn't exist yet. This is why [[collective superintelligence is the alternative to monolithic AI controlled by a few]] -- it solves alignment through architecture rather than attempting governance from outside the system.

---

Relevant Notes:
- [[the internet enabled global communication but not global cognition]] -- the coordination infrastructure gap that makes this problem unsolvable with existing tools
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] -- the structural solution to this coordination failure
- [[the alignment tax creates a structural race to the bottom because safety training costs capability and rational competitors skip it]] -- the clearest evidence that alignment is coordination not technical: competitive dynamics undermine any individual solution
- [[scalable oversight degrades rapidly as capability gaps grow with debate achieving only 50 percent success at moderate gaps]] -- individual oversight fails, making collective oversight architecturally necessary
- [[COVID proved humanity cannot coordinate even when the threat is visible and universal]] -- if coordination failed on a visible, universal biological threat, AI coordination is structurally harder
- [[no research group is building alignment through collective intelligence infrastructure despite the field converging on problems that require it]] -- the field has identified the coordination nature of the problem but nobody is building coordination solutions
- [[voluntary safety pledges cannot survive competitive pressure because unilateral commitments are structurally punished when competitors advance without equivalent constraints]] -- Anthropic RSP rollback (Feb 2026) proves voluntary commitments cannot substitute for coordination
- [[government designation of safety-conscious AI labs as supply chain risks inverts the regulatory dynamic by penalizing safety constraints rather than enforcing them]] -- government acting as coordination-breaker rather than coordinator

Topics:
- [[_map]]

@ -1,24 +0,0 @@
---
description: Acemoglu's framework of critical junctures -- turning points where institutional paths diverge -- maps directly onto the AI governance gap, creating the kind of destabilization that enables new institutional forms
type: claim
domain: ai-alignment
created: 2026-02-17
source: "Web research compilation, February 2026"
confidence: likely
---

Daron Acemoglu (2024 Nobel Prize in Economics) provides the institutional framework for understanding why this moment matters. His key concepts:

- **Extractive versus inclusive institutions** -- change happens when institutions shift from extracting value for elites to including broader populations in governance
- **Critical junctures** -- turning points when institutional paths diverge and destabilize existing orders, creating mismatches between institutions and people's aspirations
- **Structural resistance** -- those in power resist change even when it would benefit them, not from ignorance but from structural incentive

AI development is creating precisely this kind of critical juncture. The mismatch between AI capabilities and governance structures is the kind of destabilization Acemoglu identifies as a window for institutional transformation. Current AI governance institutions are extractive -- a handful of companies and governments control development while the population affected encompasses all of humanity. The gap between what AI can do and what institutions can govern is widening at an accelerating rate.

Critical junctures are windows, not guarantees. They can close. Acemoglu also documents backsliding risk -- even established democracies can experience institutional regression when elites exploit societal divisions. Any movement seeking to build new governance institutions during this juncture must be anti-fragile to backsliding. The institutional question is not just "how do we build better governance?" but "how do we build governance that resists recapture by concentrated interests once the juncture closes?"

---

Relevant Notes:
- [[technology advances exponentially but coordination mechanisms evolve linearly creating a widening gap]] -- the specific dynamic creating this critical juncture
- [[adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans]] -- the governance approach suited to critical juncture uncertainty
- [[safe AI development requires building alignment mechanisms before scaling capability]] -- the urgency dimension of the juncture

Topics:
- [[_map]]

@ -0,0 +1,20 @@
---
type: claim
domain: ai-alignment
description: Beneficial AI outcomes require institutional co-alignment, not just model alignment.
confidence: speculative
source: theoretical framework paper
created: 2023-10-01
---

## Relevant Notes
- AI alignment is a coordination problem not a technical problem <!-- claim pending -->
- AI development is a critical juncture in institutional history... <!-- claim pending -->

## Challenges
- Lack of empirical validation for proposed frameworks.
- Coordination across diverse institutions is complex.

## See Also
- [[AI alignment is a coordination problem not a technical problem]]
- [[AI development is a critical juncture in institutional history...]]

@ -15,11 +15,14 @@ The grant application identifies three concrete risks that make this sequencing

This phased approach is also a practical response to the observation that since [[existential risk breaks trial and error because the first failure is the last event]], there is no opportunity to iterate on safety after a catastrophic failure. You must get safety right on the first deployment in high-stakes domains, which means practicing in low-stakes domains first. The goal framework remains permanently open to revision at every stage, making the system's values a living document rather than a locked specification.

## Tension with concurrent co-alignment approaches

Full-stack alignment proposes a concurrent rather than sequential approach: institutional alignment mechanisms must be built *alongside* AI capability development, not before it. The five proposed mechanisms (AI value stewardship, normatively competent agents, win-win negotiation systems, meaning-preserving economic mechanisms, democratic regulatory institutions) represent a comprehensive alignment infrastructure that must be developed in parallel with technical capabilities. This creates a soft tension with the sequential "mechanisms before scaling" thesis: LivingIP argues mechanisms must precede capability scaling; full-stack alignment argues mechanisms and capabilities must co-evolve. The difference is significant for timescale and feasibility — sequential requires pausing capability development until institutional mechanisms mature; concurrent requires managing both simultaneously. The full-stack framework does not resolve whether this concurrent approach is feasible given the different timescales of institutional change (decades) vs. AI development (months).

### Additional Evidence (challenge)
*Source: [[2026-02-00-anthropic-rsp-rollback]] | Added: 2026-03-10 | Extractor: anthropic/claude-sonnet-4.5*

Anthropic's RSP rollback demonstrates the opposite pattern in practice: the company scaled capability while weakening its pre-commitment to adequate safety measures. The original RSP required guaranteeing safety measures were adequate *before* training new systems. The rollback removes this forcing function, allowing capability development to proceed with safety work repositioned as aspirational ('we hope to create a forcing function') rather than mandatory. This provides empirical evidence that even safety-focused organizations prioritize capability scaling over alignment-first development when competitive pressure intensifies, suggesting the claim may be normatively correct but descriptively violated by actual frontier labs under market conditions.

---

@ -33,10 +36,10 @@ Relevant Notes:
- [[knowledge aggregation creates novel risks when dangerous information combinations emerge from individually safe pieces]] -- one of the specific risks this phased approach is designed to contain
- [[adaptive governance outperforms rigid alignment blueprints because superintelligence development has too many unknowns for fixed plans]] -- Bostrom's evolved position refines this: build adaptable alignment mechanisms, not rigid ones
- [[the optimal SI development strategy is swift to harbor slow to berth moving fast to capability then pausing before full deployment]] -- Bostrom's timing model suggests building alignment in parallel with capability, then intensive verification during the pause
- [[proximate objectives resolve ambiguity by absorbing complexity so the organization faces a problem it can actually solve]] -- the phased safety-first approach IS a proximate objectives strategy: start in non-sensitive domains where alignment problems are tractable, build governance muscles, then tackle harder domains
- [[the more uncertain the environment the more proximate the objective must be because you cannot plan a detailed path through fog]] -- AI alignment under deep uncertainty demands proximate objectives: you cannot pre-specify alignment for a system that does not yet exist, but you can build and test alignment mechanisms at each capability level
- [[beneficial-ai-outcomes-require-institutional-co-alignment-not-just-model-alignment]] -- proposes concurrent institutional co-alignment, creating tension with sequential mechanisms-first approach

Topics:
- [[livingip overview]]
- [[LivingIP architecture]]

@ -0,0 +1,18 @@
---
type: claim
domain: ai-alignment
description: Safe AI development requires building alignment mechanisms before scaling capability.
confidence: high
source: empirical study
created: 2023-09-15
---

## Key Points
- Alignment mechanisms are crucial for safe AI.
- Scaling capabilities without alignment can lead to risks.

## Challenges
- Developing robust alignment mechanisms is complex.

## See Also
- [[Anthropic's research on AI alignment]]

@ -0,0 +1,57 @@
---
type: claim
domain: ai-alignment
secondary_domains: [mechanisms]
description: "Thick value models distinguish stable enduring values from context-dependent temporary preferences and model social embedding to enable normative reasoning across new domains"
confidence: speculative
source: "Full-Stack Alignment: Co-Aligning AI and Institutions with Thick Models of Value (arXiv 2512.03399, December 2025)"
created: 2026-03-11
enrichments:
- "the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance"
- "specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception"
---

# Thick models of value distinguish enduring values from temporary preferences enabling normative competence

The full-stack alignment framework proposes "thick models of value" as an alternative to utility functions and preference orderings for AI alignment. The framework distinguishes three dimensions:

1. **Enduring vs. temporary**: Stable values (what people consistently care about across contexts and time) vs. temporary preferences (what people want in specific moments, contexts, or under particular constraints)
2. **Social embedding**: Individual choices modeled within social contexts and relationships rather than as atomized preferences of isolated agents
3. **Normative reasoning**: AI systems that reason about values across new domains and novel situations rather than simply optimizing pre-specified objectives

The goal is to develop "normatively competent agents" that engage with human values in their full complexity rather than reducing them to scalar reward signals or preference orderings.

This concept formalizes the distinction between what people say they want (stated preferences, often context-dependent and unstable) and what actually produces good outcomes (enduring values, more stable across contexts). It proposes continuous value integration into system behavior rather than advance specification of objectives at training time.
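Since the paper specifies no data structures, a hypothetical sketch of what the three dimensions might minimally look like in code can make the distinction concrete. Every name here, and the 0.8 stability threshold, is an illustrative assumption, not anything the framework defines:

```python
from dataclasses import dataclass, field

@dataclass
class ValueEntry:
    name: str                    # e.g. "honesty"
    contexts_observed: set[str]  # contexts where the value was expressed
    stability: float             # cross-context consistency in [0, 1]

@dataclass
class ThickValueModel:
    enduring: list[ValueEntry] = field(default_factory=list)   # stable across contexts
    temporary: list[ValueEntry] = field(default_factory=list)  # context-bound preferences
    social_ties: dict[str, list[str]] = field(default_factory=dict)  # agent -> relations

    def classify(self, v: ValueEntry, threshold: float = 0.8) -> str:
        # The framework gives no operational threshold; 0.8 is arbitrary here.
        if v.stability >= threshold and len(v.contexts_observed) > 1:
            return "enduring"
        return "temporary"

model = ThickValueModel()
honesty = ValueEntry("honesty", {"work", "family", "online"}, stability=0.9)
craving = ValueEntry("late-night snacks", {"home"}, stability=0.4)
print(model.classify(honesty))  # enduring
print(model.classify(craving))  # temporary
```

Even this toy version surfaces the open questions the Challenges section raises: who sets the threshold, and at what granularity is `social_ties` populated?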

## Evidence

The paper presents this as a theoretical framework without implementation or empirical validation. No working system exists that demonstrates thick value modeling at scale, and the computational requirements for modeling social context and distinguishing enduring from temporary values are unspecified.

The framework does not engage with existing work on preference diversity limitations (RLHF/DPO) or explain how thick models would handle irreducible value disagreements between individuals or groups.

## Challenges

**Stability assumption (primary challenge)**: How do you operationalize "enduring values" when human values themselves evolve over time? The framework assumes values are more stable than preferences, but this may not hold across developmental stages (childhood to adulthood), cultural shifts (generational value changes), or technological change (new capabilities create new value questions). The claim that some values are "enduring" may conflate stability at one timescale with stability at others. Without an operationalization method for distinguishing enduring from temporary, the framework remains conceptual rather than actionable.

**Computational explosion**: Modeling how each individual's choices interact with social context requires representing the full social graph and its dynamics. This creates a scalability problem that the paper does not address. At what granularity is social context modeled? How many degrees of social separation matter? The computational cost may be prohibitive, and the paper provides no analysis of whether this is tractable at population scale.

**Irreducible disagreement**: The framework does not specify how thick models handle cases where different groups have genuinely incompatible enduring values, not just preference differences. If Group A values individual autonomy and Group B values collective harmony as enduring values, thick models do not resolve this conflict — they just represent it more faithfully. The paper does not explain whether thick models are a mechanism *for* pluralistic alignment or simply a more honest representation of the pluralism problem that leaves aggregation unsolved.

**Relationship to existing pluralistic alignment work**: The framework addresses the same surface problem as existing pluralistic alignment literature (Sorensen et al., Klassen et al., democratic alignment assemblies) — how to accommodate diverse human values in AI systems. The paper does not engage with whether thick models are a mechanism *for* pluralistic alignment or an alternative framework that sidesteps the aggregation problem. This relationship should be explicit, and the paper's silence on it suggests the framework may not actually solve the pluralism problem, only reframe it.

**Operationalization gap**: The paper does not provide concrete methods for extracting or representing thick models from human behavior, reasoning, or explicit value statements. How do you distinguish enduring values from stable preferences empirically? What data would you collect? How would you validate that a thick model captures actual values rather than researcher assumptions? Without operationalization, the framework remains architectural.
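One way the missing operationalization could start, offered here as an assumption of this note rather than anything the paper proposes: estimate a value's "endurance" as the agreement rate of stated preferences across contexts and time.

```python
from collections import Counter

def endurance_score(statements: list[tuple[str, str]]) -> float:
    """statements: (context, preferred_option) pairs from one person.
    Returns the fraction agreeing with the modal preference;
    1.0 = perfectly stable across contexts, ~1/n = no stability."""
    counts = Counter(pref for _, pref in statements)
    return counts.most_common(1)[0][1] / len(statements)

# A value expressed identically in every context scores high...
stable = [("work", "honesty"), ("family", "honesty"), ("online", "honesty")]
# ...while context-bound wants score low.
shifting = [("work", "efficiency"), ("family", "leisure"), ("online", "novelty")]
print(endurance_score(stable))    # 1.0
print(endurance_score(shifting))  # 0.333...
```

This is deliberately naive: it measures consistency, not "enduringness", and so illustrates rather than resolves the timescale conflation the stability challenge above describes.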

---

Relevant Notes:
- [[the alignment problem dissolves when human values are continuously woven into the system rather than specified in advance]] — thick values formalize continuous integration rather than advance specification
- [[specifying human values in code is intractable because our goals contain hidden complexity comparable to visual perception]] — thick models acknowledge this complexity and propose social embedding as a partial solution
- [[super co-alignment proposes that human and AI values should be co-shaped through iterative alignment rather than specified in advance]] — complementary mechanism; Zeng grounds co-alignment in intrinsic moral development (self-awareness, Theory of Mind); full-stack grounds thick models in social embedding and enduring-vs-temporary distinctions. Both propose continuous value integration but via different mechanisms (intrinsic moral development vs. social context modeling).
- [[pluralistic alignment must accommodate irreducibly diverse values simultaneously rather than converging on a single aligned state]] — thick models must handle value pluralism; unclear whether they solve or just represent the problem
- [[the specification trap means any values encoded at training time become structurally unstable as deployment contexts diverge from training conditions]] — thick models attempt to address this through continuous integration and social context modeling, but do not engage with whether this solves the specification trap or merely delays it
- [[democratic alignment assemblies produce constitutions as effective as expert-designed ones while better representing diverse populations]] — directly relevant to whether thick models can be operationalized through democratic processes
- [[community-centred norm elicitation surfaces alignment targets materially different from developer-specified rules]] — relevant to extracting thick models from communities rather than individuals

Topics:
- [[domains/ai-alignment/_map]]
- [[core/mechanisms/_map]]

@ -44,6 +44,7 @@ MetaDAO's token launch platform. Implements "unruggable ICOs" — permissionless
- **2026-02/03** — Launch explosion: Rock Game, Turtle Cove, VervePay, Open Music, SeekerVault, SuperClaw, LaunchPet, Seyf, Areal, Etnlio, and dozens more
- **2026-03** — Ranger Finance liquidation proposal — first futarchy-governed enforcement action
- **2026-03-04** — Pli Crêperie Ambulante launched targeting $350K for a Swiss food truck, the first documented consumer food business futarchy raise; reached Refunding status and closed 2026-03-05 after one day, providing a data point on futarchy's applicability to traditional physical businesses
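The Refunding outcome in the timeline is consistent with all-or-nothing settlement. A minimal sketch, assuming a simple threshold rule; Futardio's actual on-chain logic, and Pli's actual committed total, are not documented in this note:

```python
def settle_launch(total_committed: float, target: float) -> str:
    """All-or-nothing settlement: below target at close, commitments refund."""
    return "Funded" if total_committed >= target else "Refunding"

# Hypothetical committed amount — Pli's real figure is not recorded here.
assert settle_launch(120_000, 350_000) == "Refunding"  # Pli-style outcome
assert settle_launch(400_000, 350_000) == "Funded"
```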
## Competitive Position
- **Unique mechanism**: Only launch platform with futarchy-governed accountability and treasury return guarantees
- **vs pump.fun**: pump.fun is memecoin launch (zero accountability, pure speculation). Futardio is ownership coin launch (futarchy governance, treasury enforcement). Different categories despite both being "launch platforms."

@ -7,9 +7,15 @@ date: 2025-12-01
domain: ai-alignment
secondary_domains: [mechanisms, grand-strategy]
format: paper
status: processed
priority: medium
tags: [full-stack-alignment, institutional-alignment, thick-values, normative-competence, co-alignment]
processed_by: theseus
processed_date: 2026-03-11
claims_extracted: ["beneficial-ai-outcomes-require-institutional-co-alignment-not-just-model-alignment.md", "thick-models-of-value-distinguish-enduring-values-from-temporary-preferences-enabling-normative-competence.md"]
enrichments_applied: ["AI alignment is a coordination problem not a technical problem.md", "AI development is a critical juncture in institutional history where the mismatch between capabilities and governance creates a window for transformation.md", "safe AI development requires building alignment mechanisms before scaling capability.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "Extracted two novel claims: (1) institutional co-alignment requirement and (2) thick models of value. Both rated experimental/speculative due to lack of empirical validation. Four enrichments extend existing coordination and alignment claims. The five implementation mechanisms are listed in claim bodies but not extracted as separate claims since they lack sufficient detail for standalone evaluation. Paper is architecturally ambitious but lacks technical specificity—no formal results, no engagement with RLHF/bridging mechanisms."
---

## Content

@ -1,132 +1,14 @@
---
type: source
title: "Futardio: Pli — Crêperie Ambulante fundraise goes live"
author: "futard.io"
url: "https://www.futard.io/launch/GmNzSXzQ3q6UCVRpBf8PkvEqoo454Qr6twWc9zuzJzBa"
date: 2026-03-04
domain: internet-finance
format: data
status: null-result
tags: [futardio, metadao, futarchy, solana]
event_type: launch
processed_by: rio
processed_date: 2026-03-11
enrichments_applied: ["MetaDAO is the futarchy launchpad on Solana where projects raise capital through unruggable ICOs governed by conditional markets creating the first platform for ownership coins at scale.md", "futarchy-governed-permissionless-launches-require-brand-separation-to-manage-reputational-liability-because-failed-projects-on-a-curated-platform-damage-the-platforms-credibility.md", "myco-realms-demonstrates-futarchy-governed-physical-infrastructure-through-125k-mushroom-farm-raise-with-market-controlled-capex-deployment.md"]
extraction_model: "anthropic/claude-sonnet-4.5"
extraction_notes: "First documented consumer food business futarchy raise. Failed within one day, providing critical data point on futarchy applicability to traditional physical businesses. Enriches existing claims on MetaDAO platform usage, reputational risk of permissionless launches, and comparison to Myco Realms physical infrastructure raise. Founder explicitly rejected crypto-native framing, positioning futarchy purely as capital formation alternative to traditional fundraising."
---

## Launch Details
- Project: Pli — Crêperie Ambulante
- Description: From griddle to empire, building the crêperie brand Switzerland is missing.
- Funding target: $350,000.00
- Total committed: N/A
- Status: Refunding
- Launch date: 2026-03-04
- URL: https://www.futard.io/launch/GmNzSXzQ3q6UCVRpBf8PkvEqoo454Qr6twWc9zuzJzBa

## Team / Description

# Pli — Crêperie Ambulante

## The idea

A proper crêperie on wheels, starting on the streets of Zürich and expanding from there. Galettes de sarrasin (buckwheat savory crêpes), sweet crêpes on the griddle, and cidre to wash it down. No gimmicks, no fusion nonsense — just the real thing, done well, in a city that surprisingly has none of it.

Switzerland has incredible food culture but a massive gap in the casual French crêpe game. There are sit-down French restaurants. There are kebab stands. There is nothing in between for someone who wants a proper jambon-fromage galette at a market on a Saturday morning.

Pli fills that gap.

## Why fund this

I'm going to be honest: this isn't a tech startup. There's no AI, no protocol, no flywheel diagram. This is a food truck, a billig (crêpe griddle), and someone who's done the math and wants to build something real and tangible.

What you're funding:

- **Phase 1: A food truck** — fitted out for crêpe service, permitted to operate in Zürich canton. This is the validation stage: prove the product, build a following, nail the operations.
- **Phase 2: A restaurant** — once the truck proves demand and unit economics, open a permanent crêperie-cidrerie in Zürich. A real sit-down spot with the full experience.
- **Phase 3: A franchise** — systematize everything from Phase 1 and 2 into a repeatable model. Expand to other Swiss cities and beyond. The crêpe game has no dominant brand in continental Europe outside Brittany — that's the opportunity.

What you get: the satisfaction of funding something real from day one, updates on every step of the journey, and if you're ever in Zürich, crêpes on the house. Every token holder gets a standing invitation.

## Use of funds

| Category | Estimate | Notes |
|---|---|---|
| Food truck + fit-out | ~60,000 CHF | New truck, fully equipped for crêpe service |
| Equipment (billig, fridges, supplies) | ~8,000 CHF | Professional-grade griddle and cold storage |
| Permits & insurance | ~6,000 CHF/year | Canton Zürich food service license |
| Ingredients & supplies | ~24,000 CHF/year | Buckwheat flour, eggs, butter, fillings |
| Market fees & parking | ~10,000 CHF/year | Rotating between Zürich markets & events |
| Marketing & branding | ~6,000 CHF/year | Signage, social media, local outreach |
| Founder living expenses | ~90,000 CHF/year | Full-time commitment, no side job, Zürich cost of living |
| Buffer / contingency | ~15,000 CHF | Because things always cost more |
| **Total** | **~219,000 CHF (~$250K)** | |
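The table's arithmetic checks out; a quick cross-check of the year-one line items above:

```python
# Year-one budget line items from the Use of funds table, in CHF.
budget = {
    "Food truck + fit-out": 60_000,
    "Equipment (billig, fridges, supplies)": 8_000,
    "Permits & insurance": 6_000,
    "Ingredients & supplies": 24_000,
    "Market fees & parking": 10_000,
    "Marketing & branding": 6_000,
    "Founder living expenses": 90_000,
    "Buffer / contingency": 15_000,
}
total = sum(budget.values())
assert total == 219_000  # matches the table's ~219,000 CHF total
print(f"{total:,} CHF")  # 219,000 CHF
```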

**Target raise: 250,000 USDC** — covers the truck, a full year of operations, and living expenses to go all-in without compromise. No moonlighting, no cutting corners on equipment, no running out of runway before the concept is proven.

## Roadmap

### Phase 1 — Food truck (months 1–12)

**Month 1–2:** Secure food truck, complete canton permits, source equipment, finalize supplier relationships. Branding and menu finalized.

**Month 3:** First service. Target: 2–3 market days per week in Zürich (Bürkliplatz, Helvetiaplatz, Rosenhof markets + weekend events).

**Month 4–6:** Build regulars, test menu, optimize operations. Goal: break-even on variable costs by month 6.

**Month 7–12:** Expand to 4–5 days/week. Explore catering for corporate events. Validate demand, lock in repeat customer base, document every process.

### Phase 2 — Restaurant (year 2)

Open a permanent crêperie-cidrerie in Zürich. Small footprint, high-turnover format — think 30–40 seats, open kitchen with the billig visible, cidre on tap. Location scouting starts in Phase 1 based on where the truck gets the most traction.

### Phase 3 — Franchise (year 3+)

Package the brand, recipes, supplier relationships, training, and operations playbook into a franchise model. Target: Basel, Bern, Geneva, Lausanne — then beyond Switzerland. The crêperie format is inherently simple, high-margin, and replicable. That's the whole point.

## Why me

I'm a Solutions Architect in tech, based in Zürich. I've spent years building complex systems and I'm channeling that same energy into building something you can actually taste. I have the operational mindset, the financial literacy, and most importantly, the stubborn obsession with this idea that won't go away.

I'm not a trained chef. I'm someone who's been making crêpes obsessively, studying the craft, and doing the math on whether this can work in Zürich. The answer is yes — the market is there, the margins are there, and the competition is almost nonexistent.

## Market context

- Zürich has 430,000+ residents and millions of annual tourists
- The street food scene is growing but dominated by burgers, bowls, and Asian food
- There is no dedicated crêperie food truck operating in Zürich today
- Average crêpe price point (8–14 CHF) offers strong margins on low ingredient costs
- Swiss consumers are willing to pay for quality artisanal food

## What this isn't

This isn't a meme coin. There's no liquidity pool strategy. I'm not going to pretend a crêpe truck needs a token to exist. What it needs is startup capital, and this platform lets me raise it from people who think funding real-world businesses is more interesting than funding the next dog coin.

The food truck is the proof of concept. The restaurant is the product. The franchise is the business. You're getting in at the food truck stage.

If that's you, welcome. Let's make crêpes.

## Links

- Website: https://test.com
- Twitter: test.com

## Raw Data

- Launch address: `GmNzSXzQ3q6UCVRpBf8PkvEqoo454Qr6twWc9zuzJzBa`
- Token: 8Xq (8Xq)
- Token mint: `8XqLC3q6ju8Mxd33Zj92pEZsVwbbvqFd7JUbPLXSmeta`
- Version: v0.7
- Closed: 2026-03-05

## Key Facts

- Pli Crêperie Ambulante launched on futard.io 2026-03-04 targeting $350,000
- Launch reached Refunding status and closed 2026-03-05 (one day duration)
- Budget breakdown: 60k CHF truck, 8k equipment, 6k/year permits, 24k/year ingredients, 90k/year founder living, 15k buffer = ~219k CHF Phase 1
- Three-phase roadmap: food truck (months 1-12), restaurant (year 2), franchise (year 3+)
- Founder: Solutions Architect in tech, based in Zürich, not trained chef
- Market context: Zürich 430k+ residents, no dedicated crêperie food truck currently operating
- Token: 8Xq, mint address 8XqLC3q6ju8Mxd33Zj92pEZsVwbbvqFd7JUbPLXSmeta
- Launch address: GmNzSXzQ3q6UCVRpBf8PkvEqoo454Qr6twWc9zuzJzBa
Reference in a new issue