The fastest way to make an app in 2026 is to ship an AI-native stack from day one: edge-first backend, server components on the frontend, passkey auth, Claude MCP for any LLM-driven feature, and an agent-based ops layer that keeps the system running while you sleep. This is the exact stack we ship with for every new POC and MVP, and it lets a two-person team deliver what needed six engineers in 2022.
The rules on how to make an app have changed more in the last 18 months than they did in the previous decade. Most tutorials online still describe the 2022 stack — Node + Express + Postgres + React SPA + long-lived JWTs — and that stack now takes three times longer to ship and produces an app that feels slow the moment it goes live. Below is what we actually use, why we chose each piece, and where the tradeoffs bite.
Why the 2022 stack no longer makes sense for new apps
The old default was a monolithic backend running in one region, a React SPA hydrating on the client, and a REST API in between. It worked, and millions of apps still run on it. But every layer of that stack has a replacement that is strictly faster to build, strictly faster to serve, or both.
We still build on that stack when a client has an existing codebase that uses it — rewriting for the sake of fashion is a waste of money. But for greenfield POCs and MVPs, starting on the 2026 stack cuts our delivery timeline by 30-50%, and the resulting app has the global latency profile of Cloudflare's network, not of a single server in whichever region happens to be closest to your dev machine.
The backend: edge functions over monolithic regional servers
We default to edge functions — typically Cloudflare Workers or Vercel Edge Functions — for any new API layer. The reason is not the buzzword. It is that edge runtimes start in under 5ms globally compared to 100-300ms for a cold Lambda or a container spin-up. When your app's median request is a fast database read, that cold-start delta is the entire perceived latency budget.
The tradeoff we accept: edge runtimes have restricted Node APIs and a smaller ecosystem. We chose this over traditional serverless for every new POC because 90% of the endpoints in a typical MVP are simple CRUD, auth, and AI-proxying — all of which run fine at the edge. For the 10% that need a full Node environment (PDF generation, heavy image processing, long-running jobs), we offload to a regional worker queue and keep the edge layer thin.
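The edge/queue split above can be sketched as a routing rule at the top of the edge layer. This is an illustrative sketch, not a real framework API — the paths and the `routeTarget` helper are hypothetical:

```typescript
// Decide at the edge which requests it can serve itself and which get
// offloaded to a regional worker queue that has a full Node runtime.
type Target = "edge" | "queue";

interface RouteRule {
  pattern: RegExp;
  target: Target;
}

// The ~10% of endpoints that need a full Node environment go to the queue;
// everything else (CRUD, auth, AI proxying) is handled at the edge.
const rules: RouteRule[] = [
  { pattern: /^\/api\/export\/pdf/, target: "queue" },     // PDF generation
  { pattern: /^\/api\/images\/process/, target: "queue" }, // heavy image work
  { pattern: /^\/api\//, target: "edge" },                 // default
];

function routeTarget(path: string): Target {
  const rule = rules.find((r) => r.pattern.test(path));
  return rule ? rule.target : "edge";
}
```

The point of the pattern is that the edge layer stays thin: it only ever routes or answers fast requests, and anything slow is someone else's runtime.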
The database: Postgres with a global read layer, not a global DB
Every vendor wants to sell us a "globally distributed database" for our MVP. We have not found a case where it is worth the complexity. What we do ship is a single-region Postgres primary — usually Neon or Supabase — with aggressive edge caching via Cloudflare's KV or D1 for read-heavy endpoints.
For a typical MVP we hit 85-95% cache hit rates on reads. Writes go to a single region with 50-150ms latency from far-away users, which is invisible for most workflows. Global write replication adds weeks of engineering for a problem MVPs do not have. We revisit this decision at scale, not before.
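The read path described above is a cache-aside pattern. Here is a minimal sketch with an in-memory `Map` standing in for Cloudflare KV (real Workers code would use the KV binding's `get`/`put` with `expirationTtl`); `cachedRead` and the TTL are illustrative:

```typescript
// Cache-aside read: serve from the edge cache when fresh, otherwise fall
// through to the single-region Postgres primary and repopulate the cache.
type Fetcher<T> = () => Promise<T>;

interface Entry<T> {
  value: T;
  expiresAt: number;
}

const cache = new Map<string, Entry<unknown>>();

async function cachedRead<T>(
  key: string,
  ttlMs: number,
  loadFromPrimary: Fetcher<T>, // the 50-150ms cross-region Postgres read
): Promise<{ value: T; hit: boolean }> {
  const now = Date.now();
  const entry = cache.get(key) as Entry<T> | undefined;
  if (entry && entry.expiresAt > now) {
    return { value: entry.value, hit: true }; // served from the edge
  }
  const value = await loadFromPrimary();
  cache.set(key, { value, expiresAt: now + ttlMs });
  return { value, hit: false };
}
```

With an 85-95% hit rate, the expensive cross-region read only happens on the first request per key per TTL window; every other reader gets the edge copy.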
The frontend: server components by default, client hydration only where interactive
We ship most new apps on Next.js with React Server Components as the default. The shift matters because server components render on the server and stream HTML without shipping their JS to the client — so a dashboard with three interactive widgets ships the JS for three widgets, not for the entire dashboard framework.
On our last MVP rebuild we watched client bundle size drop from 480kb to 94kb with no feature loss. That is a 4× improvement in time-to-interactive on mid-tier Android devices, which is half our target user base. We chose RSC over a plain SPA because the ergonomics now match (data fetching is colocated with components), and the perf delta is too large to leave on the table.
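The mental model is easier to see in code. The sketch below is schematic, not real RSC internals — plain async functions stand in for components and return HTML strings — but it shows the two properties that matter: data fetching colocated with the markup that uses it, and client JS shipped only for the interactive islands:

```typescript
// "Server component": runs on the server, returns rendered HTML,
// ships zero JS to the client.
async function fetchRevenue(): Promise<number> {
  return 42_000; // placeholder for a real database read
}

async function RevenueCard(): Promise<string> {
  const revenue = await fetchRevenue(); // data fetch colocated with markup
  return `<section><h2>Revenue</h2><p>$${revenue.toLocaleString("en-US")}</p></section>`;
}

// Only interactive widgets get marked for client hydration — the
// equivalent of a "use client" boundary in Next.js.
function clientIsland(html: string): string {
  return `<div data-hydrate="true">${html}</div>`;
}

async function Dashboard(): Promise<string> {
  const card = await RevenueCard();
  return `<main>${card}${clientIsland("<button>Filter</button>")}</main>`;
}
```

In a real dashboard with three interactive widgets, only those three islands carry hydration JS; the rest of the page is inert HTML streamed from the server.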
Auth: passkeys first, email-link fallback, passwords last
We do not build new apps with password-first auth anymore. Passkeys — built on WebAuthn — are now supported across iOS, Android, Windows, macOS, and every major browser, and the UX is better than passwords on every axis: faster sign-up, faster sign-in, phishing-resistant by design.
The stack we actually ship is: passkey registration on sign-up, magic-link email as the fallback and account recovery path, and a password option only if the client's enterprise customers demand it. This flow removes the entire "forgot password" support burden. We have watched support ticket volume for auth issues drop 70% on apps that switch from password-first to passkey-first.
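The server side of passkey registration starts with generating WebAuthn creation options for the browser. The sketch below hand-builds the options object to show the shape — the `rp.id` domain is assumed, and a real app should use a vetted library such as SimpleWebAuthn rather than rolling this by hand:

```typescript
import { randomBytes } from "node:crypto";

// Field names follow the WebAuthn PublicKeyCredentialCreationOptions shape.
interface CreationOptions {
  challenge: string; // base64url; verified server-side after the ceremony
  rp: { id: string; name: string };
  user: { id: string; name: string; displayName: string };
  pubKeyCredParams: { type: "public-key"; alg: number }[];
  authenticatorSelection: { residentKey: "required"; userVerification: "preferred" };
}

function passkeyRegistrationOptions(userId: string, email: string): CreationOptions {
  return {
    // Random challenge binds this ceremony to this server session.
    challenge: randomBytes(32).toString("base64url"),
    rp: { id: "app.example.com", name: "Example App" }, // assumed domain
    user: { id: userId, name: email, displayName: email },
    // -7 = ES256, -257 = RS256: the two algorithms with near-universal
    // authenticator support.
    pubKeyCredParams: [
      { type: "public-key", alg: -7 },
      { type: "public-key", alg: -257 },
    ],
    // residentKey: "required" makes the credential a discoverable passkey,
    // so users can sign in without typing an identifier first.
    authenticatorSelection: { residentKey: "required", userVerification: "preferred" },
  };
}
```

The browser hands this object to `navigator.credentials.create()`; the magic-link fallback only enters the picture when no authenticator is available.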
AI integration: Claude MCP for every LLM-driven feature
If the app has any AI-powered feature — summarization, classification, chat, search — we architect it through Anthropic's Model Context Protocol rather than inlining API calls throughout the codebase. MCP lets us expose each data source (database, file store, external API) as a typed server the model can query on demand.
The payoff compounds. Feature one takes the same time as a traditional inline integration. Feature two takes half as long because the MCP servers for user data, content, and settings already exist. Feature three takes a fraction of the time. For the full reasoning and architecture patterns, we wrote up how Claude MCP works in production. On a recent MVP, the second AI feature shipped in 3 days because we could reuse the MCP servers from the first.
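The compounding comes from the tool-server shape itself. The sketch below is not the real MCP SDK (`@modelcontextprotocol/sdk`) — it is a minimal stand-in for the pattern, showing why feature two can simply call servers that feature one already registered:

```typescript
// Each data source (database, file store, external API) is exposed as a
// named tool the model can discover and call on demand.
type ToolHandler = (args: Record<string, unknown>) => Promise<unknown>;

interface Tool {
  name: string;
  description: string;
  handler: ToolHandler;
}

class ToolServer {
  private tools = new Map<string, Tool>();

  register(tool: Tool): void {
    this.tools.set(tool.name, tool);
  }

  list(): string[] {
    return [...this.tools.keys()];
  }

  async call(name: string, args: Record<string, unknown>): Promise<unknown> {
    const tool = this.tools.get(name);
    if (!tool) throw new Error(`unknown tool: ${name}`);
    return tool.handler(args);
  }
}

// Feature one builds this server once; every later AI feature reuses it.
const userData = new ToolServer();
userData.register({
  name: "get_user",
  description: "Fetch a user record by id",
  handler: async (args) => ({ id: args.id, plan: "pro" }), // placeholder read
});
```

In the inline-API-call style, that `get_user` plumbing gets rewritten inside every feature; as a tool server, it is written once and queried by any model-driven feature that needs it.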
Ops: AI sub-agents, not a full SRE function
The 2026 shift that surprises clients most is the ops layer. We run every shipped app with a set of AI sub-agents monitoring logs, alerting on anomalies, drafting postmortems, and proposing fixes. We covered the full architecture in our breakdown of 11 specialized AI sub-agents powering our engineering workflow.
This is not a replacement for an on-call human — it is a force multiplier. An MVP that would have needed a full-time ops engineer to babysit in 2022 now runs with an agent that triages 90% of alerts and escalates the rest. We chose this over hiring a junior devops engineer per client because the agents scale linearly with the number of apps we support, not with team size.
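The triage split is the heart of that force multiplier. A minimal sketch of the escalation policy — severities and thresholds here are illustrative, not our production rules:

```typescript
// The agent auto-handles routine alerts (drafts notes, applies known fixes)
// and escalates the rest to the on-call human.
type Severity = "info" | "warning" | "critical";

interface Alert {
  severity: Severity;
  service: string;
  recurrences: number; // times this alert fired in the last hour
}

type Triage = "auto-handle" | "escalate";

function triage(alert: Alert): Triage {
  // Anything critical, or a warning that keeps recurring, goes to a human.
  if (alert.severity === "critical") return "escalate";
  if (alert.severity === "warning" && alert.recurrences >= 3) return "escalate";
  return "auto-handle"; // the ~90% case
}
```

The human stays in the loop for every escalation; the agent's job is to make sure the pager only fires for the 10% that actually needs judgment.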
The stack in one diagram
```
User → Cloudflare CDN
     → Edge Function (auth + routing)
         ├─ Postgres (single region, Neon/Supabase)
         ├─ KV cache (edge, 85-95% hit rate)
         ├─ MCP servers (per data source)
         │   └─ Claude API (Sonnet for most features)
         └─ Regional worker queue (heavy jobs only)

Frontend: Next.js + React Server Components
Auth:     Passkeys → Magic link → (Password if required)
Ops:      AI sub-agents for monitoring, triage, postmortems
```
Every piece in that diagram is chosen for a specific reason: the edge function because cold starts kill perceived performance, single-region Postgres because multi-region write complexity is not worth it at MVP scale, RSC because shipping less JS wins on mobile, passkeys because the UX and support burden are better, MCP because AI features compound when architected as tools rather than prompts, and AI ops because the alternative is hiring humans to watch dashboards.
What this stack costs to stand up
The infrastructure bill for a typical MVP on this stack is $40-$150/month during development and $200-$800/month once live with real users. That is an order of magnitude less than the same architecture would have cost to host in 2020. The engineering time to stand up the skeleton — auth, database, edge routing, deployment — is about 3-5 days on the 2026 stack versus 2-3 weeks on the 2022 stack.
The time savings compound. Faster skeleton means more runway for the actual product features, which is where the client's idea actually lives. We have shipped MVPs in 4 weeks on this stack that would have taken 12 weeks on the old one.
Where this stack is the wrong choice
We do not use this stack when the app has hard requirements for long-running background processes (video encoding, multi-hour ML training), when the target market has regulatory data-residency constraints the edge providers cannot meet, or when the client's existing engineering team has deep expertise in a different stack and the app is meant to be handed back to them post-launch.
For those cases we still build on regional servers, traditional ORMs, and password-first auth. Picking tools to match the team, rather than the ideal architecture, is the harder engineering call, and we make it deliberately on every engagement.
The complete 2026 answer to how to make an app
Knowing how to make an app in 2026 is less about picking a framework and more about choosing a stack where every layer is AI-native, edge-first, and built around primitives that did not exist 24 months ago. The stack above — edge functions, single-region Postgres with edge caching, React Server Components, passkey auth, Claude MCP for AI features, and AI sub-agents for ops — is what we default to for new POCs and MVPs because it ships faster, runs faster, and costs less than any traditional alternative. For the strategy layer on top of this stack, see our companion piece: the complete guide from idea to launch.