Product Hunt is too loud, Hacker News is too cynical, and most dev Discords are too busy to read your link. projectvibe.ai is built for one thing: the moment someone ships a vibe-coded app and needs real humans to tell them what's working.
Submitting projects, reviewing others, following builders, leaving feedback — all free. Pro ($10/mo, Phase 6) unlocks analytics, featured placement, and native funding tools. Everything needed to ship and validate an app stays free.
Every review comes with a small scorecard — problem clarity, would-you-use-this, differentiation, execution. We aggregate those into a Vibe Score (the “how good is this really” signal, more below). Open prose stays prose, but the scorecard turns reviews into something actionable.
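The scorecard is easiest to picture as a small typed object. A minimal sketch — field names and the aggregation below are illustrative assumptions, not the actual schema:

```typescript
// Hypothetical shape of one review's scorecard. Field names are
// illustrative, not the real schema.
interface Scorecard {
  problemClarity: number;               // 1–5: is the problem well defined?
  wouldUseThis: "yes" | "maybe" | "no"; // would-you-use-this answer
  differentiation: number;              // 1–5: how distinct from what exists?
  execution: number;                    // 1–5: polish and completeness
}

// One plausible way to collapse the numeric dimensions into a single
// 1–5 rating per review (not necessarily what the site does).
function reviewRating(c: Scorecard): number {
  return (c.problemClarity + c.differentiation + c.execution) / 3;
}
```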
Every submission tells us which AI tool shipped it — Cursor, Lovable, v0, Bolt, Claude Code, etc. Honor system, but displayed prominently. Per-tool Top 25 archives become the definitive lists of what people actually build with each builder.
An Idea needs different feedback than a Launched app. We prompt reviewers differently based on what stage a project is in, so the feedback lines up with what the builder needs to hear right now.
How Vibe Score works
Vibe Score is a Bayesian-smoothed blend of three signals, reported only once a project has three or more reviews, so one glowing friend can't skew it.
A project with 5 reviews averaging 4.6 / 5 and “yes” on would-use lands around 87. A project with 30 reviews averaging 3.2 and mostly “maybe” lands around 62. The formula lives in lib/scoring/ranking.ts and is unit-tested so it doesn't drift.
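The smoothing idea can be sketched like this. Every constant here (prior mean, prior weight, blend weights) and the `vibeScore` helper are assumptions for illustration — the real formula and its tuned values live in lib/scoring/ranking.ts:

```typescript
// Illustrative sketch of Bayesian-smoothed scoring. All constants and
// names are assumptions; the actual formula lives in lib/scoring/ranking.ts.

interface Review {
  rating: number;                   // 1–5 scorecard average
  wouldUse: "yes" | "maybe" | "no"; // would-you-use-this answer
}

const PRIOR_MEAN = 3.5;  // assumed site-wide average rating
const PRIOR_WEIGHT = 2;  // assumed prior strength, in "virtual reviews"
const MIN_REVIEWS = 3;   // no score until a project has 3+ reviews

const USE_VALUE = { yes: 1, maybe: 0.5, no: 0 } as const;

function vibeScore(reviews: Review[]): number | null {
  if (reviews.length < MIN_REVIEWS) return null; // too few reviews to report

  const n = reviews.length;
  const avgRating = reviews.reduce((s, r) => s + r.rating, 0) / n;
  const avgUse = reviews.reduce((s, r) => s + USE_VALUE[r.wouldUse], 0) / n;

  // Bayesian smoothing: shrink the raw average toward the prior.
  // With few reviews the prior dominates; as n grows, the raw average wins.
  const smoothed = (n * avgRating + PRIOR_WEIGHT * PRIOR_MEAN) / (n + PRIOR_WEIGHT);

  // Rescale the 1–5 rating to 0–100 and blend in the would-use signal.
  const fromRating = ((smoothed - 1) / 4) * 100;
  return Math.round(0.7 * fromRating + 0.3 * avgUse * 100);
}
```

With these made-up weights, a handful of strong reviews still can't reach a perfect score, and a large pile of middling reviews settles near the middle of the range — the shape of the behavior described above, even if the exact numbers differ.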
projectvibe.ai itself is an AI-built app — Next.js 16, Supabase, Vercel, mostly Claude Code. It eats its own dogfood.