Six vector stores, three honest answers — pick by what your workload actually looks like
pgvector, Pinecone, Weaviate, Qdrant, Chroma, Turbopuffer / LanceDB. Most production RAG workloads do not need a purpose-built vector database. Some absolutely do. Knowing which is which is the work.
The options, ranked by the workload that picks each one
pgvector
When you already have Postgres
Vector search inside Postgres. No new service, no sync. Handles the workload for most RAG and semantic search use cases up to tens of millions of vectors. The default for teams already on Postgres or Supabase, and the right answer more often than the marketing for purpose-built vector DBs would suggest.
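To make "no new service, no sync" concrete, here is a minimal sketch of what a pgvector nearest-neighbor query looks like from application code. The `<=>` cosine-distance operator and the `[0.1,0.2,...]` vector-literal format are pgvector's own; the `documents` table and `embedding` column names are assumptions for illustration, and the literal is passed as a bind parameter rather than interpolated into the SQL.

```python
def to_vector_literal(embedding):
    """Format a Python list as a pgvector literal, e.g. '[0.1,0.2,0.3]'."""
    return "[" + ",".join(str(x) for x in embedding) + "]"


def nearest_neighbor_sql(table, column, k):
    """Compose a k-NN query using pgvector's cosine-distance operator <=>.

    The vector goes in as a bind parameter (%s), cast to the vector type,
    so the driver handles quoting.
    """
    return (
        f"SELECT id, {column} <=> %s::vector AS distance "
        f"FROM {table} "
        f"ORDER BY {column} <=> %s::vector "
        f"LIMIT {k}"
    )


# Hypothetical usage with any DB-API driver (e.g. psycopg):
# cur.execute(nearest_neighbor_sql("documents", "embedding", 5),
#             (to_vector_literal(query_vec), to_vector_literal(query_vec)))
sql = nearest_neighbor_sql("documents", "embedding", 5)
params = (to_vector_literal([0.1, 0.2, 0.3]),) * 2
```

Swap `<=>` for `<->` (L2 distance) or `<#>` (negative inner product) depending on how your embeddings were trained; that choice must match the index you build.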
Pinecone
Managed, fast, expensive
Hosted purpose-built vector database. Polished SDKs, fast indexing, predictable latency. Right call when you have hundreds of millions of vectors, billing is not the constraint, and you want to delete the operations problem.
Weaviate
Pinecone's open-source equal
Open-source, self-hostable, hybrid search (vector + keyword). Strong on multi-modal and multi-tenant. Right call for teams that want Pinecone capability with the option to self-host.
Qdrant
Rust-fast, simple API
Open-source, written in Rust, simpler API than Weaviate, strong filtering. Cloud option available. Right call when latency at scale is the constraint and the team values operational simplicity.
Chroma
Embeddable, prototype-first
In-process vector database. Run it embedded in your app, like SQLite for vectors. Right call for prototypes, single-tenant tools, and small RAG workloads where adding any service is overhead.
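To illustrate what "embedded, like SQLite for vectors" means in practice, here is a brute-force in-process store in plain Python. This is not Chroma's API; it is a sketch of the idea that at prototype scale, a vector store can be an in-memory list plus a similarity function, with no server at all.

```python
import math


class TinyVectorStore:
    """In-process vector store sketch: brute-force cosine similarity.

    Illustrative only -- a real embedded store (Chroma, LanceDB)
    adds persistence and an ANN index, but the shape is the same.
    """

    def __init__(self):
        self._items = []  # (id, vector) pairs held in memory

    def add(self, doc_id, vector):
        self._items.append((doc_id, vector))

    def query(self, vector, n_results=3):
        """Return the n_results most similar (id, score) pairs."""
        def cosine(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            return dot / (math.hypot(*a) * math.hypot(*b))

        scored = [(doc_id, cosine(vector, v)) for doc_id, v in self._items]
        scored.sort(key=lambda pair: pair[1], reverse=True)
        return scored[:n_results]


store = TinyVectorStore()
store.add("a", [1.0, 0.0])
store.add("b", [0.0, 1.0])
store.add("c", [0.7, 0.7])
top = store.query([1.0, 0.0], n_results=2)  # "a" first, then "c"
```

Brute force is O(n) per query, which is exactly why it stops being viable past small collections and the purpose-built options above exist.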
Turbopuffer / LanceDB
Object-store-backed, pay per query
Newer entrants that store vectors in S3-class object storage. Cheaper at idle (pay per query rather than for an always-on cluster). Right call for sparse-traffic workloads where the other options would sit mostly idle and still bill.
The decision in one sentence
Start with pgvector if you are already on Postgres — it handles most workloads under 50M vectors and the operational simplicity is worth real performance trade-offs. Move to Qdrant or Weaviate when latency at scale becomes the constraint and you want open source. Pick Pinecone when budget is not the constraint and you want to delete operations entirely. Pick Chroma for prototypes and single-tenant tools. Look at Turbopuffer / LanceDB for sparse-traffic workloads where always-on cluster cost is the bottleneck.
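The decision above can be sketched as a rule chain. The thresholds and orderings are the article's rough heuristics (the 50M figure is a rule of thumb, not a hard limit), and the parameter names are invented for illustration.

```python
def pick_vector_store(
    on_postgres: bool,
    vectors: int,
    latency_critical: bool = False,
    sparse_traffic: bool = False,
    prototype: bool = False,
    budget_constrained: bool = True,
) -> str:
    """Encode the article's heuristic as a first-match rule chain.

    Sketch only: real decisions also weigh team skills, compliance,
    and existing infrastructure.
    """
    if prototype:
        return "Chroma"                      # embedded, zero services
    if sparse_traffic:
        return "Turbopuffer / LanceDB"       # pay per query, cheap at idle
    if on_postgres and vectors < 50_000_000 and not latency_critical:
        return "pgvector"                    # no new service, no sync
    if not budget_constrained:
        return "Pinecone"                    # delete the operations problem
    if latency_critical:
        return "Qdrant or Weaviate"          # open source, fast at scale
    return "pgvector" if on_postgres else "Qdrant or Weaviate"


print(pick_vector_store(on_postgres=True, vectors=1_000_000))  # pgvector
```

The rule order matters: prototype and sparse-traffic shapes short-circuit everything else, mirroring how the article treats them as distinct workload classes rather than points on a scale axis.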
The supporting comparisons
- Vector databases 2026: pgvector, Pinecone, Weaviate, Qdrant, Chroma. The pillar comparison post — workload shape, pricing, latency, when to pick each.
- Search infrastructure 2026: Algolia, Typesense, Meilisearch, Pagefind. The adjacent decision — keyword search alongside vector. Often the same site needs both.
- Serverless databases 2026: Supabase, Neon, PlanetScale, Turso, Convex. Where pgvector lives in production — Supabase ships it by default.
The full directory of 12 vector databases
This hub is the editorial top-5. The full directory at /vector-databases/ covers 12 vector stores filterable by category (embedded / managed-SaaS / self-hosted / multi-model), engine, pricing, plus hybrid-search and edge-ready flags — including the niche options the top-5 cuts: Milvus, Vespa, LanceDB, Turbopuffer, Marqo, MongoDB Atlas Vector Search, Astra DB Vector.