Production RAG engine with hybrid search and knowledge graphs
R2R is a production-grade RAG engine from SciPhi AI that combines hybrid search with knowledge graph extraction and agentic retrieval capabilities. It provides a complete pipeline from document ingestion through retrieval and generation, supporting vector, keyword, and graph-based search strategies. The managed API and self-hosted options make it accessible for both rapid prototyping and production deployments requiring advanced retrieval beyond simple vector similarity.
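Hybrid search of the kind R2R offers merges ranked results from vector and keyword retrieval into one list. A common fusion technique for this (illustrative only, not necessarily R2R's exact implementation) is Reciprocal Rank Fusion; a minimal stdlib-Python sketch with hypothetical document IDs:

```python
# Reciprocal Rank Fusion (RRF): merge ranked result lists by summing
# 1 / (k + rank) per list; k=60 is the constant from the original RRF paper.
def rrf_fuse(rankings, k=60):
    scores = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest fused score first.
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["doc_a", "doc_b", "doc_c"]   # from vector similarity search
keyword_hits = ["doc_b", "doc_d", "doc_a"]  # from keyword (BM25) search
fused = rrf_fuse([vector_hits, keyword_hits])
```

Documents that appear near the top of both lists (here `doc_b` and `doc_a`) rise above documents that rank highly in only one.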
Elasticsearch-quality full-text and hybrid search inside Postgres
ParadeDB brings Elasticsearch-quality full-text search, BM25 ranking, and hybrid vector-keyword search directly into PostgreSQL as native extensions. Backed by a $12M Series A and counting over 500,000 Docker deployments, it eliminates the overhead of running separate search infrastructure. Teams get powerful search within their existing Postgres stack without managing additional clusters.
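BM25, the ranking function ParadeDB implements, scores a document by combining inverse document frequency with saturated term frequency and length normalization. A toy stdlib-Python illustration of the formula (not ParadeDB's SQL API):

```python
import math

def bm25_score(query_terms, doc, corpus, k1=1.2, b=0.75):
    # Toy BM25: `corpus` is a list of tokenized documents, `doc` one of them.
    N = len(corpus)
    avgdl = sum(len(d) for d in corpus) / N
    score = 0.0
    for term in query_terms:
        df = sum(1 for d in corpus if term in d)          # document frequency
        idf = math.log((N - df + 0.5) / (df + 0.5) + 1)    # smoothed IDF
        tf = doc.count(term)                               # term frequency
        # Saturating TF with length normalization.
        score += idf * tf * (k1 + 1) / (tf + k1 * (1 - b + b * len(doc) / avgdl))
    return score

corpus = [["postgres", "search", "engine"],
          ["vector", "database"],
          ["postgres", "extension"]]
s = bm25_score(["postgres"], corpus[0], corpus)
```

A document that lacks the term scores zero for it; repeated occurrences raise the score with diminishing returns, which is what distinguishes BM25 from raw term counts.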
Serverless vector and full-text search on object storage
turbopuffer is a serverless vector and full-text search engine built on object storage that delivers 10x lower costs than traditional vector databases. Used by Anthropic, Cursor, Notion, and Atlassian for production search workloads. Manages 2+ trillion vectors across 8+ petabytes with automatic scaling and no infrastructure management. Funded by Thrive Capital.
Real-time web search and retrieval via MCP
Exa MCP Server provides AI coding agents with real-time web search and content crawling capabilities through the Model Context Protocol. It leverages Exa's neural search API for semantic understanding of queries, returning clean, structured results with full page content extraction. Supports both remote hosted MCP endpoints and local client configurations.
Web scraping and crawling via MCP for AI agents
Firecrawl MCP Server is the official MCP integration for Firecrawl that gives AI coding agents web scraping, crawling, search, and structured data extraction capabilities. It supports batch operations, deep research mode, and agent-friendly extraction with configurable output formats across multiple AI client environments.
Fully managed vector database built for AI applications at production scale.
Pinecone is the leading managed vector database designed for high-performance similarity search at scale. Purpose-built for AI applications including RAG, recommendation systems, and semantic search. Offers serverless and pod-based architectures with automatic scaling, filtering, and namespacing. No infrastructure management required.
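At its core, the similarity search a managed vector database like Pinecone performs ranks stored embeddings by closeness to a query embedding. A brute-force cosine-similarity sketch of that idea (plain Python over a hypothetical toy index, not the Pinecone client, which uses approximate indexes to scale):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of norms.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical toy index: ID -> embedding.
index = {
    "vec1": [1.0, 0.0, 0.0],
    "vec2": [0.0, 1.0, 0.0],
    "vec3": [0.9, 0.1, 0.0],
}

def query(vector, top_k=2):
    # Exhaustive scan; real vector databases use ANN structures instead.
    ranked = sorted(index, key=lambda i: cosine(index[i], vector), reverse=True)
    return ranked[:top_k]

top = query([1.0, 0.0, 0.0])
```

Brute force is O(n) per query; the point of a managed service is replacing this scan with approximate nearest-neighbor indexes plus filtering and namespacing.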
Open-source search engine — fast, typo-tolerant, easy to use.
Typesense is an open-source, typo-tolerant search engine optimized for instant search experiences. Written in C++ for maximum performance. Features built-in vector search for semantic/hybrid queries, geo-search, faceting, and curation. Popular for e-commerce search, documentation sites, and SaaS applications.
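Faceting of the kind Typesense offers groups the matching documents by a field's values and returns per-value counts alongside the hits. The idea in plain Python over hypothetical product records (Typesense computes this server-side as part of a search request):

```python
from collections import Counter

# Hypothetical product records with a facetable "brand" field.
products = [
    {"name": "Keyboard", "brand": "Logi"},
    {"name": "Mouse", "brand": "Logi"},
    {"name": "Monitor", "brand": "Dell"},
]

def facet_counts(docs, field):
    # Count how many matching documents carry each value of `field`.
    return Counter(d[field] for d in docs)

brands = facet_counts(products, "brand")
```

In a search engine these counts are computed only over documents matching the current query, which is what lets a storefront show "Logi (2), Dell (1)" refinement links.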
Lightning-fast, open-source search engine — a developer-friendly Algolia alternative.
Meilisearch is an open-source, lightning-fast search engine written in Rust. Designed as a developer-friendly alternative to Algolia with typo tolerance, faceted search, filtering, and sorting out of the box. Sub-50ms response times. Easy to deploy and configure with a RESTful API.
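Typo tolerance of the kind Meilisearch provides is typically based on bounded edit distance between query terms and indexed terms. A minimal Levenshtein-distance sketch (illustrative only, not Meilisearch's actual matcher, which uses optimized automata):

```python
def levenshtein(a, b):
    # Classic dynamic-programming edit distance, one row at a time.
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(curr[j - 1] + 1,      # insertion
                            prev[j] + 1,          # deletion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def typo_match(query, terms, max_typos=1):
    # Accept indexed terms within `max_typos` edits of the query term.
    return [t for t in terms if levenshtein(query, t) <= max_typos]

hits = typo_match("serch", ["search", "engine", "rust"])
```

Here the misspelled query `"serch"` still matches `"search"` because a single inserted letter is within the typo budget.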
High-performance vector database written in Rust for similarity search at scale.
Qdrant is a high-performance vector similarity search engine and database written in Rust. Designed for production-grade AI applications with advanced filtering, payload indexing, and distributed deployment. Supports billion-scale vector collections with sub-second query times. Popular choice for RAG, recommendation systems, and anomaly detection.
Open-source vector database for AI-native applications and semantic search.
Weaviate is an open-source vector database purpose-built for AI applications. Supports vector, keyword, and hybrid search with built-in vectorization modules for OpenAI, Cohere, Hugging Face, and more. Used for RAG pipelines, semantic search, recommendation engines, and multimodal search. Written in Go for high performance.
Distributed search and analytics engine for all types of data.
Elasticsearch is the world's most popular open-source search and analytics engine, powering search experiences for companies like Wikipedia, GitHub, Netflix, and Uber. Built on Apache Lucene, it provides near-real-time search, structured and unstructured data analysis, and machine learning capabilities. Part of the Elastic Stack (ELK), it handles log analytics, application search, security analytics, and observability at scale. Supports vector search for AI/RAG applications.
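Lucene, the library underlying Elasticsearch, is built around an inverted index that maps each term to the documents containing it, so queries intersect posting lists instead of scanning documents. A minimal sketch of that structure (plain Python, purely illustrative):

```python
from collections import defaultdict

# Hypothetical documents keyed by ID.
docs = {
    1: "the quick brown fox",
    2: "the lazy dog",
    3: "quick brown dogs bark",
}

# Build the inverted index: term -> set of doc IDs (the "posting list").
index = defaultdict(set)
for doc_id, text in docs.items():
    for term in text.lower().split():
        index[term].add(doc_id)

def search(*terms):
    # AND query: intersect the posting lists of all terms.
    postings = [index[t] for t in terms]
    return sorted(set.intersection(*postings)) if postings else []

result = search("quick", "brown")
```

Because lookups touch only the posting lists for the queried terms, search cost scales with the number of matching documents rather than the corpus size, which is what makes near-real-time search over large datasets feasible.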