7 tools tagged
Open-source AI coding agent for the terminal
AI coding agent from Sourcegraph with full codebase awareness through the company's code intelligence platform. Handles complex multi-file tasks by understanding code relationships, dependencies, and architectural patterns across entire repositories. Particularly strong for large enterprise codebases where cross-repository context is critical for accurate code generation.
Open-source AI second brain with deep research and RAG
Khoj is an open-source personal AI app that serves as a self-hostable second brain. It connects to your documents — PDFs, Markdown, Notion, Word — and uses RAG to answer questions grounded in your knowledge base. Supports any local or cloud LLM including Llama, Claude, GPT, and Gemini. Features custom agents, scheduled automations, deep research mode, semantic search, and Obsidian, Emacs, and WhatsApp integrations. Over 33,000 GitHub stars, YC-backed.
All-in-one LLM CLI tool with shell assistant, RAG, and function calling
AI coding agent focused on autonomous test generation and quality assurance. Analyzes your codebase to identify untested paths, generate comprehensive test suites, and suggest edge cases you might have missed. Integrates with existing CI pipelines to continuously improve test coverage, helping teams ship more reliable code without the overhead of writing every test by hand.
Natural language interface for running code on your computer
Open-source AI coding assistant with a focus on privacy and local-first execution. Runs models on your machine using Ollama or connects to cloud providers when needed. Integrates with VS Code, providing code completion, chat, and inline editing. Targets developers who want full AI coding capabilities without sharing proprietary code with third-party services, offering complete data sovereignty.
Run LLMs locally with one command
Tool for running large language models locally on your machine with a simple CLI interface. Download and run Llama 3, Mistral, Gemma, Phi, Code Llama, and dozens of other open-source models with a single command. Features model management, GPU acceleration (NVIDIA/AMD/Apple Silicon), OpenAI-compatible API server, Modelfile for customization, and multi-model switching. Ideal for offline AI development, privacy-sensitive use cases, and local testing. 120K+ GitHub stars.
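The Modelfile customization mentioned above can be sketched as follows; the base model name, parameter values, and system prompt here are illustrative, not a recommended setup:

```
# Modelfile: derive a customized model from a locally pulled base model
FROM llama3

# Sampling parameters (illustrative values)
PARAMETER temperature 0.7
PARAMETER num_ctx 4096

# System prompt baked into the derived model
SYSTEM "You are a concise assistant for code review questions."
```

A model defined this way is built with `ollama create mymodel -f Modelfile` and then run like any other model via `ollama run mymodel`.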
Unified API proxy for 100+ LLMs
Drop-in OpenAI-compatible proxy supporting 100+ LLM providers with load balancing, spend tracking, rate limiting, and fallback routing. Acts as a unified gateway for all your AI model calls, letting teams switch between providers, enforce budgets, and add reliability layers without changing application code. Essential infrastructure for multi-model AI architectures.
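A minimal sketch of what such a proxy's config can look like, using LiteLLM-style YAML; the model aliases, provider IDs, and exact fallback keys are illustrative and may differ by version:

```yaml
model_list:
  - model_name: gpt-4o                 # alias that applications request
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: claude-fallback
    litellm_params:
      model: anthropic/claude-3-5-sonnet-20240620
      api_key: os.environ/ANTHROPIC_API_KEY

router_settings:
  fallbacks:
    - gpt-4o: ["claude-fallback"]      # reroute if the primary provider fails
```

Applications then keep using their existing OpenAI client and simply point its base URL at the proxy, which is what makes provider switching possible without code changes.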
Open-source AI code assistant for any model
Open-source embedding database that runs in-memory with optional persistence. Designed for rapid prototyping and lightweight AI applications with a simple API. No server required — embed directly in your Python or JavaScript application. Ideal for developers building small to medium-scale RAG systems, semantic search features, or AI prototypes that need minimal infrastructure.
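The in-process pattern this describes can be sketched in a tool-agnostic way: store vectors in memory and answer queries by cosine similarity. This is a toy illustration of the concept, not the library's actual API; a real embedding database adds persistence, metadata filtering, and approximate-nearest-neighbor indexes.

```python
import math

class InMemoryVectorStore:
    """Toy in-process embedding store: add vectors, query by cosine similarity."""

    def __init__(self):
        self._items = []  # list of (doc_id, vector) pairs

    def add(self, doc_id, vector):
        self._items.append((doc_id, vector))

    @staticmethod
    def _cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
        return dot / norm if norm else 0.0

    def query(self, vector, top_k=1):
        # Rank all stored vectors by similarity to the query vector
        scored = [(self._cosine(vector, v), doc_id) for doc_id, v in self._items]
        scored.sort(reverse=True)
        return [doc_id for _, doc_id in scored[:top_k]]

store = InMemoryVectorStore()
store.add("doc-cats", [0.9, 0.1, 0.0])
store.add("doc-dogs", [0.1, 0.9, 0.0])
print(store.query([0.8, 0.2, 0.0]))  # → ['doc-cats']
```

In practice the vectors would come from an embedding model rather than being hand-written, but the add/query loop is the same shape a lightweight RAG prototype follows.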