Google AI Edge Gallery brings powerful generative AI directly to mobile devices without requiring any internet connection or cloud service. Developed by Google as an open-source showcase, the app lets users download models from the Gemma family and run multi-turn conversations, analyze images, transcribe audio, and execute multi-step agent workflows entirely on their phone's hardware. All inference happens locally, meaning prompts, images, and personal data never leave the device.
The app includes several distinct capabilities: AI Chat with Thinking Mode reveals the model's step-by-step reasoning during conversations; Ask Image provides multimodal analysis using the device camera for object detection and visual question answering; Audio Scribe handles real-time speech-to-text transcription and translation without cloud APIs; and Prompt Lab gives developers fine-grained control over decoding parameters like temperature and top-k sampling. The Agent Skills feature enables autonomous, multi-step workflows that can query Wikipedia, look up locations, and chain tool calls together, all on-device.
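To make the Prompt Lab knobs concrete, here is a minimal, illustrative sketch of what temperature and top-k actually do during decoding. This is not the app's code; `sample_next_token` and the example logits are hypothetical, and real runtimes apply the same idea over a vocabulary of thousands of tokens.

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=40, rng=None):
    """Pick a next-token id from raw logits using temperature
    scaling and top-k filtering (the knobs Prompt Lab exposes).
    Illustrative only; `logits` is a plain list of float scores."""
    rng = rng or random.Random()
    # Keep only the k highest-scoring candidate tokens.
    top = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:top_k]
    # Temperature < 1 sharpens the distribution; > 1 flattens it.
    scaled = [score / temperature for _, score in top]
    # Numerically stable softmax over the surviving candidates.
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Sample one candidate according to the resulting probabilities.
    idx = rng.choices(range(len(top)), weights=probs, k=1)[0]
    return top[idx][0]
```

With `top_k=1` this degenerates to greedy decoding (always the highest-scoring token), which is why low top-k plus low temperature makes output more deterministic, while higher values make it more varied.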
Built on Google's MediaPipe framework and LiteRT runtime (formerly TensorFlow Lite), the gallery supports loading custom models from Hugging Face and includes hardware benchmarking to compare model performance across different devices. Licensed under Apache 2.0, it serves both as a polished consumer app—reaching the App Store top 10—and as a production-ready reference for developers building privacy-first mobile AI applications.
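The hardware benchmarking mentioned above boils down to measuring decode throughput on a given device. The sketch below shows one plausible way to compute a tokens-per-second figure; `generate_fn` is a hypothetical stand-in for whatever runtime call produces text plus a token count, not the gallery's actual benchmarking code.

```python
import time

def benchmark_tokens_per_second(generate_fn, prompt, runs=3):
    """Rough throughput benchmark: time several generation runs and
    report the mean decoded tokens per second. `generate_fn(prompt)`
    is assumed to return a (text, token_count) pair."""
    rates = []
    for _ in range(runs):
        start = time.perf_counter()
        _text, token_count = generate_fn(prompt)
        elapsed = time.perf_counter() - start
        rates.append(token_count / elapsed)
    # Average across runs to smooth out thermal and scheduling noise.
    return sum(rates) / len(rates)
```

Averaging over multiple runs matters on phones, where thermal throttling can cut throughput noticeably between the first and later runs, which is exactly the kind of variance a cross-device comparison needs to surface.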