React Native ExecuTorch is a declarative framework that brings on-device AI inference to React Native applications, leveraging Meta's ExecuTorch runtime for efficient model execution on mobile devices. Maintained by Software Mansion, a respected React Native core contributor, the library emphasizes high code quality and alignment with React Native best practices. ExecuTorch is Meta's framework for running AI models efficiently on edge devices, and this binding brings that capability to cross-platform mobile development.
The framework supports a broad range of AI workloads critical to modern mobile applications, including large language models (with pre-built Llama 3.2 support), computer vision tasks such as object detection and semantic segmentation, OCR for document scanning, text and image embeddings for semantic search, and vision-language models for multimodal reasoning. All inference happens on-device, enabling privacy-first applications where user data never leaves the phone, which is essential for health and financial use cases.
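To make the LLM workload concrete, the sketch below shows the shape of a hook-driven chat integration. The `LLMController` interface is a local stub standing in for whatever the library's LLM hook actually returns (its real name and fields are not confirmed here), and the chat-markup format in `buildPrompt` is a generic illustration, not the exact Llama 3.2 template.

```typescript
// Assumed shape of an LLM hook's return value (local stub for
// illustration; not the verified react-native-executorch API).
interface LLMController {
  generate: (prompt: string) => Promise<string>; // run on-device inference
  isGenerating: boolean;                         // true while a response streams
  response: string;                              // last generated text
}

// A chat turn in the conversation history.
type Message = { role: 'user' | 'assistant'; content: string };

// Pure helper: flatten a chat history into a single prompt string using
// a generic role-tag markup (illustrative, not the real Llama template).
function buildPrompt(history: Message[]): string {
  return (
    history.map((m) => `<|${m.role}|>\n${m.content}`).join('\n') +
    '\n<|assistant|>\n'
  );
}
```

A screen component would then call something like `controller.generate(buildPrompt(history))` on submit and render `controller.response` as it updates; the important point is that the entire generate step executes on the device, with no network round trip.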
Targeting iOS 17+ and Android 13+ with the React Native New Architecture, the library positions developers at the forefront of mobile AI capability. Devices from the last two to three years can run meaningful AI workloads without server dependencies. For mobile developers building AI-powered features such as voice assistants, on-device search, document analysis, and real-time vision tasks, React Native ExecuTorch eliminates native module complexity and cloud API dependencies, reducing latency while improving privacy.
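The on-device search use case reduces to comparing embedding vectors locally. Assuming the library's text-embedding support yields plain `number[]` vectors (an assumption for this sketch, not a documented return type), the ranking step is ordinary cosine similarity and needs no cloud API:

```typescript
// Cosine similarity between two equal-length embedding vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0;
  let normA = 0;
  let normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank candidate document embeddings against a query embedding, best first.
// `vec` would come from an on-device embedding model; here it is just data.
function rankBySimilarity(
  query: number[],
  docs: { id: string; vec: number[] }[],
): { id: string; score: number }[] {
  return docs
    .map((d) => ({ id: d.id, score: cosineSimilarity(query, d.vec) }))
    .sort((x, y) => y.score - x.score);
}
```

Because both the embedding pass and this ranking run locally, a search over private notes or documents never transmits their contents off the phone.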