React Native ExecuTorch is a declarative way to run AI models in React Native on device, powered by ExecuTorch 🚀.
ExecuTorch is a novel framework created by Meta that enables running AI models on devices such as mobile phones or microcontrollers. React Native ExecuTorch bridges the gap between React Native and native platform capabilities, allowing developers to run AI models locally on mobile devices with state-of-the-art performance, without requiring deep knowledge of native code or machine learning internals.
To run any AI model with ExecuTorch, you need to export it to the .pte format. If you're interested in experimenting with your own models, we highly encourage you to check out the Python API. If you prefer to focus on developing your React Native app, we cover several common use cases out of the box. For more details, please refer to the documentation.
Take a look at how our library can help you build React Native AI features in our docs:
https://docs.swmansion.com/react-native-executorch
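To give you a feel for the declarative API, here is a minimal sketch of an on-device LLM integration. The hook name (useLLM), its options, and the exported model constant below are illustrative assumptions; please check the docs above for the exact, current API.

// Minimal sketch only; useLLM, LLAMA3_2_1B, modelSource and the returned
// fields are assumptions for illustration, not the definitive API.
import React from 'react';
import { Button, Text, View } from 'react-native';
import { useLLM, LLAMA3_2_1B } from 'react-native-executorch';

export function ChatScreen() {
  // The hook handles downloading and loading the .pte model, then
  // exposes the generation state declaratively.
  const llama = useLLM({ modelSource: LLAMA3_2_1B });

  if (!llama.isReady) {
    return <Text>Loading model...</Text>;
  }

  return (
    <View>
      <Text>{llama.response}</Text>
      {/* Inference runs fully on device. */}
      <Button title="Ask" onPress={() => llama.generate('What is ExecuTorch?')} />
    </View>
  );
}

Because the hook owns the model lifecycle, the component simply re-renders as llama.response updates, with no imperative bridging code.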
The minimum supported versions are iOS 17.0 and Android 13.
We currently host a single example demonstrating a chat app built with the latest Llama 3.2 1B/3B models. If you'd like to run it, navigate to examples/llama from the repository root and install the dependencies with:
yarn
then install the iOS native dependencies:
cd ios
pod install
cd ..
And finally, if you want to run on Android:
yarn expo run:android
or iOS:
yarn expo run:ios
Running LLMs requires a significant amount of RAM, so if you are encountering unexpected app crashes, try increasing the amount of RAM allocated to the emulator.
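On an Android emulator, one common way to do this (assuming the default AVD layout) is to raise hw.ramSize in the virtual device's config.ini, for example:

hw.ramSize=4096

The same setting is also available in Android Studio's Device Manager under the virtual device's advanced settings.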
This library is licensed under The MIT License.
To learn about our upcoming plans and developments, please visit our discussion page.
Founded in 2012, Software Mansion is a software agency with experience in building web and mobile apps. We are Core React Native Contributors and experts in dealing with all kinds of React Native issues. We can help you build your next dream product – Hire us.