This commit introduces a new "Toggle Server" feature that runs a local HTTP server on the device. This allows developers and researchers to interact with the on-device AI models using `curl`, with all communication tunneled exclusively over the USB cable. The server can handle multipart/form-data requests, allowing users to send a prompt, an image, or both. This provides a powerful new way to test, debug, and integrate the on-device models.
# Google AI Edge Gallery ✨
Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.
The Google AI Edge Gallery is an experimental app that puts the power of cutting-edge Generative AI models directly into your hands, running entirely on your Android (available now) and iOS (coming soon) devices. Dive into a world of creative and practical AI use cases, all running locally, without needing an internet connection once the model is loaded. Experiment with different models, chat, ask questions with images, explore prompts, and more!
## Overview

Demo previews: Ask Image, Prompt Lab, AI Chat.
## 🔌 Toggle Server
The "Toggle Server" feature runs a local HTTP server on your mobile device that allows you to interact with the on-device AI models from your laptop using curl
, with all communication tunneled exclusively over a USB cable connection.
### Usage

1. **Enable USB Debugging:**
   - Follow these steps to enable ADB port forwarding between your device and computer.
2. **Connect Device to Computer & Enable Port Forwarding:**
   ```bash
   adb -d forward tcp:8080 tcp:8080
   ```
3. **Start the Server in the App:**
   - Navigate to the "Toggle Server" screen.
   - Tap the "Start In-App Server" button.
4. **Send Requests with `curl`:**
   - Prompt only:
     ```bash
     curl -X POST -F "prompt=Hello, world!" http://localhost:8080
     ```
   - Image and prompt:
     ```bash
     curl -X POST -F "prompt=What is in this image?" -F "image=@/path/to/your/image.jpg" http://localhost:8080
     ```
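For context on the device side, here is a minimal sketch of a multipart-capable handler that could serve the requests above. It assumes a NanoHTTPD-style embedded server, and `runModel()` is a hypothetical stand-in for the app's actual on-device inference call; the Gallery's real server wiring may differ.

```kotlin
import fi.iki.elonen.NanoHTTPD
import fi.iki.elonen.NanoHTTPD.IHTTPSession
import fi.iki.elonen.NanoHTTPD.Method
import fi.iki.elonen.NanoHTTPD.Response
import fi.iki.elonen.NanoHTTPD.newFixedLengthResponse
import java.io.File

// Sketch of an embedded HTTP server that accepts the same multipart fields
// the curl examples send ("prompt", and optionally "image").
// Assumes the NanoHTTPD library; runModel() is a hypothetical stand-in for
// the app's actual on-device inference call.
class ToggleServer(port: Int = 8080) : NanoHTTPD(port) {

    override fun serve(session: IHTTPSession): Response {
        if (session.method != Method.POST) {
            return newFixedLengthResponse(
                Response.Status.METHOD_NOT_ALLOWED, "text/plain", "POST only\n"
            )
        }

        // parseBody() writes uploaded files to temporary paths (collected in
        // `files`) and exposes plain form fields through session.parms.
        val files = HashMap<String, String>()
        session.parseBody(files)

        val prompt = session.parms["prompt"] ?: ""
        val imageBytes = files["image"]?.let { File(it).readBytes() }

        val answer = runModel(prompt, imageBytes)
        return newFixedLengthResponse(Response.Status.OK, "text/plain", answer + "\n")
    }

    // Hypothetical hook: forward the prompt (and optional image) to the
    // on-device model and return its text response.
    private fun runModel(prompt: String, image: ByteArray?): String =
        "echo: $prompt (image attached: ${image != null})"
}

// Started from the app, e.g. when "Start In-App Server" is tapped:
//   ToggleServer(8080).start(NanoHTTPD.SOCKET_READ_TIMEOUT, false)
```

With the `adb forward` rule above in place, the `curl` commands on your laptop reach this handler at `http://localhost:8080`.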
## ✨ Core Features
- 📱 Run Locally, Fully Offline: Experience the magic of GenAI without an internet connection. All processing happens directly on your device.
- 🤖 Choose Your Model: Easily switch between different models from Hugging Face and compare their performance.
- 🖼️ Ask Image: Upload an image and ask questions about it. Get descriptions, solve problems, or identify objects.
- ✍️ Prompt Lab: Summarize, rewrite, generate code, or use freeform prompts to explore single-turn LLM use cases.
- 💬 AI Chat: Engage in multi-turn conversations.
- 📊 Performance Insights: Real-time benchmarks (TTFT, decode speed, latency); see the sketch after this list for how these metrics are typically derived.
- 🧩 Bring Your Own Model: Test your local LiteRT `.task` models.
- 🔗 Developer Resources: Quick links to model cards and source code.
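To make the performance numbers concrete, here is an illustrative Kotlin sketch of how TTFT (time to first token) and decode speed can be derived from timestamps around a streaming generation call. This is not the Gallery's actual benchmarking code; `generate` simply wraps whatever streaming API the model exposes and reports each token through a callback.

```kotlin
// Illustrative only: deriving TTFT and decode speed from timestamps around a
// streamed generation call. Assumes the model streams at least one token.
data class GenStats(val ttftMs: Long, val decodeTokensPerSec: Double, val totalMs: Long)

fun measureGeneration(generate: (onToken: (String) -> Unit) -> Unit): GenStats {
    val start = System.nanoTime()
    var firstTokenAt = start
    var tokens = 0

    generate { _ ->                        // invoked once per streamed token/chunk
        if (tokens == 0) firstTokenAt = System.nanoTime()
        tokens++
    }

    val end = System.nanoTime()
    val ttftMs = (firstTokenAt - start) / 1_000_000
    val decodeSeconds = (end - firstTokenAt) / 1e9
    // Decode speed counts tokens produced after the first one (prefill).
    val tps = if (tokens > 1 && decodeSeconds > 0) (tokens - 1) / decodeSeconds else 0.0
    return GenStats(ttftMs, tps, totalMs = (end - start) / 1_000_000)
}
```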
## 🏁 Get Started in Minutes!
- Download the App: Grab the latest APK.
- Install & Explore: For detailed installation instructions (including for corporate devices) and a full user guide, head over to our Project Wiki!
## 🛠️ Technology Highlights
- Google AI Edge: Core APIs and tools for on-device ML.
- LiteRT: Lightweight runtime for optimized model execution.
- LLM Inference API: Powering on-device Large Language Models (sketched below).
- Hugging Face Integration: For model discovery and download.
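For a sense of what the LLM Inference API looks like from Kotlin, here is a minimal single-turn sketch. It assumes the MediaPipe `LlmInference` task API as documented for Google AI Edge; the model path and option values are placeholders, and exact option names can vary between releases.

```kotlin
import android.content.Context
import com.google.mediapipe.tasks.genai.llminference.LlmInference

// Minimal single-turn text generation with the LLM Inference API.
// The model path and option values are placeholders; a real app points at a
// LiteRT .task model it has downloaded or bundled.
fun runLocalLlm(context: Context, prompt: String): String {
    val options = LlmInference.LlmInferenceOptions.builder()
        .setModelPath("/data/local/tmp/llm/model.task")  // placeholder path
        .setMaxTokens(512)
        .build()

    val llm = LlmInference.createFromOptions(context, options)
    try {
        // Blocking call; an async variant that streams partial results also exists.
        return llm.generateResponse(prompt)
    } finally {
        llm.close()
    }
}
```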
## 🤝 Feedback
This is an experimental Alpha release, and your input is crucial!
- 🐞 Found a bug? Report it here!
- 💡 Have an idea? Suggest a feature!
## 📄 License
Licensed under the Apache License, Version 2.0. See the LICENSE file for details.