gallery

A gallery that showcases on-device ML/GenAI use cases and allows people to try and use models locally.


Google AI Edge Gallery ✨


Explore, Experience, and Evaluate the Future of On-Device Generative AI with Google AI Edge.

AI Edge Gallery is the premier destination for running the world's most powerful open-source Large Language Models (LLMs) on your mobile device. Experience high-performance Generative AI directly on your hardware—fully offline, private, and lightning-fast.

Now Featuring: Gemma 4

The latest version brings official support for the newly released Gemma 4 family. As the centerpiece of this release, Gemma 4 allows you to test the cutting edge of on-device AI. Experience advanced reasoning, logic, and creative capabilities without ever sending your data to a server.

Install the app today from Google Play or the App Store.

For users without Google Play access, install the APK from the latest release.

App Preview


✨ Core Features

  • Agent Skills: Transform your LLM from a conversationalist into a proactive assistant. Use the Agent Skills tile to augment model capabilities with tools like Wikipedia for fact-grounding, interactive maps, and rich visual summary cards. You can even load modular skills from a URL or browse community contributions on GitHub Discussions.

  • AI Chat with Thinking Mode: Engage in fluid, multi-turn conversations and toggle the new Thinking Mode to peek "under the hood." This feature allows you to see the model’s step-by-step reasoning process, which is perfect for understanding complex problem-solving. Note: Thinking Mode currently works with supported models, starting with the Gemma 4 family.

  • Ask Image: Use multimodal power to identify objects, solve visual puzzles, or get detailed descriptions using your device’s camera or photo gallery.

  • Audio Scribe: Transcribe and translate voice recordings into text in real-time using high-efficiency on-device language models.

  • Prompt Lab: A dedicated workspace to test different prompts and single-turn use cases with granular control over model parameters like temperature and top-k.

  • Mobile Actions: Unlock offline device controls and automated tasks powered entirely by a finetune of FunctionGemma 270m.

  • Tiny Garden: A fun, experimental mini-game that uses natural language to plant and harvest a virtual garden using a finetune of FunctionGemma 270m.

  • Model Management & Benchmark: Gallery is a flexible sandbox for a wide variety of open-source models. Easily download models from the list or load your own custom models. Manage your model library effortlessly and run benchmark tests to understand exactly how each model performs on your specific hardware.

  • 100% On-Device Privacy: All model inferences happen directly on your device hardware. No internet is required, ensuring total privacy for your prompts, images, and sensitive data.
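The temperature and top-k parameters that Prompt Lab exposes can be sketched in a few lines. This is an illustrative Python sampler, not the app's actual implementation: top-k truncates the candidate pool to the k highest-scoring tokens, and temperature rescales logits before softmax (below 1.0 sharpens the distribution, above 1.0 flattens it).

```python
import math
import random

def sample_next_token(logits, temperature=0.8, top_k=40):
    """Pick a next-token id from raw logits using temperature + top-k.
    Illustrative sketch of the two Prompt Lab knobs, not the app's code."""
    # Keep only the k highest-scoring tokens.
    top = sorted(enumerate(logits), key=lambda p: p[1], reverse=True)[:top_k]
    # Temperature rescales logits: <1.0 sharpens, >1.0 flattens.
    scaled = [score / temperature for _, score in top]
    m = max(scaled)
    weights = [math.exp(s - m) for s in scaled]  # numerically stable softmax
    # Draw proportionally to the softmax weights.
    r = random.random() * sum(weights)
    for (token_id, _), w in zip(top, weights):
        r -= w
        if r <= 0:
            return token_id
    return top[-1][0]
```

With `top_k=1` this degenerates to greedy decoding (always the argmax), which is a handy sanity check when experimenting.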

🏁 Get Started in Minutes!

  1. Check OS Requirement: Android 12 and up, or iOS 17 and up.
  2. Download the App: from Google Play, the App Store, or the APK in the latest release.
  3. Install & Explore: For detailed installation instructions (including for corporate devices) and a full user guide, head over to our Project Wiki!

🛠️ Technology Highlights

  • Google AI Edge: Core APIs and tools for on-device ML.
  • LiteRT: Lightweight runtime for optimized model execution.
  • Hugging Face Integration: For model discovery and download.

⌨️ Development

Check out the development notes for instructions about how to build the app locally.

🤝 Feedback

This is an experimental Beta release, and your input is crucial!

📄 License

Licensed under the Apache License, Version 2.0. See the LICENSE file for details.

🔗 Useful Links

LiteRT-LM

LiteRT-LM is Google's production-ready, high-performance, open-source inference framework for deploying Large Language Models on edge devices.

🔗 Product Website

🔥 What's New: Gemma 4 support with LiteRT-LM

Deploy Gemma 4 across a broad range of hardware with stellar performance (blog).

👉 Try on Linux, macOS, Windows (WSL) or Raspberry Pi with the LiteRT-LM CLI:

litert-lm run \
   --from-huggingface-repo=litert-community/gemma-4-E2B-it-litert-lm \
   gemma-4-E2B-it.litertlm \
   --prompt="What is the capital of France?"

🌟 Key Features

  • 📱 Cross-Platform Support: Android, iOS, Web, Desktop, and IoT (e.g. Raspberry Pi).
  • 🚀 Hardware Acceleration: Peak performance via GPU and NPU accelerators.
  • 👁️ Multi-Modality: Support for vision and audio inputs.
  • 🔧 Tool Use: Function calling support for agentic workflows.
  • 📚 Broad Model Support: Gemma, Llama, Phi-4, Qwen, and more.
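Function calling ("tool use") follows a simple loop: the model emits a structured call, the host executes the matching function, and the result is fed back so the model can phrase the final answer. The Python sketch below illustrates only the pattern; all names are hypothetical and this is not LiteRT-LM's actual Tool Use API.

```python
import json

# Hypothetical tool registry: maps a tool name the model may emit
# to a local function. (Illustrative only, not the LiteRT-LM API.)
TOOLS = {
    "get_weather": lambda city: {"city": city, "temp_c": 21},
}

def handle_model_output(output: str) -> str:
    """If the model emitted a JSON tool call, execute it; otherwise pass through."""
    try:
        call = json.loads(output)
    except json.JSONDecodeError:
        return output  # plain-text answer, no tool needed
    if not isinstance(call, dict) or "name" not in call:
        return output  # valid JSON but not a tool call
    result = TOOLS[call["name"]](**call["arguments"])
    # A real agent loop would append this result to the conversation
    # and invoke the model again to phrase the final answer.
    return json.dumps(result)

print(handle_model_output('{"name": "get_weather", "arguments": {"city": "Paris"}}'))
# → {"city": "Paris", "temp_c": 21}
```

The key design point is the round trip: the runtime never answers for the model; it only executes the call and hands the structured result back.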


🚀 Production-Ready for Google's Products

LiteRT-LM powers on-device GenAI experiences in Chrome, Chromebook Plus, Pixel Watch, and more.

You can also try the Google AI Edge Gallery app to run models immediately on your device.

Install the app today from Google Play or the App Store.

📰 Blogs & Announcements

  • Bring state-of-the-art agentic skills to the edge with Gemma 4: Deploy Gemma 4 in-app and across a broader range of devices with stellar performance and broad reach using LiteRT-LM.
  • On-device GenAI in Chrome, Chromebook Plus and Pixel Watch: Deploy language models on wearables and browser-based platforms using LiteRT-LM at scale.
  • On-device Function Calling in Google AI Edge Gallery: Explore how to fine-tune FunctionGemma and enable function calling capabilities powered by LiteRT-LM Tool Use APIs.
  • Google AI Edge small language models, multimodality, and function calling: Latest insights on RAG, multimodality, and function calling for edge language models.

🏃 Quick Start

🔗 Key Links

⚡ Quick Try (No Code)

Try LiteRT-LM immediately from your terminal without writing a single line of code using uv:

uv tool install litert-lm

litert-lm run \
  --from-huggingface-repo=google/gemma-3n-E2B-it-litert-lm \
  gemma-3n-E2B-it-int4 \
  --prompt="What is the capital of France?"

📚 Supported Language APIs

Ready to get started? Explore our language-specific guides and setup instructions.

Language | Status | Best For... | Documentation
Kotlin | ✅ Stable | Android apps & JVM | Android (Kotlin) Guide
Python | ✅ Stable | Prototyping & scripting | Python Guide
C++ | ✅ Stable | High-performance native | C++ Guide
Swift | 🚀 In Dev | Native iOS & macOS | (Coming Soon)

🏗️ Build From Source

This guide shows how you can compile LiteRT-LM from source.


📦 Releases

  • v0.9.0: Improvements to function calling capabilities, better app performance stability.
  • v0.8.0: Desktop GPU support and Multi-Modality.
  • v0.7.0: NPU acceleration for Gemma models.

For a full list of releases, see GitHub Releases.


pi-mono

AI agent toolkit: coding agent CLI, unified LLM API, TUI & web UI libraries, Slack bot, vLLM pods


🏖️ OSS Weekend

Issue tracker reopens Monday, April 13, 2026.

OSS weekend runs Thursday, April 2, 2026 through Monday, April 13, 2026. New issues and PRs from unapproved contributors are auto-closed during this time. Approved contributors can still open issues and PRs if something is genuinely urgent, but please keep that to pressing matters only. For support, join Discord.

Current focus: at the moment I'm deep in refactoring internals and need to focus.


pi.dev domain graciously donated by exe.dev

Pi Monorepo

Looking for the pi coding agent? See packages/coding-agent for installation and usage.

Tools for building AI agents and managing LLM deployments.

Packages

  • @mariozechner/pi-ai: Unified multi-provider LLM API (OpenAI, Anthropic, Google, etc.)
  • @mariozechner/pi-agent-core: Agent runtime with tool calling and state management
  • @mariozechner/pi-coding-agent: Interactive coding agent CLI
  • @mariozechner/pi-mom: Slack bot that delegates messages to the pi coding agent
  • @mariozechner/pi-tui: Terminal UI library with differential rendering
  • @mariozechner/pi-web-ui: Web components for AI chat interfaces
  • @mariozechner/pi-pods: CLI for managing vLLM deployments on GPU pods

Contributing

See CONTRIBUTING.md for contribution guidelines and AGENTS.md for project-specific rules (for both humans and agents).

Development

npm install          # Install all dependencies
npm run build        # Build all packages
npm run check        # Lint, format, and type check
./test.sh            # Run tests (skips LLM-dependent tests without API keys)
./pi-test.sh         # Run pi from sources (can be run from any directory)

Note: npm run check requires npm run build to be run first. The web-ui package uses tsc which needs compiled .d.ts files from dependencies.

License

MIT
