This commit introduces a comprehensive set of features to transform the
AI Edge Gallery into a personalized offline chat application.
Phase 1: Core Offline Chat Functionality
- Data Structures: Defined UserProfile, Persona, ChatMessage, Conversation,
and UserDocument to model application data (a Kotlin sketch of these models
follows this list).
- DataStoreRepository: Enhanced to manage persistence for all new data
models, including encryption for UserProfile and local storage for
conversations, personas, and user documents (one possible encrypted-storage
approach is sketched after this list). Default personas are now also
localized.
- UI for Personal Information: Added a screen for users to input and
edit their CV/resume details (name, summary, skills, experience).
- Feature Removal: Streamlined the app by removing the "Ask Image" and
"Prompt Lab" features to focus on chat.
- UI for Persona Management: Implemented UI for creating, editing,
deleting, and selecting an active persona to guide AI responses.
- Core Chat Logic & UI:
- Refactored LlmChatViewModel and LlmChatScreen.
- Supports starting new conversations with an optional custom system
prompt.
- Integrates the active persona and the user profile summary into the LLM
context (see the prompt-assembly sketch after this list).
- Manages conversations (saving messages, title, timestamps,
model used, persona used).
- Conversation History UI: Added a screen to view, open, and delete
past conversations.
- Localization: Implemented localization for English and Korean for all
new user-facing strings and default personas.
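
The exact shapes of the new data models aren't spelled out in this commit
message; below is a minimal Kotlin sketch of how they might look, where every
field name, type, and default is an assumption for illustration rather than
the actual definition.

```kotlin
import java.util.UUID

// Hypothetical sketch of the Phase 1 data models; field names and defaults
// are assumptions, not the real definitions.
data class UserProfile(
    val name: String = "",
    val summary: String = "",            // short bio / CV summary
    val skills: List<String> = emptyList(),
    val experience: List<String> = emptyList(),
)

data class Persona(
    val id: String = UUID.randomUUID().toString(),
    val name: String,
    val systemPrompt: String,            // instructions injected into the LLM context
    val isDefault: Boolean = false,      // default personas ship localized
)

data class ChatMessage(
    val role: String,                    // "user" or "model"
    val content: String,
    val timestampMs: Long = System.currentTimeMillis(),
)

data class Conversation(
    val id: String = UUID.randomUUID().toString(),
    val title: String,
    val modelName: String,               // which on-device model produced replies
    val personaId: String?,              // persona active when the chat started
    val messages: List<ChatMessage> = emptyList(),
    val createdAtMs: Long = System.currentTimeMillis(),
    val updatedAtMs: Long = createdAtMs,
)

data class UserDocument(
    val id: String = UUID.randomUUID().toString(),
    val displayName: String,
    val localPath: String,               // file copied into app-private storage
    val importedAtMs: Long = System.currentTimeMillis(),
)
```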
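
For the encrypted UserProfile storage, one plausible approach is
androidx.security's EncryptedSharedPreferences wrapping a JSON-serialized
profile; the real DataStoreRepository may use Jetpack DataStore or a
different scheme, and the class, file, and key names here are assumptions.

```kotlin
import android.content.Context
import androidx.security.crypto.EncryptedSharedPreferences
import androidx.security.crypto.MasterKey
import com.google.gson.Gson

// Hypothetical sketch of encrypted UserProfile persistence; the actual
// repository may store the profile differently.
class UserProfileStore(context: Context) {

    private val gson = Gson()

    private val prefs = EncryptedSharedPreferences.create(
        context,
        "user_profile_encrypted",                     // file name (assumed)
        MasterKey.Builder(context)
            .setKeyScheme(MasterKey.KeyScheme.AES256_GCM)
            .build(),
        EncryptedSharedPreferences.PrefKeyEncryptionScheme.AES256_SIV,
        EncryptedSharedPreferences.PrefValueEncryptionScheme.AES256_GCM,
    )

    fun save(profile: UserProfile) {
        prefs.edit().putString(KEY_PROFILE, gson.toJson(profile)).apply()
    }

    fun load(): UserProfile? =
        prefs.getString(KEY_PROFILE, null)
            ?.let { gson.fromJson(it, UserProfile::class.java) }

    private companion object {
        const val KEY_PROFILE = "user_profile_json"
    }
}
```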
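
The chat logic folds the active persona and the user's profile summary into
the system prompt before a new conversation starts. A rough sketch of that
assembly is below; the function and parameter names are hypothetical, not
the actual LlmChatViewModel API.

```kotlin
// Hypothetical sketch of system-prompt assembly from the active persona,
// the user profile, and an optional custom prompt supplied when starting
// a new conversation.
fun buildSystemPrompt(
    customPrompt: String?,
    activePersona: Persona?,
    profile: UserProfile?,
): String = buildString {
    activePersona?.let { appendLine(it.systemPrompt) }
    profile?.takeIf { it.summary.isNotBlank() }?.let {
        appendLine("About the user: ${it.name}. ${it.summary}")
        if (it.skills.isNotEmpty()) appendLine("Skills: ${it.skills.joinToString()}")
    }
    customPrompt?.let { appendLine(it) }
}.trim()
```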
Phase 2: Document Handling (Started)
- UserDocument data class defined for managing imported files.
- DataStoreRepository updated to support CRUD operations for UserDocuments
(see the sketch below).
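
A sketch of what the UserDocument CRUD support could look like on top of a
Preferences DataStore, reusing the UserDocument class sketched above; the
store name, preference key, and repository class are assumptions, and the
real implementation may serialize or partition the data differently.

```kotlin
import android.content.Context
import androidx.datastore.preferences.core.edit
import androidx.datastore.preferences.core.stringPreferencesKey
import androidx.datastore.preferences.preferencesDataStore
import com.google.gson.Gson
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.map

// Hypothetical DataStore file; the real repository may use its own store.
private val Context.documentStore by preferencesDataStore(name = "user_documents")

class UserDocumentRepository(private val context: Context) {

    private val gson = Gson()
    private val key = stringPreferencesKey("user_documents_json")

    // Read: all imported documents as a Flow.
    val documents: Flow<List<UserDocument>> =
        context.documentStore.data.map { prefs -> decode(prefs[key]) }

    // Create/Update: replace any document with the same id.
    suspend fun upsert(doc: UserDocument) =
        mutate { list -> list.filterNot { it.id == doc.id } + doc }

    // Delete by id.
    suspend fun delete(id: String) =
        mutate { list -> list.filterNot { it.id == id } }

    private suspend fun mutate(transform: (List<UserDocument>) -> List<UserDocument>) {
        context.documentStore.edit { prefs ->
            prefs[key] = gson.toJson(transform(decode(prefs[key])))
        }
    }

    private fun decode(json: String?): List<UserDocument> =
        json?.let { gson.fromJson(it, Array<UserDocument>::class.java).toList() }
            ?: emptyList()
}
```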
The application now provides a personalized chat experience with features
for managing user identity, AI personas, and conversation history, all
designed for offline use. Further document handling, monetization, and
cloud sync features are planned for subsequent phases.
- Show accelerator name in chat message sender labels.
- Attach silent foreground notifications to download workers so they are
less likely to be killed (a sketch of the pattern follows this list).
- Update app icon to be consistent with Google style.
- Bump up version to 1.0.2.
- Don't automatically return to the model selection screen when model
initialization fails, so that users have a chance to change model parameters
(e.g. accelerator) and retry the initialization.
- Show error dialog properly in prompt lab screen.
- Switch back to multi-turn conversations (no exception handling yet).
- Show app's version in App Info screen.
- Show API doc link and source code link in the model list screen.
- Jump to the model's info page on Hugging Face when clicking "learn more".
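
The silent-foreground-notification change for download workers likely
follows WorkManager's standard pattern of promoting a worker to a foreground
service with a low-importance, silent notification. A sketch is below; the
worker name, channel id, notification id, and notification text are
assumptions, and the download logic is elided.

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationCompat
import androidx.work.CoroutineWorker
import androidx.work.ForegroundInfo
import androidx.work.WorkerParameters

// Hypothetical download worker that runs as a foreground service so the OS
// is less likely to kill it mid-download.
class ModelDownloadWorker(
    context: Context,
    params: WorkerParameters,
) : CoroutineWorker(context, params) {

    override suspend fun doWork(): Result {
        setForeground(createForegroundInfo())
        // ... perform the model download here ...
        return Result.success()
    }

    private fun createForegroundInfo(): ForegroundInfo {
        val manager = applicationContext
            .getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
        if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
            manager.createNotificationChannel(
                NotificationChannel(
                    CHANNEL_ID, "Model downloads", NotificationManager.IMPORTANCE_LOW
                )
            )
        }
        val notification = NotificationCompat.Builder(applicationContext, CHANNEL_ID)
            .setContentTitle("Downloading model")
            .setSmallIcon(android.R.drawable.stat_sys_download)
            .setSilent(true)   // no sound or vibration
            .setOngoing(true)
            .build()
        return ForegroundInfo(NOTIFICATION_ID, notification)
    }

    private companion object {
        const val CHANNEL_ID = "model_download"
        const val NOTIFICATION_ID = 1001
    }
}
```

The low-importance channel plus setSilent(true) keeps the notification
visible (which is what keeps the worker in the foreground) without sound or
vibration.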