- Add a WebSearchService that calls the Tavily API (a sketch of such a service appears after this list).
- Modify LlmChatViewModel to augment queries with web search results from the WebSearchService (using a placeholder API key).
- Add UI feedback for web search status (loading, errors, no results).
- Update the ViewModelProvider to correctly inject the WebSearchService into LlmChatViewModel and LlmAskImageViewModel (see the factory sketch after this list).
- Show accelerator name in chat message sender labels.
- Attach silent foreground notifications to download workers to make them less likely to be killed (see the worker sketch after this list).
- Update app icon to be consistent with Google style.
- Bump version to 1.0.2.
- Don't automatically return to the model selection screen when model initialization fails, so users have a chance to change model parameters (e.g. the accelerator) and retry initialization.
- Show error dialog properly in prompt lab screen.
- Go back to multi-turn chat (no exception handling yet).
- Show app's version in App Info screen.
- Show API doc link and source code link in the model list screen.
- Jump to the model's info page on HF when clicking "learn more".
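
A minimal sketch of what the WebSearchService could look like. It assumes Tavily's `POST https://api.tavily.com/search` endpoint with `api_key`, `query`, and `max_results` fields in the JSON body and a `results` array in the response; the `search` method, the `WebSearchResult` type, and the exact request shape are assumptions, not the actual implementation:

```kotlin
import org.json.JSONArray
import org.json.JSONObject
import java.net.HttpURLConnection
import java.net.URL

/** One web search hit handed back to the ViewModel. */
data class WebSearchResult(val title: String, val url: String, val content: String)

/** Minimal Tavily-backed search client; call it from a background dispatcher. */
class WebSearchService(private val apiKey: String) {

  fun search(query: String, maxResults: Int = 5): Result<List<WebSearchResult>> = runCatching {
    val body = JSONObject()
      .put("api_key", apiKey) // Placeholder key supplied by the caller.
      .put("query", query)
      .put("max_results", maxResults)
      .toString()

    val conn = (URL("https://api.tavily.com/search").openConnection() as HttpURLConnection).apply {
      requestMethod = "POST"
      setRequestProperty("Content-Type", "application/json")
      doOutput = true
      connectTimeout = 10_000
      readTimeout = 10_000
    }
    conn.outputStream.use { it.write(body.toByteArray()) }

    val response = conn.inputStream.bufferedReader().use { it.readText() }
    conn.disconnect()

    // Tavily's response carries a "results" array of {title, url, content} objects.
    val results = JSONObject(response).optJSONArray("results") ?: JSONArray()
    (0 until results.length()).map { i ->
      val item = results.getJSONObject(i)
      WebSearchResult(
        title = item.optString("title"),
        url = item.optString("url"),
        content = item.optString("content"),
      )
    }
  }
}
```

The ViewModel would then fold these results into the prompt before handing it to the LLM; how the real LlmChatViewModel formats them is not shown here.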
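The ViewModelProvider change presumably wires the service in through a factory. A rough sketch assuming plain constructor injection (the factory name and the ViewModels' constructor signatures are assumptions):

```kotlin
import androidx.lifecycle.ViewModel
import androidx.lifecycle.ViewModelProvider

// Hypothetical factory; the real ViewModels almost certainly take more dependencies.
class GalleryViewModelFactory(
  private val webSearchService: WebSearchService,
) : ViewModelProvider.Factory {

  @Suppress("UNCHECKED_CAST")
  override fun <T : ViewModel> create(modelClass: Class<T>): T =
    when {
      modelClass.isAssignableFrom(LlmChatViewModel::class.java) ->
        LlmChatViewModel(webSearchService) as T
      modelClass.isAssignableFrom(LlmAskImageViewModel::class.java) ->
        LlmAskImageViewModel(webSearchService) as T
      else -> throw IllegalArgumentException("Unknown ViewModel: ${modelClass.name}")
    }
}
```

Each screen would then obtain its ViewModel by passing this factory to `viewModel(factory = ...)` (or an equivalent provider call).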
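Making a download worker harder to kill usually means promoting it to a foreground service via `setForeground` with a low-importance (silent) notification channel. A sketch under that assumption (the worker name, channel id, and notification text are made up):

```kotlin
import android.app.NotificationChannel
import android.app.NotificationManager
import android.content.Context
import android.os.Build
import androidx.core.app.NotificationCompat
import androidx.work.CoroutineWorker
import androidx.work.ForegroundInfo
import androidx.work.WorkerParameters

// Hypothetical worker; the real download worker's name and contents differ.
class ModelDownloadWorker(
  context: Context,
  params: WorkerParameters,
) : CoroutineWorker(context, params) {

  override suspend fun doWork(): Result {
    // Run as a foreground service so the OS is less likely to kill the download.
    setForeground(createForegroundInfo())
    // ... perform the actual model download here ...
    return Result.success()
  }

  private fun createForegroundInfo(): ForegroundInfo {
    val channelId = "model_downloads"
    if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.O) {
      val manager =
        applicationContext.getSystemService(Context.NOTIFICATION_SERVICE) as NotificationManager
      // IMPORTANCE_LOW keeps the notification visible but silent (no sound or vibration).
      manager.createNotificationChannel(
        NotificationChannel(channelId, "Model downloads", NotificationManager.IMPORTANCE_LOW)
      )
    }
    val notification = NotificationCompat.Builder(applicationContext, channelId)
      .setSmallIcon(android.R.drawable.stat_sys_download)
      .setContentTitle("Downloading model")
      .setOngoing(true)
      .setSilent(true)
      .build()
    return ForegroundInfo(NOTIFICATION_ID, notification)
  }

  private companion object {
    const val NOTIFICATION_ID = 1001
  }
}
```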