diff --git a/README.md b/README.md
index 75bb444..14be33e 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,7 @@
Integrate the DeepSeek API into popular software. Access [DeepSeek Open Platform](https://platform.deepseek.com/) to get an API key.
-English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_cn.md)
+English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_cn.md)/[日本語](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_ja.md)
@@ -18,6 +18,11 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
### Applications
+
+
+ Quantalogic
+ QuantaLogic is a ReAct (Reasoning & Action) framework for building advanced AI agents.
+
Chatbox
@@ -42,6 +47,16 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
LibreChat
LibreChat is a customizable open-source app that seamlessly integrates DeepSeek for enhanced AI interactions.
+
+
+
+ Just-Chat
+ Make your LLM agent and chat with it simple and fast!
+
+
+
+ PapersGPT
+ PapersGPT is a Zotero plugin that seamlessly integrates DeepSeek and other AI models for quickly reading papers in Zotero.
@@ -83,12 +98,31 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
Raycast
Raycast is a productivity tool for macOS that lets you control your tools with a few keystrokes. It supports various extensions including DeepSeek AI.
+ Nice Prompt Nice Prompt Organize, share, and use your prompts in your code editor, with Cursor and VSCode.
+
PHP Client
Deepseek PHP Client is a robust and community-driven PHP client library for seamless integration with the Deepseek API.
+
+
+
+
+
+ DeepSwiftSeek
+
+
+ DeepSwiftSeek is a lightweight yet powerful Swift client library with solid integration with the DeepSeek API.
+ It provides easy-to-use Swift concurrency for chat, streaming, FIM (Fill-in-the-Middle) completions, and more.
+
+
Laravel Integration
Laravel wrapper for the Deepseek PHP client, enabling seamless DeepSeek API integration with Laravel applications.
@@ -99,8 +133,8 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
Zotero is a free, easy-to-use tool to help you collect, organize, annotate, cite, and share research.
-
- SiYuan
+
+ SiYuan
SiYuan is a privacy-first personal knowledge management system that supports complete offline usage, as well as end-to-end encrypted data sync.
@@ -138,11 +172,64 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
AgenticFlow
AgenticFlow is a no-code platform where marketers build agentic AI workflows for go-to-market automation, powered by hundreds of everyday apps as tools for your AI agents.
+
+
+ STranslate
+ STranslate (Windows) is a ready-to-go translation and OCR tool built with WPF
+
+
+
+ ASP Client
+ Deepseek.ASPClient is a lightweight ASP.NET wrapper for the Deepseek AI API, designed to simplify AI-driven text processing in .NET applications.
+
+
+
+ GPT AI Flow
+
+ The ultimate productivity weapon built by engineers for efficiency enthusiasts (themselves): GPT AI Flow
+
+ `Shift+Alt+Space` wakes the desktop intelligent hub
+ Local encrypted storage
+ Custom instruction engine
+ On-demand calling without subscription bundling
+
+
+
+
+
+ Story-Flicks
+ With just one sentence, you can quickly generate high-definition story short videos, supporting models such as DeepSeek.
+
+
+
+ 16x Prompt
+ 16x Prompt is an AI coding tool with context management. It helps developers manage source code context and craft prompts for complex coding tasks on existing codebases.
+
+
+
+ PeterCat
+ A conversational Q&A agent configuration system, self-hosted deployment solutions, and a convenient all-in-one application SDK, allowing you to create intelligent Q&A bots for your GitHub repositories.
+
### AI Agent frameworks
+
+
+ smolagents
+ The simplest way to build great agents. Agents write Python code to call tools and orchestrate other agents. Priority support for open models like DeepSeek-R1!
+
+
+
+ YoMo
+ Stateful Serverless LLM Function Calling Framework with Strongly-typed Language Support
+
+
+
+ SuperAgentX
+ SuperAgentX: A Lightweight Open Source AI Framework Built for Autonomous Multi-Agent Applications with Artificial General Intelligence (AGI) Capabilities.
+
Anda
@@ -152,6 +239,26 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
RIG
Build modular and scalable LLM Applications in Rust.
+
+
+
+ Just-Agents
+ A lightweight, straightforward library for LLM agents - no over-engineering, just simplicity!
+
+
+
+ Alice
+ An autonomous AI agent on ICP, leveraging LLMs like DeepSeek for on-chain decision-making. Alice combines real-time data analysis with a playful personality to manage tokens, mine BOB, and govern ecosystems.
+
+
+
+ Upsonic
+ Upsonic offers a cutting-edge enterprise-ready agent framework where you can orchestrate LLM calls, agents, and computer use to complete tasks cost-effectively.
+
+
+
+ ATTPs
+ A foundational protocol framework for trusted communication between agents. By integrating the ATTPs SDK, any DeepSeek-based agent can register itself, send verifiable data, and retrieve verifiable data, enabling trusted communication with agents on other platforms.
@@ -163,6 +270,42 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
RAGFlow
An open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLM (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
+
+
+ Autoflow
+ AutoFlow is an open-source knowledge base tool based on GraphRAG (Graph-based Retrieval-Augmented Generation), built on TiDB Vector, LlamaIndex, and DSPy. It provides a Perplexity-like search interface and allows easy integration of AutoFlow's conversational search window into your website by embedding a simple JavaScript snippet.
+
+
+
+ DeepSearcher
+ DeepSearcher combines powerful LLMs (DeepSeek, OpenAI, etc.) and Vector Databases (Milvus, etc.) to perform search, evaluation, and reasoning based on private data, providing highly accurate answers and comprehensive reports.
+
+
+
+### Solana frameworks
+
+
+
+
+ Solana Agent Kit
+ An open-source toolkit for connecting AI agents to Solana protocols. Now, any agent, using any DeepSeek LLM, can autonomously perform 60+ Solana actions.
+
+
+
+### Synthetic data curation
+
+
+
+
+
+ Curator
+ An open-source tool to curate large scale datasets for post-training LLMs.
+
+
+
+ Kiln
+ Generate synthetic datasets and distill R1 models into custom fine-tunes.
+
### IM Application Plugins
@@ -174,9 +317,14 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
Domain knowledge assistant in personal WeChat and Feishu, focusing on answering questions.
-
- QChatGPT (QQ)
- A QQ chatbot with high stability, plugin support, and real-time networking.
+
+ LangBot (QQ, Lark, WeCom)
+ An LLM-based IM bot framework that supports QQ, Lark, WeCom, and more platforms.
+
+
+
+ NoneBot (QQ, Lark, Discord, TG, etc.)
+ Based on the NoneBot framework, it provides intelligent chat and deep-thinking functions, and supports QQ, Lark, Discord, TG, and more platforms.
@@ -213,13 +361,33 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
FluentRead
A revolutionary open-source browser translation plugin that enables everyone to have a native-like reading experience
+
+
+ Ncurator
+ Knowledge Base AI Q&A Assistant - Let AI help you organize and analyze knowledge
+
+
+
+ RssFlow
+ An intelligent RSS reader browser extension with AI-powered RSS summarization and multi-dimensional feed views. Supports DeepSeek model configuration for enhanced content understanding.
+
+
+
+ Typral
+ Fast AI writing assistant - let AI help you quickly improve articles, papers, and other text...
+
+
+
+ Trancy
+ Immersive bilingual translation, video bilingual subtitles, sentence/word selection translation extension
+
### VS Code Extensions
-
+
Continue
Continue is an open-source autopilot in IDE.
@@ -228,26 +396,56 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
Cline
Meet Cline, an AI assistant that can use your CLI and editor.
+
+
+ AI Commit
+ Use AI to generate git commit messages in VS Code.
+
+
+
+### Visual Studio Extensions
+
+
+
+
+ Comment2GPT
+ Use OpenAI ChatGPT, Google Gemini, Anthropic Claude, DeepSeek and Ollama through your comments
+
+
+
+ CodeLens2GPT
+ Use OpenAI ChatGPT, Google Gemini, Anthropic Claude, DeepSeek and Ollama through the CodeLens
+
+
+
+ Unity Code Assist Lite
+ Code assistance for Unity scripts
+
### neovim Extensions
-
+
avante.nvim
avante.nvim is an open-source autopilot in IDE.
llm.nvim
- A free large language model(LLM) plugin that allows you to interact with LLM in Neovim. Supports any LLM, such as Deepseek, GPT, GLM, Kimi or local LLMs (such as ollama).
+ A free large language model (LLM) plugin that allows you to interact with LLM in Neovim. Supports any LLM, such as Deepseek, GPT, GLM, Kimi or local LLMs (such as ollama).
codecompanion.nvim
AI-powered coding, seamlessly in Neovim.
+
+
+ minuet-ai.nvim
+ Minuet offers code completion as-you-type from popular LLMs including Deepseek, OpenAI, Gemini, Claude, Ollama, Codestral, and more.
+
### JetBrains Extensions
@@ -264,7 +462,7 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
Onegai Copilot is an AI coding assistant in JetBrain's IDE.
-
+
Continue
Continue is an open-source autopilot in IDE.
@@ -280,13 +478,28 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
-### Cursor
+### Discord Bots
+
+
+
+
+ Geneplore AI
+ Geneplore AI runs one of the largest AI Discord bots, now with Deepseek v3 and R1.
+
+
+
+### Native AI Code Editor
Cursor
- The AI Code Editor
+ The AI Code Editor based on VS Code
+
+
+
+ WindSurf
+ Another AI Code Editor based on VS Code by Codeium
@@ -305,9 +518,34 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
+### Security
+
+
+
+
+ CodeGate
+ CodeGate: secure AI code generation
+
+
+
### Others
+
+ 🐠
+ Abso
+ TypeScript SDK to interact with any LLM provider using the OpenAI format.
+
+
+
+ ShellOracle
+ A terminal utility for intelligent shell command generation.
+
+
+
+ Bolna
+ Use DeepSeek as the LLM for conversational voice AI agents
+
siri_deepseek_shortcut
@@ -318,6 +556,11 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
n8n-nodes-deepseek
An N8N community node that supports direct integration with the DeepSeek API into workflows.
+
+
+ Portkey AI
+ Portkey is a unified API for interacting with 1,600+ LLM models, offering advanced tools for control, visibility, and security in your DeepSeek apps. Python & Node SDKs available.
+
LiteLLM
@@ -328,14 +571,34 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
Mem0
Mem0 enhances AI assistants with an intelligent memory layer, enabling personalized interactions and continuous learning over time.
-
-
- Geneplore AI
- Geneplore AI runs one of the largest AI Discord bots, now with Deepseek v3 and R1.
-
promptfoo
Test and evaluate LLM prompts, including DeepSeek models. Compare different LLM providers, catch regressions, and evaluate responses.
+
+
+ deepseek-tokenizer
+ An efficient and lightweight tokenization library for DeepSeek models, relying solely on the `tokenizers` library without heavy dependencies like `transformers`.
+
+
+
+ Langfuse
+ Open-source LLM observability platform that helps teams collaboratively debug, analyze, and iterate on their DeepSeek applications.
+
+
+ CR
+ deepseek-review
+ 🚀 Sharpen Your Code, Ship with Confidence – Elevate Your Workflow with Deepseek Code Review 🚀
+
+
+
+ GPTLocalhost
+ Use DeepSeek-R1 in Microsoft Word locally. No inference costs.
+
+
+
+ WordPress ai助手
+ A WordPress plugin that connects the DeepSeek API to provide an AI conversation assistant, post generation, and post summaries for your site.
+
diff --git a/README_cn.md b/README_cn.md
index 567a9e9..df703b8 100644
--- a/README_cn.md
+++ b/README_cn.md
@@ -8,7 +8,7 @@
将 DeepSeek 大模型能力轻松接入各类软件。访问 [DeepSeek 开放平台](https://platform.deepseek.com/)来获取您的 API key。
-[English](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README.md)/简体中文
+[English](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README.md)/简体中文/[日本語](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_ja.md)
@@ -18,6 +18,11 @@
### 应用程序
+
+
+ Quantalogic
+ QuantaLogic 是一个 ReAct(推理和行动)框架,用于构建高级 AI 代理。
+
Chatbox
@@ -43,9 +48,14 @@
LibreChat
LibreChat 是一个可定制的开源应用程序,无缝集成了 DeepSeek,以增强人工智能交互体验
+
+
+ PapersGPT
+ PapersGPT是一款集成了DeepSeek及其他多种AI模型的辅助论文阅读的Zotero插件。
+
- RSS翻译器
+ RSS翻译器
开源、简洁、可自部署的RSS翻译器
@@ -77,14 +87,15 @@
Raycast
Raycast 是一款 macOS 生产力工具,它允许你用几个按键来控制你的工具。它支持各种扩展,包括 DeepSeek AI。
+ Nice Prompt Nice Prompt 是一个结合提示工程与社交功能的平台,支持用户高效创建、分享和协作开发AI提示词。
Zotero
Zotero 是一款免费且易于使用的文献管理工具,旨在帮助您收集、整理、注释、引用和分享研究成果。
-
- 思源笔记
+
+ 思源笔记
思源笔记是一款隐私优先的个人知识管理系统,支持完全离线使用,并提供端到端加密的数据同步功能。
@@ -112,6 +123,34 @@
Bob
Bob 是一款 macOS 平台的翻译和 OCR 软件,您可以在任何应用程序中使用 Bob 进行翻译和 OCR,即用即走!
+
+
+ STranslate
+ STranslate (Windows) 是 WPF 开发的一款即用即走的翻译、OCR工具
+
+
+
+ GPT AI Flow
+
+ 工程师为效率狂人(他们自己)打造的终极生产力武器: GPT AI Flow
+
+ `Shift+Alt+空格` 唤醒桌面智能中枢
+ 本地加密存储
+ 自定义指令引擎
+ 按需调用拒绝订阅捆绑
+
+
+
+
+
+ Story-Flicks
+ 通过一句话即可快速生成高清故事短视频,支持 DeepSeek 等模型。
+
+
+
+ PeterCat
+ 我们提供对话式答疑 Agent 配置系统、自托管部署方案和便捷的一体化应用 SDK,让您能够为自己的 GitHub 仓库一键创建智能答疑机器人,并快速集成到各类官网或项目中, 为社区提供更高效的技术支持生态。
+
### AI Agent 框架
@@ -122,6 +161,21 @@
Anda
一个专为 AI 智能体开发设计的 Rust 语言框架,致力于构建高度可组合、自主运行且具备永久记忆能力的 AI 智能体网络。
+
+
+ YoMo
+ Stateful Serverless LLM Function Calling Framework with Strongly-typed Language Support
+
+
+
+ Alice
+ 一个基于 ICP 的自主 AI 代理,利用 DeepSeek 等大型语言模型进行链上决策。Alice 结合实时数据分析和独特的个性,管理代币、挖掘 BOB 并参与生态系统治理。
+
+
+
+ ATTPs
+ 一个用于Agent之间可信通信的基础协议框架。基于DeepSeek的Agent可以接入ATTPs的SDK,获得注册Agent、发送可验证数据、获取可验证数据等功能,从而与其他平台的Agent进行可信通信。
+
### RAG 框架
@@ -132,6 +186,26 @@
RAGFlow
一款基于深度文档理解构建的开源 RAG(Retrieval-Augmented Generation)引擎。RAGFlow 可以为各种规模的企业及个人提供一套精简的 RAG 工作流程,结合大语言模型(LLM)针对用户各类不同的复杂格式数据提供可靠的问答以及有理有据的引用。
+
+
+ Autoflow
+ AutoFlow 是一个开源的基于 GraphRAG 的知识库工具,构建于 TiDB Vector、LlamaIndex 和 DSPy 之上。提供类 Perplexity 的搜索页面,并可以嵌入简单的 JavaScript 代码片段,轻松将 Autoflow 的对话式搜索窗口集成到您的网站。
+
+
+
+ DeepSearcher
+ DeepSearcher 结合强大的 LLM(DeepSeek、OpenAI 等)和向量数据库(Milvus 等),根据私有数据进行搜索、评估和推理,提供高度准确的答案和全面的报告。
+
+
+
+### Solana 框架
+
+
+
+
+ Solana Agent Kit
+ 一个用于连接 AI 智能体到 Solana 协议的开源工具包。现在,任何使用 Deepseek LLM 的智能体都可以自主执行 60+ 种 Solana 操作:
+
### 即时通讯插件
@@ -143,9 +217,14 @@
一个集成到个人微信群/飞书群的领域知识助手,专注解答问题不闲聊
-
- QChatGPT (QQ)
- 😎高稳定性、🧩支持插件、🌏实时联网的 LLM QQ / QQ频道 / One Bot 机器人🤖
+
+ LangBot (QQ, 企微, 飞书)
+ 大模型原生即时通信机器人平台,适配 QQ / QQ频道 / 飞书 / OneBot / 企业微信(wecom) 等多种消息平台
+
+
+
+ NoneBot (QQ, 飞书, Discord, TG, etc.)
+ 基于 NoneBot 框架,支持智能对话与深度思考功能。适配 QQ / 飞书 / Discord / TG 等多种消息平台
@@ -182,13 +261,33 @@
流畅阅读
一款革新性的浏览器开源翻译插件,让所有人都能够拥有基于母语般的阅读体验
+
+
+ 馆长
+ 知识库AI问答助手 - 让AI帮助你整理与分析知识
+
+
+
+ RssFlow
+ 一款智能的RSS阅读器浏览器扩展,具有AI驱动的RSS摘要和多维度订阅视图功能。支持配置DeepSeek模型以增强内容理解能力。
+
+
+
+ Typral
+ 超快的AI写作助手 - 让AI帮你快速优化日报,文章,文本等等...
+
+
+
+ Trancy
+ 沉浸双语对照翻译、视频双语字幕、划句/划词翻译插件
+
### VS Code 插件
-
+
Continue
开源 IDE 插件,使用 LLM 做你的编程助手
@@ -197,13 +296,18 @@
Cline
Cline 是一款能够使用您的 CLI 和编辑器的 AI 助手。
+
+
+ AI Commit
+ 使用 AI 生成 git commit message 的 VS Code 插件。
+
### neovim 插件
-
+
avante.nvim
开源 IDE 插件,使用 LLM 做你的编程助手
@@ -212,6 +316,11 @@
llm.nvim
免费的大语言模型插件,让你在Neovim中与大模型交互,支持任意一款大模型,比如Deepseek,GPT,GLM,kimi或者本地运行的大模型(比如ollama)
+
+
+ minuet-ai.nvim
+ Minuet 提供实时代码补全功能,支持多个主流大语言模型,包括 Deepseek、OpenAI、Gemini、Claude、Ollama、Codestral 等。
+
codecompanion.nvim
@@ -234,9 +343,34 @@
+### AI Code编辑器
+
+
+
+
+ Cursor
+ 基于VS Code进行扩展的AI Code编辑器
+
+
+
+ WindSurf
+ 另一个基于VS Code的AI Code编辑器,由Codeium出品
+
+
+
### 其它
diff --git a/README_ja.md b/README_ja.md
new file mode 100644
index 0000000..daeb742
--- /dev/null
+++ b/README_ja.md
@@ -0,0 +1,411 @@
+
+
+
+
+
+
+# Awesome DeepSeek Integrations 
+
+DeepSeek API を人気のソフトウェアに統合します。API キーを取得するには、[DeepSeek Open Platform](https://platform.deepseek.com/)にアクセスしてください。
+
+[English](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README.md)/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_cn.md)/日本語
+
+
+
+
+
+
+### アプリケーション
+
+
+
+
+ Quantalogic
+ QuantaLogicは、高度なAIエージェントを構築するためのReAct(推論と行動)フレームワークです。
+
+
+
+ Chatbox
+ Chatboxは、Windows、Mac、Linuxで利用可能な複数の最先端LLMモデルのデスクトップクライアントです。
+
+
+
+ ChatGPT-Next-Web
+ ChatGPT Next Webは、GPT3、GPT4、Gemini ProをサポートするクロスプラットフォームのChatGPTウェブUIです。
+
+
+
+ Liubai
+ Liubaiは、WeChat上でDeepSeekを使用してノート、タスク、カレンダー、ToDoリストを操作できるようにします!
+
+
+
+ Pal - AI Chat Client (iOS, ipadOS)
+ Palは、iOS上でカスタマイズされたチャットプレイグラウンドです。
+
+
+
+ LibreChat
+ LibreChatは、DeepSeekをシームレスに統合してAIインタラクションを強化するカスタマイズ可能なオープンソースアプリです。
+
+
+
+ RSS Translator
+ RSSフィードをあなたの言語に翻訳します!
+
+
+
+ Enconvo
+ Enconvoは、AI時代のランチャーであり、すべてのAI機能のエントリーポイントであり、思いやりのあるインテリジェントアシスタントです。
+
+
+
+ Cherry Studio
+ プロデューサーのための強力なデスクトップAIアシスタント
+
+
+
+ ToMemo (iOS, ipadOS)
+ フレーズブック+クリップボード履歴+キーボードiOSアプリで、キーボードでの迅速な出力にAIマクロモデリングを統合しています。
+
+
+
+ Video Subtitle Master
+ ビデオの字幕を一括生成し、字幕を他の言語に翻訳することができます。これはクライアントサイドのツールで、MacとWindowsの両方のプラットフォームをサポートし、Baidu、Volcengine、DeepLx、OpenAI、DeepSeek、Ollamaなどの複数の翻訳サービスと統合されています。
+
+
+
+ Chatworm
+ Chatwormは、複数の最先端LLMモデルのためのウェブアプリで、オープンソースであり、Androidでも利用可能です。
+
+
+
+ Easydict
+ Easydictは、単語の検索やテキストの翻訳を簡単かつエレガントに行うことができる、簡潔で使いやすい翻訳辞書macOSアプリです。大規模言語モデルAPIを呼び出して翻訳を行うことができます。
+
+
+
+ Raycast
+ Raycast は、macOSの生産性ツールで、いくつかのキーストロークでツールを制御できます。DeepSeek AIを含むさまざまな拡張機能をサポートしています。
+
+
+
+ PHP Client
+ Deepseek PHP Clientは、Deepseek APIとのシームレスな統合のための堅牢でコミュニティ主導のPHPクライアントライブラリです。
+
+
+
+ Laravel Integration
+ LaravelアプリケーションとのシームレスなDeepseek API統合のためのLaravelラッパー。
+
+
+
+ Zotero
+ Zotero は、研究成果を収集、整理、注釈、引用、共有するのに役立つ無料で使いやすいツールです。
+
+
+
+ SiYuan
+ SiYuanは、完全にオフラインで使用できるプライバシー優先の個人知識管理システムであり、エンドツーエンドの暗号化データ同期を提供します。
+
+
+
+ go-stock
+ go-stockは、Wailsを使用してNativeUIで構築され、LLMによって強化された中国株データビューアです。
+
+
+
+ Wordware
+ Wordware は、誰でも自然言語だけでAIスタックを構築、反復、デプロイできるツールキットです。
+
+
+
+ Dify
+ Dify は、アシスタント、ワークフロー、テキストジェネレーターなどのアプリケーションを作成するためのDeepSeekモデルをサポートするLLMアプリケーション開発プラットフォームです。
+
+
+
+ Big-AGI
+ Big-AGI は、誰もが高度な人工知能にアクセスできるようにするための画期的なAIスイートです。
+
+
+
+ LiberSonora
+ LiberSonoraは、「自由の声」を意味し、AIによって強化された強力なオープンソースのオーディオブックツールキットであり、インテリジェントな字幕抽出、AIタイトル生成、多言語翻訳などの機能を備え、GPUアクセラレーションとバッチオフライン処理をサポートしています。
+
+
+
+ Bob
+ Bob は、任意のアプリで使用できるmacOSの翻訳およびOCRツールです。
+
+
+
+ AgenticFlow
+ AgenticFlow は、マーケターがAIエージェントのためのエージェンティックAIワークフローを構築するためのノーコードプラットフォームであり、数百の毎日のアプリをツールとして使用します。
+
+
+
+ PeterCat
+ 会話型Q&Aエージェントの構成システム、自ホスト型デプロイメントソリューション、および便利なオールインワンアプリケーションSDKを提供し、GitHubリポジトリのためのインテリジェントQ&Aボットをワンクリックで作成し、さまざまな公式ウェブサイトやプロジェクトに迅速に統合し、コミュニティのためのより効率的な技術サポートエコシステムを提供します。
+
+
+
+### AI エージェントフレームワーク
+
+
+
+
+ Anda
+ 高度にコンポーザブルで自律的かつ永続的な記憶を持つAIエージェントネットワークを構築するために設計されたRustフレームワーク。
+
+
+
+ ATTPs
+ エージェント間の信頼できる通信のための基本プロトコルフレームワークです。利用者はATTPs のSDKを導入することで、エージェントの登録、検証可能なデータの送信、検証可能なデータの取得などの機能を利用することができます。
+
+
+
+### RAG フレームワーク
+
+
+
+
+ RAGFlow
+ 深い文書理解に基づいたオープンソースのRAG(Retrieval-Augmented Generation)エンジン。RAGFlowは、あらゆる規模の企業や個人に対して、ユーザーのさまざまな複雑な形式のデータに対して信頼性のある質問応答と根拠のある引用を提供するための簡素化されたRAGワークフローを提供します。
+
+
+
+ Autoflow
+ AutoFlow は、GraphRAGに基づくオープンソースのナレッジベースツールであり、TiDB Vector、LlamaIndex、DSPy の上に構築されています。Perplexity のような検索インターフェースを提供し、シンプルな JavaScript スニペットを埋め込むことで、AutoFlow の対話型検索ウィンドウを簡単にウェブサイトに統合できます。
+
+
+
+ DeepSearcher
+ DeepSearcher は、強力な大規模言語モデル(DeepSeek、OpenAI など)とベクトルデータベース(Milvus など)を組み合わせて、私有データに基づく検索、評価、推論を行い、高精度な回答と包括的なレポートを提供します。
+
+
+
+### Solana フレームワーク
+
+
+
+
+ Solana Agent Kit
+ AIエージェントをSolanaプロトコルに接続するためのオープンソースツールキット。DeepSeek LLMを使用する任意のエージェントが、60以上のSolanaアクションを自律的に実行できます。
+
+
+
+### IM アプリケーションプラグイン
+
+
+
+### ブラウザ拡張機能
+
+
+
+
+ Immersive Translate
+ Immersive Translateは、バイリンガルのウェブページ翻訳プラグインです。
+
+
+
+ Immersive Reading Guide
+ サイドバーなし!!! 没入型のAIウェブ要約、質問をする...
+
+
+
+ ChatGPT Box
+ ChatGPT Boxは、ブラウザに統合されたChatGPTで、完全に無料です。
+
+
+
+ hcfy (划词翻译)
+ hcfy (划词翻译)は、複数の翻訳サービスを統合するウェブブラウザ拡張機能です。
+
+
+
+ Lulu Translate
+ このプラグインは、マウス選択翻訳、段落ごとの比較翻訳、およびPDF文書翻訳機能を提供します。DeepSeek AI、Bing、GPT、Googleなどのさまざまな翻訳エンジンを利用できます。
+
+
+
+ FluentRead
+ 誰もが母国語のような読書体験を持つことができる革新的なオープンソースのブラウザ翻訳プラグイン
+
+
+
+ RssFlow
+ AIを活用したRSS要約と多次元フィードビューを備えたインテリジェントなRSSリーダーブラウザ拡張機能。コンテンツ理解を強化するためのDeepSeekモデル設定をサポートしています。
+
+
+
+ Ncurator
+ ナレッジベース AI Q&Aアシスタント – AIがあなたの知識の整理と分析をお手伝いします
+
+
+
+ Typral
+ 超高速AIライティングアシスタント - AIがあなたの日報、記事、テキストなどを素早く最適化します
+
+
+
+ Trancy
+ イマーシブな二か国語対照翻訳、動画の二か国語字幕、文/単語の選択翻訳プラグイン
+
+
+
+### VS Code 拡張機能
+
+
+
+
+ Continue
+ Continueは、IDEのオープンソースの自動操縦です。
+
+
+
+ Cline
+ Clineは、CLIとエディタを使用できるAIアシスタントです。
+
+
+
+ AI Commit
+ VS Code で AI を使用して git commit message を生成するプラグイン。
+
+
+
+### neovim 拡張機能
+
+
+
+
+ avante.nvim
+ avante.nvimは、IDEのオープンソースの自動操縦です。
+
+
+
+ llm.nvim
+ NeovimでLLMと対話できる無料の大規模言語モデル(LLM)プラグイン。Deepseek、GPT、GLM、Kimi、またはローカルLLM(ollamaなど)など、任意のLLMをサポートします。
+
+
+
+ codecompanion.nvim
+ Neovimでシームレスに統合されたAI駆動のコーディング。
+
+
+
+### JetBrains 拡張機能
+
+
+
+### AI コードエディタ
+
+
+
+
+ Cursor
+ AIコードエディタ
+
+
+
+ WindSurf
+ CodeiumによるVS CodeをベースにしたAIコードエディタ
+
+
+
+### Emacs
+
+
+
+
+ gptel
+ EmacsのためのシンプルなLLMクライアント
+
+
+
+ Minuet AI
+ コードでインテリジェンスとダンス💃
+
+
+
+### その他
+
+
+
+ 🐠
+ Abso
+ OpenAIフォーマットを使用するあらゆるLLMプロバイダと対話するためのTypeScript SDK.
+
+
+
+ siri_deepseek_shortcut
+ DeepSeek APIを装備したSiri
+
+
+
+ n8n-nodes-deepseek
+ DeepSeek APIをワークフローに直接統合するためのN8Nコミュニティノード。
+
+
+
+ LiteLLM
+ 100以上のLLM APIをOpenAI形式で呼び出すためのPython SDK、プロキシサーバー(LLMゲートウェイ)。DeepSeek AIもサポートし、コスト追跡も可能です。
+
+
+
+ Mem0
+ Mem0は、AIアシスタントにインテリジェントなメモリレイヤーを追加し、パーソナライズされたインタラクションと継続的な学習を可能にします。
+
+
+
+ Geneplore AI
+ Geneplore AIは、Deepseek v3およびR1を搭載した最大のAI Discordボットの1つを運営しています。
+
+
+
+ promptfoo
+ DeepSeekモデルを含むLLMプロンプトをテストおよび評価します。さまざまなLLMプロバイダーを比較し、回帰を検出し、応答を評価します。
+
+
diff --git a/docs/16x_prompt/README.md b/docs/16x_prompt/README.md
new file mode 100644
index 0000000..e0b92d7
--- /dev/null
+++ b/docs/16x_prompt/README.md
@@ -0,0 +1,18 @@
+# [16x Prompt](https://prompt.16x.engineer/)
+
+AI Coding with Context Management.
+
+16x Prompt helps developers manage source code context and craft prompts for complex coding tasks on existing codebases.
+
+# UI
+
+
+
+## Integrate with DeepSeek API
+
+1. Click on the model selection button at bottom right
+2. Click on "DeepSeek API" to automatically fill in API Endpoint
+3. Enter model ID, for example `deepseek-chat` (for DeepSeek V3) or `deepseek-reasoner` (for DeepSeek R1)
+4. Enter your API key
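Under the hood, the model IDs above are sent in an OpenAI-compatible chat-completions request. As a rough illustration (the payload shape follows the standard OpenAI format; the prompt text and field values below are placeholders, not what 16x Prompt actually sends), the request body looks like this:

```python
# Sketch of the JSON body an OpenAI-compatible client would POST to
# DeepSeek's chat completions endpoint. Built offline here; nothing is sent,
# and the message contents are placeholders for illustration.
payload = {
    "model": "deepseek-chat",  # or "deepseek-reasoner" for DeepSeek R1
    "messages": [
        {"role": "system", "content": "You are a coding assistant."},
        {"role": "user", "content": "Refactor this function..."},
    ],
}
print(payload["model"])  # deepseek-chat
```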
+
+
\ No newline at end of file
diff --git a/docs/16x_prompt/assets/16x_prompt_integration.png b/docs/16x_prompt/assets/16x_prompt_integration.png
new file mode 100644
index 0000000..a3d42e6
Binary files /dev/null and b/docs/16x_prompt/assets/16x_prompt_integration.png differ
diff --git a/docs/16x_prompt/assets/16x_prompt_ui.png b/docs/16x_prompt/assets/16x_prompt_ui.png
new file mode 100644
index 0000000..3c7afbf
Binary files /dev/null and b/docs/16x_prompt/assets/16x_prompt_ui.png differ
diff --git a/docs/ATTPs/README.md b/docs/ATTPs/README.md
new file mode 100644
index 0000000..4b6360f
--- /dev/null
+++ b/docs/ATTPs/README.md
@@ -0,0 +1,379 @@
+
+# APRO-COM/ATTPs-framework
+
+Foundation framework that enables advanced DeepSeek-based agent interactions, data verification, and price queries with the [ATTPs Protocol](https://docs.apro.com/attps). It streamlines agent creation, verification processes, and provides a flexible framework for building robust agent-based solutions.
+
+For more details about ATTPs, you can see the [whitepaper here](https://www.apro.com/attps.pdf)
+
+## Overview
+
+The ATTPs framework bridges agent-based logic with DeepSeek. It handles agent registration, data verification, and price queries, empowering both automated and user-driven workflows.
+
+## Features
+
+### Agent Operations
+- **Agent Creation**: Deploy new agents with custom settings
+- **Registration**: Register agents on-chain or via standardized processes
+- **Multi-Signer Framework**: Supports threshold-based approval flows
+
+### Data Verification
+- **Chain Validation**: Verify data authenticity on-chain
+- **Transaction Execution**: Handle verification logic with built-in security checks
+- **Auto-Hashing**: Convert raw data to hashed formats when needed
+- **Metadata Parsing**: Validate content type, encoding, and compression
+
+### Price Queries
+- **Live Price Data**: Fetch price information for various pairs
+- **Format Validation**: Normalize user query inputs to standard trading-pair formats
+- **APIs Integration**: Retrieve real-time or near-real-time pricing information
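The "Format Validation" step above can be sketched as follows; the helper name and normalization rules are assumptions for illustration, not the framework's actual implementation:

```python
# Hypothetical normalizer: turn free-form user input like "btc-usdt" or
# "BTC usdt" into a canonical BASE/QUOTE trading-pair string.
def normalize_pair(raw: str) -> str:
    """Normalize inputs such as 'btc-usdt' to 'BTC/USDT'."""
    cleaned = raw.strip().upper().replace("-", "/").replace(" ", "/")
    parts = [p for p in cleaned.split("/") if p]
    if len(parts) != 2:
        raise ValueError(f"expected a BASE/QUOTE pair, got {raw!r}")
    return "/".join(parts)

print(normalize_pair("btc-usdt"))  # BTC/USDT
```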
+
+## Security Features
+
+### Access Control
+- **Private Key Management**: Safe usage of private keys for transaction signing
+- **Environment Variables**: Secure injection of credentials
+- **On-Chain Validation**: Leverage on-chain contract checks
+
+### Verification
+- **Input Validation**: Strict schema checks before on-chain operations
+- **Transaction Receipts**: Provide verifiable transaction details
+- **Error Handling**: Detailed error logs for quick debugging
+
+## Performance Optimization
+
+1. **Cache Management**
+ - Implement caching for frequent queries
+ - Monitor retrieval times and cache hits
+
+2. **Network Efficiency**
+ - Batch requests where possible
+ - Validate response parsing to reduce overhead
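One way to implement the caching suggestion above is a small TTL cache in front of the price-query call; the class, names, and the 5-second TTL below are illustrative assumptions, not part of the ATTPs framework:

```python
import time

# Illustrative TTL cache: repeated queries for the same pair within the TTL
# hit the cache instead of the network.
class TTLCache:
    def __init__(self, ttl_seconds: float = 5.0):
        self.ttl = ttl_seconds
        self._store = {}  # key -> (expires_at, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry and entry[0] > time.monotonic():
            return entry[1]
        return None

    def set(self, key, value):
        self._store[key] = (time.monotonic() + self.ttl, value)

cache = TTLCache(ttl_seconds=5.0)

def cached_price(pair, fetch):
    price = cache.get(pair)
    if price is None:
        price = fetch(pair)  # only hit the network on a cache miss
        cache.set(pair, price)
    return price
```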
+
+## System Requirements
+- Node.js 16.x or higher
+- Sufficient network access to on-chain endpoints
+- Basic configuration of environment variables
+- Minimum 4GB RAM recommended
+
+## Troubleshooting
+
+1. **Invalid Agent Settings**
+ - Ensure signers and threshold are correct
+ - Validate agentHeader for proper UUIDs and numeric values
+
+2. **Verification Failures**
+ - Check the input data formats
+ - Confirm environment variables are set
+
+3. **Price Query Errors**
+ - Verify the trading pair format
+ - Check external API availability
+
+## Safety & Security
+
+1. **Credential Management**
+ - Store private keys securely
+ - Do not commit secrets to version control
+
+2. **Transaction Limits**
+ - Configure thresholds to mitigate abuse
+ - Log transaction attempts and failures
+
+3. **Monitoring & Logging**
+ - Track unusual activity
+ - Maintain detailed audit logs
+
+
+# Usage with js
+
+## Installation
+
+```bash
+npm install ai-agent-sdk-js
+```
+
+## Configuration
+
+Configure the plugin by setting environment variables or runtime settings:
+- APRO_RPC_URL
+- APRO_PROXY_ADDRESS
+- APRO_PRIVATE_KEY
+- APRO_CONVERTER_ADDRESS
+- APRO_AUTO_HASH_DATA
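For illustration, the environment variables listed above could be read like this (shown in Python; the variable names come from this README, while the dict layout and defaults are assumptions):

```python
import os

# Hypothetical config loader for the APRO_* variables documented above.
def load_config() -> dict:
    return {
        "rpc_url": os.getenv("APRO_RPC_URL", ""),
        "proxy_address": os.getenv("APRO_PROXY_ADDRESS", ""),
        "private_key": os.getenv("APRO_PRIVATE_KEY", ""),
        "converter_address": os.getenv("APRO_CONVERTER_ADDRESS", ""),
        # Treat the flag as a boolean; anything other than "true" disables it.
        "auto_hash_data": os.getenv("APRO_AUTO_HASH_DATA", "false").lower() == "true",
    }
```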
+
+## Usage with js sdk
+
+To use the AI Agent SDK, import the library and create an instance of the `Agent` class:
+
+```typescript
+import { AgentSDK } from 'ai-agent-sdk-js'
+
+const agent = new AgentSDK({
+ rpcUrl: 'https://bsc-testnet-rpc.publicnode.com',
+ privateKey: '',
+ proxyAddress: '',
+})
+
+// if you want the SDK to hash the data automatically
+const autoHashAgent = new AgentSDK({
+ rpcUrl: 'https://bsc-testnet-rpc.publicnode.com',
+ privateKey: '',
+ proxyAddress: '',
+ autoHashData: true,
+ converterAddress: '',
+})
+```
+
+To create a new agent, call the `createAndRegisterAgent` method:
+
+```typescript
+import type { AgentSettings, TransactionOptions } from 'ai-agent-sdk-js'
+import { randomUUID } from 'node:crypto'
+import { parseUnits } from 'ethers'
+
+// prepare the agent settings
+const agentSettings: AgentSettings = {
+ signers: [],
+ threshold: 3,
+ converterAddress: '',
+ agentHeader: {
+ messageId: randomUUID(),
+ sourceAgentId: randomUUID(),
+ sourceAgentName: 'AI Agent SDK JS',
+ targetAgentId: '',
+ timestamp: Math.floor(Date.now() / 1000),
+ messageType: 0,
+ priority: 1,
+ ttl: 3600,
+ },
+}
+
+// prepare the transaction options
+const nonce = await agent.getNextNonce()
+const transactionOptions: TransactionOptions = {
+ nonce,
+ gasPrice: parseUnits('1', 'gwei'),
+ gasLimit: BigInt(2000000),
+}
+
+const tx = await agent.createAndRegisterAgent({ agentSettings, transactionOptions })
+
+// or you can leave the transaction options empty, the SDK will use the auto-generated values
+// const tx = await agent.createAndRegisterAgent({ agentSettings })
+```
+
+The SDK also provides the tool to extract the new agent address from the transaction receipt:
+
+```typescript
+import { parseNewAgentAddress } from 'ai-agent-sdk-js'
+
+const receipt = await tx.wait()
+const agentAddress = parseNewAgentAddress(receipt)
+```
+
+To verify the data integrity, call the `verify` method:
+
+```typescript
+import type { MessagePayload } from 'ai-agent-sdk-js'
+import { hexlify, keccak256, toUtf8Bytes } from 'ethers'
+
+// prepare the payload
+const data = hexlify(toUtf8Bytes('Hello World!'))
+const dataHash = keccak256(data)
+const payload: MessagePayload = {
+ data,
+ dataHash,
+ signatures: [
+ {
+ r: '',
+ s: '',
+ v: 1, // 1, 0, 27, 28 are allowed
+ },
+ // ...
+ ],
+ metadata: {
+ contentType: '',
+ encoding: '',
+ compression: '',
+ },
+}
+
+const tx = await agent.verify({ payload, agent: '', digest: '' })
+```
+
+If the data is obtained from the APRO DATA pull service, you can use the auto-hash feature:
+
+```typescript
+import type { MessagePayload } from 'ai-agent-sdk-js'
+
+const payload: MessagePayload = {
+ data: '0x...',
+ signatures: [
+ {
+ r: '',
+ s: '',
+ v: 1, // 1, 0, 27, 28 are allowed
+ },
+ // ...
+ ],
+ metadata: {
+ contentType: '',
+ encoding: '',
+ compression: '',
+ },
+}
+
+// When
+const tx = await autoHashAgent.verify({ payload, agent: '', digest: '' })
+```
+
+For more examples, see the [test](https://github.com/APRO-com/ai-agent-sdk-js/tree/main/test) cases.
+
+
+
+# Usage with Python
+
+## Installation
+
+```bash
+pip3 install ai-agent-sdk
+```
+
+## Usage with Python SDK
+
+### Initialize AgentSDK
+
+```python
+from ai_agent.agent import AgentSDK
+
+AGENT_PROXY_ADDRESS = "0x07771A3026E60776deC8C1C61106FB9623521394"
+NETWORK_RPC = "https://testnet-rpc.bitlayer.org"
+
+agent = AgentSDK(endpoint_uri=NETWORK_RPC, proxy_address=AGENT_PROXY_ADDRESS)
+```
+
+To create a new agent, call the createAndRegisterAgent method:
+
+```python
+import time
+from ai_agent.entities import (
+ AgentSettings,
+ AgentHeader,
+ MessageType,
+ Priority
+)
+from ai_agent.utils import (
+ generate_uuid_v4
+)
+
+AGENT_SETTINGS = AgentSettings(
+ signers=[
+ "0x4b1056f504f32c678227b5Ae812936249c40AfBF",
+ "0xB973476e0cF88a3693014b99f230CEB5A01ac686",
+ "0x6cF0803D049a4e8DC01da726A5a212BCB9FAC1a1",
+ "0x9D46daa26342e9E9e586A6AdCEDaD667f985567B",
+ "0x33AF673aBcE193E20Ee94D6fBEb30fEf0cA7015b",
+ "0x868D2dE4a0378450BC62A7596463b30Dc4e3897E",
+ "0xD4E157c36E7299bB40800e4aE7909DDcA8097f67",
+ "0xA3866A07ABEf3fD0643BD7e1c32600520F465ca8",
+ "0x62f642Ae0Ed7F12Bc40F2a9Bf82ccD0a3F3b7531"
+ ],
+ threshold=2,
+ converter_address="0xaB303EF87774D9D259d1098E9aA4dD6c07F69240",
+ agent_header=AgentHeader(
+ version="1.0",
+ message_id="d4d0813f-ceb7-4ce1-8988-12899b26c4b6",
+ source_agent_id="da70f6b3-e580-470f-b88b-caa5369e7778",
+ source_agent_name="APRO Pull Mode Agent",
+ target_agent_id="",
+ timestamp=int(time.time()),
+ message_type=MessageType.Event,
+ priority=Priority.Low,
+ ttl=60 * 60
+ )
+)
+
+dynamic_setting = AGENT_SETTINGS
+dynamic_setting.agent_header.source_agent_id = generate_uuid_v4()
+dynamic_setting.agent_header.target_agent_id = generate_uuid_v4()
+dynamic_setting.agent_header.message_id = generate_uuid_v4()
+user_owner = agent.add_account("0x_user_private_key")
+result = agent.create_and_register_agent(
+ transmitter="",
+ nonce=None,
+ settings=AGENT_SETTINGS)
+print("created agent:", result)
+
+```
+To verify the data integrity, call the verify method:
+
+```python
+from ai_agent.entities import (
+ AgentMessagePayload,
+ Proofs,
+ AgentMetadata,
+)
+
+AGENT_CONTRACT = "0xA1903361Ee8Ec35acC7c8951b4008dbE8D12C155"
+AGENT_SETTING_DIGEST = "0x010038164dba6abffb84eb5cb538850d9bc5d8f815149a371069b3255fd177a4"
+AGENT_PAYLOAD = AgentMessagePayload(
+ data="0x0006e706cf7ab41fa599311eb3de68be869198ce62aef1cd079475ca50e5b3f60000000000000000000000000000000000000000000000000000000002b1bf0e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000022000000000000000000000000000000000000000000000000000000000000002a0000101000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001200003665949c883f9e0f6f002eac32e00bd59dfe6c34e92a91c37d6a8322d6489000000000000000000000000000000000000000000000000000000006762677d000000000000000000000000000000000000000000000000000000006762677d000000000000000000000000000000000000000000000000000003128629ec0800000000000000000000000000000000000000000000000004db732547630000000000000000000000000000000000000000000000000000000000006763b8fd0000000000000000000000000000000000000000000015f0f60671beb95cc0000000000000000000000000000000000000000000000015f083baa654a7b900000000000000000000000000000000000000000000000015f103ec7cb057ea80000000000000000000000000000000000000000000000000000000000000000003b64f7e72208147bb898e8b215d0997967bef0219263726c76995d8a19107d6ba5306a176474f9ccdb1bc5841f97e0592013e404e15b0de0839b81d0efb26179f222e0191269a8560ebd9096707d225bc606d61466b85d8568d7620a3b59a73e800000000000000000000000000000000000000000000000000000000000000037cae0f05c1bf8353eb5db27635f02b40a534d4192099de445764891198231c597a303cd15f302dafbb1263eb6e8e19cbacea985c66c6fed3231fd84a84ebe0276f69f481fe7808c339a04ceb905bb49980846c8ceb89a27b1c09713cb356f773",
+ data_hash="0x53d9f133f1265bd4391fcdf89b63424cbcfd316c8448f76cc515647267ac0a8e",
+ proofs=Proofs(
+ zk_proof="0x",
+ merkle_proof="0x",
+ signature_proof="0x000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000003b64f7e72208147bb898e8b215d0997967bef0219263726c76995d8a19107d6ba5306a176474f9ccdb1bc5841f97e0592013e404e15b0de0839b81d0efb26179f222e0191269a8560ebd9096707d225bc606d61466b85d8568d7620a3b59a73e800000000000000000000000000000000000000000000000000000000000000037cae0f05c1bf8353eb5db27635f02b40a534d4192099de445764891198231c597a303cd15f302dafbb1263eb6e8e19cbacea985c66c6fed3231fd84a84ebe0276f69f481fe7808c339a04ceb905bb49980846c8ceb89a27b1c09713cb356f7730000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001",
+ ),
+ meta_data=AgentMetadata(
+ content_type="0x",
+ encoding="0x",
+ compression="0x"
+ )
+)
+user_owner = agent.add_account("0x_user_private_key")
+result = agent.verify(
+ transmitter=user_owner,
+ nonce=None,
+ agent_contract=AGENT_CONTRACT,
+ settings_digest=AGENT_SETTING_DIGEST,
+ payload=AGENT_PAYLOAD
+)
+print("verify:", result)
+```
+For more examples, see the [test cases](https://github.com/APRO-com/ai-agent-sdk-python/tree/main/tests).
+
+
+# Other SDKs
+
+JAVA: https://github.com/APRO-com/ai-agent-sdk-java
+
+RUST: https://github.com/APRO-com/ai-agent-sdk-rust
+
+GOLANG: https://github.com/APRO-com/ai-agent-sdk-go
+
+# Support
+
+For issues or feature requests:
+1. Check existing documentation
+2. Submit a GitHub issue with relevant details
+3. Include transaction logs and system info if applicable
+
+# Contributing
+
+We welcome pull requests! Refer to the project’s CONTRIBUTING.md and open discussions to coordinate efforts.
+
+# Credits
+
+- [APRO](https://www.apro.com/) - Plugin sponsor and partner
+- [ai-agent-sdk-js](https://github.com/APRO-com/ai-agent-sdk-js) - Underlying agent SDK
+- [ethers.js](https://docs.ethers.io/) - Transaction and contract interaction
+- Community contributors for feedback and testing
+
+For more information about Apro plugin capabilities:
+
+- [Apro Documentation](https://docs.apro.com/en)
+
+# License
+
+This plugin is part of the Eliza project. Refer to the main project repository for licensing details.
\ No newline at end of file
diff --git a/docs/Geneplore AI/README.md b/docs/Geneplore AI/README.md
new file mode 100644
index 0000000..27719e0
--- /dev/null
+++ b/docs/Geneplore AI/README.md
@@ -0,0 +1,17 @@
+# [Geneplore AI](https://geneplore.com/bot)
+
+## Geneplore AI is building the world's easiest way to use AI - Use 50+ models, all on Discord
+
+Chat with the all-new Deepseek v3, GPT-4o, Claude 3 Opus, LLaMA 3, Gemini Pro, FLUX.1, and ChatGPT with **one bot**. Generate videos with Stable Diffusion Video, and images with the newest and most popular models available.
+
+Don't like how the bot responds? Simply change the model in *seconds* and continue chatting like normal, without adding another bot to your server. No more fiddling with API keys and webhooks - every model is completely integrated into the bot.
+
+**NEW:** Try the most powerful open AI model, Deepseek v3, for free with our bot. Simply type /chat and select Deepseek in the model list.
+
+
+
+Use the bot trusted by over 60,000 servers and hundreds of paying subscribers, without the hassle of multiple $20/month subscriptions and complicated programming.
+
+https://geneplore.com
+
+© 2025 Geneplore AI, All Rights Reserved.
diff --git a/docs/Ncurator/README.md b/docs/Ncurator/README.md
new file mode 100644
index 0000000..9661cdd
--- /dev/null
+++ b/docs/Ncurator/README.md
@@ -0,0 +1,12 @@
+
+
+# [Ncurator](https://www.ncurator.com)
+
+Knowledge Base AI Q&A Assistant -
+Let AI help you organize and analyze knowledge
+
+## UI
+
+
+## Integrate with Deepseek API
+
\ No newline at end of file
diff --git a/docs/Ncurator/README_cn.md b/docs/Ncurator/README_cn.md
new file mode 100644
index 0000000..8b9f87b
--- /dev/null
+++ b/docs/Ncurator/README_cn.md
@@ -0,0 +1,11 @@
+
+
+# [Ncurator](https://www.ncurator.com)
+
+知识库AI问答助手-让AI帮助你整理与分析知识
+
+## UI
+
+
+## 配置 Deepseek API
+
\ No newline at end of file
diff --git a/docs/Ncurator/assets/logo.png b/docs/Ncurator/assets/logo.png
new file mode 100644
index 0000000..09bda82
Binary files /dev/null and b/docs/Ncurator/assets/logo.png differ
diff --git a/docs/Ncurator/assets/screenshot1.png b/docs/Ncurator/assets/screenshot1.png
new file mode 100644
index 0000000..3d1f517
Binary files /dev/null and b/docs/Ncurator/assets/screenshot1.png differ
diff --git a/docs/Ncurator/assets/screenshot2.png b/docs/Ncurator/assets/screenshot2.png
new file mode 100644
index 0000000..2d8b119
Binary files /dev/null and b/docs/Ncurator/assets/screenshot2.png differ
diff --git a/docs/Ncurator/assets/screenshot3.png b/docs/Ncurator/assets/screenshot3.png
new file mode 100644
index 0000000..91276b1
Binary files /dev/null and b/docs/Ncurator/assets/screenshot3.png differ
diff --git a/docs/Siyuan/README.md b/docs/SiYuan/README.md
similarity index 93%
rename from docs/Siyuan/README.md
rename to docs/SiYuan/README.md
index fb13da8..ec5cdc6 100644
--- a/docs/Siyuan/README.md
+++ b/docs/SiYuan/README.md
@@ -2,7 +2,7 @@
-
+
---
diff --git a/docs/Siyuan/README_cn.md b/docs/SiYuan/README_cn.md
similarity index 92%
rename from docs/Siyuan/README_cn.md
rename to docs/SiYuan/README_cn.md
index b3c7247..c1123bc 100644
--- a/docs/Siyuan/README_cn.md
+++ b/docs/SiYuan/README_cn.md
@@ -1,6 +1,6 @@
# README_cn
-
+
---
diff --git a/docs/Siyuan/assets/image-20250122162241-32a4oma.png b/docs/SiYuan/assets/image-20250122162241-32a4oma.png
similarity index 100%
rename from docs/Siyuan/assets/image-20250122162241-32a4oma.png
rename to docs/SiYuan/assets/image-20250122162241-32a4oma.png
diff --git a/docs/Siyuan/assets/image-20250122162425-wlsgw0u.png b/docs/SiYuan/assets/image-20250122162425-wlsgw0u.png
similarity index 100%
rename from docs/Siyuan/assets/image-20250122162425-wlsgw0u.png
rename to docs/SiYuan/assets/image-20250122162425-wlsgw0u.png
diff --git a/docs/Siyuan/assets/image-20250122163007-hkuruoe.png b/docs/SiYuan/assets/image-20250122163007-hkuruoe.png
similarity index 100%
rename from docs/Siyuan/assets/image-20250122163007-hkuruoe.png
rename to docs/SiYuan/assets/image-20250122163007-hkuruoe.png
diff --git a/docs/Siyuan/assets/image-20250122162731-7wkftbw.png b/docs/Siyuan/assets/image-20250122162731-7wkftbw.png
deleted file mode 100644
index 2bb3189..0000000
Binary files a/docs/Siyuan/assets/image-20250122162731-7wkftbw.png and /dev/null differ
diff --git a/docs/Typral/README.md b/docs/Typral/README.md
new file mode 100644
index 0000000..30c1f83
--- /dev/null
+++ b/docs/Typral/README.md
@@ -0,0 +1,11 @@
+
+
+# [Typral](https://www.typral.com)
+
+Fast AI writing assistant - Let AI help you quickly improve articles, papers, text...
+
+## UI
+
+
+## Integrate with Deepseek API
+
\ No newline at end of file
diff --git a/docs/Typral/README_cn.md b/docs/Typral/README_cn.md
new file mode 100644
index 0000000..88d3162
--- /dev/null
+++ b/docs/Typral/README_cn.md
@@ -0,0 +1,11 @@
+
+
+# [Typral](https://www.typral.com)
+
+超快的AI写作助手 - 让AI帮你快速优化日报,文章,文本等等...
+
+## UI
+
+
+## 配置 Deepseek API
+
\ No newline at end of file
diff --git a/docs/Typral/assets/screenshot1.png b/docs/Typral/assets/screenshot1.png
new file mode 100644
index 0000000..a2ad8d4
Binary files /dev/null and b/docs/Typral/assets/screenshot1.png differ
diff --git a/docs/Typral/assets/screenshot2.png b/docs/Typral/assets/screenshot2.png
new file mode 100644
index 0000000..763c981
Binary files /dev/null and b/docs/Typral/assets/screenshot2.png differ
diff --git a/docs/autoflow/README.md b/docs/autoflow/README.md
new file mode 100644
index 0000000..e024893
--- /dev/null
+++ b/docs/autoflow/README.md
@@ -0,0 +1,23 @@
+# Autoflow
+
+
+
+[AutoFlow](https://github.com/pingcap/autoflow) is an open-source knowledge base tool based on GraphRAG (Graph-based Retrieval-Augmented Generation), built on [TiDB](https://www.pingcap.com/ai?utm_source=tidb.ai&utm_medium=community) Vector, LlamaIndex, and DSPy. It provides a Perplexity-like search interface and allows easy integration of AutoFlow's conversational search window into your website by embedding a simple JavaScript snippet.
+
+## UI
+
+1. **Perplexity-style Conversational Search page**: Our platform features an advanced built-in website crawler, designed to elevate your browsing experience. This crawler effortlessly navigates official and documentation sites, ensuring comprehensive coverage and streamlined search processes through sitemap URL scraping.
+
+ 
+
+2. **Embeddable JavaScript Snippet**: Integrate our conversational search window effortlessly into your website by copying and embedding a simple JavaScript code snippet. This widget, typically placed at the bottom right corner of your site, facilitates instant responses to product-related queries.
+
+ 
+
+## Integrate with Deepseek API
+
+- Click the tab `Models` then `LLMs` to enter the LLM model management page.
+- Click the `Create` button to create a new LLM model.
+- Input data like below, then click the `Create LLM` button.
+
+
diff --git a/docs/autoflow/README_cn.md b/docs/autoflow/README_cn.md
new file mode 100644
index 0000000..1de1269
--- /dev/null
+++ b/docs/autoflow/README_cn.md
@@ -0,0 +1,23 @@
+# Autoflow
+
+
+
+[AutoFlow](https://github.com/pingcap/autoflow) 是一个基于 GraphRAG(基于图的检索增强生成)的开源知识库工具,构建于 [TiDB](https://www.pingcap.com/ai?utm_source=tidb.ai&utm_medium=community) Vector、LlamaIndex 和 DSPy 之上。它提供类似 Perplexity 的搜索界面,并允许通过嵌入简单的 JavaScript 代码片段,将 AutoFlow 的对话式搜索窗口轻松集成到您的网站中。
+
+## UI 界面
+
+1. **Perplexity 风格的对话式搜索页面**:我们的平台配备了高级内置网站爬虫,旨在提升您的浏览体验。该爬虫能够轻松抓取官方网站和文档站点,通过 sitemap 抓取,实现全面覆盖和高效搜索。
+
+ 
+
+2. **可嵌入的 JavaScript 代码片段**:通过复制并嵌入一段简单的 JavaScript 代码,即可轻松将我们的对话式搜索窗口集成到您的网站中。此小部件通常放置在网站右下角,可即时回答与产品相关的查询。
+
+ 
+
+## 集成 Deepseek API
+
+- 点击 `Models` 选项卡,然后进入 `LLMs` 以进入 LLM 模型管理页面。
+- 点击 `Create` 按钮创建一个新的 LLM 模型。
+- 按照下方示例输入数据,然后点击 `Create LLM` 按钮。
+
+
diff --git a/docs/avante.nvim/README.md b/docs/avante.nvim/README.md
index 1b350d5..71860ee 100644
--- a/docs/avante.nvim/README.md
+++ b/docs/avante.nvim/README.md
@@ -25,16 +25,14 @@ return {
lazy = false,
version = false, -- set this if you want to always pull the latest change
opts = {
- provider = "openai",
- auto_suggestions_provider = "openai", -- Since auto-suggestions are a high-frequency operation and therefore expensive, it is recommended to specify an inexpensive provider or even a free provider: copilot
- openai = {
- endpoint = "https://api.deepseek.com/v1",
- model = "deepseek-chat",
- timeout = 30000, -- Timeout in milliseconds
- temperature = 0,
- max_tokens = 4096,
- -- optional
- api_key_name = "OPENAI_API_KEY", -- default OPENAI_API_KEY if not set
+ provider = "deepseek",
+ vendors = {
+ deepseek = {
+ __inherited_from = "openai",
+ api_key_name = "DEEPSEEK_API_KEY",
+ endpoint = "https://api.deepseek.com",
+ model = "deepseek-coder",
+ },
},
},
-- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
diff --git a/docs/avante.nvim/README_cn.md b/docs/avante.nvim/README_cn.md
index 3ada845..40f19bf 100644
--- a/docs/avante.nvim/README_cn.md
+++ b/docs/avante.nvim/README_cn.md
@@ -25,16 +25,14 @@ return {
lazy = false,
version = false, -- set this if you want to always pull the latest change
opts = {
- provider = "openai",
- auto_suggestions_provider = "openai", -- Since auto-suggestions are a high-frequency operation and therefore expensive, it is recommended to specify an inexpensive provider or even a free provider: copilot
- openai = {
- endpoint = "https://api.deepseek.com/v1",
- model = "deepseek-chat",
- timeout = 30000, -- Timeout in milliseconds
- temperature = 0,
- max_tokens = 4096,
- -- optional
- api_key_name = "OPENAI_API_KEY", -- default OPENAI_API_KEY if not set
+ provider = "deepseek",
+ vendors = {
+ deepseek = {
+ __inherited_from = "openai",
+ api_key_name = "DEEPSEEK_API_KEY",
+ endpoint = "https://api.deepseek.com",
+ model = "deepseek-coder",
+ },
},
},
-- if you want to build from source then do `make BUILD_FROM_SOURCE=true`
diff --git a/docs/codecompanion.nvim/README.md b/docs/codecompanion.nvim/README.md
index 4e9eb28..9ab08cb 100644
--- a/docs/codecompanion.nvim/README.md
+++ b/docs/codecompanion.nvim/README.md
@@ -34,9 +34,8 @@ return {
require("codecompanion").setup({
adapters = {
deepseek = function()
- return require("codecompanion.adapters").extend("openai_compatible", {
+ return require("codecompanion.adapters").extend("deepseek", {
env = {
- url = "https://api.deepseek.com",
api_key = "YOUR_API_KEY",
},
})
@@ -71,9 +70,8 @@ later(function()
require("codecompanion").setup({
adapters = {
deepseek = function()
- return require("codecompanion.adapters").extend("openai_compatible", {
+ return require("codecompanion.adapters").extend("deepseek", {
env = {
- url = "https://api.deepseek.com",
api_key = "YOUR_API_KEY",
},
})
diff --git a/docs/codecompanion.nvim/README_cn.md b/docs/codecompanion.nvim/README_cn.md
index 01173ec..df69ebb 100644
--- a/docs/codecompanion.nvim/README_cn.md
+++ b/docs/codecompanion.nvim/README_cn.md
@@ -34,9 +34,8 @@ return {
require("codecompanion").setup({
adapters = {
deepseek = function()
- return require("codecompanion.adapters").extend("openai_compatible", {
+ return require("codecompanion.adapters").extend("deepseek", {
env = {
- url = "https://api.deepseek.com",
api_key = "YOUR_API_KEY",
},
})
@@ -71,9 +70,8 @@ later(function()
require("codecompanion").setup({
adapters = {
deepseek = function()
- return require("codecompanion.adapters").extend("openai_compatible", {
+ return require("codecompanion.adapters").extend("deepseek", {
env = {
- url = "https://api.deepseek.com",
api_key = "YOUR_API_KEY",
},
})
diff --git a/docs/codegate/README.md b/docs/codegate/README.md
new file mode 100644
index 0000000..9653112
--- /dev/null
+++ b/docs/codegate/README.md
@@ -0,0 +1,158 @@
+# CodeGate: secure AI code generation
+
+CodeGate is a **local gateway** that makes AI agents and coding assistants safer. It
+ensures AI-generated recommendations adhere to best practices while safeguarding
+your code's integrity and protecting your privacy. With CodeGate, you can
+confidently leverage AI in your development workflow without sacrificing
+security or productivity.
+
+
+
+
+
+
+---
+## ✨ Why choose CodeGate?
+
+AI coding assistants are powerful, but they can inadvertently introduce risks.
+CodeGate protects your development process by:
+
+- 🔒 Preventing accidental exposure of secrets and sensitive data
+- 🛡️ Ensuring AI suggestions follow secure coding practices
+- ⚠️ Blocking recommendations of known malicious or deprecated libraries
+- 🔍 Providing real-time security analysis of AI suggestions
+
+---
+## 🚀 Quickstart with 🐋 Deepseek!
+
+### Prerequisites
+
+CodeGate is distributed as a Docker container. You need a container runtime like
+Docker Desktop or Docker Engine. Podman and Podman Desktop are also supported.
+CodeGate works on Windows, macOS, and Linux operating systems with x86_64 and
+arm64 (ARM and Apple Silicon) CPU architectures.
+
+These instructions assume the `docker` CLI is available. If you use Podman,
+replace `docker` with `podman` in all commands.
+
+### Installation
+
+To start CodeGate, run this simple command (making sure to pass in the
+DeepSeek API URL as the `CODEGATE_PROVIDER_OPENAI_URL` environment variable):
+
+```bash
+docker run --name codegate -d -p 8989:8989 -p 9090:9090 -p 8990:8990 \
+ -e CODEGATE_PROVIDER_OPENAI_URL=https://api.deepseek.com \
+ --mount type=volume,src=codegate_volume,dst=/app/codegate_volume \
+ --restart unless-stopped ghcr.io/stacklok/codegate:latest
+```
+
+That’s it! CodeGate is now running locally.
+
+### Using CodeGate and 🐋 Deepseek within Continue
+
+To use Continue with CodeGate, open the Continue settings and add
+the following configuration:
+
+```json
+{
+ "title": "Deepseek-r1",
+ "provider": "openai",
+ "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+ "apiKey": "YOUR_DEEPSEEK_API_KEY",
+ "apiBase": "http://localhost:8989/openai",
+}
+```
+
+Just use Continue as normal, and you no longer have to worry about security
+or privacy concerns!
+
+
+
+
+### Using CodeGate and 🐋 Deepseek with Cline
+
+To use Cline with CodeGate, open the Cline settings and add
+the following configuration:
+
+
+
+Just use Cline as normal, and you no longer have to worry about security
+or privacy concerns!
+
+
+
+---
+## 🖥️ Dashboard
+
+CodeGate includes a web dashboard that provides:
+
+- A view of **security risks** detected by CodeGate
+- A **history of interactions** between your AI coding assistant and your LLM
+
+
+
+
+
+
+### Accessing the dashboard
+
+Open [http://localhost:9090](http://localhost:9090) in your web browser to
+access the dashboard.
+
+To learn more, visit the
+[CodeGate Dashboard documentation](https://docs.codegate.ai/how-to/dashboard).
+
+---
+## 🔐 Features
+
+### Secrets encryption
+
+CodeGate helps you protect sensitive information from being accidentally exposed
+to AI models and third-party AI provider systems by redacting detected secrets
+from your prompts using encryption.
+[Learn more](https://docs.codegate.ai/features/secrets-encryption)
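The redaction idea can be illustrated with a toy sketch (purely hypothetical patterns and logic; CodeGate's actual detection and encryption are more sophisticated — see the docs linked above):

```python
import re

# Toy sketch of prompt redaction -- NOT CodeGate's real implementation.
# Detect common credential patterns and replace them before a prompt
# leaves the machine.
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),              # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),           # GitHub personal access token
    re.compile(r"(?i)api[_-]?key\s*[:=]\s*\S+"),  # generic "api_key = ..." values
]

def redact(prompt: str) -> str:
    """Replace anything that looks like a secret with a placeholder."""
    for pattern in SECRET_PATTERNS:
        prompt = pattern.sub("REDACTED", prompt)
    return prompt

print(redact("api_key = sk-abc123"))  # → REDACTED
```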
+
+### Dependency risk awareness
+
+LLMs’ knowledge cutoff date is often months or even years in the past. They
+might suggest outdated, vulnerable, or non-existent packages (hallucinations),
+exposing you and your users to security risks.
+
+CodeGate scans direct, transitive, and development dependencies in your package
+definition files, installation scripts, and source code imports that you supply
+as context to an LLM.
+[Learn more](https://docs.codegate.ai/features/dependency-risk)
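As a purely illustrative sketch of the deny-list concept (hypothetical package names; CodeGate's real scanning also covers transitive dependencies, manifests, and installation scripts):

```python
# Toy illustration of dependency deny-listing -- not CodeGate's actual logic.
# Flag imports that appear on a list of known-bad or deprecated packages.
DENY_LIST = {"request", "urllib2"}  # hypothetical known-bad/deprecated names

def flag_risky_imports(source: str) -> list[str]:
    """Return deny-listed package names imported by the given source code."""
    flagged = []
    for line in source.splitlines():
        line = line.strip()
        if line.startswith(("import ", "from ")):
            package = line.split()[1].split(".")[0]
            if package in DENY_LIST:
                flagged.append(package)
    return flagged

print(flag_risky_imports("import request\nfrom os import path"))  # → ['request']
```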
+
+### Security reviews
+
+CodeGate performs security-centric code reviews, identifying insecure patterns
+or potential vulnerabilities to help you adopt more secure coding practices.
+[Learn more](https://docs.codegate.ai/features/security-reviews)
+
+---
+## 🛡️ Privacy first
+
+Unlike other tools, with CodeGate **your code never leaves your machine**.
+CodeGate is built with privacy at its core:
+
+- 🏠 **Everything stays local**
+- 🚫 **No external data collection**
+- 🔐 **No calling home or telemetry**
+- 💪 **Complete control over your data**
+
+---
+## 🛠️ Development
+
+Are you a developer looking to contribute? Dive into our technical resources:
+
+- [Development guide](https://github.com/stacklok/codegate/blob/main/docs/development.md)
+- [CLI commands and flags](https://github.com/stacklok/codegate/blob/main/docs/cli.md)
+- [Configuration system](https://github.com/stacklok/codegate/blob/main/docs/configuration.md)
+- [Logging system](https://github.com/stacklok/codegate/blob/main/docs/logging.md)
+
+---
+## 📜 License
+
+CodeGate is licensed under the terms specified in the
+[LICENSE file](https://github.com/stacklok/codegate/blob/main/LICENSE).
diff --git a/docs/codegate/README_cn.md b/docs/codegate/README_cn.md
new file mode 100644
index 0000000..10de199
--- /dev/null
+++ b/docs/codegate/README_cn.md
@@ -0,0 +1,132 @@
+# CodeGate:安全的 AI 代码生成
+
+CodeGate 是一个**本地代理**,可以让 AI 代理和编码助手更加安全。它确保 AI 生成的建议遵循最佳实践,同时保护您的代码完整性和隐私。使用 CodeGate,您可以在开发工作流程中自信地利用 AI,而不会牺牲安全性或生产力。
+
+
+
+
+
+
+---
+## ✨ 为什么选择 CodeGate?
+
+AI 编码助手功能强大,但可能会无意中带来风险。CodeGate 通过以下方式保护您的开发过程:
+
+- 🔒 防止意外泄露机密和敏感数据
+- 🛡️ 确保 AI 建议遵循安全编码实践
+- ⚠️ 阻止推荐已知的恶意或已弃用的库
+- 🔍 提供 AI 建议的实时安全分析
+
+---
+## 🚀 使用 🐋 Deepseek 快速开始!
+
+### 前提条件
+
+CodeGate 以 Docker 容器的形式分发。您需要一个容器运行时,如 Docker Desktop 或 Docker Engine。同时也支持 Podman 和 Podman Desktop。CodeGate 可在 Windows、macOS 和 Linux 操作系统上运行,支持 x86_64 和 arm64(ARM 和 Apple Silicon)CPU 架构。
+
+以下说明基于 `docker` CLI 可用的前提。如果您使用 Podman,请在所有命令中将 `docker` 替换为 `podman`。
+
+### 安装
+
+要启动 CodeGate,运行这个简单的命令(确保将 deepseek.com URL 作为 `CODEGATE_PROVIDER_OPENAI_URL` 环境变量传入):
+
+```bash
+docker run --name codegate -d -p 8989:8989 -p 9090:9090 -p 8990:8990 \
+ -e CODEGATE_PROVIDER_OPENAI_URL=https://api.deepseek.com \
+ --mount type=volume,src=codegate_volume,dst=/app/codegate_volume \
+ --restart unless-stopped ghcr.io/stacklok/codegate:latest
+```
+
+就是这样!CodeGate 现在在本地运行了。
+
+### 在 Continue 中使用 CodeGate 和 🐋 Deepseek
+
+要在 Continue 中使用 CodeGate,打开 Continue 设置并添加以下配置:
+
+```json
+{
+ "title": "Deepseek-r1",
+ "provider": "openai",
+ "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+ "apiKey": "YOUR_DEEPSEEK_API_KEY",
+ "apiBase": "http://localhost:8989/openai",
+}
+```
+
+像往常一样使用 Continue,您不再需要担心安全或隐私问题!
+
+
+
+### 在 Cline 中使用 CodeGate 和 🐋 Deepseek
+
+要在 Cline 中使用 CodeGate,打开 Cline 设置并添加以下配置:
+
+
+
+像往常一样使用 Cline,您不再需要担心安全或隐私问题!
+
+
+
+---
+## 🖥️ 仪表板
+
+CodeGate 包含一个 Web 仪表板,提供:
+
+- CodeGate 检测到的**安全风险**视图
+- AI 编码助手与 LLM 之间的**交互历史**
+
+
+
+
+
+
+### 访问仪表板
+
+在您的网络浏览器中打开 [http://localhost:9090](http://localhost:9090) 以访问仪表板。
+
+要了解更多信息,请访问 [CodeGate 仪表板文档](https://docs.codegate.ai/how-to/dashboard)。
+
+---
+## 🔐 功能
+
+### 机密加密
+
+CodeGate 通过使用加密对检测到的机密进行编辑,帮助您防止敏感信息意外暴露给 AI 模型和第三方 AI 提供商系统。
+[了解更多](https://docs.codegate.ai/features/secrets-encryption)
+
+### 依赖风险意识
+
+LLM 的知识截止日期通常是几个月甚至几年前。它们可能会建议过时的、易受攻击的或不存在的包(幻觉),使您和您的用户面临安全风险。
+
+CodeGate 扫描您作为上下文提供给 LLM 的包定义文件、安装脚本和源代码导入中的直接依赖、传递依赖和开发依赖。
+[了解更多](https://docs.codegate.ai/features/dependency-risk)
+
+### 安全审查
+
+CodeGate 执行以安全为中心的代码审查,识别不安全的模式或潜在的漏洞,帮助您采用更安全的编码实践。
+[了解更多](https://docs.codegate.ai/features/security-reviews)
+
+---
+## 🛡️ 隐私优先
+
+与其他工具不同,使用 CodeGate **您的代码永远不会离开您的机器**。CodeGate 以隐私为核心构建:
+
+- 🏠 **所有数据均本地存储**
+- 🚫 **没有外部数据收集**
+- 🔐 **没有回传或遥测**
+- 💪 **完全控制您的数据**
+
+---
+## 🛠️ 开发
+
+您是想要贡献的开发者吗?深入了解我们的技术资源:
+
+- [开发指南](https://github.com/stacklok/codegate/blob/main/docs/development.md)
+- [CLI 命令和标志](https://github.com/stacklok/codegate/blob/main/docs/cli.md)
+- [配置系统](https://github.com/stacklok/codegate/blob/main/docs/configuration.md)
+- [日志系统](https://github.com/stacklok/codegate/blob/main/docs/logging.md)
+
+---
+## 📜 许可证
+
+CodeGate 根据 [LICENSE 文件](https://github.com/stacklok/codegate/blob/main/LICENSE) 中指定的条款获得许可。
\ No newline at end of file
diff --git a/docs/codegate/assets/cline-screen.png b/docs/codegate/assets/cline-screen.png
new file mode 100644
index 0000000..f59dd29
Binary files /dev/null and b/docs/codegate/assets/cline-screen.png differ
diff --git a/docs/codegate/assets/cline-settings.png b/docs/codegate/assets/cline-settings.png
new file mode 100644
index 0000000..3b60c5e
Binary files /dev/null and b/docs/codegate/assets/cline-settings.png differ
diff --git a/docs/codegate/assets/codegate.png b/docs/codegate/assets/codegate.png
new file mode 100644
index 0000000..d625d61
Binary files /dev/null and b/docs/codegate/assets/codegate.png differ
diff --git a/docs/codegate/assets/continue-screen.png b/docs/codegate/assets/continue-screen.png
new file mode 100644
index 0000000..1b93a5d
Binary files /dev/null and b/docs/codegate/assets/continue-screen.png differ
diff --git a/docs/continue/README.md b/docs/continue/README.md
index c32595d..f03b007 100644
--- a/docs/continue/README.md
+++ b/docs/continue/README.md
@@ -1,4 +1,4 @@
-
+
# [Continue](https://continue.dev/)
diff --git a/docs/continue/README_cn.md b/docs/continue/README_cn.md
index 195b790..7f0ec17 100644
--- a/docs/continue/README_cn.md
+++ b/docs/continue/README_cn.md
@@ -1,4 +1,4 @@
-
+
# [Continue](https://continue.dev/)
diff --git a/docs/curator/README.md b/docs/curator/README.md
new file mode 100644
index 0000000..c307d9d
--- /dev/null
+++ b/docs/curator/README.md
@@ -0,0 +1,30 @@
+
+
+
+
+# [Curator](https://github.com/bespokelabsai/curator)
+
+
+Curator is an open-source tool to curate large-scale datasets for post-training LLMs.
+
+Curator was used to curate [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k), a reasoning dataset to train a fully open reasoning model [Bespoke-Stratos](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation).
+
+
+### Curator supports:
+
+- Calling Deepseek API for scalable synthetic data curation
+- Easy structured data extraction
+- Caching and automatic recovery
+- Dataset visualization
+- Saving $$$ using batch mode
+
+### Call Deepseek API with Curator easily:
+
+
+
+# Get Started here
+
+- [Colab Example](https://colab.research.google.com/drive/1Z78ciwHIl_ytACzcrslNrZP2iwK05eIF?usp=sharing)
+- [Github Repo](https://github.com/bespokelabsai/curator)
+- [Documentation](https://docs.bespokelabs.ai/)
+- [Discord](https://discord.com/invite/KqpXvpzVBS)
diff --git a/docs/curator/README_cn.md b/docs/curator/README_cn.md
new file mode 100644
index 0000000..2c7dbe2
--- /dev/null
+++ b/docs/curator/README_cn.md
@@ -0,0 +1,29 @@
+
+
+
+# [Curator](https://github.com/bespokelabsai/curator)
+
+
+Curator 是一个用于后训练大型语言模型 (LLMs) 和结构化数据提取的制作与管理可扩展的数据集的开源工具。
+
+Curator 被用来制作 [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k),这是一个用于训练完全开源的推理模型 [Bespoke-Stratos](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation) 的推理数据集。
+
+
+### Curator 支持:
+
+- 调用 Deepseek API 进行可扩展的合成数据管理
+- 简便的结构化数据提取
+- 缓存和自动恢复
+- 数据集可视化
+- 使用批处理模式节省费用
+
+### 轻松使用 Curator 调用 Deepseek API:
+
+
+
+# 从这里开始
+
+- [Colab 示例](https://colab.research.google.com/drive/1Z78ciwHIl_ytACzcrslNrZP2iwK05eIF?usp=sharing)
+- [Github 仓库](https://github.com/bespokelabsai/curator)
+- [文档](https://docs.bespokelabs.ai/)
+- [Discord](https://discord.com/invite/KqpXvpzVBS)
\ No newline at end of file
diff --git a/docs/minuet-ai.nvim/README.md b/docs/minuet-ai.nvim/README.md
new file mode 100644
index 0000000..a0386bf
--- /dev/null
+++ b/docs/minuet-ai.nvim/README.md
@@ -0,0 +1,182 @@
+
+# Minuet AI
+
+Minuet AI: Dance with Intelligence in Your Code 💃.
+
+`Minuet-ai` brings the grace and harmony of a minuet to your coding process.
+Just as dancers move through a minuet, completions flow in step with your typing.
+
+# Features
+
+- AI-powered code completion with dual modes:
+ - Specialized prompts and various enhancements for chat-based LLMs on code completion tasks.
+ - Fill-in-the-middle (FIM) completion for compatible models (DeepSeek,
+ Codestral, Qwen, and others).
+- Support for multiple AI providers (OpenAI, Claude, Gemini, Codestral, Ollama, and
+ OpenAI-compatible services).
+- Customizable configuration options.
+- Streaming support to enable completion delivery even with slower LLMs.
+- Support `nvim-cmp`, `blink-cmp`, `virtual text` frontend.
+
+# Requirements
+
+- Neovim 0.10+.
+- [plenary.nvim](https://github.com/nvim-lua/plenary.nvim)
+- optional: [nvim-cmp](https://github.com/hrsh7th/nvim-cmp)
+- optional: [blink.cmp](https://github.com/Saghen/blink.cmp)
+- An API key for at least one of the supported AI providers
+
+# Installation
+
+**Lazy.nvim**:
+
+```lua
+specs = {
+ {
+ 'milanglacier/minuet-ai.nvim',
+ config = function()
+ require('minuet').setup {
+ -- Your configuration options here
+ }
+ end,
+ },
+ { 'nvim-lua/plenary.nvim' },
+ -- optional, if you are using virtual-text frontend, nvim-cmp is not
+ -- required.
+ { 'hrsh7th/nvim-cmp' },
+ -- optional, if you are using virtual-text frontend, blink is not required.
+ { 'Saghen/blink.cmp' },
+}
+```
+
+**Rocks.nvim**:
+
+`Minuet` is available on luarocks.org. Simply run `Rocks install minuet-ai.nvim`
+to install it like any other luarocks package.
+
+**Setting up with virtual text**:
+
+```lua
+require('minuet').setup {
+ virtualtext = {
+ auto_trigger_ft = {},
+ keymap = {
+ -- accept whole completion
+ accept = '<A-A>',
+ -- accept one line
+ accept_line = '<A-a>',
+ -- accept n lines (prompts for number)
+ -- e.g. "A-z 2 CR" will accept 2 lines
+ accept_n_lines = '<A-z>',
+ -- Cycle to prev completion item, or manually invoke completion
+ prev = '<A-[>',
+ -- Cycle to next completion item, or manually invoke completion
+ next = '<A-]>',
+ dismiss = '<A-e>',
+ },
+ },
+}
+```
+
+**Setting up with nvim-cmp**:
+
+
+
+```lua
+require('cmp').setup {
+ sources = {
+ {
+ -- Include minuet as a source to enable autocompletion
+ { name = 'minuet' },
+ -- and your other sources
+ }
+ },
+ performance = {
+ -- It is recommended to increase the timeout duration due to
+ -- the typically slower response speed of LLMs compared to
+ -- other completion sources. This is not needed when you only
+ -- need manual completion.
+ fetching_timeout = 2000,
+ },
+}
+
+
+-- If you wish to invoke completion manually,
+-- The following configuration binds `A-y` key
+-- to invoke the configuration manually.
+require('cmp').setup {
+ mapping = {
+ ["<A-y>"] = require('minuet').make_cmp_map()
+ -- and your other keymappings
+ },
+}
+```
+
+
+
+**Setting up with blink-cmp**:
+
+
+
+```lua
+require('blink-cmp').setup {
+ keymap = {
+ -- Manually invoke minuet completion.
+ ['<A-y>'] = require('minuet').make_blink_map(),
+ },
+ sources = {
+ -- Enable minuet for autocomplete
+ default = { 'lsp', 'path', 'buffer', 'snippets', 'minuet' },
+ -- For manual completion only, remove 'minuet' from default
+ providers = {
+ minuet = {
+ name = 'minuet',
+ module = 'minuet.blink',
+ score_offset = 8, -- Gives minuet higher priority among suggestions
+ },
+ },
+ },
+ -- Recommended to avoid unnecessary request
+ completion = { trigger = { prefetch_on_insert = false } },
+}
+```
+
+
+
+**LLM Provider Examples**:
+
+**Deepseek**:
+
+```lua
+-- you can use deepseek with both openai_fim_compatible or openai_compatible provider
+require('minuet').setup {
+ provider = 'openai_fim_compatible',
+ provider_options = {
+ openai_fim_compatible = {
+ api_key = 'DEEPSEEK_API_KEY',
+ name = 'deepseek',
+ optional = {
+ max_tokens = 256,
+ top_p = 0.9,
+ },
+ },
+ },
+}
+
+
+-- or
+require('minuet').setup {
+ provider = 'openai_compatible',
+ provider_options = {
+ openai_compatible = {
+ end_point = 'https://api.deepseek.com/v1/chat/completions',
+ api_key = 'DEEPSEEK_API_KEY',
+ name = 'deepseek',
+ optional = {
+ max_tokens = 256,
+ top_p = 0.9,
+ },
+ },
+ },
+}
+```
diff --git a/docs/minuet-ai.nvim/README_cn.md b/docs/minuet-ai.nvim/README_cn.md
new file mode 100644
index 0000000..31610dd
--- /dev/null
+++ b/docs/minuet-ai.nvim/README_cn.md
@@ -0,0 +1,172 @@
+# Minuet AI
+
+Minuet AI:在您的代码中翩翩起舞,挥洒智能 💃。
+
+`Minuet-ai` 将小步舞曲的优雅与和谐带入您的编码流程。正如舞者在小步舞曲中舞动一样,补全也随着您的输入流畅呈现。
+
+# 特性
+
+- 基于 AI 的代码补全,提供双重模式:
+ - 针对代码补全任务,为基于聊天的 LLMs 提供专门的提示和各种增强功能。
+ - 针对兼容的模型(DeepSeek、Codestral、Qwen 等)提供中间填充 (FIM) 补全。
+- 支持多种 AI 提供商(OpenAI、Claude、Gemini、Codestral、Ollama 和兼容 OpenAI 的服务)。
+- 可自定义配置选项。
+- 支持流式传输,即使使用较慢的 LLMs 也能实现补全的交付。
+- 支持 `nvim-cmp`、`blink-cmp`、`virtual text` 前端。
+
+# 要求
+
+- Neovim 0.10+。
+- [plenary.nvim](https://github.com/nvim-lua/plenary.nvim)
+- 可选: [nvim-cmp](https://github.com/hrsh7th/nvim-cmp)
+- 可选: [blink.cmp](https://github.com/Saghen/blink.cmp)
+- 至少一个受支持的 AI 提供商的 API 密钥
+
+# 安装
+
+**Lazy.nvim:**
+
+```lua
+specs = {
+ {
+ 'milanglacier/minuet-ai.nvim',
+ config = function()
+ require('minuet').setup {
+ -- 在此处配置您的选项
+ }
+ end,
+ },
+ { 'nvim-lua/plenary.nvim' },
+ -- 可选,如果您使用 virtual-text 前端,则不需要 nvim-cmp。
+ { 'hrsh7th/nvim-cmp' },
+ -- 可选,如果您使用 virtual-text 前端,则不需要 blink。
+ { 'Saghen/blink.cmp' },
+}
+```
+
+**Rocks.nvim:**
+
+`Minuet` 可在 luarocks.org 上获取。只需运行 `Rocks install minuet-ai.nvim` 即可像安装其他 luarocks 包一样安装它。
+
+**使用 virtual text 进行设置:**
+
+```lua
+require('minuet').setup {
+ virtualtext = {
+ auto_trigger_ft = {},
+ keymap = {
+ -- 接受完整补全
+ accept = '<A-A>',
+ -- 接受一行
+ accept_line = '<A-a>',
+ -- 接受 n 行(提示输入数字)
+ -- 例如,“A-z 2 CR”将接受 2 行
+ accept_n_lines = '<A-z>',
+ -- 切换到上一个补全项,或手动调用补全
+ prev = '<A-[>',
+ -- 切换到下一个补全项,或手动调用补全
+ next = '<A-]>',
+ dismiss = '<A-e>',
+ },
+ },
+}
+```
+
+**使用 nvim-cmp 进行设置:**
+
+
+
+```lua
+require('cmp').setup {
+ sources = {
+ {
+ -- 包含 minuet 作为源以启用自动补全
+ { name = 'minuet' },
+ -- 和您的其他来源
+ }
+ },
+ performance = {
+ -- 建议增加超时时间,因为与其他补全来源相比,LLMs 的响应速度通常较慢。如果您只需要手动补全,则不需要此设置。
+ fetching_timeout = 2000,
+ },
+}
+
+
+-- If you prefer to invoke completion manually,
+-- the following binds the `A-y` key to manual completion.
+require('cmp').setup {
+ mapping = {
+ ["<A-y>"] = require('minuet').make_cmp_map()
+ -- and your other mappings
+ },
+}
+```
+
+
+
+**Setup with blink-cmp:**
+
+
+
+```lua
+require('blink-cmp').setup {
+ keymap = {
+ -- manually invoke minuet completion.
+ [''] = require('minuet').make_blink_map(),
+ },
+ sources = {
+ -- enable minuet for autocompletion
+ default = { 'lsp', 'path', 'buffer', 'snippets', 'minuet' },
+ -- for manual completion only, remove 'minuet' from the defaults
+ providers = {
+ minuet = {
+ name = 'minuet',
+ module = 'minuet.blink',
+ score_offset = 8, -- give minuet a higher priority among suggestions
+ },
+ },
+ },
+ -- recommended, to avoid unnecessary requests
+ completion = { trigger = { prefetch_on_insert = false } },
+}
+```
+
+
+
+**LLM provider examples:**
+
+**Deepseek:**
+
+```lua
+-- you can use either the openai_fim_compatible or the openai_compatible provider for deepseek
+require('minuet').setup {
+ provider = 'openai_fim_compatible',
+ provider_options = {
+ openai_fim_compatible = {
+ api_key = 'DEEPSEEK_API_KEY',
+ name = 'deepseek',
+ optional = {
+ max_tokens = 256,
+ top_p = 0.9,
+ },
+ },
+ },
+}
+
+
+-- or
+require('minuet').setup {
+ provider = 'openai_compatible',
+ provider_options = {
+ openai_compatible = {
+ end_point = 'https://api.deepseek.com/v1/chat/completions',
+ api_key = 'DEEPSEEK_API_KEY',
+ name = 'deepseek',
+ optional = {
+ max_tokens = 256,
+ top_p = 0.9,
+ },
+ },
+ },
+}
+```
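In both configurations above, `api_key` is set to `'DEEPSEEK_API_KEY'`, i.e. the *name* of an environment variable rather than the key itself, so the real key has to be exported before Neovim starts. A minimal shell sketch (the key value is a placeholder):

```shell
# Export the real DeepSeek key under the variable name that minuet's
# `api_key` option refers to; the value below is a placeholder.
export DEEPSEEK_API_KEY="your-deepseek-api-key"

# Confirm the variable is visible to child processes such as nvim.
printenv DEEPSEEK_API_KEY
```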
diff --git a/docs/solana-agent-kit/README.md b/docs/solana-agent-kit/README.md
new file mode 100644
index 0000000..07ca47c
--- /dev/null
+++ b/docs/solana-agent-kit/README.md
@@ -0,0 +1,67 @@
+# Solana Agent Kit
+
+
+
+---
+
+An open-source toolkit for connecting AI agents to Solana protocols. Any agent, using any model, can autonomously perform 60+ Solana actions.
+
+
+## Step 1
+
+Apply for an API key on the [DeepSeek Open Platform](https://platform.deepseek.com/).
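Once you have the key, export it in your shell; `@langchain/deepseek` reads `DEEPSEEK_API_KEY` from the environment by default (the value below is a placeholder):

```shell
# Placeholder value -- substitute the key obtained from the DeepSeek platform.
export DEEPSEEK_API_KEY="your-deepseek-api-key"

# The variable should now be non-empty.
test -n "$DEEPSEEK_API_KEY" && echo "DEEPSEEK_API_KEY is set"
```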
+
+## Step 2
+
+Initialize the DeepSeek LLM:
+
+```typescript
+import { ChatDeepSeek } from "@langchain/deepseek";
+
+const deepseek = new ChatDeepSeek({
+ model: "deepseek-chat",
+ temperature: 0,
+});
+```
+
+## Step 3
+
+Initialize the Solana Agent Kit with DeepSeek:
+
+```typescript
+// assumed imports and setup for this snippet (entry points may differ by version)
+import { SolanaAgentKit, createSolanaTools } from "solana-agent-kit";
+import { createReactAgent } from "@langchain/langgraph/prebuilt";
+import { MemorySaver } from "@langchain/langgraph";
+import { HumanMessage } from "@langchain/core/messages";
+
+const memory = new MemorySaver();
+const config = { configurable: { thread_id: "solana-agent" } };
+const userInput = "What is my wallet address?"; // example input
+
+const solanaAgent = new SolanaAgentKit(
+ process.env.SOLANA_PRIVATE_KEY!,
+ process.env.RPC_URL!,
+ {
+ OPENAI_API_KEY: process.env.OPENAI_API_KEY!,
+ HELIUS_API_KEY: process.env.HELIUS_API_KEY!,
+ PERPLEXITY_API_KEY: process.env.PERPLEXITY_API_KEY!,
+ },
+);
+
+const tools = createSolanaTools(solanaAgent);
+
+const agent = createReactAgent({
+ llm: deepseek,
+ tools,
+ checkpointSaver: memory,
+ messageModifier: `
+ You are a helpful agent that can interact onchain using the Solana Agent Kit. You are
+ empowered to interact onchain using your tools. If you ever need funds, you can request them from the
+ faucet. If not, you can provide your wallet details and request funds from the user. If there is a 5XX
+ (internal) HTTP error code, ask the user to try again later. If someone asks you to do something you
+ can't do with your currently available tools, you must say so, and encourage them to implement it
+ themselves using the Solana Agent Kit, recommend they go to https://www.solanaagentkit.xyz for more information. Be
+ concise and helpful with your responses. Refrain from restating your tools' descriptions unless it is explicitly requested.
+ `,
+});
+
+const stream = await agent.stream(
+ { messages: [new HumanMessage(userInput)] },
+ config,
+);
+```
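The agent above reads several values from `process.env` (and `ChatDeepSeek` from Step 2 looks for `DEEPSEEK_API_KEY`). A hypothetical `.env` file covering all of them; every value is a placeholder:

```shell
# Write a hypothetical .env for the agent; all values are placeholders.
cat > .env <<'EOF'
DEEPSEEK_API_KEY=your-deepseek-key
SOLANA_PRIVATE_KEY=your-base58-private-key
RPC_URL=https://api.mainnet-beta.solana.com
OPENAI_API_KEY=your-openai-key
HELIUS_API_KEY=your-helius-key
PERPLEXITY_API_KEY=your-perplexity-key
EOF

# Each of the 6 lines holds one KEY=value assignment.
grep -c '=' .env
```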
+
+More guides can be found in the [Solana Agent Kit documentation](https://docs.solanaagentkit.xyz/v0/introduction).
+
+
\ No newline at end of file
diff --git a/docs/solana-agent-kit/assets/sendai-logo.png b/docs/solana-agent-kit/assets/sendai-logo.png
new file mode 100644
index 0000000..638b962
Binary files /dev/null and b/docs/solana-agent-kit/assets/sendai-logo.png differ
diff --git a/docs/stranslate/README.md b/docs/stranslate/README.md
new file mode 100644
index 0000000..3d4c3fd
--- /dev/null
+++ b/docs/stranslate/README.md
@@ -0,0 +1,31 @@
+
+
+# [`STranslate`](https://stranslate.zggsong.com/)
+
+STranslate is a translation and OCR tool that is ready to use on the go.
+
+## Translation
+
+Supports multiple translation languages and various translation methods such as input, text selection, screenshot, clipboard monitoring, and mouse text selection. It also allows displaying multiple service translation results simultaneously for easy comparison.
+
+## OCR
+
+Supports fully offline OCR for Chinese, English, Japanese, and Korean, based on PaddleOCR, with excellent performance and quick response. It supports screenshot, clipboard, and file OCR, as well as silent OCR. Additionally, it supports OCR services from WeChat, Baidu, Tencent, OpenAI, and Google.
+
+## Services
+
+Supports integration with over ten translation services including DeepSeek, OpenAI, Gemini, ChatGLM, Baidu, Microsoft, Tencent, Youdao, and Alibaba. It also offers free API options. Built-in services like Microsoft, Yandex, Google, and Kingsoft PowerWord are ready to use out of the box.
+
+## Features
+
+Supports back-translation, global TTS, writing (directly translating and replacing selected content), custom prompts, QR code recognition, external calls, and more.
+
+## Main Interface
+
+
+
+## Configuration
+
+
+
+
\ No newline at end of file
diff --git a/docs/stranslate/README_cn.md b/docs/stranslate/README_cn.md
new file mode 100644
index 0000000..43f9b15
--- /dev/null
+++ b/docs/stranslate/README_cn.md
@@ -0,0 +1,31 @@
+
+
+# [`STranslate`](https://stranslate.zggsong.com/)
+
+STranslate is a translation and OCR tool that is ready to use on the go.
+
+## Translation
+
+Supports multiple translation languages and various translation methods such as input, text selection, screenshot, clipboard monitoring, and mouse text selection. It also allows displaying multiple service translation results simultaneously for easy comparison.
+
+## OCR
+
+Supports fully offline OCR for Chinese, English, Japanese, and Korean, based on PaddleOCR, with excellent performance and quick response. It supports screenshot, clipboard, and file OCR, as well as silent OCR. Additionally, it supports OCR services from WeChat, Baidu, Tencent, OpenAI, and Google.
+
+## Services
+
+Supports integration with over ten translation services including DeepSeek, OpenAI, Gemini, ChatGLM, Baidu, Microsoft, Tencent, Youdao, and Alibaba. It also offers free API options. Built-in services like Microsoft, Yandex, Google, and Kingsoft PowerWord are ready to use out of the box.
+
+## Features
+
+Supports back-translation, global TTS, writing (directly translating and replacing selected content), custom prompts, QR code recognition, external calls, and more.
+
+## Main Interface
+
+
+
+## Configuration
+
+
+
+
\ No newline at end of file
diff --git a/docs/stranslate/assets/main.png b/docs/stranslate/assets/main.png
new file mode 100644
index 0000000..7c5c832
Binary files /dev/null and b/docs/stranslate/assets/main.png differ
diff --git a/docs/stranslate/assets/settings_1.png b/docs/stranslate/assets/settings_1.png
new file mode 100644
index 0000000..70d84db
Binary files /dev/null and b/docs/stranslate/assets/settings_1.png differ
diff --git a/docs/stranslate/assets/settings_2.png b/docs/stranslate/assets/settings_2.png
new file mode 100644
index 0000000..7aaad02
Binary files /dev/null and b/docs/stranslate/assets/settings_2.png differ
diff --git a/docs/stranslate/assets/stranslate.svg b/docs/stranslate/assets/stranslate.svg
new file mode 100644
index 0000000..df8c4c3
--- /dev/null
+++ b/docs/stranslate/assets/stranslate.svg
@@ -0,0 +1,2 @@
+
+
\ No newline at end of file
diff --git a/docs/superagentx/README.md b/docs/superagentx/README.md
new file mode 100644
index 0000000..69de83c
--- /dev/null
+++ b/docs/superagentx/README.md
@@ -0,0 +1,33 @@
+# `SuperAgentX`
+
+> 🤖 SuperAgentX: A lightweight autonomous true multi-agent framework with AGI capabilities.
+
+**SuperAgentX Source Code**: [https://github.com/superagentxai/superagentx](https://github.com/superagentxai/superagentx)
+
+**DeepSeek AI Agent Example**: [https://github.com/superagentxai/superagentx/blob/master/tests/llm/test_deepseek_client.py](https://github.com/superagentxai/superagentx/blob/master/tests/llm/test_deepseek_client.py)
+
+**Documentation** : [https://docs.superagentx.ai/](https://docs.superagentx.ai/)
+
+The SuperAgentX framework integrates DeepSeek as an LLM service provider, enhancing its agents' reasoning and decision-making capabilities.
+
+## 🤖 Introduction
+
+`SuperAgentX` is an advanced agentic AI framework designed to accelerate the development of Artificial General Intelligence (AGI). It provides a powerful, modular, and flexible platform for building autonomous AI agents capable of executing complex tasks with minimal human intervention.
+
+
+
+### ✨ Key Features
+
+🚀 Open-Source Framework: A lightweight, open-source AI framework built for multi-agent applications with Artificial General Intelligence (AGI) capabilities.
+
+🎯 Goal-Oriented Multi-Agents: enables the creation of agents with retry mechanisms to achieve set goals. Agents can communicate in parallel, sequentially, or in a hybrid of both.
+
+🏖️ Easy Deployment: Offers WebSocket, RESTful API, and IO console interfaces for rapid setup of agent-based AI solutions.
+
+♨️ Streamlined Architecture: Enterprise-ready scalable and pluggable architecture. No major dependencies; built independently!
+
+📚 Contextual Memory: Uses SQL + Vector databases to store and retrieve user-specific context effectively.
+
+🧠 Flexible LLM Configuration: Supports simple configuration options of various Gen AI models.
+
+🤝🏻 Extendable Handlers: Allows integration with diverse APIs, databases, data warehouses, data lakes, IoT streams, and more, making them accessible for function-calling features.
diff --git a/docs/superagentx/assets/architecture.png b/docs/superagentx/assets/architecture.png
new file mode 100644
index 0000000..f155158
Binary files /dev/null and b/docs/superagentx/assets/architecture.png differ
diff --git a/docs/tomemo/README.md b/docs/tomemo/README.md
index 86c637f..f550279 100644
--- a/docs/tomemo/README.md
+++ b/docs/tomemo/README.md
@@ -12,7 +12,19 @@ ToMemo is a phrasebook + clipboard history + keyboard iOS app with integrated AI
## Integrate with Deepseek API
-Go to Settings-Extensions-AI Services-AI Providers to add the Deepseek API Key.
-After adding, you can turn on the 「show in bottom tab」 in the AI service page, so that you can talk to Deepseek directly in the application.
+- Go to "Settings-Extensions-AI Services-AI Providers", click "Add" in the top right corner, and select "DeepSeek" in the **Provider** field.
+- Enter your API Key in the **API Key** field.
+- Click the "Test" button to verify that the key is valid.
+- Click "Load Models" and select the model you want to use.
+- Turn on "Enable" and click "Save".
-
+
+
+## Use
+
+- Go to "Settings-Extensions-AI Services".
+- Click "AI Assistant" to open the AI Assistant page.
+- Add an AI assistant in the top right corner; you can select "Deepseek" among the models.
+- Start chatting with Deepseek.
+
+
diff --git a/docs/tomemo/README_cn.md b/docs/tomemo/README_cn.md
index b46b0b2..766675a 100644
--- a/docs/tomemo/README_cn.md
+++ b/docs/tomemo/README_cn.md
@@ -12,7 +12,19 @@ ToMemo 是一款短语合集 + 剪切板历史 + 键盘输出的 iOS 应用,
## Integrate with Deepseek API
-进入设置-扩展-AI 服务-AI 供应商,即可添加 Deepseek API Key。
-添加完成后,可以 AI 服务页面中开启底部 Tab 页,方便应用中直接与 Deepseek 对话。
+- Go to "Settings-Extensions-AI Services-AI Providers", tap "Add" in the top right corner, and select "DeepSeek" as the **Provider**.
+- Enter your API Key in the **API Key** field.
+- Tap the "Test" button to verify that the key works.
+- Tap "Load Models" and select the model you want to use.
+- Turn on "Enable", then tap "Save".
-
+
+
+## Use
+
+- Go to "Settings-Extensions-AI Services".
+- Tap "AI Assistant" to open the AI Assistant page.
+- Add an AI assistant in the top right corner; you can select "DeepSeek" among the models.
+- Start chatting with DeepSeek.
+
+
diff --git a/docs/tomemo/assets/app-provider.png b/docs/tomemo/assets/app-provider.png
new file mode 100644
index 0000000..5309a86
Binary files /dev/null and b/docs/tomemo/assets/app-provider.png differ
diff --git a/docs/tomemo/assets/use-deepseek.png b/docs/tomemo/assets/use-deepseek.png
new file mode 100644
index 0000000..2c0a1a1
Binary files /dev/null and b/docs/tomemo/assets/use-deepseek.png differ
diff --git a/docs/yomo/README.md b/docs/yomo/README.md
new file mode 100644
index 0000000..1b727c1
--- /dev/null
+++ b/docs/yomo/README.md
@@ -0,0 +1,146 @@
+# YoMo Framework - Deepseek Provider
+
+YoMo is an open-source LLM Function Calling Framework for building Geo-distributed AI agents. Built atop the QUIC transport protocol and a strongly-typed stateful serverless architecture, it makes your AI agents low-latency, reliable, secure, and easy to build.
+
+## 🚀 Getting Started
+
+Let's implement a function-calling serverless function, `sfn-get-ip-latency`:
+
+### Step 1. Install CLI
+
+```bash
+curl -fsSL https://get.yomo.run | sh
+```
+
+### Step 2. Start the server
+
+Prepare the configuration as `my-agent.yaml`
+
+```yaml
+name: ai-zipper
+host: 0.0.0.0
+port: 9000
+
+auth:
+ type: token
+ token: SECRET_TOKEN
+
+bridge:
+ ai:
+ server:
+ addr: 0.0.0.0:9000 ## Restful API endpoint
+ provider: deepseek ## LLM API Service we will use
+
+ providers:
+ deepseek:
+ api_key:
+ model: deepseek-reasoner
+```
+
+Start the server:
+
+```sh
+YOMO_LOG_LEVEL=debug yomo serve -c my-agent.yaml
+```
+
+### Step 3. Write the function
+
+First, let's define what this function does and which parameters it requires; these will be combined into the prompt when invoking the LLM.
+
+```golang
+type Parameter struct {
+ Domain string `json:"domain" jsonschema:"description=Domain of the website,example=example.com"`
+}
+
+func Description() string {
+ return `if user asks ip or network latency of a domain, you should return the result of the given domain. try your best to dissect user expressions to infer the right domain names`
+}
+
+func InputSchema() any {
+ return &Parameter{}
+}
+```
+
+Create a Stateful Serverless Function to get the IP and Latency of a domain:
+
+```golang
+func Handler(ctx serverless.Context) {
+ var msg Parameter
+ ctx.ReadLLMArguments(&msg)
+
+ // resolve the domain; report failures back to the LLM instead of crashing
+ ips, err := net.LookupIP(msg.Domain)
+ if err != nil || len(ips) == 0 {
+ ctx.WriteLLMResult(fmt.Sprintf("could not resolve domain %s", msg.Domain))
+ return
+ }
+
+ // measure ping latency against the first resolved address
+ pinger, err := ping.NewPinger(ips[0].String())
+ if err != nil {
+ ctx.WriteLLMResult(fmt.Sprintf("could not ping %s: %v", ips[0], err))
+ return
+ }
+ pinger.Count = 3
+ pinger.Run()
+ stats := pinger.Statistics()
+
+ val := fmt.Sprintf("domain %s has ip %s with average latency %s", msg.Domain, ips[0], stats.AvgRtt)
+ ctx.WriteLLMResult(val)
+}
+
+Finally, let's run it:
+
+```bash
+$ yomo run app.go
+
+time=2025-01-29T21:43:30.583+08:00 level=INFO msg="connected to zipper" component=StreamFunction sfn_id=B0ttNSEKLSgMjXidB11K1 sfn_name=fn-get-ip-from-domain zipper_addr=localhost:9000
+time=2025-01-29T21:43:30.584+08:00 level=INFO msg="register ai function success" component=StreamFunction sfn_id=B0ttNSEKLSgMjXidB11K1 sfn_name=fn-get-ip-from-domain zipper_addr=localhost:9000 name=fn-get-ip-from-domain tag=16
+```
+
+### Done, let's have a try
+
+```sh
+$ curl -i http://127.0.0.1:9000/v1/chat/completions -H "Content-Type: application/json" -d '{
+ "messages": [
+ {
+ "role": "system",
+ "content": "You are a test assistant."
+ },
+ {
+ "role": "user",
+ "content": "Compare website speed between Nike and Puma"
+ }
+ ],
+ "stream": false
+}'
+
+HTTP/1.1 200 OK
+Content-Length: 944
+Connection: keep-alive
+Content-Type: application/json
+Date: Wed, 29 Jan 2025 13:30:14 GMT
+Keep-Alive: timeout=4
+Proxy-Connection: keep-alive
+
+{
+ "Content": "Based on the data provided for the domains nike.com and puma.com which include IP addresses and average latencies, we can infer the following about their website speeds:
+ - Nike.com has an IP address of 13.225.183.84 with an average latency of 65.568333 milliseconds.
+ - Puma.com has an IP address of 151.101.194.132 with an average latency of 54.563666 milliseconds.
+
+ Comparing these latencies, Puma.com is faster than Nike.com as it has a lower average latency.
+
+ Please be aware, however, that website speed can be influenced by many factors beyond latency, such as server processing time, content size, and delivery networks among others. To get a more comprehensive understanding of website speed, you would need to consider additional metrics and possibly conduct real-time speed tests.",
+ "FinishReason": "stop"
+}
+```
+
+### Full Example Code
+
+[Full LLM Function Calling Codes](https://github.com/yomorun/llm-function-calling-examples)
+
+## 🎯 Focuses on Geo-distributed AI Inference Infra
+
+It’s no secret that today’s users want instant AI inference: every AI
+application is more powerful when it responds quickly. But currently, when we
+talk about `distribution`, it means **distribution within a data center**. The AI model is
+far away from its users all over the world.
+
+If an application could be deployed anywhere, close to its end users, this
+problem would be solved. That is **Geo-distributed System Architecture**:
+
+