diff --git a/.gitignore b/.gitignore
new file mode 100644
index 0000000..85e7c1d
--- /dev/null
+++ b/.gitignore
@@ -0,0 +1 @@
+/.idea/
diff --git a/README.md b/README.md
index 086772a..794261b 100644
--- a/README.md
+++ b/README.md
@@ -8,7 +8,8 @@

Integrate the DeepSeek API into popular software. Access [DeepSeek Open Platform](https://platform.deepseek.com/) to get an API key.

-English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_cn.md)
+English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_cn.md)/[日本語](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_ja.md)

@@ -18,6 +19,31 @@
@@ -28,6 +54,11 @@
@@ -42,6 +73,16 @@
@@ -83,12 +124,31 @@
@@ -96,11 +156,11 @@
@@ -139,30 +199,241 @@

### Applications
Icon4EVERChat4EVERChat is an intelligent model selection platform integrating hundreds of LLMs, enabling real-time comparison of model performance. Leveraging 4EVERLAND AI RPC's unified API endpoint, it achieves cost-free model switching and automatically selects combinations with fast responses and low costs.
Iconxhai Browserxhai Browser is an Android desktop management & AI browser, with DeepSeek as the default AI dialog engine. It offers extreme performance (0.2-second startup), a slim size (3 MB APK), no ads, ultra-fast ad blocking, multi-screen classification, screen navigation, and multiple search boxes with one-box multi-search.
IconIntelliBarIntelliBar is a beautiful assistant for the Mac that lets you use advanced models like DeepSeek R1 with any app on your Mac, e.g. edit emails in your mail app or summarize articles in your browser.
IconDeepChatDeepChat is a fully free desktop smart assistant, with a powerful DeepSeek large model, supporting multi-round conversations, internet search, file uploads, knowledge bases, and more.
Icon Quantalogic QuantaLogic is a ReAct (Reasoning & Action) framework for building advanced AI agents.
Icon Chatbox Chatbox is a desktop client for multiple cutting-edge LLM models, available on Windows, Mac and Linux. ChatGPT-Next-Web ChatGPT Next Web is a cross-platform ChatGPT web UI, with GPT3, GPT4 & Gemini Pro support.
Icon Coco AI Coco AI is a fully open-source, cross-platform unified search and productivity tool that connects and searches across various data sources, including applications, files, Google Drive, Notion, Yuque, Hugo, and more, both local and cloud-based. By integrating with large models like DeepSeek, Coco AI enables intelligent personal knowledge management, emphasizing privacy and supporting private deployment, helping users quickly and intelligently access their information.
Icon Liubai Liubai lets you use DeepSeek to manage your notes, tasks, calendar, and to-do lists right inside WeChat! LibreChat LibreChat LibreChat is a customizable open-source app that seamlessly integrates DeepSeek for enhanced AI interactions.
Icon Just-Chat Make your LLM agent and chat with it simple and fast!
PapersGPT PapersGPT PapersGPT is a Zotero plugin that seamlessly integrates with DeepSeek and multiple other AI models for quickly reading papers in Zotero.
Icon Raycast Raycast is a productivity tool for macOS that lets you control your tools with a few keystrokes. It supports various extensions including DeepSeek AI.
Icon Nice Prompt Nice Prompt Organize, share, and use your prompts in your code editor, with Cursor and VSCode.
PHP Client PHP Client Deepseek PHP Client is a robust and community-driven PHP client library for seamless integration with the Deepseek API.
DeepSwiftSeek Logo DeepSwiftSeek DeepSwiftSeek is a lightweight yet powerful Swift client library with solid integration with the DeepSeek API. It provides easy-to-use Swift concurrency for chat, streaming, FIM (Fill-in-the-Middle) completions, and more.
Laravel Integration Laravel Integration Laravel wrapper for the Deepseek PHP client, for seamless DeepSeek API integration with Laravel applications.
Icon Zotero Zotero is a free, easy-to-use tool to help you collect, organize, annotate, cite, and share research. It can use DeepSeek as a translation service.
Icon SiYuan SiYuan is a privacy-first personal knowledge management system that supports complete offline usage, as well as end-to-end encrypted data sync.
AgenticFlow is a no-code platform where marketers build agentic AI workflows for go-to-market automation, powered by hundreds of everyday apps as tools for your AI agents.
Icon ReadLecture ReadLecture is an AI assistant designed to summarize audio and video content. It effortlessly transforms audio and video into accurate text and image formats. It also supports a variety of AI-driven learning methods, including PPT extraction, podcast summaries, mind mapping, translation, and meeting notes. Icon AIhaoji AIhaoji is an AI assistant designed to summarize audio and video content. It effortlessly transforms audio and video into accurate text and image formats. It also supports a variety of AI-driven learning methods, including PPT extraction, podcast summaries, mind mapping, translation, and meeting notes.
Icon STranslate STranslate (Windows) is a ready-to-use translation and OCR tool developed with WPF.
Asp Client ASP Client Deepseek.ASPClient is a lightweight ASP.NET wrapper for the DeepSeek AI API, designed to simplify AI-driven text processing in .NET applications.
gpt-ai-flow-logo GPT AI Flow The ultimate productivity weapon built by engineers for efficiency enthusiasts (themselves): GPT AI Flow
  • `Shift+Alt+Space` wakes up the desktop intelligent hub
  • Local encrypted storage
  • Custom instruction engine
  • On-demand calling without subscription bundling
Icon Story-Flicks With just one sentence, you can quickly generate high-definition short story videos, supporting models such as DeepSeek.
Icon 16x Prompt 16x Prompt is an AI coding tool with context management. It helps developers manage source code context and craft prompts for complex coding tasks on existing codebases.
Icon Alpha Pai AI Research Assistant / The Next-Generation Financial Information Portal Driven by AI.
It attends meetings and takes notes on investors' behalf, and provides search and Q&A services for financial information as well as quantitative analysis for investment research.
Icon argo Locally download and run Ollama and Hugging Face models with RAG on Mac/Windows/Linux. Supports LLM APIs as well.
Icon PeterCat A conversational Q&A agent configuration system, self-hosted deployment solutions, and a convenient all-in-one application SDK, allowing you to create intelligent Q&A bots for your GitHub repositories.
Icon FastGPT FastGPT is an open-source AI knowledge base platform built on large language models (LLMs), supporting various models including DeepSeek and OpenAI. It provides out-of-the-box capabilities for data processing, model invocation, RAG retrieval, and visual AI workflow orchestration, enabling you to effortlessly build sophisticated AI applications.
Icon RuZhi AI Notes RuZhi AI Notes is an intelligent knowledge management tool powered by AI, providing one-stop knowledge management and application services including AI search & exploration, AI results to notes conversion, note management & organization, knowledge presentation & sharing. Integrated with DeepSeek model to provide more stable and higher quality outputs.
Icon Chatgpt-on-Wechat Chatgpt-on-Wechat(CoW) is a flexible chatbot framework that supports seamless integration of multiple LLMs, including DeepSeek, OpenAI, Claude, Qwen, and others, into commonly used platforms or office software such as WeChat Official Accounts, WeCom, Feishu, DingTalk, and websites. It also supports a wide range of custom plugins.
Icon Athena The world's first autonomous general AI with advanced cognitive architecture and human-like reasoning capabilities, designed to tackle complex real-world challenges.
Icon MaxKB MaxKB is a ready-to-use, flexible RAG Chatbot.
Icon TigerGPT TigerGPT is the first financial AI investment assistant of its kind based on OpenAI, developed by Tiger Group. TigerGPT aims to provide intelligent investment decision-making support for investors. On February 18, 2025, TigerGPT officially integrated the DeepSeek-R1 model to provide users with online Q&A services that support deep reasoning.
Icon HIX.AI Try DeepSeek for free and enjoy unlimited AI chat on HIX.AI. Use DeepSeek R1 for AI chat, writing, coding & more. Experience next-gen AI chat now!
Icon Askanywhere Select text anywhere and start a conversation with Deepseek
Icon 1chat An iOS app that lets you chat with the DeepSeek-R1 model locally.
iOS AI Chatbot Access 250+ text and image LLMs in one app 1AI iOS Chatbot integrates 250+ text, image, and voice models, letting users chat with any model in the world, including the DeepSeek R1 and DeepSeek V3 models.
PopAi PopAi PopAi launches DeepSeek R1! Enjoy lag-free, lightning-fast performance with PopAi. Seamlessly toggle online search on/off.
### AI Agent frameworks + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Icon smolagents The simplest way to build great agents. Agents write Python code to call tools and orchestrate other agents. Priority support for open models like DeepSeek-R1!
IconYoMoStateful Serverless LLM Function Calling Framework with Strongly-typed Language Support
Icon SuperAgentX SuperAgentX: A Lightweight Open Source AI Framework Built for Autonomous Multi-Agent Applications with Artificial General Intelligence (AGI) Capabilities.
Icon Anda A Rust framework for AI agent development, designed to build a highly composable, autonomous, and perpetually memorizing network of AI agents.
Icon RIG Build modular and scalable LLM Applications in Rust.
Icon Just-Agents A lightweight, straightforward library for LLM agents - no over-engineering, just simplicity!
Icon Alice An autonomous AI agent on ICP, leveraging LLMs like DeepSeek for on-chain decision-making. Alice combines real-time data analysis with a playful personality to manage tokens, mine BOB, and govern ecosystems.
Icon Upsonic Upsonic offers a cutting-edge enterprise-ready agent framework where you can orchestrate LLM calls, agents, and computer use to complete tasks cost-effectively.
Icon ATTPs A foundational protocol framework for trusted communication between agents. Any agent based on DeepSeek can, by integrating the ATTPs SDK, access features such as agent registration, sending verifiable data, and retrieving verifiable data, enabling trusted communication with agents from other platforms.
Icon translate.js AI i18n for front-end developers. It enables fully automatic HTML translation with just two lines of JavaScript, switching among dozens of languages with a single click. No page modifications or language configuration files are required, it supports dozens of fine-tuning extension instructions, and it is SEO-friendly. It also exposes a standard text translation API.
Icon agentUniverse agentUniverse is a multi-agent collaboration framework designed for complex business scenarios. It offers rapid and user-friendly development capabilities for LLM agent applications, with a focus on mechanisms such as agent collaborative scheduling, autonomous decision-making, and dynamic feedback. The framework originates from Ant Group's real-world business practices in the financial industry. In June 2024, agentUniverse achieved full integration support for the DeepSeek series of models.
### RAG frameworks - + + + + + + + + + + + + + + + + +
Icon RAGFlow An open-source RAG (Retrieval-Augmented Generation) engine based on deep document understanding. It offers a streamlined RAG workflow for businesses of any scale, combining LLMs (Large Language Models) to provide truthful question-answering capabilities, backed by well-founded citations from various complex formatted data.
Icon Autoflow AutoFlow is an open-source knowledge base tool based on GraphRAG (Graph-based Retrieval-Augmented Generation), built on TiDB Vector, LlamaIndex, and DSPy. It provides a Perplexity-like search interface and allows easy integration of AutoFlow's conversational search window into your website by embedding a simple JavaScript snippet.
Icon DeepSearcher DeepSearcher combines powerful LLMs (DeepSeek, OpenAI, etc.) and vector databases (Milvus, etc.) to perform search, evaluation, and reasoning based on private data, providing highly accurate answers and comprehensive reports.
Icon KAG KAG is a logical reasoning and Q&A framework based on the OpenSPG engine and large language models, which is used to build logical reasoning and Q&A solutions for vertical domain knowledge bases. KAG can effectively overcome the ambiguity of traditional RAG vector similarity calculation and the noise problem of GraphRAG introduced by OpenIE. KAG supports logical reasoning and multi-hop fact Q&A, etc.
+ +### FHE (Fully Homomorphic Encryption) frameworks + + + + + + + +
Icon Mind FHE Rust SDK

An open-source SDK for encrypting AI with Fully Homomorphic Encryption (FHE) and integrating with Mind Network for agent consensus. FHE is considered the holy grail of cryptography, enabling computations directly on encrypted data without the need for decryption. With FHE, agents can safeguard their privacy while using DeepSeek, ensuring both model integrity and result consensus - all without exposing their data - by connecting to Mind Network. The SDK source code is implemented in pure Rust, and the package is also available on crates.io.

+ +### Solana frameworks + + + + + + + +
Icon Solana Agent Kit An open-source toolkit for connecting AI agents to Solana protocols. Now, any agent, using any Deepseek LLM, can autonomously perform 60+ Solana actions:
+ +### Synthetic data curation + + + + + + + + + + + + + + + + + +
Icon Curator An open-source tool to curate large scale datasets for post-training LLMs.
Icon Kiln Generate synthetic datasets and distill R1 models into custom fine-tunes.
Icon Dingo Dingo: A Comprehensive Data Quality Evaluation Tool.
### IM Application Plugins @@ -174,9 +445,14 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati Domain knowledge assistant in personal WeChat and Feishu, focusing on answering questions. - Icon - QChatGPT
(QQ)
- A QQ chatbot with high stability, plugin support, and real-time networking. + Icon + LangBot
(QQ, Lark, WeCom)
+ LLM-based IM bots framework, supports QQ, Lark, WeCom, and more platforms. + + + Icon + NoneBot
(QQ, Lark, Discord, TG, etc.)
+ Based on NoneBot framework, provide intelligent chat and deep thinking functions, supports QQ, Lark, Discord, TG, and more platforms. @@ -213,13 +489,48 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati FluentRead A revolutionary open-source browser translation plugin that enables everyone to have a native-like reading experience + + Icon + Ncurator + Knowledge Base AI Q&A Assistant - Let AI help you organize and analyze knowledge + + + Icon + RssFlow + An intelligent RSS reader browser extension with AI-powered RSS summarization and multi-dimensional feed views. Supports DeepSeek model configuration for enhanced content understanding. + + + Icon + DeepChat + A Chrome extension that enables users to chat with DeepSeek by opening a sidebar on any website. In addition, it provides a floating menu underneath any selected text on any website that allows users to generate text summaries, check grammar issues, and translate content. + + + Icon + Typral + Fast AI writer assistant - Let AI help you quickly improve article, paper, text... + + + Icon + Trancy + Immersive bilingual translation, video bilingual subtitles, sentence/word selection translation extension + + + Icon + Anything Copilot + Anything Copilot is a browser extension that enables seamless access to mainstream AI tools directly from your sidebar. + + + Icon + Cliprun + Python code runner & playground. Right-click Python code on DeepSeek to run it instantly in your browser. + ### VS Code Extensions - + @@ -228,26 +539,61 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati + + + + + + + + + + +
Icon Continue Continue is an open-source autopilot in IDE.
Cline Meet Cline, an AI assistant that can use your CLI and editor.
Icon AI Commit Use AI to generate git commit messages in VS Code.
Icon SeekCode Copilot A VS Code intelligent coding assistant that supports configuring locally deployed DeepSeek models.
+ +### Visual Studio Extensions + + + + + + + + + + + + + + + + +
Icon Comment2GPT Use OpenAI ChatGPT, Google Gemini, Anthropic Claude, DeepSeek and Ollama through your comments
Icon CodeLens2GPT Use OpenAI ChatGPT, Google Gemini, Anthropic Claude, DeepSeek and Ollama through the CodeLens
Icon Unity Code Assist Lite Code assistance for Unity scripts
### neovim Extensions - + - + + + + + +
Icon avante.nvim avante.nvim is an open-source autopilot in IDE.
Icon llm.nvim A free large language model (LLM) plugin that allows you to interact with LLMs in Neovim. Supports any LLM, such as DeepSeek, GPT, GLM, Kimi, or local LLMs (such as Ollama).
Icon codecompanion.nvim AI-powered coding, seamlessly in Neovim.
Icon minuet-ai.nvim Minuet offers code completion as-you-type from popular LLMs including Deepseek, OpenAI, Gemini, Claude, Ollama, Codestral, and more.
### JetBrains Extensions @@ -264,7 +610,7 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati Onegai Copilot is an AI coding assistant in JetBrain's IDE. - Icon + Icon Continue Continue is an open-source autopilot in IDE. @@ -280,13 +626,28 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati -### Cursor +### Discord Bots + + + + + + + +
Icon Geneplore AI Geneplore AI runs one of the largest AI Discord bots, now with Deepseek v3 and R1.
+ +### Native AI Code Editor - + + + + + +
Icon Cursor The AI Code Editor based on VS Code
Icon WindSurf Another AI Code Editor based on VS Code by Codeium
@@ -305,9 +666,49 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati +### Security + + + + + + + + + + + + +
Icon CodeGate CodeGate: secure AI code generation
Icon AI-Infra-Guard Tencent's Hunyuan Security Team - AI infrastructure security assessment tool designed to discover and detect potential security risks in AI systems.
+ ### Others + + + + + + + + + + + + + + + + + + + + + + + + + @@ -318,6 +719,11 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati + + + + + @@ -329,9 +735,9 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati - - - + + + @@ -339,3 +745,48 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
🤖 Wechat-Bot A WeChat robot based on Wechaty, combined with DeepSeek and other AI services.
🐠 Abso TypeScript SDK to interact with any LLM provider using the OpenAI format.
Icon ShellOracle A terminal utility for intelligent shell command generation.
Icon Bolna Use DeepSeek as the LLM for conversational voice AI agents
Icon Siri Ultra A GitHub project with 1000 stars, supporting internet connectivity, multi-turn conversations, and DeepSeek series models
Icon n8n-nodes-deepseek An N8N community node that supports direct integration of the DeepSeek API into workflows.
Icon Portkey AI Portkey is a unified API for interacting with 1600+ LLM models, offering advanced tools for control, visibility, and security in your DeepSeek apps. Python & Node SDKs available.
Icon Mem0 Mem0 enhances AI assistants with an intelligent memory layer, enabling personalized interactions and continuous learning over time.
Icon Simplismart AI Simplismart enables seamless GenAI deployments with the fastest inference for LLMs, Diffusion, and Speech models. Deploy DeepSeek effortlessly on the Simplismart Cloud or with your own cloud.
Icon promptfoo Test and evaluate LLM prompts, including DeepSeek models. Compare different LLM providers, catch regressions, and evaluate responses.
+ + + deepseek-tokenizer + An efficient and lightweight tokenization library for DeepSeek models, relying solely on the `tokenizers` library without heavy dependencies like `transformers`. + + + Icon + Langfuse + Open-source LLM observability platform that helps teams collaboratively debug, analyze, and iterate on their DeepSeek applications. + + + CR + deepseek-review + 🚀 Sharpen Your Code, Ship with Confidence – Elevate Your Workflow with Deepseek Code Review 🚀 + + + Icon + GPTLocalhost + Use DeepSeek-R1 in Microsoft Word locally. No inference costs. + + + Icon + WordPress ai助手 + A WordPress plugin that connects the DeepSeek API for an on-site AI conversation assistant, post generation, and post summarization. + + + Icon + ComfyUI-Copilot + An intelligent assistant built on the Comfy-UI framework that simplifies and enhances the AI algorithm debugging and deployment process through natural language interactions. + + + Icon + LLM4AD + LLM4AD is a unified open-source Python-based platform using Large Language Models (LLMs) for Automatic Algorithm Design (AD). + + + + chatchat + Large Language Models Python API. + + + +### Star History + +[![Star History Chart](https://api.star-history.com/svg?repos=deepseek-ai/awesome-deepseek-integration&type=Date)](https://star-history.com/#deepseek-ai/awesome-deepseek-integration&Date) diff --git a/README_cn.md b/README_cn.md index a4f7a8d..71f0723 100644 --- a/README_cn.md +++ b/README_cn.md @@ -1,23 +1,48 @@

-Awesome DeepSeek Integrations +Awesome DeepSeek Integrations

# DeepSeek 实用集成 ![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg) 将 DeepSeek 大模型能力轻松接入各类软件。访问 [DeepSeek 开放平台](https://platform.deepseek.com/)来获取您的 API key。 -[English](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README.md)/简体中文 +[English](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README.md)/简体中文/[日本語](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_ja.md)
-
-
+

+ ### 应用程序 + + + + + + + + + + + + + + + + + + + + + + + + + @@ -29,7 +54,12 @@ - + + + + + + @@ -43,59 +73,65 @@ + + + + + - + - + - + + - - + + - - + + - - + + - + - + - + - - - + + + - + - + @@ -105,12 +141,102 @@ - + - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + @@ -119,16 +245,43 @@
Icon4EVERChat4EVERChat 是集成数百款LLM的智能模型选型平台,支持直接对比不同模型的实时响应差异,基于4EVERLAND AI RPC 统一API端点实现零成本模型切换,自动选择响应快、成本低的模型组合。
Icon小海浏览器小海浏览器是安卓桌面管理&AI浏览器,DeepSeek是默认AI对话引擎。它有极致的性能(0.2秒启动),苗条的体型(apk 3M大),无广告,超高速广告拦截,多屏分类,屏幕导航,多搜索框,一框多搜
IconDeepChatDeepChat 是一款完全免费的桌面端智能助手,内置强大的 DeepSeek 大模型,支持多轮对话、联网搜索、文件上传、知识库等多种功能。
🤖 Wechat-Bot 基于 Wechaty 实现的微信机器人,结合了 DeepSeek 和其他 AI 服务。
Icon Quantalogic QuantaLogic 是一个 ReAct(推理和行动)框架,用于构建高级 AI 代理。
Icon Chatbox 一键获取跨平台ChatGPT网页用户界面,支持流行的LLM
Icon Coco AI Coco AI 是一个完全开源、跨平台的统一搜索与效率工具,能够连接并搜索多种数据源,包括应用程序、文件、谷歌网盘、Notion、语雀、Hugo 等本地与云端数据。通过接入 DeepSeek 等大模型,Coco AI 实现了智能化的个人知识库管理,注重隐私,支持私有部署,帮助用户快速、智能地访问信息。
Icon 留白记事 留白让你直接在微信上使用 DeepSeek 管理你的笔记、任务、日程和待办清单!
LibreChat LibreChat 是一个可定制的开源应用程序,无缝集成了 DeepSeek,以增强人工智能交互体验
PapersGPT PapersGPT PapersGPT是一款集成了DeepSeek及其他多种AI模型的辅助论文阅读的Zotero插件。
Icon RSS翻译器 RSS翻译器 开源、简洁、可自部署的RSS翻译器
Icon Enconvo Enconvo是AI时代的启动器,是所有AI功能的入口,也是一位体贴的智能助理。
Icon Cherry Studio 一款为创造者而生的桌面版 AI 助手
Icon ToMemo (iOS, ipadOS) 一款短语合集 + 剪切板历史 + 键盘输出的iOS应用,集成了AI大模型,可以在键盘中快速输出使用。
Icon Video Subtitle Master 批量为视频生成字幕,并可将字幕翻译成其它语言。这是一个客户端工具,跨平台支持 mac 和 windows 系统,支持百度、火山、deeplx、openai、deepseek、ollama 等多个翻译服务
Icon Easydict Easydict 是一个简洁易用的词典翻译 macOS App,能够轻松优雅地查找单词或翻译文本,支持调用大语言模型 API 翻译。
Icon Raycast Raycast 是一款 macOS 生产力工具,它允许你用几个按键来控制你的工具。它支持各种扩展,包括 DeepSeek AI。
Icon Zotero Zotero 是一款免费且易于使用的文献管理工具,旨在帮助您收集、整理、注释、引用和分享研究成果。
Icon 思源笔记 思源笔记是一款隐私优先的个人知识管理系统,支持完全离线使用,并提供端到端加密的数据同步功能。
Icon go-stock go-stock 是一个由 Wails 使用 NativeUI 构建并由 LLM 提供支持的股票数据查看分析器。
Wordware Wordware Wordware 这是一个工具包,使任何人都可以仅通过自然语言构建、迭代和部署他们的AI堆栈。
Icon
Icon LiberSonora LiberSonora,寓意"自由的声音",是一个 AI 赋能的、强大的、开源有声书工具集,包含智能字幕提取、AI标题生成、多语言翻译等功能,支持 GPU 加速、批量离线处理
Icon Bob Bob 是一款 macOS 平台的翻译和 OCR 软件,您可以在任何应用程序中使用 Bob 进行翻译和 OCR,即用即走!
Icon STranslate STranslate(Windows) 是 WPF 开发的一款即用即走的翻译、OCR工具
gpt-ai-flow-logo GPT AI Flow 工程师为效率狂人(他们自己)打造的终极生产力武器: GPT AI Flow
  • `Shift+Alt+空格` 唤醒桌面智能中枢
  • 本地加密存储
  • 自定义指令引擎
  • 按需调用拒绝订阅捆绑
Icon Story-Flicks 通过一句话即可快速生成高清故事短视频,支持 DeepSeek 等模型。
Icon Alpha派 AI投研助理/AI驱动的新一代金融信息入口。代理投资者听会/记纪要,金融投资信息的搜索问答/定量分析等投资研究工作。
Icon argo 本地下载并运行Huggingface及Ollama模型,支持RAG、LLM API、工具接入等,支持Mac/Windows/Linux。
Icon PeterCat 我们提供对话式答疑 Agent 配置系统、自托管部署方案和便捷的一体化应用 SDK,让您能够为自己的 GitHub 仓库一键创建智能答疑机器人,并快速集成到各类官网或项目中, 为社区提供更高效的技术支持生态。
Icon FastGPT + FastGPT 基于 LLM 大模型的开源 AI 知识库构建平台,支持 DeepSeek、OpenAI 等多种模型。我们提供了开箱即用的数据处理、模型调用、RAG 检索、可视化 AI 工作流编排等能力,帮助您轻松构建复杂的 AI 应用。 +
Icon Chatgpt-on-Wechat Chatgpt-on-Wechat(CoW)项目是一个灵活的聊天机器人框架,支持将DeepSeek、OpenAI、Claude、Qwen等多种LLM 一键接入到微信公众号、企业微信、飞书、钉钉、网站等常用平台或办公软件,并支持丰富的自定义插件。
Icon 如知AI笔记 如知AI笔记是一款智能化的AI知识管理工具,致力于为用户提供一站式的知识管理和应用服务,包括AI搜索探索、AI结果转笔记、笔记管理与整理、知识演示与分享等。集成了DeepSeek深度思考模型,提供更稳定、更高质量的输出。
Icon Athena 世界上首个具有先进认知架构和类人推理能力的自主通用人工智能,旨在解决复杂的现实世界挑战。
Icon TigerGPT TigerGPT 是老虎集团开发的,业内首个基于 OpenAI 的金融 AI 投资助理。TigerGPT 旨在为投资者提供智能化的投资决策支持。2025年2月18日,TigerGPT 正式接入 DeepSeek-R1 模型,为用户提供支持深度推理的在线问答服务。
Icon HIX.AI 免费试用 DeepSeek,在 HIX.AI 上享受无限量的 AI 聊天。使用 DeepSeek R1 进行 AI 聊天、写作、编码等。立即体验下一代 AI 聊天!
Icon 划词AI助手 一个划词ai助手,在任何地方划词,快速打开与Deepseek的对话!
Icon 1查 一款能让你在本地运行 DeepSeek 的 iOS 应用
iOS AI 聊天机器人 在一个应用中访问250多个文本、图像大模型 1AI iOS聊天机器人集成了250多个文本、图像、语音模型,让用户可以与OpenRouter、Replicate上的任何模型对话,包括Deepseek推理和Deepseek V3模型。
PopAi PopAi PopAi推出DeepSeek R1!享受无延迟、闪电般快速的性能,尽在PopAi。轻松切换在线搜索开/关。
Icon
+ ### AI Agent 框架 - + + + + + + + + + + + + + + + + + + + + + + + + + +
Icon Anda 一个专为 AI 智能体开发设计的 Rust 语言框架,致力于构建高度可组合、自主运行且具备永久记忆能力的 AI 智能体网络。
Icon YoMo Stateful Serverless LLM Function Calling Framework with Strongly-typed Language Support
图标 Alice 一个基于 ICP 的自主 AI 代理,利用 DeepSeek 等大型语言模型进行链上决策。Alice 结合实时数据分析和独特的个性,管理代币、挖掘 BOB 并参与生态系统治理。
图标 ATTPs 一个用于Agent之间可信通信的基础协议框架,基于DeepSeek的Agent,可以接入ATTPs的SDK,获得注册Agent,发送可验证数据,获取可验证数据等功能,从而与其他平台的Agent进行可信通信。
图标 translate.js 面向前端开发者的 AI i18n, 两行js实现html全自动翻译,几十语种一键切换,无需改动页面、无语言配置文件、支持几十个微调扩展指令、对SEO友好。并且开放标准文本翻译API接口
Icon agentUniverse agentUniverse 是一个面向复杂业务场景设计的多智能体协作框架。其提供了快速易用的大模型智能体应用搭建能力,并着重于提供智能体协同调度、自主决策与动态反馈等机制,其源自蚂蚁集团在金融领域的真实业务实践沉淀。agentUniverse于2024年6月全面接入支持deepseek系列模型。
+ ### RAG 框架 @@ -137,8 +290,55 @@ + + + + + + + + + + + + + + +
RAGFlow 一款基于深度文档理解构建的开源 RAG(Retrieval-Augmented Generation)引擎。RAGFlow 可以为各种规模的企业及个人提供一套精简的 RAG 工作流程,结合大语言模型(LLM)针对用户各类不同的复杂格式数据提供可靠的问答以及有理有据的引用。
Icon Autoflow AutoFlow 是一个开源的基于 GraphRAG 的知识库工具,构建于 TiDB Vector、LlamaIndex 和 DSPy 之上。提供类 Perplexity 的搜索页面,并可以嵌入简单的 JavaScript 代码片段,轻松将 Autoflow 的对话式搜索窗口集成到您的网站。
Icon DeepSearcher DeepSearcher 结合强大的 LLM(DeepSeek、OpenAI 等)和向量数据库(Milvus 等),根据私有数据进行搜索、评估和推理,提供高度准确的答案和全面的报告。
Icon KAG KAG 是基于 OpenSPG 引擎和大型语言模型的逻辑推理问答框架,用于构建垂直领域知识库的逻辑推理问答解决方案。KAG 可以有效克服传统 RAG 向量相似度计算的歧义性和 OpenIE 引入的 GraphRAG 的噪声问题。KAG 支持逻辑推理、多跳事实问答等。
+### FHE (全同态加密) frameworks + + + + + + + +
Icon Mind FHE Rust SDK

一个开源 SDK,可使用**全同态加密(FHE)**对 AI 进行加密,实现代理共识。FHE被誉为密码学的圣杯,能够在无需解密的情况下直接对加密数据进行计算。借助FHE,代理在使用Deepseek时可以保护隐私,同时确保模型的完整性和计算结果的一致性,无需暴露任何数据。该SDK的源代码采用纯Rust实现,并可在crates.io获取

+ + +### Solana 框架 + + + + + + + +
Icon Solana Agent Kit 一个用于连接 AI 智能体到 Solana 协议的开源工具包。现在,任何使用 DeepSeek LLM 的智能体都可以自主执行 60+ 种 Solana 操作。
+ +### 综合数据管理 + + + + + + + +
Icon Dingo 一个综合性的数据质量评估工具。
+ + ### 即时通讯插件 @@ -148,12 +348,18 @@ - - - + + + + + + + +
一个集成到个人微信群/飞书群的领域知识助手,专注解答问题不闲聊
Icon QChatGPT
(QQ)
😎高稳定性、🧩支持插件、🌏实时联网的 LLM QQ / QQ频道 / One Bot 机器人🤖 Icon LangBot
(QQ, 企微, 飞书)
大模型原生即时通信机器人平台,适配 QQ / QQ频道 / 飞书 / OneBot / 企业微信(wecom) 等多种消息平台
Icon NoneBot
(QQ, 飞书, Discord, TG, etc.)
基于 NoneBot 框架,支持智能对话与深度思考功能。适配 QQ / 飞书 / Discord, TG 等多种消息平台
+ ### 浏览器插件 @@ -180,57 +386,105 @@ - + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
Icon 欧路翻译 提供鼠标划词搜索、逐段对照翻译、PDF文献翻译功能。支持 DeepSeek AI、Bing、GPT、Google 等多种翻译引擎。
Icon 流畅阅读 一款革新性的浏览器开源翻译插件,让所有人都能够拥有基于母语般的阅读体验
Icon 馆长 知识库AI问答助手 - 让AI帮助你整理与分析知识
Icon RssFlow 一款智能的RSS阅读器浏览器扩展,具有AI驱动的RSS摘要和多维度订阅视图功能。支持配置DeepSeek模型以增强内容理解能力。
Icon Typral 超快的AI写作助手 - 让AI帮你快速优化日报,文章,文本等等...
Icon Trancy 沉浸双语对照翻译、视频双语字幕、划句/划词翻译插件
Icon Anything Copilot Anything Copilot 是一款可以让你在侧边栏无缝使用任意主流AI工具的浏览器插件
Icon DeepChat 一款Chrome扩展程序,允许用户在任何网站上通过打开侧边栏与DeepSeek聊天。此外,它还在任何网站上选中的文本下方提供一个浮动菜单,使用户能够生成文本摘要、检查语法问题和翻译内容。
+ ### VS Code 插件 - + - + + + + + + + + + + +
Icon Icon Continue 开源 IDE 插件,使用 LLM 做你的编程助手
Icon Cline Cline 是一款能够使用您的 CLI 和编辑器的 AI 助手。
Icon AI Commit 使用 AI 生成 git commit message 的 VS Code 插件
Icon SeekCode Copilot vscode智能编码助手,支持配置本地部署的DeepSeek模型
+ ### neovim 插件 - + - + + + + + + - +
Icon avante.nvim 开源 IDE 插件,使用 LLM 做你的编程助手
Icon llm.nvim 免费的大语言模型插件,让你在Neovim中与大模型交互,支持任意一款大模型,比如DeepSeek,GPT,GLM,kimi或者本地运行的大模型(比如ollama)
Icon minuet-ai.nvim Minuet 提供实时代码补全功能,支持多个主流大语言模型,包括 DeepSeek、OpenAI、Gemini、Claude、Ollama、Codestral 等。
Icon codecompanion.nvim AI 驱动的编码,在 Neovim 中无缝集成。
+ ### JetBrains 插件 - + @@ -239,9 +493,49 @@
Icon Chinese-English Translate 集成了多家国内翻译和AI厂商,将中文翻译到英文的插件。
Icon
+ +### AI Code编辑器 + + + + + + + + + + + + +
Icon Cursor 基于VS Code进行扩展的AI Code编辑器
Icon WindSurf 另一个基于VS Code的AI Code编辑器,由Codeium出品
+ +### 安全 + + + + + + +
Icon AI-Infra-Guard 腾讯混元安全-AI基础设施安全评估工具,发现和检测AI系统中的潜在安全风险。
+ ### 其它 + + + + + + + + + + + + + + @@ -250,11 +544,36 @@ - + + + + + + + + + + + + + + + + + + + + + + + + + +

🐠

Abso + TypeScript SDK 使用 OpenAI 格式与任何 LLM 提供商进行交互。
Icon ShellOracle 一种用于智能 shell 命令生成的终端工具。
Icon Siri Ultra GitHub 千星开源项目,支持联网、多轮对话,支持 DeepSeek 系列模型
Icon 深度求索(快捷指令)
Icon n8n-nodes-deepseek 一个 N8N 的社区节点,支持直接使用 DeepSeek API 集成到工作流中
Icon promptfoo 测试和评估LLM提示,包括DeepSeek模型。比较不同的LLM提供商,捕获回归,并评估响应。
deepseek-tokenizer 一个高效的轻量级tokenization库,仅依赖`tokenizers`库,不依赖`transformers`等重量级依赖。
CR deepseek-review 🚀 使用 DeepSeek 进行代码审核,支持 GitHub Action 和本地 🚀
Icon WordPress ai助手 对接DeepSeek API用于WordPress站点的AI对话助手、AI文章生成、AI文章总结插件。
Icon ComfyUI-Copilot 基于Comfy-UI框架构建的智能助手,通过自然语言交互简化和增强AI算法调试和部署过程。
Icon LLM4AD LLM4AD 是一个开源、简洁、模块化的基于大模型的自动算法设计平台,使用DeepSeek API进行算法设计。
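Most of the tools listed above talk to the same OpenAI-compatible DeepSeek endpoint documented on the DeepSeek Open Platform. As a point of reference for such integrations, here is a minimal, dependency-free Python sketch; the endpoint URL and the `deepseek-chat` model name come from the platform docs, while the helper names and the `DEEPSEEK_API_KEY` environment variable are illustrative assumptions:

```python
import json
import os
import urllib.request

# OpenAI-compatible chat-completions endpoint of the DeepSeek API.
API_URL = "https://api.deepseek.com/chat/completions"

def build_request(prompt: str, model: str = "deepseek-chat") -> dict:
    """Build an OpenAI-style chat-completion payload for the DeepSeek API."""
    return {
        "model": model,
        "messages": [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": prompt},
        ],
        "stream": False,
    }

def chat(prompt: str, api_key: str) -> str:
    """Send a single chat turn and return the assistant's reply text."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(build_request(prompt)).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]

if __name__ == "__main__":
    key = os.environ.get("DEEPSEEK_API_KEY")  # assumed to hold your API key
    if key:
        print(chat("Hello!", key))
```

Swapping `deepseek-chat` for `deepseek-reasoner` selects the R1 reasoning model; everything else stays the same because both models share the OpenAI wire format.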
diff --git a/README_ja.md b/README_ja.md new file mode 100644 index 0000000..9292439 --- /dev/null +++ b/README_ja.md @@ -0,0 +1,560 @@ +
+ +

+Awesome DeepSeek Integrations +

# Awesome DeepSeek Integrations ![Awesome](https://cdn.rawgit.com/sindresorhus/awesome/d7305f38d29fed78fa85652e3a63e154dd8e8829/media/badge.svg)

DeepSeek API を人気のソフトウェアに統合します。API キーを取得するには、[DeepSeek Open Platform](https://platform.deepseek.com/)にアクセスしてください。

[English](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README.md)/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/README_cn.md)/日本語
+ +
+
### アプリケーション
Icon4EVERChat4EVERChatは、数百のLLMを統合したインテリジェントなモデル選択プラットフォームで、モデルのパフォーマンスをリアルタイムで比較可能です。4EVERLAND AI RPCの統一APIエンドポイントを活用し、コストフリーでモデル切り替えを実現し、応答が速くコストの低い組み合わせを自動的に選択します。
IconDeepChatDeepChat は、強力な DeepSeek モデルを内蔵した完全に無料のデスクトップ インテリジェント アシスタントです。複数ラウンドの会話、オンライン検索、ファイルのアップロード、ナレッジ ベースなどの複数の機能をサポートします。
Icon Quantalogic QuantaLogicは、高度なAIエージェントを構築するためのReAct(推論と行動)フレームワークです。
Icon Chatbox Chatboxは、Windows、Mac、Linuxで利用可能な複数の最先端LLMモデルのデスクトップクライアントです。
Icon ChatGPT-Next-Web ChatGPT Next Webは、GPT3、GPT4、Gemini ProをサポートするクロスプラットフォームのChatGPTウェブUIです。
Icon Coco AI Coco AI は、完全にオープンソースでクロスプラットフォーム対応の統合検索および生産性向上ツールで、アプリケーション、ファイル、Google Drive、Notion、Yuque、Hugoなど、ローカルおよびクラウドのさまざまなデータソースを接続して検索できます。DeepSeekなどの大規模モデルと連携することにより、Coco AIはインテリジェントな個人のナレッジ管理を実現し、プライバシーを重視し、プライベートなデプロイにも対応。ユーザーが情報に迅速かつインテリジェントにアクセスできるようサポートします。
Icon Liubai Liubaiは、WeChat上でDeepSeekを使用してノート、タスク、カレンダー、ToDoリストを操作できるようにします!
Icon Pal - AI Chat Client
(iOS, iPadOS)
Palは、iOS上でカスタマイズされたチャットプレイグラウンドです。
LibreChat LibreChat LibreChatは、DeepSeekをシームレスに統合してAIインタラクションを強化するカスタマイズ可能なオープンソースアプリです。
Icon RSS Translator RSSフィードをあなたの言語に翻訳します!
Icon Enconvo Enconvoは、AI時代のランチャーであり、すべてのAI機能のエントリーポイントであり、思いやりのあるインテリジェントアシスタントです。
IconCherry Studioプロデューサーのための強力なデスクトップAIアシスタント
Icon ToMemo (iOS, iPadOS) フレーズブック+クリップボード履歴+キーボードのiOSアプリで、キーボードからの素早い出力のためにAI大規模モデルを統合しています。
Icon Video Subtitle Master ビデオの字幕を一括生成し、字幕を他の言語に翻訳することができます。これはクライアントサイドのツールで、MacとWindowsの両方のプラットフォームをサポートし、Baidu、Volcengine、DeepLx、OpenAI、DeepSeek、Ollamaなどの複数の翻訳サービスと統合されています。
Icon Chatworm Chatwormは、複数の最先端LLMモデルのためのウェブアプリで、オープンソースであり、Androidでも利用可能です。
Icon Easydict Easydictは、単語の検索やテキストの翻訳を簡単かつエレガントに行うことができる、簡潔で使いやすい翻訳辞書macOSアプリです。大規模言語モデルAPIを呼び出して翻訳を行うことができます。
Icon Raycast Raycastは、macOSの生産性ツールで、いくつかのキーストロークでツールを制御できます。DeepSeek AIを含むさまざまな拡張機能をサポートしています。
PHP Client PHP Client Deepseek PHP Clientは、Deepseek APIとのシームレスな統合のための堅牢でコミュニティ主導のPHPクライアントライブラリです。
Laravel Integration Laravel Integration LaravelアプリケーションとのシームレスなDeepseek API統合のためのLaravelラッパー。
Icon Zotero Zoteroは、研究成果を収集、整理、注釈、引用、共有するのに役立つ無料で使いやすいツールです。
Icon SiYuan SiYuanは、完全にオフラインで使用できるプライバシー優先の個人知識管理システムであり、エンドツーエンドの暗号化データ同期を提供します。
Icon go-stock go-stockは、Wailsを使用してNativeUIで構築され、LLMによって強化された中国株データビューアです。
Wordware Wordware Wordwareは、誰でも自然言語だけでAIスタックを構築、反復、デプロイできるツールキットです。
Icon Dify Difyは、アシスタント、ワークフロー、テキストジェネレーターなどのアプリケーションを作成するためのDeepSeekモデルをサポートするLLMアプリケーション開発プラットフォームです。
Big-AGI Big-AGI Big-AGIは、誰もが高度な人工知能にアクセスできるようにするための画期的なAIスイートです。
Icon LiberSonora LiberSonoraは、「自由の声」を意味し、AIによって強化された強力なオープンソースのオーディオブックツールキットであり、インテリジェントな字幕抽出、AIタイトル生成、多言語翻訳などの機能を備え、GPUアクセラレーションとバッチオフライン処理をサポートしています。
Icon Bob Bobは、任意のアプリで使用できるmacOSの翻訳およびOCRツールです。
Icon AgenticFlow AgenticFlowは、マーケターがAIエージェントのためのエージェンティックAIワークフローを構築するためのノーコードプラットフォームであり、数百の毎日のアプリをツールとして使用します。
Icon Alphaパイ AI投資研究エージェント/次世代の金融情報エントリーポイント。投資家を代理して会議に出席し、AI議事録を取るほか、金融投資情報の検索・質問応答やエージェントを駆使した定量分析など、投資研究業務を支援します。
Icon argo ローカルでダウンロードし、Mac、Windows、Linux 上でOllamaとHuggingfaceモデルをRAGで実行します。LLM APIもサポートしています。
Icon PeterCat 会話型Q&Aエージェントの構成システム、自ホスト型デプロイメントソリューション、および便利なオールインワンアプリケーションSDKを提供し、GitHubリポジトリのためのインテリジェントQ&Aボットをワンクリックで作成し、さまざまな公式ウェブサイトやプロジェクトに迅速に統合し、コミュニティのためのより効率的な技術サポートエコシステムを提供します。
Icon FastGPT FastGPT は大規模言語モデル(LLM)を基盤としたオープンソースAIナレッジベース構築プラットフォームで、DeepSeekやOpenAIなど様々なモデルをサポートしています。データ処理、モデル呼び出し、RAG検索、ビジュアルAIワークフロー設計などの導入即使用可能な機能を提供し、複雑なAIアプリケーションの構築を容易に実現します。
Icon Chatgpt-on-Wechat ChatGPT-on-WeChat (CoW) プロジェクトは、DeepSeek、OpenAI、Claude、Qwen など複数の LLM を、WeChat 公式アカウント、企業微信、飛書、DingTalk、ウェブサイトなどの一般的なプラットフォームやオフィスソフトにシームレスに統合できる柔軟なチャットボットフレームワークです。また、豊富なカスタムプラグインもサポートしています。
Icon TigerGPT TigerGPT は、Tiger グループが開発した、OpenAI ベースの初の金融 AI 投資アシスタントです。2025年2月18日に DeepSeek-R1 モデルを正式に統合し、深い推論をサポートするオンライン Q&A サービスを投資家に提供しています。
Icon HIX.AI DeepSeek を無料でお試しいただき、 HIX.AI で AI チャットを無制限にお楽しみください。AI チャット、ライティング、コーディングなどに DeepSeek R1 をご利用ください。今すぐ次世代の AI チャットを体験してください!
Icon Askanywhere どこでも単語を選択して、Deepseekとの会話を素早く開くことができる単語選択AIアシスタント!
アイコン 1chat DeepSeek-R1モデルとローカルでチャットできるiOSアプリです。
iOS AI チャットボット 1つのアプリで250以上のテキスト、画像の大規模モデルにアクセス 1AI iOSチャットボットは250以上のテキスト、画像、音声モデルと統合されており、OpenRouterやReplicateの任意のモデル、Deepseek推論およびDeepseek V3モデルと話すことができます。
PopAi PopAi PopAiがDeepSeek R1を発表!PopAiで遅延のない、超高速なパフォーマンスをお楽しみください。 オンライン検索のオン/オフをシームレスに切り替え可能です。
### AI エージェントフレームワーク
Icon Anda 高度にコンポーザブルで自律的かつ永続的な記憶を持つAIエージェントネットワークを構築するために設計されたRustフレームワーク。
Icon ATTPs エージェント間の信頼できる通信のための基本プロトコルフレームワークです。利用者はATTPsのSDKを導入することで、エージェントの登録、検証可能なデータの送信、検証可能なデータの取得などの機能を利用することができます。
Icon 如知AIノート 如知AIノートは、AIを活用したインテリジェントな知識管理ツールで、AI検索・探索、AI結果のノート変換、ノート管理・整理、知識の表示・共有など、ワンストップの知識管理・応用サービスを提供します。DeepSeek深層思考モデルを統合し、より安定した、より高品質な出力を提供します。
图标 translate.js フロントエンド開発者向けの AI i18n。2行の JS で HTML の全自動翻訳を実現し、数十言語をワンクリックで切り替えられます。ページの変更や言語設定ファイルは不要で、数十種類の微調整用拡張命令をサポートし、SEO フレンドリーです。標準のテキスト翻訳 API インターフェースも公開しています。
Icon agentUniverse agentUniverseは、複雑なビジネスシーン向けに設計されたマルチエージェント協調フレームワークです。迅速で使いやすい大規模モデルのインテリジェントエージェントアプリケーション構築能力を提供し、特にエージェント間の協調スケジューリング、自律的な意思決定、動的なフィードバックなどのメカニズムに重点を置いています。これは、Ant Groupの金融業界における実践的なビジネス経験に基づいて開発されました。agentUniverseは、2024年6月にDeepSeekシリーズモデルのサポートを全面的に統合しました。
### RAG フレームワーク
Icon RAGFlow 深い文書理解に基づいたオープンソースのRAG(Retrieval-Augmented Generation)エンジン。RAGFlowは、あらゆる規模の企業や個人に対して、ユーザーのさまざまな複雑な形式のデータに対して信頼性のある質問応答と根拠のある引用を提供するための簡素化されたRAGワークフローを提供します。
Icon Autoflow AutoFlow は、GraphRAGに基づくオープンソースのナレッジベースツールであり、TiDB Vector、LlamaIndex、DSPy の上に構築されています。Perplexity のような検索インターフェースを提供し、シンプルな JavaScript スニペットを埋め込むことで、AutoFlow の対話型検索ウィンドウを簡単にウェブサイトに統合できます。
Icon DeepSearcher DeepSearcher は、強力な大規模言語モデル(DeepSeek、OpenAI など)とベクトルデータベース(Milvus など)を組み合わせて、私有データに基づく検索、評価、推論を行い、高精度な回答と包括的なレポートを提供します。
Icon KAG KAG は、OpenSPG エンジンと大規模言語モデルに基づく論理的推論質問応答フレームワークです。垂直ドメイン知識ベース用の論理的推論質問応答ソリューションを構築するために使用されます。 KAG は、従来の RAG ベクトル類似度計算の曖昧さと、OpenIE によって導入された GraphRAG のノイズ問題を効果的に克服できます。 KAG は、論理的推論、マルチホップの事実に基づく質問への回答などをサポートします。
### FHE フレームワーク
Icon Mind FHE Rust SDK

AI を完全準同型暗号(FHE)で暗号化し、Mind Network と統合してエージェントのコンセンサスを実現するオープンソース SDK。FHE は暗号学の聖杯とされており、復号せずに暗号化データ上で直接計算を実行できます。FHE を活用することで、エージェントは Deepseek を使用しながらプライバシーを保護し、モデルの整合性と計算結果の合意を確保できます。さらに、データを一切公開することなく Mind Network に接続することで、このプロセスが実現されます。この SDK のソースコードは純 Rust で実装されており、crates.io からも入手可能です。

### Solana フレームワーク
Icon Solana Agent Kit AIエージェントをSolanaプロトコルに接続するためのオープンソースツールキット。DeepSeek LLMを使用する任意のエージェントが、60以上のSolanaアクションを自律的に実行できます。
### 合成データのカテゴリ化
Icon Dingo 総合的なデータ品質評価ツール。
### IM アプリケーションプラグイン
Icon HuixiangDou
(wechat,lark)
個人のWeChatおよびFeishuでのドメイン知識アシスタントで、質問に答えることに焦点を当てています。
Icon QChatGPT
(QQ)
高い安定性、プラグインサポート、リアルタイムネットワーキングを備えたQQチャットボット。
Icon NoneBot
(QQ, Lark, Discord, TG, etc.)
NoneBotフレームワークを基に、インテリジェントな会話と深い思考機能をサポートします。QQ/飛書/Discord/Telegramなど多種多様なメッセージプラットフォームに対応しています。
### ブラウザ拡張機能
Icon Immersive Translate Immersive Translateは、バイリンガルのウェブページ翻訳プラグインです。
Icon Immersive Reading Guide サイドバーなし!!! 没入型のAIウェブ要約、質問をする...
Icon ChatGPT Box ChatGPT Boxは、ブラウザに統合されたChatGPTで、完全に無料です。
Icon hcfy (划词翻译) hcfy (划词翻译)は、複数の翻訳サービスを統合するウェブブラウザ拡張機能です。
Icon Lulu Translate このプラグインは、マウス選択翻訳、段落ごとの比較翻訳、およびPDF文書翻訳機能を提供します。DeepSeek AI、Bing、GPT、Googleなどのさまざまな翻訳エンジンを利用できます。
Icon FluentRead 誰もが母国語のような読書体験を持つことができる革新的なオープンソースのブラウザ翻訳プラグイン
Icon RssFlow AIを活用したRSS要約と多次元フィードビューを備えたインテリジェントなRSSリーダーブラウザ拡張機能。コンテンツ理解を強化するためのDeepSeekモデル設定をサポートしています。
Icon DeepChat 任意のウェブサイト上でサイドバーを開いてDeepSeekとチャットできるChrome拡張機能です。さらに、任意のウェブサイト上の選択したテキストの下にフローティングメニューを表示し、テキストの要約生成、文法チェック、コンテンツの翻訳を行うことができます。
Icon Ncurator ナレッジベース AI Q&Aアシスタント – AIがあなたの知識の整理と分析をお手伝いします
Icon Typral 超高速AIライティングアシスタント - AIがあなたの日報、記事、テキストなどを素早く最適化します
Icon Trancy イマーシブな二か国語対照翻訳、動画の二か国語字幕、文/単語の選択翻訳プラグイン
Icon Anything Copilot Anything Copilotは、サイドバーから直接主要なAIツールにシームレスにアクセスできるようにするブラウザ拡張機能です。
### VS Code 拡張機能
Icon Continue Continueは、IDEのオープンソースの自動操縦です。
Icon Cline Clineは、CLIとエディタを使用できるAIアシスタントです。
Icon AI Commit VS Code で AI を使用して git commit message を生成するプラグイン。
Icon SeekCode Copilot vscode インテリジェント コーディング アシスタントは、ローカルにデプロイされた DeepSeek モデルの構成をサポートします
### neovim 拡張機能
Icon avante.nvim avante.nvimは、IDEのオープンソースの自動操縦です。
Icon llm.nvim NeovimでLLMと対話できる無料の大規模言語モデル(LLM)プラグイン。Deepseek、GPT、GLM、Kimi、またはローカルLLM(ollamaなど)など、任意のLLMをサポートします。
Icon codecompanion.nvim Neovimでシームレスに統合されたAI駆動のコーディング。
### JetBrains 拡張機能
Icon AutoDev AutoDevは、JetBrainsのIDEでのオープンソースのAIコーディングアシスタントです。
Icon Onegai Copilot Onegai Copilotは、JetBrainsのIDEでのAIコーディングアシスタントです。
Icon Continue Continueは、IDEのオープンソースの自動操縦です。
Icon Chinese-English Translate JetBrainsのIDEでの複数の翻訳サービス。
Icon AI Git Commit このプラグインは、コードの変更に基づいてコミットメッセージを自動生成するためにAIを使用します。
### AI コードエディタ
Icon Cursor AIコードエディタ
Icon WindSurf Codeiumによる、VS CodeをベースにしたAIコードエディタ
### Emacs
Icon gptel EmacsのためのシンプルなLLMクライアント
Icon Minuet AI コードでインテリジェンスとダンス💃
### その他
🤖 Wechat-Bot WeChaty をベースに、DeepSeek とその他の AI サービスを組み合わせた WeChat ボットです。
🐠 Abso OpenAIフォーマットを使用するあらゆるLLMプロバイダと対話するためのTypeScript SDK.
Icon Siri Ultra GitHubで1000以上のスターを獲得した、インターネット接続、マルチターン会話、DeepSeekシリーズモデルをサポートするプロジェクト
Icon siri_deepseek_shortcut DeepSeek APIを装備したSiri
Icon n8n-nodes-deepseek DeepSeek APIをワークフローに直接統合するためのN8Nコミュニティノード。
Icon LiteLLM 100以上のLLM APIをOpenAI形式で呼び出すためのPython SDK、プロキシサーバー(LLMゲートウェイ)。DeepSeek AIもサポートし、コスト追跡も可能です。
Icon Mem0 Mem0は、AIアシスタントにインテリジェントなメモリレイヤーを追加し、パーソナライズされたインタラクションと継続的な学習を可能にします。
Icon Geneplore AI Geneplore AIは、Deepseek v3およびR1を搭載した最大のAI Discordボットの1つを運営しています。
Icon promptfoo DeepSeekモデルを含むLLMプロンプトのテストと評価。さまざまなLLMプロバイダーを比較し、回帰を検出し、応答を評価します。
Icon ComfyUI-Copilot Comfy-UIフレームワーク上に構築されたインテリジェントアシスタント。自然言語による対話を通じて、AIアルゴリズムのデバッグおよびデプロイプロセスを簡素化し、効率化します。
Icon LLM4AD LLM4AD は、大規模言語モデル(LLM)を活用した自動アルゴリズム設計のための統一されたオープンソースのPythonベースのプラットフォームです。
Icon AI-Infra-Guard テンセント混元セキュリティチーム - AIインフラのセキュリティ評価ツールで、AIシステムにおける潜在的なセキュリティリスクを発見・検出することを目的としています。
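Several entries above (1AI, TigerGPT, HIX.AI) highlight DeepSeek-R1's deep reasoning. When the R1 API is called directly, the response message carries the chain of thought in a `reasoning_content` field alongside the final `content`; the sketch below assumes that documented response shape, plus a hypothetical `DEEPSEEK_API_KEY` environment variable:

```python
import json
import os
import urllib.request

def split_reasoning(response: dict) -> tuple:
    """Return (reasoning, answer) from a deepseek-reasoner chat response."""
    msg = response["choices"][0]["message"]
    # deepseek-reasoner puts the chain of thought in reasoning_content;
    # .get() keeps this working for models that omit the field.
    return msg.get("reasoning_content", ""), msg["content"]

def ask_reasoner(prompt: str, api_key: str) -> tuple:
    """One chat turn against the R1 reasoning model."""
    req = urllib.request.Request(
        "https://api.deepseek.com/chat/completions",
        data=json.dumps({
            "model": "deepseek-reasoner",
            "messages": [{"role": "user", "content": prompt}],
        }).encode("utf-8"),
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return split_reasoning(json.load(resp))

if __name__ == "__main__":
    key = os.environ.get("DEEPSEEK_API_KEY")  # assumed to hold your API key
    if key:
        thinking, answer = ask_reasoner("Which is greater, 9.11 or 9.8?", key)
        print(answer)
```

Note that on a follow-up turn only the final `content` should be appended to the `messages` history; the platform docs advise against sending `reasoning_content` back to the model.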
diff --git a/docs/16x_prompt/README.md b/docs/16x_prompt/README.md new file mode 100644 index 0000000..e0b92d7 --- /dev/null +++ b/docs/16x_prompt/README.md @@ -0,0 +1,18 @@ +# [16x Prompt](https://prompt.16x.engineer/) + +AI Coding with Context Management. + +16x Prompt helps developers manage source code context and craft prompts for complex coding tasks on existing codebases. + +# UI + +![image](assets/16x_prompt_ui.png) + +## Integrate with DeepSeek API + +1. Click on the model selection button at bottom right +2. Click on "DeepSeek API" to automatically fill in API Endpoint +3. Enter model ID, for example `deepseek-chat` (for DeepSeek V3) or `deepseek-reasoner` (for DeepSeek R1) +4. Enter your API key + +![image](assets/16x_prompt_integration.png) \ No newline at end of file diff --git a/docs/16x_prompt/assets/16x_prompt_integration.png b/docs/16x_prompt/assets/16x_prompt_integration.png new file mode 100644 index 0000000..a3d42e6 Binary files /dev/null and b/docs/16x_prompt/assets/16x_prompt_integration.png differ diff --git a/docs/16x_prompt/assets/16x_prompt_ui.png b/docs/16x_prompt/assets/16x_prompt_ui.png new file mode 100644 index 0000000..3c7afbf Binary files /dev/null and b/docs/16x_prompt/assets/16x_prompt_ui.png differ diff --git a/docs/4EVERChat/README.md b/docs/4EVERChat/README.md new file mode 100644 index 0000000..ffe0974 --- /dev/null +++ b/docs/4EVERChat/README.md @@ -0,0 +1,36 @@ + + +# [4EVERChat](https://chat.4everland.org/) + +[4EVERChat](https://chat.4everland.org) is a large language model selection platform that seamlessly integrates hundreds of mainstream LLM models (such as ChatGPT, Deepseek, Grok, etc.). Developers can compare the response performance differences of various models in real-time, accurately identifying the optimal engine for their business needs. The platform requires no API key configuration and is ready to use out of the box. 
+ +Its underlying technology relies on [4EVERLAND AI RPC](https://docs.4everland.org/ai/ai-rpc/quick-start), which not only enables zero-cost switching between hundreds of models through a unified API endpoint but also monitors service status in real-time and automatically activates backup nodes. It dynamically selects model combinations with fast response times and low inference costs, creating an AI development infrastructure that balances stability and cost-effectiveness. + + + +🔗 + +4EVERChat: https://chat.4everland.org/ + +4EVERLAND: https://www.4everland.org/ + +​ + + + +## UI + +#### 4EVERChat + + + +#### 4EVER AI RPC + + + + + +## Add Deepseek Model + + + diff --git a/docs/4EVERChat/README_cn.md b/docs/4EVERChat/README_cn.md new file mode 100644 index 0000000..c7c666a --- /dev/null +++ b/docs/4EVERChat/README_cn.md @@ -0,0 +1,31 @@ + + +# [4EVERChat](https://chat.4everland.org/) + +[4EVERChat](https://chat.4everland.org) 是一个大语言模型选型平台,无缝集成数百款主流LLM模型(如ChatGPT、Deepseek、Grok等),开发者可实时对比不同模型的应答性能差异,精准定位最优业务引擎,平台无需配置API密钥即开即用。 + +底层技术依托 [4EVERLAND AI RPC](https://docs.4everland.org/ai/ai-rpc/quick-start),不仅能通过统一API端点实现百种模型的零成本切换,更会实时监测服务状态并自动启用备用节点,动态选择响应速度快、推理费用低的模型组合,打造兼顾稳定性与性价比的AI开发基础设施。 + + +🔗 + +4EVERChat: https://chat.4everland.org/ + +4EVERLAND: https://www.4everland.org/ + +## UI + +#### 4EVERChat + + + +#### 4EVER AI RPC + + + + + +## 添加Deepseek 模型 + + + diff --git a/docs/4EVERChat/README_ja.md b/docs/4EVERChat/README_ja.md new file mode 100644 index 0000000..2c2849b --- /dev/null +++ b/docs/4EVERChat/README_ja.md @@ -0,0 +1,32 @@ + + +# [4EVERChat](https://chat.4everland.org/) + +[4EVERChat](https://chat.4everland.org) は、大規模言語モデル選択プラットフォームであり、数百の主流LLMモデル(ChatGPT、Deepseek、Grokなど)をシームレスに統合しています。開発者は、さまざまなモデルの応答性能の違いをリアルタイムで比較し、ビジネスニーズに最適なエンジンを正確に特定できます。このプラットフォームはAPIキーの設定が不要で、すぐに使用を開始できます。 + +その基盤技術は[4EVERLAND AI 
RPC](https://docs.4everland.org/ai/ai-rpc/quick-start)に依存しており、統一されたAPIエンドポイントを通じて数百のモデルのゼロコスト切り替えを実現するだけでなく、サービス状態をリアルタイムで監視し、バックアップノードを自動的に有効化します。応答速度が速く、推論コストが低いモデル組み合わせを動的に選択し、安定性とコスト効率を両立したAI開発インフラを構築します。 + + + +🔗 + +4EVERChat: https://chat.4everland.org/ + +4EVERLAND: https://www.4everland.org/ + +## UI + +#### 4EVERChat + + + +#### 4EVER AI RPC + + + + + +## Deepseek モデルを追加 + + + diff --git a/docs/ATTPs/README.md b/docs/ATTPs/README.md new file mode 100644 index 0000000..4b6360f --- /dev/null +++ b/docs/ATTPs/README.md @@ -0,0 +1,379 @@ + +# APRO-COM/ATTPs-framework + +Foundation framework that enables advanced agent based on DeepSeek interactions, data verification, and price queries with [ATTPs Protocol](https://docs.apro.com/attps) . It streamlines agent creation, verification processes, and provides a flexible framework for building robust agent-based solutions. + +For more details about ATTPs, you can see the [whitepaper here](https://www.apro.com/attps.pdf) + +## Overview + +The ATTPs framework bridges agent-based logic with the DeepSeek. It handles agent registration, data verification, and price queries, empowering both automated and user-driven workflows. 
+ +## Features + +### Agent Operations +- **Agent Creation**: Deploy new agents with custom settings +- **Registration**: Register agents on-chain or via standardized processes +- **Multi-Signer Framework**: Supports threshold-based approval flows + +### Data Verification +- **Chain Validation**: Verify data authenticity on-chain +- **Transaction Execution**: Handle verification logic with built-in security checks +- **Auto-Hashing**: Convert raw data to hashed formats when needed +- **Metadata Parsing**: Validate content type, encoding, and compression + +### Price Queries +- **Live Price Data**: Fetch price information for various pairs +- **Format Validation**: Normalize user query inputs to standard trading-pair formats +- **APIs Integration**: Retrieve real-time or near-real-time pricing information + +## Security Features + +### Access Control +- **Private Key Management**: Safe usage of private keys for transaction signing +- **Environment Variables**: Secure injection of credentials +- **On-Chain Validation**: Leverage on-chain contract checks + +### Verification +- **Input Validation**: Strict schema checks before on-chain operations +- **Transaction Receipts**: Provide verifiable transaction details +- **Error Handling**: Detailed error logs for quick debugging + +## Performance Optimization + +1. **Cache Management** + - Implement caching for frequent queries + - Monitor retrieval times and cache hits + +2. **Network Efficiency** + - Batch requests where possible + - Validate response parsing to reduce overhead + +## System Requirements +- Node.js 16.x or higher +- Sufficient network access to on-chain endpoints +- Basic configuration of environment variables +- Minimum 4GB RAM recommended + +## Troubleshooting + +1. **Invalid Agent Settings** + - Ensure signers and threshold are correct + - Validate agentHeader for proper UUIDs and numeric values + +2. 
**Verification Failures** + - Check the input data formats + - Confirm environment variables are set + +3. **Price Query Errors** + - Verify the trading pair format + - Check external API availability + +## Safety & Security + +1. **Credential Management** + - Store private keys securely + - Do not commit secrets to version control + +2. **Transaction Limits** + - Configure thresholds to mitigate abuse + - Log transaction attempts and failures + +3. **Monitoring & Logging** + - Track unusual activity + - Maintain detailed audit logs + + +# Usage with js + +## Installation + +```bash +npm install ai-agent-sdk-js +``` + +## Configuration + +Configure the plugin by setting environment variables or runtime settings: +- APRO_RPC_URL +- APRO_PROXY_ADDRESS +- APRO_PRIVATE_KEY +- APRO_CONVERTER_ADDRESS +- APRO_AUTO_HASH_DATA + +## Usage with js sdk + +To use the AI Agent SDK, import the library and create an instance of the `Agent` class: + +```typescript +import { AgentSDK } from 'ai-agent-sdk-js' + +const agent = new AgentSDK({ + rpcUrl: 'https://bsc-testnet-rpc.publicnode.com', + privateKey: '', + proxyAddress: '', +}) + +// if you want the SDK to hash the data automatically +const autoHashAgent = new AgentSDK({ + rpcUrl: 'https://bsc-testnet-rpc.publicnode.com', + privateKey: '', + proxyAddress: '', + autoHashData: true, + converterAddress: '', +}) +``` + +To create a new agent, call the `createAndRegisterAgent` method: + +```typescript +import type { AgentSettings, TransactionOptions } from 'ai-agent-sdk-js' +import { randomUUID } from 'node:crypto' +import { parseUnits } from 'ethers' + +// prepare the agent settings +const agentSettings: AgentSettings = { + signers: [], + threshold: 3, + converterAddress: '', + agentHeader: { + messageId: randomUUID(), + sourceAgentId: randomUUID(), + sourceAgentName: 'AI Agent SDK JS', + targetAgentId: '', + timestamp: Math.floor(Date.now() / 1000), + messageType: 0, + priority: 1, + ttl: 3600, + }, +} + +// prepare the transaction 
options +const nonce = await agent.getNextNonce() +const transactionOptions: TransactionOptions = { + nonce, + gasPrice: parseUnits('1', 'gwei'), + gasLimit: BigInt(2000000), +} + +const tx = await agent.createAndRegisterAgent({ agentSettings, transactionOptions }) + +// or you can leave the transaction options empty, the SDK will use the auto-generated values +// const tx = await agent.createAndRegisterAgent({ agentSettings }) +``` + +The SDK also provides the tool to extract the new agent address from the transaction receipt: + +```typescript +import { parseNewAgentAddress } from 'ai-agent-sdk-js' + +const receipt = await tx.wait() +const agentAddress = parseNewAgentAddress(receipt) +``` + +To verify the data integrity, call the `verify` method: + +```typescript +import type { MessagePayload } from 'ai-agent-sdk-js' +import { hexlify, keccak256, toUtf8Bytes } from 'ethers' + +// prepare the payload +const data = hexlify(toUtf8Bytes('Hello World!')) +const dataHash = keccak256(data) +const payload: MessagePayload = { + data, + dataHash, + signatures: [ + { + r: '', + s: '', + v: 1, // 1, 0, 27, 28 are allowed + }, + // ... + ], + metadata: { + contentType: '', + encoding: '', + compression: '', + }, +} + +const tx = await agent.verify({ payload, agent: '', digest: '' }) +``` + +If the data is obtained from the APRO DATA pull service, you can use the auto-hash feature: + +```typescript +import type { MessagePayload } from 'ai-agent-sdk-js' + +const payload: MessagePayload = { + data: '0x...', + signatures: [ + { + r: '', + s: '', + v: 1, // 1, 0, 27, 28 are allowed + }, + // ... + ], + metadata: { + contentType: '', + encoding: '', + compression: '', + }, +} + +// When +const tx = await autoHashAgent.verify({ payload, agent: '', digest: '' }) +``` + +For more examples, see the [test](https://github.com/APRO-com/ai-agent-sdk-js/tree/main/test) cases. 
+ + + +# Usage with Python + +## Installation + +```bash +$ pip3 install ai-agent-sdk + +``` + +## Usage with Python SDK + +### Initialize AgentSDK + +```python +from ai_agent.agent import AgentSDK + +AGENT_PROXY_ADDRESS = "0x07771A3026E60776deC8C1C61106FB9623521394" +NETWORK_RPC = "https://testnet-rpc.bitlayer.org" + +agent = AgentSDK(endpoint_uri=NETWORK_RPC, proxy_address=AGENT_PROXY_ADDRESS) +``` + +To create a new agent, call the createAndRegisterAgent method: + +```python +import time +from ai_agent.entities import ( + AgentSettings, + AgentHeader, + MessageType, + Priority +) +from ai_agent.utils import ( + generate_uuid_v4 +) + +AGENT_SETTINGS = AgentSettings( + signers=[ + "0x4b1056f504f32c678227b5Ae812936249c40AfBF", + "0xB973476e0cF88a3693014b99f230CEB5A01ac686", + "0x6cF0803D049a4e8DC01da726A5a212BCB9FAC1a1", + "0x9D46daa26342e9E9e586A6AdCEDaD667f985567B", + "0x33AF673aBcE193E20Ee94D6fBEb30fEf0cA7015b", + "0x868D2dE4a0378450BC62A7596463b30Dc4e3897E", + "0xD4E157c36E7299bB40800e4aE7909DDcA8097f67", + "0xA3866A07ABEf3fD0643BD7e1c32600520F465ca8", + "0x62f642Ae0Ed7F12Bc40F2a9Bf82ccD0a3F3b7531" + ], + threshold=2, + converter_address="0xaB303EF87774D9D259d1098E9aA4dD6c07F69240", + agent_header=AgentHeader( + version="1.0", + message_id="d4d0813f-ceb7-4ce1-8988-12899b26c4b6", + source_agent_id="da70f6b3-e580-470f-b88b-caa5369e7778", + source_agent_name="APRO Pull Mode Agent", + target_agent_id="", + timestamp=int(time.time()), + message_type=MessageType.Event, + priority=Priority.Low, + ttl=60 * 60 + ) +) + +dynamic_setting = AGENT_SETTINGS +dynamic_setting.agent_header.source_agent_id = generate_uuid_v4() +dynamic_setting.agent_header.target_agent_id = generate_uuid_v4() +dynamic_setting.agent_header.message_id = generate_uuid_v4() +user_owner = agent.add_account("0x_user_private_key") +result = agent.create_and_register_agent( + transmitter="", + nonce=None, + settings=AGENT_SETTINGS) +print("created agent:", result) + +``` +To verify the data integrity, 
call the verify method: + +```python +from ai_agent.entities import ( + AgentMessagePayload, + Proofs, + AgentMetadata, +) + +AGENT_CONTRACT = "0xA1903361Ee8Ec35acC7c8951b4008dbE8D12C155" +AGENT_SETTING_DIGEST = "0x010038164dba6abffb84eb5cb538850d9bc5d8f815149a371069b3255fd177a4" +AGENT_PAYLOAD = AgentMessagePayload( + data="0x0006e706cf7ab41fa599311eb3de68be869198ce62aef1cd079475ca50e5b3f60000000000000000000000000000000000000000000000000000000002b1bf0e000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000e0000000000000000000000000000000000000000000000000000000000000022000000000000000000000000000000000000000000000000000000000000002a0000101000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000001200003665949c883f9e0f6f002eac32e00bd59dfe6c34e92a91c37d6a8322d6489000000000000000000000000000000000000000000000000000000006762677d000000000000000000000000000000000000000000000000000000006762677d000000000000000000000000000000000000000000000000000003128629ec0800000000000000000000000000000000000000000000000004db732547630000000000000000000000000000000000000000000000000000000000006763b8fd0000000000000000000000000000000000000000000015f0f60671beb95cc0000000000000000000000000000000000000000000000015f083baa654a7b900000000000000000000000000000000000000000000000015f103ec7cb057ea80000000000000000000000000000000000000000000000000000000000000000003b64f7e72208147bb898e8b215d0997967bef0219263726c76995d8a19107d6ba5306a176474f9ccdb1bc5841f97e0592013e404e15b0de0839b81d0efb26179f222e0191269a8560ebd9096707d225bc606d61466b85d8568d7620a3b59a73e800000000000000000000000000000000000000000000000000000000000000037cae0f05c1bf8353eb5db27635f02b40a534d4192099de445764891198231c597a303cd15f302dafbb1263eb6e8e19cbacea985c66c6fed3231fd84a84ebe0276f69f481fe7808c339a04ceb905bb49980846c8ceb89a27b1c09713cb356f773", + 
data_hash="0x53d9f133f1265bd4391fcdf89b63424cbcfd316c8448f76cc515647267ac0a8e", + proofs=Proofs( + zk_proof="0x", + merkle_proof="0x", + signature_proof="0x000000000000000000000000000000000000000000000000000000000000006000000000000000000000000000000000000000000000000000000000000000e000000000000000000000000000000000000000000000000000000000000001600000000000000000000000000000000000000000000000000000000000000003b64f7e72208147bb898e8b215d0997967bef0219263726c76995d8a19107d6ba5306a176474f9ccdb1bc5841f97e0592013e404e15b0de0839b81d0efb26179f222e0191269a8560ebd9096707d225bc606d61466b85d8568d7620a3b59a73e800000000000000000000000000000000000000000000000000000000000000037cae0f05c1bf8353eb5db27635f02b40a534d4192099de445764891198231c597a303cd15f302dafbb1263eb6e8e19cbacea985c66c6fed3231fd84a84ebe0276f69f481fe7808c339a04ceb905bb49980846c8ceb89a27b1c09713cb356f7730000000000000000000000000000000000000000000000000000000000000003000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000000010000000000000000000000000000000000000000000000000000000000000001", + ), + meta_data=AgentMetadata( + content_type="0x", + encoding="0x", + compression="0x" + ) +) +user_owner = agent.add_account("0x_user_private_key") +result = agent.verify( + transmitter=user_owner, + nonce=None, + agent_contract=AGENT_CONTRACT, + settings_digest=AGENT_SETTING_DIGEST, + payload=AGENT_PAYLOAD +) +print("verify:", result) +``` +For more examples, see the [test cases](https://github.com/APRO-com/ai-agent-sdk-python/tree/main/tests). + + +# Other SDKs + +JAVA: https://github.com/APRO-com/ai-agent-sdk-java + +RUST: https://github.com/APRO-com/ai-agent-sdk-rust + +GOLANG: https://github.com/APRO-com/ai-agent-sdk-go + +# Support + +For issues or feature requests: +1. Check existing documentation +2. Submit a GitHub issue with relevant details +3. Include transaction logs and system info if applicable + +# Contributing + +We welcome pull requests! 
Refer to the project’s CONTRIBUTING.md and open discussions to coordinate efforts. + +# Credits + +- [APRO](https://www.apro.com/) - Plugin sponsor and partner +- [ai-agent-sdk-js](https://github.com/APRO-com/ai-agent-sdk-js) - Underlying agent SDK +- [ethers.js](https://docs.ethers.io/) - Transaction and contract interaction +- Community contributors for feedback and testing + +For more information about Apro plugin capabilities: + +- [Apro Documentation](https://docs.apro.com/en) + +# License + +This plugin is part of the Eliza project. Refer to the main project repository for licensing details. \ No newline at end of file diff --git a/docs/Alpha派/README.md b/docs/Alpha派/README.md new file mode 100644 index 0000000..dab4aa3 --- /dev/null +++ b/docs/Alpha派/README.md @@ -0,0 +1,14 @@ + + +# Alpha Pai + +AI Research Assistant / The Next-Generation Financial Information Portal Driven by AI.
+Proxy for investors to attend meetings and take notes, as well as providing search and Q&A services for financial information and quantitative analysis for investment research. +## UI + + + + + + + diff --git a/docs/Alpha派/README_cn.md b/docs/Alpha派/README_cn.md new file mode 100644 index 0000000..7be27ec --- /dev/null +++ b/docs/Alpha派/README_cn.md @@ -0,0 +1,13 @@ + + +# Alpha派 + +AI投研助理/AI驱动的新一代金融信息入口。代理投资者听会/记纪要,金融投资信息的搜索问答/定量分析等投资研究工作。 + +## UI + + + + + + diff --git a/docs/Alpha派/README_ja.md b/docs/Alpha派/README_ja.md new file mode 100644 index 0000000..faf3ec9 --- /dev/null +++ b/docs/Alpha派/README_ja.md @@ -0,0 +1,13 @@ + + +# Alphaパイ + +AI投資研究エージェント/次世代の金融情報エントリーポイント。投資家を代理して会議に出席し、AI議事録を取るほか、金融投資情報の検索・質問応答やエージェント駆使した定量分析など、投資研究業務を支援します。 + +## UI + + + + + + diff --git a/docs/Alpha派/assets/Alpha派-0.png b/docs/Alpha派/assets/Alpha派-0.png new file mode 100644 index 0000000..e98cb90 Binary files /dev/null and b/docs/Alpha派/assets/Alpha派-0.png differ diff --git a/docs/Alpha派/assets/Alpha派-1.png b/docs/Alpha派/assets/Alpha派-1.png new file mode 100644 index 0000000..66a66e8 Binary files /dev/null and b/docs/Alpha派/assets/Alpha派-1.png differ diff --git a/docs/Alpha派/assets/Alpha派-2.png b/docs/Alpha派/assets/Alpha派-2.png new file mode 100644 index 0000000..8cf9d2f Binary files /dev/null and b/docs/Alpha派/assets/Alpha派-2.png differ diff --git a/docs/Alpha派/assets/favicon.png b/docs/Alpha派/assets/favicon.png new file mode 100644 index 0000000..1ea576d Binary files /dev/null and b/docs/Alpha派/assets/favicon.png differ diff --git a/docs/Alpha派/assets/favicon1.png b/docs/Alpha派/assets/favicon1.png new file mode 100644 index 0000000..33bf996 Binary files /dev/null and b/docs/Alpha派/assets/favicon1.png differ diff --git a/docs/Coco AI/README.md b/docs/Coco AI/README.md new file mode 100644 index 0000000..95f9839 --- /dev/null +++ b/docs/Coco AI/README.md @@ -0,0 +1,16 @@ + + + + +# Coco AI - Connect & Collaborate + +Coco AI is a unified search platform that connects all 
your enterprise applications and data—Google Workspace, Dropbox, Confluent Wiki, GitHub, and more—into a single, powerful search interface. This repository contains the **COCO App**, built for both **desktop and mobile**. The app allows users to search and interact with their enterprise data across platforms. + +In addition, COCO offers a **Gen-AI Chat for Teams**—imagine **Deepseek** but tailored to your team’s unique knowledge and internal resources. COCO enhances collaboration by making information instantly accessible and providing AI-driven insights based on your enterprise's specific data. + +# UI +![alt text](assets/coco-deepseek-1.png) +![alt text](assets/coco-deepseek-2.png) +![alt text](assets/coco-deepseek-3.png) +![alt text](assets/coco-deepseek-4.png) +![alt text](assets/coco-deepseek-5.png) diff --git a/docs/Coco AI/README_cn.md b/docs/Coco AI/README_cn.md new file mode 100644 index 0000000..6a58a2b --- /dev/null +++ b/docs/Coco AI/README_cn.md @@ -0,0 +1,16 @@ + + + + +# Coco AI - Connect & Collaborate + +Coco AI 是一个统一的搜索平台,能够将您企业的所有应用程序和数据——包括 Google Workspace、Dropbox、Confluence Wiki、GitHub 等——整合到一个强大而统一的搜索界面中。这个仓库包含了适用于桌面端和移动端的 COCO 应用程序。用户可以通过该应用在不同平台上搜索和操作企业数据。 + +此外,Coco 还提供了一个面向团队的生成式人工智能聊天工具——可以将其想象为专为您的团队量身定制的 Deepseek,它能够与团队独特的知识体系和内部资源相结合。Coco 通过即时提供信息访问和基于企业特定数据的人工智能驱动洞察,增强了团队协作能力。 + +# UI +![alt text](assets/coco-deepseek-1.png) +![alt text](assets/coco-deepseek-2.png) +![alt text](assets/coco-deepseek-3.png) +![alt text](assets/coco-deepseek-4.png) +![alt text](assets/coco-deepseek-5.png) diff --git a/docs/Coco AI/assets/coco-deepseek-1.png b/docs/Coco AI/assets/coco-deepseek-1.png new file mode 100644 index 0000000..3bd791f Binary files /dev/null and b/docs/Coco AI/assets/coco-deepseek-1.png differ diff --git a/docs/Coco AI/assets/coco-deepseek-2.png b/docs/Coco AI/assets/coco-deepseek-2.png new file mode 100644 index 0000000..44f552a Binary files /dev/null and b/docs/Coco AI/assets/coco-deepseek-2.png differ diff --git a/docs/Coco 
AI/assets/coco-deepseek-3.png b/docs/Coco AI/assets/coco-deepseek-3.png new file mode 100644 index 0000000..dbde418 Binary files /dev/null and b/docs/Coco AI/assets/coco-deepseek-3.png differ diff --git a/docs/Coco AI/assets/coco-deepseek-4.png b/docs/Coco AI/assets/coco-deepseek-4.png new file mode 100644 index 0000000..417e767 Binary files /dev/null and b/docs/Coco AI/assets/coco-deepseek-4.png differ diff --git a/docs/Coco AI/assets/coco-deepseek-5.png b/docs/Coco AI/assets/coco-deepseek-5.png new file mode 100644 index 0000000..f7625b4 Binary files /dev/null and b/docs/Coco AI/assets/coco-deepseek-5.png differ diff --git a/docs/Coco AI/assets/favicon.png b/docs/Coco AI/assets/favicon.png new file mode 100644 index 0000000..850f0af Binary files /dev/null and b/docs/Coco AI/assets/favicon.png differ diff --git a/docs/Coco AI/assets/og-image.gif b/docs/Coco AI/assets/og-image.gif new file mode 100644 index 0000000..2890455 Binary files /dev/null and b/docs/Coco AI/assets/og-image.gif differ diff --git a/docs/ComfyUI-Copilot/README.md b/docs/ComfyUI-Copilot/README.md new file mode 100644 index 0000000..03a4ff3 --- /dev/null +++ b/docs/ComfyUI-Copilot/README.md @@ -0,0 +1,112 @@ +
+ +# 🎯 ComfyUI-Copilot: Your Intelligent Assistant for Comfy-UI + + + +

+ +
+Version +License +Stars +Issues +Python + +

+ + +👾 _**Alibaba International Digital Commerce**_ 👾 + +:octocat: [**Github**](https://github.com/AIDC-AI/ComfyUI-Copilot) + +
+ +https://github.com/user-attachments/assets/0372faf4-eb64-4aad-82e6-5fd69f349c2c + +## 🌟 Introduction + +Welcome to **ComfyUI-Copilot**, an intelligent assistant built on the Comfy-UI framework that simplifies and enhances the AI algorithm debugging and deployment process through natural language interactions. + +Whether it's generating text, images, or audio, ComfyUI-Copilot offers intuitive node recommendations, workflow building aids, and model querying services to streamline your development process. + +
+ +
+ + +--- + +## 🤔 Why Choose ComfyUI-Copilot? + +- 🍀 **Ease of Use**: Lower the barriers to entry with natural language interaction, making Comfy-UI accessible even for beginners. +- 🍀 **Smart Recommendations**: Leverage AI-driven node suggestions and workflow implementations to boost development efficiency. +- 🍀 **Real-Time Assistance**: Benefit from round-the-clock interactive support to address any issues encountered during development. + +--- + +## 🔥 Core Features + +- 💎 **Interactive Q&A Bot**: Access a robust Q&A platform where users can inquire about model intricacies, node details, and parameter utilization with ease. +- 💎 **Natural Language Node Suggestions**: Employ our advanced search mechanism to swiftly identify desired nodes and enhance workflow construction efficacy. + + +- 💎 **Node Query System**: Dive deeper into nodes by exploring their explanations, parameter definitions, usage tips, and downstream workflow recommendations. + + +- 💎 **Smart Workflow Assistance**: Automatically discern developer needs to recommend and build fitting workflow frameworks, minimizing manual setup time. + + +- 💎 **Model Querying**: Prompt Copilot to seek foundational models and 'lora' based on requirements. +- 💎 **Up-and-Coming Features**: + + - **Automated Parameter Tuning**: Exploit machine learning algorithms for seamless analysis and optimization of critical workflow parameters. + - **Error Diagnosis and Fix Suggestions**: Receive comprehensive error insights and corrective advice to swiftly pinpoint and resolve issues. + +--- + +## 🚀 Getting Started + +**Repository Overview**: Visit the [GitHub Repository](https://github.com/AIDC-AI/ComfyUI-Copilot) to access the complete codebase. + +1. **Installation**: + + ```bash + cd ComfyUI/custom_nodes + git clone git@github.com:AIDC-AI/ComfyUI-Copilot.git + ``` + + or + + ```bash + cd ComfyUI/custom_nodes + git clone https://github.com/AIDC-AI/ComfyUI-Copilot + ``` +2. 
**Activation**: After running the ComfyUI project, find the Copilot activation button at the top-right corner of the board to launch its service. + +3. **Key Generation**: Enter your name and email address on the link, and the api-key will automatically be sent to your email address later. + + + +--- + +## 🤝 Contributions + +We welcome any form of contribution! Feel free to make issues, pull requests, or suggest new features. + +--- + +## 📞 Contact Us + +For any queries or suggestions, please feel free to contact: ComfyUI-Copilot@service.alibaba.com. + +--- + +## 📚 License + +This project is licensed under the MIT License - see the [LICENSE](https://opensource.org/licenses/MIT) file for details. diff --git a/docs/ComfyUI-Copilot/assets/Framework.png b/docs/ComfyUI-Copilot/assets/Framework.png new file mode 100644 index 0000000..400f0ba Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/Framework.png differ diff --git a/docs/ComfyUI-Copilot/assets/comfycopilot_nodes_recommend.gif b/docs/ComfyUI-Copilot/assets/comfycopilot_nodes_recommend.gif new file mode 100644 index 0000000..0b46c48 Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/comfycopilot_nodes_recommend.gif differ diff --git a/docs/ComfyUI-Copilot/assets/comfycopilot_nodes_search.gif b/docs/ComfyUI-Copilot/assets/comfycopilot_nodes_search.gif new file mode 100644 index 0000000..6793ca8 Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/comfycopilot_nodes_search.gif differ diff --git a/docs/ComfyUI-Copilot/assets/keygen.png b/docs/ComfyUI-Copilot/assets/keygen.png new file mode 100644 index 0000000..6dc1211 Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/keygen.png differ diff --git a/docs/ComfyUI-Copilot/assets/logo 2.png b/docs/ComfyUI-Copilot/assets/logo 2.png new file mode 100644 
index 0000000..179510c Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/logo 2.png differ diff --git a/docs/ComfyUI-Copilot/assets/logo.png b/docs/ComfyUI-Copilot/assets/logo.png new file mode 100644 index 0000000..3237ccf Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/logo.png differ diff --git a/docs/ComfyUI-Copilot/assets/start.png b/docs/ComfyUI-Copilot/assets/start.png new file mode 100644 index 0000000..10500b3 Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/start.png differ diff --git a/docs/ComfyUI-Copilot/assets/工作流检索.png b/docs/ComfyUI-Copilot/assets/工作流检索.png new file mode 100644 index 0000000..6c9c862 Binary files /dev/null and b/docs/ComfyUI-Copilot/assets/工作流检索.png differ diff --git a/docs/Geneplore AI/README.md b/docs/Geneplore AI/README.md new file mode 100644 index 0000000..27719e0 --- /dev/null +++ b/docs/Geneplore AI/README.md @@ -0,0 +1,17 @@ +# [Geneplore AI](https://geneplore.com/bot) + +## Geneplore AI is building the world's easiest way to use AI - Use 50+ models, all on Discord + +Chat with the all-new Deepseek v3, GPT-4o, Claude 3 Opus, LLaMA 3, Gemini Pro, FLUX.1, and ChatGPT with **one bot**. Generate videos with Stable Diffusion Video, and images with the newest and most popular models available. + +Don't like how the bot responds? Simply change the model in *seconds* and continue chatting like normal, without adding another bot to your server. No more fiddling with API keys and webhooks - every model is completely integrated into the bot. + +**NEW:** Try the most powerful open AI model, Deepseek v3, for free with our bot. Simply type /chat and select Deepseek in the model list. + +![image](https://github.com/user-attachments/assets/14db7e3c-c2c7-46d7-9fe1-5a5d1e3fc856) + +Use the bot trusted by over 60,000 servers and hundreds of paying subscribers, without the hassle of multiple $20/month subscriptions and complicated programming. + +https://geneplore.com + +© 2025 Geneplore AI, All Rights Reserved. 
diff --git a/docs/HIX.AI/assets/logo.svg b/docs/HIX.AI/assets/logo.svg new file mode 100644 index 0000000..ee496fc --- /dev/null +++ b/docs/HIX.AI/assets/logo.svg @@ -0,0 +1 @@ + diff --git a/docs/Ncurator/README.md b/docs/Ncurator/README.md new file mode 100644 index 0000000..9661cdd --- /dev/null +++ b/docs/Ncurator/README.md @@ -0,0 +1,12 @@ + + +# [Ncurator](https://www.ncurator.com) + +Knowledge Base AI Q&A Assistant - +Let AI help you organize and analyze knowledge + +## UI + + +## Integrate with Deepseek API + \ No newline at end of file diff --git a/docs/Ncurator/README_cn.md b/docs/Ncurator/README_cn.md new file mode 100644 index 0000000..8b9f87b --- /dev/null +++ b/docs/Ncurator/README_cn.md @@ -0,0 +1,11 @@ + + +# [Ncurator](https://www.ncurator.com) + +知识库AI问答助手-让AI帮助你整理与分析知识 + +## UI + + +## 配置 Deepseek API + \ No newline at end of file diff --git a/docs/Ncurator/assets/logo.png b/docs/Ncurator/assets/logo.png new file mode 100644 index 0000000..09bda82 Binary files /dev/null and b/docs/Ncurator/assets/logo.png differ diff --git a/docs/Ncurator/assets/screenshot1.png b/docs/Ncurator/assets/screenshot1.png new file mode 100644 index 0000000..3d1f517 Binary files /dev/null and b/docs/Ncurator/assets/screenshot1.png differ diff --git a/docs/Ncurator/assets/screenshot2.png b/docs/Ncurator/assets/screenshot2.png new file mode 100644 index 0000000..2d8b119 Binary files /dev/null and b/docs/Ncurator/assets/screenshot2.png differ diff --git a/docs/Ncurator/assets/screenshot3.png b/docs/Ncurator/assets/screenshot3.png new file mode 100644 index 0000000..91276b1 Binary files /dev/null and b/docs/Ncurator/assets/screenshot3.png differ diff --git a/docs/PopAi/assets/logo.svg b/docs/PopAi/assets/logo.svg new file mode 100644 index 0000000..bd51aa4 --- /dev/null +++ b/docs/PopAi/assets/logo.svg @@ -0,0 +1,9 @@ + + + + + \ No newline at end of file diff --git a/docs/Siyuan/README.md b/docs/SiYuan/README.md similarity index 93% rename from docs/Siyuan/README.md rename 
to docs/SiYuan/README.md index fb13da8..ec5cdc6 100644 --- a/docs/Siyuan/README.md +++ b/docs/SiYuan/README.md @@ -2,7 +2,7 @@ ‍ -​![image](assets/image-20250122162731-7wkftbw.png)​ +​![image](https://b3log.org/images/brand/siyuan-128.png)​ --- diff --git a/docs/Siyuan/README_cn.md b/docs/SiYuan/README_cn.md similarity index 92% rename from docs/Siyuan/README_cn.md rename to docs/SiYuan/README_cn.md index b3c7247..c1123bc 100644 --- a/docs/Siyuan/README_cn.md +++ b/docs/SiYuan/README_cn.md @@ -1,6 +1,6 @@ # README_cn -​![image](assets/image-20250122162731-7wkftbw.png)​ +​![image](https://b3log.org/images/brand/siyuan-128.png)​ --- diff --git a/docs/Siyuan/assets/image-20250122162241-32a4oma.png b/docs/SiYuan/assets/image-20250122162241-32a4oma.png similarity index 100% rename from docs/Siyuan/assets/image-20250122162241-32a4oma.png rename to docs/SiYuan/assets/image-20250122162241-32a4oma.png diff --git a/docs/Siyuan/assets/image-20250122162425-wlsgw0u.png b/docs/SiYuan/assets/image-20250122162425-wlsgw0u.png similarity index 100% rename from docs/Siyuan/assets/image-20250122162425-wlsgw0u.png rename to docs/SiYuan/assets/image-20250122162425-wlsgw0u.png diff --git a/docs/Siyuan/assets/image-20250122163007-hkuruoe.png b/docs/SiYuan/assets/image-20250122163007-hkuruoe.png similarity index 100% rename from docs/Siyuan/assets/image-20250122163007-hkuruoe.png rename to docs/SiYuan/assets/image-20250122163007-hkuruoe.png diff --git a/docs/Siyuan/assets/image-20250122162731-7wkftbw.png b/docs/Siyuan/assets/image-20250122162731-7wkftbw.png deleted file mode 100644 index 2bb3189..0000000 Binary files a/docs/Siyuan/assets/image-20250122162731-7wkftbw.png and /dev/null differ diff --git a/docs/TigerGPT/assets/logo.png b/docs/TigerGPT/assets/logo.png new file mode 100644 index 0000000..5f5e137 Binary files /dev/null and b/docs/TigerGPT/assets/logo.png differ diff --git a/docs/Typral/README.md b/docs/Typral/README.md new file mode 100644 index 0000000..30c1f83 --- /dev/null 
+++ b/docs/Typral/README.md @@ -0,0 +1,11 @@ + + +# [Typral](https://www.typral.com) + +Fast AI writing assistant - Let AI help you quickly improve articles, papers, text... + +## UI + + +## Integrate with Deepseek API + \ No newline at end of file diff --git a/docs/Typral/README_cn.md b/docs/Typral/README_cn.md new file mode 100644 index 0000000..88d3162 --- /dev/null +++ b/docs/Typral/README_cn.md @@ -0,0 +1,11 @@ + + +# [Typral](https://www.typral.com) + +超快的AI写作助手 - 让AI帮你快速优化日报,文章,文本等等... + +## UI + + +## 配置 Deepseek API + \ No newline at end of file diff --git a/docs/Typral/assets/screenshot1.png b/docs/Typral/assets/screenshot1.png new file mode 100644 index 0000000..a2ad8d4 Binary files /dev/null and b/docs/Typral/assets/screenshot1.png differ diff --git a/docs/Typral/assets/screenshot2.png b/docs/Typral/assets/screenshot2.png new file mode 100644 index 0000000..763c981 Binary files /dev/null and b/docs/Typral/assets/screenshot2.png differ diff --git a/docs/agentUniverse/README.md b/docs/agentUniverse/README.md new file mode 100644 index 0000000..a267b98 --- /dev/null +++ b/docs/agentUniverse/README.md @@ -0,0 +1,30 @@ +___ + + +___ + +# [agentUniverse](https://github.com/antgroup/agentUniverse) + +agentUniverse is a multi-agent collaboration framework designed for complex business scenarios. It offers rapid and user-friendly development capabilities for LLM agent applications, with a focus on mechanisms such as agent collaborative scheduling, autonomous decision-making, and dynamic feedback. The framework originates from Ant Group's real-world business practices in the financial industry. In June 2024, agentUniverse achieved full integration support for the DeepSeek series of models. 
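The DeepSeek integration mentioned above rests on DeepSeek's OpenAI-compatible HTTP API: a base URL such as `https://api.deepseek.com` plus an OpenAI-style JSON body is all a client needs. The sketch below is illustrative only and is not part of agentUniverse; `build_chat_request` is a hypothetical helper, and no network call is made.

```python
# Illustrative sketch: build the OpenAI-style chat completions request that an
# integration like agentUniverse would send to DeepSeek. "deepseek-chat" and the
# /chat/completions path follow DeepSeek's public API docs; nothing is sent here.
import json

DEEPSEEK_API_BASE = "https://api.deepseek.com"  # value of DEEPSEEK_API_BASE

def build_chat_request(prompt: str, model: str = "deepseek-chat"):
    """Return (url, json_body) for an OpenAI-compatible chat completion call."""
    url = f"{DEEPSEEK_API_BASE}/chat/completions"
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    })
    return url, body

url, body = build_chat_request("Hello, DeepSeek")
print(url)  # https://api.deepseek.com/chat/completions
```

The actual request would additionally carry an `Authorization: Bearer ${DEEPSEEK_API_KEY}` header, which is why the configuration below only needs the key and, optionally, the base URL.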
+ +## Concepts + + + +## Integrate with Deepseek API + +### Configure via Python code +```python +import os +os.environ['DEEPSEEK_API_KEY'] = 'sk-***' +os.environ['DEEPSEEK_API_BASE'] = 'https://xxxxxx' +``` +### Configure via the configuration file +In the custom_key.toml file under the config directory of the project, add the configuration: +```toml +DEEPSEEK_API_KEY="sk-******" +DEEPSEEK_API_BASE="https://xxxxxx" +``` + +For more details, please refer to the [Documentation: DeepSeek Integration](https://github.com/antgroup/agentUniverse/blob/master/docs/guidebook/en/In-Depth_Guides/Components/LLMs/DeepSeek_LLM_Use.md) + diff --git a/docs/agentUniverse/README_cn.md b/docs/agentUniverse/README_cn.md new file mode 100644 index 0000000..956188e --- /dev/null +++ b/docs/agentUniverse/README_cn.md @@ -0,0 +1,32 @@ +___ + + +___ + +# [agentUniverse](https://github.com/antgroup/agentUniverse) + +agentUniverse 是一个面向复杂业务场景设计的多智能体协作框架。其提供了快速易用的大模型智能体应用搭建能力,并着重于提供智能体协同调度、自主决策与动态反馈等机制,其源自蚂蚁集团在金融领域的真实业务实践沉淀。agentUniverse于2024年6月全面接入支持deepseek系列模型。 + +## 设计理念 + + + +## 在项目中使用 deepseek API + +### 通过python代码配置 +必须配置:DEEPSEEK_API_KEY +可选配置:DEEPSEEK_API_BASE +```python +import os +os.environ['DEEPSEEK_API_KEY'] = 'sk-***' +os.environ['DEEPSEEK_API_BASE'] = 'https://xxxxxx' +``` +### 通过配置文件配置 +在项目的config目录下的custom_key.toml当中,添加配置: +```toml +DEEPSEEK_API_KEY="sk-******" +DEEPSEEK_API_BASE="https://xxxxxx" +``` + +更多使用详情可参阅[官方文档:DeepSeek接入使用](https://github.com/antgroup/agentUniverse/blob/master/docs/guidebook/zh/In-Depth_Guides/%E7%BB%84%E4%BB%B6%E5%88%97%E8%A1%A8/%E6%A8%A1%E5%9E%8B%E5%88%97%E8%A1%A8/DeepSeek%E4%BD%BF%E7%94%A8.md) + diff --git a/docs/agentUniverse/README_ja.md b/docs/agentUniverse/README_ja.md new file mode 100644 index 0000000..336e4f6 --- /dev/null +++ b/docs/agentUniverse/README_ja.md @@ -0,0 +1,30 @@ +___ + + +___ + +# [agentUniverse](https://github.com/antgroup/agentUniverse) + 
+agentUniverseは、複雑なビジネスシーン向けに設計されたマルチエージェント協調フレームワークです。迅速で使いやすい大規模モデルのインテリジェントエージェントアプリケーション構築能力を提供し、特にエージェント間の協調スケジューリング、自律的な意思決定、動的なフィードバックなどのメカニズムに重点を置いています。これは、Ant Groupの金融業界における実践的なビジネス経験に基づいて開発されました。agentUniverseは、2024年6月にDeepSeekシリーズモデルのサポートを全面的に統合しました。 + +## 概念 + + + +## Deepseek APIと統合する + +### Pythonコードを使って設定する +```python +import os +os.environ['DEEPSEEK_API_KEY'] = 'sk-***' +os.environ['DEEPSEEK_API_BASE'] = 'https://xxxxxx' +``` +### 設定ファイルを使って設定する +プロジェクトのconfigディレクトリ内のcustom_key.tomlファイルに、設定を追加します: +```toml +DEEPSEEK_API_KEY="sk-******" +DEEPSEEK_API_BASE="https://xxxxxx" +``` + +詳細についてはドキュメントを参照してください - [Documentation: DeepSeek Integration](https://github.com/antgroup/agentUniverse/blob/master/docs/guidebook/en/In-Depth_Guides/Components/LLMs/DeepSeek_LLM_Use.md). + diff --git a/docs/agentUniverse/assets/agentUniverse_logo.jpg b/docs/agentUniverse/assets/agentUniverse_logo.jpg new file mode 100644 index 0000000..c1fd702 Binary files /dev/null and b/docs/agentUniverse/assets/agentUniverse_logo.jpg differ diff --git a/docs/agentUniverse/assets/agentUniverse_logo_s.png b/docs/agentUniverse/assets/agentUniverse_logo_s.png new file mode 100644 index 0000000..f6ee4ad Binary files /dev/null and b/docs/agentUniverse/assets/agentUniverse_logo_s.png differ diff --git a/docs/anything-copilot/README.md b/docs/anything-copilot/README.md new file mode 100644 index 0000000..70a46a6 --- /dev/null +++ b/docs/anything-copilot/README.md @@ -0,0 +1,10 @@ +# [**Anything Copilot**](https://github.com/baotlake/anything-copilot) + +> ![](./assets/logo_16x16.svg) Anything Copilot is a browser extension that enables seamless access to mainstream AI tools directly from your sidebar. For instance, you can open AI chat interfaces like DeepSeek within the sidebar, effectively transforming it into your AI Copilot that remains accessible while you browse. 
+ +**Install in Chrome Web Store**: [Anything Copilot - A more powerful sidebar, split-screen, and AI assistant](https://chromewebstore.google.com/detail/anything-copilot-a-more-p/lilckelmopbcffmglfmfhelaajhjpcff) + +**Install in Edge Add-ons**: [Anything Copilot - A more powerful sidebar, split-screen, and AI assistant](https://microsoftedge.microsoft.com/addons/detail/anything-copilot-a-more/lbeehbkcmjaopnlccpjcdgamcabhnanl) + + +![](./assets/Screenshot_DeepSeek.webp) diff --git a/docs/anything-copilot/README_cn.md b/docs/anything-copilot/README_cn.md new file mode 100644 index 0000000..c2d241a --- /dev/null +++ b/docs/anything-copilot/README_cn.md @@ -0,0 +1,10 @@ +# [**Anything Copilot**](https://github.com/baotlake/anything-copilot) + +> ![](./assets/logo_16x16.svg) Anything Copilot 是一款可以让你在侧边栏无缝使用任意主流AI工具的浏览器插件。例如,你可以在侧边栏中打开 DeepSeek 等 AI 聊天页面,把 DeepSeek 变成你的侧边栏中的 AI Copilot。 + +**在 Chrome Web Store 安装**: [Anything Copilot - 更强大的侧边栏,分屏,AI 助手](https://chromewebstore.google.com/detail/anything-copilot-%E6%9B%B4%E5%BC%BA%E5%A4%A7%E7%9A%84%E4%BE%A7%E8%BE%B9%E6%A0%8F%EF%BC%8C/lilckelmopbcffmglfmfhelaajhjpcff?hl=zh) + +**在 Edge Add-ons 安装**: [Anything Copilot - 更强大的侧边栏,分屏,AI 助手](https://microsoftedge.microsoft.com/addons/detail/anything-copilot-%E6%9B%B4%E5%BC%BA%E5%A4%A7%E7%9A%84%E4%BE%A7%E8%BE%B9/lbeehbkcmjaopnlccpjcdgamcabhnanl?hl=zh) + + +![](./assets/Screenshot_DeepSeek.webp) diff --git a/docs/anything-copilot/assets/Screenshot_DeepSeek.webp b/docs/anything-copilot/assets/Screenshot_DeepSeek.webp new file mode 100644 index 0000000..54e2751 Binary files /dev/null and b/docs/anything-copilot/assets/Screenshot_DeepSeek.webp differ diff --git a/docs/anything-copilot/assets/logo.svg b/docs/anything-copilot/assets/logo.svg new file mode 100644 index 0000000..2591019 --- /dev/null +++ b/docs/anything-copilot/assets/logo.svg @@ -0,0 +1,5 @@ + + + + + diff --git a/docs/anything-copilot/assets/logo_16x16.svg b/docs/anything-copilot/assets/logo_16x16.svg new file mode 
100644 index 0000000..3e4693f --- /dev/null +++ b/docs/anything-copilot/assets/logo_16x16.svg @@ -0,0 +1,12 @@ + + + + + + + + + + + + diff --git a/docs/autoflow/README.md b/docs/autoflow/README.md new file mode 100644 index 0000000..e024893 --- /dev/null +++ b/docs/autoflow/README.md @@ -0,0 +1,23 @@ +# Autoflow + +pingcap%2Fautoflow | Trendshift + +[AutoFlow](https://github.com/pingcap/autoflow) is an open-source knowledge base tool based on GraphRAG (Graph-based Retrieval-Augmented Generation), built on [TiDB](https://www.pingcap.com/ai?utm_source=tidb.ai&utm_medium=community) Vector, LlamaIndex, and DSPy. It provides a Perplexity-like search interface and allows easy integration of AutoFlow's conversational search window into your website by embedding a simple JavaScript snippet. + +## UI + +1. **Perplexity-style Conversational Search page**: Our platform features an advanced built-in website crawler, designed to elevate your browsing experience. This crawler effortlessly navigates official and documentation sites, ensuring comprehensive coverage and streamlined search processes through sitemap URL scraping. + + ![Image](https://github.com/user-attachments/assets/50a4e5ce-8b93-446a-8ce7-11ed7844bd1e) + +2. **Embeddable JavaScript Snippet**: Integrate our conversational search window effortlessly into your website by copying and embedding a simple JavaScript code snippet. This widget, typically placed at the bottom right corner of your site, facilitates instant responses to product-related queries. + + ![Image](https://github.com/user-attachments/assets/f0dc82db-c14d-4863-a242-c7da3a719568) + +## Integrate with Deepseek API + +- Click the tab `Models` then `LLMs` to enter the LLM model management page. +- Click the `Create` button to create a new LLM model. +- Input data like below, then click the `Create LLM` button. 
+ +![Image](https://github.com/user-attachments/assets/875cac18-707b-465f-ac62-89ddb416f94d) diff --git a/docs/autoflow/README_cn.md b/docs/autoflow/README_cn.md new file mode 100644 index 0000000..1de1269 --- /dev/null +++ b/docs/autoflow/README_cn.md @@ -0,0 +1,23 @@ +# Autoflow + +pingcap%2Fautoflow | Trendshift + +[AutoFlow](https://github.com/pingcap/autoflow) 是一个基于 GraphRAG(基于图的检索增强生成)的开源知识库工具,构建于 [TiDB](https://www.pingcap.com/ai?utm_source=tidb.ai&utm_medium=community) Vector、LlamaIndex 和 DSPy 之上。它提供类似 Perplexity 的搜索界面,并允许通过嵌入简单的 JavaScript 代码片段,将 AutoFlow 的对话式搜索窗口轻松集成到您的网站中。 + +## UI 界面 + +1. **Perplexity 风格的对话式搜索页面**:我们的平台配备了高级内置网站爬虫,旨在提升您的浏览体验。该爬虫能够轻松抓取官方网站和文档站点,通过 sitemap 抓取,实现全面覆盖和高效搜索。 + + ![Image](https://github.com/user-attachments/assets/50a4e5ce-8b93-446a-8ce7-11ed7844bd1e) + +2. **可嵌入的 JavaScript 代码片段**:通过复制并嵌入一段简单的 JavaScript 代码,即可轻松将我们的对话式搜索窗口集成到您的网站中。此小部件通常放置在网站右下角,可即时回答与产品相关的查询。 + + ![Image](https://github.com/user-attachments/assets/f0dc82db-c14d-4863-a242-c7da3a719568) + +## 集成 Deepseek API + +- 点击 `Models` 选项卡,然后进入 `LLMs` 以进入 LLM 模型管理页面。 +- 点击 `Create` 按钮创建一个新的 LLM 模型。 +- 按照下方示例输入数据,然后点击 `Create LLM` 按钮。 + +![Image](https://github.com/user-attachments/assets/875cac18-707b-465f-ac62-89ddb416f94d) diff --git a/docs/avante.nvim/README.md b/docs/avante.nvim/README.md index 1b350d5..71860ee 100644 --- a/docs/avante.nvim/README.md +++ b/docs/avante.nvim/README.md @@ -25,16 +25,14 @@ return { lazy = false, version = false, -- set this if you want to always pull the latest change opts = { - provider = "openai", - auto_suggestions_provider = "openai", -- Since auto-suggestions are a high-frequency operation and therefore expensive, it is recommended to specify an inexpensive provider or even a free provider: copilot - openai = { - endpoint = "https://api.deepseek.com/v1", - model = "deepseek-chat", - timeout = 30000, -- Timeout in milliseconds - temperature = 0, - max_tokens = 4096, - -- optional - api_key_name = "OPENAI_API_KEY", -- default 
OPENAI_API_KEY if not set + provider = "deepseek", + vendors = { + deepseek = { + __inherited_from = "openai", + api_key_name = "DEEPSEEK_API_KEY", + endpoint = "https://api.deepseek.com", + model = "deepseek-coder", + }, }, }, -- if you want to build from source then do `make BUILD_FROM_SOURCE=true` diff --git a/docs/avante.nvim/README_cn.md b/docs/avante.nvim/README_cn.md index 3ada845..40f19bf 100644 --- a/docs/avante.nvim/README_cn.md +++ b/docs/avante.nvim/README_cn.md @@ -25,16 +25,14 @@ return { lazy = false, version = false, -- set this if you want to always pull the latest change opts = { - provider = "openai", - auto_suggestions_provider = "openai", -- Since auto-suggestions are a high-frequency operation and therefore expensive, it is recommended to specify an inexpensive provider or even a free provider: copilot - openai = { - endpoint = "https://api.deepseek.com/v1", - model = "deepseek-chat", - timeout = 30000, -- Timeout in milliseconds - temperature = 0, - max_tokens = 4096, - -- optional - api_key_name = "OPENAI_API_KEY", -- default OPENAI_API_KEY if not set + provider = "deepseek", + vendors = { + deepseek = { + __inherited_from = "openai", + api_key_name = "DEEPSEEK_API_KEY", + endpoint = "https://api.deepseek.com", + model = "deepseek-coder", + }, }, }, -- if you want to build from source then do `make BUILD_FROM_SOURCE=true` diff --git a/docs/codecompanion.nvim/README.md b/docs/codecompanion.nvim/README.md index 4e9eb28..9ab08cb 100644 --- a/docs/codecompanion.nvim/README.md +++ b/docs/codecompanion.nvim/README.md @@ -34,9 +34,8 @@ return { require("codecompanion").setup({ adapters = { deepseek = function() - return require("codecompanion.adapters").extend("openai_compatible", { + return require("codecompanion.adapters").extend("deepseek", { env = { - url = "https://api.deepseek.com", api_key = "YOUR_API_KEY", }, }) @@ -71,9 +70,8 @@ later(function() require("codecompanion").setup({ adapters = { deepseek = function() - return 
require("codecompanion.adapters").extend("openai_compatible", { + return require("codecompanion.adapters").extend("deepseek", { env = { - url = "https://api.deepseek.com", api_key = "YOUR_API_KEY", }, }) diff --git a/docs/codecompanion.nvim/README_cn.md b/docs/codecompanion.nvim/README_cn.md index 01173ec..df69ebb 100644 --- a/docs/codecompanion.nvim/README_cn.md +++ b/docs/codecompanion.nvim/README_cn.md @@ -34,9 +34,8 @@ return { require("codecompanion").setup({ adapters = { deepseek = function() - return require("codecompanion.adapters").extend("openai_compatible", { + return require("codecompanion.adapters").extend("deepseek", { env = { - url = "https://api.deepseek.com", api_key = "YOUR_API_KEY", }, }) @@ -71,9 +70,8 @@ later(function() require("codecompanion").setup({ adapters = { deepseek = function() - return require("codecompanion.adapters").extend("openai_compatible", { + return require("codecompanion.adapters").extend("deepseek", { env = { - url = "https://api.deepseek.com", api_key = "YOUR_API_KEY", }, }) diff --git a/docs/codegate/README.md b/docs/codegate/README.md new file mode 100644 index 0000000..9653112 --- /dev/null +++ b/docs/codegate/README.md @@ -0,0 +1,158 @@ +# CodeGate: secure AI code generation + +CodeGate is a **local gateway** that makes AI agents and coding assistants safer. It +ensures AI-generated recommendations adhere to best practices while safeguarding +your code's integrity and protecting your privacy. With CodeGate, you can +confidently leverage AI in your development workflow without sacrificing +security or productivity. + + + + CodeGate dashboard + + +--- +## ✨ Why choose CodeGate? + +AI coding assistants are powerful, but they can inadvertently introduce risks. 
+CodeGate protects your development process by: + +- 🔒 Preventing accidental exposure of secrets and sensitive data +- 🛡️ Ensuring AI suggestions follow secure coding practices +- ⚠️ Blocking recommendations of known malicious or deprecated libraries +- 🔍 Providing real-time security analysis of AI suggestions + +--- +## 🚀 Quickstart with 🐋 Deepseek! + +### Prerequisites + +CodeGate is distributed as a Docker container. You need a container runtime like +Docker Desktop or Docker Engine. Podman and Podman Desktop are also supported. +CodeGate works on Windows, macOS, and Linux operating systems with x86_64 and +arm64 (ARM and Apple Silicon) CPU architectures. + +These instructions assume the `docker` CLI is available. If you use Podman, +replace `docker` with `podman` in all commands. + +### Installation + +To start CodeGate, run this simple command (making sure to pass in the +deepseek.com URL as the `CODEGATE_PROVIDER_OPENAI_URL` environment variable): + +```bash +docker run --name codegate -d -p 8989:8989 -p 9090:9090 -p 8990:8990 \ + -e CODEGATE_PROVIDER_OPENAI_URL=https://api.deepseek.com \ + --mount type=volume,src=codegate_volume,dst=/app/codegate_volume \ + --restart unless-stopped ghcr.io/stacklok/codegate:latest +``` + +That’s it! CodeGate is now running locally. + +### Using CodeGate and 🐋 Deepseek within Continue + +To use Continue with CodeGate, open the Continue settings and add +the following configuration: + +```json +{ + "title": "Deepseek-r1", + "provider": "openai", + "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B", + "apiKey": "YOUR_DEEPSEEK_API_KEY", + "apiBase": "http://localhost:8989/openai" +} +``` + +Just use Continue as normal, and you no longer have to worry about security 
+ +![continue-image](https://github.com/deepseek/awesome-deepseek-integration/blob/codegate/docs/codegate/assets/continue-screen.png) + + +### Using CodeGate and 🐋 Deepseek with Cline + +To use Cline with CodeGate, open the Cline settings and add +the following configuration: + +![cline-settings](https://github.com/deepseek/awesome-deepseek-integration/blob/codegate/docs/codegate/assets/cline-settings.png) + +Just use Cline as normal, and you know longer have to worry about security +or privacy concerns! + +![cline-image](https://github.com/deepseek/awesome-deepseek-integration/blob/codegate/docs/codegate/assets/cline-screen.png) + +--- +## 🖥️ Dashboard + +CodeGate includes a web dashboard that provides: + +- A view of **security risks** detected by CodeGate +- A **history of interactions** between your AI coding assistant and your LLM + + + + CodeGate dashboard + + +### Accessing the dashboard + +Open [http://localhost:9090](http://localhost:9090) in your web browser to +access the dashboard. + +To learn more, visit the +[CodeGate Dashboard documentation](https://docs.codegate.ai/how-to/dashboard). + +--- +## 🔐 Features + +### Secrets encryption + +CodeGate helps you protect sensitive information from being accidentally exposed +to AI models and third-party AI provider systems by redacting detected secrets +from your prompts using encryption. +[Learn more](https://docs.codegate.ai/features/secrets-encryption) + +### Dependency risk awareness + +LLMs’ knowledge cutoff date is often months or even years in the past. They +might suggest outdated, vulnerable, or non-existent packages (hallucinations), +exposing you and your users to security risks. + +CodeGate scans direct, transitive, and development dependencies in your package +definition files, installation scripts, and source code imports that you supply +as context to an LLM. 
+[Learn more](https://docs.codegate.ai/features/dependency-risk) + +### Security reviews + +CodeGate performs security-centric code reviews, identifying insecure patterns +or potential vulnerabilities to help you adopt more secure coding practices. +[Learn more](https://docs.codegate.ai/features/security-reviews) + +--- +## 🛡️ Privacy first + +Unlike other tools, with CodeGate **your code never leaves your machine**. +CodeGate is built with privacy at its core: + +- 🏠 **Everything stays local** +- 🚫 **No external data collection** +- 🔐 **No calling home or telemetry** +- 💪 **Complete control over your data** + +--- +## 🛠️ Development + +Are you a developer looking to contribute? Dive into our technical resources: + +- [Development guide](https://github.com/stacklok/codegate/blob/main/docs/development.md) +- [CLI commands and flags](https://github.com/stacklok/codegate/blob/main/docs/cli.md) +- [Configuration system](https://github.com/stacklok/codegate/blob/main/docs/configuration.md) +- [Logging system](https://github.com/stacklok/codegate/blob/main/docs/logging.md) + +--- +## 📜 License + +CodeGate is licensed under the terms specified in the +[LICENSE file](https://github.com/stacklok/codegate/blob/main/LICENSE). diff --git a/docs/codegate/README_cn.md b/docs/codegate/README_cn.md new file mode 100644 index 0000000..10de199 --- /dev/null +++ b/docs/codegate/README_cn.md @@ -0,0 +1,132 @@ +# CodeGate:安全的 AI 代码生成 + +CodeGate 是一个**本地代理**,可以让 AI 代理和编码助手更加安全。它确保 AI 生成的建议遵循最佳实践,同时保护您的代码完整性和隐私。使用 CodeGate,您可以在开发工作流程中自信地利用 AI,而不会牺牲安全性或生产力。 + + + + CodeGate dashboard + + +--- +## ✨ 为什么选择 CodeGate? + +AI 编码助手功能强大,但可能会无意中带来风险。CodeGate 通过以下方式保护您的开发过程: + +- 🔒 防止意外泄露机密和敏感数据 +- 🛡️ 确保 AI 建议遵循安全编码实践 +- ⚠️ 阻止推荐已知的恶意或已弃用的库 +- 🔍 提供 AI 建议的实时安全分析 + +--- +## 🚀 使用 🐋 Deepseek 快速开始! 
+
+### 前提条件
+
+CodeGate 以 Docker 容器的形式分发。您需要一个容器运行时,如 Docker Desktop 或 Docker Engine。同时也支持 Podman 和 Podman Desktop。CodeGate 可在 Windows、macOS 和 Linux 操作系统上运行,支持 x86_64 和 arm64(ARM 和 Apple Silicon)CPU 架构。
+
+以下说明基于 `docker` CLI 可用的前提。如果您使用 Podman,请在所有命令中将 `docker` 替换为 `podman`。
+
+### 安装
+
+要启动 CodeGate,运行这个简单的命令(确保将 deepseek.com URL 作为 `CODEGATE_PROVIDER_OPENAI_URL` 环境变量传入):
+
+```bash
+docker run --name codegate -d -p 8989:8989 -p 9090:9090 -p 8990:8990 \
+  -e CODEGATE_PROVIDER_OPENAI_URL=https://api.deepseek.com \
+  --mount type=volume,src=codegate_volume,dst=/app/codegate_volume \
+  --restart unless-stopped ghcr.io/stacklok/codegate:latest
+```
+
+就是这样!CodeGate 现在在本地运行了。
+
+### 在 Continue 中使用 CodeGate 和 🐋 Deepseek
+
+要在 Continue 中使用 CodeGate,打开 Continue 设置并添加以下配置:
+
+```json
+{
+  "title": "Deepseek-r1",
+  "provider": "openai",
+  "model": "deepseek-ai/DeepSeek-R1-Distill-Qwen-32B",
+  "apiKey": "YOUR_DEEPSEEK_API_KEY",
+  "apiBase": "http://localhost:8989/openai"
+}
+```
+
+像往常一样使用 Continue,您不再需要担心安全或隐私问题!
+
+![continue-image](https://github.com/deepseek/awesome-deepseek-integration/blob/codegate/docs/codegate/assets/continue-screen.png)
+
+### 在 Cline 中使用 CodeGate 和 🐋 Deepseek
+
+要在 Cline 中使用 CodeGate,打开 Cline 设置并添加以下配置:
+
+![cline-settings](https://github.com/deepseek/awesome-deepseek-integration/blob/codegate/docs/codegate/assets/cline-settings.png)
+
+像往常一样使用 Cline,您不再需要担心安全或隐私问题!
+ +![cline-image](https://github.com/deepseek/awesome-deepseek-integration/blob/codegate/docs/codegate/assets/cline-screen.png) + +--- +## 🖥️ 仪表板 + +CodeGate 包含一个 Web 仪表板,提供: + +- CodeGate 检测到的**安全风险**视图 +- AI 编码助手与 LLM 之间的**交互历史** + + + + CodeGate dashboard + + +### 访问仪表板 + +在您的网络浏览器中打开 [http://localhost:9090](http://localhost:9090) 以访问仪表板。 + +要了解更多信息,请访问 [CodeGate 仪表板文档](https://docs.codegate.ai/how-to/dashboard)。 + +--- +## 🔐 功能 + +### 机密加密 + +CodeGate 通过使用加密对检测到的机密进行编辑,帮助您防止敏感信息意外暴露给 AI 模型和第三方 AI 提供商系统。 +[了解更多](https://docs.codegate.ai/features/secrets-encryption) + +### 依赖风险意识 + +LLM 的知识截止日期通常是几个月甚至几年前。它们可能会建议过时的、易受攻击的或不存在的包(幻觉),使您和您的用户面临安全风险。 + +CodeGate 扫描您作为上下文提供给 LLM 的包定义文件、安装脚本和源代码导入中的直接依赖、传递依赖和开发依赖。 +[了解更多](https://docs.codegate.ai/features/dependency-risk) + +### 安全审查 + +CodeGate 执行以安全为中心的代码审查,识别不安全的模式或潜在的漏洞,帮助您采用更安全的编码实践。 +[了解更多](https://docs.codegate.ai/features/security-reviews) + +--- +## 🛡️ 隐私优先 + +与其他工具不同,使用 CodeGate **您的代码永远不会离开您的机器**。CodeGate 以隐私为核心构建: + +- 🏠 **所有数据均本地存储** +- 🚫 **没有外部数据收集** +- 🔐 **没有回传或遥测** +- 💪 **完全控制您的数据** + +--- +## 🛠️ 开发 + +您是想要贡献的开发者吗?深入了解我们的技术资源: + +- [开发指南](https://github.com/stacklok/codegate/blob/main/docs/development.md) +- [CLI 命令和标志](https://github.com/stacklok/codegate/blob/main/docs/cli.md) +- [配置系统](https://github.com/stacklok/codegate/blob/main/docs/configuration.md) +- [日志系统](https://github.com/stacklok/codegate/blob/main/docs/logging.md) + +--- +## 📜 许可证 + +CodeGate 根据 [LICENSE 文件](https://github.com/stacklok/codegate/blob/main/LICENSE) 中指定的条款获得许可。 \ No newline at end of file diff --git a/docs/codegate/assets/cline-screen.png b/docs/codegate/assets/cline-screen.png new file mode 100644 index 0000000..f59dd29 Binary files /dev/null and b/docs/codegate/assets/cline-screen.png differ diff --git a/docs/codegate/assets/cline-settings.png b/docs/codegate/assets/cline-settings.png new file mode 100644 index 0000000..3b60c5e Binary files /dev/null and b/docs/codegate/assets/cline-settings.png differ diff --git 
a/docs/codegate/assets/codegate.png b/docs/codegate/assets/codegate.png new file mode 100644 index 0000000..d625d61 Binary files /dev/null and b/docs/codegate/assets/codegate.png differ diff --git a/docs/codegate/assets/continue-screen.png b/docs/codegate/assets/continue-screen.png new file mode 100644 index 0000000..1b93a5d Binary files /dev/null and b/docs/codegate/assets/continue-screen.png differ diff --git a/docs/continue/README.md b/docs/continue/README.md index c32595d..81cdfde 100644 --- a/docs/continue/README.md +++ b/docs/continue/README.md @@ -1,4 +1,4 @@ - + # [Continue](https://continue.dev/) @@ -27,7 +27,7 @@ Continue will generate, refactor, and explain entire sections of code with LLMs. "model": "deepseek-chat", "contextLength": 128000, "apiKey": "REDACTED", - "provider": "deepseek", + "provider": "openai", "apiBase": "https://api.deepseek.com/beta" } ], @@ -35,7 +35,7 @@ Continue will generate, refactor, and explain entire sections of code with LLMs. "title": "DeepSeek", "model": "deepseek-chat", "apiKey": "REDACTED", - "provider": "deepseek", + "provider": "openai", "apiBase": "https://api.deepseek.com/beta" }, ... diff --git a/docs/continue/README_cn.md b/docs/continue/README_cn.md index 195b790..7f0ec17 100644 --- a/docs/continue/README_cn.md +++ b/docs/continue/README_cn.md @@ -1,4 +1,4 @@ - + # [Continue](https://continue.dev/) diff --git a/docs/curator/README.md b/docs/curator/README.md new file mode 100644 index 0000000..c307d9d --- /dev/null +++ b/docs/curator/README.md @@ -0,0 +1,30 @@ + +![image](https://raw.githubusercontent.com/bespokelabsai/curator/main/docs/Bespoke-Labs-Logomark-Red-crop.png) + + +# [Curator](https://github.com/bespokelabsai/curator) + + +Curator is an open-source tool to curate large scale datasets for post-training LLMs. 
+ +Curator was used to curate [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k), a reasoning dataset to train a fully open reasoning model [Bespoke-Stratos](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation). + + +### Curator supports: + +- Calling Deepseek API for scalable synthetic data curation +- Easy structured data extraction +- Caching and automatic recovery +- Dataset visualization +- Saving $$$ using batch mode + +### Call Deepseek API with Curator easily: + +![image](https://pbs.twimg.com/media/GiLHb-xasAAbs4m?format=jpg&name=4096x4096) + +# Get Started here + +- [Colab Example](https://colab.research.google.com/drive/1Z78ciwHIl_ytACzcrslNrZP2iwK05eIF?usp=sharing) +- [Github Repo](https://github.com/bespokelabsai/curator) +- [Documentation](https://docs.bespokelabs.ai/) +- [Discord](https://discord.com/invite/KqpXvpzVBS) diff --git a/docs/curator/README_cn.md b/docs/curator/README_cn.md new file mode 100644 index 0000000..2c7dbe2 --- /dev/null +++ b/docs/curator/README_cn.md @@ -0,0 +1,29 @@ +![image](https://raw.githubusercontent.com/bespokelabsai/curator/main/docs/Bespoke-Labs-Logomark-Red-crop.png) + + +# [Curator](https://github.com/bespokelabsai/curator) + + +Curator 是一个用于后训练大型语言模型 (LLMs) 和结构化数据提取的制作与管理可扩展的数据集的开源工具。 + +Curator 被用来制作 [Bespoke-Stratos-17k](https://huggingface.co/datasets/bespokelabs/Bespoke-Stratos-17k),这是一个用于训练完全开源的推理模型 [Bespoke-Stratos](https://www.bespokelabs.ai/blog/bespoke-stratos-the-unreasonable-effectiveness-of-reasoning-distillation) 的推理数据集。 + + +### Curator 支持: + +- 调用 Deepseek API 进行可扩展的合成数据管理 +- 简便的结构化数据提取 +- 缓存和自动恢复 +- 数据集可视化 +- 使用批处理模式节省费用 + +### 轻松使用 Curator 调用 Deepseek API: + +![image](https://pbs.twimg.com/media/GiLHb-xasAAbs4m?format=jpg&name=4096x4096) + +# 从这里开始 + +- [Colab 示例](https://colab.research.google.com/drive/1Z78ciwHIl_ytACzcrslNrZP2iwK05eIF?usp=sharing) +- [Github 仓库](https://github.com/bespokelabsai/curator) +- 
[文档](https://docs.bespokelabs.ai/)
+- [Discord](https://discord.com/invite/KqpXvpzVBS)
\ No newline at end of file
diff --git a/docs/fhe.mind-network/README.md b/docs/fhe.mind-network/README.md
new file mode 100644
index 0000000..4250bf9
--- /dev/null
+++ b/docs/fhe.mind-network/README.md
@@ -0,0 +1,210 @@
+# Mind Network FHE Rust SDK
+`mind_sdk_deepseek` is a **native Rust SDK** by [Mind Network](https://www.mindnetwork.xyz/) that enables FHE (fully homomorphic encryption) support for DeepSeek. DeepSeek acts as an AI agent that thinks and predicts; the prediction is then encrypted with FHE and submitted to Mind Network for model consensus.
+
+[![mind_sdk_deepseek on crates.io](https://img.shields.io/crates/v/mind_sdk_deepseek)](https://crates.io/crates/mind_sdk_deepseek)
+[![Documentation on docs.rs](https://img.shields.io/badge/docs-docs.rs-blue)](https://docs.rs/mind_sdk_deepseek)
+[![Licensed](https://img.shields.io/badge/license-MIT-blue.svg)](./LICENSE)
+[![Github](https://img.shields.io/badge/source-github.com-blue.svg)](https://github.com/mind-network/mind-sdk-deepseek-rust)
+[![Github](https://img.shields.io/badge/build-pass-green.svg)](https://github.com/mind-network/mind-sdk-deepseek-rust)
+
+
+## Usage
+```rust
+// call deepseek to predict; change the prompt as you wish
+let prompt = "Please predict BTC price in next 7 days, return must be a positive integer".to_string();
+let client = deepseek_rs::DeepSeekClient::default().unwrap();
+let request = deepseek_rs::client::chat_completions::request::RequestBody::new_messages(vec![
+        deepseek_rs::client::chat_completions::request::Message::new_user_message(prompt)
+    ]).with_model(deepseek_rs::client::chat_completions::request::Model::DeepSeekReasoner);
+let response = client.chat_completions(request).await.unwrap();
+//println!("Reasoning: {}", response.choices[0].message.reasoning_content.unwrap());
+//println!("Answer: {}", response.choices[0].message.content.unwrap());
+
+// convert deepseek prediction to int type
+let
deepseek_prediction = match response.choices[0].clone().message.content.unwrap().parse::<u8>() {
+    Ok(prediction) => prediction,
+    Err(_) => 0,
+};
+
+// fhe encrypt
+let fhe: mind_sdk_fhe::FheInt = mind_sdk_fhe::FheInt::new_from_public_key_local(&fhe_public_key_fp);
+let ciphertext = mind_sdk_fhe::fhe_client::encrypt(&fhe, "u8", deepseek_prediction.clone());
+let ciphertext_str: String = mind_sdk_fhe::io::serialize_base64(ciphertext)?;
+
+// submit ciphertext onchain
+let result: alloy::rpc::types::TransactionReceipt = self.submit_fhe_encrypted(ciphertext_str).await?;
+```
+
+## Quick Start
+
+### Source
+```bash
+git clone https://github.com/mind-network/mind-sdk-deepseek-rust.git
+cd mind-sdk-deepseek-rust
+```
+
+### Install
+```toml
+[dependencies]
+mind_sdk_deepseek = "*"
+```
+
+### Build
+```bash
+cargo build            # debug build (the default profile)
+cargo build --release  # release build
+```
+
+### Run
+```bash
+cd msn
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=info check-hot-wallet-address
+
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn.toml register 0x06eF5C5ba427434bf36469B877e4ea9044D1b735
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn_1.toml register 0x2F8aCe76a34e50943573826A326a8Eb8DC854f84
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn_2.toml register 0x3df4b66E1895E68aB000f1086e9393ca1937Cd8b
+
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn.toml deepseek-fhe-vote
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn_1.toml deepseek-fhe-vote
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn_2.toml deepseek-fhe-vote
+
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn.toml check-registration
+cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn_1.toml check-registration +cargo build && ./target/debug/mind_sdk_deepseek --log-level=debug --node-config-file=./config/config_fvn_2.toml check-registration +``` + + + +## CLI Help +``` bash +# ./bin/deepseek --help + +FHE Randen Voter Node Cli + +Usage: fvn [OPTIONS] + +Commands: + deepseek-fhe-vote let deepseek think and predict, and then encrypted by FHE and submit to Mind Network for model consensus + check-hot-wallet-address check hot wallet address, by default will use ./config/config_fvn.toml + check-gas-balance check hot wallet gas balance, need gas fee to vote + check-registration check if hot wallet has registered with a particular voter wallet + register register voter address + check-vote-rewards check voting rewards + check-vote check voting tx history on the explore + help Print this message or the help of the given subcommand(s) + +Options: + --node-config-file + fvn config file, contains all the config to run fvn [default: ./config/config_fvn.toml] + --log-level + control level of print, useful for debug, default is info [default: info] [possible values: debug, info, warn, error] + --hot-wallet-private-key + fvn wallet private key is needed if to load a different wallet from config_fvn.toml to sign the message onchain, by default load from ./config/config_fvn.toml + -h, --help + Print help + -V, --version + Print version +``` + +## CLI Example +``` bash +## command +./bin/deepseek --log-level=info deepseek-fhe-vote +{ + "app": "deepseek", + "command": "deepseek-fhe-vote", + "arg": "deekseek predicted BTC price: 95833, hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd", + "status": true, + "result": "0x42d78185e4779dd3105598ac4f2786998c5059f8381a55daec12e4ffcc952a56", + "note": "deekseek predicted BTC price: 95833, gas_sued: 304749, block_number: 26373, tx_hash: 0x42d78185e4779dd3105598ac4f2786998c5059f8381a55daec12e4ffcc952a56" +} + +## 
command +./bin/deepseek --log-level=info check-hot-wallet-address +{ + "app": "deepseek", + "command": "check-hot-wallet-address", + "arg": "", + "status": true, + "result": "0x64FF17078669A507D0c831D9E844AF1C967604Dd", + "note": "" +} + +## command +./bin/deepseek --log-level=info check-gas-balance +{ + "app": "deepseek", + "command": "check-gas-balance", + "arg": "hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd", + "status": true, + "result": "197015375000000", + "note": "" +} + +## command +./bin/deepseek --log-level=info check-registration +{ + "app": "deepseek", + "command": "check-registration", + "arg": "0x6224F72f1439E76803e063262a7e1c03e86c6Dbd", + "status": false, + "result": "", + "note": "hot wallet is not registered with any voter wallet" +} + +## command +./bin/deepseek --log-level=info check-vote-rewards +{ + "app": "deepseek", + "command": "check-vote-rewards", + "arg": "hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd", + "status": false, + "result": "0", + "note": "hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd, voter_wallet: , vote_rewards: 0" +} + +## command +./bin/deepseek --log-level=info register 0x06eF5C5ba427434bf36469B877e4ea9044D1b735 +{ + "app": "deepseek", + "command": "register", + "arg": "hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd, voter_wallet: 0x06eF5C5ba427434bf36469B877e4ea9044D1b735", + "status": true, + "result": "registration successful !", + "note": "is_registered: true, hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd, voter_wallet: 0x06eF5C5ba427434bf36469B877e4ea9044D1b735" +} + + +## command +./bin/deepseek --log-level=info check-vote-rewards +{ + "app": "deepseek", + "command": "check-vote-rewards", + "arg": "hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd", + "status": true, + "result": "206095238095238095", + "note": "hot_wallet: 0x6224F72f1439E76803e063262a7e1c03e86c6Dbd, voter_wallet: 0x06eF5C5ba427434bf36469B877e4ea9044D1b735, vote_rewards: 206095238095238095" +} + +## 
command
+./bin/deepseek --log-level=info check-vote
+{
+  "app": "deepseek",
+  "command": "check-vote",
+  "arg": "0x6224F72f1439E76803e063262a7e1c03e86c6Dbd",
+  "status": true,
+  "result": "check on the explore: testnet: https://explorer-testnet.mindnetwork.xyz/address/0x6224F72f1439E76803e063262a7e1c03e86c6Dbd, mainnet: https://explorer.mindnetwork.xyz/address/0x6224F72f1439E76803e063262a7e1c03e86c6Dbd",
+  "note": ""
+}
+
+```
+
+## **License**
+
+This project is licensed under the **MIT License**.
+
+## **Contact**
+
+For questions or support, please contact [Mind Network Official Channels](https://mindnetwork.xyz/).
+
+
diff --git a/docs/fhe.mind-network/mind-network-log.png b/docs/fhe.mind-network/mind-network-log.png
new file mode 100644
index 0000000..1947fe6
Binary files /dev/null and b/docs/fhe.mind-network/mind-network-log.png differ
diff --git a/docs/minuet-ai.nvim/README.md b/docs/minuet-ai.nvim/README.md
new file mode 100644
index 0000000..a0386bf
--- /dev/null
+++ b/docs/minuet-ai.nvim/README.md
@@ -0,0 +1,182 @@
+
+# Minuet AI
+
+Minuet AI: Dance with Intelligence in Your Code 💃.
+
+`Minuet-ai` brings the grace and harmony of a minuet to your coding process.
+Just as dancers move in harmony during a minuet, its completions are meant to
+move in harmony with your typing.
+
+# Features
+
+- AI-powered code completion with dual modes:
+  - Specialized prompts and various enhancements for chat-based LLMs on code completion tasks.
+  - Fill-in-the-middle (FIM) completion for compatible models (DeepSeek,
    Codestral, Qwen, and others).
+- Support for multiple AI providers (OpenAI, Claude, Gemini, Codestral, Ollama, and
  OpenAI-compatible services).
+- Customizable configuration options.
+- Streaming support to enable completion delivery even with slower LLMs.
+- Supports the `nvim-cmp`, `blink-cmp`, and `virtual text` frontends.
+
+# Requirements
+
+- Neovim 0.10+.
+- [plenary.nvim](https://github.com/nvim-lua/plenary.nvim)
+- optional: [nvim-cmp](https://github.com/hrsh7th/nvim-cmp)
+- optional: [blink.cmp](https://github.com/Saghen/blink.cmp)
+- An API key for at least one of the supported AI providers
+
+# Installation
+
+**Lazy.nvim**:
+
+```lua
+specs = {
+    {
+        'milanglacier/minuet-ai.nvim',
+        config = function()
+            require('minuet').setup {
+                -- Your configuration options here
+            }
+        end,
+    },
+    { 'nvim-lua/plenary.nvim' },
+    -- optional, if you are using virtual-text frontend, nvim-cmp is not
+    -- required.
+    { 'hrsh7th/nvim-cmp' },
+    -- optional, if you are using virtual-text frontend, blink is not required.
+    { 'Saghen/blink.cmp' },
+}
+```
+
+**Rocks.nvim**:
+
+`Minuet` is available on luarocks.org. Simply run `Rocks install
+minuet-ai.nvim` to install it like any other luarocks package.
+
+**Setting up with virtual text**:
+
+```lua
+require('minuet').setup {
+    virtualtext = {
+        auto_trigger_ft = {},
+        keymap = {
+            -- accept whole completion
+            accept = '<A-A>',
+            -- accept one line
+            accept_line = '<A-a>',
+            -- accept n lines (prompts for number)
+            -- e.g. "A-z 2 CR" will accept 2 lines
+            accept_n_lines = '<A-z>',
+            -- Cycle to prev completion item, or manually invoke completion
+            prev = '<A-[>',
+            -- Cycle to next completion item, or manually invoke completion
+            next = '<A-]>',
+            dismiss = '<A-e>',
+        },
+    },
+}
+```
+
+**Setting up with nvim-cmp**:
+
+
+```lua
+require('cmp').setup {
+    sources = {
+        -- Include minuet as a source to enable autocompletion
+        { name = 'minuet' },
+        -- and your other sources
+    },
+    performance = {
+        -- It is recommended to increase the timeout duration due to
+        -- the typically slower response speed of LLMs compared to
+        -- other completion sources. This is not needed when you only
+        -- need manual completion.
+        fetching_timeout = 2000,
+    },
+}
+
+
+-- If you wish to invoke completion manually,
+-- the following configuration binds the `A-y` key
+-- to trigger minuet completion.
+require('cmp').setup {
+    mapping = {
+        ["<A-y>"] = require('minuet').make_cmp_map()
+        -- and your other keymappings
+    },
+}
+```
+
+ +**Setting up with blink-cmp**: + +
+
+```lua
+require('blink-cmp').setup {
+    keymap = {
+        -- Manually invoke minuet completion.
+        ['<A-y>'] = require('minuet').make_blink_map(),
+    },
+    sources = {
+        -- Enable minuet for autocomplete
+        default = { 'lsp', 'path', 'buffer', 'snippets', 'minuet' },
+        -- For manual completion only, remove 'minuet' from default
+        providers = {
+            minuet = {
+                name = 'minuet',
+                module = 'minuet.blink',
+                score_offset = 8, -- Gives minuet higher priority among suggestions
+            },
+        },
+    },
+    -- Recommended to avoid unnecessary requests
+    completion = { trigger = { prefetch_on_insert = false } },
+}
+```
+
+ +**LLM Provider Examples**: + +**Deepseek**: + +```lua +-- you can use deepseek with both openai_fim_compatible or openai_compatible provider +require('minuet').setup { + provider = 'openai_fim_compatible', + provider_options = { + openai_fim_compatible = { + api_key = 'DEEPSEEK_API_KEY', + name = 'deepseek', + optional = { + max_tokens = 256, + top_p = 0.9, + }, + }, + }, +} + + +-- or +require('minuet').setup { + provider = 'openai_compatible', + provider_options = { + openai_compatible = { + end_point = 'https://api.deepseek.com/v1/chat/completions', + api_key = 'DEEPSEEK_API_KEY', + name = 'deepseek', + optional = { + max_tokens = 256, + top_p = 0.9, + }, + }, + }, +} +``` diff --git a/docs/minuet-ai.nvim/README_cn.md b/docs/minuet-ai.nvim/README_cn.md new file mode 100644 index 0000000..31610dd --- /dev/null +++ b/docs/minuet-ai.nvim/README_cn.md @@ -0,0 +1,172 @@ +# Minuet AI + +Minuet AI:在您的代码中翩翩起舞,挥洒智能 💃。 + +`Minuet-ai` 将小步舞曲的优雅与和谐带入您的编码流程。正如舞者在小步舞曲中舞动一样。 + +# 特性 + +- 基于 AI 的代码补全,提供双重模式: + - 针对代码补全任务,为基于聊天的 LLMs 提供专门的提示和各种增强功能。 + - 针对兼容的模型(DeepSeek、Codestral、Qwen 等)提供中间填充 (FIM) 补全。 +- 支持多种 AI 提供商(OpenAI、Claude、Gemini、Codestral、Ollama 和兼容 OpenAI 的服务)。 +- 可自定义配置选项。 +- 支持流式传输,即使使用较慢的 LLMs 也能实现补全的交付。 +- 支持 `nvim-cmp`、`blink-cmp`、`virtual text` 前端。 + +# 要求 + +- Neovim 0.10+。 +- [plenary.nvim](https://github.com/nvim-lua/plenary.nvim) +- 可选: [nvim-cmp](https://github.com/hrsh7th/nvim-cmp) +- 可选: [blink.cmp](https://github.com/Saghen/blink.cmp) +- 至少一个受支持的 AI 提供商的 API 密钥 + +# 安装 + +**Lazy.nvim:** + +```lua +specs = { + { + 'milanglacier/minuet-ai.nvim', + config = function() + require('minuet').setup { + -- 在此处配置您的选项 + } + end, + }, + { 'nvim-lua/plenary.nvim' }, + -- 可选,如果您使用 virtual-text 前端,则不需要 nvim-cmp。 + { 'hrsh7th/nvim-cmp' }, + -- 可选,如果您使用 virtual-text 前端,则不需要 blink。 + { 'Saghen/blink.cmp' }, +} +``` + +**Rocks.nvim:** + +`Minuet` 可在 luarocks.org 上获取。只需运行 `Rocks install minuet-ai.nvim` 即可像安装其他 luarocks 包一样安装它。 + +**使用 virtual text 进行设置:** + +```lua 
+require('minuet').setup {
+    virtualtext = {
+        auto_trigger_ft = {},
+        keymap = {
+            -- 接受完整补全
+            accept = '<A-A>',
+            -- 接受一行
+            accept_line = '<A-a>',
+            -- 接受 n 行(提示输入数字)
+            -- 例如,“A-z 2 CR”将接受 2 行
+            accept_n_lines = '<A-z>',
+            -- 切换到上一个补全项,或手动调用补全
+            prev = '<A-[>',
+            -- 切换到下一个补全项,或手动调用补全
+            next = '<A-]>',
+            dismiss = '<A-e>',
+        },
+    },
+}
+```
+
+**使用 nvim-cmp 进行设置:**
+
+
+```lua
+require('cmp').setup {
+    sources = {
+        -- 包含 minuet 作为源以启用自动补全
+        { name = 'minuet' },
+        -- 和您的其他来源
+    },
+    performance = {
+        -- 建议增加超时时间,因为与其他补全来源相比,LLMs 的响应速度通常较慢。如果您只需要手动补全,则不需要此设置。
+        fetching_timeout = 2000,
+    },
+}
+
+
+-- 如果你希望手动调用补全,
+-- 以下配置将 `A-y` 键绑定到手动触发 minuet 补全。
+require('cmp').setup {
+    mapping = {
+        ["<A-y>"] = require('minuet').make_cmp_map()
+        -- 和您的其他键映射
+    },
+}
+```
+
+ +**使用 blink-cmp 进行设置:** + +
+
+```lua
+require('blink-cmp').setup {
+    keymap = {
+        -- 手动调用 minuet 补全。
+        ['<A-y>'] = require('minuet').make_blink_map(),
+    },
+    sources = {
+        -- 启用 minuet 进行自动补全
+        default = { 'lsp', 'path', 'buffer', 'snippets', 'minuet' },
+        -- 仅对于手动补全,从默认值中删除 'minuet'
+        providers = {
+            minuet = {
+                name = 'minuet',
+                module = 'minuet.blink',
+                score_offset = 8, -- 在建议中赋予 minuet 更高的优先级
+            },
+        },
+    },
+    -- 建议避免不必要的请求
+    completion = { trigger = { prefetch_on_insert = false } },
+}
+```
+
+ +**LLM 提供商示例:** + +**Deepseek:** + +```lua +-- 你可以使用 openai_fim_compatible 或 openai_compatible 提供商来使用 deepseek +require('minuet').setup { + provider = 'openai_fim_compatible', + provider_options = { + openai_fim_compatible = { + api_key = 'DEEPSEEK_API_KEY', + name = 'deepseek', + optional = { + max_tokens = 256, + top_p = 0.9, + }, + }, + }, +} + + +-- 或者 +require('minuet').setup { + provider = 'openai_compatible', + provider_options = { + openai_compatible = { + end_point = 'https://api.deepseek.com/v1/chat/completions', + api_key = 'DEEPSEEK_API_KEY', + name = 'deepseek', + optional = { + max_tokens = 256, + top_p = 0.9, + }, + }, + }, +} +``` diff --git a/docs/ruzhiai_note/README.md b/docs/ruzhiai_note/README.md new file mode 100644 index 0000000..fee5960 --- /dev/null +++ b/docs/ruzhiai_note/README.md @@ -0,0 +1,47 @@ +
+ +
+ +# RuZhi AI Notes + +RuZhi AI Notes is an intelligent AI knowledge management tool dedicated to providing users with one-stop knowledge management and application services, including AI search and exploration, AI results to notes conversion, note management and organization, knowledge presentation and sharing, etc. + +## Core Advantages + +### 1. Powerful AI Model Support +- Integrated with DeepSeek thinking model +- According to user feedback, compared to similar products, our model: + - More stable + - Higher output quality + - Can integrate with enterprise internal knowledge bases + + + +### 2. Convenient Content Management +- One-click save AI conversation content to notes +- Support for various export formats: + - Markdown + - Word documents + - PDF files + - PPT presentations + - Resume templates + and more formats + +### 3. Upcoming Features +- Task management list +- To-do tracking +- Calendar management +- More intelligent features +- Multi-platform clients + +## Usage Guide + +#### Method 1: Search +1. Search for "RuZhi AI Notes" in search engines +2. Register + +#### Method 2: Direct Link Access +1. Visit our official link https://ruzhiai.perfcloud.cn/ +2. Complete the registration process + - The system will automatically create an application for you after registration + - You can then start using all features diff --git a/docs/ruzhiai_note/README_cn.md b/docs/ruzhiai_note/README_cn.md new file mode 100644 index 0000000..1690729 --- /dev/null +++ b/docs/ruzhiai_note/README_cn.md @@ -0,0 +1,49 @@ +
+ +
+ +# 如知AI笔记 + +如知AI笔记是一款智能化的AI知识管理工具,致力于为用户提供一站式的知识管理和应用服务,包括AI搜索探索、AI结果转笔记、笔记管理与整理、 知识演示与分享等。 + +## 核心优势 + +### 1. 强大的AI模型支持 +- 集成了DeepSeek深度思考模型 +- 用户反馈显示,相较同类产品,我们的模型: + - 更加稳定 + - 输出质量更高 + - 可以融合企业内部知识库 + + + + +### 2. 便捷的内容管理 +- 一键保存AI对话内容到笔记 +- 多样化的导出格式支持: + - Markdown + - Word文档 + - PDF文件 + - PPT演示文稿 + - 简历模板 + 等多种格式 + +### 3. 即将推出的功能 +- 任务管理清单 +- 待办事项追踪 +- 日历管理 +- 更多智能化功能 +- 多端客户端 + +## 使用指南 + + +#### 方式一:搜索 +1. 在搜索引擎中搜索"如知AI笔记" +2. 注册 + +#### 方式二:直接访问链接 +1. 访问我们的官方链接 https://ruzhiai.perfcloud.cn/ +2. 完成注册流程 + - 注册后系统会自动为您创建应用 + - 即可开始使用所有功能 diff --git a/docs/ruzhiai_note/assets/deepseek_used.jpg b/docs/ruzhiai_note/assets/deepseek_used.jpg new file mode 100644 index 0000000..e041393 Binary files /dev/null and b/docs/ruzhiai_note/assets/deepseek_used.jpg differ diff --git a/docs/ruzhiai_note/assets/play_store_512.png b/docs/ruzhiai_note/assets/play_store_512.png new file mode 100644 index 0000000..5b469c4 Binary files /dev/null and b/docs/ruzhiai_note/assets/play_store_512.png differ diff --git a/docs/solana-agent-kit/README.md b/docs/solana-agent-kit/README.md new file mode 100644 index 0000000..07ca47c --- /dev/null +++ b/docs/solana-agent-kit/README.md @@ -0,0 +1,67 @@ +# README + +logo + +--- + +An open-source toolkit for connecting AI agents to Solana protocols. Now, any agent, using any model can autonomously perform 60+ Solana actions: +‍ + +## STEP1. 
+
+Apply for an API key on the [DeepSeek Open Platform](https://platform.deepseek.com/).
+
+## STEP 2
+
+Initialize the DeepSeek LLM:
+
+```typescript
+import { ChatDeepSeek } from "@langchain/deepseek";
+
+const deepseek = new ChatDeepSeek({
+  model: "deepseek-chat",
+  temperature: 0,
+});
+```
+
+## STEP 3
+
+Initialize the Solana Agent Kit with DeepSeek:
+
+```typescript
+const solanaAgent = new SolanaAgentKit(
+  process.env.SOLANA_PRIVATE_KEY!,
+  process.env.RPC_URL!,
+  {
+    OPENAI_API_KEY: process.env.OPENAI_API_KEY!,
+    HELIUS_API_KEY: process.env.HELIUS_API_KEY!,
+    PERPLEXITY_API_KEY: process.env.PERPLEXITY_API_KEY!,
+  },
+);
+
+const tools = createSolanaTools(solanaAgent);
+
+const agent = createReactAgent({
+  llm: deepseek,
+  tools,
+  checkpointSaver: memory,
+  messageModifier: `
+    You are a helpful agent that can interact onchain using the Solana Agent Kit. You are
+    empowered to interact onchain using your tools. If you ever need funds, you can request them from the
+    faucet. If not, you can provide your wallet details and request funds from the user. If there is a 5XX
+    (internal) HTTP error code, ask the user to try again later. If someone asks you to do something you
+    can't do with your currently available tools, you must say so, and encourage them to implement it
+    themselves using the Solana Agent Kit, recommend they go to https://www.solanaagentkit.xyz for more information. Be
+    concise and helpful with your responses. Refrain from restating your tools' descriptions unless it is explicitly requested.
+ `, +}); + +const stream = await agent.stream( + { messages: [new HumanMessage(userInput)] }, + config, +); +``` + +More guides can be found in the [Solana Agent Kit](https://docs.solanaagentkit.xyz/v0/introduction) + +‍ \ No newline at end of file diff --git a/docs/solana-agent-kit/assets/sendai-logo.png b/docs/solana-agent-kit/assets/sendai-logo.png new file mode 100644 index 0000000..638b962 Binary files /dev/null and b/docs/solana-agent-kit/assets/sendai-logo.png differ diff --git a/docs/stranslate/README.md b/docs/stranslate/README.md new file mode 100644 index 0000000..3d4c3fd --- /dev/null +++ b/docs/stranslate/README.md @@ -0,0 +1,31 @@ + + +# [`STranslate`](https://stranslate.zggsong.com/) + +STranslate is a translation and OCR tool that is ready to use on the go. + +## Translation + +Supports multiple translation languages and various translation methods such as input, text selection, screenshot, clipboard monitoring, and mouse text selection. It also allows displaying multiple service translation results simultaneously for easy comparison. + +## OCR + +Supports fully offline OCR for Chinese, English, Japanese, and Korean, based on PaddleOCR, with excellent performance and quick response. It supports screenshot, clipboard, and file OCR, as well as silent OCR. Additionally, it supports OCR services from WeChat, Baidu, Tencent, OpenAI, and Google. + +## Services + +Supports integration with over ten translation services including DeepSeek, OpenAI, Gemini, ChatGLM, Baidu, Microsoft, Tencent, Youdao, and Alibaba. It also offers free API options. Built-in services like Microsoft, Yandex, Google, and Kingsoft PowerWord are ready to use out of the box. + +## Features + +Supports back-translation, global TTS, writing (directly translating and replacing selected content), custom prompts, QR code recognition, external calls, and more. 
+ +## Main Interface + +![main_ui](./assets/main.png) + +## Configuration + +![settings_1](./assets/settings_1.png) + +![settings_2](./assets/settings_2.png) \ No newline at end of file diff --git a/docs/stranslate/README_cn.md b/docs/stranslate/README_cn.md new file mode 100644 index 0000000..43f9b15 --- /dev/null +++ b/docs/stranslate/README_cn.md @@ -0,0 +1,31 @@ + + +# [`STranslate`](https://stranslate.zggsong.com/) + +STranslate 是一款即用即走的翻译、OCR工具 + +## 翻译 + +支持多种翻译语言,支持输入、划词、截图、监听剪贴板、监听鼠标划词等多种翻译方式,支持同时显示多个服务翻译结果,方便比较翻译结果 + +## OCR + +支持中英日韩完全离线OCR,基于 PaddleOCR,效果优秀反应迅速,支持截图、剪贴板、文件OCR,支持静默OCR,同时支持微信、百度、腾讯、OpenAI、Google等OCR + +## 服务 + +支持DeepSeek、OpenAI、Gemini、ChatGLM、百度、微软、腾讯、有道、阿里等十多家翻译服务接入;同时还提供免费API可供选择;内置微软、Yandex、Google、金山词霸等内置服务可做到开箱即用 + +## 特色 + +支持回译、全局TTS、写作(选中后直接翻译替换内容)、自定义Prompt、二维码识别、外部调用等等功能 + +## 主界面 + +![main_ui](./assets/main.png) + +## 配置 + +![settings_1](./assets/settings_1.png) + +![settings_2](./assets/settings_2.png) \ No newline at end of file diff --git a/docs/stranslate/assets/main.png b/docs/stranslate/assets/main.png new file mode 100644 index 0000000..7c5c832 Binary files /dev/null and b/docs/stranslate/assets/main.png differ diff --git a/docs/stranslate/assets/settings_1.png b/docs/stranslate/assets/settings_1.png new file mode 100644 index 0000000..70d84db Binary files /dev/null and b/docs/stranslate/assets/settings_1.png differ diff --git a/docs/stranslate/assets/settings_2.png b/docs/stranslate/assets/settings_2.png new file mode 100644 index 0000000..7aaad02 Binary files /dev/null and b/docs/stranslate/assets/settings_2.png differ diff --git a/docs/stranslate/assets/stranslate.svg b/docs/stranslate/assets/stranslate.svg new file mode 100644 index 0000000..df8c4c3 --- /dev/null +++ b/docs/stranslate/assets/stranslate.svg @@ -0,0 +1,2 @@ + + \ No newline at end of file diff --git a/docs/superagentx/README.md b/docs/superagentx/README.md new file mode 100644 index 0000000..69de83c --- /dev/null +++ b/docs/superagentx/README.md @@ 
-0,0 +1,33 @@
+# `SuperAgentX`
+
+> 🤖 SuperAgentX: A lightweight autonomous true multi-agent framework with AGI capabilities.
+
+**SuperAgentX Source Code**: [https://github.com/superagentxai/superagentx](https://github.com/superagentxai/superagentx)
+
+**DeepSeek AI Agent Example**: [https://github.com/superagentxai/superagentx/blob/master/tests/llm/test_deepseek_client.py](https://github.com/superagentxai/superagentx/blob/master/tests/llm/test_deepseek_client.py)
+
+**Documentation**: [https://docs.superagentx.ai/](https://docs.superagentx.ai/)
+
+The SuperAgentX framework integrates DeepSeek as an LLM service provider, enhancing its agents' reasoning and decision-making capabilities.
+
+## 🤖 Introduction
+
+`SuperAgentX` is an advanced agentic AI framework designed to accelerate the development of Artificial General Intelligence (AGI). It provides a powerful, modular, and flexible platform for building autonomous AI agents capable of executing complex tasks with minimal human intervention.
+
+![SuperAgentX Diagram](https://raw.githubusercontent.com/superagentxai/superagentX/refs/heads/master/docs/images/architecture.png)
+
+### ✨ Key Features
+
+🚀 Open-Source Framework: A lightweight, open-source AI framework built for multi-agent applications with Artificial General Intelligence (AGI) capabilities.
+
+🎯 Goal-Oriented Multi-Agents: Enables the creation of agents with retry mechanisms that keep working toward their set goals. Agents communicate in parallel, sequentially, or in a hybrid of the two.
+
+🏖️ Easy Deployment: Offers WebSocket, RESTful API, and IO console interfaces for rapid setup of agent-based AI solutions.
+
+♨️ Streamlined Architecture: An enterprise-ready, scalable, and pluggable architecture with no major dependencies; built independently!
+
+📚 Contextual Memory: Uses SQL + vector databases to store and retrieve user-specific context effectively.
+
+🧠 Flexible LLM Configuration: Simple configuration options for various Gen AI models.
+ +🤝🏻 Extendable Handlers: Allows integration with diverse APIs, databases, data warehouses, data lakes, IoT streams, and more, making them accessible for function-calling features. diff --git a/docs/superagentx/assets/architecture.png b/docs/superagentx/assets/architecture.png new file mode 100644 index 0000000..f155158 Binary files /dev/null and b/docs/superagentx/assets/architecture.png differ diff --git a/docs/tencent/hunyuan.png b/docs/tencent/hunyuan.png new file mode 100644 index 0000000..a5b7012 Binary files /dev/null and b/docs/tencent/hunyuan.png differ diff --git a/docs/tomemo/README.md b/docs/tomemo/README.md index 86c637f..f550279 100644 --- a/docs/tomemo/README.md +++ b/docs/tomemo/README.md @@ -12,7 +12,19 @@ ToMemo is a phrasebook + clipboard history + keyboard iOS app with integrated AI ## Integrate with Deepseek API -Go to Settings-Extensions-AI Services-AI Providers to add the Deepseek API Key. -After adding, you can turn on the 「show in bottom tab」 in the AI service page, so that you can talk to Deepseek directly in the application. +- Go to "Settings-Extensions-AI Services-AI Providers", click "Add" in the top right corner, and select "DeepSeek" in the **Provider** field. +- Enter your API Key in the **API Key** field. +- Click the "Test" button to verify if the input is valid. 
+- Click "Load Models" to select the model you want to use.
+- Turn on "Enable" and click "Save".
 
-![image](assets/Integrate.jpg)
+![image](assets/app-provider.png)
+
+## Use
+
+- Go to "Settings-Extensions-AI Services".
+- Click "AI Assistant" to enter the AI Assistant page.
+- Tap "Add" in the top right corner to create an AI assistant; you can select "Deepseek" as its model.
+- Start chatting with Deepseek.
+
+![image](assets/use-deepseek.png)
diff --git a/docs/tomemo/README_cn.md b/docs/tomemo/README_cn.md
index b46b0b2..766675a 100644
--- a/docs/tomemo/README_cn.md
+++ b/docs/tomemo/README_cn.md
@@ -12,7 +12,19 @@ ToMemo 是一款短语合集 + 剪切板历史 + 键盘输出的 iOS 应用,
 ## Integrate with Deepseek API
 
-进入设置-扩展-AI 服务-AI 供应商,即可添加 Deepseek API Key。
-添加完成后,可以 AI 服务页面中开启底部 Tab 页,方便应用中直接与 Deepseek 对话。
+- 进入「设置-扩展-AI 服务-AI 供应商」,点击右上角「添加」,在**供应商**中选择「DeepSeek」。
+- 在**API Key**中输入你的 API Key。
+- 点击「测试」按钮,测试填入是否可用。
+- 点击「加载模型」,选择需要使用的模型。
+- 打开「启用」后,点击「保存」。
 
-![image](assets/Integrate.jpg)
+![image](assets/app-provider.png)
+
+## Use
+
+- 进入「设置-扩展-AI 服务」
+- 点击「AI 助手」进入 AI 助手页面
+- 右上角添加 AI 助手,可以在模型中选择「深度求索」
+- 开始和 Deepseek 聊天
+
+![image](assets/use-deepseek.png)
diff --git a/docs/tomemo/assets/app-provider.png b/docs/tomemo/assets/app-provider.png
new file mode 100644
index 0000000..5309a86
Binary files /dev/null and b/docs/tomemo/assets/app-provider.png differ
diff --git a/docs/tomemo/assets/use-deepseek.png b/docs/tomemo/assets/use-deepseek.png
new file mode 100644
index 0000000..2c0a1a1
Binary files /dev/null and b/docs/tomemo/assets/use-deepseek.png differ
diff --git a/docs/translate.js/README.md b/docs/translate.js/README.md
new file mode 100644
index 0000000..b35d4b7
--- /dev/null
+++ b/docs/translate.js/README.md
@@ -0,0 +1,89 @@
+

+ translate.js +

+

+ It is an AI i18n tool designed for front-end developers. With just two lines of JavaScript code, it can achieve fully automatic translation of HTML.
+ Leave it to the AI. There is no need to modify the page, no language configuration file is required, no API Key is needed, and it is SEO-friendly! +

+
+![image](assets/html_demo.gif)
+
+
+# Usage
+### 1. Deploy the text translation API
+First, deploy the open text-translation API. It supports batch translation of multiple texts in a single request and has a built-in multi-layer caching system that minimizes the time spent on AI translation, so end users get instant, delay-free translations.
+
+##### 1.1 Server specifications
+It runs comfortably on the following server configuration: 1 CPU core, 1 GB of memory, a 20 GB system disk, 1 Mbps of bandwidth, and CentOS 7.4 as the operating system (any version from 7.0 to 7.9 works).
+
+##### 1.2 Install
+````
+wget https://gitee.com/mail_osc/translate/raw/master/deploy/install_translate.service.sh -O install.sh && chmod -R 777 install.sh && sh ./install.sh
+````
+
+##### 1.3 Configure DeepSeek parameters
+
+Edit application.properties with the following command:
+````
+vi /mnt/tomcat8/webapps/ROOT/WEB-INF/classes/application.properties
+````
+Then append a few lines of configuration at the very end:
+````
+# The request URL of the large-model API. The example below is Huawei's DeepSeek request URL; GiteeAI's request URL, for instance, is https://ai.gitee.com/v1/chat/completions . You can obtain and fill in the URL for other platforms yourself.
+translate.service.deepSeek.url=https://infer-modelarts-cn-southwest-2.modelarts-infer.com/v1/infers/fd53915b-8935-48fe-be70-449d76c0fc87/v1/chat/completions
+# Access token
+translate.service.deepSeek.key=QM8jrVl98lTluLhzCaO4i9PFv-caRk6U7kDL-H6CIyApytMG69jO33aasO1GnduQak8fGI7dtpmbsM98Qh3ywA
+# Which model to use. DeepSeek-V3 is the default here and usually needs no change.
+translate.service.deepSeek.model=DeepSeek-V3
+# The maximum number of tokens for a single AI request. Defaults to 3000 if not set; the default is fine here.
+translate.service.deepSeek.max_tokens=3000
+````
+The final result should look like the following figure:
+![image](assets/application_properties_demo.png)
+
+##### 1.4 Restart the service
+````
+pkill java
+sudo /mnt/tomcat8/bin/startup.sh
+````
+
+##### 1.5 Test the text translation API
+![image](assets/texts_translate_api_demo.png)
+The "from" parameter here is the source language. If you know what language it is, fill it in; if you are unsure or it is hard to judge, just fill it in as shown in the picture above and DeepSeek will detect the language automatically and translate.
+For detailed documentation of this translation API, see: [http://api.zvo.cn/translate/service/20230807/translate.json.html](http://api.zvo.cn/translate/service/20230807/translate.json.html)
+
+
+
+### 2. Use translate.js in HTML
+
+**Click a language on an ordinary website to switch.**
+
+As shown in the figure below, the website has a language switcher somewhere on the page.
+![](assets/site_demo.png)
+Add the following code at the end of the page's HTML (a minimal sketch: the JS path follows the open-source repository, and `translate.request.api.host`, an assumption based on the project's docs, points requests at the API deployed in step 1):
+
+````
+<script src="https://res.zvo.cn/translate/translate.js"></script>
+<script>
+// Point translate.js at the translation API deployed in step 1 (replace with your server address; illustrative setting)
+translate.request.api.host = 'http://YOUR_SERVER_IP/';
+translate.execute();
+</script>
+````
+
+This is just the most common scenario. It also works with frameworks such as Vue and React, and with all kinds of admin backends; anywhere JavaScript can run, translate.js can be used!
+
+# Open-source repository
+https://github.com/xnx3/translate
+
diff --git a/docs/translate.js/README_cn.md b/docs/translate.js/README_cn.md
new file mode 100644
index 0000000..fd364ad
--- /dev/null
+++ b/docs/translate.js/README_cn.md
@@ -0,0 +1,141 @@
+

+ translate.js +

+

+ 它是面向前端开发者使用的 AI i18n,两行js实现html全自动翻译。
+ 交给AI,无需改动页面、无语言配置文件、无API Key、对SEO友好! +

+ +![image](assets/html_demo.gif) + + +# 使用方式 +### 1. 部署文本翻译API +首先部署文本翻译开放接口,它支持一次性批量翻译多个文本,同时内置多层缓存体系,最大化降低AI翻译的耗时。以使用户在使用时做到瞬间翻译无延迟的能力。 + +##### 1.1 服务器规格 +1核1G、20G系统盘、1MB带宽,操作系统为CentOS 7.4 (7.0~7.9都可)即可完美运行。 + +##### 1.2 安装 +```` +wget https://gitee.com/mail_osc/translate/raw/master/deploy/install_translate.service.sh -O install.sh && chmod -R 777 install.sh && sh ./install.sh +```` + +##### 1.3 配置 DeepSeek 参数 + +编辑 application.properties ,编辑命令: +```` +vi /mnt/tomcat8/webapps/ROOT/WEB-INF/classes/application.properties +```` +然后再最后面追加几行配置: +```` +# 大模型接口请求URL, 比如下面的是华为DeepSeek的请求URL的,另外像是GiteeAI的请求URL是 https://ai.gitee.com/v1/chat/completions 其他的平台的可自行获取填入 +translate.service.deepSeek.url=https://infer-modelarts-cn-southwest-2.modelarts-infer.com/v1/infers/fd53915b-8935-48fe-be70-449d76c0fc87/v1/chat/completions +# 访问令牌 +translate.service.deepSeek.key=QM8jrVl98lTluLhzCaO4i9PFv-caRk6U7kDL-H6CIyApytMG69jO33aasO1GnduQak8fGI7dtpmbsM98Qh3ywA +# 使用哪个模型,这里默认使用 DeepSeek-V3 即可,无需更改 +translate.service.deepSeek.model=DeepSeek-V3 +# AI单次的最大token数量,不设置默认是3000,这里可以默认用这个即可 +translate.service.deepSeek.max_tokens=3000 +```` +最终的效果如下图所示: +![image](assets/application_properties_demo.png) + +##### 1.4 重启服务 +```` +pkill java +sudo /mnt/tomcat8/bin/startup.sh +```` + +##### 1.5 文本翻译API测试一下 +![image](assets/texts_translate_api_demo.png) +这里传入的 from 代表翻以前的语种语言,如果你知道是什么语言则填上,如果不知道不好判断,那就固定上图这样填写即可,DeepSeek会自动识别并进行翻译。 +有关此翻译API接口的详细说明可参考: [http://api.zvo.cn/translate/service/20230807/translate.json.html](http://api.zvo.cn/translate/service/20230807/translate.json.html) + + + +### 2. html中使用 translate.js + +**普通网站中点击某个语言进行切换** +如下图所示,网站中的某个位置要有几种语言切换 +![](assets/site_demo.png) +直接在其html代码末尾的位置加入以下代码: + +```` + + + + + + +```` + +这只是一个最普通的场景使用,另外像是各框架了比如VUE、React、等等,各种管理后台,只要能运行js,都能使用它! 
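+一个最简的接入示意如下(假设使用仓库文档中的公共 JS 路径;其中 translate.request.api.host 用于指向第一步部署的翻译接口,属示意配置,具体请以官方文档为准):
+
+````
+<script src="https://res.zvo.cn/translate/translate.js"></script>
+<script>
+// 指定第一步部署的翻译API地址(示意,请替换为你的服务器地址)
+translate.request.api.host = 'http://YOUR_SERVER_IP/';
+translate.execute();
+</script>
+````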
+ +# 开源仓库 +https://github.com/xnx3/translate +交流QQ群: 240567964 + +# 它的能力 + +### 特性说明 +* **使用极其简单。** 直接加入几行 JavaScript 代码即可让其拥有上百种语言切换能力。 +* **不增加工作量。** 无需改造页面本身植入大量垃圾代码变得臃肿,也不需要配置各种语种的语言文件,因为它会直接扫描你的DOM自动识别并翻译显示,它不需要你到某某网站登录去申请什么key,它是开源开放的,拿来就能用。 +* **极其灵活扩展。** 您可指定它[只翻译某些指定区域的元素](http://translate.zvo.cn/4063.html)、[自定义切换语言方式及美化](http://translate.zvo.cn/4056.html)、[某些id、class、tag不被翻译](https://translate.zvo.cn/4061.html)、[自定义翻译术语](https://translate.zvo.cn/4070.html) ...... 只要你想的,它都能做到。做不到的,你找我我来让它做到! +* **自动切换语种。** [自动根据用户的语言喜好及所在的国家切换到这个语种进行浏览](http://translate.zvo.cn/4065.html) +* **极速翻译能力。** [内置三层缓存、预加载机制,毫秒级瞬间翻译的能力。它并不是你理解的大模型蜗牛似的逐个字往外出的那样](http://translate.zvo.cn/4026.html) +* [**永久开源免费。** 采用Apache-2.0开源协议,您可永久免费使用](https://github.com/xnx3/translate/blob/master/LICENSE)。[另外你可以用它来做某些系统的三方插件直接售卖盈利](http://translate.zvo.cn/4036.html)、或者你是建站公司用它来做为一项高级功能盈利,我们都是完全认可并支持的,并不需要给我们任何费用! +* **搜索引擎友好。** 完全不影响你本身网站搜索引擎的收录。爬虫所爬取的网页源代码,它不会对其进行任何改动,你可完全放心。[另外我们还有高级版本让你翻译之后的页面也能被搜索引擎收录](http://translate.zvo.cn/236896.html) +* **支持私有部署。** [在某些政府机关及大集团内部项目中,对数据隐私及安全保密有强要求场景、或者完全不通外网的场景,可以自行私有部署翻译API服务](http://translate.zvo.cn/4052.html) +* **全球网络节点**。美洲、亚洲、欧洲 ... 
都有网络节点,它能自动适配最快节点,每间隔1分钟自动获取一次延迟最小的节点进行接入使用,使全球范围使用都可高效稳定。 +* **HTML整体翻译**。[提供开放API接口,传入html文件(html源代码)及要翻译为的语言即可拿到翻译后的html源码。完美支持识别各种复杂及不规范html代码, +支持翻译前的微调,比如不翻译某个区域、图片翻译、js语法操作html文件中的元素进行增删改等。](https://translate.zvo.cn/4022.html) +* **源站翻译及域名分发**。[将您现有的网站,翻译成全新的小语种网站,小语种网站可以分别绑定域名并支持搜索引擎收录和排名。而您的源站无需任何改动。也就是你可以将你朋友的网站,翻译为小语种网站,绑定上自己的域名,提供对外访问。而你无需向你朋友取得任何的如账号等相关权限](https://translate.zvo.cn/236896.html) +* **浏览器翻译插件**。[提供整体的浏览器翻译插件的全套方案,您如果是开发者,完全可以拿去将界面美化包装一下,而后直接提交应用市场进行售卖盈利](https://translate.zvo.cn/4037.html) + + +### 微调指令 +* **[设置默认翻译为的语种](http://translate.zvo.cn/4071.html)**,用户第一次打开时,默认以什么语种显示。 +* **[自定义翻译术语](http://translate.zvo.cn/41555.html)**,如果你感觉某些翻译不太符合你的预期,可进行针对性的定义某些词或句子的翻译结果,进行自定义术语库 +* **[翻译完后自动触发执行](http://translate.zvo.cn/4069.html)**,当翻译完成后会自动触发执行您的某个方法,以便您来做自定义扩展。 +* **[指定翻译服务接口](http://translate.zvo.cn/4068.html)**,如果你不想用我们开源免费的翻译服务接口,使用您自己私有部署的、或者您自己二次开发对接的某个翻译服务,可通过此来指定自己的翻译接口。 +* **[监控页面动态渲染的文本进行自动翻译](http://translate.zvo.cn/4067.html)**,如果页面用 JavaScript 的地方比较多,内容都是随时用JS来控制显示的,比如 VUE、React 等框架做的应用,它可以实时监控DOM中文字的变动,当发生变动后立即识别并进行翻译。 +* **[设置本地语种(当前网页的语种)](http://translate.zvo.cn/4066.html)**,手动指定当前页面的语言。如果不设置,它会自动识别当前网页的文本,取当前网页文本中,出现频率最高的语种为默认语种。 +* **[自动切换为用户所使用的语种](http://translate.zvo.cn/4065.html)**,用户第一次打开网页时,自动判断当前用户所使用的语种、以及所在的国家,来自动进行切换为这个语种。 +* **[主动进行语言切换](http://translate.zvo.cn/4064.html)**,开放一个方法提供程序调用,只需传入翻译的目标语言,即可快速切换到指定语种 +* **[只翻译指定的元素](http://translate.zvo.cn/4063.html)**,指定要翻译的元素的集合,可传入一个或多个元素。如果不设置此,默认翻译整个网页。 +* **[翻译时忽略指定的id](http://translate.zvo.cn/4062.html)**,翻译时追加上自己想忽略不进行翻译的id的值,凡是在这里面的,都不进行翻译,也就是当前元素以及其子元素都不会被翻译。 +* **[翻译时忽略指定的class属性](http://translate.zvo.cn/4061.html)**,翻译时追加上自己想忽略不进行翻译的class标签,凡是在这里面的,都不进行翻译,也就是当前元素以及其子元素都不会被翻译。 +* **[翻译时忽略指定的tag标签](http://translate.zvo.cn/4060.html)**,翻译时追加上自己想忽略不进行翻译的tag标签,凡是在这里面的,都不进行翻译,也就是当前元素以及其子元素都不会被翻译。 +* **[翻译时忽略指定的文字不翻译](http://translate.zvo.cn/283381.html)**,翻译时追加上自己想忽略不进行翻译的文字,凡是在这里面的,都不进行翻译。 +* 
**[对网页中图片进行翻译](http://translate.zvo.cn/4055.html)**,在进行翻译时,对其中的图片也会一起进行翻译。 +* **[鼠标划词翻译](http://translate.zvo.cn/4072.html)**,鼠标在网页中选中一段文字,会自动出现对应翻译后的文本 +* **[获取当前显示的是什么语种](http://translate.zvo.cn/4074.html)**,如果用户切换为英语进行浏览,那么这个方法将返回翻译的目标语种。 +* **[根据URL传参控制以何种语种显示](http://translate.zvo.cn/41929.html)**,设置可以根据当前访问url的某个get参数来控制使用哪种语言显示。 +* **[离线翻译及自动生成配置](http://translate.zvo.cn/4076.html)**,其实它也就是传统 i18n 的能力,有语言配置文件提供翻译结果。 +* **[手动调用接口进行翻译操作](http://translate.zvo.cn/4077.html)**,通过JavaScript调用一个方法,传入翻译文本进行翻译,并获得翻译结果 +* **[元素的内容整体翻译能力配置](http://translate.zvo.cn/4078.html)**,对node节点的文本拿来进行整体翻译处理,而不再拆分具体语种,提高翻译语句阅读通顺程度 +* **[翻译接口响应捕获处理](http://translate.zvo.cn/4079.html)**,对翻译API接口的响应进行捕获,进行一些自定义扩展 +* **[清除历史翻译语种的缓存](http://translate.zvo.cn/4080.html)**,清除掉你上个页面所记忆的翻译语种,从而达到切换页面时不会按照上个页面翻译语种自动进行翻译的目的。 +* **[网页ajax请求触发自动翻译](http://translate.zvo.cn/4086.html)**,监听当前网页中所有的ajax请求,当请求结束后,自动触发翻译 +* **[设置只对指定语种进行翻译](http://translate.zvo.cn/4085.html)**,翻译时只会翻译在这里设置的语种,未在里面的语种将不会被翻译。 +* **[识别字符串语种及分析](http://translate.zvo.cn/43128.html)**,对字符串进行分析,识别出都有哪些语种,每个语种的字符是什么、每个语种包含的字符数是多少 +* **[重写一级缓存(浏览器缓存)](http://translate.zvo.cn/4082.html)**,你如果不想使用默认的 localStorage 的缓存,您完全可以对其重写,设置自己想使用的缓存方式 +* **[设置使用的翻译服务 translate.service.use](http://translate.zvo.cn/4081.html)**,目前有自有的服务器提供翻译API方式、无自己服务器API的方式两种。 +* **[启用企业级稳定翻译](http://translate.zvo.cn/4087.html)**,独立于开源版本的翻译通道之外,仅对少数用户开放,提供企业级的稳定、高速、以及更多网络分发节点。 +* **[增加对指定标签的属性进行翻译](http://translate.zvo.cn/231504.html)**,可以增加对指定html标签的某个或某些属性进行翻译。比如element、vue 等框架,有些自定义的标签属性,想让其也正常翻译 +* **[本地语种也进行强制翻译](http://translate.zvo.cn/289574.html)**,切换为中文时,即使本地语种设置的是中文,网页中只要不是中文的元素,都会被翻译为要显示的语种 +* **[自定义通过翻译API进行时的监听事件](http://translate.zvo.cn/379207.html)**,当通过翻译API进行文本翻译时的整个过程进行监听,做一些自定义处理,比如翻译API请求前要做些什么、请求翻译API完成并在DOM渲染完毕后触发些什么。 + + diff --git a/docs/translate.js/README_ja.md b/docs/translate.js/README_ja.md new file mode 100644 index 0000000..0e9c2d1 --- /dev/null +++ b/docs/translate.js/README_ja.md @@ -0,0 +1,86 @@ +

+ translate.js +

+

+ それはフロントエンド開発者向けのAI i18nで、2行のJSでHTMLの全自動翻訳を実現します。
+ AIに任せて、ページの変更なし、言語設定ファイルなし、APIキーなし、SEOに優しい! +

+ +![image](assets/html_demo.gif) + +# 使用方法 +### 1. テキスト翻訳APIの展開 +まず、テキスト翻訳のオープンインターフェースを展開し、一度に複数のテキストをバッチ翻訳することをサポートし、同時に多層キャッシュシステムを内蔵して、AI翻訳の時間を最大限に短縮します。これにより、ユーザーが使用時に瞬時に遅延なく翻訳できる能力を実現します。 + +##### 1.1 サーバー仕様 +1コア1GB、20GBのシステムディスク、1MBの帯域幅で、オペレーティングシステムはCentOS 7.4(7.0〜7.9でも可)で完璧に動作します。 + +##### 1.2 インストール +```` +wget https://gitee.com/mail_osc/translate/raw/master/deploy/install_translate.service.sh -O install.sh && chmod -R 777 install.sh && sh ./install.sh +```` + +##### 1.3 DeepSeek パラメータの設定 + +編集 application.properties : +```` +vi /mnt/tomcat8/webapps/ROOT/WEB-INF/classes/application.properties +```` +その後、最後にいくつかの設定を追加します: +```` +# 大規模モデルインターフェースリクエストURL、例えば以下のものはHuawei DeepSeekのリクエストURLです。また、GiteeAIのリクエストURLは https://ai.gitee.com/v1/chat/completions 他のプラットフォームからは自分で取得して入力できます +translate.service.deepSeek.url=https://infer-modelarts-cn-southwest-2.modelarts-infer.com/v1/infers/fd53915b-8935-48fe-be70-449d76c0fc87/v1/chat/completions +# アクセストークン +translate.service.deepSeek.key=QM8jrVl98lTluLhzCaO4i9PFv-caRk6U7kDL-H6CIyApytMG69jO33aasO1GnduQak8fGI7dtpmbsM98Qh3ywA +# どのモデルを使用するか、ここではデフォルトで DeepSeek-V3 を使用すればよく、変更する必要はありません。 +translate.service.deepSeek.model=DeepSeek-V3 +# AIの単回の最大トークン数、設定しない場合デフォルトは3000で、ここではデフォルトでこれを使用すれば良いです。 +translate.service.deepSeek.max_tokens=3000 +```` +最終の効果は以下の図のとおりです: +![image](assets/application_properties_demo.png) + +##### 1.4 サービスを再起動する +```` +pkill java +sudo /mnt/tomcat8/bin/startup.sh +```` + +##### 1.5 テキスト翻訳APIをテストする +![image](assets/texts_translate_api_demo.png) +ここで渡される「from」は、翻訳前の言語を表します。もしその言語がわかればそれを記入し、わからない場合や判断が難しい場合は、上記のように固定して記入してください。DeepSeekが自動的に認識し、翻訳を行います。 +この翻訳APIインターフェースの詳細な説明については、以下を参照してください: [http://api.zvo.cn/translate/service/20230807/translate.json.html](http://api.zvo.cn/translate/service/20230807/translate.json.html) + + + +### 2. 
htmlでtranslate.jsを使用する + +**通常のウェブサイトで、ある言語をクリックして切り替える** +以下の図に示すように、ウェブサイトの特定の場所でいくつかの言語を切り替える必要があります。 +![](assets/site_demo.png) +そのHTMLコードの末尾に以下のコードを直接追加します: + +```` + + + + + + +```` + +これは最も一般的なシーンでの使用例です。また、VUEやReactなどの各フレームワークや、さまざまな管理画面など、JSが実行できる環境であればどこでも使用できます! + +# オープンソースリポジトリ +https://github.com/xnx3/translate diff --git a/docs/translate.js/assets/application_properties_demo.png b/docs/translate.js/assets/application_properties_demo.png new file mode 100644 index 0000000..e0d4a79 Binary files /dev/null and b/docs/translate.js/assets/application_properties_demo.png differ diff --git a/docs/translate.js/assets/html_demo.gif b/docs/translate.js/assets/html_demo.gif new file mode 100644 index 0000000..e83bd4f Binary files /dev/null and b/docs/translate.js/assets/html_demo.gif differ diff --git a/docs/translate.js/assets/icon.png b/docs/translate.js/assets/icon.png new file mode 100644 index 0000000..d5c21cf Binary files /dev/null and b/docs/translate.js/assets/icon.png differ diff --git a/docs/translate.js/assets/site_demo.png b/docs/translate.js/assets/site_demo.png new file mode 100644 index 0000000..5ae16f2 Binary files /dev/null and b/docs/translate.js/assets/site_demo.png differ diff --git a/docs/translate.js/assets/texts_translate_api_demo.png b/docs/translate.js/assets/texts_translate_api_demo.png new file mode 100644 index 0000000..7d27b83 Binary files /dev/null and b/docs/translate.js/assets/texts_translate_api_demo.png differ diff --git a/docs/xhai_browser/README.md b/docs/xhai_browser/README.md new file mode 100644 index 0000000..6883b8f --- /dev/null +++ b/docs/xhai_browser/README.md @@ -0,0 +1,14 @@ + + +# [xhai Browser](https://www.dahai123.top/) + + +[xhai Browser](https://m.malink.cn/s/7JFfIv) is an Android desktop management & AI browser, DeepSeek is the default AI dialog engine. 
+It has the ultimate performance (0.2 seconds to start), a slim size (3 MB APK), no ads, ultra-fast ad blocking, multi-screen classification, screen navigation, and a multi-search box: one box, multiple searches!
+
+xhai means "xiaohai" in Chinese; in English the letters stand for "extreme, high-performance, AI Browser".
+
+## UI
+
+
+
diff --git a/docs/xhai_browser/README_cn.md b/docs/xhai_browser/README_cn.md
new file mode 100644
index 0000000..0789078
--- /dev/null
+++ b/docs/xhai_browser/README_cn.md
@@ -0,0 +1,15 @@
+
+
+# [小海浏览器](https://www.dahai123.top/)
+
+
+[小海浏览器](https://m.malink.cn/s/7JFfIv)是安卓桌面管理&AI浏览器,DeepSeek是默认AI对话引擎
+它有极致的性能(0.2秒启动),苗条的体型(apk 3M大),无广告,超高速广告拦截,多屏分类,屏幕导航,多搜索框,一框多搜
+
+
+## UI
+
+
+
+
+
diff --git a/docs/xhai_browser/assets/deepseek3-4_2t.jpg b/docs/xhai_browser/assets/deepseek3-4_2t.jpg
new file mode 100644
index 0000000..5faa459
Binary files /dev/null and b/docs/xhai_browser/assets/deepseek3-4_2t.jpg differ
diff --git a/docs/xhai_browser/assets/deepseek3-4t.jpg b/docs/xhai_browser/assets/deepseek3-4t.jpg
new file mode 100644
index 0000000..1209e6f
Binary files /dev/null and b/docs/xhai_browser/assets/deepseek3-4t.jpg differ
diff --git a/docs/xhai_browser/assets/desktop_512.png b/docs/xhai_browser/assets/desktop_512.png
new file mode 100644
index 0000000..d6a9802
Binary files /dev/null and b/docs/xhai_browser/assets/desktop_512.png differ
diff --git a/docs/xhai_browser/assets/logo_512.png b/docs/xhai_browser/assets/logo_512.png
new file mode 100644
index 0000000..2103f37
Binary files /dev/null and b/docs/xhai_browser/assets/logo_512.png differ
diff --git a/docs/yomo/README.md b/docs/yomo/README.md
new file mode 100644
index 0000000..1b727c1
--- /dev/null
+++ b/docs/yomo/README.md
@@ -0,0 +1,146 @@
+# YoMo Framework - Deepseek Provider
+
+YoMo is an open-source LLM Function Calling Framework for building Geo-distributed AI agents.
Built atop the QUIC Transport Protocol and a strongly-typed stateful serverless architecture, it makes your AI agents low-latency, reliable, secure, and easy to build.
+
+## 🚀 Getting Started
+
+Let's implement a function-calling serverless function, `sfn-get-ip-latency`:
+
+### Step 1. Install CLI
+
+```bash
+curl -fsSL https://get.yomo.run | sh
+```
+
+### Step 2. Start the server
+
+Prepare the configuration as `my-agent.yaml`:
+
+```yaml
+name: ai-zipper
+host: 0.0.0.0
+port: 9000
+
+auth:
+  type: token
+  token: SECRET_TOKEN
+
+bridge:
+  ai:
+    server:
+      addr: 0.0.0.0:9000 ## Restful API endpoint
+      provider: deepseek ## LLM API Service we will use
+
+    providers:
+      deepseek:
+        api_key:
+        model: deepseek-reasoner
+```
+
+Start the server:
+
+```sh
+YOMO_LOG_LEVEL=debug yomo serve -c my-agent.yaml
+```
+
+### Step 3. Write the function
+
+First, let's define what this function does and what parameters it requires; these will be combined into the prompt when invoking the LLM.
+
+```golang
+type Parameter struct {
+	Domain string `json:"domain" jsonschema:"description=Domain of the website,example=example.com"`
+}
+
+func Description() string {
+	return `if user asks ip or network latency of a domain, you should return the result of the given domain.
try your best to dissect user expressions to infer the right domain names` +} + +func InputSchema() any { + return &Parameter{} +} +``` + +Create a Stateful Serverless Function to get the IP and Latency of a domain: + +```golang +func Handler(ctx serverless.Context) { + var msg Parameter + ctx.ReadLLMArguments(&msg) + + // get ip of the domain + ips, _ := net.LookupIP(msg.Domain) + + // get ip[0] ping latency + pinger, _ := ping.NewPinger(ips[0].String()) + pinger.Count = 3 + pinger.Run() + stats := pinger.Statistics() + + val := fmt.Sprintf("domain %s has ip %s with average latency %s", msg.Domain, ips[0], stats.AvgRtt) + ctx.WriteLLMResult(val) +} + +``` + +Finally, let's run it + +```bash +$ yomo run app.go + +time=2025-01-29T21:43:30.583+08:00 level=INFO msg="connected to zipper" component=StreamFunction sfn_id=B0ttNSEKLSgMjXidB11K1 sfn_name=fn-get-ip-from-domain zipper_addr=localhost:9000 +time=2025-01-29T21:43:30.584+08:00 level=INFO msg="register ai function success" component=StreamFunction sfn_id=B0ttNSEKLSgMjXidB11K1 sfn_name=fn-get-ip-from-domain zipper_addr=localhost:9000 name=fn-get-ip-from-domain tag=16 +``` + +### Done, let's have a try + +```sh +$ curl -i http://127.0.0.1:9000/v1/chat/completions -H "Content-Type: application/json" -d '{ + "messages": [ + { + "role": "system", + "content": "You are a test assistant." + }, + { + "role": "user", + "content": "Compare website speed between Nike and Puma" + } + ], + "stream": false +}' + +HTTP/1.1 200 OK +Content-Length: 944 +Connection: keep-alive +Content-Type: application/json +Date: Wed, 29 Jan 2025 13:30:14 GMT +Keep-Alive: timeout=4 +Proxy-Connection: keep-alive + +{ + "Content": "Based on the data provided for the domains nike.com and puma.com which include IP addresses and average latencies, we can infer the following about their website speeds: + - Nike.com has an IP address of 13.225.183.84 with an average latency of 65.568333 milliseconds. 
+ - Puma.com has an IP address of 151.101.194.132 with an average latency of 54.563666 milliseconds.
+
+ Comparing these latencies, Puma.com is faster than Nike.com as it has a lower average latency.
+
+ Please be aware, however, that website speed can be influenced by many factors beyond latency, such as server processing time, content size, and delivery networks, among others. To get a more comprehensive understanding of website speed, you would need to consider additional metrics and possibly conduct real-time speed tests.",
+  "FinishReason": "stop"
+}
+```
+
+### Full Example Code
+
+[Full LLM Function Calling Codes](https://github.com/yomorun/llm-function-calling-examples)
+
+## 🎯 Focused on Geo-distributed AI Inference Infra
+
+It’s no secret that today’s users want instant AI inference; every AI
+application is more powerful when it responds quickly. But currently, when we
+talk about `distribution`, it usually means **distribution within a data center**. The AI model is
+far away from its users all over the world.
+
+If an application can be deployed anywhere close to its end users, that solves the
+problem; this is **Geo-distributed System Architecture**:
+
+yomo geo-distributed system