mirror of https://github.com/deepseek-ai/awesome-deepseek-integration.git
synced 2025-04-04 19:14:02 +00:00
Merge a0bf35a12d into 5b8fad04a6
Commit 40d5c2f571
4 changed files with 270 additions and 0 deletions
@@ -191,9 +191,15 @@ English/[简体中文](https://github.com/deepseek-ai/awesome-deepseek-integrati
<table>
  <tr>
    <td> <img src="https://avatars.githubusercontent.com/u/182288589?s=200&v=4" alt="Icon" width="64" height="auto" /> </td>
    <td> <a href="https://github.com/DMontgomery40/deepseek-mcp-server/blob/main/README.md">DeepSeek MCP Server</a> </td>
    <td> Model Context Protocol server for DeepSeek's advanced language models. </td>
  </tr>
  <tr>
    <td> <img src="https://raw.githubusercontent.com/superagentxai/superagentX/refs/heads/master/docs/logo/icononly_transparent_nobuffer.png" alt="Icon" width="64" height="auto" /> </td>
    <td> <a href="https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/docs/superagentx/README.md">SuperAgentX</a> </td>
    <td> SuperAgentX: A Lightweight Open Source AI Framework Built for Autonomous Multi-Agent Applications with Artificial General Intelligence (AGI) Capabilities. </td>
  </tr>
  <tr>
    <td> <img src="https://panda.fans/_assets/favicons/apple-touch-icon.png" alt="Icon" width="64" height="auto" /> </td>
@@ -146,6 +146,11 @@
### AI Agent 框架
<table>
  <tr>
    <td> <img src="https://avatars.githubusercontent.com/u/182288589?s=200&v=4" alt="Icon" width="64" height="auto" /> </td>
    <td> <a href="https://github.com/DMontgomery40/deepseek-mcp-server/blob/main/README.md">DeepSeek MCP Server</a> </td>
    <td> 用于 DeepSeek 高级语言模型的 Model Context Protocol 服务器 </td>
  </tr>
  <tr>
    <td> <img src="https://panda.fans/_assets/favicons/apple-touch-icon.png" alt="Icon" width="64" height="auto" /> </td>
    <td> <a href="https://github.com/deepseek-ai/awesome-deepseek-integration/blob/main/docs/anda/README_cn.md">Anda</a> </td>
131 docs/model_context_protocol/README.md Normal file
@@ -0,0 +1,131 @@
# DeepSeek MCP Server

A Model Context Protocol (MCP) server for the DeepSeek API, allowing seamless integration of DeepSeek's powerful language models with MCP-compatible applications like Claude Desktop.

## *Anonymously* use the DeepSeek API -- only a proxy is seen on the other side

<a href="https://glama.ai/mcp/servers/asht4rqltn"><img width="380" height="200" src="https://glama.ai/mcp/servers/asht4rqltn/badge" alt="DeepSeek Server MCP server" /></a>
<a href="https://smithery.ai/server/@dmontgomery40/deepseek-mcp-server"><img alt="Smithery Badge" src="https://smithery.ai/badge/@dmontgomery40/deepseek-mcp-server"></a>
## Installation

### Installing via Smithery

To install DeepSeek MCP Server for Claude Desktop automatically via [Smithery](https://smithery.ai/server/@dmontgomery40/deepseek-mcp-server):

```bash
npx -y @smithery/cli install @dmontgomery40/deepseek-mcp-server --client claude
```

### Manual Installation

```bash
npm install -g deepseek-mcp-server
```

### Usage with Claude Desktop

Add this to your `claude_desktop_config.json`:

```json
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-mcp-server"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
```

## Features

> Note: The server intelligently handles natural language requests by mapping them to the appropriate configuration changes. You can also query the current settings and available models:

- User: "What models are available?"
- Response: Shows the list of available models and their capabilities via the models resource.
- User: "What configuration options do I have?"
- Response: Lists all available configuration options via the model-config resource.
- User: "What is the current temperature setting?"
- Response: Displays the current temperature setting.
- User: "Start a multi-turn conversation with the following settings: model: 'deepseek-chat', make it not too creative, and allow 8000 tokens."
- Response: *Starts a multi-turn conversation with the specified settings.*
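As a rough illustration of the kind of mapping involved, a minimal sketch might look like the following. The keyword rules, threshold values, and function name here are all hypothetical, not the server's actual code:

```python
import re

def parse_request(text: str) -> dict:
    """Map a free-form settings request to a config dict (illustrative rules only)."""
    config = {}
    # Explicit model mention, e.g. model: 'deepseek-chat'
    model = re.search(r"model:\s*'([\w-]+)'", text)
    if model:
        config["model"] = model.group(1)
    # "not too creative" interpreted as a low temperature (assumed value)
    if "not too creative" in text:
        config["temperature"] = 0.3
    # Token budget, e.g. "allow 8000 tokens"
    tokens = re.search(r"allow (\d+) tokens", text)
    if tokens:
        config["max_tokens"] = int(tokens.group(1))
    return config

request = ("Start a multi-turn conversation with the following settings: "
           "model: 'deepseek-chat', make it not too creative, and allow 8000 tokens.")
print(parse_request(request))
# → {'model': 'deepseek-chat', 'temperature': 0.3, 'max_tokens': 8000}
```

The real server presumably uses the language model itself rather than regexes, but the end result is the same: natural language in, structured configuration out.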

### Automatic model fallback if R1 is down

- If the primary model, R1 (called `deepseek-reasoner` in the server), is down, the server automatically retries with v3 (called `deepseek-chat` in the server).
> Note: You can switch back and forth at any time by simply saying "use `deepseek-reasoner`" or "use `deepseek-chat`" in your prompt.
- V3 is recommended for general-purpose use, while R1 is recommended for more technical and complex queries, primarily due to differences in speed and token usage.
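The fallback behaviour above can be sketched as follows. Here `complete` is a hypothetical callable standing in for the actual API call; the real server's internals may differ:

```python
def complete_with_fallback(prompt, complete, primary="deepseek-reasoner",
                           fallback="deepseek-chat"):
    """Try the primary model; on any failure, retry once with the fallback."""
    try:
        return complete(model=primary, prompt=prompt)
    except Exception:
        return complete(model=fallback, prompt=prompt)

# Stub that simulates R1 being down, for demonstration:
def stub(model, prompt):
    if model == "deepseek-reasoner":
        raise RuntimeError("R1 unavailable")
    return f"[{model}] {prompt}"

print(complete_with_fallback("hello", stub))
# → [deepseek-chat] hello
```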
### Resource discovery for available models and configurations

* Custom model selection
* Temperature control (0.0 - 2.0)
* Max tokens limit
* Top P sampling (0.0 - 1.0)
* Presence penalty (-2.0 - 2.0)
* Frequency penalty (-2.0 - 2.0)
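The ranges listed above can be enforced with a simple bounds check. This is an illustrative sketch, not the server's actual validation code:

```python
# Parameter ranges as documented above (temperature 0-2, top_p 0-1, penalties -2..2).
RANGES = {
    "temperature": (0.0, 2.0),
    "top_p": (0.0, 1.0),
    "presence_penalty": (-2.0, 2.0),
    "frequency_penalty": (-2.0, 2.0),
}

def validate(config: dict) -> dict:
    """Reject any known parameter whose value falls outside its documented range."""
    for key, value in config.items():
        if key in RANGES:
            lo, hi = RANGES[key]
            if not lo <= value <= hi:
                raise ValueError(f"{key}={value} outside [{lo}, {hi}]")
    return config

print(validate({"temperature": 0.7, "top_p": 0.9}))
# → {'temperature': 0.7, 'top_p': 0.9}
```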
## Enhanced Conversation Features

**Multi-turn conversation support:**

* Maintains complete message history and context across exchanges
* Preserves configuration settings throughout the conversation
* Handles complex dialogue flows and follow-up chains automatically

This feature is particularly valuable for two key use cases:

1. **Training & Fine-tuning:**
   Since DeepSeek is open source, many users are training their own versions. The multi-turn support provides properly formatted conversation data that's essential for training high-quality dialogue models.

2. **Complex Interactions:**
   For production use, this helps manage longer conversations where context is crucial:
   * Multi-step reasoning problems
   * Interactive troubleshooting sessions
   * Detailed technical discussions
   * Any scenario where context from earlier messages impacts later responses

The implementation handles all context management and message formatting behind the scenes, letting you focus on the actual interaction rather than the technical details of maintaining conversation state.
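The bookkeeping described above amounts to keeping an ordered message list alongside fixed settings. A minimal sketch using the common role/content message format (illustrative only; the class and method names are not the server's API):

```python
class Conversation:
    """Accumulates a multi-turn message history with preserved settings."""

    def __init__(self, model="deepseek-chat", **settings):
        self.model = model
        self.settings = settings   # preserved across every turn
        self.messages = []         # full history, in order

    def add_user(self, content):
        self.messages.append({"role": "user", "content": content})

    def add_assistant(self, content):
        self.messages.append({"role": "assistant", "content": content})

conv = Conversation(temperature=0.3, max_tokens=8000)
conv.add_user("What is MCP?")
conv.add_assistant("Model Context Protocol is a standard for connecting tools to models.")
conv.add_user("And how does this server use it?")
# Each new request would send conv.messages plus conv.settings to the API,
# so earlier turns keep influencing later responses.
print(len(conv.messages), conv.settings)
# → 3 {'temperature': 0.3, 'max_tokens': 8000}
```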
## Testing with MCP Inspector

You can test the server locally using the MCP Inspector tool:

1. Build the server:
```bash
npm run build
```

2. Run the server with MCP Inspector:
```bash
# Make sure to specify the full path to the built server
npx @modelcontextprotocol/inspector node ./build/index.js
```

The inspector will open in your browser and connect to the server via stdio transport. You can:

- View available tools
- Test chat completions with different parameters
- Debug server responses
- Monitor server performance

> Note: The server uses DeepSeek's R1 model (`deepseek-reasoner`) by default, which provides state-of-the-art performance for reasoning and general tasks.

## License

MIT
128 docs/model_context_protocol/README_cn.md Normal file
@@ -0,0 +1,128 @@
# DeepSeek MCP 服务器

这是一个适用于 DeepSeek API 的 Model Context Protocol (MCP) 服务器,可与 Claude Desktop 等兼容 MCP 的应用程序无缝集成,从而利用 DeepSeek 强大的语言模型。

## *匿名* 使用 DeepSeek API —— 另一端只会看到代理

<a href="https://glama.ai/mcp/servers/asht4rqltn"><img width="380" height="200" src="https://glama.ai/mcp/servers/asht4rqltn/badge" alt="DeepSeek MCP Server" /></a>
<a href="https://smithery.ai/server/@dmontgomery40/deepseek-mcp-server"><img alt="Smithery Badge" src="https://smithery.ai/badge/@dmontgomery40/deepseek-mcp-server"></a>
## 安装

### 通过 Smithery 安装

要使用 [Smithery](https://smithery.ai/server/@dmontgomery40/deepseek-mcp-server) 在 Claude Desktop 上自动安装 DeepSeek MCP Server,请执行以下命令:

```bash
npx -y @smithery/cli install @dmontgomery40/deepseek-mcp-server --client claude
```

### 手动安装

```bash
npm install -g deepseek-mcp-server
```

### 在 Claude Desktop 中使用

在你的 `claude_desktop_config.json` 中添加:

```json
{
  "mcpServers": {
    "deepseek": {
      "command": "npx",
      "args": [
        "-y",
        "deepseek-mcp-server"
      ],
      "env": {
        "DEEPSEEK_API_KEY": "your-api-key"
      }
    }
  }
}
```

## 功能简介

> 注意:该服务器能够智能地将自然语言请求映射到相应的配置更改。你也可以查询当前设置和可用模型:

- 用户:“有哪些可用的模型?”
- 响应:通过 models 资源列出可用模型及其功能。
- 用户:“我有哪些配置选项?”
- 响应:通过 model-config 资源列出所有可用的配置选项。
- 用户:“当前的温度(temperature)设置是多少?”
- 响应:显示当前温度设置。
- 用户:“开始一个多轮对话。使用如下设置:model: 'deepseek-chat',创意度不要太高,并且允许 8000 个 token。”
- 响应:使用指定设置启动一个多轮对话。

### 当 R1 出现故障时自动回退到其他模型

- 如果主模型 R1(服务器中称为 `deepseek-reasoner`)出现故障,服务器会自动改用 v3(服务器中称为 `deepseek-chat`)
- 你也可以随时切换,只需在提示中说“使用 `deepseek-reasoner`”或“使用 `deepseek-chat`”
- v3 更适用于通用场景;R1 更适用于较为复杂的技术性问题,两者的主要区别在于速度和 token 使用量

### 资源发现:可用的模型和配置

- 自定义模型选择
- 温度控制(0.0 - 2.0)
- 最大 token 限制
- Top P 采样(0.0 - 1.0)
- 存在惩罚(presence penalty)(-2.0 - 2.0)
- 频率惩罚(frequency penalty)(-2.0 - 2.0)

## 增强的对话功能

**多轮对话支持:**

- 在多轮交互过程中维护完整的消息历史和上下文
- 在对话过程中保留配置设置
- 自动处理复杂的对话逻辑和后续请求

这一功能在以下两个主要场景中特别有价值:

1. **训练 & 微调:**
   由于 DeepSeek 是开源的,很多用户正在训练自己的版本。多轮对话支持能够提供格式正确的对话数据,这对于训练高质量对话模型至关重要。

2. **复杂场景交互:**
   在生产环境中,这种功能有助于管理需要保留上下文的更长对话,例如:
   * 多步骤推理问题
   * 交互式故障排查
   * 详尽的技术讨论
   * 任何需要利用早期消息上下文来影响后续响应的场景

该功能在幕后自动处理所有上下文管理和消息格式,你只需关注对话本身,无需担心维护对话状态的技术细节。

## 使用 MCP Inspector 进行测试

你可以使用 MCP Inspector 工具在本地测试服务器:

1. 构建服务器:
```bash
npm run build
```

2. 使用 MCP Inspector 启动服务器:
```bash
npx @modelcontextprotocol/inspector node ./build/index.js
```

MCP Inspector 将在你的浏览器中打开,并通过 stdio 传输连接到该服务器。你可以:

- 查看可用工具
- 使用不同参数测试对话补全
- 调试服务器响应
- 监控服务器性能

> 注意:服务器默认使用 DeepSeek 的 R1 模型(`deepseek-reasoner`),它在推理和通用任务方面具有最先进的性能。

## 许可证

MIT