1. Introduction
Once we deploy a large model with vLLM, we can call LangChain's ChatOpenAI() interface to access it (see [1] for the details); this is also the basic interface usage underlying LangChain's Agents.
So the question is: how does this interface actually communicate with the model?
2. Packet capture
We can capture this traffic to find out:
- First, start Wireshark
- Then run the following Python code to communicate with the model
from langchain_openai import ChatOpenAI
from langchain.schema import HumanMessage

llm = ChatOpenAI(
    streaming=True,
    verbose=True,
    openai_api_key="none",
    openai_api_base='http://10.11.12.13:4000',
    model_name="aaa-gpt"
)
# invoke() is the current API; calling llm(...) directly is deprecated
output = llm.invoke([HumanMessage(content="你好")])
print(output.content)  # 你好!很高兴为你提供帮助。有什么问题或需要什么信息呢? ("Hello! Happy to help. What can I do for you?")
- Use Wireshark to filter for packets whose destination is the model server's IP
Filter: ip.dst eq 10.11.12.13 and tcp
You can see that the ChatOpenAI interface POSTs an HTTP request to /chat/completions on the target address and receives a reply.
To see exactly what was POSTed, right-click the packet and choose "Follow Stream". This shows:
(1) The request sent by the interface
POST /chat/completions HTTP/1.1
Host: 10.11.12.13:4000
Accept-Encoding: gzip, deflate, br
Connection: keep-alive
Accept: application/json
Content-Type: application/json
User-Agent: OpenAI/Python 1.51.2
X-Stainless-Lang: python
X-Stainless-Package-Version: 1.51.2
X-Stainless-OS: Windows
X-Stainless-Arch: other:amd64
X-Stainless-Runtime: CPython
X-Stainless-Runtime-Version: 3.12.7
Authorization: Bearer none
X-Stainless-Async: false
x-stainless-retry-count: 0
Content-Length: 122
{"messages": [{"content": "\u4f60\u597d", "role": "user"}], "model": "aa-gpt", "n": 1, "stream": true, "temperature": 0.7}
There are two key points here:
- \u4f60\u597d is the Unicode escape of the Chinese "你好" ("hello")
- In Authorization: Bearer none, the none is the openai_api_key from the Python code
So the openai_api_key is sent in the header of the POST request.
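The \u4f60\u597d escaping is not something the protocol requires; it is simply how Python's json module serializes non-ASCII text by default. A minimal sketch reproducing the captured body:

```python
import json

# json.dumps escapes non-ASCII characters by default (ensure_ascii=True),
# which is why the captured request body shows \u4f60\u597d rather than
# the literal text 你好.
body = json.dumps({"messages": [{"content": "你好", "role": "user"}]})
print(body)  # {"messages": [{"content": "\u4f60\u597d", "role": "user"}]}

# The escaped form decodes back to the original string.
assert json.loads(body)["messages"][0]["content"] == "你好"
```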
(2) The reply received from the model
HTTP/1.1 200 OK
date: Tue, 15 Oct 2024 06:39:28 GMT
server: uvicorn
x-litellm-call-id: 1111c2cf-011c-4111-b110-c31111113e11
x-litellm-model-id: 15111111da111111c2618be6ed1111113a15111111db0101111114c111111725
x-litellm-version: 1.43.4
x-litellm-key-tpm-limit: None
x-litellm-key-rpm-limit: None
llm_provider-date: Tue, 15 Oct 2024 06:39:28 GMT
llm_provider-server: uvicorn
llm_provider-content-type: text/event-stream; charset=utf-8
llm_provider-transfer-encoding: chunked
content-type: text/event-stream; charset=utf-8
transfer-encoding: chunked
c5
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......","role":"assistant"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
af
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"..."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b5
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"........."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
af
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"..."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b2
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b2
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
af
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"..."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b5
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"........."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b2
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b2
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b2
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b2
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"......"}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
af
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"..."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
af
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"..."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
af
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":"..."}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
ac
data: {"id":"chat-111111111111111111111111111","choices":[{"index":0,"delta":{"content":""}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
b7
data: {"id":"chat-111111111111111111111111111","choices":[{"finish_reason":"stop","index":0,"delta":{}}],"created":1728974369,"model":"tq-gpt","object":"chat.completion.chunk"}
e
data: [DONE]
0
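Because streaming=True was set, the server replies with Server-Sent Events (content-type: text/event-stream): each data: line carries one chat.completion.chunk, and the client concatenates the delta content fields in order until data: [DONE] arrives. A minimal sketch of that reassembly (the lines below are simplified stand-ins, since the actual content was redacted in the capture above):

```python
import json

# Simplified stand-ins for the captured SSE stream.
sse_lines = [
    'data: {"choices":[{"index":0,"delta":{"content":"你好","role":"assistant"}}]}',
    'data: {"choices":[{"index":0,"delta":{"content":"!"}}]}',
    'data: {"choices":[{"finish_reason":"stop","index":0,"delta":{}}]}',
    'data: [DONE]',
]

parts = []
for line in sse_lines:
    payload = line[len("data: "):]
    if payload == "[DONE]":  # end-of-stream sentinel
        break
    chunk = json.loads(payload)
    delta = chunk["choices"][0]["delta"]
    parts.append(delta.get("content", ""))  # the final chunk carries no content

print("".join(parts))  # 你好!
```

The hex lines interleaved in the capture (c5, af, b5, ...) are HTTP chunked transfer-encoding sizes, not part of the SSE payload; an HTTP client library strips them before the data: lines are parsed.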
3. Testing the POST request
Using the header configuration learned from the analysis above, we can write the equivalent POST request:
curl http://10.11.12.13:4000/chat/completions \
-H "Content-Type: application/json" \
-H "Authorization: Bearer none" \
-d '{"model": "aa-gpt","messages": [{"role":"user","content":"你好"}]}'
Running it on a Linux machine returns the same result as the interface call above.
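The same request can be built in Python without the OpenAI client. Below is a sketch using only the standard library; it constructs the request but does not send it, since sending requires the live endpoint from the capture above:

```python
import json
import urllib.request

# Reconstruct the captured POST with the standard library.
payload = {"model": "aa-gpt", "messages": [{"role": "user", "content": "你好"}]}
req = urllib.request.Request(
    "http://10.11.12.13:4000/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer none",  # the openai_api_key goes here
    },
)
print(req.get_method())                  # POST (inferred from the data argument)
print(req.get_header("Authorization"))   # Bearer none

# urllib.request.urlopen(req) would send it and return the JSON reply.
```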
4. Summary
With Wireshark packet capture, a destination-IP filter, and Follow Stream, we can see the full details of the POST request sent by LangChain's ChatOpenAI, and we can construct an equivalent POST request to reproduce the interface's communication.
References
- https://blog.csdn.net/ybdesire/article/details/140691972