Commit d197f7d

heliqi and root authored: update serving images (#2387)
Co-authored-by: root <root@bjyz-sys-gpu-kongming2.bjyz.baidu.com>
1 parent 1314f32 commit d197f7d

2 files changed: 4 additions & 4 deletions

File tree

serving/README.md

Lines changed: 2 additions & 2 deletions
@@ -22,15 +22,15 @@ FastDeploy builds an end-to-end serving deployment based on [Triton Inference Se
 CPU images only support Paddle/ONNX models for serving deployment on CPUs, and supported inference backends include OpenVINO, Paddle Inference, and ONNX Runtime
 
 ```shell
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.4-cpu-only-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-cpu-only-21.10
 ```
 
 #### GPU Image
 
 GPU images support Paddle/ONNX models for serving deployment on GPU and CPU, and supported inference backends including OpenVINO, TensorRT, Paddle Inference, and ONNX Runtime
 
 ```
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.4-gpu-cuda11.4-trt8.5-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-gpu-cuda11.4-trt8.5-21.10
 ```
 
 Users can also compile the image by themselves according to their own needs, referring to the following documents:
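For readers updating an existing deployment, the new tags from this commit can be pulled directly. A minimal sketch (the image tags are taken verbatim from the diff above; the interactive-shell step is an assumption about how one might inspect the image, not part of the commit):

```shell
# Pull the updated CPU serving image (tag from this commit's diff)
docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-cpu-only-21.10

# Optionally open a shell inside the container to inspect its contents
# (assumes the image provides bash, which Triton-based images typically do)
docker run -it --rm registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-cpu-only-21.10 bash
```

Note that the GPU tag (`1.0.7-gpu-cuda11.4-trt8.5-21.10`) additionally requires a host with a compatible NVIDIA driver and the NVIDIA container runtime.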

serving/README_CN.md

Lines changed: 2 additions & 2 deletions
@@ -18,13 +18,13 @@ FastDeploy builds an end-to-end serving deployment based on [Triton Inference Se
 #### CPU Image
 CPU images only support Paddle/ONNX models for serving deployment on CPUs; supported inference backends include OpenVINO, Paddle Inference, and ONNX Runtime
 ```shell
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.4-cpu-only-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-cpu-only-21.10
 ```
 
 #### GPU Image
 GPU images support Paddle/ONNX models for serving deployment on GPU/CPU; supported inference backends include OpenVINO, TensorRT, Paddle Inference, and ONNX Runtime
 ```
-docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.4-gpu-cuda11.4-trt8.5-21.10
+docker pull registry.baidubce.com/paddlepaddle/fastdeploy:1.0.7-gpu-cuda11.4-trt8.5-21.10
 ```
 
 Users can also compile the image themselves according to their own needs, referring to the following documents
