
Commit 8b37c16

Remove OnnxRuntime BERT inference notebooks (#183)
* update pytorch bert tutorial
* update test tool
* update with GPU run
* Remove OnnxRuntime inference tutorials. The latest tutorials can be found in https://github.com/microsoft/onnxruntime/tree/master/onnxruntime/python/tools/bert/notebooks
* python version
* remove onnxruntime Azure ML notebook
* update links
1 parent d64a6da commit 8b37c16

4 files changed: 4 additions & 1206 deletions

README.md

4 additions & 4 deletions
@@ -66,9 +66,9 @@ Once you have an ONNX model, it can be scored with a variety of tools.
 * [Serving PyTorch Models on AWS Lambda with Caffe2 & ONNX](https://machinelearnings.co/serving-pytorch-models-on-aws-lambda-with-caffe2-onnx-7b096806cfac)
 * [MXNet to ONNX to ML.NET with SageMaker, ECS and ECR](https://cosminsanda.com/posts/mxnet-to-onnx-to-ml.net-with-sagemaker-ecs-and-ecr/) - external link
 * [Convert CoreML YOLO model to ONNX, score with ONNX Runtime, and deploy in Azure](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/deployment/onnx/onnx-convert-aml-deploy-tinyyolo.ipynb)
-* [Inference PyTorch Bert Model for High Performance in ONNX Runtime](tutorials/Inference-PyTorch-Bert-Model-for-High-Performance-in-ONNX-Runtime.ipynb)
-* [Inference TensorFlow Bert Model for High Performance in ONNX Runtime](tutorials/Inference-TensorFlow-Bert-Model-for-High-Performance-in-ONNX-Runtime.ipynb)
-* [Inference Bert Model for High Performance with ONNX Runtime on AzureML](tutorials/Inference-Bert-Model-for-High-Performance-with-ONNX-Runtime-on-AzureML.ipynb)
+* [Inference PyTorch Bert Model for High Performance in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/bert/notebooks/PyTorch_Bert-Squad_OnnxRuntime_GPU.ipynb)
+* [Inference TensorFlow Bert Model for High Performance in ONNX Runtime](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/bert/notebooks/Tensorflow_Keras_Bert-Squad_OnnxRuntime_CPU.ipynb)
+* [Inference Bert Model for High Performance with ONNX Runtime on AzureML](https://github.com/microsoft/onnxruntime/blob/master/onnxruntime/python/tools/bert/notebooks/Inference_Bert_with_OnnxRuntime_on_AzureML.ipynb)
 * [Various Samples: Inferencing ONNX models using ONNX Runtime (Python, C#, C, Java, etc)](https://github.com/microsoft/onnxruntime/tree/master/samples)
 
 ### Serving
@@ -86,7 +86,7 @@ Once you have an ONNX model, it can be scored with a variety of tools.
 
 ### ONNX Custom Operators
 * [How to export Pytorch model with custom op to ONNX and run it in ONNX Runtime](PyTorchCustomOperator/README.md)
-
+
 ## Other ONNX tools
 
 * [Verifying correctness and comparing performance](tutorials/CorrectnessVerificationAndPerformanceComparison.ipynb)
