Commit a3791ba: Create tutorial README.md (`examples/CoreML/ONNXLive/README.md`)
# ONNXLive Tutorial

Convert style transfer models from PyTorch to CoreML.

ONNX is a neural network exchange format: models can be exported from and imported into many deep learning frameworks. In this tutorial, we will use it to convert PyTorch style transfer models to CoreML and run them on an iPhone, on the live stream from the camera.

## Overview

This tutorial goes through 4 steps:

1. [Download (or train) PyTorch style transfer models](#download-or-train-pytorch-style-transfer-models)
2. [Convert the PyTorch models to ONNX models](#convert-the-pytorch-models-to-onnx-models)
3. [Convert the ONNX models to CoreML models](#convert-the-onnx-models-to-coreml-models)
4. [Run the CoreML models in a style transfer iOS App](#run-the-coreml-models-in-a-style-transfer-ios-app)

## Preparations

We'll work in a virtualenv so we don't interfere with your locally installed packages. We use Python 3.6 for this tutorial, but other versions should work too.

```shell
python3.6 -m venv venv
source ./venv/bin/activate
```

You need to install PyTorch (installing `torchvision` pulls it in as a dependency):

```shell
pip install torchvision
```

and the ONNX-to-CoreML converter:

```shell
git clone https://github.com/onnx/onnx-coreml
cd onnx-coreml
git submodule update --recursive --init
./install.sh
cd ..
```

You'll also need to install Xcode if you want to run the iOS style transfer app on your iPhone. Converting the models works on Linux too, but to run the iOS app you need a Mac.
## Download (or train) PyTorch style transfer models

For this tutorial, we use the style transfer models that are published with PyTorch at https://github.com/pytorch/examples/tree/master/fast_neural_style. If you want to use different PyTorch or ONNX models, feel free to skip this step.

These models are meant for style transfer on still images and are not optimized to be fast enough for video. However, if we reduce the resolution enough, they work well on videos too.

Let's download the models:

```shell
git clone https://github.com/pytorch/examples
cd examples/fast_neural_style
```

If you want to train the models yourself, the pytorch/examples repository we just cloned has more information on how to do that. For now, we'll just download pretrained models with the script provided by the repository:

```shell
./download_saved_models.sh
```

This script downloads the pretrained PyTorch models and puts them into the `saved_models` folder. There should now be 4 files: `candy.pth`, `mosaic.pth`, `rain_princess.pth` and `udnie.pth`.

## Convert the PyTorch models to ONNX models

We now have the pretrained PyTorch models as `.pth` files in the `saved_models` folder and want to convert them to ONNX. The model definition lives in the pytorch/examples repository we cloned before, and with a few lines of Python we can export it. Instead of running the neural net, we call `torch.onnx._export`, which is provided with PyTorch. However, in this case we don't even need to do that ourselves, because the script `neural_style/neural_style.py` already does it for us. You can take a look at that script if you want to apply it to different models.

Since the PyTorch-to-ONNX export is based on tracing your neural network, it will internally run the network. For this, it needs an input image to apply the style transfer to; this can simply be a blank image. However, the pixel size of this image matters, because it becomes the fixed input size of the exported style transfer model. To get good performance, we'll use a resolution of 250x540. Feel free to pick a larger resolution if you care less about FPS and more about style transfer quality.

Let's use ImageMagick to create a blank image of the resolution we want:

```shell
convert -size 250x540 xc:white png24:dummy.jpg
```
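If you don't have ImageMagick's `convert` available, a Pillow one-liner produces an equivalent blank image (a sketch assuming the `Pillow` package is installed; the `dummy.jpg` name and 250x540 size are the ones used above):

```python
from PIL import Image

# Create a blank white image for the export to trace with.
# Image.new takes the size as (width, height).
Image.new("RGB", (250, 540), "white").save("dummy.jpg")
```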

and use that to export the PyTorch models:

```shell
python ./neural_style/neural_style.py eval --content-image dummy.jpg --output-image dummy-out.jpg --model ./saved_models/candy.pth --cuda 0 --export_onnx ./saved_models/candy.onnx
python ./neural_style/neural_style.py eval --content-image dummy.jpg --output-image dummy-out.jpg --model ./saved_models/udnie.pth --cuda 0 --export_onnx ./saved_models/udnie.onnx
python ./neural_style/neural_style.py eval --content-image dummy.jpg --output-image dummy-out.jpg --model ./saved_models/rain_princess.pth --cuda 0 --export_onnx ./saved_models/rain_princess.onnx
python ./neural_style/neural_style.py eval --content-image dummy.jpg --output-image dummy-out.jpg --model ./saved_models/mosaic.pth --cuda 0 --export_onnx ./saved_models/mosaic.onnx
```

You should end up with 4 files, `candy.onnx`, `mosaic.onnx`, `rain_princess.onnx` and `udnie.onnx`, created from the corresponding `.pth` files.

## Convert the ONNX models to CoreML models

Now that we have the ONNX models, we can convert them to CoreML models in order to run them on iOS devices. For this, we use the onnx-coreml converter we installed previously. The converter comes with a `convert-onnx-to-coreml` script, which the installation steps above added to our path. Unfortunately, that script won't work for us, because we need to mark the input and output of the network as an image, and while the converter supports this, it only supports it when called from Python.

Looking at the style transfer model (for example, by opening the `.onnx` file in an application like [Netron](https://github.com/lutzroeder/Netron)), we see that the input is named '0' and the output is named '186'. These are just numeric IDs assigned by PyTorch, and they are the ones we need to mark as images.

So let's write a small Python file `onnx_to_coreml.py`:

```python
import sys
from onnx import onnx_pb2
from onnx_coreml import convert

model_in = sys.argv[1]
model_out = sys.argv[2]

# Parse the ONNX model and convert it, marking input '0' and
# output '186' (the names we found above) as images.
with open(model_in, 'rb') as model_file:
    model_proto = onnx_pb2.ModelProto()
    model_proto.ParseFromString(model_file.read())
coreml_model = convert(model_proto, image_input_names=['0'], image_output_names=['186'])
coreml_model.save(model_out)
```

and run it:

```shell
python onnx_to_coreml.py ./saved_models/candy.onnx ./saved_models/candy.mlmodel
python onnx_to_coreml.py ./saved_models/udnie.onnx ./saved_models/udnie.mlmodel
python onnx_to_coreml.py ./saved_models/rain_princess.onnx ./saved_models/rain_princess.mlmodel
python onnx_to_coreml.py ./saved_models/mosaic.onnx ./saved_models/mosaic.mlmodel
```

There should now be 4 CoreML models in your `saved_models` directory: `candy.mlmodel`, `mosaic.mlmodel`, `rain_princess.mlmodel` and `udnie.mlmodel`.

## Run the CoreML models in a style transfer iOS App

This repository (i.e. the one whose README you're currently reading) contains an iOS app that can run CoreML style transfer models on a live stream from your phone's camera. Let's clone the repository:

```shell
git clone https://github.com/onnx/tutorials
```

and open the `tutorials/examples/CoreML/ONNXLive/ONNXLive.xcodeproj` project in Xcode. We recommend using Xcode 9.3 and an iPhone X; there might be issues running on older devices or Xcode versions.

The project contains some `.mlmodel` files in its `Models/` folder. Replace them with the models we just created, then run the app on your iPhone and you're all set. Tapping on the screen switches through the models.

## Conclusion

We hope this tutorial gave you an overview of what ONNX is about and how you can use it to convert neural networks between frameworks, in this case a few style transfer models from PyTorch to CoreML.

Feel free to experiment with these steps and test them on your own models. Please let us know if you hit any issues or want to give feedback; we'd like to hear what you think.
