Jetson TX2 TensorFlow/TensorRT Workflow

Posted 2018-5-29 17:58:38
Here is my understanding of the steps involved:
1. Train the network in TensorFlow and then freeze the network into a .pb file. This can be done on any machine.
2. Convert the .pb file to a UFF file using the convert-to-uff.py script. My understanding is that the resulting UFF file is still platform independent, so this step can be done on any machine.
3. From the UFF file, build a TensorRT engine. Based on my reading of the documentation, this step must be done on the TX2 itself, since the resulting serialized engine (I want to save it to a file so I can reuse that file for inference) is platform-specific. Also, because this step must be done on the TX2 and the new Python API isn't supported on ARM, it means writing some C++ code to handle this step.
4. Do inference using the engine created in step #3. Once again, this means some C++ running on the TX2, although it should be as easy as loading the engine and then calling the appropriate inference function.
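Step 3 can be sketched in C++ roughly as follows. This is a minimal sketch, assuming the TensorRT 3.x-era C++ API as bundled with JetPack at the time; the tensor names ("input", "output"), the input dimensions, the file names, and the workspace size are all hypothetical placeholders, not values from the post:

```cpp
// Sketch: build a TensorRT engine from a UFF file on the TX2 and
// serialize it to disk. Assumes TensorRT 3.x (NvInfer.h, NvUffParser.h).
// Tensor names, dims, and file names below are placeholders.
#include <fstream>
#include <iostream>
#include "NvInfer.h"
#include "NvUffParser.h"

using namespace nvinfer1;
using namespace nvuffparser;

// TensorRT requires the caller to supply a logger implementation.
class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    IBuilder* builder = createInferBuilder(gLogger);
    INetworkDefinition* network = builder->createNetwork();
    IUffParser* parser = createUffParser();

    // Hypothetical tensor names and shape -- use your graph's own.
    parser->registerInput("input", DimsCHW(3, 224, 224));
    parser->registerOutput("output");
    if (!parser->parse("model.uff", *network, DataType::kFLOAT)) {
        std::cerr << "UFF parse failed" << std::endl;
        return 1;
    }

    builder->setMaxBatchSize(1);
    builder->setMaxWorkspaceSize(1 << 28);  // 256 MB scratch space

    // The engine is optimized for (and tied to) this specific GPU,
    // which is why this step has to run on the TX2 itself.
    ICudaEngine* engine = builder->buildCudaEngine(*network);

    // Serialize once so later runs can skip the slow build step.
    IHostMemory* plan = engine->serialize();
    std::ofstream out("engine.plan", std::ios::binary);
    out.write(static_cast<const char*>(plan->data()), plan->size());

    plan->destroy();
    engine->destroy();
    parser->destroy();
    network->destroy();
    builder->destroy();
    return 0;
}
```

Since building the engine is by far the slowest part, serializing it to a plan file as above is what makes the "build once, infer many times" pattern in the steps work.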
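Step 4 is then just deserialization plus an execute call. Again a hedged sketch assuming TensorRT 3.x and the CUDA runtime on the TX2; the buffer sizes (3x224x224 input, 1000-class output) and file name are placeholder assumptions:

```cpp
// Sketch: load a previously serialized TensorRT engine and run
// inference. Assumes TensorRT 3.x and CUDA; sizes are placeholders.
#include <fstream>
#include <iostream>
#include <iterator>
#include <vector>
#include <cuda_runtime_api.h>
#include "NvInfer.h"

using namespace nvinfer1;

class Logger : public ILogger {
    void log(Severity severity, const char* msg) override {
        if (severity != Severity::kINFO) std::cerr << msg << std::endl;
    }
} gLogger;

int main() {
    // Read the whole serialized engine into memory.
    std::ifstream in("engine.plan", std::ios::binary);
    std::vector<char> plan((std::istreambuf_iterator<char>(in)),
                           std::istreambuf_iterator<char>());

    IRuntime* runtime = createInferRuntime(gLogger);
    ICudaEngine* engine =
        runtime->deserializeCudaEngine(plan.data(), plan.size(), nullptr);
    IExecutionContext* context = engine->createExecutionContext();

    // Placeholder sizes: 3x224x224 float input, 1000-class float output.
    // In real code, look up indices with engine->getBindingIndex(name)
    // rather than assuming input=0, output=1.
    const size_t inBytes = 3 * 224 * 224 * sizeof(float);
    const size_t outBytes = 1000 * sizeof(float);
    void* buffers[2];
    cudaMalloc(&buffers[0], inBytes);
    cudaMalloc(&buffers[1], outBytes);

    std::vector<float> input(3 * 224 * 224);  // fill with real image data
    std::vector<float> output(1000);
    cudaMemcpy(buffers[0], input.data(), inBytes, cudaMemcpyHostToDevice);
    context->execute(/*batchSize=*/1, buffers);
    cudaMemcpy(output.data(), buffers[1], outBytes, cudaMemcpyDeviceToHost);

    cudaFree(buffers[0]);
    cudaFree(buffers[1]);
    context->destroy();
    engine->destroy();
    runtime->destroy();
    return 0;
}
```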
