The server configuration is as follows:

CPU/NPU: Kunpeng CPU (ARM64), A300I Pro inference card
OS: Kylin V10 SP1 [download link] [installation link]
Driver and firmware versions:
  Ascend-hdk-310p-npu-driver_23.0.1_linux-aarch64.run [download link]
  Ascend-hdk-310p-npu-firmware_7.1.0.4.220.run [download link]
MCU version: Ascend-hdk-310p-mcu_23.2.3 [download link]
CANN development kit version: 7.0.1 [Toolkit download link] [Kernels download link]
The OM model test environment is as follows:

Python version: 3.8.11
Inference tool: ais_bench
Image classification models tested:
1. ShuffleNetV2
2. DenseNet
3. EfficientNet
4. MobileNetV2
5. MobileNetV3
6. ResNet
7. SE-ResNet
8. Vision Transformer
9. Swin Transformer
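As a quick sanity check that ais_bench is usable on the NPU, a minimal smoke test along the following lines can be run once the wheels described later in this post are installed. This is only a sketch: the model path and the 1x3x224x224 input shape are placeholders, and InferSession is the interface exposed by the ais_bench package.

# Minimal ais_bench smoke test: load an OM model and run one dummy inference.
import numpy as np
from ais_bench.infer.interface import InferSession

session = InferSession(device_id=0, model_path="weights/swin_tiny.om")  # placeholder path
dummy = np.random.rand(1, 3, 224, 224).astype(np.float32)               # fake NCHW batch
outputs = session.infer([dummy])                                        # list of output arrays
print(outputs[0].shape)                                                 # e.g. (1, 1000) for ImageNet logits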
Other articles in this column:
Atlas 800 Ascend server (model 3000): driver and firmware installation (Part 1)
Atlas 800 Ascend server (model 3000): CANN installation (Part 2)
Atlas 800 Ascend server (model 3000): converting and testing the full YOLO series as OM models (Part 3)
Atlas 800 Ascend server (model 3000): AIPP-accelerated preprocessing (Part 4)
Atlas 800 Ascend server (model 3000): YOLO series NPU inference, detection (Part 5)
Atlas 800 Ascend server (model 3000): YOLO series NPU inference, instance segmentation (Part 6)
Atlas 800 Ascend server (model 3000): YOLO series NPU inference, keypoints (Part 7)
Atlas 800 Ascend server (model 3000): YOLO series NPU inference, tracking (Part 8)
Atlas 800 Ascend server (model 3000): Swin Transformer and other NPU inference, image classification (Part 9)
1 Installing Docker
# 1. Install Docker
yum install -y docker
# 2. Start the Docker service
systemctl start docker
# 3. If version information is printed, the installation succeeded
docker version

2 Packaging the Project into an Image
(1) Enter the folder to be packaged. Besides the project files, it contains a Software_Back directory holding the ais_bench/aclruntime wheels and the CANN toolkit and kernels packages referenced in the Dockerfile below.
(2) Export requirements.txt (remove the ais_bench-related entries here):
pip list --format=freeze > requirements.txt

(3) Create the Dockerfile; its contents are as follows:

FROM docker.wuxs.icu/library/python:3.8.11

RUN > /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/debian stable main contrib non-free" >> /etc/apt/sources.list \
    && echo "deb http://mirrors.aliyun.com/debian stable-updates main contrib non-free" >> /etc/apt/sources.list
RUN apt-get update
RUN pip install -U pip -i https://pypi.tuna.tsinghua.edu.cn/simple

COPY requirements.txt .
COPY Software_Back/aclruntime-0.0.2-cp38-cp38-linux_aarch64.whl /ais_bench/
COPY Software_Back/ais_bench-0.0.2-py3-none-any.whl /ais_bench/
RUN pip install -r requirements.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
RUN pip3 install /ais_bench/aclruntime-0.0.2-cp38-cp38-linux_aarch64.whl
RUN pip3 install /ais_bench/ais_bench-0.0.2-py3-none-any.whl
RUN apt-get install -y --fix-missing libgl1-mesa-glx
RUN rm requirements.txt /ais_bench/aclruntime-0.0.2-cp38-cp38-linux_aarch64.whl /ais_bench/ais_bench-0.0.2-py3-none-any.whl

COPY Software_Back/Ascend-cann-toolkit_7.0.1_linux-aarch64.run /CANN/
COPY Software_Back/Ascend-cann-kernels-310p_7.0.1_linux.run /CANN/

# Ascend-cann-toolkit
RUN chmod +x /CANN/Ascend-cann-toolkit_7.0.1_linux-aarch64.run \
    && /CANN/Ascend-cann-toolkit_7.0.1_linux-aarch64.run --install --install-for-all --quiet \
    && rm /CANN/Ascend-cann-toolkit_7.0.1_linux-aarch64.run

# Ascend-cann-kernels
RUN chmod +x /CANN/Ascend-cann-kernels-310p_7.0.1_linux.run \
    && /CANN/Ascend-cann-kernels-310p_7.0.1_linux.run --install --install-for-all --quiet \
    && rm /CANN/Ascend-cann-kernels-310p_7.0.1_linux.run

RUN useradd cls -m -u 1000 -d /home/cls
USER 1000
WORKDIR /home/cls

COPY images /home/cls/images
COPY results /home/cls/results
COPY weights /home/cls/weights
COPY imagenet_classes.txt /home/cls/imagenet_classes.txt
COPY om_infer.py /home/cls/om_infer.py

(4) Build the image, named images_classfication:0001.rc:
sudo docker build . -t images_classfication:0001.rc -f images_classfication.Dockerfile

(5) Check that the image exists:

sudo docker images

3 Starting the Container
Reference: mounting host directories into a container.
(1) Start the container and open an interactive terminal (the driver and related paths must be mapped in):
sudo docker run -p8080:8080 --user root --name custom_transformer_test --rm \
-it --network host \
--ipc=host \
--device=/dev/davinci0 \
--device=/dev/davinci_manager \
--device=/dev/devmm_svm \
--device=/dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/common:/usr/local/Ascend/driver/lib64/common \
-v /usr/local/Ascend/driver/lib64/driver:/usr/local/Ascend/driver/lib64/driver \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /etc/vnpu.cfg:/etc/vnpu.cfg \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
images_classfication:0001.rc /bin/bash

(2) Start the container and run the inference script directly (the driver and related paths must be mapped in):
sudo docker run -p8080:8080 --user root --name custom_transformer_test --rm \
-it --network host \
--ipc=host \
--device=/dev/davinci0 \
--device=/dev/davinci_manager \
--device=/dev/devmm_svm \
--device=/dev/hisi_hdc \
-v /usr/local/dcmi:/usr/local/dcmi \
-v /usr/local/bin/npu-smi:/usr/local/bin/npu-smi \
-v /usr/local/Ascend/driver/lib64/common:/usr/local/Ascend/driver/lib64/common \
-v /usr/local/Ascend/driver/lib64/driver:/usr/local/Ascend/driver/lib64/driver \
-v /etc/ascend_install.info:/etc/ascend_install.info \
-v /etc/vnpu.cfg:/etc/vnpu.cfg \
-v /usr/local/Ascend/driver/version.info:/usr/local/Ascend/driver/version.info \
images_classfication:0001.rc /bin/bash -c 'groupadd -g 1001 HwHiAiUser && \
  useradd -g HwHiAiUser -d /home/HwHiAiUser -m HwHiAiUser && \
  echo ok && \
  export LD_LIBRARY_PATH=/usr/local/Ascend/driver/lib64/common:/usr/local/Ascend/driver/lib64/driver:${LD_LIBRARY_PATH} && \
  source /usr/local/Ascend/ascend-toolkit/set_env.sh && \
  exec python om_infer.py --model_path /home/cls/weights/swin_tiny.om'
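The om_infer.py script itself is not reproduced in this post. Purely as an illustration, a stripped-down script in the spirit of the files copied into the image (weights/swin_tiny.om, images/, imagenet_classes.txt) might look like the sketch below; the preprocessing constants, the default image name, and the use of ais_bench's InferSession are assumptions rather than the author's actual code.

# Hypothetical minimal om_infer.py: classify one image with an OM model via ais_bench.
# The 224x224 resize and ImageNet mean/std are assumptions; they must match the model export.
import argparse
import cv2
import numpy as np
from ais_bench.infer.interface import InferSession

def preprocess(path):
    img = cv2.imread(path)                                        # BGR, HWC, uint8
    img = cv2.resize(img, (224, 224))
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB).astype(np.float32) / 255.0
    img = (img - [0.485, 0.456, 0.406]) / [0.229, 0.224, 0.225]   # ImageNet normalization
    return img.transpose(2, 0, 1)[None].astype(np.float32)        # NCHW, batch size 1

if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("--model_path", required=True)
    parser.add_argument("--image", default="images/test.jpg")     # placeholder file name
    args = parser.parse_args()

    labels = [line.strip() for line in open("imagenet_classes.txt")]
    session = InferSession(device_id=0, model_path=args.model_path)
    logits = session.infer([preprocess(args.image)])[0]           # first (and only) output tensor
    print("predicted class:", labels[int(np.argmax(logits))])

Run inside the container as python om_infer.py --model_path /home/cls/weights/swin_tiny.om, matching the command in the docker run line above.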