ModelArts AI Development Platform - Step 1: Build a New Image in Notebook: Dockerfile Template

Time: 2024-04-30 17:31:41

Dockerfile Template

The following sample can be saved directly as a Dockerfile and used as-is. For the base images available here, see the list of inference base images.
FROM swr.cn-north-4.myhuaweicloud.com/atelier/tensorflow_2_1:tensorflow_2.1.0-cuda_10.1-py_3.7-ubuntu_18.04-x86_64-20221121111529-d65d817

# Create a soft link from '/home/ma-user/anaconda/lib/python3.7/site-packages/model_service' to '/home/ma-user/infer/model_service'. This is the built-in inference framework code directory.
# If the Python version installed in the base image is Python 3.8, create the soft link from '/home/ma-user/anaconda/lib/python3.8/site-packages/model_service' instead.
USER root
RUN ln -s /home/ma-user/anaconda/lib/python3.7/site-packages/model_service  /home/ma-user/infer/model_service
USER ma-user

# A demo model is supplied here; replace it with your own model files
ADD model/  /home/ma-user/infer/model/1
USER root
RUN chown -R ma-user:ma-group  /home/ma-user/infer/model/1
USER ma-user

# The default MODELARTS_SSL_CLIENT_VERIFY switch is "true". For debugging, we set it to "false"
ENV MODELARTS_SSL_CLIENT_VERIFY="false"

# Change the port and protocol here; the defaults are 8443 and https
# ENV MODELARTS_SERVICE_PORT=8080
# ENV MODELARTS_SSL_ENABLED="false"

# Add pip install commands here to install extra Python packages
# RUN pip install numpy==1.16.4
# RUN pip install -r requirements.txt

# Default CMD; you can change it here
# CMD sh /home/ma-user/infer/run.sh
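After the Dockerfile is complete, the image is typically built locally and pushed to SWR before it can be used in ModelArts. The sketch below shows the SWR image naming convention and the corresponding build/push commands; the organization name, image name, and tag are hypothetical placeholders, so substitute your own values before running.

```shell
# Compose the full SWR image reference. ORG, IMAGE, and TAG are
# hypothetical placeholders -- replace them with your own values.
REGION="cn-north-4"   # same region as the base image above
ORG="my-org"          # your SWR organization name
IMAGE="tf-infer"      # your image name
TAG="v1"              # your image tag

SWR_IMAGE="swr.${REGION}.myhuaweicloud.com/${ORG}/${IMAGE}:${TAG}"

# The build and push commands to run in the directory containing the Dockerfile:
echo "docker build -t ${SWR_IMAGE} ."
echo "docker push ${SWR_IMAGE}"
```

Note that pushing to SWR also requires a prior `docker login` to the SWR endpoint with credentials obtained from the SWR console.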