Using Lite Cluster ModelArts Lite Cluster offers hosted Kubernetes clusters with pre-installed AI development and acceleration plug-ins. These elastic clusters allow you to access AI resources and tasks in a cloud-native environment.
GS_MODEL_WAREHOUSE GS_MODEL_WAREHOUSE stores AI engine training models, including the models themselves and a detailed description of the training process.
Options:
0: The Ascend AI processor is unhealthy.
1: The Ascend AI processor is healthy.
Labels:
container_name (String): a container name
id (String): an NPU ID
model_name (String): name of an Ascend AI processor
namespace (String): a namespace name
pcie_bus_info (String): PCIe information of an Ascend AI
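Metrics like this are typically scraped in Prometheus exposition format, one sample per line, with the labels above attached. The sketch below parses such a line; the metric name `npu_health_status` and the sample values are assumptions for illustration, not the exporter's actual output.

```python
import re

# Hypothetical sample in Prometheus exposition format; the label names
# (id, model_name, namespace, container_name, pcie_bus_info) mirror the
# fields described above, but the metric name and values are made up.
SAMPLE = ('npu_health_status{id="0",model_name="Ascend 910",'
          'namespace="default",container_name="train-0",'
          'pcie_bus_info="0000:81:00.0"} 1')

def parse_metric(line):
    """Split a Prometheus sample into (name, labels dict, value)."""
    m = re.match(r'(\w+)\{(.*)\}\s+(\S+)$', line)
    name, raw_labels, value = m.group(1), m.group(2), float(m.group(3))
    labels = dict(re.findall(r'(\w+)="([^"]*)"', raw_labels))
    return name, labels, value

name, labels, value = parse_metric(SAMPLE)
# Per the options above: 1 means healthy, 0 means unhealthy.
healthy = (value == 1)
```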
Introduction The EIServices module provides convenient APIs for you to quickly call various AI services on HUAWEI CLOUD. For details about the AI services, see the related service documentation. Currently, two types of APIs are provided: common APIs and encapsulated APIs.
Large AI model training acceleration: Three-level cache interworking for AI native storage accelerates data loading, model training, and fault backup and recovery.
TensorFlow (CPU/GPU)-powered Inference Base Images ModelArts provides the following inference base images powered by TensorFlow (CPU/GPU):
Engine Version 1: tensorflow_2.1.0-cuda_10.1-py_3.7-ubuntu_18.04-x86_64
Engine Version 2: tensorflow_1.15.5-cuda_11.4-py_3.8-ubuntu_20.04-x86_
PyTorch (CPU/GPU)-powered Inference Base Images ModelArts provides the following inference base images powered by PyTorch (CPU/GPU):
Engine Version 1: pytorch_1.8.0-cuda_10.2-py_3.7-ubuntu_18.04-x86_64
Engine Version 2: pytorch_1.8.2-cuda_11.1-py_3.7-ubuntu_18.04-x86_64
Engine Version
MindSpore (CPU/GPU)-powered Inference Base Images ModelArts provides the following inference base images powered by MindSpore (CPU/GPU):
Engine Version 1: mindspore_1.7.0-cpu-py_3.7-ubuntu_18.04-x86_64
Engine Version 2: mindspore_1.7.0-cuda_10.1-py_3.7-ubuntu_18.04-x86_64
Engine Version
Uploading Data to a Notebook Instance Through PyCharm If the data is less than or equal to 500 MB, directly copy the data to the local IDE. If the data is larger than 500 MB, upload it to OBS and then download it to the notebook instance. Figure 1 Uploading data to a notebook instance
Uploading and Downloading Files in VS Code Uploading Data to a Notebook Instance Using VS Code If the data is less than or equal to 500 MB, directly copy the data to the local IDE. If the data is larger than 500 MB, upload it to OBS and then to the notebook instance. Procedure Upload
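The 500 MB rule used by both the PyCharm and VS Code workflows can be expressed as a small helper. This is a sketch of the decision logic only; the actual direct copy (via the IDE) or OBS transfer would be done by the tooling described above and is not shown here.

```python
import os

SIZE_LIMIT = 500 * 1024 * 1024  # the 500 MB threshold described above

def choose_upload_route(path):
    """Return 'direct' for files within the limit, 'obs' for larger files.

    'direct' = copy straight to the notebook instance through the IDE;
    'obs'    = upload to OBS first, then download to the notebook instance.
    """
    size = os.path.getsize(path)
    return "direct" if size <= SIZE_LIMIT else "obs"
```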
Cloning GitHub Open-Source Repository Files to JupyterLab Files can be cloned from a GitHub open-source repository to JupyterLab. Use JupyterLab to open a running notebook instance. Click in the navigation bar on the top of the JupyterLab window. In the displayed dialog box, click
The AI platform consulting and planning service cannot be changed. Parent topic: About Service Purchase
PyCharm Toolkit Many AI developers use PyCharm to develop algorithms or models. ModelArts therefore provides PyCharm Toolkit to help AI developers quickly submit locally developed code to a training environment on ModelArts.
Creating a Hyperparameter Search Job Background If the AI engine is pytorch_1.8.0-cuda_10.2-py_3.7-ubuntu_18.04-x86_64 or tensorflow_2.1.0-cuda_10.1-py_3.7-ubuntu_18.04-x86_64 and the hyperparameter to be optimized is of the float type, you can use hyperparameter search on ModelArts
AI assets include but are not limited to texts, graphics, data, articles, photos, images, illustrations, code, AI algorithms, and AI models.
Creating a Custom Image for a Model If you have developed a model using an AI engine that ModelArts does not support, you can still use it to create AI applications: build a custom image containing the engine, import the image to ModelArts, and use it to create models.
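A custom image of this kind typically wraps the model behind a small HTTP inference service. The sketch below uses only the Python standard library; the route-free POST handler, the JSON request shape (`{"inputs": [...]}`), and port 8080 are assumptions for illustration, not ModelArts' actual custom-image contract, which is defined in its specification.

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def predict(inputs):
    """Stand-in for real model inference: double every input value."""
    return [2 * x for x in inputs]

class Handler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body and run inference on the "inputs" field.
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        inputs = json.loads(body)["inputs"]
        result = json.dumps({"outputs": predict(inputs)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

# To serve (blocking call, shown as a comment so the sketch stays importable):
# HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```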
All these functions are implemented by the AI Core. Static AIPP and dynamic AIPP modes are supported. However, the two modes are mutually exclusive. Static AIPP: During model conversion, set the AIPP mode to static and set the AIPP parameters.