Figure 1 Selecting an AI engine
After the file is created, the JupyterLab page is displayed by default.
Figure 2 Coding page
Calling mox.file: enter the following code to implement simple functions such as importing the MoXing Framework.
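For example, a minimal sketch of importing MoXing in a notebook cell and calling a few mox.file utilities (the OBS bucket path and local directory below are placeholders, not values from the original):

```python
# Minimal sketch: import the MoXing Framework and call mox.file utilities.
# The OBS path and local directory are placeholders; replace them with your own.
import moxing as mox

obs_dir = "obs://my-bucket/data/"            # hypothetical OBS directory

print(mox.file.exists(obs_dir))              # check whether the OBS path exists
print(mox.file.list_directory(obs_dir))      # list objects under the directory

# Copy a directory between OBS and local notebook storage in parallel.
mox.file.copy_parallel(obs_dir, "/home/ma-user/work/data")
```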
EIHealth Powered by the AI and big data technologies of HUAWEI CLOUD, EIHealth provides a professional AI R&D platform to accelerate AI research and applications in genomics, drug discovery, and medical imaging.
development pipeline to empower local enterprises to become AI pioneers in their own industry.
Built on a robust foundation of cloud and AI, the architecture establishes secure and reliable platforms for cloud infrastructure, data, AI, and digital identity.
Configuring SFS Turbo and OBS Interworking SFS Turbo HPC file systems can access objects stored in OBS buckets seamlessly. You can specify an SFS Turbo interworking directory and associate it with an OBS bucket. Log in to the SFS console. In the left navigation pane, choose SFS Turbo
Autocompletion for ma-cli Commands CLI autocompletion enables you to get a list of supported ma-cli commands by typing a command prefix and pressing Tab in your terminal. Autocompletion for ma-cli commands must be enabled in the terminal. After running the ma-cli auto-completion command
Configuring Workflow Parameters Description A workflow parameter is a placeholder object that can be configured when the workflow runs. The following data types are supported: int, str, bool, float, Enum, dict, and list. You can display fields (such as algorithm hyperparameters) in the workflow so that users can configure them when the workflow runs.
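As an illustration, a minimal sketch of declaring such parameters, assuming the ModelArts workflow SDK is imported as modelarts.workflow (the parameter names, defaults, and descriptions are made up for this example):

```python
# Minimal sketch: declare workflow parameters as placeholder objects.
# Names, defaults, and descriptions below are illustrative only.
from modelarts import workflow as wf

# An int hyperparameter shown to the user when the workflow runs.
epochs = wf.Placeholder(
    name="epochs",
    placeholder_type=wf.PlaceholderType.INT,
    default=10,
    description="Number of training epochs",
)

# A float hyperparameter with a default value.
learning_rate = wf.Placeholder(
    name="learning_rate",
    placeholder_type=wf.PlaceholderType.FLOAT,
    default=0.001,
    description="Initial learning rate",
)
```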
Creating Workflow Phases
Creating a Dataset Phase
Creating a Dataset Labeling Phase
Creating a Dataset Import Phase
Creating a Dataset Release Phase
Creating a Training Job Phase
Creating a Model Registration Phase
Creating a Service Deployment Phase
Parent topic: Workflow Development
Creating a Multi-Branch Workflow
Multi-Branch Workflow
Creating a Condition Phase to Control Branch Execution
Configuring Phase Parameters to Control Branch Execution
Configuring Multi-Branch Phase Data
Parent topic: Workflow Development Command Reference
Advanced Workflow Capabilities
Using Big Data Capabilities (MRS) in a Workflow
Specifying Certain Phases to Run in a Workflow
Parent topic: Workflow Development Command Reference
Introduction to Inference After an AI model is developed, you can use it to create an AI application and quickly deploy the application as an inference service. The AI inference capabilities can be integrated into your IT platform by calling APIs.
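For instance, a hedged sketch of calling a deployed real-time inference service over REST (the endpoint URL, IAM token, and request payload are placeholders, not values from the original):

```python
# Minimal sketch: call a deployed inference service via its REST API.
# The endpoint URL, token, and payload below are placeholders.
import requests

endpoint = "https://<inference-endpoint>/v1/infers/<service-id>"  # hypothetical
headers = {
    "X-Auth-Token": "<IAM token>",       # Huawei Cloud IAM token for authentication
    "Content-Type": "application/json",
}
payload = {"data": [[0.1, 0.2, 0.3]]}    # example input; shape depends on the model

resp = requests.post(endpoint, json=payload, headers=headers, timeout=30)
resp.raise_for_status()
print(resp.json())                        # prediction returned by the service
```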
AI application users Typical AI application users include AI software integrators, hardware vendors, AI deployment personnel, and AI O&M personnel.
GalleryModel: defines a model subscribed from AI Gallery. This object is used for model registration.
Placeholder data objects, which are specified when a workflow is running:
DatasetPlaceholder: defines datasets to be specified when a workflow is running.
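As an illustration, a hedged sketch of declaring a dataset placeholder (the import path, the wf.data namespace, and the name argument are assumptions based on the workflow SDK; the dataset name is made up):

```python
# Minimal sketch: declare a dataset placeholder to be filled in at run time.
# The import path and argument names are assumptions for illustration.
from modelarts import workflow as wf

# The concrete dataset is chosen by the user when the workflow runs.
input_dataset = wf.data.DatasetPlaceholder(name="input_dataset")
```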
Cost Management
Cost Composition
ModelArts provides the AI tool chain and AI compute power. The cost consists of the resource cost and the O&M cost of AI compute power.
Cost Allocation
ModelArts supports enterprise project management.
Besides simplifying AI application development, ModelArts also slashes costs and lowers the barrier to entry for AI application developers.
At 00:00 on December 31, 2025 (GMT+08:00), Huawei Cloud will take two modules of the ModelArts AI development platform offline.
Parent topic: Using FunctionGraph to Deploy Stable Diffusion for AI Drawing
Configuring the SFS Turbo Data Eviction Policy After an OBS bucket is added as the storage backend of an SFS Turbo HPC file system, you are advised to configure a cold data eviction duration. Once configured, SFS Turbo automatically deletes files that have not been accessed within the configured duration.
Uploading Data to OBS and Preloading the Data to SFS Turbo
Uploading Data to OBS
An OBS bucket has been created by referring to Creating a Bucket. obsutil has been installed by referring to Downloading and Installing obsutil. Visit the ImageNet official website at http://image-net.org
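Because the snippet is cut off before the upload commands, here is a hedged alternative sketch using the OBS Python SDK rather than the obsutil CLI mentioned above (the bucket name, object key, local path, and environment variable names are placeholders):

```python
# Hedged sketch: upload a local file to OBS with the OBS Python SDK,
# as an alternative to the obsutil CLI. All names below are placeholders.
import os
from obs import ObsClient

client = ObsClient(
    access_key_id=os.environ["OBS_AK"],        # hypothetical env var names
    secret_access_key=os.environ["OBS_SK"],
    server="https://obs.<region>.myhuaweicloud.com",
)

resp = client.putFile("my-imagenet-bucket", "imagenet/train.tar", "/data/train.tar")
if resp.status < 300:
    print("Upload succeeded")
else:
    print("Upload failed:", resp.errorMessage)

client.close()
```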
Notebook Cache Directory Alarm Reporting When creating a notebook instance, you can select CPU, GPU, or Ascend resources based on the service data volume. If you select GPU or Ascend resources, ModelArts mounts hard disks to the cache directory. You can use this directory to store temporary data.
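To check how full the cache directory is before an alarm is raised, a minimal sketch (the /cache path and the 90% threshold are assumptions for illustration, not values from the original):

```python
# Minimal sketch: check usage of the notebook cache directory.
# The /cache path and the 90% threshold are assumptions for illustration.
import shutil

usage = shutil.disk_usage("/cache")
used_ratio = usage.used / usage.total
print(f"cache usage: {used_ratio:.1%} of {usage.total / 1024**3:.1f} GiB")

if used_ratio > 0.9:
    print("Warning: cache directory is nearly full; clean up temporary files.")
```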