Step 1: Making preparations. Before using ModelArts Studio, create the required resources, such as OBS buckets and resource pools.
The OBS directory you use and ModelArts must be in the same region.

Creating a Visualization Job
Log in to the ModelArts management console. In the left navigation pane, choose Training Jobs. On the displayed page, click the Visualization Jobs tab.
By default, this parameter is left blank.
inputs (optional, String): OBS storage path of a training job.
dataset_id (optional, String): Dataset ID of a training job.
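As a rough illustration of how the two fields above might appear in a job configuration, the sketch below builds a hypothetical payload. Only inputs and dataset_id come from the parameter list above; the bucket name and dataset ID values are placeholders, not real resources.

```python
import json

# Hypothetical training-job configuration. The field names "inputs" and
# "dataset_id" come from the parameter table above; the values are
# placeholder assumptions for illustration only.
job_config = {
    "inputs": "obs://my-bucket/training/data/",  # OBS storage path (assumed bucket)
    "dataset_id": "dataset-1234",                # dataset ID (placeholder value)
}

# Serialize the configuration as it might be sent in an API request body.
payload = json.dumps(job_config)
```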
OBS: Not supported; Not supported.

Method 1: Modify Service Information on the Service Management Page
Log in to the ModelArts management console and choose Service Deployment from the left navigation pane. Go to the service management page of the target service.
After the model is saved, upload it to the OBS directory before publishing it. The model package must include the config.json configuration file and the customize_service.py inference code. For details about how to define them, see Introduction to Model Package Specifications.
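As a rough sketch of the structure customize_service.py typically follows (pre-process, inference, post-process hooks), consider the example below. The BaseService class here is a stand-in stub, not the real ModelArts base service class, and the model logic is purely hypothetical; consult Introduction to Model Package Specifications for the actual class to inherit from.

```python
import json


class BaseService:
    """Stand-in stub for a ModelArts base service class (hypothetical)."""

    def __init__(self, model_name, model_path):
        self.model_name = model_name
        self.model_path = model_path


class CustomizeService(BaseService):
    """Illustrative inference service following the hook pattern."""

    def _preprocess(self, data):
        # Convert raw request payload strings into model inputs.
        return {k: json.loads(v) if isinstance(v, str) else v
                for k, v in data.items()}

    def _inference(self, data):
        # Placeholder model call; a real service would invoke the loaded model.
        return {"predictions": [sum(v) for v in data.values()]}

    def _postprocess(self, data):
        # Shape the model output into the response body.
        return {"result": data["predictions"]}
```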
After the permission is granted, you can access OBS and SWR of IAM users in a notebook instance.
Output Path Select an OBS path for storing ExeML data. NOTE: The output path stores all data generated in the ExeML project. Training Flavor Select a training flavor for this ExeML project. You will be billed based on different flavors.
After the model is saved, it must be uploaded to the OBS directory before being published. The config.json configuration and the customize_service.py inference code must be included during publishing.
Figure 2: Data labeling - text classification

Adding or Deleting Data
In an ExeML project, the data source is the OBS directory corresponding to the input path of the dataset.
The options are as follows: DIR: Data is exported to OBS (default value).
# Build an image locally and save it to a local path and OBS
ma-cli image build .ma/customize_from_ubuntu_18.04_to_modelarts/Dockerfile --target .
However, the original data in the dataset and the labeled data that has been accepted are still stored in the corresponding OBS bucket.
# Directories for storing the label.txt file on OBS and in the model package
# with open(os.path.join(self.model_path, 'label.txt')) as f:
#     self.label = json.load(f)

# Load the model in saved_model format in non-blocking mode to prevent blocking
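A minimal runnable sketch of the commented-out label-loading step above, assuming label.txt contains a JSON array of class names; the temporary directory here stands in for the model package directory, and the label values are placeholders.

```python
import json
import os
import tempfile


def load_labels(model_path):
    # Read label.txt (assumed to contain a JSON array of class names)
    # from the model package directory.
    with open(os.path.join(model_path, 'label.txt')) as f:
        return json.load(f)


# Demo: a throwaway directory stands in for the model package.
model_dir = tempfile.mkdtemp()
with open(os.path.join(model_dir, 'label.txt'), 'w') as f:
    json.dump(["cat", "dog"], f)

labels = load_labels(model_dir)
```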
Storage resource fee: fee for storing data in OBS, EVS, or SFS.

Table 1 Billing items (columns: Billing Item, Description, Billing Mode, Billing Formula)
Compute resource (public resource pools): usage of compute resources. For details, see ModelArts Pricing Details.
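To illustrate the general shape of a pay-per-use compute fee (unit price multiplied by usage duration), here is a hypothetical estimator; the actual ModelArts formula and unit prices are defined in ModelArts Pricing Details, not here.

```python
def estimate_compute_fee(unit_price_per_hour, hours_used):
    """Illustrative pay-per-use estimate: unit price x usage duration.

    The unit price is a placeholder; real prices come from
    ModelArts Pricing Details.
    """
    return round(unit_price_per_hour * hours_used, 2)
```

For example, a flavor priced at 2.5 per hour used for 4 hours would cost 10.0 under this simplified model.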
Currently, obs, flavor, train_flavor, swr, and pacific are supported. Mandatory: No. Type: str.
delay: Whether parameters are set while the workflow is running. The default value is False, indicating that parameters are set before the workflow runs.
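The parameter fields described above can be sketched as a small stand-in data structure. This is not the ModelArts Workflow SDK itself; the class name Placeholder and its fields are illustrative assumptions modeled on the supported types and the delay flag described above.

```python
from dataclasses import dataclass

# Supported placeholder types, per the parameter description above.
SUPPORTED_TYPES = {"obs", "flavor", "train_flavor", "swr", "pacific"}


@dataclass
class Placeholder:
    """Illustrative stand-in for a workflow placeholder parameter."""

    name: str
    placeholder_type: str  # must be one of SUPPORTED_TYPES
    delay: bool = False    # False: parameters are set before the workflow runs

    def __post_init__(self):
        if self.placeholder_type not in SUPPORTED_TYPES:
            raise ValueError(f"unsupported type: {self.placeholder_type}")
```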