Currently, only the value 0 is supported, indicating that the OBS file size is limited.
import_data  Boolean  Whether to import data.
OPENMPI_HOST_FILE_PATH} \
    -mca plm_rsh_args "-p ${SSHD_PORT}" \
    -tune ${TUNE_ENV_FILE} \
    ${OPENMPI_BIND_ARGS} \
    ${OPENMPI_X_ARGS} \
    ${OPENMPI_MCA_ARGS} \
    ${OPENMPI_EXTRA_ARGS} \
    python /home/ma-user/user-job-dir/gpu-train/train.py --datasets=obs
In Deploying a Real-Time Service, deploying the predictor deploys the model file stored in OBS to a container provided by the Service Deployment module. The environment specifications (such as CPU and GPU specifications) are determined by the configs parameter of the predictor.
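As an illustrative sketch only: the field names below follow the general shape of a ModelArts real-time service deployment configuration, but the specific model ID, traffic weight, and compute flavor values are hypothetical placeholders, not values from this document.

```python
# Minimal sketch of the configs parameter that determines the container's
# environment specifications when deploying a predictor. Field names follow
# the ModelArts service-deployment request shape; all values here are
# hypothetical placeholders.

def build_predictor_configs(model_id, specification="modelarts.vm.cpu.2u", instance_count=1):
    """Build the configs list for a real-time predictor deployment."""
    return [{
        "model_id": model_id,            # ID of the model imported from OBS (hypothetical)
        "weight": 100,                   # traffic weight in percent
        "specification": specification,  # CPU/GPU flavor of the serving container
        "instance_count": instance_count,
    }]

configs = build_predictor_configs("0001")
```

The `specification` string selects the CPU/GPU flavor of the container, which is what the prose above means by "environment specifications determined by configs".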
Billing modes: pay-per-use and yearly/monthly. Creating an OBS bucket is free of charge; you pay only for the storage capacity and duration you actually use. For details, see the Object Storage Price Calculator.
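To make the capacity-times-duration billing model concrete, here is a sketch with a purely hypothetical unit price; real rates come from the Object Storage Price Calculator.

```python
# Hypothetical pay-per-use cost estimate: storage is billed by capacity x duration.
# The unit price below is a made-up placeholder, NOT a real OBS price; consult
# the Object Storage Price Calculator for actual rates.

def estimate_storage_cost(capacity_gb, months, price_per_gb_month):
    """Estimated cost = capacity (GB) x duration (months) x unit price."""
    return capacity_gb * months * price_per_gb_month

# e.g. 500 GB stored for 3 months at a hypothetical 0.02 currency units/GB-month
cost = estimate_storage_cost(500, 3, 0.02)
```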
Supported AI Engines for ModelArts Inference
If you import a model from a template or OBS to create an AI application, the following AI engines and versions are supported.
The options are as follows: DIR: Data is exported to OBS (default value).
When creating an algorithm, ensure that the names of files and folders in the OBS bucket where the algorithm code is stored are unique. Otherwise, the algorithm may fail to be published, and even if it is published, its code may fail to open.
Supported AI Engines for Inference
If you import a preset image from a template or OBS to create a model, you can select the AI engines and versions in the table below.
Prerequisites
The OBS directory you use and ModelArts are in the same region.
Procedure
1. Log in to the ModelArts console and choose Model Management in the navigation pane on the left.
2. Click Create Model.
3. Configure parameters. Set basic information about the model.
OBS path to the output data of a batch job.
instance_count  Yes  Integer  Common parameter. Number of instances deployed in a model. The maximum number of instances is 128.
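The 128-instance ceiling described above can be expressed as a small validation sketch; the function name and error message here are illustrative, not part of the ModelArts API.

```python
# Sketch of client-side validation for the instance_count parameter described
# above. The docs state the maximum number of deployed instances is 128; the
# helper name itself is hypothetical.

def validate_instance_count(n, max_instances=128):
    """Reject instance counts outside the documented 1..128 range."""
    if not 1 <= n <= max_instances:
        raise ValueError(f"instance_count must be between 1 and {max_instances}, got {n}")
    return n
```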
This parameter is a container environment variable if a job uses a custom image.
log_url  No  String  OBS URL of training job logs. By default, this parameter is left blank. An example value is /usr/log/.
train_instance_type  Yes  String  Resource flavor selected for a training job.
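A sketch of how these two fields might appear together in a training-job configuration. The parameter names and the /usr/log/ example come from the text above; the resource flavor value is a hypothetical placeholder.

```python
# Sketch of a training-job configuration using the parameters described above.
# log_url uses the example value from the docs; the train_instance_type flavor
# string is a hypothetical placeholder.
train_job = {
    "log_url": "/usr/log/",                        # OBS URL for training logs (doc example)
    "train_instance_type": "modelarts.vm.cpu.2u",  # resource flavor (hypothetical)
}
```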
General-purpose Intel CPU flavor, ideal for rapid data exploration and experiments",
"feature" : "NOTEBOOK",
"free" : false,
"id" : "modelarts.vm.cpu.2u",
"memory" : 8388608,
"name" : "CPU: 2 vCPUs 8 GB",
"sold_out" : false,
"storages" : [ "EVS", "OBSFS", "EFS", "OBS
Deleting an access authorization  ModelArtsConsoleBackend  deleteAuthorization
Querying the authorization list  ModelArtsConsoleBackend  listAuthorization
Querying a feature switch  ModelArtsConsoleBackend  showFeature
Adding a feature switch  ModelArtsConsoleBackend  showFeature
Creating an OBS
This parameter is returned only when ManifestFile is used.
dest_path  String  OBS path to the output data of a batch job. Example: https://xxx.obs.myhwclouds.com/res/.
instance_count  Integer  Number of instances deployed for a model.
status  String  Service status.
Create a model directory in OBS and upload the triton_serving.sh file and the llama_7b folder to the model directory.
Figure 2: Uploading files to the model directory
Create a model. Set Meta Model Source to OBS and select the meta model from the model directory.
For example, ModelArts must access OBS to read your training data. For security purposes, ModelArts must be authorized to access other cloud services; this is called agency authorization. ModelArts provides one-click automatic authorization.
The {service_id}-infer-result subdirectory in the output_dir directory is used by default.
key_sample_output  String  Output path of hard examples in active learning.
log_url  String  OBS URL of the logs of a training job.
Response Parameters
Status code: 200
Table 3 Response body parameters
model_version  String  Model version
source_job_version  String  Version of the source training job
source_location  String  OBS path where the model is located or the template address of the
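A sketch of what a 200 response body with the fields above might look like. All values are hypothetical placeholders; only the field names come from Table 3.

```json
{
  "model_version": "1.0.0",
  "source_job_version": "v1",
  "source_location": "obs://my-bucket/models/my-model/"
}
```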
The default value is 0.
process_parameter  No  String  Image resize configuration, which is the same as the OBS setting. For details, see Resizing Images.
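As a sketch of composing such a resize configuration string: the "image/resize,m_...,w_...,h_..." syntax below is an assumption based on common OBS-style image-processing directives, so check "Resizing Images" in the OBS documentation for the authoritative format.

```python
# Sketch of building an OBS-style image resize parameter string. The
# "image/resize" directive syntax is an assumption, not confirmed by this
# document; see the "Resizing Images" section of the OBS docs for the
# authoritative format.

def build_resize_parameter(width, height, mode="lfit"):
    """Compose a resize directive like 'image/resize,m_lfit,w_100,h_100'."""
    return f"image/resize,m_{mode},w_{width},h_{height}"

process_parameter = build_resize_parameter(100, 100)
```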