Edition, and Web Edition
Microsoft SQL Server 2016: Enterprise Edition, Standard Edition, and Web Edition
Microsoft SQL Server 2017: Enterprise Edition, Standard Edition, and Web Edition
Microsoft SQL Server 2019: Enterprise Edition, Standard Edition, and Web Edition
Backup types: Full, Incremental. Backup storage: OBS
STANDARD: Huawei Cloud OBS Standard storage
IA: Huawei Cloud OBS Infrequent Access storage
ARCHIVE: Huawei Cloud OBS Archive storage
DEEP_ARCHIVE: Huawei Cloud OBS Deep Archive storage
SRC_STORAGE_MAPPING: converts the source storage class into an OBS storage class based on the predefined
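A minimal sketch of how an SRC_STORAGE_MAPPING-style conversion behaves. The mapping table below is illustrative only, not the service's actual predefined rule; the source class names are assumptions.

```python
# Illustrative sketch of SRC_STORAGE_MAPPING-style conversion.
# The mapping below is a hypothetical example, not Huawei Cloud's
# predefined rule; real mappings depend on the source provider.
SRC_TO_OBS = {
    "standard": "STANDARD",
    "infrequent_access": "IA",
    "archive": "ARCHIVE",
    "deep_archive": "DEEP_ARCHIVE",
}

def map_storage_class(src_class, default="STANDARD"):
    """Return the OBS storage class for a given source storage class."""
    return SRC_TO_OBS.get(src_class.lower(), default)
```

Unrecognized source classes fall back to STANDARD in this sketch; a real migration tool would more likely reject them or require explicit configuration.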
Constraints: N/A. Options: N/A. Default value: N/A.
bucket (Mandatory: No; Type: String): Name of the OBS bucket used for backup. Constraints: N/A. Options: N/A. Default value: N/A.
basePath (Mandatory: No; Type: String): Storage path of the snapshot in the OBS bucket.
Response Parameters
Status code: 200
Table 3 Response body parameters:
model_version (String): Model version
source_job_version (String): Version of the source training job
source_location (String): OBS path where the model is located or the template address of the
Options: asc: ascending order; desc: descending order (default value).
process_parameter (Mandatory: No; Type: String): Image resize configuration, which is the same as the OBS setting. For details, see Resizing Images.
The default value is 0.
The corresponding link parameters are as follows:
generic-jdbc-connector: link to a relational database
obs-connector: link to OBS
hdfs-connector: link to HDFS
hbase-connector: link to HBase and link to CloudTable
hive-connector: link to Hive
ftp-connector/sftp-connector: link to an
get; For SFS volumes: sfs:shares:getAllSharesDetail √ √
Listing PersistentVolumeClaims: GET /api/v1/namespaces/{namespace}/persistentvolumeclaims. Required permissions: CCI:namespaceSubResource:List; For EVS volumes: evs:volumes:list; For SFS volumes: sfs:shares:getAllSharesDetail, sfs:shares:ShareAction; For OBS
OBS metric data can be queried only when the related OBS APIs are called.
For example, the cloud service type code of OBS is hws.service.type.obs. To obtain a specific service type, call the API in Querying Cloud Service Types.
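As a sketch of how a caller might pick out a specific service type from such a query response: the payload below is illustrative sample data, not an actual response body of the Querying Cloud Service Types API.

```python
# Sketch: locating a service type code in a "Querying Cloud Service
# Types" style response. sample_response is illustrative data only;
# the real API's response schema may differ.
sample_response = {
    "service_types": [
        {"service_type_code": "hws.service.type.obs",
         "service_type_name": "Object Storage Service"},
        {"service_type_code": "hws.service.type.ebs",
         "service_type_name": "Elastic Volume Service"},
    ]
}

def find_service_type(response, code):
    """Return the entry whose service_type_code matches, or None."""
    for entry in response.get("service_types", []):
        if entry.get("service_type_code") == code:
            return entry
    return None
```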
If files are migrated between FTP, SFTP, HDFS, and OBS and the migration source's File Format is set to Binary, files are transferred directly and field mapping is not involved. Otherwise, you can create a field converter on the Map Field page when creating a table/file migration job.
Use cstore_buffers to specify the cache size for ORC, Parquet, or CarbonData metadata and data on OBS or HDFS foreign tables. The metadata cache size should be 1/4 of cstore_buffers and must not exceed 2 GB.
Type: USERSET
Value range: enumerated values
0: The size of the database is estimated directly based on the OBS bucket.
1: The size of the entire database is calculated normally in regular mode.
include but are not limited to: Center for Internet Security (CIS), International Organization for Standardization (ISO), National Institute of Standards and Technology (NIST), Cloud Security Alliance, and product vendors.
cts-obs-bucket-track: Create at least one CTS tracker for each OBS
The {service_id}-infer-result subdirectory in the output_dir directory is used by default.
key_sample_output (String): Output path of hard examples in active learning
log_url (String): OBS URL of the logs of a training job.
      ],
      "Resource": [
        "*"
      ],
      "Condition": {
        "StringNotMatch": {
          "rms:TrackerBucketName": [
            "BucketName"
          ]
        }
      }
    }
  ]
}
rms:TrackerBucketPathPrefix: Preventing storing resource data to unexpected OBS
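As a sketch of how a StringNotMatch condition like the one above behaves: the statement applies only when the tracker bucket name matches none of the listed values. This is a simplified illustration, not Huawei Cloud's actual policy engine; real evaluation covers the full statement (Effect, Action) and service-side context keys.

```python
from fnmatch import fnmatch

# Simplified sketch of IAM-style StringNotMatch evaluation for the
# condition above. Wildcard matching via fnmatch is an assumption.
def string_not_match(context_value, allowed_patterns):
    """True if the context value matches none of the allowed patterns."""
    return not any(fnmatch(context_value, p) for p in allowed_patterns)

# With ["BucketName"] as the allowed value, a tracker pointing at any
# other bucket satisfies the condition, so the statement takes effect.
```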
If you cannot directly connect to the client node to upload files through the local network, upload the JAR file or source data to OBS, import the file to HDFS on the Files tab page of the MRS cluster, and run the hdfs dfs -get command on the HDFS client to download the file to the
Options: OBS: OBS bucket; SFTP: SFTP interconnection
obs_bucket_source (String): OBS bucket source. Options: AUTO_CREATE: automatically created; CREATED: created
obs_bucket_name (String): OBS bucket name
retention_duration (Integer): Retention period of a screen recording file, in days.