Maintaining Build Jobs Editing a Source Code Job Editing a Package Job Starting a Build Job Configuring Branch or Tag Build Deleting a Build Job Parent topic: Continuous Delivery
Number of Parallel Jobs Billing Description The billing item of parallel job extension is parallel jobs. To learn about the price of CodeArts, visit CodeArts Pricing Details. Table 1 Billing by number of parallel jobs Billing Item Description Resource Extension Type Billing Formula
Executing Public Jobs Scenarios Public jobs are predefined, read-only jobs that you can execute. Basic public jobs are listed and can be executed on target resources. Precautions Before executing a public job, ensure that you have the resource permissions of the target instances. Executing
Creating Custom Jobs Scenarios You can create custom jobs, including custom scripts, APIs, and process controls. The jobs can use global parameters and can be associated with the parameter center. Precautions Confirm and fill in the risk level of the operation according to
Managing Custom Jobs Scenarios To approve, modify, clone, or delete a custom job, perform the operations in this section. Precautions When modifying or cloning a job, determine and fill out the risk level of the job. Modifying a Custom Job Log in to COC. In the navigation pane on
Managing CDM Jobs Scenario This section describes how to manage CDM table/file migration jobs in batches. The following operations are supported: Managing jobs by group Running jobs in batches Deleting jobs in batches Exporting jobs in batches Importing jobs in batches You can export
Overview of Offline Jobs Offline processing migration jobs can be delivered across clusters, enabling jobs to be migrated in batches. Compared with traditional migration jobs, which are managed in CDM clusters, offline processing migration jobs are managed in DataArts Factory
Batch Reinstalling OSs Function This API is used to reinstall an OS using a new OS image. It is an asynchronous API. You can query the instance status by calling the ShowInstanceStatus API. If the status changes to pending, the OS is being reinstalled. If the status changes to running
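The asynchronous pattern described above (reinstall, then poll until the status leaves "pending") can be sketched as a polling loop. The `status_of` callable below stands in for the ShowInstanceStatus API; in practice that call requires an authenticated SDK client, so here it is simulated:

```python
import time

def wait_for_reinstall(status_of, instance_id, interval=0.0, timeout=10):
    """Poll an instance's status until OS reinstallation finishes.

    `status_of` stands in for the ShowInstanceStatus API call; it should
    return a status string such as "pending" or "running".
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = status_of(instance_id)
        if status == "pending":   # OS is still being reinstalled
            time.sleep(interval)
            continue
        return status             # any other status, e.g. "running"
    raise TimeoutError(f"instance {instance_id} still pending after {timeout}s")

# Simulated status source: "pending" twice, then "running".
_states = iter(["pending", "pending", "running"])
print(wait_for_reinstall(lambda _id: next(_states), "i-123"))  # running
```

A real integration would call the API at a polling interval of a few seconds and handle transient request failures as well.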
Orchestrating Pipeline Jobs Jobs in a Pipeline A job is the minimum manageable execution unit in a pipeline. Jobs can be managed and orchestrated in serial and parallel modes, and executed based on a specific sequence and time in a stage. Refer to this section to configure jobs. Notes
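The serial/parallel orchestration rule above can be modeled with a small sketch: stages always run one after another, and within a stage jobs are grouped into "waves" that run serially, with the jobs inside each wave starting in parallel. The stage and job names are made up for illustration:

```python
def execution_order(pipeline):
    """Return the ordered waves of job names as they would start.

    `pipeline` is a list of stages; each stage is a list of waves;
    each wave is a list of job names that run in parallel.
    """
    order = []
    for stage in pipeline:              # stages run in sequence
        for wave in stage:              # waves within a stage run serially
            order.append(sorted(wave))  # jobs in a wave start together
    return order

demo = [
    [["build"]],                           # stage 1: a single job
    [["unit-test", "lint"], ["package"]],  # stage 2: parallel wave, then a serial job
]
print(execution_order(demo))
# [['build'], ['lint', 'unit-test'], ['package']]
```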
Executing Custom Jobs Scenarios To execute a custom job, perform the operations in this section. Precautions Before executing a job, ensure that you have the resource permissions of target instances. A maximum of 999 instances can be selected for a task. Executing a Custom Job Log
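Because a single task accepts at most 999 target instances, a larger instance list has to be split across multiple executions. A minimal chunking sketch (the 999 cap comes from the text above; the helper name is illustrative):

```python
MAX_INSTANCES_PER_TASK = 999  # per-task instance cap stated in the precautions

def split_into_tasks(instance_ids, limit=MAX_INSTANCES_PER_TASK):
    """Split instance IDs into batches no larger than the per-task cap."""
    return [instance_ids[i:i + limit] for i in range(0, len(instance_ids), limit)]

batches = split_into_tasks([f"i-{n}" for n in range(2500)])
print([len(b) for b in batches])  # [999, 999, 502]
```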
Applying for KooPhone OBT Apply for open beta test (OBT). After your application is approved, you can use KooPhone. Prerequisites OBT resources are limited. Only Huawei accounts that have completed real-name authentication can apply for OBT. Operations required in Signing up for a
Managing Flink Jobs Viewing Flink Job Details Setting the Priority for a Flink Job Enabling Dynamic Scaling for Flink Jobs Querying Logs for Flink Jobs Common Operations of Flink Jobs Parent topic: Submitting a Flink Job on the DLI Management Console
Managing Spark Jobs Viewing Basic Information On the Overview page, click Spark Jobs to go to the SQL job management page. Alternatively, you can click Job Management > Spark Jobs. The page displays all Spark jobs. If there are a large number of jobs, they will be displayed on multiple
Managing Labeling Jobs Viewing Labeling Jobs On the ModelArts Data Labeling page, view your created labeling jobs in the My Creations tab. Log in to the ModelArts management console. In the navigation pane on the left, choose Data Preparation > Label Data. In the My Creations tab,
Querying Registered OUs Function This API is used to query information about OUs registered with RGC. URI GET https://{endpoint}/v1/managed-organization/managed-organizational-units/{managed_organizational_unit_id} Table 1 Path Parameters Parameter Mandatory Type Description managed_organizational_unit_id
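The documented URI can be assembled from its path parameter as in this sketch; the endpoint value used below is a placeholder, not a real service address, and a real request would also need authentication headers:

```python
# URI template exactly as documented for the RGC OU query API.
URI_TEMPLATE = ("https://{endpoint}/v1/managed-organization/"
                "managed-organizational-units/{managed_organizational_unit_id}")

def build_ou_query_url(endpoint, ou_id):
    """Fill the documented path parameters into the GET URI."""
    return URI_TEMPLATE.format(endpoint=endpoint,
                               managed_organizational_unit_id=ou_id)

print(build_ou_query_url("rgc.example.com", "ou-1234"))
# https://rgc.example.com/v1/managed-organization/managed-organizational-units/ou-1234
```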
Checking Jobs and Links Scenarios RDS for SQL Server provides job monitoring and link monitoring. Job monitoring allows you to view publication and subscription jobs and their execution history. You can also modify profiles and restart jobs. Link monitoring allows you to check or
Managing FlinkServer Jobs Viewing the Health Status of FlinkServer Jobs Importing and Exporting FlinkServer Job Information Configuring Automatic Clearing of FlinkServer Job Residuals Adding Third-Party Dependency JAR Packages to a FlinkServer Job Using UDFs in FlinkServer Jobs Configuring
Managing Loader Jobs Migrating Loader Jobs in Batches Deleting Loader Jobs in Batches Importing Loader Jobs in Batches Exporting Loader Jobs in Batches Viewing Historical Information About a Loader Job Purging Historical Loader Data Managing Loader Links Parent topic: Using Loader
(Fragment of a flattened permissions table; each row reads: action, description, access level, resource type/path, condition keys. The first and last rows are truncated in the excerpt.)
(truncated row) list | cluster | * | g:ResourceTag/<tag-key> | g:EnterpriseProjectId
dws:cluster:getQueryPropertyForDMS | Grants the permission to obtain query attributes in DMS. | read | cluster | * | g:ResourceTag/<tag-key> | g:EnterpriseProjectId
dws:cluster:listBucketForDMS | Grants the permission to obtain the OBS (truncated)
What Are the Differences Between Quality Jobs and Comparison Jobs? Possible Causes Differences between quality jobs and comparison jobs Solution You can create quality jobs to apply the created rules to existing tables. Comparison jobs support cross-source data comparison. You can