Managing Spark Jobs Viewing Basic Information On the Overview page, click Spark Jobs to go to the Spark job management page. Alternatively, you can click Job Management > Spark Jobs. The page displays all Spark jobs. If there are a large number of jobs, they will be displayed on multiple pages.
Querying All Jobs Function This API is used to query information about all jobs in the current project. URI URI format GET /v1.0/{project_id}/jobs Parameter description Table 1 URI parameter Parameter Mandatory Type Description project_id Yes String Project ID, which is used for resource isolation
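As a minimal sketch of calling this endpoint, the Python snippet below sends the GET request with the requests library. The regional endpoint host, the X-Auth-Token header, and the handling of the response body are assumptions made for illustration; they are not taken from the excerpt above.

import requests

# Assumed values: a regional DLI endpoint and an IAM token obtained beforehand.
endpoint = "https://dli.example-region.myhuaweicloud.com"  # hypothetical host
project_id = "<project_id>"
headers = {"X-Auth-Token": "<IAM token>", "Content-Type": "application/json"}

# GET /v1.0/{project_id}/jobs -- query information about all jobs in the project
resp = requests.get(f"{endpoint}/v1.0/{project_id}/jobs", headers=headers)
resp.raise_for_status()
print(resp.json())  # the excerpt does not show the response schema, so print it as-is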
Running Jobs in Batches Function This API is used to trigger batch job running. URI URI format POST /v1.0/{project_id}/streaming/jobs/run Parameter description Table 1 URI parameter Parameter Mandatory Type Description project_id Yes String Project ID, which is used for resource isolation
Stopping Jobs in Batches Function This API is used to stop running jobs in batches. URI URI format POST /v1.0/{project_id}/streaming/jobs/stop Parameter description Table 1 URI parameter Parameter Mandatory Type Description project_id Yes String Project ID, which is used for resource isolation
Deleting Jobs in Batches Function This API is used to batch delete jobs in any state. URI URI format POST /v1.0/{project_id}/streaming/jobs/delete Parameter description Table 1 URI parameter Parameter Mandatory Type Description project_id Yes String Project ID, which is used for resource isolation
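The run, stop, and delete endpoints above share the same shape: a POST whose body carries the IDs of the target jobs. The sketch below covers all three; the job_ids field name and the endpoint host are assumptions for illustration rather than values confirmed by the excerpts.

import requests

endpoint = "https://dli.example-region.myhuaweicloud.com"  # hypothetical host
project_id = "<project_id>"
headers = {"X-Auth-Token": "<IAM token>", "Content-Type": "application/json"}

def batch_job_action(action, job_ids):
    # action is one of "run", "stop", or "delete", matching the three URIs above
    url = f"{endpoint}/v1.0/{project_id}/streaming/jobs/{action}"
    body = {"job_ids": job_ids}  # field name assumed, not confirmed by the excerpt
    resp = requests.post(url, json=body, headers=headers)
    resp.raise_for_status()
    return resp.json()

# Example: stop two jobs in one call, then delete them.
# batch_job_action("stop", [12345, 12346])
# batch_job_action("delete", [12345, 12346])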
Listing Batch Processing Jobs Function This API is used to list batch processing jobs in a queue of a project. URI URI format GET /v2.0/{project_id}/batches Parameter description Table 1 URI parameter Parameter Mandatory Type Description project_id Yes String Project ID, which is used for resource isolation
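For listing batch processing jobs, the v2.0 URI replaces the v1.0 streaming paths above. A minimal sketch follows, again assuming the same hypothetical host and token; any queue filter parameter is omitted because its name is not given in the excerpt.

import requests

endpoint = "https://dli.example-region.myhuaweicloud.com"  # hypothetical host
project_id = "<project_id>"
headers = {"X-Auth-Token": "<IAM token>", "Content-Type": "application/json"}

# GET /v2.0/{project_id}/batches -- list batch processing jobs in the project
resp = requests.get(f"{endpoint}/v2.0/{project_id}/batches", headers=headers)
resp.raise_for_status()
print(resp.json())  # the excerpt does not show the response schema, so print it as-is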
Flink OpenSource SQL Jobs Reading Data from Kafka and Writing Data to RDS Reading Data from Kafka and Writing Data to GaussDB(DWS) Reading Data from Kafka and Writing Data to Elasticsearch Reading Data from MySQL CDC and Writing Data to GaussDB(DWS) Reading Data from PostgreSQL CDC
SDKs Related to SQL Jobs Database-Related SDKs Table-Related SDKs Job-Related SDKs Parent topic: Java SDK (DLI SDK V1)
SDKs Related to SQL Jobs Database-Related SDKs Table-Related SDKs Job-Related SDKs Parent topic: Python SDK (DLI SDK V1)
SDKs Related to Spark Jobs Prerequisites You have configured the Java SDK environment by following the instructions provided in Overview. You have initialized the DLI Client by following the instructions provided in Initializing the DLI Client and created queues by following the instructions
SDKs Related to Flink Jobs Prerequisites You have configured the Java SDK environment by referring to Overview. You have initialized the DLI client by referring to Initializing the DLI Client and created queues by referring to Queue-Related SDKs. Creating a SQL Job DLI provides an
SDKs Related to Spark Jobs For details about the dependencies and complete sample code, see Overview. Submitting Batch Jobs DLI provides an API for submitting batch jobs. The example code is as follows: def submit_spark_batch_job(dli_client
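The SDK sample above is cut off after the function signature, so the sketch below is not the SDK's code; it only illustrates what a submit_spark_batch_job-style helper might do by calling the batch-job REST endpoint directly. The POST path and every body field name here are assumptions for illustration.

import requests

def submit_spark_batch_job(endpoint, token, project_id, jar_path, main_class, queue_name):
    # Hypothetical helper: POST a batch (Spark) job definition to DLI.
    body = {
        "file": jar_path,           # assumed field: OBS path of the job package
        "class_name": main_class,   # assumed field: entry class of the Spark job
        "queue": queue_name,        # assumed field: queue that runs the batch job
    }
    resp = requests.post(
        f"{endpoint}/v2.0/{project_id}/batches",  # path assumed by analogy with the listing API
        json=body,
        headers={"X-Auth-Token": token, "Content-Type": "application/json"},
    )
    resp.raise_for_status()
    return resp.json()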
Managing Program Packages of Jar Jobs Package Management Overview Creating a DLI Package Configuring DLI Package Permissions Changing the DLI Package Owner Managing DLI Package Tags DLI Built-in Dependencies Parent topic: Common DLI Management Operations
Using Delta to Develop Jobs in DLI DLI Delta Metadata Using Delta to Submit a Spark Jar Job in DLI
Using Hudi to Develop Jobs in DLI Submitting a Spark SQL Job in DLI Using Hudi Submitting a Spark Jar Job in DLI Using Hudi Submitting a Flink SQL Job in DLI Using Hudi Using HetuEngine on Hudi
APIs Related to SQL Jobs (Discarded) Submitting a SQL Job (Discarded) Canceling a Job (Discarded) Querying the Job Execution Result-Method 1 (Discarded) Querying the Job Execution Result-Method 2 (Discarded) Parent topic: Out-of-Date APIs
APIs Related to Flink Jobs (Discarded) Querying Job Monitoring Information (Discarded) Parent topic: Out-of-Date APIs
Connecting to DLI and Submitting SQL Jobs Using JDBC Scenario In Linux or Windows, you can connect to the DLI server using JDBC. Jobs submitted to DLI using JDBC are executed on the Spark engine. After the function reconstruction in JDBC 2.X, query results can only be accessed
How Do I Manage Jobs Running on DLI? To manage a large number of DLI jobs, you can use the following methods: Manage jobs by group: Group tens of thousands of jobs by type and run each group on a queue. Create IAM users: Alternatively, create IAM users to execute different types of
Using Spark Jobs to Access Data Sources of Datasource Connections Overview Connecting to CSS Connecting to GaussDB(DWS) Connecting to HBase Connecting to OpenTSDB Connecting to RDS Connecting to Redis Connecting to Mongo Parent topic: Spark Jar Jobs