Here is an example:
jdbcDF.drop("id").show()
Submitting a Spark job
Generate a JAR file based on the code file and upload the JAR file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
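For context, a minimal runnable sketch of the drop call above. The local SparkSession and the in-memory DataFrame standing in for the JDBC-backed jdbcDF are assumptions for illustration:

import org.apache.spark.sql.SparkSession

object DropColumnExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("DropColumnExample").master("local[*]").getOrCreate()
    import spark.implicits._

    // Stand-in for a DataFrame read over JDBC; the real jdbcDF would come from spark.read.jdbc(...)
    val jdbcDF = Seq((1, "John", 30), (2, "Peter", 45)).toDF("id", "name", "age")

    // Drop the "id" column and print the remaining columns
    jdbcDF.drop("id").show()

    spark.stop()
  }
}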
sparkSession.sql("INSERT INTO TABLE person VALUES ('John', 30),('Peter', 45)".stripMargin) Query data. 1 sparkSession.sql("SELECT * FROM person".stripMargin).collect().foreach(println) Submitting a Spark job Generate a JAR file based on the code file and upload the JAR file to the OBS
Select Save Job Log, and specify the OBS bucket for saving job logs. Storing authentication credentials such as usernames and passwords in code or plaintext poses significant security risks. You are advised to use DEW to manage credentials instead.
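As an illustration of the principle (not the DEW API itself, which is out of scope here), a minimal sketch that reads credentials from environment variables instead of hardcoding them; the variable names DB_USER and DB_PASSWORD are assumptions:

// Minimal sketch: keep credentials out of source code.
// sys.env lookups stand in for a secret store such as DEW.
object CredentialLookup {
  def main(args: Array[String]): Unit = {
    val user = sys.env.getOrElse("DB_USER",
      throw new IllegalStateException("DB_USER is not set"))
    val password = sys.env.getOrElse("DB_PASSWORD",
      throw new IllegalStateException("DB_PASSWORD is not set"))

    // Use user/password when building the JDBC or job configuration,
    // rather than embedding literals in code.
    println(s"Loaded credentials for user: $user")
  }
}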
abs
This function is used to calculate the absolute value of an input parameter.
Syntax
abs(DOUBLE a)
Parameters
Table 1 Parameter

Parameter | Mandatory | Type | Description
a | Yes | DOUBLE, BIGINT, DECIMAL, or STRING | The value can be a float, integer, or string. If the value is not of the DOUBLE type, it is implicitly converted to DOUBLE before the calculation.
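A minimal sketch of calling abs through Spark SQL; the local SparkSession setup is an assumption for illustration:

import org.apache.spark.sql.SparkSession

object AbsExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder().appName("AbsExample").master("local[*]").getOrCreate()

    // abs over a negative double: returns 10.5
    spark.sql("SELECT abs(-10.5)").show()

    // abs over a string input: implicitly converted before the calculation
    spark.sql("SELECT abs('-3')").show()

    spark.stop()
  }
}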
SQL Jobs SQL Job Development SQL Job O&M
Spark Jobs Spark Job Development Spark Job O&M
Flink Jobs Flink Job Consulting Flink SQL Jobs Flink Jar Jobs Flink Job Performance Tuning
Spark Jobs
Does DLI Spark Support Scheduled Periodic Jobs?
DLI Spark does not support job scheduling. You can use other services, such as DataArts Studio, or use APIs or SDKs to customize job scheduling, as sketched after this list.
Can I Define the Primary Key When I Create a Table with a Spark SQL Statement?
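As referenced above, a minimal client-side scheduling sketch; submitSparkBatchJob() is a hypothetical placeholder for whatever API or SDK call actually submits your job:

import java.util.concurrent.{Executors, TimeUnit}

// DLI itself does not schedule Spark jobs, so scheduling has to live on
// the client side (or in a service such as DataArts Studio).
object PeriodicJobScheduler {
  def submitSparkBatchJob(): Unit = {
    println("Submitting Spark batch job...") // hypothetical; replace with a real API/SDK call
  }

  def main(args: Array[String]): Unit = {
    val scheduler = Executors.newSingleThreadScheduledExecutor()
    // Run once immediately, then every hour
    scheduler.scheduleAtFixedRate(() => submitSparkBatchJob(), 0, 1, TimeUnit.HOURS)
  }
}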
Managing Flink Jobs Viewing Flink Job Details Setting the Priority for a Flink Job Enabling Dynamic Scaling for Flink Jobs Querying Logs for Flink Jobs Common Operations of Flink Jobs Parent Topic: Submitting a Flink Job on the DLI Management Console
Managing Spark Jobs
Viewing Basic Information
On the Overview page, click Spark Jobs to go to the Spark job management page. Alternatively, you can click Job Management > Spark Jobs. The page displays all Spark jobs. If there are a large number of jobs, they will be displayed on multiple pages.
Running Jobs in Batches
Function
This API is used to trigger batch job running.
URI
URI format: POST /v1.0/{project_id}/streaming/jobs/run
Parameter description
Table 1 URI parameter

Parameter | Mandatory | Type | Description
project_id | Yes | String | Project ID, which is used for resource isolation.

Batch Stopping Jobs
Function
This API is used to stop running jobs in batches.
URI
URI format: POST /v1.0/{project_id}/streaming/jobs/stop
Parameter description
Table 1 URI parameter

Parameter | Mandatory | Type | Description
project_id | Yes | String | Project ID, which is used for resource isolation.

Batch Deleting Jobs
Function
This API is used to delete jobs in any state in batches.
URI
URI format: POST /v1.0/{project_id}/streaming/jobs/delete
Parameter description
Table 1 URI parameter

Parameter | Mandatory | Type | Description
project_id | Yes | String | Project ID, which is used for resource isolation.
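The three endpoints above share the same POST shape. A minimal sketch for the batch-run call follows; the endpoint host, the job_ids body field, and the environment-variable names are assumptions for illustration, and the same pattern applies to .../jobs/stop and .../jobs/delete:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object BatchRunJobs {
  def main(args: Array[String]): Unit = {
    val projectId = sys.env("PROJECT_ID")      // assumed env var holding the project ID
    val token = sys.env("AUTH_TOKEN")          // assumed env var holding an IAM token

    // The body field name job_ids is an assumption for illustration
    val body = """{"job_ids": [131, 130]}"""
    val request = HttpRequest.newBuilder()
      .uri(URI.create(s"https://dli.example-region.myhuaweicloud.com/v1.0/$projectId/streaming/jobs/run"))
      .header("Content-Type", "application/json")
      .header("X-Auth-Token", token)
      .POST(HttpRequest.BodyPublishers.ofString(body))
      .build()

    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode()}: ${response.body()}")
  }
}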
SDKs Related to SQL Jobs Database-Related SDKs Table-Related SDKs Job-Related SDKs Parent topic: Java SDK (DLI SDK V1)
SDKs Related to SQL Jobs Database-Related SDKs Table-Related SDKs Job-Related SDKs Parent topic: Python SDK (DLI SDK V1)
SDKs Related to Spark Jobs
Prerequisites
You have configured the Java SDK environment by following the instructions provided in Overview. You have initialized the DLI Client by following the instructions provided in Initializing the DLI Client and created queues by following the instructions provided in Queue-Related SDKs.
SDKs Related to Flink Jobs
Prerequisites
You have configured the Java SDK environment by referring to Overview. You have initialized the DLI client by referring to Initializing the DLI Client and created queues by referring to Queue-Related SDKs.
Creating a SQL Job
DLI provides an API for creating SQL jobs.
SDKs Related to Spark Jobs
For details about the dependencies and complete sample code, see Overview.
Submitting Batch Jobs
DLI provides an API to perform batch jobs. The example code begins as follows:
def submit_spark_batch_job(dli_client
Flink OpenSource SQL Jobs Reading Data from Kafka and Writing Data to RDS Reading Data from Kafka and Writing Data to GaussDB(DWS) Reading Data from Kafka and Writing Data to Elasticsearch Reading Data from MySQL CDC and Writing Data to GaussDB(DWS) Reading Data from PostgreSQL CDC
Listing Batch Processing Jobs
Function
This API is used to list batch processing jobs in a queue of a project.
URI
URI format: GET /v2.0/{project_id}/batches
Parameter description
Table 1 URI parameter

Parameter | Mandatory | Type | Description
project_id | Yes | String | Project ID, which is used for resource isolation.
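A minimal sketch of calling this listing endpoint; as before, the endpoint host and the environment-variable names are assumptions for illustration:

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object ListBatchJobs {
  def main(args: Array[String]): Unit = {
    val projectId = sys.env("PROJECT_ID")   // assumed env var holding the project ID
    val token = sys.env("AUTH_TOKEN")       // assumed env var holding an IAM token

    val request = HttpRequest.newBuilder()
      .uri(URI.create(s"https://dli.example-region.myhuaweicloud.com/v2.0/$projectId/batches"))
      .header("X-Auth-Token", token)
      .GET()
      .build()

    // Print the raw JSON list of batch processing jobs
    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())
    println(s"${response.statusCode()}: ${response.body()}")
  }
}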