Click the name of the corresponding Flink job, choose Run Log, click OBS Bucket, and locate the folder of the log you want to view by date. Open the folder for that date, find the folder whose name contains taskmanager, download the .out file, and view the result logs.
url", url) .option("uri", uri) .option("database", database) .option("collection", collection) .option("user", user) .option("password", password) .load() Operation result Submitting a Spark job Generate a JAR file based on the code file and upload the JAR file to the OBS
Select Save Job Log, and specify the OBS bucket for saving job logs. Change the values of the parameters in bold as needed in the following script.
Select Save Job Log, and specify the OBS bucket for saving job logs. Storing authentication credentials such as usernames and passwords in code or plaintext poses significant security risks. You are advised to use DEW to manage credentials instead.
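As a minimal illustration of keeping credentials out of code (DEW integration itself is not shown here), credentials can be injected at runtime, for example through environment variables; the variable names below are hypothetical.

// Hypothetical environment variables; set them outside the code instead of hard-coding values.
val user     = sys.env.getOrElse("DB_USER", throw new IllegalStateException("DB_USER is not set"))
val password = sys.env.getOrElse("DB_PASSWORD", throw new IllegalStateException("DB_PASSWORD is not set"))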
To use DLI, you need to access services such as Object Storage Service (OBS), Virtual Private Cloud (VPC), and Simple Message Notification (SMN). If it is your first time using DLI, you will need to configure an agency to allow access to these dependent services.
Submitting a Spark job
Upload the Python code file to the OBS bucket. (Optional) When creating a Spark job for an MRS cluster with Kerberos authentication enabled, add the krb5.conf and user.keytab files to the job's other dependency files.
Sample code:
Prepare data:
create table test_null2(str1 string, str2 string, str3 string, str4 string);
insert into test_null2 select "a\rb", null, "1\n2", "ab";
Execute SQL:
SELECT * FROM test_null2;
Spark 2.4.5: a b 1 2 ab
Spark 3.3.1: a b 1 2 ab
Export query results to OBS and check
Here is an example: jdbcDF.drop("id").show()
Submitting a Job
Generate a JAR file based on the code file and upload the JAR file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
Here is an example: jdbcDF.drop("id").show()
Submitting a Spark job
Generate a JAR file based on the code file and upload the JAR file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
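To see drop() in context, here is a minimal, self-contained sketch; the JDBC URL, driver, table name, and credentials are placeholder assumptions, and the id column is assumed to exist in the source table.

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder().appName("drop-column-sketch").getOrCreate()

// Placeholder JDBC connection details; replace them with your own database information.
val jdbcDF = sparkSession.read
  .format("jdbc")
  .option("url", "jdbc:postgresql://192.168.0.1:5432/testdb")
  .option("driver", "org.postgresql.Driver")
  .option("dbtable", "public.person")
  .option("user", "db_user")
  .option("password", "your_password")
  .load()

// Drop the assumed id column and display the remaining columns.
jdbcDF.drop("id").show()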
sparkSession.sql("INSERT INTO TABLE person VALUES ('John', 30),('Peter', 45)".stripMargin) Query data. 1 sparkSession.sql("SELECT * FROM person".stripMargin).collect().foreach(println) Submitting a Spark job Generate a JAR file based on the code file and upload the JAR file to the OBS
abs
This function is used to calculate the absolute value of an input parameter.
Syntax
abs(DOUBLE a)
Parameters
Table 1 Parameter
Parameter: a; Mandatory: Yes; Type: DOUBLE, BIGINT, DECIMAL, or STRING; Description: The value can be a float, integer, or string. If the value is not of the
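A quick illustrative check of abs() through Spark SQL; the sample inputs -3.5 and -10 are arbitrary values chosen for this sketch.

import org.apache.spark.sql.SparkSession

val sparkSession = SparkSession.builder().appName("abs-sketch").getOrCreate()

// abs() returns the absolute value of its argument.
sparkSession.sql("SELECT abs(-3.5) AS abs_double, abs(-10) AS abs_bigint").show()
// Expected values: abs_double = 3.5, abs_bigint = 10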
SQL Jobs SQL Job Development SQL Job O&M
Spark Jobs Spark Job Development Spark Job O&M
Flink Jobs Flink Job Consulting Flink SQL Jobs Flink Jar Jobs Flink Job Performance Tuning
Spark Jobs Does DLI Spark Support Scheduled Periodic Jobs? DLI Spark does not support job scheduling. You can use other services, such as DataArts Studio, or use APIs or SDKs to schedule jobs yourself. Can I Define the Primary Key When I Create a Table with a Spark SQL Statement?
Managing Flink Jobs Viewing Flink Job Details Setting the Priority for a Flink Job Enabling Dynamic Scaling for Flink Jobs Querying Logs for Flink Jobs Common Operations of Flink Jobs Parent topic: Submitting a Flink Job Using DLI
Managing Spark Jobs Viewing Basic Information On the Overview page, click Spark Jobs to go to the Spark job management page. Alternatively, you can click Job Management > Spark Jobs. The page displays all Spark jobs. If there are a large number of jobs, they will be displayed on multiple pages.
Querying All Jobs
Function
This API is used to query information about all jobs in the current project.
URI
URI format: GET /v1.0/{project_id}/jobs
Parameter description
Table 1 URI parameter
Parameter: project_id; Mandatory: Yes; Type: String; Description: Project ID, which is used for resource
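A minimal sketch of calling this API from Scala, assuming IAM token authentication via an X-Auth-Token header and a region-specific DLI endpoint; the endpoint, project ID, and token below are placeholders, not values from the original page.

import java.net.URI
import java.net.http.{HttpClient, HttpRequest, HttpResponse}

object ListDliJobs {
  def main(args: Array[String]): Unit = {
    // Placeholder values; replace them with your real endpoint, project ID, and IAM token.
    val endpoint  = "https://dli.example-region.myhuaweicloud.com"
    val projectId = "your_project_id"
    val token     = "your_iam_token"

    // GET /v1.0/{project_id}/jobs queries information about all jobs in the current project.
    val request = HttpRequest.newBuilder()
      .uri(URI.create(s"$endpoint/v1.0/$projectId/jobs"))
      .header("X-Auth-Token", token)   // assumption: IAM token-based authentication
      .GET()
      .build()

    val response = HttpClient.newHttpClient()
      .send(request, HttpResponse.BodyHandlers.ofString())

    println(s"HTTP ${response.statusCode()}")
    println(response.body())
  }
}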