Upload the billing details downloaded in Step 1: Obtaining Consumption Data to the created OBS bucket. Create a table on DLI. Log in to the DLI console. In the navigation pane, choose SQL Editor. Select spark for Engine, and select the queue and database.
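For reference, a minimal Spark SQL sketch of such a table, assuming the billing details were exported as CSV with a header row; the table name, column names, and OBS path are placeholders, not the actual export schema:

    CREATE TABLE IF NOT EXISTS bill_details (
      resource_name STRING,
      resource_type STRING,
      consume_time  STRING,
      amount        DOUBLE
    )
    USING csv
    OPTIONS (
      path 'obs://your-bucket/bill/',  -- placeholder OBS bucket and folder
      header 'true'                    -- the exported CSV includes a header row
    );

Run the statement in the SQL Editor; once the table exists, the billing details in the bucket can be queried directly.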
For an OBS table in JSON format, the key of a MAP column supports only the STRING type and cannot be NULL. Consequently, implicit conversion of inserted data from types that allow NULL values is not supported for MAP keys.
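A hedged illustration of this constraint, using a hypothetical table and a placeholder OBS path:

    CREATE TABLE IF NOT EXISTS event_json (
      id   INT,
      tags MAP<STRING, STRING>  -- for JSON OBS tables, MAP keys must be STRING
    )
    USING json
    OPTIONS (path 'obs://your-bucket/events/');  -- placeholder path

    INSERT INTO event_json VALUES (1, map('env', 'prod'));    -- valid: non-NULL STRING key
    -- INSERT INTO event_json VALUES (2, map(NULL, 'prod'));  -- rejected: a MAP key cannot be NULL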
Select Save Job Log, and specify the OBS bucket for saving job logs. Set the values of the parameters in bold in the following script as needed.
On the displayed page, click Create and use the JAR package uploaded to OBS to create a package. Then, in the left navigation pane, choose Job Management and click Flink Jobs.
Create a Hive OBS external table using Spark SQL and insert data.
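A minimal sketch of that step, assuming a hypothetical student table and a placeholder OBS path; in Spark SQL, a table created with an explicit LOCATION behaves as a Hive external table:

    CREATE TABLE IF NOT EXISTS student_hive (
      name  STRING,
      score INT
    )
    STORED AS PARQUET
    LOCATION 'obs://your-bucket/student/';  -- placeholder OBS path

    INSERT INTO student_hive VALUES ('Alice', 90), ('Bob', 85);

Because the data lives at the OBS location rather than in DLI-managed storage, dropping the table leaves the files in the bucket intact.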
Click the name of the corresponding Flink job, choose Run Log, click OBS Bucket, and locate the folder for the date of the log you want to view. In that date folder, find the subfolder whose name contains taskmanager, download the .out file, and view the result logs.
Select Save Job Log, and specify the OBS bucket for saving job logs. Storing authentication credentials such as usernames and passwords in code or plaintext poses significant security risks. You are advised to use DEW (Data Encryption Workshop) to manage credentials instead.
Submitting a Spark job: Upload the Python code file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
    CreateSparkJobResponse resp = client.createSparkJob(new CreateSparkJobRequest()
        .withBody(new CreateSparkJobRequestBody()
            .withQueue(queueName)
            .withSparkVersion("2.4.5")
            .withName("demo_spark_app")
            // The OBS path is truncated in the source; this value is a placeholder.
            .withFile("obs://your-bucket/path/demo_spark_app.jar")));
Submitting a Spark job: Upload the Java code file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
Configure Path for Partition: This permission allows you to set the path of a partition in a partitioned table to a specified OBS path.
Rename Table Partition: This permission allows you to rename partitions in a partitioned table.
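The standard Spark SQL statements these two permissions govern look roughly as follows; the table name, partition column, and OBS path are hypothetical:

    -- Configure Path for Partition: point an existing partition at a specified OBS path
    ALTER TABLE sales PARTITION (dt = '2023-01-01')
      SET LOCATION 'obs://your-bucket/sales/dt=2023-01-01/';

    -- Rename Table Partition: rename a partition in a partitioned table
    ALTER TABLE sales PARTITION (dt = '2023-01-01')
      RENAME TO PARTITION (dt = '2023-01-02');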