Table 1 Primitive data types

| Data Type | Description | Storage Space | Value Range | Support by OBS Table | Support by DLI Table |
| INT | Signed integer | 4 bytes | -2147483648 to 2147483647 | Yes | Yes |
| STRING | String | - | - | Yes | Yes |
| FLOAT | Single-precision floating point | 4 bytes | - | Yes | Yes |
| DOUBLE | Double-precision floating point | 8 bytes | - | Yes | Yes |
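As an illustration only, the following Spark SQL sketch creates an OBS table whose columns use the primitive types listed above. The table name, column names, data source format, and OBS path are assumptions, not taken from this document.

-- Minimal sketch (hypothetical names and OBS path): an OBS table whose
-- columns use the primitive types from Table 1.
CREATE TABLE IF NOT EXISTS demo_types_tbl (
  id     INT,     -- signed 4-byte integer
  name   STRING,  -- variable-length string
  score  FLOAT,   -- single-precision floating point
  amount DOUBLE   -- double-precision floating point
)
USING parquet
OPTIONS (path 'obs://your-bucket/demo_types_tbl');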
Select Save Job Log, and specify the OBS bucket for saving job logs. Storing authentication credentials such as usernames and passwords in code or plaintext poses significant security risks. You are advised to use DEW (Data Encryption Workshop) to manage credentials instead.
Select Save Job Log, and specify the OBS bucket for saving job logs. Set the values of the parameters in bold in the following script as needed.
For an OBS table in JSON format, the MAP key supports only the STRING type and cannot be NULL. Therefore, implicit conversion from inserted data formats that allow NULL values is not supported for MAP keys.
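The sketch below illustrates this restriction; the table name, columns, and OBS path are hypothetical. It creates an OBS table in JSON format with a MAP<STRING, STRING> column and inserts a row whose map keys are non-NULL strings.

-- Minimal sketch (hypothetical names and OBS path): an OBS table in JSON
-- format with a MAP column; the MAP key must be a non-NULL STRING.
CREATE TABLE IF NOT EXISTS demo_map_tbl (
  id    INT,
  attrs MAP<STRING, STRING>
)
USING json
OPTIONS (path 'obs://your-bucket/demo_map_tbl');

-- Keys are non-NULL string literals; values follow the declared value type.
INSERT INTO demo_map_tbl SELECT 1, map('color', 'red', 'size', 'L');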
Click the name of the corresponding Flink job, choose Run Log, click OBS Bucket, and locate the log folder by date. Open the folder for that date, find the folder whose name contains taskmanager, download the .out file, and view the result logs.
Upload the billing details downloaded in Step 1: Obtaining Consumption Data to the created OBS bucket. Create a table on DLI. Log in to the DLI console. In the navigation pane, choose SQL Editor. Select spark for Engine, and select the queue and database.
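As a sketch of the table-creation step, the statement below defines a table over the uploaded billing file. The table name, column names, types, CSV format, and OBS path are assumptions, since the actual schema of the billing details is not shown here.

-- Minimal sketch (hypothetical columns and OBS path): a table over the
-- uploaded billing details, assumed here to be CSV with a header row.
CREATE TABLE IF NOT EXISTS billing_details (
  bill_cycle    STRING,
  product_name  STRING,
  usage_amount  DOUBLE,
  cost          DOUBLE
)
USING csv
OPTIONS (path 'obs://your-bucket/billing/', header 'true');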
Submitting a Spark job
Upload the Java code file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
On the displayed page, click Create and use the JAR package uploaded to OBS to create a package. In the navigation pane on the left, choose Job Management and click Flink Jobs.
Create a Hive OBS external table using Spark SQL and insert data.
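A minimal Spark SQL sketch of this step is shown below; the table name, columns, storage format, and OBS path are hypothetical.

-- Minimal sketch (hypothetical names and OBS path): a Hive OBS external
-- table created with Spark SQL, followed by a sample insert.
CREATE EXTERNAL TABLE IF NOT EXISTS hive_obs_tbl (
  id   INT,
  name STRING
)
STORED AS PARQUET
LOCATION 'obs://your-bucket/hive_obs_tbl';

INSERT INTO hive_obs_tbl VALUES (1, 'alice'), (2, 'bob');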