In the displayed dialog box, select the OBS bucket path storing the checkpoint. The checkpoint save path is Bucket name/jobs/checkpoint/Directory starting with the job ID. Click OK and restart the Flink job. The job will be restored from the checkpoint path.
You can go to the Flink job list and choose More > Import Savepoint in the Operation column of a Flink job to import the latest checkpoint in OBS and restore the job from it.
Public services, such as Elastic Cloud Server (ECS), Elastic Volume Service (EVS), Object Storage Service (OBS), Virtual Private Cloud (VPC), Elastic IP (EIP), and Image Management Service (IMS), are shared within the same region.
When data is stored on OBS, any charges for storage resource usage will be billed by OBS, not DLI.
tbl_path: Storage location of the Delta table in the OBS bucket
target_alias: Alias of the target table
sub_query: Subquery
source_alias: Alias of the source table or source expression
merge_condition: Condition for associating the source table or expression with the target
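These parameters map onto a MERGE INTO statement as sketched below. The table names, OBS bucket, and columns here are hypothetical, for illustration only:

```sql
MERGE INTO delta.`obs://mybucket/delta/target_tbl` AS t   -- tbl_path (OBS location of the Delta table), t = target_alias
USING (SELECT id, name FROM source_tbl) AS s              -- sub_query, s = source_alias
ON t.id = s.id                                            -- merge_condition
WHEN MATCHED THEN UPDATE SET t.name = s.name
WHEN NOT MATCHED THEN INSERT (id, name) VALUES (s.id, s.name);
```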
For details about the supported configuration items, see Table 3.
result_format (String). Definition: Storage format of job results. Range: Currently, only CSV is supported.
result_path (String). Definition: OBS path of job results. Range: None.
execution_details_path (String). Definition: OBS path
Click the name of the corresponding Flink job, choose Run Log, click OBS Bucket, and locate the folder of the log you want to view according to the date.
Insert data.
sparkSession.sql("insert into testhbase values('95274','abc','Hongkong')");
Query data.
sparkSession.sql("select * from testhbase").show();
Submitting a Spark job: Generate a JAR file based on the code file and upload the JAR file to the OBS bucket.
Select Save Job Log, and specify the OBS bucket for saving job logs. Change the values of the parameters in bold as needed in the following script.
Flink job-related APIs: You can authorize DLI to access OBS, create and update SQL jobs and user-defined Flink jobs, run jobs in batches, and query the job list, job details, job execution plans, and job monitoring information.
Creating a datasource connection: VPC ReadOnlyAccess
Creating yearly/monthly resources: BSS Administrator
Creating a tag: TMS FullAccess and EPS FullAccess
Using OBS for storage: OBS OperateAccess
Creating an agency: Security Administrator
DLI ReadOnlyAccess: Read-only permissions
database_name: Name of the database, consisting of letters, numbers, and underscores (_)
table_name: Name of the table in the database, consisting of letters, numbers, and underscores (_)
using: Uses hudi to define and create a Hudi table.
table_comment: Description of the table
location_path: OBS
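A minimal sketch of how these parameters combine in a CREATE TABLE statement; the database name, table name, columns, and OBS bucket below are hypothetical:

```sql
CREATE TABLE IF NOT EXISTS my_db.hudi_tbl (   -- database_name.table_name (hypothetical)
  id   INT,
  name STRING
)
USING hudi                                    -- 'using': defines and creates a Hudi table
COMMENT 'example Hudi table'                  -- table_comment
LOCATION 'obs://mybucket/hudi/hudi_tbl';      -- location_path in the OBS bucket (hypothetical)
```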
Figure 1 Viewing logs
Obtain the folder of the archived logs in the OBS directory. The details are as follows: Spark SQL jobs: Locate the log folder whose name contains driver or container_xxx_000001.