Select Save Job Log, and specify the OBS bucket for saving job logs. Change the values of the parameters in bold as needed in the following script.
Click the name of the corresponding Flink job, choose Run Log, click OBS Bucket, and locate the folder of the log you want to view by date.
Flink job-related APIs: you can authorize DLI to access OBS; create and update SQL jobs and user-defined Flink jobs; run jobs in batches; and query the job list, job details, job execution plans, and job monitoring information.
Creating a datasource connection: VPC ReadOnlyAccess
Creating yearly/monthly resources: BSS Administrator
Creating a tag: TMS FullAccess and EPS FullAccess
Using OBS for storage: OBS OperateAccess
Creating an agency: Security Administrator
DLI ReadOnlyAccess: read-only permissions
Public services, such as Elastic Cloud Server (ECS), Elastic Volume Service (EVS), Object Storage Service (OBS), Virtual Private Cloud (VPC), Elastic IP (EIP), and Image Management Service (IMS), are shared within the same region.
database_name: Name of the database, consisting of letters, numbers, and underscores (_)
table_name: Name of the table in the database, consisting of letters, numbers, and underscores (_)
using: Uses hudi to define and create a Hudi table.
table_comment: Description of the table
location_path: OBS
The default value is false.
obs_bucket: No, String. Name of an OBS bucket.
smn_topic: No, String. SMN topic name.
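To make the optional parameters above concrete, here is a minimal sketch of how they might appear in a job request body. This assumes the "default value is false" note refers to a boolean log flag, and the bucket and topic values are example placeholders, not real resources.

```python
# Hedged sketch: optional job parameters from the table above, assembled
# into a request-body dict. Field values are illustrative placeholders.
body = {
    "log_enabled": True,            # assumed boolean flag; table notes default is false
    "obs_bucket": "my-dli-logs",    # No / String: name of an OBS bucket (example value)
    "smn_topic": "dli-job-alerts",  # No / String: SMN topic name (example value)
}
```

The surrounding API call and the remaining required fields are omitted; only the parameters described in the table are shown.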
Notes: Flink jobs can directly access DIS, OBS, and SMN data sources without using datasource connections. You are advised to use enhanced datasource connections to connect DLI to other data sources.
Figure 1 Viewing logs
Obtain the folder of the archived logs in the OBS directory. The details are as follows:
Spark SQL jobs: locate the log folder whose name contains driver or container_xxx_000001.
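The folder-matching rule above can be sketched as a small filter over OBS object prefixes. The sample prefixes below are made up for illustration; real listings would come from the OBS console or SDK.

```python
import re

# Hedged sketch: pick Spark SQL job log folders out of a prefix listing,
# matching folders whose names contain "driver" or "container_xxx_000001"
# (xxx standing for an application-specific ID). Sample data is invented.
prefixes = [
    "logs/2019-02-28/app-123/driver/",
    "logs/2019-02-28/app-123/container_1551334993_0001_01_000002/",
    "logs/2019-02-28/app-123/container_1551334993_0001_01_000001/",
]
spark_sql_logs = [
    p for p in prefixes
    if "driver" in p or re.search(r"container_.*_000001", p)
]
```

Only the driver folder and the container ending in _000001 survive the filter; other containers' logs belong to executors that are not the first one.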
Figure 2 Selecting a data catalog on the SQL editor page
When connecting DLI to a LakeFormation instance, you need to specify the OBS path for storing the database when creating the database.
Method 2: If you allow DLI to save job logs in OBS, view the output in the taskmanager.out file. For example:
+I(47.29.201.179 - - [28/Feb/2019:13:17:10 +0000] "GET /?p=1 HTTP/2.0" 200 5316 "https://domain.com/?
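The record above is an Apache-style access log line wrapped in a changelog marker (+I). A minimal sketch of parsing such a line with a regular expression, assuming the standard combined-log field layout (IP, identity, user, timestamp, request, status, size):

```python
import re

# Hedged sketch: parse one Apache combined-log record like the
# taskmanager.out output above. The field layout is an assumption.
LOG_PATTERN = re.compile(
    r'(?P<ip>\S+) \S+ \S+ \[(?P<time>[^\]]+)\] '
    r'"(?P<request>[^"]*)" (?P<status>\d{3}) (?P<size>\d+)'
)

line = '47.29.201.179 - - [28/Feb/2019:13:17:10 +0000] "GET /?p=1 HTTP/2.0" 200 5316'
fields = LOG_PATTERN.match(line).groupdict()
```

Named groups keep the downstream code readable: fields["status"] and fields["size"] arrive as strings and can be cast to int as needed.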
Insert the following data into the source Kafka topic:
202103251505050001,appshop,2021-03-25 15:05:05,500.00,400.00,2021-03-25 15:10:00,0003,Cindy,330108
202103241606060001,appShop,2021-03-24 16:06:06,200.00,180.00,2021-03-24 16:10:06,0001,Alice,330106
Read the ORC file in the OBS
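Each sample record above is a comma-separated order event with nine fields. A minimal sketch of splitting one record into named fields; the field names are assumptions inferred from the sample values, not taken from a schema in this excerpt:

```python
# Hedged sketch: split one sample Kafka record into named fields.
# Field names are inferred from the values and may not match the real schema.
record = ("202103251505050001,appshop,2021-03-25 15:05:05,500.00,400.00,"
          "2021-03-25 15:10:00,0003,Cindy,330108")
names = ["order_id", "order_channel", "order_time", "pay_amount",
         "real_pay", "pay_time", "user_id", "user_name", "area_id"]
order = dict(zip(names, record.split(",")))
```

In the actual pipeline the Flink SQL source table definition performs this mapping; the sketch only shows the field-to-value correspondence.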
Only the exclusive_cluster mode is supported.
.withLogEnabled(true) // Enable uploading job logs to OBS buckets.
.withObsBucket("YourObsBucketName") // OBS bucket name, used to store logs and checkpoint data.
log_enabled=True,  # Enable uploading job logs to OBS buckets.
obs_bucket="your_obs_bucket_name",  # OBS bucket name, used to store logs and checkpoints.
job_type="flink_opensource_sql_job",  # Job type.
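The keyword arguments above can be pictured in context with a small sketch. The submit function below is a hypothetical stand-in, not part of the real DLI SDK surface; only the argument names come from the fragment above.

```python
# Hedged sketch: the kwargs from the fragment above, passed to an
# illustrative stand-in for the SDK's job-creation call.
def submit_flink_job(**kwargs):
    # Hypothetical helper: real code would invoke the DLI SDK here.
    return kwargs

job = submit_flink_job(
    log_enabled=True,                     # upload job logs to OBS
    obs_bucket="your_obs_bucket_name",    # bucket for logs and checkpoints
    job_type="flink_opensource_sql_job",  # job type from the fragment above
)
```

The point of the sketch is only the parameter shape: logging and checkpointing both route through the single obs_bucket setting.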