Click the name of the target Flink job, choose Run Log, click OBS Bucket, and locate the log folder based on the date the job ran.
When using the metadata service provided by DLI, only OBS tables can be created.

options(
    type='mor',         -- Table type: mor or cow
    primaryKey='id',    -- Primary key, which can be a composite one but must be globally unique
    preCombineField
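For context, a minimal sketch of a CREATE TABLE statement using these options follows; the table name, columns, OBS path, and the preCombineField value ('ts') are hypothetical, not taken from the source.

    sparkSession.sql(
        "CREATE TABLE hudi_mor_tbl (id INT, name STRING, price DOUBLE, ts BIGINT) "
      + "USING hudi "
      + "OPTIONS (type = 'mor', primaryKey = 'id', preCombineField = 'ts') "  // options from the snippet above
      + "LOCATION 'obs://my-bucket/hudi/hudi_mor_tbl'");                      // hypothetical OBS path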
Request

Table 2 Request parameters

Parameter     Mandatory  Type            Description
obs_dir       Yes        String          OBS path for storing exported job files.
is_selected   Yes        Boolean         Whether to export a specified job.
job_selected  No         Array of Longs  Set of IDs of the jobs to be exported.
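For illustration only, a request body assembled from the parameters in Table 2 might be built as follows in Java; the bucket path and job IDs are placeholders, not values from the source.

    // Hypothetical export-job request body matching Table 2.
    String requestBody = "{"
            + "\"obs_dir\": \"obs://my-bucket/flink-jobs/export\","  // OBS path for exported files
            + "\"is_selected\": true,"                               // export only the listed jobs
            + "\"job_selected\": [100000, 100001]"                   // IDs of the jobs to export
            + "}";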
Improving the performance of the OBS Committer when writing small files
Improved the performance of Object Storage Service (OBS) when writing small files, increasing data transfer efficiency.
Table 1 SDK function matrix

Language  Function                           Content
Java      Submitting a SQL Job Using an SDK  This section provides instructions on how to authorize DLI's Java SDKs to access and operate on OBS buckets.
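The source does not show the SDK calls themselves, so the sketch below is hypothetical: DliClient, SqlJobRequest, and submitSqlJob are assumed names used only to illustrate the flow of submitting a SQL job against an OBS-backed table, not the actual DLI SDK API.

    // HYPOTHETICAL: the names below are illustrative assumptions, not the real DLI SDK.
    DliClient client = new DliClient(endpoint, accessKey, secretKey);   // authenticate to DLI
    SqlJobRequest request = new SqlJobRequest()
            .withQueue("default")                                       // DLI queue to run on
            .withSql("SELECT * FROM my_obs_table LIMIT 10");            // SQL against an OBS table
    client.submitSqlJob(request);                                       // submit the job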
On the OBS console, you can configure lifecycle rules for a bucket to periodically delete objects in it or change object storage classes. For details, see Configuring a Lifecycle Rule.
Tenant Administrator permissions are required for Flink jobs on DLI to access data in OBS, for example, to read OBS data sources, dump logs (including bucket authorization), enable checkpointing, and import or export jobs.
Select Save Job Log, and specify the OBS bucket for saving job logs. Change the values of the parameters in bold as needed in the following script.
Therefore, you must configure an OBS bucket, save job logs, and enable checkpointing. Do not set the scaling detection period too short; otherwise, the job may start and stop frequently. The time a job takes to restore after scaling depends on the savepoint size.
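For reference, a minimal sketch of enabling checkpointing with the open-source Flink API; the interval and the OBS checkpoint path are placeholder assumptions (on DLI, these settings are normally made on the console as described above).

    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class CheckpointedJob {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
            // Checkpoint every 60 s; a short interval means frequent snapshots,
            // a long one means more data to replay on recovery.
            env.enableCheckpointing(60_000);
            // Placeholder OBS path for checkpoint storage (Flink 1.14+ API).
            env.getCheckpointConfig().setCheckpointStorage("obs://my-bucket/flink/checkpoints");
            // ... define sources, transformations, and sinks here ...
            env.execute("checkpointed-job");
        }
    }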
Public services, such as Elastic Cloud Server (ECS), Elastic Volume Service (EVS), Object Storage Service (OBS), Virtual Private Cloud (VPC), Elastic IP (EIP), and Image Management Service (IMS), are shared within the same region.
You can go to the Flink job list and choose More > Import Savepoint in the Operation column of a Flink job to import the latest checkpoint in OBS and restore the job from it.
When data is stored on OBS, any charges for storage resource usage will be billed by OBS, not DLI.
Parameter        Description
tbl_path         Storage location of the Delta table in the OBS bucket
target_alias     Alias of the target table
sub_query        Subquery
source_alias     Alias of the source table or source expression
merge_condition  Condition for associating the source table or expression with the target table
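To make these parameters concrete, a hedged sketch of a Delta MERGE statement follows; the table names, OBS path, and id column are hypothetical.

    sparkSession.sql(
        "MERGE INTO delta.`obs://my-bucket/delta/target_tbl` AS t "  // tbl_path and target_alias
      + "USING (SELECT * FROM updates) AS s "                        // sub_query and source_alias
      + "ON t.id = s.id "                                            // merge_condition
      + "WHEN MATCHED THEN UPDATE SET * "
      + "WHEN NOT MATCHED THEN INSERT *");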
Insert data.
sparkSession.sql("insert into testhbase values('95274','abc','Hongkong')");
Query data.
sparkSession.sql("select * from testhbase").show();
Submitting a Spark job
Generate a JAR file from the code file and upload the JAR file to the OBS bucket.
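For completeness, a self-contained sketch of the Java driver around these two statements; the application name is a placeholder, and it assumes the testhbase table already exists.

    import org.apache.spark.sql.SparkSession;

    public class TestHBaseJob {
        public static void main(String[] args) {
            SparkSession sparkSession = SparkSession.builder()
                    .appName("dli-testhbase-demo")  // placeholder application name
                    .getOrCreate();
            sparkSession.sql("insert into testhbase values('95274','abc','Hongkong')");
            sparkSession.sql("select * from testhbase").show();
            sparkSession.stop();
        }
    }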