Flink jobs can directly access DIS, OBS, and SMN data sources without using datasource connections. Enhanced connections can only be created for yearly/monthly and pay-per-use queues.
OBS Bucket: Select an OBS bucket for storing job logs and grant access permissions to the OBS bucket as prompted.
Enable Checkpointing: Enable this function.
Enter a SQL statement in the editing pane. The following is an example. Modify the parameters in bold as needed.
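As a concrete illustration, a minimal Flink OpenSource SQL job reads from a source table and writes to a sink defined entirely in the statement, while checkpointing and log storage are configured on the job page. The following is a sketch only; the topic name, broker address, and column names are placeholder assumptions, not values from this document:

```sql
-- Hypothetical minimal job: Kafka source to Print sink.
-- All names and addresses below are placeholders.
CREATE TABLE kafkaSource (
  user_id STRING,
  amount DOUBLE
) WITH (
  'connector' = 'kafka',
  'topic' = 'testTopic',
  'properties.bootstrap.servers' = 'KafkaBrokerIp:9092',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'csv'
);

CREATE TABLE printSink (
  user_id STRING,
  amount DOUBLE
) WITH (
  'connector' = 'print'
);

INSERT INTO printSink SELECT user_id, amount FROM kafkaSource;
```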
Select Save Job Log, and specify the OBS bucket for saving job logs. Change the values of the parameters in bold as needed in the following script.
Insert the following data into the source Kafka topic:
202103251505050001,appShop,2021-03-25 15:05:05,500.00,400.00,2021-03-25 15:10:00,0003,Cindy,330108
202103241606060001,appShop,2021-03-24 16:06:06,200.00,180.00,2021-03-24 16:10:06,0001,Alice,330106
Read the Parquet file in the OBS bucket.
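For reference, a source-table schema matching the nine comma-separated fields in the sample rows above might look like the following. The column names are assumptions inferred from the sample values (order ID, channel, timestamps, amounts, user, area), not taken from this document:

```sql
-- Hypothetical schema for the nine-field CSV records above.
CREATE TABLE kafkaSource (
  order_id      STRING,
  order_channel STRING,
  order_time    STRING,
  pay_amount    DOUBLE,
  real_pay      DOUBLE,
  pay_time      STRING,
  user_id       STRING,
  user_name     STRING,
  area_id       STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'testTopic',
  'properties.bootstrap.servers' = 'KafkaBrokerIp:9092',
  'properties.group.id' = 'testGroup',
  'scan.startup.mode' = 'latest-offset',
  'format' = 'csv'
);
```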
Create a Kafka instance in DMS, enable SASL_SSL, download the SSL certificate, and upload the downloaded certificate client.jks to an OBS bucket.
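A source table that connects over SASL_SSL could be sketched as follows. The option names follow the open-source Flink Kafka connector, where settings prefixed with 'properties.' are passed through to the Kafka client; the truststore path, password, and credentials are placeholder assumptions, not values from this document:

```sql
-- Hypothetical Kafka source over SASL_SSL; all values are placeholders.
CREATE TABLE kafkaSource (
  msg STRING
) WITH (
  'connector' = 'kafka',
  'topic' = 'testTopic',
  'properties.bootstrap.servers' = 'KafkaIp:9093',
  'properties.security.protocol' = 'SASL_SSL',
  'properties.sasl.mechanism' = 'PLAIN',
  'properties.sasl.jaas.config' =
    'org.apache.kafka.common.security.plain.PlainLoginModule required username="user" password="pass";',
  'properties.ssl.truststore.location' = '/path/to/client.jks',
  'properties.ssl.truststore.password' = 'TruststorePassword',
  'format' = 'csv'
);
```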
column_name: The column name contains letters, digits, and underscores (_).
using: Uses hudi to define and create a Hudi table.
table_comment: Description of the table.
location_path: OBS path.
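A statement tying these parameters together might look like the following sketch; the table name, columns, comment text, and OBS path are placeholders, not values from this document:

```sql
-- Hypothetical Hudi table; names and the OBS path are placeholders.
CREATE TABLE hudi_sample_table (
  id    INT,
  name  STRING,
  price DOUBLE
)
USING hudi                                        -- using
COMMENT 'this is a hudi table'                    -- table_comment
LOCATION 'obs://bucket/path/hudi_sample_table';   -- location_path
```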
This parameter specifies the OBS address. Example: obs://bucket/path/CloudSearchService.cer
Key Handling
The Elasticsearch sink can work in either upsert mode or append mode, depending on whether a primary key is defined.
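The mode follows from the table definition: declaring a primary key puts the sink in upsert mode, while omitting it yields append mode. A minimal sketch, with a placeholder host and index name:

```sql
-- Hypothetical sink table. With PRIMARY KEY declared, the connector runs
-- in upsert mode; without it, the sink appends. Host/index are placeholders.
CREATE TABLE esSink (
  id   STRING,
  name STRING,
  PRIMARY KEY (id) NOT ENFORCED
) WITH (
  'connector' = 'elasticsearch-7',
  'hosts' = 'https://EsIp:9200',
  'index' = 'orders'
);
```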
Write Dirty Data: Specify this parameter if data that fails to be processed, or is filtered out, during job execution needs to be written to OBS for later viewing. Before writing dirty data, create an OBS link on the CDM console.
Before writing dirty data, create an OBS link. You can view the data on OBS later. Retain the default value No, meaning dirty data is not recorded. Click Save and Run. On the Job Management page, you can view the job execution progress and result.
If no partition storage path was specified when the partition was added with ADD PARTITION, dropping the partition deletes its directory from OBS and moves the data to the .Trash/Current folder.
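A minimal sketch of this sequence, with a placeholder table and partition key:

```sql
-- Hypothetical example: add a partition without a storage path, then drop it.
-- On the drop, the partition directory is removed from OBS and its data
-- lands in .Trash/Current. Table and partition names are placeholders.
ALTER TABLE student ADD PARTITION (dt = '2021-03-25');
ALTER TABLE student DROP PARTITION (dt = '2021-03-25');
```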
Resource Planning and Costs
Table 1 Resource planning and costs (columns: Resource, Description, Cost)
OBS: You need to create an OBS bucket and upload data to OBS for data analysis using DLI.
External FUNCTION example.namespace02.repeat (
    s varchar,
    n integer
)
RETURNS varchar
COMMENT 'repeat'
LANGUAGE JAVA
DETERMINISTIC
CALLED ON NULL INPUT
SYMBOL com.test.udf.hetuengine.functions.repeat
URI obs
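Once registered, the external function is called through its fully qualified name. A sketch, assuming the function repeats its string argument as its name and signature suggest:

```sql
-- Hypothetical invocation of the external function defined above.
SELECT example.namespace02.repeat('abc', 2);
```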
password'='**')");
Insert data.
sparkSession.sql("insert into dli_to_dws values(3,'Liu'),(4,'Xie')");
Query data.
sparkSession.sql("select * from dli_to_dws").show();
Response:
Submitting a Spark Job
Generate a JAR file based on the code file and upload the JAR file to the OBS bucket.
sparkSession.sql("insert into opentsdb_new_test values('Penglai', 'abc', '2021-06-30 18:00:00', 30.0)");
Query data.
sparkSession.sql("select * from opentsdb_new_test").show();
Response
Submitting a Spark job
Generate a JAR file based on the code file and upload the JAR file to the OBS bucket.