How Do I Do If the Compression Rate of an OBS Table Is High?
How Do I Map an OBS Table to a DLI Partitioned Table? Scenario When using a Flink SQL job, you need to create an OBS partitioned table for subsequent batch processing.
The following compression formats are supported: none, bzip2, deflate, and gzip. Storage Path: enter or select an OBS path. The export path must be a folder that does not yet exist in the OBS bucket; in other words, a new folder must be created in the target OBS directory.
How Do I Resolve an Unauthorized OBS Bucket Error? DLI's agency dli_admin_agency has been upgraded to dli_management_agency, which contains the permissions required for datasource operations, message notifications, and user authorization operations.
How Do I Create a Table Using JSON Data in an OBS Bucket? To associate JSON data nested in an OBS bucket, you can create a table in asynchronous mode.
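A minimal sketch of creating a table over JSON data with the DataSource syntax; the table name and the OBS path are placeholders, not values from the original FAQ:

```sql
-- Create a table backed by JSON files stored in OBS (placeholder path).
CREATE TABLE json_table
USING json
OPTIONS (path 'obs://bucketName/jsonPath');
```

Nested JSON fields can then be queried with dot notation, for example SELECT col.subCol FROM json_table.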
Use the DataSource syntax to create an OBS partitioned table.
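As an illustration, a DataSource-syntax partitioned table might be declared as follows; the table name, columns, file format, and OBS path are placeholders assumed for this sketch:

```sql
-- Create an OBS partitioned table using the DataSource syntax.
-- The partition column dt is declared in the column list and
-- referenced in PARTITIONED BY.
CREATE TABLE student (name STRING, score INT, dt STRING)
USING parquet
PARTITIONED BY (dt)
LOCATION 'obs://bucketName/filePath';
```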
Why Is the Error "Invalid OBS Bucket Name" Reported After a Flink Job Submission Fails? Symptom The storage path of the Flink Jar job checkpoints was set to an OBS bucket. The job failed to be submitted, and an error message indicating an invalid OBS bucket name was displayed.
How Can I Resolve Data Inconsistencies When Importing Data from DLI to OBS? Symptom When DLI is used to insert data into an OBS temporary table, only part of the data is imported.
Example To set the OBS path of partition dt='2008-08-08',city='xxx' in table student to obs://bucketName/fileName/student/dt=2008-08-08/city=xxx, run the following statement: ALTER TABLE student PARTITION(dt='2008-08-08',city='xxx') SET LOCATION 'obs://bucketName/fileName/student/dt=2008-08-08/city=xxx';
Updating Partitioned Table Data (Only OBS Tables Supported) Function This statement is used to update the partition information about a table in the Metastore.
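A minimal example of refreshing partition metadata in the Metastore; the table name student is a placeholder assumed for this sketch:

```sql
-- Scan the table's OBS path and register any partitions
-- that exist on storage but are missing from the Metastore.
MSCK REPAIR TABLE student;
```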
How Do I Do If I Can't Query Data After Manually Adding It to the Partition Directory of an OBS Table? Symptom Partition data is manually uploaded to a partition of an OBS table. However, the data cannot be queried using DLI SQL editor.
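For the symptom above, one common remedy is to register the manually uploaded partition explicitly; the table name, partition value, and OBS path below are placeholders assumed for this sketch:

```sql
-- Register a partition whose data was uploaded directly to OBS,
-- so that DLI SQL queries can see it.
ALTER TABLE student ADD IF NOT EXISTS
  PARTITION (dt = '2008-08-08')
  LOCATION 'obs://bucketName/filePath/student/dt=2008-08-08';
```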
Figure 1 Failed to create an OBS table in Avro format Possible Causes Spark 2.3.x does not support creating OBS tables in Avro format. Solution When using the DataSource syntax to create an OBS table in Avro format, select Spark 2.4.x or later. Parent topic: Spark Job O&M
Why Are Logs Not Written to the OBS Bucket After a DLI Flink Job Fails to Be Submitted for Running? Mode for storing generated job logs when a DLI Flink job fails to be submitted or executed.
How Do I Set Up AK/SK So That a General Queue Can Access Tables Stored in OBS? (Recommended) Solution 1: Using a Temporary AK/SK The temporary AK/SK is recommended.
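As a rough illustration of wiring a temporary AK/SK into a Spark job's OBS access, the helper below builds the relevant configuration entries. The fs.obs.* option names follow the OBSA-HDFS connector conventions; the helper name and the sample credential values are assumptions for this sketch, not part of the original FAQ:

```python
def build_obs_conf(ak, sk, security_token=None):
    """Build Spark/Hadoop config entries for OBS access.

    A temporary AK/SK additionally carries a security token;
    a permanent AK/SK omits it.
    """
    conf = {
        "spark.hadoop.fs.obs.access.key": ak,
        "spark.hadoop.fs.obs.secret.key": sk,
    }
    if security_token:  # present only for temporary credentials
        conf["spark.hadoop.fs.obs.session.token"] = security_token
    return conf

# Sample placeholder credentials; in practice these come from
# an agency/credential service, never hard-coded.
conf = build_obs_conf("TMP_AK", "TMP_SK", "TOKEN")
```

These entries would typically be passed to the SparkConf when the job is submitted.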
Why Is Error "IllegalArgumentException: Buffer size too small. size" Reported When Data Is Loaded to an OBS Foreign Table?
Spark Jar Jobs Using DEW to Acquire Access Credentials for Reading and Writing Data from and to OBS Scenario To write the output data of a Spark Jar job to OBS, AK/SK is required for accessing OBS.
Flink Jar Jobs Using DEW to Acquire Access Credentials for Reading and Writing Data from and to OBS Scenario To write the output data of a Flink Jar job to OBS, AK/SK is required for accessing OBS.
Why Do I Encounter the Error "verifyBucketExists on XXXX: status [403]" When Using a Spark Job to Access an OBS Bucket That I Have Permission to Access? This error message may be due to the OBS bucket being set as the DLI log bucket, which cannot be used for other purposes.
When importing data, you can select an existing OBS bucket path or create a new one, but only one OBS bucket path can be specified.