The detailed parameter description is as follows: address: Specifies the IP address of the OBS service endpoint or of the HDFS cluster. For OBS, this is the endpoint of the OBS service.
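For illustration, a minimal sketch of how the address option might be set when creating a foreign server, assuming the DFS_FDW foreign data wrapper; the server names, endpoint, AK/SK, and configuration path below are placeholders, not values from this document:

-- OBS: address is the endpoint of the OBS service (placeholder values).
CREATE SERVER obs_server FOREIGN DATA WRAPPER DFS_FDW OPTIONS (
    address 'obs.example-region.myhuaweicloud.com',
    type 'obs',
    access_key '<access_key>',
    secret_access_key '<secret_access_key>'
);

-- HDFS: address lists the NameNode IP:port pairs of the cluster (placeholder values).
CREATE SERVER hdfs_server FOREIGN DATA WRAPPER DFS_FDW OPTIONS (
    address '192.168.0.11:25000,192.168.0.12:25000',
    type 'hdfs',
    hdfscfgpath '<MRS_configuration_path>'
);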
Step 2: Uploading Data to OBS
Create an OBS bucket and upload the local CSV data to the bucket.
Note that <obs_bucket_name> in the following statement indicates the OBS bucket name. Only some regions are supported. For details about the supported regions and OBS bucket names, see Table 1. GaussDB(DWS) clusters do not support cross-region access to OBS bucket data.
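For illustration only (this is not the exact statement from the tutorial), a sketch of an OBS foreign table that uses the <obs_bucket_name> placeholder; the column list, file path, and AK/SK are placeholders, and the built-in gsmpp_server is assumed:

CREATE FOREIGN TABLE product_info_ext (
    product_id   integer,
    product_name varchar(200)
)
SERVER gsmpp_server
OPTIONS (
    location 'obs://<obs_bucket_name>/sample_data/',  -- OBS bucket and path holding the CSV files
    format 'csv',
    encoding 'utf8',
    delimiter ',',
    access_key '<access_key>',
    secret_access_key '<secret_access_key>'
)
READ ONLY;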
Step 1: Starting Preparations
This guide is an introductory tutorial that demonstrates how to create a sample GaussDB(DWS) cluster, connect to the cluster database, import the sample data from OBS, and analyze the sample data. You can use this tutorial to evaluate GaussDB(DWS).
You can set a global OBS bucket; any file directories you create are saved to a folder in this bucket by default.
Bucket: Name of the bucket used to store reports. Example value: test123. OBS Path: Storage directory, which can be customized.
Manually Creating a Foreign Server
In the CREATE FOREIGN TABLE (SQL on Hadoop or OBS) syntax for creating a foreign table, you need to specify a foreign server associated with the MRS data source connection, as sketched below.
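The following is a sketch of a foreign table that references such a server; the server name hdfs_server, the column list, and the foldername path are placeholders, and the format and foldername options assume the SQL on Hadoop syntax:

CREATE FOREIGN TABLE ft_product_info (
    product_id   integer,
    product_name varchar(200)
)
SERVER hdfs_server  -- foreign server associated with the MRS data source connection
OPTIONS (
    format 'orc',                                   -- file format of the Hive table data
    foldername '/user/hive/warehouse/<db>/<table>'  -- HDFS directory of the table (placeholder)
)
DISTRIBUTE BY ROUNDROBIN;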
Hive is interconnected with OBS: Go back to OBS Console, click the name of the bucket, choose Objects > Upload Object, and upload the product_info.txt file to the path of the product_info table in the OBS bucket.
This column only applies to OBS 3.0 tables and foreign tables with storage and compute decoupled. vfs_remote_read_bytes (bigint): Total number of bytes actually read from OBS by the OBS virtual file system.
CREATE TABLE:
- Asynchronous read and write for OBS tables with decoupled storage: OBS tables that use decoupled storage can perform asynchronous reads and writes.
- Parallel ANALYZE for OBS tables with decoupled storage: OBS tables with decoupled storage support parallel ANALYZE (see the example below).
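As a usage note, parallel ANALYZE is presumably triggered through the ordinary ANALYZE statement (the source does not show the syntax); the table name below is a placeholder for an OBS table with decoupled storage:

ANALYZE product_info;  -- statistics collection; parallelized for OBS tables with decoupled storage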
Add "region_name": "obs domain" to the $GAUSSHOME/etc/region_map file. region_name can be a string consisting of uppercase letters, lowercase letters, digits, slashes (/), or underscores (_). obs domain indicates the domain name of the OBS server.
Add "region_name": "obs domain" to the $GAUSSHOME/etc/region_map file. region_name can be a string consisting of uppercase letters, lowercase letters, digits, slashes (/), or underscores (_). obs domain indicates the domain name of the OBS server.
Figure 2 Enabling Kernel Audit Log Dump
Create Agency: Select an OBS bucket to store kernel audit data. If no OBS bucket is available, click View OBS Bucket to access the OBS console and create one.
Each folder can hold up to 100 script files, which are stored in the corresponding OBS bucket file directory. For convenience, the OBS bucket file address can be set globally. For details, see Global Settings.
"obs:bucket:GetBucketAcl", "obs:bucket:GetBucketVersioning", "obs:bucket:GetBucketStoragePolicy", "obs:bucket:ListBucketMultipartUploads", "obs:object:ListMultipartUploadParts", "obs:bucket:ListBucketVersions
Residual files in the OBS path are archived in the corresponding OBS database directory. count (bigint): Number of archived residual files. size (bigint): Total size of archived residual files, in bytes.