OBS Path: obs://sparksql/input/sparksql-test.txt
HDFS Path: /user/userinput
Figure 1: Importing data from OBS to HDFS
Submit the SQL statement: on the details page of the MRS cluster, click the Jobs tab. For details, see Running a Spark Job.
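As a minimal sketch of this import step in PySpark, assuming the cluster is already interconnected with OBS so the obs:// scheme resolves (the paths are the ones quoted above):

    # Minimal sketch, assuming obs:// access is already configured on the cluster
    # (see the storage-compute decoupling topics later in this section).
    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("obs-to-hdfs-import").getOrCreate()

    # Read the source file from OBS and copy it to the HDFS input directory.
    df = spark.read.text("obs://sparksql/input/sparksql-test.txt")
    df.write.mode("overwrite").text("hdfs://hacluster/user/userinput")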
… OBS to HBase
Typical Scenario: Importing Data from a Relational Database to ClickHouse
Typical Scenario: Importing Data from HDFS to ClickHouse
Using Loader to Import Data from an FTP Server to HBase
Using Loader to Import Data from a Relational Database to HDFS or OBS
Using Loader to Import Data from a Relational Database to HBase
Using Loader to Import Data from a Relational Database to Hive
Using Loader to Import Data from HDFS or OBS to HBase
The output path must be a directory that does not yet exist, for example, obs://obs-demo-analysis-hwt4/output/.
NOTE: To obtain the AK/SK for accessing OBS, perform the following steps: Log in to the Huawei Cloud management console.
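As a hedged illustration of where the AK/SK end up, they can be supplied to the OBS connector through Hadoop configuration properties; the fs.obs.* property names below follow the hadoop-obs (OBSA) connector and the endpoint is a placeholder, so verify both against your cluster's documentation:

    from pyspark.sql import SparkSession

    # Sketch only: property names assume the hadoop-obs (OBSA) connector;
    # replace <AK>, <SK>, and <obs-endpoint> with values for your account/region.
    spark = (
        SparkSession.builder
        .appName("obs-ak-sk-demo")
        .config("spark.hadoop.fs.obs.access.key", "<AK>")
        .config("spark.hadoop.fs.obs.secret.key", "<SK>")
        .config("spark.hadoop.fs.obs.endpoint", "<obs-endpoint>")
        .getOrCreate()
    )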
Configuring Storage-Compute Decoupling for an MRS Cluster
Configuration Process
Interconnecting an MRS Cluster with OBS Using an IAM Agency
Interconnecting an MRS Cluster with OBS Through Guardian
FAQ About Decoupled Storage and Compute
Migrating Data from MRS HDFS to OBS: This practice demonstrates how to migrate file data from MRS HDFS to OBS using CDM.
System Interconnection
Using DBeaver to Access Phoenix: This practice describes how to use DBeaver to access Phoenix.
Changing the storage location of the table partition to another bucket does not take effect:
alter table table_name partition(dt date) set location "obs://OBS bucket 2/Folder in the bucket";
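A hedged sketch of issuing this DDL through Spark SQL; the table name, partition value, and bucket path are placeholders, and per the note above the new location must stay in the same bucket for the change to take effect:

    from pyspark.sql import SparkSession

    spark = SparkSession.builder.appName("partition-relocation").getOrCreate()

    # Relocating a partition within the same OBS bucket; pointing it at a
    # different bucket does not take effect, per the note above.
    spark.sql("""
        ALTER TABLE table_name PARTITION (dt = '2024-01-01')
        SET LOCATION 'obs://bucket-1/new-folder'
    """)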
The OBS program path should start with obs://, for example, obs://wordcount/program/XXX.jar. The HDFS program path should start with hdfs://, for example, hdfs://hacluster/user/XXX.jar.
Data Analytics
Using Spark2x to Analyze IoV Drivers' Driving Behavior
Using Hive to Load HDFS Data and Analyze Book Scores
Using Hive to Load OBS Data and Analyze Enterprise Employee Information
Using Flink Jobs to Process OBS Data
Consuming Kafka Data Using Spark Streaming Jobs
Supported OBS monitoring.
Upgraded the OBS packages.
Resolved the issue that some data was not inserted when 10 data records were concurrently inserted through hive-jdbc.
Resolved the issue that Hive occasionally reports a Kryo deserialization failure.
OBS: Logs are saved to OBS. This is the default option. Go to step 5. Set OBS Path to the OBS path for storing service log files. Enter a full path that does not start with a slash (/) and is no more than 900 bytes. If the path does not exist, the system creates it automatically.
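To make the two path constraints concrete, here is a hypothetical helper written for this doc (not an MRS API): the path must not start with "/" and must be at most 900 bytes when UTF-8 encoded.

    # Hypothetical helper, for illustration only.
    def is_valid_obs_log_path(path: str) -> bool:
        return (
            bool(path)
            and not path.startswith("/")
            and len(path.encode("utf-8")) <= 900
        )

    assert is_valid_obs_log_path("mrs-logs/cluster-01/")
    assert not is_valid_obs_log_path("/mrs-logs/")  # leading slash is rejected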
Why Do Hive Tables in the OBS Directory Fail to Be Deleted?
Why Does an OBS Quickly Deleted Directory Not Take Effect After Being Added to the Customized Hive Configuration?
Fine-grained authentication for OBS storage-compute decoupled clusters
If you want to perform fine-grained permission control on OBS resources in a storage-compute decoupled cluster, MRS provides a fine-grained permission control solution based on IAM agencies.
Request body:
{
  "job_name": "MapReduceTest",
  "job_type": "MapReduce",
  "arguments": [
    "obs://obs-test/program/hadoop-mapreduce-examples-x.x.x.jar",
    "wordcount",
    "obs://obs-test/input/",
    "obs://obs-test/job/mapreduce/output"
  ],
  "properties": …
}
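A minimal sketch of submitting this body with Python's requests library, assuming the MRS v2 job-execution endpoint; the endpoint host, project ID, cluster ID, and IAM token are placeholders to fill in for your environment:

    import requests

    # Placeholders: region, project and cluster IDs, and the IAM token.
    url = ("https://mrs.<region>.myhuaweicloud.com"
           "/v2/<project_id>/clusters/<cluster_id>/job-executions")

    body = {
        "job_name": "MapReduceTest",
        "job_type": "MapReduce",
        "arguments": [
            "obs://obs-test/program/hadoop-mapreduce-examples-x.x.x.jar",
            "wordcount",
            "obs://obs-test/input/",
            "obs://obs-test/job/mapreduce/output",
        ],
    }

    resp = requests.post(url, json=body, headers={"X-Auth-Token": "<token>"})
    resp.raise_for_status()
    print(resp.json())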
Importing data from an SFTP server to HDFS or OBS
Importing data from an SFTP server to HBase
Importing data from an SFTP server to Phoenix tables
Importing data from an SFTP server to Hive tables
Importing data from an FTP server to HDFS or OBS
Importing data from an FTP server to HBase