Set this parameter to the OBS storage path of the Hive catalog, which is obtained in Creating a LakeFormation Instance.
Figure 1 Configuring hive.metastore.warehouse.dir
Click Save.
Interconnecting Spark with OBS
If your cluster does not have the Spark component, skip this step.
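For context, hive.metastore.warehouse.dir is a standard Hive configuration property. Set through hive-site.xml it would look like the following sketch; the bucket path is a placeholder for the value obtained from your LakeFormation instance:

```xml
<property>
  <name>hive.metastore.warehouse.dir</name>
  <!-- Placeholder: substitute the OBS path from your LakeFormation instance -->
  <value>obs://example-bucket/hive/warehouse</value>
</property>
```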
data from an SFTP server to HDFS or OBS
Importing data from an SFTP server to HBase
Importing data from an SFTP server to Phoenix tables
Importing data from an SFTP server to Hive tables
Importing data from an FTP server to HDFS or OBS
Importing data from an FTP server to HBase
Importing
OBS Permission Control: Click Manage and modify the mapping between MRS users and OBS permissions. For details, see Configuring Fine-Grained OBS Access Permissions for MRS Cluster Users.
Logging: Used to collect logs about cluster creation and scaling failures.
Question: When a DistCP job exports data and some of the files already exist in OBS, how does the job handle those files?
Answer: The DistCP job overwrites the existing files in OBS.
Parent topic: Job Management
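For reference, open-source Hadoop DistCp skips existing target files by default and only replaces them when the -overwrite (or -update) option is supplied. A command of the following shape reproduces the overwrite behavior described above; the source and target paths are illustrative placeholders:

```shell
# Copy from HDFS to OBS, overwriting files that already exist at the target.
hadoop distcp -overwrite hdfs://hacluster/user/output obs://example-bucket/output
```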
You can enter the path or click HDFS or OBS to select a file. The path can contain a maximum of 1,023 characters. It cannot contain special characters (;|&>,<'$) and cannot be left blank or all spaces. The OBS program path starts with obs://.
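The path rules above can be sketched as a small validation check. This is a hedged illustration of the stated rules (non-blank, at most 1,023 characters, none of the listed special characters, and an obs:// or hdfs:// scheme), not the console's actual validation logic:

```python
MAX_PATH_LEN = 1023
# Special characters the console rejects, per the rule above
FORBIDDEN = set(";|&>,<'$")

def is_valid_program_path(path: str) -> bool:
    """Return True if the path satisfies the documented rules."""
    if not path or not path.strip():          # blank or all spaces
        return False
    if len(path) > MAX_PATH_LEN:              # length limit
        return False
    if any(ch in FORBIDDEN for ch in path):   # forbidden special characters
        return False
    # Program paths must carry an OBS or HDFS scheme prefix
    return path.startswith(("obs://", "hdfs://"))

print(is_valid_program_path("obs://wordcount/program/app.jar"))  # True
print(is_valid_program_path("obs://bucket/bad;name.jar"))        # False
```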
The submission command example is as follows (the topology name is obs-test):
storm jar /opt/jartarget/storm-examples-1.0.jar com.huawei.storm.example.obs.SimpleOBSTopology obs://my-bucket obs-test
After the topology is submitted successfully, log in to OBS Browser to view the topology
OBS: Indicates that backup files are stored in an OBS directory. You need to set Target Path to the OBS directory for storing backup data.
Set Maximum Number of Recovery Points to the number of snapshots that can be retained in the cluster.
Creating a VPC and Subnet
Object Storage Service (OBS)
OBS stores the following user data:
MRS job input data, such as user programs and data files
MRS job output data, such as result files and log files of jobs
In MRS clusters, HDFS, Hive, MapReduce, YARN, Spark, Flume, and Loader
Interconnection with Other Cloud Services
Using MRS Spark SQL to Access GaussDB(DWS)
Connecting to the OBS File System with an MRS Hive Table
Interconnecting Hive with CSS
Prerequisites
You have uploaded the program packages and data files required by jobs to OBS or HDFS.
If the job program needs to read and analyze data in the OBS file system, you need to configure storage-compute decoupling for the MRS cluster.
Importing Doris Data
Importing Data to Doris with Broker Load
Importing OBS Data to Doris with Broker Load
Importing Data to Doris with Stream Load
Parent topic: Using Doris
ClickHouse Data Import
Interconnecting ClickHouse with RDS for MySQL
Interconnecting ClickHouse with OBS
Synchronizing Kafka Data to ClickHouse
Importing DWS Table Data to ClickHouse
Using ClickHouse to Import and Export Data
Parent topic: Using ClickHouse
Set Program Path to the path where programs are stored on OBS, for example, obs://sparkpi/program/spark-examples_2.11-2.1.0.jar. In Program Parameter, select --class for Parameter and set Value to org.apache.spark.examples.SparkPi. Set Parameters to 10.
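The console settings above correspond roughly to a client-side spark-submit invocation. The following is a sketch only; submitting an obs:// jar path assumes the cluster has storage-compute decoupling configured, and the paths mirror the example values above:

```shell
# Sketch of the equivalent client submission (paths are illustrative)
spark-submit \
  --class org.apache.spark.examples.SparkPi \
  obs://sparkpi/program/spark-examples_2.11-2.1.0.jar \
  10
```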
Loader supports the following data export modes:
Exporting data from HDFS or OBS to an SFTP server
Exporting data from HDFS or OBS to a relational database
Exporting data from HBase to an SFTP server
Exporting data from HBase to a relational database
Exporting data from Phoenix tables
The OBS program path should start with obs://, for example, obs://wordcount/program/XXX.jar. The HDFS program path should start with hdfs://, for example, hdfs://hacluster/user/XXX.jar.
Developing an HDFS Application
HDFS Development Plan
Initializing HDFS
Writing Data to an HDFS File
Appending HDFS File Content
Reading an HDFS File
Deleting an HDFS File
HDFS Colocation
Setting HDFS Storage Policies
Using HDFS to Access OBS
Parent topic: HDFS Development Guide
For a storage-compute decoupled system, OBS files may fail to be accessed. As a result, upper-layer component services cannot process data.
Possible Causes
The meta role of the MRS cluster is abnormal.
Figure 2 MRS Job Submission Process
Data processed by MRS jobs is usually stored in OBS or HDFS. Before creating a job, upload the data to be analyzed to an OBS file system or to HDFS in the MRS cluster.
ECS
BMS
VPC
EVS
Image Management Service (IMS)
OBS
EIP
SMN
IAM
For details about how to view and modify quotas, see Quotas.