DataArts Studio Failed to Schedule Spark Jobs Issue DataArts Studio fails to schedule jobs, and a message is displayed indicating that data in the /thriftserver/active_thriftserver directory cannot be read. Symptom DataArts Studio fails to schedule jobs, and the following error is
Failed to Run Jobs Related to the sftp-connector Connector Symptom The jobs related to the sftp-connector connector fail to be executed and "Failed to obtain the SFTP stream" is displayed. xxx (Failed to send channel request.) Error "subsystem request failed on channel 0. Connection
Configuring the Distributed Cache to Execute MapReduce Jobs Scenarios This section applies to MRS 3.x or later. Distributed caching is useful in the following scenarios: Rolling upgrade During the upgrade, applications must keep the file content (JAR files or configuration files) unchanged
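For this entry, a minimal sketch of the idea, assuming illustrative HDFS paths and archive names that are not taken from the MRS documentation: the job pins the MapReduce framework to a versioned archive cached from HDFS via the standard Hadoop properties mapreduce.application.framework.path and mapreduce.application.classpath, and ships an application file through the distributed cache, so a rolling upgrade of the cluster does not change the bits the job sees.

// Sketch only: mapper/reducer setup and submission are omitted.
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.mapreduce.Job;

public class DistributedCacheJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Pin the MapReduce framework to a versioned archive in HDFS (illustrative path)
        // so an upgrade of the cluster binaries does not affect running jobs.
        conf.set("mapreduce.application.framework.path",
                 "hdfs:///mapred/framework/hadoop-mapreduce-3.1.1.tar.gz#mr-framework");
        conf.set("mapreduce.application.classpath",
                 "$PWD/mr-framework/*:$HADOOP_CONF_DIR");

        Job job = Job.getInstance(conf, "distributed-cache-demo");
        // Ship an application-specific file through the distributed cache as well.
        job.addCacheFile(new URI("hdfs:///apps/demo/lookup.properties#lookup.properties"));
        // ... set mapper/reducer, input/output paths, then submit ...
    }
}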
Changing the Password for an OMS Database Access User This section describes how to regularly change the password for an OMS database access user to enhance system O&M security. Impact on the System The OMS service needs to be restarted for the new password to take effect. The service
Where Are the Execution Logs of Spark Jobs Stored? Logs of unfinished Spark jobs are stored in the /srv/BigData/hadoop/data1/nm/containerlogs/ directory on the Core node. Logs of finished Spark jobs are stored in the /tmp/logs/Username/logs directory in HDFS. Parent topic: Job
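A minimal sketch of browsing the finished-job location programmatically with a generic Hadoop client setup; only the /tmp/logs/<username>/logs path pattern comes from the entry above, the class name and defaults are illustrative.

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ListSparkJobLogs {
    public static void main(String[] args) throws Exception {
        String user = args.length > 0 ? args[0] : System.getProperty("user.name");
        // Aggregated logs of finished jobs live under /tmp/logs/<username>/logs in HDFS
        // (per the entry above); adjust if yarn.nodemanager.remote-app-log-dir differs.
        Path logDir = new Path("/tmp/logs/" + user + "/logs");

        try (FileSystem fs = FileSystem.get(new Configuration())) {
            for (FileStatus appDir : fs.listStatus(logDir)) {
                // One sub-directory per finished application (application_<ts>_<id>).
                System.out.println(appDir.getPath());
            }
        }
    }
}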
An Error Is Reported When Oozie Schedules HiveSQL Jobs Symptom In an MRS 3.x cluster, Oozie fails to schedule Hive jobs. The HiveSQL logs show that the scheduling task is executed successfully but the Yarn job fails. The error information is as follows: java.io.IOException:output.properties
Enhancing the Joins of Large and Small Tables in Flink Jobs This topic is available for MRS 3.3.0 or later only. Joining Big and Small Tables When you join two Flink streams, one side may be a big table and the other a small table. Small table data is broadcast to every join task, and large table
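The MRS enhancement itself is configured at the job/SQL level; purely to illustrate the broadcast idea described above (every parallel join task receives a full copy of the small-table data), here is a generic Flink DataStream sketch using broadcast state. The sources, record formats, and field layout are hypothetical.

import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.co.BroadcastProcessFunction;
import org.apache.flink.util.Collector;

public class BroadcastJoinSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Hypothetical streams: the "small table" is a slowly changing dimension,
        // the "large table" is the high-volume fact stream.
        DataStream<String> smallTable = env.fromElements("1,deviceA", "2,deviceB");
        DataStream<String> largeTable = env.socketTextStream("localhost", 9999);

        MapStateDescriptor<String, String> dimState =
                new MapStateDescriptor<>("dim", Types.STRING, Types.STRING);
        // Every parallel join task receives a full copy of the small table.
        BroadcastStream<String> broadcastDim = smallTable.broadcast(dimState);

        largeTable.connect(broadcastDim)
                .process(new BroadcastProcessFunction<String, String, String>() {
                    @Override
                    public void processElement(String fact, ReadOnlyContext ctx,
                                               Collector<String> out) throws Exception {
                        // Look up the broadcast small-table record by join key.
                        String key = fact.split(",")[0];
                        String dim = ctx.getBroadcastState(dimState).get(key);
                        out.collect(fact + " -> " + (dim == null ? "N/A" : dim));
                    }

                    @Override
                    public void processBroadcastElement(String dim, Context ctx,
                                                        Collector<String> out) throws Exception {
                        // Store each small-table record in broadcast state on every task.
                        String[] kv = dim.split(",");
                        ctx.getBroadcastState(dimState).put(kv[0], kv[1]);
                    }
                })
                .print();

        env.execute("broadcast-join-sketch");
    }
}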
Data Cannot Be Saved When Loader Jobs Are Configured Question When Internet Explorer 10 or 11 is used to access the Loader page and submit data, an error is reported. Answer Symptom After data is saved and submitted, an error similar to "Invalid query parameter jobgroup id. cause:
ALM-12039 Active/Standby OMS Databases Not Synchronized Description The system checks the data synchronization status between the active and standby OMS databases every 10 seconds. This alarm is generated when the synchronization status cannot be queried for 30 consecutive times or
ALM-12062 OMS Parameter Configurations Mismatch with the Cluster Scale Alarm Description The system checks whether the OMS parameter configurations match the cluster scale at the top of every hour. If the OMS parameter configurations do not meet the cluster scale requirements, the system
What Types of Spark Jobs Can Be Submitted in a Cluster? Question: What Types of Spark Jobs Can Be Submitted in a Cluster? Answer: MRS clusters support Spark jobs submitted in Spark, Spark Script, or Spark SQL mode. Parent topic: Job Management
Querying the exe Object List of Jobs (Deprecated) Function This API is used to query the exe object list of all jobs. This API is incompatible with Sahara. URI Format GET /v1.1/{project_id}/job-exes Parameter description: Table 1 lists the URI parameter project_id
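A minimal sketch of calling this endpoint, assuming a placeholder service endpoint and an IAM token passed in the X-Auth-Token header (typical for Huawei Cloud REST APIs, but verify against the API reference); only the GET /v1.1/{project_id}/job-exes path comes from the entry above.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.HttpURLConnection;
import java.net.URL;
import java.nio.charset.StandardCharsets;

public class ListJobExes {
    public static void main(String[] args) throws Exception {
        // Hypothetical values: replace with your real endpoint, project ID, and token.
        String endpoint = "https://mrs.example-region.myhuaweicloud.com";
        String projectId = "<project_id>";
        String token = "<IAM token>";

        // GET /v1.1/{project_id}/job-exes, as listed in the entry above.
        URL url = new URL(endpoint + "/v1.1/" + projectId + "/job-exes");
        HttpURLConnection conn = (HttpURLConnection) url.openConnection();
        conn.setRequestMethod("GET");
        // Assumed authentication header; confirm the required headers for your account.
        conn.setRequestProperty("X-Auth-Token", token);
        conn.setRequestProperty("Content-Type", "application/json");

        try (BufferedReader in = new BufferedReader(
                new InputStreamReader(conn.getInputStream(), StandardCharsets.UTF_8))) {
            in.lines().forEach(System.out::println);  // raw JSON list of exe objects
        } finally {
            conn.disconnect();
        }
    }
}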
ALM-50401 Number of JobServer Jobs Waiting to Be Executed Exceeds the Threshold Alarm Description The system checks the number of jobs submitted to JobServer every 30 seconds. This alarm is generated when the number of jobs to be executed exceeds 800. Alarm Attributes Alarm ID Alarm
Class Cannot Be Found After Flume Submits Jobs to Spark Streaming Issue After Flume submits jobs to Spark Streaming, the class cannot be found. Symptom After the Spark Streaming code is packed into a JAR file and submitted to the cluster, an error message is displayed indicating that
A Large Number of Jobs Occupying Resources After Yarn Is Started in a Cluster Symptom In an MRS 2.x cluster or earlier, a large number of jobs are generated after Yarn is started, occupying computing resources of the cluster. Cause Analysis If the source IP address of the Any protocol
What Are the Differences Between the Client Mode and Cluster Mode of Spark Jobs? You need to understand the concept of ApplicationMaster before you can grasp the essential differences between Yarn-client and Yarn-cluster. In Yarn, each application instance has an ApplicationMaster process
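As a hedged illustration of switching between the two deploy modes when submitting programmatically (the JAR path, main class, and arguments below are placeholders, not from the source), Spark's SparkLauncher API accepts yarn as the master and client or cluster as the deploy mode:

import org.apache.spark.launcher.SparkLauncher;

public class SubmitSparkJob {
    public static void main(String[] args) throws Exception {
        // "client": the driver runs in this JVM's process tree; the ApplicationMaster
        //           only requests executors from Yarn.
        // "cluster": the driver runs inside the ApplicationMaster on a Yarn node.
        Process spark = new SparkLauncher()
                .setMaster("yarn")
                .setDeployMode("cluster")          // switch to "client" for Yarn-client mode
                .setAppResource("/opt/app/example-spark-job.jar")
                .setMainClass("com.example.SparkJobMain")
                .addAppArgs("--input", "/tmp/input")
                .launch();
        int exitCode = spark.waitFor();
        System.out.println("spark-submit exited with code " + exitCode);
    }
}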
An Error Is Reported When a Yarn Client Command Is Used to Query Historical Jobs Symptom When a Yarn client command is executed to query historical jobs, the following error is reported and the process is terminated. Cause Analysis The memory allocated to the client is insufficient
How Do I Forcibly Stop MapReduce Jobs Executed by Hive? Question How do I stop a MapReduce task manually if the task is suspended for a long time? Answer Log in to FusionInsight Manager. Choose Cluster > Name of the desired cluster > Services > Yarn. On the left pane, click ResourceManager
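The documented answer goes through the ResourceManager web UI on FusionInsight Manager. As a programmatic alternative sketch (an assumption, not part of the source procedure), the Yarn client API can kill the stuck application by its application ID:

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.yarn.api.records.ApplicationId;
import org.apache.hadoop.yarn.client.api.YarnClient;
import org.apache.hadoop.yarn.conf.YarnConfiguration;

public class KillStuckJob {
    public static void main(String[] args) throws Exception {
        // Expects the application ID shown on the ResourceManager page,
        // e.g. application_1700000000000_0001 (illustrative value).
        ApplicationId appId = ApplicationId.fromString(args[0]);

        Configuration conf = new YarnConfiguration();
        YarnClient yarnClient = YarnClient.createYarnClient();
        yarnClient.init(conf);
        yarnClient.start();
        try {
            yarnClient.killApplication(appId);   // asks the ResourceManager to kill the app
            System.out.println("Kill request sent for " + appId);
        } finally {
            yarnClient.stop();
        }
    }
}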