Solution: Delete the file with an incorrect format from the HDFS directory or replace it with a correct one.
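As a minimal sketch, the removal can be done with the standard HDFS shell; the warehouse path and file name below are hypothetical examples, not taken from the original page:

# Inspect the table directory, then remove the malformed file
hdfs dfs -ls /user/hive/warehouse/demo.db/demo_table
hdfs dfs -rm /user/hive/warehouse/demo.db/demo_table/bad_format.txt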
Run the unlink LocalBackup command to delete the LocalBackup soft link. Run the mkdir -p LocalBackup command to create the LocalBackup directory. Run the chown -R omm:wheel LocalBackup command to change the user and group to which the directory belongs.
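A minimal sketch of this sequence, assuming it is executed from the directory that contained the LocalBackup soft link:

unlink LocalBackup              # remove the stale soft link
mkdir -p LocalBackup            # recreate LocalBackup as a real directory
chown -R omm:wheel LocalBackup  # hand ownership to user omm, group wheel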
Modify the dfs.client.failover.proxy.provider.hacluster configuration as follows:
<property>
  <name>dfs.client.failover.proxy.provider.hacluster</name>
  <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
You can also delete the preceding
If the default policy is deleted by mistake, you can manually delete the service and then restart the component service.
Figure 1: Relationships between Ranger and other components
If another service (such as YARN) needs to access HDFS to add, delete, modify, or query data, it must obtain the corresponding TGT (Ticket Granting Ticket) and ST (Service Ticket) for secure access.
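As a hedged illustration of how a client obtains a TGT before touching HDFS (the keytab path and principal name are hypothetical):

kinit -kt /opt/client/user.keytab yarn_user@HADOOP.COM  # obtain a TGT with the service keytab
klist                                                   # verify the ticket cache
hdfs dfs -ls /tmp                                       # any HDFS operation now triggers the ST exchange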
When Too Many HBase Connections Occupy the Network Ports
HBase BulkLoad Tasks of 210,000 Map Tasks and 10,000 Reduce Tasks Failed to Be Executed
Modified and Deleted Data Can Still Be Queried by the Scan Command
Failed to Create Tables When the Region Is in FAILED_OPEN State
How to Delete
Set the job export type:
ALL: exports all jobs.
Specify Job: exports specified jobs. Select Specify Job and then, in the job list, select the jobs to be exported.
Specify Group: exports all the jobs in a specified group. Select Specify Group.
Application edit permission: Users who have this permission can create, edit, and delete cluster connections and data connections, create stream tables, and create and run jobs. They can also view current applications.
To handle this problem, use hdfs fsck to check the health status of the file blocks, delete the damaged or lost blocks, and run the task again.
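A minimal sketch of the check-and-clean sequence; /user/example is a hypothetical path standing in for the affected directory:

hdfs fsck /user/example -list-corruptfileblocks  # list blocks reported as corrupt
hdfs fsck /user/example -delete                  # delete the corrupted files so the task can be rerun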
Application edit permission Users who have the permission can create, edit, and delete cluster connections and data connections. They can also create stream tables as well as create and run jobs. In addition, users who have the permission can view current applications.
Delete the following custom parameters based on the application scenario and save the configuration:

Parameter                 Value  Configuration File             Description
implicit-conversion       true   coordinator.config.properties  Implicit conversion
udf-implicit-conversion   true   coordinator.config.properties  UDF implicit conversion
For example, if you run the delete from tableName command in Hudi to physically delete table data, the table data still exists on the destination DWS or ClickHouse.
Binary logging (enabled by default) and GTID have been enabled for the MySQL database.
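As a hedged sketch of the Hudi delete mentioned above, issued through the Spark SQL client (the database, table, and filter are hypothetical):

# Physically deletes matching rows in the Hudi table; the change is NOT
# replicated to a downstream DWS or ClickHouse copy
spark-sql -e "DELETE FROM hudi_demo_db.demo_table WHERE id = 1;"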
Log in to the active master node and delete temporary files.
REMOVE TABLE hbase_tablename [WHERE where_condition];
This statement deletes data that meets the specified criteria from a Hive on HBase table.
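A hedged usage example, run through the Hive beeline client; the table name and condition are hypothetical:

# Delete the rows of the Hive on HBase table whose id column equals 5
beeline -e "REMOVE TABLE hbase_demo_table WHERE id = 5;"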
Procedure: Check the disk capacity and delete unnecessary files. On the FusionInsight Manager portal, choose Cluster > Name of the desired cluster > Services > HDFS.
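A minimal sketch of the capacity check and cleanup from an HDFS client node; the directory being cleaned is a hypothetical example:

hdfs dfsadmin -report                         # overall HDFS capacity and usage
hdfs dfs -du -h /tmp                          # find large, unneeded directories
hdfs dfs -rm -r -skipTrash /tmp/obsolete_dir  # free space immediately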
If resources do not need to be managed in a security zone, the Ranger administrator can click Delete to delete the security zone.
Configuring Permission Policies in a Security Zone
Log in to the Ranger management page as the Ranger administrator of a security zone.
If the default policy is deleted by mistake, you can manually delete the service and then restart the component service. Choose Access Manager > Reports to view all security access policies of each component.
Delete the container folder (if any) from the /sys/fs/cgroup/cpuset/hadoop-yarn/ directory. Delete all CPUs configured in the cpuset.cpus file in /sys/fs/cgroup/cpuset/hadoop-yarn/.
Procedure: Log in to Manager.
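A hedged sketch of the two cleanup steps on the affected node; the container directory name is hypothetical, and cgroup directories are removed with rmdir, not rm:

# Remove a leftover container cgroup directory, if one exists
rmdir /sys/fs/cgroup/cpuset/hadoop-yarn/container_e01_0001_01_000001
# Clear the CPU list configured for the hadoop-yarn cgroup
echo "" > /sys/fs/cgroup/cpuset/hadoop-yarn/cpuset.cpus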