Delete the /admin/reassign_partitions and /controller ZNodes in ZooKeeper to forcibly stop the migration (a sketch follows below). After the cluster recovers, run the kafka-reassign-partitions.sh command to delete the redundant replicas generated during the interrupted migration.
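A hedged sketch of the forced stop (the client path /opt/client, the ZooKeeper address, and the JSON file name are assumptions, not from the source; on ZooKeeper 3.4 and earlier use rmr instead of deleteall):
cd /opt/client
source bigdata_env
zkCli.sh -server <zk_host>:2181
deleteall /admin/reassign_partitions
delete /controller
quit
# Deleting /controller triggers a controller re-election, which drops the pending reassignment.
# After the cluster recovers, re-apply the desired assignment so the extra replicas are deleted:
kafka-reassign-partitions.sh --zookeeper <zk_host>:2181/kafka --reassignment-json-file desired_assignment.json --execute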
The following commands are entered in the ZooKeeper CLI (a launch sketch follows this list).
Create a ZNode.
create /test
View ZNode information.
ls /
Write data to the ZNode.
set /test "zookeeper test"
View the data written to the ZNode.
get /test
Delete the created ZNode.
delete /test
Parent topic: Using ZooKeeper
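A minimal launch sketch for the commands above (the client path and server address are assumptions):
cd /opt/client
source bigdata_env
zkCli.sh -server <zookeeper_host>:2181
# The create/ls/set/get/delete commands are then run at the zkCli prompt.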
On Manager, choose System > Permission to add or delete users, user groups, and roles, and to grant or revoke permissions.
Delete user data from the user information table.
delete 'user_info','12005000201','i'
Delete the user information table.
disable 'user_info'
drop 'user_info'
Follow-up Operations: Releasing Resources
To avoid additional expenditures, release resources promptly if you no longer need them.
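The deletion commands above can be run end to end as follows (a sketch; the client path and user name are assumptions, and kinit is needed only in secure clusters):
cd /opt/client
source bigdata_env
kinit <hbase_user>
hbase shell
delete 'user_info','12005000201','i'
disable 'user_info'
drop 'user_info'
exit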
Otherwise, the schemas or tables cannot be queried. You can click Delete to remove custom configuration parameters. Click OK.
Delete the user information table after service A ends. Table 1 User information
The following statements require the listed Hive permissions:
LOAD: INSERT and DELETE
ALTER TABLE DROP PARTITION: DELETE
CREATE FUNCTION: Hive Admin Privilege
DROP FUNCTION: Hive Admin Privilege
ALTER DATABASE: Hive Admin Privilege
Parent topic: Hive User Permission Management
Change the value of the DataNode parameter dfs.datanode.data.dir and delete the directories that use the same disk as critical directories. Go to Step 24. Check whether multiple directories in the DataNode data directory use the same disk.
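One way to check whether two data directories share a disk (the directory names are illustrative) is to compare the filesystems df reports for them:
df -h /srv/BigData/hadoop/data1/dn /srv/BigData/hadoop/data2/dn
# If both paths show the same Filesystem / Mounted on values, they reside on the same disk.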
You must export data before you delete clusters. HDFS is recommended for computing-intensive scenarios. Parent topic: MRS Basics
You can delete logs from earlier days to release storage space. If the log file is large, add the log file path to /etc/logrotate.d/syslog to enable periodic deletion. Run sed -i '3 a/var/log/sudo/sudo.log' /etc/logrotate.d/syslog.
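The sed command inserts the sudo log path after line 3 of the logrotate configuration. A quick verification sketch:
sed -i '3 a/var/log/sudo/sudo.log' /etc/logrotate.d/syslog
grep sudo.log /etc/logrotate.d/syslog
# Should print /var/log/sudo/sudo.log, confirming the entry was added.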
To solve this problem, delete and re-create the tables as follows: Run the following command on the cluster client to repair the tables: hbase hbck -j ${CLIENT_HOME}/HBase/hbase/tools/hbase-hbck2-1.1.0-h0.cbu.mrs.*.jar
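HBCK2 takes a sub-command after the jar. An illustrative invocation (the sub-command and region name are hypothetical examples, not from the source; run with --help to list what your version supports):
hbase hbck -j ${CLIENT_HOME}/HBase/hbase/tools/hbase-hbck2-1.1.0-h0.cbu.mrs.*.jar --help
# For example, re-queue assignment of a region stuck in transition (region name is hypothetical):
hbase hbck -j ${CLIENT_HOME}/HBase/hbase/tools/hbase-hbck2-1.1.0-h0.cbu.mrs.*.jar assigns <encoded_region_name>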
Delete a topic (supported only on 0.8.2+, with delete.topic.enable=true set in the broker configuration). Batch generate partition assignments for multiple topics, with the option to select which brokers to use. Batch run reassignment of partitions for multiple topics.
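Topic deletion works only when the broker permits it. A minimal configuration sketch (on MRS the parameter is normally set through the Manager configuration page rather than by editing files; the topic name and ZooKeeper address are assumptions):
# Broker setting, in server.properties or the equivalent Manager configuration item:
delete.topic.enable=true
# The topic can then also be deleted from the CLI:
kafka-topics.sh --delete --topic test --zookeeper <zk_host>:2181/kafka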
-name "*20051*" command in the /tmp and /var/run/MRS-DBService/ directories, and delete all files found. Log in to Manager and restart DBService. Parent topic: Using DBService
Run the following commands to log in to the HDFS client:
cd HDFS client installation directory
source bigdata_env
kinit Service user
Run the following command to delete the damaged block:
hdfs dfs -rm -skipTrash /tmp/hive-scratch/omm/_tez_session_dir/xxx-resources/xxx.jar
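A concrete session sketch (the client path /opt/client and the user name are assumptions; the xxx segments stand for the actual damaged-block path and are left as placeholders):
cd /opt/client
source bigdata_env
kinit <hdfs_service_user>
# List the session directory first to confirm the target file:
hdfs dfs -ls /tmp/hive-scratch/omm/_tez_session_dir/
hdfs dfs -rm -skipTrash /tmp/hive-scratch/omm/_tez_session_dir/xxx-resources/xxx.jar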
FSResult[] get(List<FSGet> fsGets) Reads multiple lines of data from HFS tables. void delete(FSDelete fsDelete) Deletes data from HFS tables. void delete(List<FSDelete> fsDeletes) Deletes multiple lines of data from HFS tables. void close() Closes a table object. org.apache.hadoop.hbase.filestream.client.FSTable
If a default policy is deleted by mistake, you can manually delete the service and then restart the component service so that the default policies are re-created. Choose Access Manager > Reports to view all security access policies of each component.
To add a directory, for example, /srv/BigData/yarn/data2/nm/localdir, you need to delete the files in /srv/BigData/yarn/data2/nm/localdir first.
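A sketch of clearing the directory before adding it to the NodeManager local directories (verify the path carefully; rm -rf is irreversible):
ls /srv/BigData/yarn/data2/nm/localdir
rm -rf /srv/BigData/yarn/data2/nm/localdir/*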
To delete data, the DELETE permission is required. For details, see Granting Hive Permissions on Tables, Columns, or Databases.
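An illustrative grant via beeline (the table and user names are hypothetical; on MRS, table permissions are usually granted through Manager roles, and the SQL form assumes Hive SQL-standard authorization is enabled):
beeline -e "GRANT DELETE ON TABLE user_info TO USER developer;"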