Upload the spark-archive-2x-arm.zip and spark-archive-2x-x86.zip packages generated in step 3 to the /user/spark2x/jars/8.1.0.1 HDFS directory:

hdfs dfs -put spark-archive-2x-arm.zip /user/spark2x/jars/8.1.0.1/
hdfs dfs -put spark-archive-2x-x86.zip /user/spark2x/jars/8.1.0.1/

After the upload is complete, delete
Spark client node and run the following commands:

source Client installation directory/bigdata_env
source Client installation directory/Hudi/component_env

After compiling and building the sample code, you can use the spark-submit command to perform the write, update, query, and delete
In the Topology actions area, click Kill to delete the submitted Storm topology. Submit the Storm topology again and enable the function of viewing topology data processing logs.
In this example, the parent directory of the file's parent directory is deleted.

2017-05-31 02:04:08,286 | INFO | IPC Server handler 30 on 25000 | allowed=true ugi=appUser@HADOOP.COM (auth:TOKEN) ip=/192.168.1.22 cmd=delete src=/user/sparkhive/warehouse/daas/dsp/output
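An audit entry like the one above is a sequence of key=value fields. The following is an illustrative sketch (not an MRS tool) of how such an entry can be picked apart to recover the operation and the affected path:

```python
import re

# Matches the key=value fields of an HDFS audit log entry
# (allowed, ugi, ip, cmd, src, ...).
AUDIT_FIELDS = re.compile(r"(\w+)=(\S+)")

def parse_audit_entry(line: str) -> dict:
    """Return the key=value pairs found in one audit log line."""
    return dict(AUDIT_FIELDS.findall(line))

entry = ("2017-05-31 02:04:08,286 | INFO | IPC Server handler 30 on 25000 | "
         "allowed=true ugi=appUser@HADOOP.COM (auth:TOKEN) ip=/192.168.1.22 "
         "cmd=delete src=/user/sparkhive/warehouse/daas/dsp/output")

fields = parse_audit_entry(entry)
print(fields["cmd"], fields["src"])  # delete /user/sparkhive/warehouse/daas/dsp/output
```

This makes it easy to confirm, for example, that the delete was performed by appUser and was allowed.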
Method 2: Delete the /system/balancer.id file from HDFS and perform the balance operation again. Parent topic: Using HDFS
Principal [name=user1, type=USER] does not have following privileges on Object [type=LOCAL_URI, name=file:/tmp/input/mapdata] for operation LOAD : [SELECT, INSERT, DELETE]

Error 2: HiveAccessControlException Permission denied.
If not, delete the file or directory. If the file or directory cannot be deleted, change the file or directory permission to 770. Parent topic: Using Hive
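The fallback described above (delete if possible, otherwise relax the mode to 770) can be sketched as follows. This is an illustrative local-filesystem sketch, not an MRS utility; on HDFS the equivalent operations would be hdfs dfs -rm and hdfs dfs -chmod 770:

```python
import os
import stat
import tempfile

def delete_or_relax(path: str) -> str:
    """Try to delete path; if that fails, set its mode to 770 (rwxrwx---)."""
    try:
        os.remove(path)
        return "deleted"
    except OSError:
        os.chmod(path, stat.S_IRWXU | stat.S_IRWXG)  # 0o770
        return "chmod 770"

# Usage on a throwaway file:
tmp = tempfile.NamedTemporaryFile(delete=False)
tmp.close()
print(delete_or_relax(tmp.name))
```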
In the Topology actions area, click Activate, Deactivate, Rebalance, Kill, Debug, Stop Debug, and Change Log Level to activate, deactivate, redeploy, delete, debug, and stop debugging the topology, and modify the log levels, respectively.
If this function is enabled, you can use remove table xx where xxx to delete HBase records from Hive. NOTE: This parameter applies to MRS 3.x or later. Default value: true.

hive.metastore.server.min.threads: Indicates the number of threads started by MetaStore for processing connections.
algorithms, which can be specified during table creation:

AES (the encryption class is org.apache.hadoop.hive.serde2.AESRewriter)
SMS4 (the encryption class is org.apache.hadoop.hive.serde2.SMS4Rewriter)

After importing data from a common Hive table to a Hive column encryption table, delete
After the data connection is created, you can edit, test, or delete the data connection in the Operation column.
If useless data exists, delete it to reduce the number of storage files in HBase. If the preceding conditions are not met, consider capacity expansion. Parent topic: HBase Troubleshooting
On the IoTDB client, delete the storage group displayed in ProcedureInformation again. If the deletion is successful, the alarm is automatically cleared. Otherwise, go to 7. Collect fault information.
Delete unnecessary files, wait 5 minutes, and check whether the alarm is cleared. If yes, no further action is required. If no, go to 2.a. Check the number of files in the system. On MRS Manager, choose System > Threshold Configuration.
You can run commands to manually delete residual queues. Impact on the System During the script execution, the Controller service is restarted, Yarn configurations are synchronized, and the active and standby ResourceManagers are restarted.
data in opentsdb, the url is https://node-ana-corejnWt:4242/api/query

2019-06-27 14:05:24,112 INFO [main] examples.OpentsdbExample: Status Code : 200
2019-06-27 14:05:24,112 INFO [main] examples.OpentsdbExample: delete data to opentsdb successfully.
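The delete shown in the log above goes through the OpenTSDB /api/query endpoint with a delete flag in the request body (the server must permit deletes, typically via tsd.http.query.allow_delete). The following is a sketch of building such a request body; the metric and tag names are made-up placeholders:

```python
import json

def build_delete_query(metric, start, tags=None):
    """Build the JSON body for an OpenTSDB /api/query delete request."""
    body = {
        "start": start,
        "delete": True,  # tells /api/query to delete matching points instead of returning them
        "queries": [{
            "aggregator": "sum",
            "metric": metric,
            "tags": tags or {},
        }],
    }
    return json.dumps(body)

payload = build_delete_query("sys.cpu.user", "1h-ago", {"host": "web01"})
print(payload)
```

The payload would then be sent as an HTTP POST to the URL shown in the log (https://node-ana-corejnWt:4242/api/query).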
execute The following permission allows all files to be read, written, deleted, and executed:

permission java.io.FilePermission "<<ALL FILES>>", "read,write,delete,execute";

The following permission allows the read of the user's home directory:

permission java.io.FilePermission
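A policy entry like the one above grants a comma-separated list of actions. The following sketch (not the Java security manager itself) shows how a requested action can be checked against such a granted action list:

```python
def actions_of(policy_actions):
    """Split a policy action string like 'read,write,delete,execute' into a set."""
    return {a.strip() for a in policy_actions.split(",")}

def is_allowed(requested, granted):
    """True if the requested action appears in the granted action list."""
    return requested in actions_of(granted)

print(is_allowed("delete", "read,write,delete,execute"))  # True
print(is_allowed("delete", "read"))                       # False
```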