OBS Bucket: Select an OBS bucket for storing job logs and grant access permissions to the OBS bucket as prompted.
Enable Checkpointing: Enable this function.
Enter a SQL statement in the editing pane. The following is an example. Modify the parameters in bold as needed.
The format of the message header is as follows:

Authorization: OBS AccessKeyID:signature

The signature algorithm process is as follows:
1. Construct the request character string (StringToSign).
2. Perform UTF-8 encoding on the result obtained in the preceding step.
3.
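The signing steps above can be sketched in Python. This is a hedged illustration, assuming the common header-signing scheme (Base64 over an HMAC-SHA1 of the UTF-8-encoded StringToSign, keyed with the secret key); the credentials and the canonical string below are hypothetical placeholders, not values from the documentation.

```python
import base64
import hashlib
import hmac

def obs_signature(secret_key: str, string_to_sign: str) -> str:
    # Assumption: Base64(HMAC-SHA1(SK, UTF-8(StringToSign))).
    digest = hmac.new(secret_key.encode("utf-8"),
                      string_to_sign.encode("utf-8"),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode("ascii")

# Hypothetical AK/SK and canonical request string, for illustration only.
access_key_id = "EXAMPLE_AK"
string_to_sign = "GET\n\n\nSat, 12 Oct 2024 08:00:00 GMT\n/example-bucket/example-object"
sig = obs_signature("EXAMPLE_SK", string_to_sign)
print(f"Authorization: OBS {access_key_id}:{sig}")
```

The resulting header pairs the AccessKeyID with the signature, matching the `Authorization: OBS AccessKeyID:signature` format shown above.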
{OBS domain name}:80
{Private IP address of the jump server}
Log in to the jump server as user root and run the copied SSH tunneling command. Repeat the preceding steps to create multiple jump servers.
is set to efs.
csi-obs: fixed when type is set to obs.
csi-nas: fixed when type is set to sfs.
csi-disk: fixed when type is set to evs.
obs_volume_type  String  OBS volume type.
Supported OSs
The OSs supported by external image files are listed by CPU architecture: x86 and Arm. When you upload an external image file to an OBS bucket on the management console, the OS contained in the image file will be checked.
This parameter is a container environment variable if a job uses a custom image.
log_url              No   String  OBS URL of training job logs. By default, this parameter is left blank. An example value is /usr/log/.
train_instance_type  Yes  String  Resource flavor selected for a training job.
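The two parameters described above can be combined into a request body along these lines. This is a hedged sketch, not the full API schema: only `log_url` and `train_instance_type` come from the parameter list; the `job_name` field and the flavor value are hypothetical placeholders.

```python
import json

# Hedged sketch of a training-job configuration fragment.
job_config = {
    "job_name": "example-training-job",         # hypothetical field/value
    "log_url": "/usr/log/",                     # OBS URL of training job logs (example value from the docs)
    "train_instance_type": "example.flavor.1",  # resource flavor; placeholder value
}

payload = json.dumps(job_config)
print(payload)
```

Serializing the configuration to JSON mirrors how such parameters are typically sent in a request body.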
Figure 4 Uploading a file

Before uploading files, you need to configure the data storage bucket on the System Settings > OBS Bucket Settings page. Choose Data Management > Datasets. On the displayed page, click Create Dataset.
It provides a data abstraction layer for computing frameworks including Apache Spark, Presto, MapReduce, and Apache Hive, so that upper-layer computing applications can access persistent storage systems including HDFS and OBS through unified client APIs and a global namespace.
Therefore, this parameter will be skipped.
name          string  Backup name
object_count  int     Number of objects on OBS for the disk data
size          int     Backup size
snapshot_id   string  ID of the snapshot associated with the backup
status        string  Backup status
updated_at    string  Update time of the backup
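The backup fields listed above map naturally onto a small typed record. This is a hedged sketch for illustration: the field names follow the list above, but the class and the sample response fragment are hypothetical, not the service's actual schema.

```python
from dataclasses import dataclass

@dataclass
class Backup:
    """Mirrors the backup fields listed above (illustrative, not the API schema)."""
    name: str
    object_count: int   # number of objects on OBS for the disk data
    size: int           # backup size
    snapshot_id: str    # ID of the snapshot associated with the backup
    status: str         # backup status
    updated_at: str     # update time of the backup

# Hypothetical response fragment, for illustration only.
record = {
    "name": "backup-001",
    "object_count": 42,
    "size": 1024,
    "snapshot_id": "snap-abc",
    "status": "available",
    "updated_at": "2024-10-12T08:00:00Z",
}
b = Backup(**record)
print(b.name, b.object_count)
```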
Accessing MRS Storm with JDBC: The program uses a Storm topology to insert data into a table.
storm-kafka-examples: Interaction between Storm and Kafka of MRS. The program uses a Storm topology to send data to Kafka and display the data.
storm-obs-examples: Interaction between Storm and OBS.
SET SEARCH_PATH TO dgc;
SELECT * FROM top_active_movie;

Figure 6 Viewing the data in the top_active_movie table

Developing and Scheduling a Job
Assume that the movie and rating tables in the OBS bucket are changing in real time.
Table 2 Cloud service card
Card: Auto Scaling, FunctionGraph, Elastic Volume Service (EVS), Cloud Backup and Recovery (CBR), Object Storage Service (OBS), Scalable File Service (SFS), SFS Turbo, Virtual Private Cloud (VPC), Elastic Load Balance (ELB), Direct Connect, Virtual
Upload the billing details downloaded in Step 1: Obtaining Consumption Data to the created OBS bucket. Create a table on DLI: log in to the DLI console, choose SQL Editor in the navigation pane, select Spark for Engine, and select the queue and database.
Figure 1 Object storage migration

Table 1 Object storage migration methods
Item: Object storage migration
- Method: Copying data to OBS. Scenario: The data volume is small, around a few gigabytes. Remarks: -
- Method: Migrating and retrieving data using OMS. Scenario: The data volume is large, ranging from terabytes
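The decision in Table 1 can be sketched as a small helper. This is a hedged illustration: the table only says "a few gigabytes" versus "terabytes", so the 100 GB cutoff below is an assumption of this sketch, not a documented threshold.

```python
def pick_migration_method(data_volume_gb: float) -> str:
    """Choose a migration method per Table 1 (illustrative cutoff assumed)."""
    if data_volume_gb <= 100:  # assumption: "a few gigabytes" scale
        return "Copy data to OBS"
    return "Migrate and retrieve data using OMS"

print(pick_migration_method(5))     # small dataset
print(pick_migration_method(5000))  # terabyte-scale dataset
```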
Submitting a Spark job Upload the Java code file to the OBS bucket. In the Spark job editor, select the corresponding dependency module and execute the Spark job.
Click the name of the corresponding Flink job, choose Run Log, click OBS Bucket, and locate the folder of the log you want to view according to the date. Go to the folder of the date, find the folder whose name contains taskmanager, download the .out file, and view result logs.
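The folder-and-file filtering described above can be expressed as a small helper. A hedged sketch: the object keys below are hypothetical examples of how a date folder's contents might be laid out, and the function simply applies the rule from the text (folders containing "taskmanager", files ending in ".out").

```python
def find_taskmanager_out_files(keys):
    """Keep .out files whose path contains a 'taskmanager' folder (illustrative)."""
    return [k for k in keys if "taskmanager" in k and k.endswith(".out")]

# Hypothetical OBS object keys for one date's log folder.
keys = [
    "logs/2024-10-12/jobmanager-1/jobmanager.out",
    "logs/2024-10-12/taskmanager-1/taskmanager.out",
    "logs/2024-10-12/taskmanager-1/taskmanager.log",
]
print(find_taskmanager_out_files(keys))  # → ['logs/2024-10-12/taskmanager-1/taskmanager.out']
```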
Viewing GeminiDB Redis instance backups GeminiDB Redis instance backups are stored in OBS buckets and are invisible to you.
You can upload a software/firmware upgrade package to the IoTDA platform or use a file associated with an object on OBS for device remote upgrades. Can the IoT Platform Download Software or Firmware Packages from Third-party Servers? No.
You can transfer logs to OBS buckets for long-term storage. If your logs have not been ingested to LTS, create a log ingestion rule by referring to Log Ingestion and configure a log structuring parsing rule. Select a source log stream.