DLI Delta Table Overview

Delta tables use Delta Lake technology to provide a robust data storage solution. By extending Parquet data files with a file-based transaction log, Delta Lake supports ACID transactions and scalable metadata handling, and it integrates seamlessly with Apache Spark APIs.
Delta SQL Syntax Reference
- Delta DDL Syntax
- Delta DML Syntax
- Schema Evolution Syntax
Restoring a Delta Table to an Earlier State

Syntax

Restore a Delta table to its state at a specific point in time:

RESTORE [TABLE] [database_name.]table_name|DELTA.`obs_path` [TO] TIMESTAMP AS OF timestamp_expression

Restore a Delta table to its state at a specific historical version:

RESTORE [TABLE] [database_name.]table_name|DELTA.`obs_path` [TO] VERSION AS OF version
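As an illustration, the statements below roll a table back first by timestamp and then by version number (the table name, timestamp, and version are hypothetical):

```sql
-- Roll back the hypothetical table db1.orders to its state at a point in time
RESTORE TABLE db1.orders TO TIMESTAMP AS OF '2024-05-01 00:00:00';

-- Roll back the same table to version 3 of its transaction log
RESTORE TABLE db1.orders TO VERSION AS OF 3;
```

Each RESTORE is itself recorded as a new commit in the transaction log, so it can in turn be undone by restoring to an earlier version.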
Using Delta to Develop Jobs in DLI
- DLI Delta Metadata
- Using Delta to Submit a Spark Jar Job in DLI
Querying Historical Version Data of a Delta Table

Syntax

Query the state of a Delta table at a specific point in time:

SELECT * FROM [database_name.]table_name TIMESTAMP AS OF timestamp_expression

Query the state of a Delta table at a specific historical version:

SELECT * FROM [database_name.]table_name VERSION AS OF version
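For example, the queries below read two historical snapshots of a table (the table name, timestamp, and version are hypothetical):

```sql
-- Read the snapshot of the hypothetical table db1.orders as of a timestamp
SELECT * FROM db1.orders TIMESTAMP AS OF '2024-05-01 00:00:00';

-- Read the snapshot recorded as version 3 in the transaction log
SELECT * FROM db1.orders VERSION AS OF 3;
```

Unlike RESTORE, these time-travel queries do not modify the table; they only read the older snapshot.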
When DLI is connected to a LakeFormation instance, you cannot specify filter criteria when deleting partitions, nor can you truncate Datasource or Hive foreign tables.
Using Delta to Submit a Spark Jar Job in DLI

1. Add the following dependency:

<dependency>
    <groupId>io.delta</groupId>
    <artifactId>delta-core_2.12</artifactId>
    <version>2.3.0</version>
</dependency>

2. Add the following parameters to the SparkSession:

.config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
.config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
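Putting the two steps together, a minimal Spark Jar job might look like the sketch below. This assumes delta-core_2.12:2.3.0 is on the classpath; the object name and the OBS path are hypothetical placeholders:

```scala
// Minimal sketch of a Spark Jar job that writes and reads back a Delta table.
// Assumes the delta-core_2.12:2.3.0 dependency above; the OBS path is a placeholder.
import org.apache.spark.sql.SparkSession

object DeltaJobExample {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("delta-job-example")
      // Enable Delta Lake SQL extensions and the Delta-aware catalog
      .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
      .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
      .getOrCreate()

    import spark.implicits._

    // Write a small DataFrame as a Delta table (path is a placeholder)
    val df = Seq((1, "a"), (2, "b")).toDF("id", "value")
    df.write.format("delta").mode("overwrite").save("obs://your-bucket/delta/demo")

    // Read the Delta table back and print it
    spark.read.format("delta").load("obs://your-bucket/delta/demo").show()

    spark.stop()
  }
}
```

Once the SparkSession is configured this way, the RESTORE and time-travel SELECT statements from the earlier sections can also be issued through spark.sql(...).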