Delta Lake Time Travel: Data Versioning and Rollback

Delta Lake Time Travel enables users to access and revert to previous states of their datasets. This capability changes how data engineers and analysts interact with their data, providing a safety net against errors and a robust mechanism for historical analysis. It supports data integrity and reproducibility across operational and analytical workloads.

At its core, Delta Lake achieves this through sophisticated data versioning, where every modification to a table is meticulously recorded in a transaction log. This log serves as an immutable record of all commits, allowing for the reconstruction of any past state of the data. By leveraging this transaction log, Delta Lake provides not only a detailed audit trail but also significant benefits for data reliability, facilitating quick recovery from erroneous updates and supporting rigorous compliance requirements.

Fundamentals of Data Versioning in Delta Lake

Data versioning is a cornerstone of modern data management, offering unprecedented control and insight into the evolution of data. In the context of Delta Lake, this capability transforms a simple data lake into a reliable and auditable data system, allowing users to access and analyze historical states of their data with ease. Understanding the underlying mechanisms is crucial to fully leverage the power of Delta Lake’s time travel features.

Delta Lake time travel provides remarkable flexibility, allowing users to query any historical version of a table or revert the table to a prior state on demand. This ensures robust data management and recovery capabilities, making data versioning intuitive to work with.

The Core Mechanism of Data Versioning

The fundamental mechanism enabling data versioning within a Delta Lake table is its transaction log, often referred to as the Delta Log. This log is a structured, ordered, and append-only record of every change ever made to the Delta Lake table. Instead of directly modifying the data files, Delta Lake records metadata about changes in this transaction log. Each entry in the log represents an atomic transaction, ensuring data integrity and consistency across operations.

This design means that the actual data files (stored as Parquet) are immutable once written; any update or delete operation results in new data files being written and the transaction log being updated to reflect these new files and mark old ones as logically removed.

Conceptual Flow of Data Changes and Version Tracking

The tracking of data changes and versions in Delta Lake can be conceptualized as a continuous timeline of commits, each creating a new snapshot of the table’s state. When an operation (like an `INSERT`, `UPDATE`, `DELETE`, or `MERGE`) is executed on a Delta Lake table, it initiates a transaction. Upon successful completion, this transaction is recorded as a new entry in the append-only transaction log.

This entry, known as a commit, details the changes made, such as which data files were added, which were removed, and any schema modifications. Visually, one can imagine the Delta Lake transaction log as a ledger:

  • Initial State (Version 0): A new Delta table is created, and the first set of data files is written. A commit is recorded in the transaction log, pointing to these initial files. This constitutes the first snapshot.
  • First Update (Version 1): An `UPDATE` operation modifies some rows. Instead of altering existing files, new data files containing the updated rows are written, and the old versions of those specific files are logically marked for removal. A new commit is added to the transaction log, referencing the new data files and invalidating the old ones. This creates a new snapshot, Version 1.
  • Second Update (Version 2): A `DELETE` operation removes some rows. Again, new files are written that exclude the deleted rows, and the transaction log is updated with a new commit, creating Snapshot Version 2.

Each commit in the transaction log effectively defines a complete, consistent snapshot of the table at that point in time. By replaying the log up to a specific commit, Delta Lake can reconstruct the exact state of the table at any previous version, making time travel possible.
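The timeline above can be reproduced in any Spark SQL session with Delta Lake enabled. The following sketch is illustrative; the table name, columns, and exact version-to-statement mapping are hypothetical:

```sql
-- Illustrative commit timeline (hypothetical table and columns).
CREATE TABLE events (id INT, status STRING) USING DELTA;  -- commit: Version 0 (table created)
INSERT INTO events VALUES (1, 'new'), (2, 'new');         -- commit: Version 1 (data files written)
UPDATE events SET status = 'done' WHERE id = 1;           -- commit: Version 2 (new files added,
                                                          --   replaced files logically removed)

-- Inspect the resulting version history, one row per commit:
DESCRIBE HISTORY events;
```

Each statement produced exactly one commit, and `DESCRIBE HISTORY` lists the version number, timestamp, and operation for each.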

Leveraging the Transaction Log for Time Travel Capabilities

The transaction log is the cornerstone of Delta Lake’s time travel functionality. It acts as a definitive, ordered sequence of all operations that have ever occurred on a Delta table. Each entry in this log is a JSON file (or a sequence of files for larger transactions) that describes a set of atomic actions, such as `add` (adding a data file), `remove` (logically removing a data file), `metadata` (schema changes), and `protocol` (versioning information).

The Delta Lake transaction log provides an ordered, atomic, and immutable record of every change, serving as the definitive source of truth for all table versions and enabling robust time travel capabilities.

When a user requests to query a specific historical version of a Delta table (e.g., `VERSION AS OF 5` or `TIMESTAMP AS OF '2023-01-01'`), Delta Lake performs the following steps:

  1. It reads the transaction log, in practice starting from the most recent checkpoint file (a periodic summary of prior commits) rather than from the very first entry.
  2. It applies each commit sequentially until it reaches the requested version or timestamp.
  3. At the specified point, it identifies the exact set of active data files that constituted the table at that moment.
  4. It then constructs a logical snapshot using only those active files, allowing the query engine to read data as it existed historically.

This process ensures that time travel queries are consistent and reflect the precise state of the table at the chosen point in its history, without requiring data duplication for each version.
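In SQL, both forms of historical read look like ordinary queries; only the `AS OF` clause differs. The table name here is illustrative:

```sql
-- Read the table as of a specific commit number.
SELECT * FROM my_delta_table VERSION AS OF 5;

-- Read the table as of a wall-clock time; Delta resolves this to the
-- latest commit at or before the given timestamp.
SELECT * FROM my_delta_table TIMESTAMP AS OF '2023-01-01 00:00:00';
```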

Key Benefits of Data Versioning for Reliability and Auditing

Data versioning in Delta Lake provides a multitude of advantages that significantly enhance data reliability, auditability, and operational efficiency. These benefits are critical for maintaining high-quality data pipelines and meeting compliance requirements in modern data architectures. The primary advantages include:

  • Data Rollbacks: The ability to revert a table to a previous stable state, effectively undoing accidental deletions, incorrect updates, or problematic schema changes, thus preventing data loss and ensuring data integrity.
  • Reproducible Experiments and Reports: Data scientists and analysts can run queries against specific historical versions of data, guaranteeing that their analyses and machine learning models are based on consistent datasets, enabling reproducibility of results.
  • Audit Trails and Compliance: Every change to the table is recorded in the transaction log, providing a complete and immutable audit trail. This is invaluable for regulatory compliance (e.g., GDPR, HIPAA) and internal governance, allowing auditors to trace data lineage and modifications over time.
  • Debugging and Troubleshooting: When data quality issues or unexpected results arise, versioning allows engineers to examine the state of the data at various points in time, pinpointing exactly when and how an issue was introduced.
  • Concurrent Operations and ACID Properties: While not solely a versioning benefit, the transaction log’s role in versioning is intrinsically linked to Delta Lake’s ACID (Atomicity, Consistency, Isolation, Durability) properties, which enable multiple users or applications to read and write data concurrently without conflicts, always seeing a consistent snapshot.
  • Schema Evolution with Safety: Versioning ensures that schema changes are tracked and can be rolled back if necessary, preventing disruption to downstream applications that might rely on a specific schema.

Practical Scenarios for Data Rollback and Auditing

What Is A River Delta? - WorldAtlas.com

Having established the foundational concepts of data versioning in Delta Lake, it’s time to explore the practical applications where these capabilities truly shine. Delta Lake’s time travel feature isn’t just a theoretical advantage; it’s a robust mechanism that provides significant operational benefits, particularly in maintaining data integrity and ensuring compliance.

This section will delve into specific, real-world scenarios, illustrating how data engineers leverage time travel for critical tasks like reverting accidental changes and performing meticulous historical data audits. We’ll also look beyond these immediate applications to discover its broader utility in the data engineering lifecycle.

Reverting an Erroneous Update in Delta Lake

Consider a common predicament for a data engineer managing a critical production Delta Lake table named `customer_transactions`. This table records daily sales, and a new batch job is introduced to enrich transaction data with customer segmentation information. On a Tuesday morning, the engineer discovers a severe issue: the new job, due to a logical error in its transformation logic, has erroneously updated a significant portion of the previous day’s (Monday’s) transactions, setting their `segment_id` to a default ‘unknown’ value instead of their actual segment.

This erroneous update impacts critical reporting and downstream analytics. Fortunately, Delta Lake’s time travel allows for a straightforward rollback without needing complex backups or data recovery procedures. The steps involved would typically be:

  1. Identify the problem: The engineer notices discrepancies in reports or receives alerts about data quality issues.
  2. Determine the point of error: By inspecting the Delta Lake transaction log or using `DESCRIBE HISTORY customer_transactions`, the engineer can pinpoint the exact commit version (or approximate timestamp) when the erroneous update occurred. Let’s say the faulty job ran and committed at version 15, and the last known good state was at version 14.
  3. Verify the previous state: Before rolling back, it’s prudent to confirm that the data at the previous version (version 14) is indeed correct. This can be done by querying the table using time travel:

    SELECT * FROM customer_transactions VERSION AS OF 14 WHERE transaction_date = '2023-10-23' LIMIT 10;

    This allows the engineer to visually inspect a sample of the data and confirm its integrity.

  4. Perform the rollback: Once confirmed, the engineer can revert the table to the correct state using the RESTORE TABLE command. This command doesn’t delete the erroneous version; instead, it creates a new commit that effectively undoes the changes by writing the state of the specified previous version back as the current version.

    RESTORE TABLE customer_transactions TO VERSION AS OF 14;

    Alternatively, if the exact version number isn’t readily available but a timestamp is known (e.g., just before the job ran), it could be:

    RESTORE TABLE customer_transactions TO TIMESTAMP AS OF '2023-10-24 08:59:00 UTC';

  5. Validate the rollback: After the restore operation completes, the engineer would again query the table to ensure the data is back to its correct state and that the erroneous segment_id values have been rectified.

    SELECT * FROM customer_transactions WHERE transaction_date = '2023-10-23' LIMIT 10;

This process is swift and non-destructive, minimizing downtime and data loss, which is a significant advantage in production environments.
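As a final sanity check, the damaged and restored states can be compared side by side with time travel subqueries. The query below follows the hypothetical scenario above (table, column, and version numbers are illustrative):

```sql
SELECT
  (SELECT COUNT(*) FROM customer_transactions VERSION AS OF 15
   WHERE transaction_date = '2023-10-23' AND segment_id = 'unknown') AS bad_rows_at_v15,
  (SELECT COUNT(*) FROM customer_transactions
   WHERE transaction_date = '2023-10-23' AND segment_id = 'unknown') AS bad_rows_now;
-- After the RESTORE, bad_rows_now should drop back to its pre-incident level.
```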

Querying Historical Data for Regulatory Compliance

Regulatory compliance often demands the ability to reconstruct data states at specific points in time, providing an immutable audit trail for critical business information. Financial institutions, healthcare providers, and any organization handling sensitive customer data are frequently subjected to audits requiring historical data access. Delta Lake’s time travel is exceptionally well-suited for this.

Consider a financial services company that needs to demonstrate to auditors the exact balance of a customer’s savings account on a particular date and time, perhaps to verify interest calculations or transaction validity for a past period. The account_balances Delta Lake table is updated daily, and potentially multiple times a day, with new transactions. An auditor requests the balance for customer ID CUST123 as it stood on ‘2023-09-15 at 14:30:00 UTC’.

The data engineer can fulfill this request directly using time travel queries:

  1. Accessing data by timestamp: The most direct method for compliance is usually by timestamp, as auditors typically provide specific dates and times.

    SELECT customer_id, account_balance, last_update_timestamp FROM account_balances TIMESTAMP AS OF '2023-09-15 14:30:00 UTC' WHERE customer_id = 'CUST123';

    This query will return the state of the account_balances table exactly as it was at the specified timestamp, allowing the auditor to verify the balance for CUST123 at that precise moment. Delta Lake intelligently determines the most recent version committed *at or before* that timestamp.

  2. Accessing data by version (if timestamp mapping is known): While less common for direct auditor requests, if the compliance team internally maps specific events or reports to Delta Lake versions, querying by version can also be useful. For instance, if a quarterly report was generated based on data at version 42, an auditor might later ask to see the data underpinning that report.

    SELECT customer_id, account_balance, last_update_timestamp FROM account_balances VERSION AS OF 42 WHERE customer_id = 'CUST123';

    This retrieves the data from the table’s state at commit version 42.

These capabilities provide an immutable and verifiable record of all changes, making it straightforward to satisfy stringent regulatory requirements by presenting the exact state of data at any past moment.
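Auditors sometimes also ask what changed between two points in time, not just the state at one point. A set difference between two historical reads answers this directly; the query below is a sketch against the hypothetical schema above:

```sql
-- Rows for CUST123 that exist at the later audit point but not the earlier one.
SELECT * FROM account_balances TIMESTAMP AS OF '2023-09-15 14:30:00 UTC'
WHERE customer_id = 'CUST123'
EXCEPT
SELECT * FROM account_balances TIMESTAMP AS OF '2023-06-30 23:59:59 UTC'
WHERE customer_id = 'CUST123';
```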

Advanced Use Cases for Delta Lake Time Travel

Beyond the critical functions of rollback and auditing, Delta Lake’s time travel feature unlocks a multitude of advanced scenarios that significantly enhance data engineering workflows and analytical capabilities. These applications often leverage the ability to access previous states of data without complex data duplication or snapshotting processes.

Here are several key use cases that extend beyond simple rollbacks:

  • Reproducible Machine Learning Experiments: Data scientists can use time travel to ensure that their models are trained and evaluated on the exact same dataset version, even if the underlying production data continues to evolve. This is crucial for comparing model performance across different experiments and for reproducing results.
  • Debugging Data Pipelines: When a downstream report or application shows incorrect data, engineers can query the source Delta Lake tables at various historical points in time (e.g., before and after a specific job ran) to isolate when and where the data corruption or error was introduced. This significantly speeds up root cause analysis.
  • A/B Testing and Historical Analysis: Analysts can easily compare different versions of a dataset to understand the impact of new features, data transformations, or business logic changes over time. For example, comparing sales data before and after a pricing model change.
  • Data Recovery from Accidental Deletion or Truncation: If an entire table is accidentally deleted or truncated, Delta Lake’s time travel allows for recovery by restoring the table to its state just before the destructive operation. The actual data files are not immediately removed, providing a grace period for recovery.
  • Schema Evolution Management: When schema changes occur, time travel allows older applications or reports that expect the previous schema to still query the data as it existed before the schema evolution, providing a graceful transition period.
  • Generating Consistent Snapshots for Reporting: Instead of creating physical copies, time travel can be used to generate “virtual” snapshots of data at specific points for consistent reporting, ensuring all reports based on that timestamp see the exact same data.
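The last use case in the list above can be sketched directly in SQL, assuming the engine permits time-travel clauses inside view definitions (supported in recent Delta Lake releases; view and table names are hypothetical):

```sql
-- A "virtual snapshot" for reporting: a view pinned to one timestamp,
-- so every report built on it sees identical data without copying files.
CREATE OR REPLACE VIEW sales_eom_snapshot AS
SELECT * FROM sales TIMESTAMP AS OF '2023-10-01 00:00:00 UTC';
```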

These applications underscore the versatility of Delta Lake’s time travel, transforming it from a mere recovery tool into a fundamental component of robust data management and analysis.

Accessing Historical Data: Syntax and Parameters

To facilitate efficient interaction with historical data, Delta Lake provides intuitive syntax for accessing previous table states using both version numbers and timestamps. The following table details the common methods, their syntax, descriptions, and typical applications in a data processing environment.

| Method | Syntax Example | Description | Common Use Case |
| --- | --- | --- | --- |
| Query by Version Number | `SELECT * FROM my_delta_table VERSION AS OF 5;` | Retrieves the table’s state (all data and schema) as it existed immediately after commit 5 was written to the Delta Lake transaction log. | Debugging a specific change, reproducing an experiment from a known commit, reverting to a known good state (when used with `RESTORE TABLE`). |
| Query by Timestamp | `SELECT * FROM my_delta_table TIMESTAMP AS OF '2023-10-26 10:00:00 UTC';` | Retrieves the table’s state (all data and schema) as it was at or before the specified timestamp. Delta Lake finds the latest version committed at or before that time. | Regulatory auditing, point-in-time analysis for reports, reconstructing historical data for compliance, disaster recovery (when used with `RESTORE TABLE`). |
| Restore Table by Version | `RESTORE TABLE my_delta_table TO VERSION AS OF 5;` | Reverts the entire table to a previous version. This creates a *new* commit that makes the state of version 5 the current state of the table, effectively undoing subsequent changes. | Correcting erroneous updates, recovering from accidental deletions, rolling back to a stable previous state. |
| Restore Table by Timestamp | `RESTORE TABLE my_delta_table TO TIMESTAMP AS OF '2023-10-26 10:00:00 UTC';` | Reverts the entire table to its state at a specific timestamp. As with restoring by version, a new commit is created to reflect this historical state as current. | Disaster recovery, rolling back to a state before a critical incident, restoring a table to a specific historical point for testing. |

Optimizing Historical Data Management and Storage

Managing historical data efficiently is paramount in data warehousing and analytics, especially when leveraging Delta Lake’s time travel capabilities. While retaining a complete history offers significant benefits for auditing and data recovery, it also introduces considerations around storage costs and query performance. This section delves into the practical aspects of configuring retention, understanding the trade-offs of long retention, and implementing optimization strategies to ensure that time travel remains both powerful and performant.

Configuring Historical Data Retention

Delta Lake provides specific configuration options that allow users to precisely control how long historical versions of data and associated files are retained. These settings are crucial for balancing the need for historical access with storage efficiency.

  • `delta.logRetentionDuration`: This property dictates how long the transaction log (commit history) for a Delta table is retained. The transaction log is essential for time travel, as it records every change made to the table. By default, Delta Lake retains log entries for 30 days.

    `ALTER TABLE my_delta_table SET TBLPROPERTIES ('delta.logRetentionDuration' = 'INTERVAL 7 days');`

    Setting this to a shorter duration means older log entries are pruned, making time travel to very old versions impossible. Conversely, a longer duration allows for deeper historical queries but increases the size of the transaction log, which can slightly impact metadata operations.

  • `delta.deletedFileRetentionDuration`: This property determines how long data files that are no longer part of the active table (e.g., files replaced by `UPDATE`, `DELETE`, or `MERGE` operations) are retained before they become eligible for removal by the `VACUUM` operation. The default retention is 7 days.

    `ALTER TABLE my_delta_table SET TBLPROPERTIES ('delta.deletedFileRetentionDuration' = 'INTERVAL 30 days');`

    This duration directly influences the window for time travel to versions where these files were still active. A longer retention period allows for time travel further back into the past, as the underlying data files are preserved.

It is important to note that these configurations work in conjunction. The `delta.deletedFileRetentionDuration` effectively sets the minimum age for data files to be considered “deleted” and thus eligible for physical removal, while `delta.logRetentionDuration` dictates how far back the transaction log can guide time travel operations to locate these files.
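In practice, the two properties are often set together so that the log window and the file window cover the same time-travel horizon. The snippet below aligns both to 30 days (illustrative; longer windows increase storage consumption and log size):

```sql
ALTER TABLE my_delta_table SET TBLPROPERTIES (
  'delta.logRetentionDuration' = 'INTERVAL 30 days',
  'delta.deletedFileRetentionDuration' = 'INTERVAL 30 days'
);

-- Confirm the effective settings:
SHOW TBLPROPERTIES my_delta_table;
```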

Impact of Long Data Retention Policies

While the ability to time travel far into the past is a powerful feature, maintaining long data retention policies in Delta Lake has significant implications for both storage costs and query performance. Understanding these impacts is key to designing an effective data management strategy.

  • Storage Costs:
    Long retention periods mean that a greater number of historical data files are kept on storage (e.g., S3, ADLS, GCS). Each `UPDATE`, `DELETE`, or `MERGE` operation generates new data files and marks old ones as logically deleted. Without a `VACUUM` operation, these logically deleted files accumulate.

    For instance, a table experiencing frequent updates, such as a customer activity log where user profiles are regularly modified, could quickly double or triple its physical storage footprint if old versions are retained indefinitely. This directly translates to higher cloud storage bills, as you are paying for every byte stored.

  • Query Performance for Time Travel Operations:
    While Delta Lake is highly optimized, querying very old versions of a large table can sometimes be less efficient than querying the latest version. This is primarily because time travel queries might need to reconstruct the state of the table by scanning through a larger number of manifest files and potentially a wider range of data files, especially if the table has undergone significant schema evolution or data transformations over time.

    Additionally, if data files from older versions are less optimally organized (e.g., not compacted or Z-ordered), time travel queries accessing them might experience slower scan times compared to current, optimized data.

Therefore, a careful balance must be struck between the business requirement for historical data access and the operational costs and performance characteristics associated with prolonged data retention.

Optimizing Time Travel Queries: Compaction and Z-Ordering

To enhance the efficiency of time travel queries, particularly on large Delta Lake tables with extensive history, optimization strategies like compaction and Z-ordering play a crucial role. These techniques help ensure that even historical data is stored in an optimal format for retrieval.

Compaction

Compaction involves rewriting small data files into larger, more manageable ones. Delta Lake tables often accumulate many small files due to frequent micro-batch ingests or numerous small updates. While small files are efficient for writes, they can significantly degrade read performance, as the query engine has to open and process metadata for each file. When applying compaction, for example using `OPTIMIZE my_delta_table`, Delta Lake consolidates these small files into larger ones, typically targeting sizes in the range of 128 MB to 1 GB.

This optimization benefits time travel queries by:

  • Reducing File Overhead: Fewer files mean less overhead for the query planner and faster file listing.
  • Improving Scan Performance: Larger files allow for more efficient data reads, as the I/O system can read contiguous blocks of data more effectively.

Note that compaction itself creates a new table version: time travel queries against versions at or after the `OPTIMIZE` commit read the consolidated files, while queries against earlier versions still read the original small files. This is particularly relevant for tables that are frequently updated and then queried historically.
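Compaction can be run over the whole table or bounded to a subset to limit the work. The sketch below assumes a recent Delta Lake release where `OPTIMIZE` is available in SQL, and an `event_date` partition column that is hypothetical:

```sql
-- Compact the entire table.
OPTIMIZE my_delta_table;

-- Or restrict compaction to recent partitions; the WHERE clause
-- must reference partition columns.
OPTIMIZE my_delta_table WHERE event_date >= '2023-10-01';
```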

Z-Ordering

Z-ordering is a multi-dimensional clustering technique that co-locates related data in the same set of files. It is particularly powerful for queries that filter on multiple columns. When you `OPTIMIZE my_delta_table ZORDER BY (col1, col2)`, Delta Lake physically rearranges the data within the files so that rows with similar values across the specified columns are stored together. The benefits of Z-ordering for time travel queries are substantial:

  • Data Skipping: When a time travel query filters on Z-ordered columns, the query engine can effectively “skip” over large chunks of data files that do not contain the relevant data. This significantly reduces the amount of data that needs to be read.
  • Enhanced Performance for Historical Filters: If your historical analysis frequently involves filtering by, for instance, `customer_id` and `event_date`, Z-ordering on these columns will ensure that even old versions of the table respond quickly to such filtered time travel queries.

    This makes historical reporting much more responsive.

Both compaction and Z-ordering create new, optimized data files and mark the old, unoptimized files as logically deleted. These old files are then subject to the `VACUUM` operation based on the `delta.deletedFileRetentionDuration` policy.

The VACUUM Operation in Delta Lake

The `VACUUM` operation is a critical maintenance task in Delta Lake, designed to clean up unreferenced data files and manage storage space. Its proper understanding and application are vital for both cost management and ensuring the integrity of time travel capabilities.

Purpose of VACUUM

The primary purpose of `VACUUM` is to remove data files that are no longer referenced by the current or historical versions of a Delta table, beyond a specified retention threshold. When operations like `UPDATE`, `DELETE`, `MERGE`, or `OPTIMIZE` occur, they write new data files and logically mark old files as deleted in the transaction log. These logically deleted files still consume storage space.

`VACUUM` identifies these unreferenced files that are older than the configured `delta.deletedFileRetentionDuration` (or a custom duration specified in the `VACUUM` command) and permanently deletes them from the underlying storage.

Parameters of VACUUM

The `VACUUM` command in Delta Lake supports specific parameters to control its behavior:

`VACUUM [table_name | path/to/delta/table] [RETAIN duration] [DRY RUN]`

  • `[table_name | path/to/delta/table]`: Specifies the Delta table to vacuum, either by its registered name or the file system path to its data.
  • `RETAIN duration`: This optional parameter allows you to override the `delta.deletedFileRetentionDuration` table property for a specific `VACUUM` run. The duration is specified in hours (e.g., `RETAIN 168 HOURS` for 7 days). The minimum retention period allowed by Delta Lake is 7 days (168 hours) to prevent accidental data loss for recent time travel queries, unless `delta.deletedFileRetentionDuration` is explicitly set to a lower value and `spark.databricks.delta.retentionDurationCheck.enabled` is set to `false` (which is generally not recommended for production).
  • `DRY RUN`: This optional parameter causes `VACUUM` to list all the files that *would* be deleted without actually deleting them. This is an invaluable safety mechanism for previewing the impact of a `VACUUM` operation before making permanent changes.

Implications for Time Travel Capabilities

The `VACUUM` operation has direct and significant implications for Delta Lake’s time travel capabilities:

  • Permanent Data Loss for Old Versions: Once `VACUUM` removes data files, any historical version of the table that relied on those files for its state becomes inaccessible. For example, if you `VACUUM` a table with a `RETAIN 7 DAYS` policy, you will no longer be able to time travel to any version older than 7 days, as the underlying data files required to reconstruct that state will have been physically deleted.
  • Storage Cost Reduction: By permanently deleting unreferenced files, `VACUUM` directly reduces storage consumption and, consequently, cloud storage costs. This is the primary benefit of running `VACUUM` regularly.
  • Risk of Accidental Data Loss: Because `VACUUM` permanently deletes data, it must be used with caution. Running `VACUUM` with a very short retention period (e.g., less than the default 7 days, and overriding the safety checks) can lead to the inability to recover from recent accidental deletes or updates via time travel. It is critical to ensure that your `VACUUM` retention period aligns with your business’s data recovery and auditing requirements.

Therefore, `VACUUM` should be scheduled as a routine maintenance task, typically after ensuring that your `delta.deletedFileRetentionDuration` and `delta.logRetentionDuration` settings adequately cover your time travel requirements. Regularly using `DRY RUN` is a recommended best practice before executing `VACUUM` on production tables.

Closure: Delta Lake Time Travel

In conclusion, Delta Lake Time Travel stands as a cornerstone for modern data architectures, empowering organizations with unprecedented control over their data’s lifecycle. Its ability to navigate historical data, coupled with robust versioning and efficient management strategies, ensures data quality, simplifies auditing, and accelerates debugging processes. Embracing these capabilities allows data teams to operate with greater confidence, fostering innovation and maintaining the highest standards of data governance in dynamic environments.

Question & Answer Hub

Is Delta Lake Time Travel an exclusive feature or part of an open-source project?

Delta Lake Time Travel is a core feature of Delta Lake, which is an open-source storage layer that brings ACID transactions to Apache Spark and big data workloads.

How does schema evolution impact time travel queries in Delta Lake?

Delta Lake gracefully handles schema evolution. When querying historical data, the schema from that specific historical version is used, ensuring consistency even if the table’s current schema has changed.

Can I restore an entire deleted Delta Lake table using time travel?

Time travel primarily allows reverting *data within* a table to a previous state. If the entire table directory was deleted from the storage layer, time travel alone cannot restore it, as the transaction log itself would be gone.

What is the maximum historical period I can travel back to with Delta Lake?

The maximum period depends on your configured data retention policy (e.g., `delta.logRetentionDuration` and `delta.deletedFileRetentionDuration`). By default, it’s typically 30 days for log entries and 7 days for deleted files, but these can be extended based on your needs and storage considerations.
