Apache Iceberg is an open table format for large analytic datasets that can be used with compute engines like Spark, Trino, PrestoDB, Flink, and Hive.

It has a number of failsafes in place to ensure that users don't accidentally corrupt a table with a wrong command.

Its schema evolution supports adding, dropping, updating, and renaming columns, and won't inadvertently un-delete data. It also has hidden partitioning, which helps prevent the silently incorrect results or slow queries that user mistakes can otherwise cause.
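As a rough sketch of what these features look like in practice, the Spark SQL fragment below shows schema changes and a hidden-partitioned table definition (the `db.events` table name and its columns are hypothetical, used here only for illustration):

```sql
-- Schema evolution: metadata-only changes, no data files are rewritten
ALTER TABLE db.events ADD COLUMN category string;
ALTER TABLE db.events RENAME COLUMN category TO event_category;
ALTER TABLE db.events DROP COLUMN event_category;

-- Hidden partitioning: the table is partitioned by a transform of a column.
-- Queries filter on event_ts directly; users never see or manage a
-- separate partition column, so they can't get it wrong.
CREATE TABLE db.events (id bigint, event_ts timestamp, payload string)
USING iceberg
PARTITIONED BY (days(event_ts));
```

Because columns are tracked by ID rather than by name, dropping a column and later adding one with the same name does not resurrect the old data.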

Other user experience capabilities include partition layout evolution, which can update a table's layout as data volumes or query patterns change; time travel, which makes queries reproducible; and version rollback, which lets users quickly revert a table to a previous state.
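Assuming the same hypothetical `db.events` table and a placeholder snapshot ID, these capabilities can be sketched with Iceberg's Spark SQL extensions (the catalog name and snapshot ID below are illustrative, not real values):

```sql
-- Partition evolution: switch from daily to hourly partitioning;
-- existing data keeps its old layout, new writes use the new one
ALTER TABLE db.events REPLACE PARTITION FIELD days(event_ts) WITH hours(event_ts);

-- Time travel: reproducible reads against a fixed point in the table's history
SELECT * FROM db.events TIMESTAMP AS OF '2024-01-01 00:00:00';
SELECT * FROM db.events VERSION AS OF 1234567890;

-- Version rollback: revert the table to an earlier snapshot
CALL spark_catalog.system.rollback_to_snapshot('db.events', 1234567890);
```

Each write produces a new immutable snapshot, which is what makes both reproducible queries and rollback cheap metadata operations.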

It was designed for large tables, with the intention that a distributed SQL engine wouldn't be needed to read them.

It also works with any cloud object store, provides serializable isolation, and supports multiple concurrent writers.

“It’s quickly becoming that industry standard for how tables are represented in systems like S3 and object storage,” said Tomer Shiran, founder and chief product officer at cloud data lake company Dremio, which is a contributor to the project.