1: Separating storage and compute tends to result in relatively high startup latency (Databricks, here's looking at you!) when you have to provision new compute. For a massive batch job, waiting 10 minutes for a cluster to come up is fine, but a lot of orgs don't realize the cost implications and end up with tiny developer/analyst clusters, or aggressively spin inactive clusters down, which results in long wait times. Analyst/developer ergonomics have not traditionally been a major concern in the big data space.
1.5: Dealing with multiple files instead of a data warehouse SQL interface requires work, and can introduce interesting performance issues. Obviously you can put a SQL interface in front of the data, but most RDBMS / Data Warehouses have a lot of functionality around maintaining your data that you may not get with the native file format, so you get soft-locked into the metadata format that comes from your data lake file format. It's all open, so this is only a half issue, but there are switching costs.
(I do some training for Databricks)
1. Yeah, cluster startup time can be not fun. Here are some solutions:
- pools (keep instances around so you don't have to wait for the cloud to provision them)
- serverless SQL warehouses (viable if you're doing only SQL)
- one job with multiple tasks that share the same job cluster. Delta Live Tables does a similar thing but with streaming autoscaling
- streaming: cluster never needs to go down. Can share multiple streams on the same cluster so they load balance each other
Pool costs become more manageable as more clusters share the same pool. You can also avoid any incremental cost by setting the idle-instance timeout to 0, though that makes the pool less useful. More clusters can take advantage of a pool if they use the same instance families. If you purchase reserved instances from Azure/AWS, you might as well back a pool with those as well. You may also want to check out fleet instance types.
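For reference, here's a rough sketch of creating a pool and pointing a cluster at it via the Databricks REST API. The workspace URL, token, node type, and runtime version are placeholders, and the exact field values will depend on your cloud:

    # Rough sketch: create a shared instance pool via the Instance Pools
    # REST API, then point a cluster at it. Host, token, node type, and
    # runtime version below are placeholders.
    import requests

    HOST = "https://<your-workspace>.cloud.databricks.com"  # placeholder
    TOKEN = "<personal-access-token>"                        # placeholder
    headers = {"Authorization": f"Bearer {TOKEN}"}

    pool_spec = {
        "instance_pool_name": "shared-analyst-pool",
        "node_type_id": "i3.xlarge",             # stick to one instance family per pool
        "min_idle_instances": 2,                  # instances kept warm so clusters attach quickly
        "idle_instance_autotermination_minutes": 30,  # 0 releases excess idle instances immediately
    }
    resp = requests.post(f"{HOST}/api/2.0/instance-pools/create",
                         headers=headers, json=pool_spec)
    pool_id = resp.json()["instance_pool_id"]

    # Clusters reference the pool instead of a node type, so they skip the
    # cloud-provider provisioning wait whenever idle instances are available.
    cluster_spec = {
        "cluster_name": "analyst-cluster",
        "spark_version": "13.3.x-scala2.12",      # placeholder runtime
        "instance_pool_id": pool_id,
        "num_workers": 2,
        "autotermination_minutes": 60,
    }
    requests.post(f"{HOST}/api/2.0/clusters/create",
                  headers=headers, json=cluster_spec)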
> The one and a half downsides not mentioned here:
Make that 2.5. If you are building new analytic products the most significant issue is that 90% of the information in data lakes is already there and not in an open format like Iceberg. Instead, it's heaps of CSV and Parquet files, maybe in Hive format but maybe just named in some unique way. If your query engine can't read these it's like being all dressed up with no place to go.
I don't see any mention of sqlite. Is a sqlite file not the same thing they're talking about here? Pretty sure it has a spec and hasn't changed formats in many years, so if you wanted to read it with something that isn't sqlite, I imagine it wouldn't be too hard.
Here's one more problem not mentioned by others: SQLite has some functions that it declares but doesn't define. For example, regexp(). So, if your table is defined as having a constraint with regexp()... well, sucks to be you: it might depend on what regexp() implementation you load whether that constraint can be applied or not.
I know this because I needed this functionality in Ada code, which links statically with SQLite, and I didn't want to also link with the PCRE library just to get regexp(), especially since GNATCOLL already sort of has regexp... except it doesn't have the "fancy" features, such as lookaheads / lookbehinds / Unicode support etc.
So, a table that uses regexp() in its constraint definition isn't portable. But, if you only use the table data without the schema you lose a lot of valuable information...
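To make the portability issue concrete, here's a minimal Python sketch (the table and pattern are made up): SQLite only declares the REGEXP operator, so every connection that touches the table has to register its own regexp() before the constraint can be enforced:

    # Minimal sketch: SQLite declares the REGEXP operator but ships no
    # regexp() implementation, so it must be registered per connection.
    import re
    import sqlite3

    def regexp(pattern, value):
        # SQLite rewrites "X REGEXP Y" as regexp(Y, X): the pattern comes first.
        return value is not None and re.search(pattern, value) is not None

    conn = sqlite3.connect(":memory:")
    conn.create_function("regexp", 2, regexp)

    conn.execute("""
        CREATE TABLE users (
            -- enforced only on connections that registered regexp()
            email TEXT CHECK (email REGEXP '^[^@]+@[^@]+$')
        )
    """)
    conn.execute("INSERT INTO users VALUES ('alice@example.com')")  # passes the CHECK
    # On a connection without create_function(), the same INSERT fails
    # with "no such function: regexp".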
----
Also, come to think of it: unlike client-server databases (e.g. MySQL, PostgreSQL etc.) SQLite doesn't have a wire-transfer format. Its interface returns values in the way the C language understands them. The on-disk binary format isn't designed for transfer at all, because it's optimized for access efficiency. This, among other things, means a SQLite database file typically has tons of empty space, doesn't use efficient (compressed) value representations, etc.
So, trying to use SQLite format for transferring data isn't going to be a good arrangement. It's going to be wasteful and slow.
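If you do end up shipping a SQLite file anyway, one small mitigation (sketch below; requires SQLite 3.27+, file names are placeholders) is VACUUM INTO, which writes a compacted copy without the free pages:

    # Sketch: VACUUM INTO produces a compacted copy of the database,
    # dropping the free pages, which is usually a smaller file to ship.
    import sqlite3

    conn = sqlite3.connect("analytics.db")               # placeholder source file
    conn.execute("VACUUM INTO 'analytics-to-ship.db'")   # compacted copy for transfer
    conn.close()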
The main disadvantage of sqlite compared to these other formats is that sqlite is designed for single machine processing. This can be problematic for huge datasets or complex queries.
These other formats easily support using clusters to process your data.
I agree that sqlite has a number of similar benefits - openness, table abstractions and concurrent transactions. It’s also a library, which is close to how Delta, Iceberg and Hudi are implemented.
I’m glad it’s had an uptick in interest recently, but I haven’t seen it mentioned for analytics yet.
Yes, SQLite is row oriented. It's also not a very space-efficient format, because it doesn't support compression or compact binary representations of numbers.
But it doesn't rely on the JVM and the typical JVM ecosystem. That is a big benefit for some use cases, like dealing with numerical data on the edge.
The documentation puts a lot of emphasis on loading/querying existing files, as that's the first thing you will want to do for an OLAP use case, but in general it's the same as sqlite in that you maintain a database file.
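A minimal sketch of what that looks like with the duckdb Python API (file names are made up):

    # Minimal sketch: query Parquet in place, or persist tables in
    # DuckDB's own single-file database.
    import duckdb

    con = duckdb.connect("analytics.duckdb")   # single-file database, like sqlite

    # Query an existing Parquet file directly, no import step needed.
    con.sql("SELECT count(*) FROM 'events.parquet'").show()

    # Or materialize it as a table inside the DuckDB file.
    con.execute("CREATE TABLE events AS SELECT * FROM 'events.parquet'")
    con.close()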
Thanks. That's good to know. The file format is not highlighted in the documentation but I found some information about it: https://duckdb.org/internals/storage.html
It seems to be a bit early to rely on it to store data in an object store, but I will do some tests to compare with SQLite:
> The DuckDB internal storage format is currently in flux, and is expected to change with each release until we reach v1.0.0.
These are metadata layers on top of file formats, expected to handle many files (whose schemas may change), typically in object storage, with volumes in excess of a petabyte. There is nothing preventing you from using sqlite with Iceberg, for example - except implementing it (which may involve implementation in your execution engine too). Iceberg already supports Parquet, ORC, and Avro (which is more row-oriented than column-oriented).
Iceberg and Sqlite might be interesting if you wanted to colocate two tables in the same file, for example. A smart enough data access layer with an appropriate execution engine could possibly see the data for both.
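For illustration, a rough sketch of reading an Iceberg table with pyiceberg (the catalog name, table name, and filter are made up); the point is that Iceberg is the metadata layer while the data files underneath stay Parquet/ORC/Avro:

    # Rough sketch: the catalog and manifests are the metadata layer; the
    # scan only then touches the data files underneath.
    from pyiceberg.catalog import load_catalog

    catalog = load_catalog("default")               # configured via ~/.pyiceberg.yaml or env vars
    table = catalog.load_table("analytics.events")  # hypothetical namespace.table

    # Planning uses Iceberg metadata (manifests, partition stats) to prune
    # files before any data is read.
    df = table.scan(row_filter="event_date >= '2023-01-01'").to_arrow()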
Column orientation is extremely important for query + storage efficiency.
Furthermore, you need a distributed query engine (Athena, BigQuery, etc.), and they all support Parquet.
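A quick illustration of the column-orientation point with pyarrow (file and column names are placeholders): a columnar format lets the reader fetch only the columns a query touches instead of whole rows:

    # Quick illustration: read only the requested column chunks from a
    # Parquet file rather than scanning every row.
    import pyarrow.parquet as pq

    table = pq.read_table("events.parquet", columns=["user_id", "event_date"])
    print(table.num_rows, table.schema)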
I talk to customers basically all day about table formats. Only one customer has really brought up Hudi in a meaningful way. IMO, Hudi is basically out of contention for 95%+ of people looking at table formats.
Biased, in that the authors seem to favor Hudi and the arguments seem to be based on that preference rather than on an objective presentation of all relevant factors.
I ask because, if I didn't know either word, the one would mean, to me, "tiny storage next to a big body of data" and the other would mean "a big body of data".
You can't resell someone a file system if you don't rename it to a "data lake". You can't resell someone an indexed file system from the 80's that can be queried with SQL unless you rename it a "data lakehouse".
Wait till you dive into ETL vs ELT. Or even better, if you've been doing ETL since long before "ELT" was "a thing", only to realize everyone was actually doing their ETL in an ELT fashion...
It's not only the names that suck, but something like 98% of the tools too.
Disclaimer - work at Snowflake. Two quick points to mention.
1. Snowflake has always used blob stores + file data + metadata. Architecturally it’s actually always been very Lakehouse-y
2. Parquet and Iceberg should be equivalent in performance and features. It’s more than playing nicely - it’s more of a choose-your-own-adventure where all things are equal.
"(Data) lakehouse" is an amalgamation of data warehouses and data lakes. It's meant to enable querying and all the support (transaction, etc) of traditional data warehouses on a data lake (unstructured data lying on cheap storage).
> "(Data) lakehouse" is an amalgamation of data warehouses and data lakes. It's meant to enable querying and all the support (transaction, etc) of traditional data warehouses on a data lake (unstructured data lying on cheap storage).
Thank you for that. Do you have any suggestions on where one would start if they wanted to get a better idea and/or some experience using lakehouses?
That's what it originally meant, at least in my experience. It was when warehouses got access to commodity storage through virtualization options (Hey! I can read S3 from Redshift and it looks like a Redshift table). Similar to Postgres foreign data wrappers or polybase in sql server.
Databricks (with Delta as the underpinning) seems to have led the charge on "lakehouse" meaning: your data lake + file formats/helpers + compute == data lake + data warehouse == lakehouse.
The latter seems to be the prevailing definition today with the former aging in place.
No! Haven't you heard? The new SQL "killer" is simply tying LLMs into your data and giving it commands in the common business parlance of your choice! kinda. sorta.