Commit 905cfc4d authored by ale

Add links that were missing

parent 05afde35
@@ -74,10 +74,11 @@ supports structured schemas, can have trouble reconciling different
 STRUCTs across different Parquet files. But it's a TODO to get rid of
 this limitation.
 
-The flattened records are then written to [Parquet]() files, which are
-rotated periodically (and when they reach a certain size). These files
-can be stored remotely, although the current implementation only
-supports local filesystem.
+The flattened records are then written to
+[Parquet](https://parquet.apache.org/) files, which are rotated
+periodically (and when they reach a certain size). These files can be
+stored remotely, although the current implementation only supports
+local filesystem.
 
 The ingestion API endpoint is at */ingest*, and it expects a POST
 request with a ND-JSON request body: newline-delimited JSON-encoded
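The rotation behavior described in the hunk above (periodic, plus a size cap) could be sketched in Python with pyarrow roughly as follows; the thresholds, file naming, and class are illustrative assumptions, not the service's actual implementation:

```python
import time

import pyarrow as pa
import pyarrow.parquet as pq

# Hypothetical thresholds -- the real limits are configuration details
# not given in the text.
MAX_BYTES = 64 * 1024 * 1024   # rotate once the file grows past this size
MAX_AGE_SECONDS = 300          # rotate once the file gets this old

class RotatingParquetWriter:
    """Writes record batches to Parquet files, rotating by size and age."""

    def __init__(self, schema: pa.Schema):
        self.schema = schema
        self.writer = None
        self.opened_at = 0.0
        self.bytes_written = 0

    def _open(self):
        # Hypothetical naming scheme: one file per rotation interval.
        path = "batch-%d.parquet" % int(time.time())
        self.writer = pq.ParquetWriter(path, self.schema)
        self.opened_at = time.time()
        self.bytes_written = 0

    def write(self, batch: pa.RecordBatch):
        if self.writer is None:
            self._open()
        self.writer.write_batch(batch)
        self.bytes_written += batch.nbytes
        # Rotate when the current file is large enough or old enough.
        if (self.bytes_written >= MAX_BYTES
                or time.time() - self.opened_at >= MAX_AGE_SECONDS):
            self.close()

    def close(self):
        if self.writer is not None:
            self.writer.close()
            self.writer = None
```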
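A minimal client sketch for the ingestion endpoint follows; only the */ingest* path and the ND-JSON body format come from the text, while the host, port, and record fields are made up for the example:

```python
import json
import urllib.request

# Hypothetical log records; the real schema is whatever the client sends.
records = [
    {"@timestamp": "2023-06-01T10:00:00Z", "message": "hello"},
    {"@timestamp": "2023-06-01T10:00:01Z", "message": "world"},
]

# ND-JSON: one JSON-encoded object per line.
body = "\n".join(json.dumps(r) for r in records).encode("utf-8")

req = urllib.request.Request(
    "http://localhost:3000/ingest",  # host and port are placeholders
    data=body,
    method="POST",
)
with urllib.request.urlopen(req) as resp:
    print(resp.status)
```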
@@ -109,8 +110,9 @@ problematic field will be dropped from the schema.
 ### Querying
 
-The query engine is [DuckDB](https://duckdb.org), which can just [read
-the Parquet files, even remotely]() and run fast analytical queries on
-them.
+The query engine is [DuckDB](https://duckdb.org), which can just [read
+the Parquet files, even
+remotely](https://duckdb.org/docs/data/parquet/overview) and run fast
+analytical queries on them.
 
 One thing of note is that, in the current implementation, it is only
 possible to query fully written Parquet files. The implication is that
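As a sketch of the querying side, DuckDB can scan the rotated files directly; the file glob and column alias below are hypothetical:

```python
import duckdb

con = duckdb.connect()

# read_parquet() accepts a glob, so a directory of rotated files can be
# queried as a single table. The path is a made-up example.
rows = con.execute(
    "SELECT count(*) AS n FROM read_parquet('data/*.parquet')"
).fetchall()
print(rows)

# For remote files, DuckDB's httpfs extension lets the same kind of
# query run over HTTP(S) or S3.
con.execute("INSTALL httpfs")
con.execute("LOAD httpfs")
```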