From 905cfc4d1f3e028a4f28cbd9edde1f350d287965 Mon Sep 17 00:00:00 2001
From: ale <ale@incal.net>
Date: Fri, 29 Dec 2023 18:20:00 +0000
Subject: [PATCH] Add missing links

---
 README.md | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/README.md b/README.md
index 3953f95..523e7b6 100644
--- a/README.md
+++ b/README.md
@@ -74,10 +74,11 @@ supports structured schemas, can have trouble reconciling different
 STRUCTs across different Parquet files. But it's a TODO to get rid of
 this limitation.
 
-The flattened records are then written to [Parquet]() files, which are
-rotated periodically (and when they reach a certain size). These files
-can be stored remotely, although the current implementation only
-supports local filesystem.
+The flattened records are then written to
+[Parquet](https://parquet.apache.org/) files, which are rotated
+periodically (and when they reach a certain size). These files can be
+stored remotely, although the current implementation only supports the
+local filesystem.
 
 The ingestion API endpoint is at */ingest*, and it expects a POST
 request with an ND-JSON request body: newline-delimited JSON-encoded
@@ -109,8 +110,9 @@ problematic field will be dropped from the schema.
 ### Querying
 
 The query engine is [DuckDB](https://duckdb.org), which can just [read
-the Parquet files, even remotely]() and run fast analytical queries on
-them.
+the Parquet files, even
+remotely](https://duckdb.org/docs/data/parquet/overview) and run fast
+analytical queries on them.
 
 One thing of note is that, in the current implementation, it is only
 possible to query fully written Parquet files. The implication is that
-- 
GitLab