diff --git a/README.md b/README.md
index 3953f95490ae62b32b689ab549e51baeab5fac4e..523e7b6b1052f4d8b755d6f70f1d7872c9b12db5 100644
--- a/README.md
+++ b/README.md
@@ -74,10 +74,11 @@ supports structured schemas, can have trouble reconciling different
 STRUCTs across different Parquet files. But it's a TODO to get rid of
 this limitation.
 
-The flattened records are then written to [Parquet]() files, which are
-rotated periodically (and when they reach a certain size). These files
-can be stored remotely, although the current implementation only
-supports local filesystem.
+The flattened records are then written to
+[Parquet](https://parquet.apache.org/) files, which are rotated
+periodically (and when they reach a certain size). These files can be
+stored remotely, although the current implementation only supports
+the local filesystem.
 
 The ingestion API endpoint is at */ingest*, and it expects a POST
 request with an ND-JSON request body: newline-delimited JSON-encoded
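
The ingestion request described above can be sketched client-side. Only the */ingest* path and the ND-JSON framing come from this README; the host, port, and record fields below are invented for illustration:

```python
import json

# Hypothetical events to ingest; the field names are illustrative only.
events = [
    {"ts": "2024-01-01T00:00:00Z", "user": "alice", "action": "login"},
    {"ts": "2024-01-01T00:00:05Z", "user": "bob", "action": "click"},
]

# ND-JSON body: one JSON-encoded record per line, newline-delimited.
body = "\n".join(json.dumps(e) for e in events)

# This body would then be POSTed to the ingestion endpoint, e.g.:
#   curl -X POST --data-binary @events.ndjson http://localhost:8080/ingest
# (the host and port are assumptions, not part of this README)
print(body)
```

Each line of the body is an independent JSON object, so the server can parse and flatten records one at a time.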
@@ -109,8 +110,9 @@ problematic field will be dropped from the schema.
 ### Querying
 
 The query engine is [DuckDB](https://duckdb.org), which can just [read
-the Parquet files, even remotely]() and run fast analytical queries on
-them.
+the Parquet files, even
+remotely](https://duckdb.org/docs/data/parquet/overview) and run fast
+analytical queries on them.
 
 One thing of note is that, in the current implementation, it is only
 possible to query fully written Parquet files. The implication is that