From bbc09df40da59a4362c4d7cbea1ec4a5e6a8a98c Mon Sep 17 00:00:00 2001
From: ale <ale@incal.net>
Date: Sun, 2 Sep 2018 11:19:53 +0100
Subject: [PATCH] Fix typo

---
 README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/README.md b/README.md
index 3e4d973..38f7bc3 100644
--- a/README.md
+++ b/README.md
@@ -62,7 +62,7 @@ Like most crawlers, this one has a number of limitations:
 
 * it completely ignores *robots.txt*. You can make such policy
   decisions yourself by turning the robots.txt into a list of patterns
-  to be used with *--exclude-file*.
+  to be used with *--exclude-from-file*.
 * it does not embed a Javascript engine, so Javascript-rendered
   elements will not be detected.
 * CSS parsing is limited (uses regular expressions), so some *url()*
-- 
GitLab
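
The patched line describes turning a site's robots.txt into a list of patterns to feed to *--exclude-from-file*. A minimal Go sketch of that conversion, assuming the crawler accepts one exclusion pattern per line and that Disallow path prefixes can be used directly as patterns (both assumptions, not confirmed by this patch):

```go
// robots2exclude: read a robots.txt on stdin and emit one pattern per
// line, suitable for an exclusion file. Disallow path prefixes are
// printed as-is; adapt the output if the crawler expects globs or
// regular expressions instead of plain prefixes.
package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

func main() {
	scanner := bufio.NewScanner(os.Stdin)
	for scanner.Scan() {
		line := strings.TrimSpace(scanner.Text())
		// Keep only Disallow directives; skip comments, User-agent,
		// Allow, Sitemap, and anything else.
		if !strings.HasPrefix(strings.ToLower(line), "disallow:") {
			continue
		}
		path := strings.TrimSpace(line[len("disallow:"):])
		if path == "" {
			// An empty Disallow means "allow everything" -- nothing to exclude.
			continue
		}
		fmt.Println(path)
	}
	if err := scanner.Err(); err != nil {
		fmt.Fprintln(os.Stderr, "read error:", err)
		os.Exit(1)
	}
}
```

A possible invocation (hypothetical file names, and the exact flag usage depends on the crawler's CLI): `curl -s https://example.com/robots.txt | go run robots2exclude.go > exclude.txt`, then pass `--exclude-from-file exclude.txt` to the crawl.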