Typo corrections

Signed-off-by: Rob Gill <rrobgill@protonmail.com>
Rob Gill
2025-07-18 08:43:10 +10:00
parent 69fb881177
commit 9e7955a6cc
14 changed files with 23 additions and 23 deletions


@@ -1,6 +1,6 @@
Authentication is required for most API endpoints. The Pi-hole API uses a session-based authentication system. This means that you will not be able to use a static token to authenticate your requests. Instead, you will be given a session ID (SID) that you will have to use. If you didn't set a password for your Pi-hole, you don't have to authenticate your requests.
-To get a session ID, you will have to send a `POST` request to the `/api/auth` endpoint with a payload containing your password. Note that is also possible to use an application password instead of your regular password, e.g., if you don't want to put your password in your scripts or if you have 2FA enabled for your regular password. One application password can be generated in the web interface on the settings page.
+To get a session ID, you will have to send a `POST` request to the `/api/auth` endpoint with a payload containing your password. Note that it is also possible to use an application password instead of your regular password, e.g., if you don't want to put your password in your scripts or if you have 2FA enabled for your regular password. One application password can be generated in the web interface on the settings page.
<!-- markdownlint-disable code-block-style -->
???+ example "Authentication with password"
@@ -138,7 +138,7 @@ Once you have a valid SID, you can use it to authenticate your requests. You can
3. In the `X-FTL-SID` header: `X-FTL-SID: vFA+EP4MQ5JJvJg+3Q2Jnw=`
4. In the `sid` cookie: `Cookie: sid=vFA+EP4MQ5JJvJg+3Q2Jnw=`
-Note that when using cookie-based authentication, you will also need to send a `X-FTL-CSRF` header with the CSRF token that was returned when you authenticated. This is to prevent a certain kind of identify theft attack the Pi-hole API is immune against.
+Note that when using cookie-based authentication, you will also need to send a `X-FTL-CSRF` header with the CSRF token that was returned when you authenticated. This is to prevent a certain kind of identity theft attack the Pi-hole API is immune against.
???+ example "Authentication with SID"


@@ -4,7 +4,7 @@ Most (but not all) endpoints require authentication. API endpoints requiring aut
## Accessing the API documentation
-The entire API is documented at http://pi.hole/api/docs and self-hosted by your Pi-hole to match 100% the API versions your local Pi-hole has. Using this locally served API documentation is preferred. In case you don't have Pi-hole installed yet, you can also check out the documentation for all branches online, e.g., [Pi-hole API documentation](https://ftl.pi-hole.net/master/docs/) (branch `master`). Similarly, you can check out the documentation for a specific other branches by replacing `master` with the corresponding branch name. <!-- markdownlint-disable-line no-bare-urls -->
+The entire API is documented at http://pi.hole/api/docs and self-hosted by your Pi-hole to match 100% the API version your local Pi-hole has. Using this locally served API documentation is preferred. In case you don't have Pi-hole installed yet, you can also check out the documentation for all branches online, e.g., [Pi-hole API documentation](https://ftl.pi-hole.net/master/docs/) (branch `master`). Similarly, you can check out the documentation for a specific other branches by replacing `master` with the corresponding branch name. <!-- markdownlint-disable-line no-bare-urls -->
## API endpoints
@@ -118,7 +118,7 @@ In contrast, errors have a uniform, predictable style to ease their programmatic
## HTTP methods used by this API
-Each HTTP request consists of a method that indicates the action to be performed on the identified resource. The relevant standards is [RFC 2616](https://tools.ietf.org/html/rfc2616). Though, RFC 2616 has been very clear in differentiating between the methods, complex wordings are a source of confusion for many users.
+Each HTTP request consists of a method that indicates the action to be performed on the identified resource. The relevant standard is [RFC 2616](https://tools.ietf.org/html/rfc2616). Though, RFC 2616 has been very clear in differentiating between the methods, complex wordings are a source of confusion for many users.
Pi-hole's API uses the methods like this:
@@ -168,7 +168,7 @@ Method | Description
`DELETE` operations are **idempotent**. If you `DELETE` a resource, its removed from the collection of resources. Repeatedly calling `DELETE` on that resource will not change the outcome however, calling `DELETE` on a resource a second time *may* return a 404 (NOT FOUND) since it was already removed.
???+ info "Example"
-Lets list down few URIs and their purpose to get better understanding when to use which method:
+Lets list down a few URIs and their purpose to get better understanding when to use which method:
Method + URI | Interpretation
---------------------|--------------------
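To make the idempotency note above concrete, here is a hedged sketch; the resource path is purely illustrative and not taken from this page.

```bash
# Deleting the same (hypothetical) resource twice: the first call removes it,
# the second cannot remove anything further and may answer 404, yet the
# resulting state of the collection is identical either way.
curl -s -o /dev/null -w "%{http_code}\n" -X DELETE \
  -H "X-FTL-SID: YOUR_SID" "http://pi.hole/api/example/resource"
curl -s -o /dev/null -w "%{http_code}\n" -X DELETE \
  -H "X-FTL-SID: YOUR_SID" "http://pi.hole/api/example/resource"
```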


@@ -69,7 +69,7 @@ Before | After
![Android Firefox Untrusted](../images/api/android-pihole-untrusted.png) | ![Android Firefox Trusted](../images/api/android-pihole-trusted.png)
![Android Chrome Untrusted](../images/api/android-chrome-untrusted.png) | ![Android Chrome Trusted](../images/api/android-chrome-trusted.png)
-1. Go to you device's settings
+1. Go to your device's settings
2. Navigate to "System Security" or "Security & location" (depending on your device)
3. Navigate to "Credential storage" or similar (depending on your device)


@@ -40,7 +40,7 @@ The long-term database contains several tables:
### Query Table
-Label | Type | Allowed to by empty | Content
+Label | Type | Allowed to be empty | Content
--- | --- | ---- | -----
`id` | integer | No | autoincrement ID for the table, only used by SQLite3, not by *FTL*DNS
`timestamp` | integer | No | Unix timestamp when this query arrived at *FTL*DNS (used as index)
@@ -73,7 +73,7 @@ If a query was influenced by a deny or allowlist entry, this field contains the
This table contains counter values integrated over the entire lifetime of the table
-Label | Type | Allowed to by empty | Content
+Label | Type | Allowed to be empty | Content
--- | --- | ---- | -----
`id` | integer | No | ID for the table used to select a counter (see below)
`value` | integer | No | Value of a given counter
@@ -170,14 +170,14 @@ The `queries` `VIEW` reads repeating properties from linked tables to reduce bot
#### `domain_by_id`
-Label | Type | Allowed to by empty | Content
+Label | Type | Allowed to be empty | Content
--- | --- | --- | ---
`id` | integer | No | ID of the entry. Used by `query_storage`
`domain` | text | No | Domain name
#### `client_by_id`
-Label | Type | Allowed to by empty | Content
+Label | Type | Allowed to be empty | Content
--- | --- | --- | ---
`id` | integer | No | ID of the entry. Used by `query_storage`
`ip` | text | No | Client IP address
@@ -185,14 +185,14 @@ Label | Type | Allowed to by empty | Content
#### `forward_by_id`
-Label | Type | Allowed to by empty | Content
+Label | Type | Allowed to be empty | Content
--- | --- | --- | ---
`id` | integer | No | ID of the entry. Used by `query_storage`
`forward` | text | No | Upstream server identifier (`<ipaddr>#<port>`)
#### `addinfo_by_id`
-Label | Type | Allowed to by empty | Content
+Label | Type | Allowed to be empty | Content
--- | --- | --- | ---
`id` | integer | No | ID of the entry. Used by `query_storage`
`type` | integer | No | Type of the `content` field
@@ -205,7 +205,7 @@ Valid `type` IDs are currently
### Example for interaction with the long-term query database
-In addition to the interactions the Pi-hole database API offers, you can also run your own SQL commands against the database. If you want to obtain the three most queries domains for all time, you could use
+In addition to the interactions the Pi-hole database API offers, you can also run your own SQL commands against the database. If you want to obtain the three most queried domains for all time, you could use
```bash
sqlite3 "/etc/pihole/pihole-FTL.db" "SELECT domain,count(domain) FROM queries WHERE (STATUS == 2 OR STATUS == 3) GROUP BY domain ORDER BY count(domain) DESC LIMIT 3"
```


@@ -15,7 +15,7 @@
If a DNS name exists in the cache, but its time-to-live (TTL) has expired only recently, the data will be used anyway (a refreshing from upstream is triggered). This can improve DNS query delays especially over unreliable or slow Internet connections. This feature comes at the expense of possibly sometimes returning out-of-date data and less efficient cache utilization, since old data cannot be flushed when its TTL expires, so the cache becomes mostly least-recently-used. To mitigate issues caused by massively outdated DNS replies, the maximum overaging of cached records is limited. We strongly recommend staying below 86400 (1 day) with this option. The default value of `dns.cache.optimizer` is one hour (`3600` seconds) which was carefully tested to provide a good balance between cache efficiency and query performance without having otherwise adverse effects. Our investigations revealed, that there has always been a grace time larger than an hour in addition to the TTL of DNS records, so this value should be safe for any practical use cases.
-## Cacheing of queries blocked upstream (`dns.cache.upstreamBlockedTTL`)
+## Caching of queries blocked upstream (`dns.cache.upstreamBlockedTTL`)
This setting allows you to specify the TTL used for queries blocked upstream. Once the TTL expires, the query will be forwarded to the upstream server again to check if the block is still valid. Defaults to caching for one day (86400 seconds). Setting `dns.cache.upstreamBlockedTTL` to zero disables caching of queries blocked upstream.
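As a hedged illustration of adjusting this setting on a current Pi-hole v6 installation (the CLI form and the TOML location are assumptions to verify locally, e.g. via `pihole-FTL --help`):

```bash
# Cache upstream-blocked queries for one hour instead of the default day.
sudo pihole-FTL --config dns.cache.upstreamBlockedTTL 3600
# Alternatively, edit the dns.cache section of /etc/pihole/pihole.toml and
# restart pihole-FTL afterwards.
```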


@@ -20,7 +20,7 @@ Users can configure the size of the resolver's name cache. The default is 150 na
#### Improve detection algorithm for determining the "best" forward destination
-The DNS forward destination determination algorithm in *FTL*DNS's is modified to be much less restrictive than the original algorithm in `dnsmasq`. We keep using the fastest responding server now for 1000 queries or 10 minutes (whatever happens earlier) instead of 50 queries or 10 seconds (default values in `dnsmasq`).
+The DNS forward destination determination algorithm in *FTL*DNS is modified to be much less restrictive than the original algorithm in `dnsmasq`. We keep using the fastest responding server now for 1000 queries or 10 minutes (whatever happens earlier) instead of 50 queries or 10 seconds (default values in `dnsmasq`).
We keep the exceptions, i.e., we try all possible forward destinations if `SERVFAIL` or `REFUSED` is received or if a timeout occurs.
Overall, this change has proven to greatly reduce the number of actually performed queries in typical Pi-hole environments. It may even be understood as being preferential in terms of privacy (as we send queries much less often to all servers).
This has been implemented in commit [d1c163e](https://github.com/pi-hole/FTL/commit/d1c163e499a5cd9f311610e9da1e9365bbf81e89).


@@ -167,7 +167,7 @@ Warnings commonly seen in `dnsmasq`'s log file (`/var/log/pihole/pihole.log`) an
!!! warning "overflow: `NUMBER` log entries lost" !!! warning "overflow: `NUMBER` log entries lost"
When using asynchronous logging and the disk is too slow, we can loose log lines during busy times. This can be avoided by decreasing the system load or switching to synchronous logging. Note that synchronous logging has the disadvantage of blocking DNS resolution when waiting for the log to be written to disk. When using asynchronous logging and the disk is too slow, we can lose log lines during busy times. This can be avoided by decreasing the system load or switching to synchronous logging. Note that synchronous logging has the disadvantage of blocking DNS resolution when waiting for the log to be written to disk.
!!! warning "failed to create listening socket for `ADDRESS`: `MSG`" !!! warning "failed to create listening socket for `ADDRESS`: `MSG`"

Binary file not shown (image; size changed from 19 KiB to 13 KiB).


@@ -20,7 +20,7 @@ In a normal setup this results in a “No such name” response from your DNS se
Link to [Chromium's source code](https://chromium.googlesource.com/chromium/src/+/refs/heads/main/chrome/browser/intranet_redirect_detector.cc#132) explaining the function.
-### Pi-hole update fails due to repository changed it's 'Suite' value
+### Pi-hole update fails due to repository changed its 'Suite' value
This happens after a manual OS upgrade to the next major version on deb based systems. A typical message is
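One commonly suggested remedy for this class of `apt` release-info error, offered here as a general hint rather than as the fix this particular article goes on to describe, is to let `apt` accept the changed release information:

```bash
# Accept the repository's changed 'Suite' value so apt-get update succeeds again.
sudo apt-get update --allow-releaseinfo-change
```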


@@ -117,7 +117,7 @@ Gravity is one of the most important scripts of Pi-hole. Its main purpose is to
* It will determine Internet connectivity, and give time for `pihole-FTL` to be resolvable on low-end systems if has just been restarted
* It extracts all URLs and domains from the `adlists` table in [`/etc/pihole/gravity.db`](../database/domain-database/index.md)
* It runs through each URL, downloading it if necessary
-* `curl` checks the servers `Last-Modified` header to ensure it is getting a newer version
+* `curl` checks the server's `Last-Modified` header to ensure it is getting a newer version
* It will attempt to parse the file into a domains-only format if necessary
* Lists are merged, comments removed, sorted uniquely and stored in the `gravity` table of [`/etc/pihole/gravity.db`](../database/domain-database/index.md)
* Gravity cleans up temporary content and reloads the DNS server
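The `Last-Modified` check mentioned in the list above relies on standard `curl` behaviour and can be reproduced in isolation; the URL below is a placeholder.

```bash
# Download a list only if the remote copy is newer than the local file:
# -z/--time-cond with a filename sends If-Modified-Since based on that
# file's modification time, so an unchanged list is not re-downloaded.
curl -sS -o adlist.txt -z adlist.txt "https://example.com/hosts.txt"
```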


@@ -37,7 +37,7 @@ Pi-hole only supports actively maintained versions of these systems.
<!-- markdownlint-disable code-block-style -->
!!! info
Pi-hole may be able to install and run on variants of the above, but we cannot test all of them.
-It's possible that that the installation may still fail due to an unsupported configuration or specific OS version.
+It's possible that the installation may still fail due to an unsupported configuration or specific OS version.
Also, if you are using an operating system not on this list Pi-hole may not work.
@@ -104,7 +104,7 @@ firewall-cmd --reload
#### ufw
-ufw stores all rules persistent, so you just need to execute the commands below.
+ufw stores all rules persistently, so you just need to execute the commands below.
IPv4:


@@ -1,6 +1,6 @@
# Approximative matching
-You may or not be know `agrep`. It is basically a "forgiving" `grep` and is, for instance, used for searching through (offline) dictionaries. It is tolerant against errors (up to degree you specify). It may be beneficial is you want to match against domains where you don't really know the pattern. It is just an idea, we will have to see if it is actually useful.
+You may or not know `agrep`. It is basically a "forgiving" `grep` and is, for instance, used for searching through (offline) dictionaries. It is tolerant against errors (up to degree you specify). It may be beneficial is you want to match against domains where you don't really know the pattern. It is just an idea, we will have to see if it is actually useful.
This is a somewhat complicated topic, we'll approach it by examples as it is very complicated to get the head around it by just listening to the specifications.
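As a quick illustration of the idea (assuming the classic `agrep`/`tre-agrep` command-line syntax, where a numeric flag sets the number of allowed errors, and a hypothetical `domains.txt` input file):

```bash
# Match lines that differ from the pattern by at most two single-character
# errors, so a slightly misspelled domain would still be found.
agrep -2 "doubleclick.net" domains.txt
```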


@@ -8,7 +8,7 @@ Our implementation is light and fast as each domain is only checked once for a m
*FTL*DNS uses a specific hierarchy to ensure regex filters work as you expect them to. Allowlisting always has priority over denylisting.
There are two locations where regex filters are important:
-1. On loading the blocking domains form the `gravity` database table, *FTL*DNS skips not only exactly allowlisted domains but also those that match enabled allowlist regex filters.
+1. On loading the blocking domains from the `gravity` database table, *FTL*DNS skips not only exactly allowlisted domains but also those that match enabled allowlist regex filters.
2. When a queried domain matches a denylist regex filter, the query will *not* be blocked if the domain *also* matches an exact or a regex allowlist entry.
## How to use regular expressions for filtering domains


@@ -6,7 +6,7 @@ In order to ease regex development, we added a regex test mode to `pihole-FTL` w
pihole-FTL regex-test doubleclick.net
```
-(test `doubleclick.net` against all regexs in the gravity database), or
+(test `doubleclick.net` against all regexes in the gravity database), or
```bash
pihole-FTL regex-test doubleclick.net "(^|\.)double"