Merge branch 'master' into fix/etag
Some checks are pending
Check for merge conflicts / main (push) Waiting to run
CodeQL / Analyze (pull_request) Waiting to run
Test Supported Distributions / smoke-tests (pull_request) Waiting to run
Test Supported Distributions / distro-test (alpine_3_21) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (alpine_3_22) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (alpine_3_23) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (centos_10) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (centos_9) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (debian_11) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (debian_12) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (debian_13) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (fedora_40) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (fedora_41) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (fedora_42) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (fedora_43) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (ubuntu_20) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (ubuntu_22) (pull_request) Blocked by required conditions
Test Supported Distributions / distro-test (ubuntu_24) (pull_request) Blocked by required conditions
Check for merge conflicts / main (pull_request_target) Waiting to run
.github/workflows/codeql-analysis.yml (8 changes)

@@ -25,16 +25,16 @@ jobs:
     steps:
       -
         name: Checkout repository
-        uses: actions/checkout@v4.2.2
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd #v6.0.2
       # Initializes the CodeQL tools for scanning.
       -
         name: Initialize CodeQL
-        uses: github/codeql-action/init@v3
+        uses: github/codeql-action/init@9e907b5e64f6b83e7804b09294d44122997950d6 #v4.32.3
         with:
           languages: 'python'
       -
         name: Autobuild
-        uses: github/codeql-action/autobuild@v3
+        uses: github/codeql-action/autobuild@9e907b5e64f6b83e7804b09294d44122997950d6 #v4.32.3
       -
         name: Perform CodeQL Analysis
-        uses: github/codeql-action/analyze@v3
+        uses: github/codeql-action/analyze@9e907b5e64f6b83e7804b09294d44122997950d6 #v4.32.3
.github/workflows/merge-conflict.yml (2 changes)

@@ -13,7 +13,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Check if PRs are have merge conflicts
-        uses: eps1lon/actions-label-merge-conflict@v3.0.3
+        uses: eps1lon/actions-label-merge-conflict@1df065ebe6e3310545d4f4c4e862e43bdca146f0 #v3.0.3
         with:
           dirtyLabel: "PR: Merge Conflict"
           repoToken: "${{ secrets.GITHUB_TOKEN }}"
.github/workflows/stale.yml (4 changes)

@@ -17,7 +17,7 @@ jobs:
       issues: write

     steps:
-      - uses: actions/stale@v9.1.0
+      - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d #v10.1.1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          days-before-stale: 30
@@ -40,7 +40,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
       - name: Checkout
-        uses: actions/checkout@v4.2.2
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd #v6.0.2
       - name: Remove 'stale' label
         run: gh issue edit ${{ github.event.issue.number }} --remove-label ${{ env.stale_label }}
         env:
.github/workflows/stale_pr.yml (2 changes)

@@ -17,7 +17,7 @@ jobs:
       pull-requests: write

     steps:
-      - uses: actions/stale@v9.1.0
+      - uses: actions/stale@997185467fa4f803885201cee163a9f38240193d #v10.1.1
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
          # Do not automatically mark PR/issue as stale
.github/workflows/sync-back-to-dev.yml (2 changes)

@@ -33,7 +33,7 @@ jobs:
     name: Syncing branches
     steps:
       - name: Checkout
-        uses: actions/checkout@v4.2.2
+        uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd #v6.0.2
       - name: Opening pull request
         run: gh pr create -B development -H master --title 'Sync master back into development' --body 'Created by Github action' --label 'internal'
         env:
.github/workflows/test.yml (19 changes)

@@ -18,7 +18,7 @@ jobs:
     runs-on: ubuntu-latest
     steps:
      - name: Checkout repository
-       uses: actions/checkout@v4.2.2
+       uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd #v6.0.2
        with:
          fetch-depth: 0 # Differential ShellCheck requires full git history

@@ -31,25 +31,25 @@ jobs:
          [[ $FAIL == 1 ]] && exit 1 || echo "Scripts are executable!"

      - name: Differential ShellCheck
-       uses: redhat-plumbers-in-action/differential-shellcheck@v5
+       uses: redhat-plumbers-in-action/differential-shellcheck@d965e66ec0b3b2f821f75c8eff9b12442d9a7d1e #v5.5.6
        with:
          severity: warning
          display-engine: sarif-fmt


      - name: Spell-Checking
-       uses: codespell-project/actions-codespell@master
+       uses: codespell-project/actions-codespell@8f01853be192eb0f849a5c7d721450e7a467c579 #v2.2
        with:
          ignore_words_file: .codespellignore

      - name: Get editorconfig-checker
-       uses: editorconfig-checker/action-editorconfig-checker@main # tag v1.0.0 is really out of date
+       uses: editorconfig-checker/action-editorconfig-checker@4b6cd6190d435e7e084fb35e36a096e98506f7b9 #v2.1.0

      - name: Run editorconfig-checker
        run: editorconfig-checker

      - name: Check python code formatting with black
-       uses: psf/black@stable
+       uses: psf/black@6305bf1ae645ab7541be4f5028a86239316178eb #26.1.0
        with:
          src: "./test"
          options: "--check --diff --color"
@@ -65,6 +65,7 @@ jobs:
          [
            debian_11,
            debian_12,
+           debian_13,
            ubuntu_20,
            ubuntu_22,
            ubuntu_24,
@@ -73,15 +74,19 @@ jobs:
            fedora_40,
            fedora_41,
            fedora_42,
+           fedora_43,
+           alpine_3_21,
+           alpine_3_22,
+           alpine_3_23,
          ]
    env:
      DISTRO: ${{matrix.distro}}
    steps:
      - name: Checkout repository
-       uses: actions/checkout@v4.2.2
+       uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd #v6.0.2

      - name: Set up Python
-       uses: actions/setup-python@v5.6.0
+       uses: actions/setup-python@a309ff8b426b58ec0e2a45f0f869d46889d02405 #v6.2.0
        with:
          python-version: "3.13"

@@ -1,2 +1,6 @@
 external-sources=true # allow shellcheck to read external sources
 disable=SC3043 #disable SC3043: In POSIX sh, local is undefined.
+enable=useless-use-of-cat # disabled by default as of shellcheck 0.11.0
+enable=avoid-negated-conditions # avoid-negated-conditions is optional as of shellcheck 0.11.0
+enable=require-variable-braces
+enable=deprecate-which
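For reference, here is an illustrative snippet (not from the PR) written to pass two of the optional checks enabled above, `require-variable-braces` and `deprecate-which`:

```shell
#!/bin/sh
# Illustrative only: a made-up snippet that satisfies the newly enabled checks.

greeting="hello"
# require-variable-braces: reference variables as ${greeting}, not $greeting
printf '%s\n' "${greeting}"

# deprecate-which: use 'command -v' instead of 'which' to locate a program
if command -v sh >/dev/null 2>&1; then
    found=true
else
    found=false
fi
```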
@@ -150,7 +150,6 @@ LoginAPI() {

     # Try to login again until the session is valid
     while [ ! "${validSession}" = true ] ; do
-        echo "Authentication failed. Please enter your Pi-hole password"

         # Print the error message if there is one
         if [ ! "${sessionError}" = "null" ] && [ "${1}" = "verbose" ]; then
@@ -161,6 +160,14 @@ LoginAPI() {
             echo "Error: ${sessionMessage}"
         fi

+        if [ "${1}" = "verbose" ]; then
+            # If we are not in verbose mode, no need to print the error message again
+            echo "Please enter your Pi-hole password"
+        else
+            echo "Authentication failed. Please enter your Pi-hole password"
+        fi
+
         # secretly read the password
         secretRead; printf '\n'

@@ -183,13 +190,20 @@ Authentication() {
         echo "No response from FTL server. Please check connectivity"
         exit 1
     fi
-    # obtain validity, session ID and sessionMessage from session response
-    validSession=$(echo "${sessionResponse}"| jq .session.valid 2>/dev/null)
-    SID=$(echo "${sessionResponse}"| jq --raw-output .session.sid 2>/dev/null)
-    sessionMessage=$(echo "${sessionResponse}"| jq --raw-output .session.message 2>/dev/null)
-
-    # obtain the error message from the session response
-    sessionError=$(echo "${sessionResponse}"| jq --raw-output .error.message 2>/dev/null)
+    # obtain validity, session ID, sessionMessage and error message from
+    # session response, apply default values if none returned
+    result=$(echo "${sessionResponse}" | jq -r '
+        (.session.valid // false),
+        (.session.sid // null),
+        (.session.message // null),
+        (.error.message // null)
+    ' 2>/dev/null)
+
+    validSession=$(echo "${result}" | sed -n '1p')
+    SID=$(echo "${result}" | sed -n '2p')
+    sessionMessage=$(echo "${result}" | sed -n '3p')
+    sessionError=$(echo "${result}" | sed -n '4p')
+
     if [ "${1}" = "verbose" ]; then
         if [ "${validSession}" = true ]; then
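The rewritten Authentication() splits one multi-line jq result into four variables by line number with `sed -n 'Np'`. A minimal standalone sketch, with the jq output hardcoded to assumed values for a valid session (`abc123` is a made-up SID):

```shell
#!/bin/sh
# 'result' stands in for what the jq filter above would print for a valid
# session: one value per line, in a fixed order.
result='true
abc123
null
null'

# Pick each field by its line number
validSession=$(echo "${result}" | sed -n '1p')
SID=$(echo "${result}" | sed -n '2p')
sessionMessage=$(echo "${result}" | sed -n '3p')
sessionError=$(echo "${result}" | sed -n '4p')
```

The `//` alternative operator in the jq filter supplies a default whenever a field is missing, so all four lines are always emitted and the line numbers stay stable.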
@@ -353,12 +367,9 @@ apiFunc() {
         if [ "${verbosity}" = "verbose" ]; then
             echo "Data:"
         fi
-        if command -v jq >/dev/null && echo "${data}" | jq . >/dev/null 2>&1; then
-            echo "${data}" | jq .
-        else
-            echo "${data}"
-        fi
+        # Attempt to print the data with jq, if it is not valid JSON, or not installed
+        # then print the plain text.
+        echo "${data}" | jq . 2>/dev/null || echo "${data}"

         # Delete the session
         LogoutAPI "${verbosity}"
@@ -150,4 +150,10 @@ upgrade_gravityDB(){
         pihole-FTL sqlite3 -ni "${database}" < "${scriptPath}/18_to_19.sql"
         version=19
     fi
+    if [[ "$version" == "19" ]]; then
+        # Update views to use new allowlist/denylist names
+        echo -e "  ${INFO} Upgrading gravity database from version 19 to 20"
+        pihole-FTL sqlite3 -ni "${database}" < "${scriptPath}/19_to_20.sql"
+        version=20
+    fi
 }
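The version-gated blocks in upgrade_gravityDB() chain migrations one step at a time, so a database several versions behind is walked forward through every intermediate schema. A minimal sketch of the pattern, where `apply_migration` is a hypothetical stand-in for the real `pihole-FTL sqlite3` call:

```shell
#!/bin/sh
# Sketch of the chained-migration pattern. Each block only fires when the
# database sits at exactly the version it upgrades from, and bumps the
# version so the next block can fire in the same run.
version=18

apply_migration() {
    # Placeholder for: pihole-FTL sqlite3 -ni "${database}" < "${scriptPath}/$1"
    echo "applying $1"
}

if [ "${version}" = "18" ]; then
    apply_migration "18_to_19.sql"
    version=19
fi
if [ "${version}" = "19" ]; then
    apply_migration "19_to_20.sql"
    version=20
fi
```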
advanced/Scripts/database_migration/gravity/19_to_20.sql (new file, 43 lines)

@@ -0,0 +1,43 @@
+.timeout 30000
+
+BEGIN TRANSACTION;
+
+DROP VIEW vw_whitelist;
+CREATE VIEW vw_allowlist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+    FROM domainlist
+    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
+    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
+    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
+    AND domainlist.type = 0
+    ORDER BY domainlist.id;
+
+DROP VIEW vw_blacklist;
+CREATE VIEW vw_denylist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+    FROM domainlist
+    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
+    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
+    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
+    AND domainlist.type = 1
+    ORDER BY domainlist.id;
+
+DROP VIEW vw_regex_whitelist;
+CREATE VIEW vw_regex_allowlist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+    FROM domainlist
+    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
+    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
+    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
+    AND domainlist.type = 2
+    ORDER BY domainlist.id;
+
+DROP VIEW vw_regex_blacklist;
+CREATE VIEW vw_regex_denylist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+    FROM domainlist
+    LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
+    LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
+    WHERE domainlist.enabled = 1 AND (domainlist_by_group.group_id IS NULL OR "group".enabled = 1)
+    AND domainlist.type = 3
+    ORDER BY domainlist.id;
+
+UPDATE info SET value = 20 WHERE property = 'version';
+
+COMMIT;
@@ -1,83 +0,0 @@
-#!/usr/bin/env bash
-
-# Pi-hole: A black hole for Internet advertisements
-# (c) 2019 Pi-hole, LLC (https://pi-hole.net)
-# Network-wide ad blocking via your own hardware.
-#
-# ARP table interaction
-#
-# This file is copyright under the latest version of the EUPL.
-# Please see LICENSE file for your rights under this license.
-
-coltable="/opt/pihole/COL_TABLE"
-if [[ -f ${coltable} ]]; then
-    # shellcheck source="./advanced/Scripts/COL_TABLE"
-    source ${coltable}
-fi
-
-readonly PI_HOLE_SCRIPT_DIR="/opt/pihole"
-utilsfile="${PI_HOLE_SCRIPT_DIR}/utils.sh"
-# shellcheck source=./advanced/Scripts/utils.sh
-source "${utilsfile}"
-
-# Determine database location
-DBFILE=$(getFTLConfigValue "files.database")
-if [ -z "$DBFILE" ]; then
-    DBFILE="/etc/pihole/pihole-FTL.db"
-fi
-
-flushARP(){
-    local output
-    if [[ "${args[1]}" != "quiet" ]]; then
-        echo -ne "  ${INFO} Flushing network table ..."
-    fi
-
-    # Stop FTL to prevent database access
-    if ! output=$(service pihole-FTL stop 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to stop FTL"
-        echo "  Output: ${output}"
-        return 1
-    fi
-
-    # Truncate network_addresses table in pihole-FTL.db
-    # This needs to be done before we can truncate the network table due to
-    # foreign key constraints
-    if ! output=$(pihole-FTL sqlite3 -ni "${DBFILE}" "DELETE FROM network_addresses" 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to truncate network_addresses table"
-        echo "  Database location: ${DBFILE}"
-        echo "  Output: ${output}"
-        return 1
-    fi
-
-    # Truncate network table in pihole-FTL.db
-    if ! output=$(pihole-FTL sqlite3 -ni "${DBFILE}" "DELETE FROM network" 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to truncate network table"
-        echo "  Database location: ${DBFILE}"
-        echo "  Output: ${output}"
-        return 1
-    fi
-
-    # Flush ARP cache of the host
-    if ! output=$(ip -s -s neigh flush all 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to flush ARP cache"
-        echo "  Output: ${output}"
-        return 1
-    fi
-
-    # Start FTL again
-    if ! output=$(service pihole-FTL restart 2>&1); then
-        echo -e "${OVER}  ${CROSS} Failed to restart FTL"
-        echo "  Output: ${output}"
-        return 1
-    fi
-
-    if [[ "${args[1]}" != "quiet" ]]; then
-        echo -e "${OVER}  ${TICK} Flushed network table"
-    fi
-}
-
-args=("$@")
-
-case "${args[0]}" in
-    "arpflush" ) flushARP;;
-esac
@@ -41,6 +41,22 @@ warning1() {
 }

 checkout() {

+    local skipFTL additionalFlag
+    skipFTL=false
+    # Check arguments
+    for var in "$@"; do
+        case "$var" in
+            "--skipFTL") skipFTL=true ;;
+        esac
+    done
+
+    if [ "${skipFTL}" == true ]; then
+        additionalFlag="--skipFTL"
+    else
+        additionalFlag=""
+    fi
+
     local corebranches
     local webbranches

@@ -235,7 +251,7 @@ checkout() {
     # Force updating everything
     if [[ ! "${1}" == "web" && ! "${1}" == "ftl" ]]; then
         echo -e "  ${INFO} Running installer to upgrade your installation"
-        if "${PI_HOLE_FILES_DIR}/automated install/basic-install.sh" --unattended; then
+        if "${PI_HOLE_FILES_DIR}/automated install/basic-install.sh" --unattended ${additionalFlag}; then
             exit 0
         else
             echo -e "  ${COL_RED} Error: Unable to complete update, please contact support${COL_NC}"
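The two hunks above scan `$@` for a known flag and forward it to the installer. The pattern can be sketched on its own; `install_cmd` is a hypothetical stand-in for `basic-install.sh`, and the arguments are fixed for the demo:

```shell
#!/bin/sh
# Sketch of the flag-forwarding pattern added to checkout().
set -- checkout core --skipFTL   # fixed demo arguments

skipFTL=false
for var in "$@"; do
    case "$var" in
        "--skipFTL") skipFTL=true ;;
    esac
done

if [ "${skipFTL}" = true ]; then
    additionalFlag="--skipFTL"
else
    additionalFlag=""
fi

# ${additionalFlag} is left unquoted on purpose below, so an empty flag
# expands to no argument at all rather than an empty string argument.
install_cmd() { echo "install called with: $*"; }
install_cmd --unattended ${additionalFlag}
```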
@@ -375,22 +375,6 @@ check_firewalld() {
                 log_write "${CROSS} ${COL_RED} Allow Service: ${i}${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_FIREWALLD})"
             fi
         done
-        # check for custom FTL FirewallD zone
-        local firewalld_zones
-        firewalld_zones=$(firewall-cmd --get-zones)
-        if [[ "${firewalld_zones}" =~ "ftl" ]]; then
-            log_write "${TICK} ${COL_GREEN}FTL Custom Zone Detected${COL_NC}";
-            # check FTL custom zone interface: lo
-            local firewalld_ftl_zone_interfaces
-            firewalld_ftl_zone_interfaces=$(firewall-cmd --zone=ftl --list-interfaces)
-            if [[ "${firewalld_ftl_zone_interfaces}" =~ "lo" ]]; then
-                log_write "${TICK} ${COL_GREEN} Local Interface Detected${COL_NC}";
-            else
-                log_write "${CROSS} ${COL_RED} Local Interface Not Detected${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_FIREWALLD})"
-            fi
-        else
-            log_write "${CROSS} ${COL_RED}FTL Custom Zone Not Detected${COL_NC} (${FAQ_HARDWARE_REQUIREMENTS_FIREWALLD})"
-        fi
     fi
 else
     log_write "${TICK} ${COL_GREEN}Firewalld service not detected${COL_NC}";
@@ -593,18 +577,21 @@ check_required_ports() {
     # Add port 53
     ports_configured+=("53")

+    local protocol_type port_number service_name
     # Now that we have the values stored,
     for i in "${!ports_in_use[@]}"; do
         # loop through them and assign some local variables
-        local service_name
-        service_name=$(echo "${ports_in_use[$i]}" | awk '{gsub(/users:\(\("/,"",$7);gsub(/".*/,"",$7);print $7}')
-        local protocol_type
-        protocol_type=$(echo "${ports_in_use[$i]}" | awk '{print $1}')
-        local port_number
-        port_number="$(echo "${ports_in_use[$i]}" | awk '{print $5}')" # | awk '{gsub(/^.*:/,"",$5);print $5}')
+        read -r protocol_type port_number service_name <<< "$(
+            awk '{
+                p=$1; n=$5; s=$7
+                gsub(/users:\(\("/,"",s)
+                gsub(/".*/,"",s)
+                print p, n, s
+            }' <<< "${ports_in_use[$i]}"
+        )"

         # Check if the right services are using the right ports
-        if [[ ${ports_configured[*]} =~ $(echo "${port_number}" | rev | cut -d: -f1 | rev) ]]; then
+        if [[ ${ports_configured[*]} =~ ${port_number##*:} ]]; then
             compare_port_to_service_assigned "${ftl}" "${service_name}" "${protocol_type}:${port_number}"
         else
             # If it's not a default port that Pi-hole needs, just print it out for the user to see
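The rewritten loop extracts all three fields from one line of socket-listing output in a single awk pass, instead of three separate awk invocations. A standalone sketch on a made-up sample line (the field positions assume the `ss`-style layout the debug script parses):

```shell
#!/bin/sh
# One awk pass pulls protocol ($1), local address:port ($5) and the process
# name buried inside the users:(("...")) column ($7). The sample line is
# fabricated for illustration.
line='udp UNCONN 0 0 0.0.0.0:53 0.0.0.0:* users:(("pihole-FTL",pid=123,fd=8))'

parsed=$(printf '%s\n' "${line}" | awk '{
    p=$1; n=$5; s=$7
    gsub(/users:\(\("/,"",s)   # strip the users:((" prefix
    gsub(/".*/,"",s)           # strip everything from the closing quote on
    print p, n, s
}')

# Split the three space-separated values back into variables
set -- ${parsed}
protocol_type=$1; port_number=$2; service_name=$3

# ${port_number##*:} keeps only the text after the last colon, i.e. the port
echo "${protocol_type} ${port_number##*:} ${service_name}"
```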
@@ -672,7 +659,7 @@ dig_at() {
         local record_type="A"
     fi

-    # Find a random blocked url that has not been whitelisted and is not ABP style.
+    # Find a random blocked url that has not been allowlisted and is not ABP style.
     # This helps emulate queries to different domains that a user might query
     # It will also give extra assurance that Pi-hole is correctly resolving and blocking domains
     local random_url
@@ -722,7 +709,7 @@ dig_at() {
     fi

     # Check if Pi-hole can use itself to block a domain
-    if local_dig="$(dig +tries=1 +time=2 -"${protocol}" "${random_url}" @"${local_address}" "${record_type}")"; then
+    if local_dig="$(dig +tries=1 +time=2 -"${protocol}" "${random_url}" @"${local_address}" "${record_type}" -p "$(get_ftl_conf_value "dns.port")")"; then
         # If it can, show success
         if [[ "${local_dig}" == *"status: NOERROR"* ]]; then
             local_dig="NOERROR"
@@ -778,7 +765,7 @@ process_status(){
         :
     else
         # non-Docker system
-        if service "${i}" status | grep -E 'is\srunning' &> /dev/null; then
+        if service "${i}" status | grep -q -E 'is\srunning|started'; then
             status_of_process="active"
         else
             status_of_process="inactive"
@@ -816,42 +803,27 @@ ftl_full_status(){

 make_array_from_file() {
     local filename="${1}"
+
+    # If the file is a directory do nothing since it cannot be parsed
+    [[ -d "${filename}" ]] && return
+
     # The second argument can put a limit on how many line should be read from the file
     # Since some of the files are so large, this is helpful to limit the output
     local limit=${2}
     # A local iterator for testing if we are at the limit above
     local i=0
-    # If the file is a directory
-    if [[ -d "${filename}" ]]; then
-        # do nothing since it cannot be parsed
-        :
-    else
-        # Otherwise, read the file line by line
-        while IFS= read -r line;do
-            # Otherwise, strip out comments and blank lines
-            new_line=$(echo "${line}" | sed -e 's/^\s*#.*$//' -e '/^$/d')
-            # If the line still has content (a non-zero value)
-            if [[ -n "${new_line}" ]]; then
-                # If the string contains "### CHANGED", highlight this part in red
-                if [[ "${new_line}" == *"### CHANGED"* ]]; then
-                    new_line="${new_line//### CHANGED/${COL_RED}### CHANGED${COL_NC}}"
-                fi
-                # Finally, write this line to the log
-                log_write "    ${new_line}"
-            fi
-            # Increment the iterator +1
-            i=$((i+1))
-            # but if the limit of lines we want to see is exceeded
-            if [[ -z ${limit} ]]; then
-                # do nothing
-                :
-            elif [[ $i -eq ${limit} ]]; then
-                break
-            fi
-        done < "${filename}"
-    fi
+    # Process the file, strip out comments and blank lines
+    local processed
+    processed=$(sed -e 's/^\s*#.*$//' -e '/^$/d' "${filename}")
+
+    while IFS= read -r line; do
+        # If the string contains "### CHANGED", highlight this part in red
+        log_write "    ${line//### CHANGED/${COL_RED}### CHANGED${COL_NC}}"
+        ((i++))
+        # stop if the limit of lines we want to see is reached
+        [[ -n ${limit} && $i -eq ${limit} ]] && break
+    done <<< "$processed"
 }

 parse_file() {
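The rewritten make_array_from_file() filters first, then loops: comments and blank lines are stripped with one sed call, so the loop only counts lines that will actually be logged. A standalone sketch with inlined input instead of a file, and plain echo standing in for log_write (`[[:space:]]` is the portable spelling of the script's GNU-sed `\s`):

```shell
#!/bin/sh
# Sketch of the filter-then-limit loop: sed drops comments and blank lines,
# then at most ${limit} surviving lines are printed.
input='# a comment

keep-one
keep-two
keep-three'

limit=2
i=0
printf '%s\n' "${input}" | sed -e 's/^[[:space:]]*#.*$//' -e '/^$/d' |
while IFS= read -r line; do
    echo "    ${line}"
    i=$((i+1))
    # stop once the limit of lines is reached
    [ -n "${limit}" ] && [ "${i}" -eq "${limit}" ] && break
done
```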
@@ -924,38 +896,38 @@ list_files_in_dir() {
     fi

     # Store the files found in an array
-    mapfile -t files_found < <(ls "${dir_to_parse}")
+    local files_found=("${dir_to_parse}"/*)
     # For each file in the array,
     for each_file in "${files_found[@]}"; do
-        if [[ -d "${dir_to_parse}/${each_file}" ]]; then
+        if [[ -d "${each_file}" ]]; then
             # If it's a directory, do nothing
             :
-        elif [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_DEBUG_LOG}" ]] || \
-            [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_RAW_BLOCKLIST_FILES}" ]] || \
-            [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_INSTALL_LOG_FILE}" ]] || \
-            [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_LOG}" ]] || \
-            [[ "${dir_to_parse}/${each_file}" == "${PIHOLE_LOG_GZIPS}" ]]; then
+        elif [[ "${each_file}" == "${PIHOLE_DEBUG_LOG}" ]] || \
+            [[ "${each_file}" == "${PIHOLE_RAW_BLOCKLIST_FILES}" ]] || \
+            [[ "${each_file}" == "${PIHOLE_INSTALL_LOG_FILE}" ]] || \
+            [[ "${each_file}" == "${PIHOLE_LOG}" ]] || \
+            [[ "${each_file}" == "${PIHOLE_LOG_GZIPS}" ]]; then
             :
         elif [[ "${dir_to_parse}" == "${DNSMASQ_D_DIRECTORY}" ]]; then
             # in case of the dnsmasq directory include all files in the debug output
-            log_write "\\n${COL_GREEN}$(ls -lhd "${dir_to_parse}"/"${each_file}")${COL_NC}"
-            make_array_from_file "${dir_to_parse}/${each_file}"
+            log_write "\\n${COL_GREEN}$(ls -lhd "${each_file}")${COL_NC}"
+            make_array_from_file "${each_file}"
         else
             # Then, parse the file's content into an array so each line can be analyzed if need be
             for i in "${!REQUIRED_FILES[@]}"; do
-                if [[ "${dir_to_parse}/${each_file}" == "${REQUIRED_FILES[$i]}" ]]; then
+                if [[ "${each_file}" == "${REQUIRED_FILES[$i]}" ]]; then
                     # display the filename
-                    log_write "\\n${COL_GREEN}$(ls -lhd "${dir_to_parse}"/"${each_file}")${COL_NC}"
+                    log_write "\\n${COL_GREEN}$(ls -lhd "${each_file}")${COL_NC}"
                     # Check if the file we want to view has a limit (because sometimes we just need a little bit of info from the file, not the entire thing)
-                    case "${dir_to_parse}/${each_file}" in
+                    case "${each_file}" in
                         # If it's Web server log, give the first and last 25 lines
-                        "${PIHOLE_WEBSERVER_LOG}") head_tail_log "${dir_to_parse}/${each_file}" 25
+                        "${PIHOLE_WEBSERVER_LOG}") head_tail_log "${each_file}" 25
                             ;;
                         # Same for the FTL log
-                        "${PIHOLE_FTL_LOG}") head_tail_log "${dir_to_parse}/${each_file}" 35
+                        "${PIHOLE_FTL_LOG}") head_tail_log "${each_file}" 35
                             ;;
                         # parse the file into an array in case we ever need to analyze it line-by-line
-                        *) make_array_from_file "${dir_to_parse}/${each_file}";
+                        *) make_array_from_file "${each_file}";
                     esac
                 else
                     # Otherwise, do nothing since it's not a file needed for Pi-hole so we don't care about it
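Editor's note (not part of the diff): the hunk above swaps `mapfile -t files_found < <(ls ...)` for a bash glob, which is why every later `"${dir_to_parse}/${each_file}"` shrinks to `"${each_file}"`. `ls` prints bare file names, while a glob expands to full paths. A minimal sketch (file names are invented):

```shell
#!/usr/bin/env bash
# Sketch: a glob expands to full paths, while `ls` emits bare file names.
dir=$(mktemp -d)
touch "${dir}/01-pihole.conf" "${dir}/02-custom.conf"

# Old approach: names only, so callers must re-prepend "${dir}/"
mapfile -t by_ls < <(ls "${dir}")

# New approach: each array element is already a full path
by_glob=("${dir}"/*)

printf '%s\n' "${by_ls[0]}"    # 01-pihole.conf
printf '%s\n' "${by_glob[0]}"  # e.g. /tmp/tmp.XXXX/01-pihole.conf

rm -r "${dir}"
```

The glob also avoids parsing `ls` output, which breaks on unusual file names.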
@@ -991,6 +963,7 @@ head_tail_log() {
     local filename="${1}"
     # The number of lines to use for head and tail
     local qty="${2}"
+    local filebasename="${filename##*/}"
     local head_line
     local tail_line
     # Put the current Internal Field Separator into another variable so it can be restored later
@@ -999,14 +972,14 @@ head_tail_log() {
     IFS=$'\r\n'
     local log_head=()
     mapfile -t log_head < <(head -n "${qty}" "$(unknown)")
-    log_write "   ${COL_CYAN}-----head of $(basename "$(unknown)")------${COL_NC}"
+    log_write "   ${COL_CYAN}-----head of ${filebasename}------${COL_NC}"
     for head_line in "${log_head[@]}"; do
         log_write "   ${head_line}"
     done
     log_write ""
     local log_tail=()
     mapfile -t log_tail < <(tail -n "${qty}" "$(unknown)")
-    log_write "   ${COL_CYAN}-----tail of $(basename "$(unknown)")------${COL_NC}"
+    log_write "   ${COL_CYAN}-----tail of ${filebasename}------${COL_NC}"
     for tail_line in "${log_tail[@]}"; do
         log_write "   ${tail_line}"
     done
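Editor's note (not part of the diff): the new `filebasename` variable replaces two `basename` command substitutions with a single parameter expansion. `${filename##*/}` deletes the longest prefix matching `*/`, i.e. everything up to and including the last slash:

```shell
#!/usr/bin/env bash
# Sketch: ${var##*/} behaves like basename(1), without forking a subshell
filename="/var/log/pihole/FTL.log"
filebasename="${filename##*/}"
echo "${filebasename}"  # FTL.log
```

Computing it once also avoids repeating the expansion in both `log_write` calls.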
@@ -1033,6 +1006,24 @@ show_db_entries() {
     )

     for line in "${entries[@]}"; do
+        # Use gray color for "no". Normal color for "yes"
+        line=${line//--no---/${COL_GRAY} no ${COL_NC}}
+        line=${line//--yes--/ yes }
+
+        # Use red for "deny" and green for "allow"
+        if [ "$title" = "Domainlist" ]; then
+            line=${line//regex-deny/${COL_RED}regex-deny${COL_NC}}
+            line=${line//regex-allow/${COL_GREEN}regex-allow${COL_NC}}
+            line=${line//exact-deny/${COL_RED}exact-deny${COL_NC}}
+            line=${line//exact-allow/${COL_GREEN}exact-allow${COL_NC}}
+        fi
+
+        # Use red for "block" and green for "allow"
+        if [ "$title" = "Adlists" ]; then
+            line=${line//-BLOCK-/${COL_RED} Block ${COL_NC}}
+            line=${line//-ALLOW-/${COL_GREEN} Allow ${COL_NC}}
+        fi
+
         log_write "   ${line}"
     done

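Editor's note (not part of the diff): the coloring added to `show_db_entries` relies on bash's global substring replacement, `${line//pattern/replacement}`. The double slash substitutes every occurrence; a single slash would replace only the first. A toy example with placeholder markers standing in for the real escape codes:

```shell
#!/usr/bin/env bash
# Sketch: global replacement; <red>/<nc> are stand-ins for ${COL_RED}/${COL_NC}
COL_RED='<red>'
COL_NC='<nc>'
line="exact-deny example.com exact-deny"
line=${line//exact-deny/${COL_RED}exact-deny${COL_NC}}
echo "${line}"  # <red>exact-deny<nc> example.com <red>exact-deny<nc>
```

This is why the SQL emits sentinel tokens like `--no---` and `-BLOCK-`: fixed-width markers are easy to replace with colored text without disturbing column alignment.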
@@ -1080,15 +1071,15 @@ check_dhcp_servers() {
 }

 show_groups() {
-    show_db_entries "Groups" "SELECT id,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,name,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,description FROM \"group\"" "4 7 50 19 19 50"
+    show_db_entries "Groups" "SELECT id,CASE enabled WHEN '0' THEN '--no---' WHEN '1' THEN '--yes--' ELSE enabled END enabled,name,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,description FROM \"group\"" "4 7 50 19 19 50"
 }

 show_adlists() {
-    show_db_entries "Adlists" "SELECT id,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,GROUP_CONCAT(adlist_by_group.group_id) group_ids,address,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM adlist LEFT JOIN adlist_by_group ON adlist.id = adlist_by_group.adlist_id GROUP BY id;" "5 7 12 100 19 19 50"
+    show_db_entries "Adlists" "SELECT id,CASE enabled WHEN '0' THEN '--no---' WHEN '1' THEN '--yes--' ELSE enabled END enabled,GROUP_CONCAT(adlist_by_group.group_id) group_ids, CASE type WHEN '0' THEN '-BLOCK-' WHEN '1' THEN '-ALLOW-' ELSE type END type, address,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM adlist LEFT JOIN adlist_by_group ON adlist.id = adlist_by_group.adlist_id GROUP BY id;" "5 7 12 7 100 19 19 50"
 }

 show_domainlist() {
-    show_db_entries "Domainlist (0/1 = exact white-/blacklist, 2/3 = regex white-/blacklist)" "SELECT id,CASE type WHEN '0' THEN '0 ' WHEN '1' THEN ' 1 ' WHEN '2' THEN ' 2 ' WHEN '3' THEN ' 3' ELSE type END type,CASE enabled WHEN '0' THEN ' 0' WHEN '1' THEN ' 1' ELSE enabled END enabled,GROUP_CONCAT(domainlist_by_group.group_id) group_ids,domain,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM domainlist LEFT JOIN domainlist_by_group ON domainlist.id = domainlist_by_group.domainlist_id GROUP BY id;" "5 4 7 12 100 19 19 50"
+    show_db_entries "Domainlist" "SELECT id,CASE type WHEN '0' THEN 'exact-allow' WHEN '1' THEN 'exact-deny' WHEN '2' THEN 'regex-allow' WHEN '3' THEN 'regex-deny' ELSE type END type,CASE enabled WHEN '0' THEN '--no---' WHEN '1' THEN '--yes--' ELSE enabled END enabled,GROUP_CONCAT(domainlist_by_group.group_id) group_ids,domain,datetime(date_added,'unixepoch','localtime') date_added,datetime(date_modified,'unixepoch','localtime') date_modified,comment FROM domainlist LEFT JOIN domainlist_by_group ON domainlist.id = domainlist_by_group.domainlist_id GROUP BY id;" "5 11 7 12 90 19 19 50"
 }

 show_clients() {
@@ -86,6 +86,7 @@ if [[ "$*" == *"once"* ]]; then
     if [[ "$*" != *"quiet"* ]]; then
         echo -ne " ${INFO} Running logrotate ..."
     fi
+    mkdir -p "${STATEFILE%/*}"
     /usr/sbin/logrotate --force --state "${STATEFILE}" /etc/pihole/logrotate
 else
     # Handle rotation for each log file
@@ -115,4 +116,3 @@ else
         echo -e "${OVER} ${TICK} Deleted ${deleted} queries from long-term query database"
     fi
 fi

advanced/Scripts/piholeNetworkFlush.sh (new executable file, 84 lines)
@@ -0,0 +1,84 @@
+#!/usr/bin/env bash
+
+# Pi-hole: A black hole for Internet advertisements
+# (c) 2019 Pi-hole, LLC (https://pi-hole.net)
+# Network-wide ad blocking via your own hardware.
+#
+# Network table flush
+#
+# This file is copyright under the latest version of the EUPL.
+# Please see LICENSE file for your rights under this license.
+
+coltable="/opt/pihole/COL_TABLE"
+if [[ -f ${coltable} ]]; then
+    # shellcheck source="./advanced/Scripts/COL_TABLE"
+    source ${coltable}
+fi
+
+readonly PI_HOLE_SCRIPT_DIR="/opt/pihole"
+utilsfile="${PI_HOLE_SCRIPT_DIR}/utils.sh"
+# shellcheck source=./advanced/Scripts/utils.sh
+source "${utilsfile}"
+
+# Source api functions
+# shellcheck source="./advanced/Scripts/api.sh"
+. "${PI_HOLE_SCRIPT_DIR}/api.sh"
+
+flushNetwork(){
+    local output
+
+    echo -ne " ${INFO} Flushing network table ..."
+
+    local data status error
+    # Authenticate with FTL
+    LoginAPI
+
+    # send query again
+    data=$(PostFTLData "action/flush/network" "" "status")
+
+    # Separate the status from the data
+    status=$(printf %s "${data#"${data%???}"}")
+    data=$(printf %s "${data%???}")
+
+    # If there is an .error object in the returned data, display it
+    local error
+    error=$(jq --compact-output <<< "${data}" '.error')
+    if [[ $error != "null" && $error != "" ]]; then
+        echo -e "${OVER} ${CROSS} Failed to flush the network table:"
+        echo -e " $(jq <<< "${data}" '.error')"
+        LogoutAPI
+        exit 1
+    elif [[ "${status}" == "200" ]]; then
+        echo -e "${OVER} ${TICK} Flushed network table"
+    fi
+
+    # Delete session
+    LogoutAPI
+}
+
+flushArp(){
+    # Flush ARP cache of the host
+    if ! output=$(ip -s -s neigh flush all 2>&1); then
+        echo -e "${OVER} ${CROSS} Failed to flush ARP cache"
+        echo " Output: ${output}"
+        return 1
+    fi
+}
+
+# Process all options (if present)
+while [ "$#" -gt 0 ]; do
+    case "$1" in
+        "--arp" ) doARP=true ;;
+    esac
+    shift
+done
+
+flushNetwork
+
+if [[ "${doARP}" == true ]]; then
+    echo -ne " ${INFO} Flushing ARP cache"
+    if flushArp; then
+        echo -e "${OVER} ${TICK} Flushed ARP cache"
+    fi
+fi
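Editor's note (not part of the diff): `flushNetwork` assumes the HTTP status is appended as the last three characters of the `PostFTLData` output. `${data%???}` drops those three characters, and `${data#"${data%???}"}` strips that prefix so only the status remains. A standalone sketch with a made-up payload:

```shell
#!/usr/bin/env bash
# Sketch: split "<json><3-digit status>" into body and status
data='{"took":0.003}200'
status="${data#"${data%???}"}"  # keep only the last three characters
body="${data%???}"              # everything before them
echo "${status}"  # 200
echo "${body}"    # {"took":0.003}
```

This works purely in parameter expansion, so no subshell or external tool is needed to peel the status off the response.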
@@ -37,19 +37,16 @@ Options:
 }

 GenerateOutput() {
-    local data gravity_data lists_data num_gravity num_lists search_type_str
-    local gravity_data_csv lists_data_csv line current_domain url type color
+    local counts data num_gravity num_lists search_type_str
+    local gravity_data_csv lists_data_csv line url type color
     data="${1}"

-    # construct a new json for the list results where each object contains the domain and the related type
-    lists_data=$(printf %s "${data}" | jq '.search.domains | [.[] | {domain: .domain, type: .type}]')
-
-    # construct a new json for the gravity results where each object contains the adlist URL and the related domains
-    gravity_data=$(printf %s "${data}" | jq '.search.gravity | group_by(.address,.type) | map({ address: (.[0].address), type: (.[0].type), domains: [.[] | .domain] })')
-
-    # number of objects in each json
-    num_gravity=$(printf %s "${gravity_data}" | jq length)
-    num_lists=$(printf %s "${lists_data}" | jq length)
+    # Get count of list and gravity matches
+    # Use JQ to count number of entries in lists and gravity
+    # (output is number of list matches then number of gravity matches)
+    counts=$(printf %s "${data}" | jq --raw-output '(.search.domains | length), (.search.gravity | group_by(.address,.type) | length)')
+    num_lists=$(echo "$counts" | sed -n '1p')
+    num_gravity=$(echo "$counts" | sed -n '2p')

     if [ "${partial}" = true ]; then
         search_type_str="partially"
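Editor's note (not part of the diff): the rewritten `GenerateOutput` asks jq for both lengths in a single pass and then picks each line of the result with `sed -n`. The line-picking half can be sketched without jq (the two-line `counts` value below is fabricated):

```shell
#!/usr/bin/env bash
# Sketch: select the first and second line of a two-line result
counts=$'3\n12'                      # stand-in for the jq output
num_lists=$(echo "$counts" | sed -n '1p')
num_gravity=$(echo "$counts" | sed -n '2p')
echo "$num_lists"    # 3
echo "$num_gravity"  # 12
```

Emitting both counts from one jq invocation replaces three jq calls (two to build intermediate JSON, one per `length`) with one.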
@@ -62,7 +59,7 @@ GenerateOutput() {
     if [ "${num_lists}" -gt 0 ]; then
         # Convert the data to a csv, each line is a "domain,type" string
         # not using jq's @csv here as it quotes each value individually
-        lists_data_csv=$(printf %s "${lists_data}" | jq --raw-output '.[] | [.domain, .type] | join(",")')
+        lists_data_csv=$(printf %s "${data}" | jq --raw-output '.search.domains | map([.domain, .type] | join(",")) | join("\n")')

         # Generate output for each csv line, separating line in a domain and type substring at the ','
         echo "${lists_data_csv}" | while read -r line; do
@@ -71,11 +68,11 @@ GenerateOutput() {
     fi

     # Results from gravity
-    printf "%s\n\n" "Found ${num_gravity} adlists ${search_type_str} matching '${COL_BLUE}${domain}${COL_NC}'."
+    printf "%s\n\n" "Found ${num_gravity} lists ${search_type_str} matching '${COL_BLUE}${domain}${COL_NC}'."
     if [ "${num_gravity}" -gt 0 ]; then
-        # Convert the data to a csv, each line is a "URL,domain,domain,...." string
+        # Convert the data to a csv, each line is a "URL,type,domain,domain,...." string
         # not using jq's @csv here as it quotes each value individually
-        gravity_data_csv=$(printf %s "${gravity_data}" | jq --raw-output '.[] | [.address, .type, .domains[]] | join(",")')
+        gravity_data_csv=$(printf %s "${data}" | jq --raw-output '.search.gravity | group_by(.address,.type) | map([.[0].address, .[0].type, (.[] | .domain)] | join(",")) | join("\n")')

         # Generate line-by-line output for each csv line
         echo "${gravity_data_csv}" | while read -r line; do
@@ -97,15 +94,8 @@ GenerateOutput() {
 
         # cut off type, leaving "domain,domain,...."
         line=${line#*,}
-        # print each domain and remove it from the string until nothing is left
-        while [ ${#line} -gt 0 ]; do
-            current_domain=${line%%,*}
-            printf ' - %s\n' "${COL_GREEN}${current_domain}${COL_NC}"
-            # we need to remove the current_domain and the comma in two steps because
-            # the last domain won't have a trailing comma and the while loop wouldn't exit
-            line=${line#"${current_domain}"}
-            line=${line#,}
-        done
+        # Replace commas with newlines and format output
+        echo "${line}" | sed 's/,/\n/g' | sed "s/^/ - ${COL_GREEN}/" | sed "s/$/${COL_NC}/"
         printf "\n\n"
     done
 fi
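Editor's note (not part of the diff): the removed while-loop and the sed pipeline that replaces it do the same job, turning a `domain,domain,...` string into one bullet per line. Note that `\n` in the replacement of `s/,/\n/g` is a GNU sed behavior; the domains below are invented and colors are disabled for the sketch:

```shell
#!/usr/bin/env bash
# Sketch: comma-separated domains -> one " - domain" bullet per line
COL_GREEN='' COL_NC=''
line="ads.example.com,track.example.net"
echo "${line}" | sed 's/,/\n/g' | sed "s/^/ - ${COL_GREEN}/" | sed "s/$/${COL_NC}/"
# prints:
#  - ads.example.com
#  - track.example.net
```

The pipeline avoids the two-step `${line#...}` trimming the old loop needed to cope with the missing trailing comma on the last domain.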
@@ -149,31 +149,37 @@ main() {
         echo -e " ${INFO} Web Interface:\\t${COL_GREEN}up to date${COL_NC}"
     fi

-    local funcOutput
-    funcOutput=$(get_binary_name) #Store output of get_binary_name here
-    local binary
-    binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
+    # Allow the user to skip this check if they are using a self-compiled FTL binary from an unsupported architecture
+    if [ "${skipFTL}" != true ]; then
+        local funcOutput
+        funcOutput=$(get_binary_name) #Store output of get_binary_name here
+        local binary
+        binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)

         if FTLcheckUpdate "${binary}" &>/dev/null; then
             FTL_update=true
             echo -e " ${INFO} FTL:\\t\\t${COL_YELLOW}update available${COL_NC}"
+        else
+            case $? in
+                1)
+                    echo -e " ${INFO} FTL:\\t\\t${COL_GREEN}up to date${COL_NC}"
+                    ;;
+                2)
+                    echo -e " ${INFO} FTL:\\t\\t${COL_RED}Branch is not available.${COL_NC}\\n\\t\\t\\tUse ${COL_GREEN}pihole checkout ftl [branchname]${COL_NC} to switch to a valid branch."
+                    exit 1
+                    ;;
+                3)
+                    echo -e " ${INFO} FTL:\\t\\t${COL_RED}Something has gone wrong, cannot reach download server${COL_NC}"
+                    exit 1
+                    ;;
+                *)
+                    echo -e " ${INFO} FTL:\\t\\t${COL_RED}Something has gone wrong, contact support${COL_NC}"
+                    exit 1
+            esac
+            FTL_update=false
+        fi
     else
-        case $? in
-            1)
-                echo -e " ${INFO} FTL:\\t\\t${COL_GREEN}up to date${COL_NC}"
-                ;;
-            2)
-                echo -e " ${INFO} FTL:\\t\\t${COL_RED}Branch is not available.${COL_NC}\\n\\t\\t\\tUse ${COL_GREEN}pihole checkout ftl [branchname]${COL_NC} to switch to a valid branch."
-                exit 1
-                ;;
-            3)
-                echo -e " ${INFO} FTL:\\t\\t${COL_RED}Something has gone wrong, cannot reach download server${COL_NC}"
-                exit 1
-                ;;
-            *)
-                echo -e " ${INFO} FTL:\\t\\t${COL_RED}Something has gone wrong, contact support${COL_NC}"
-                exit 1
-        esac
+        echo -e " ${INFO} FTL:\\t\\t${COL_YELLOW}--skipFTL set - update check skipped${COL_NC}"
         FTL_update=false
     fi
@@ -222,7 +228,14 @@ main() {
     fi

     if [[ "${FTL_update}" == true || "${core_update}" == true ]]; then
-        ${PI_HOLE_FILES_DIR}/automated\ install/basic-install.sh --repair --unattended || \
+        local addionalFlag
+
+        if [[ ${skipFTL} == true ]]; then
+            addionalFlag="--skipFTL"
+        else
+            addionalFlag=""
+        fi
+        ${PI_HOLE_FILES_DIR}/automated\ install/basic-install.sh --repair --unattended ${addionalFlag} || \
             echo -e "${basicError}" && exit 1
     fi

@@ -242,8 +255,15 @@ main() {
     exit 0
 }

-if [[ "$1" == "--check-only" ]]; then
-    CHECK_ONLY=true
-fi
+CHECK_ONLY=false
+skipFTL=false
+
+# Check arguments
+for var in "$@"; do
+    case "$var" in
+        "--check-only") CHECK_ONLY=true ;;
+        "--skipFTL") skipFTL=true ;;
+    esac
+done

 main
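Editor's note (not part of the diff): replacing the single `"$1"` test with a `for var in "$@"` loop makes the flags position-independent and combinable. The parsing behavior can be checked in isolation:

```shell
#!/usr/bin/env bash
# Sketch: order-independent flag parsing, as in the new argument loop
CHECK_ONLY=false
skipFTL=false
set -- --skipFTL --check-only   # simulate the script's argument list
for var in "$@"; do
    case "$var" in
        "--check-only") CHECK_ONLY=true ;;
        "--skipFTL") skipFTL=true ;;
    esac
done
echo "${CHECK_ONLY} ${skipFTL}"  # true true
```

Unrecognized arguments simply fall through the `case`, so adding a new flag is a one-line change.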
@@ -73,7 +73,9 @@ getFTLPID() {
 # Example getFTLConfigValue dns.piholePTR
 #######################
 getFTLConfigValue(){
-    pihole-FTL --config -q "${1}"
+    # Pipe to cat to avoid pihole-FTL assuming this is an interactive command
+    # returning colored output.
+    pihole-FTL --config -q "${1}" | cat
 }

 #######################
@@ -86,9 +88,17 @@ getFTLConfigValue(){
 # setFTLConfigValue dns.upstreams '[ "8.8.8.8" , "8.8.4.4" ]'
 #######################
 setFTLConfigValue(){
-    pihole-FTL --config "${1}" "${2}" >/dev/null
-    if [ $? -eq 5 ]; then
-        printf " %s %s set by environment variable. Please unset it to use this function\n" "${CROSS}" "${1}"
-        exit 5
-    fi
+    local err
+    { pihole-FTL --config "${1}" "${2}" >/dev/null; err="$?"; } || true
+
+    case $err in
+        0) ;;
+        5)
+            # FTL returns 5 if the value was set by an environment variable and is therefore read-only
+            printf " %s %s set by environment variable. Please unset it to use this function\n" "${CROSS}" "${1}";
+            exit 5;;
+        *)
+            printf " %s Failed to set %s. Try with sudo power\n" "${CROSS}" "${1}"
+            exit 1
+    esac
 }
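Editor's note (not part of the diff): the `{ ...; err="$?"; } || true` wrapper in the new `setFTLConfigValue` records the command's exit code without letting a non-zero status abort a script running under `set -e`. Because the group is the left operand of `||`, errexit is suppressed inside it. A self-contained sketch with a stub standing in for `pihole-FTL`:

```shell
#!/usr/bin/env bash
set -e
# Sketch: capture an exit code under `set -e`; failing_cmd is a stand-in for pihole-FTL
failing_cmd() { return 5; }

{ failing_cmd; err="$?"; } || true   # the || suppresses errexit for the group

case $err in
    0) echo "ok" ;;
    5) echo "read-only (set by environment variable)" ;;
    *) echo "failed with ${err}" ;;
esac
# prints: read-only (set by environment variable)
```

The old version checked `$?` after the command, which would never run under `set -e` when the command failed.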
@@ -66,7 +66,7 @@ CREATE TABLE info
     value TEXT NOT NULL
 );

-INSERT INTO "info" VALUES('version','19');
+INSERT INTO "info" VALUES('version','20');
 /* This is a flag to indicate if gravity was restored from a backup
    false = not restored,
    failed = restoration failed due to no backup
@@ -111,7 +111,7 @@ CREATE TRIGGER tr_domainlist_update AFTER UPDATE ON domainlist
     UPDATE domainlist SET date_modified = (cast(strftime('%s', 'now') as int)) WHERE domain = NEW.domain;
 END;

-CREATE VIEW vw_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+CREATE VIEW vw_allowlist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
     FROM domainlist
     LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
     LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
@@ -119,7 +119,7 @@ CREATE VIEW vw_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_gr
     AND domainlist.type = 0
     ORDER BY domainlist.id;

-CREATE VIEW vw_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+CREATE VIEW vw_denylist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
     FROM domainlist
     LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
     LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
@@ -127,7 +127,7 @@ CREATE VIEW vw_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_gr
     AND domainlist.type = 1
     ORDER BY domainlist.id;

-CREATE VIEW vw_regex_whitelist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+CREATE VIEW vw_regex_allowlist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
     FROM domainlist
     LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
     LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
@@ -135,7 +135,7 @@ CREATE VIEW vw_regex_whitelist AS SELECT domain, domainlist.id AS id, domainlist
     AND domainlist.type = 2
     ORDER BY domainlist.id;

-CREATE VIEW vw_regex_blacklist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
+CREATE VIEW vw_regex_denylist AS SELECT domain, domainlist.id AS id, domainlist_by_group.group_id AS group_id
     FROM domainlist
     LEFT JOIN domainlist_by_group ON domainlist_by_group.domainlist_id = domainlist.id
     LEFT JOIN "group" ON "group".id = domainlist_by_group.group_id
@@ -8,12 +8,20 @@ utilsfile="${PI_HOLE_SCRIPT_DIR}/utils.sh"
 # Get file paths
 FTL_PID_FILE="$(getFTLConfigValue files.pid)"
+FTL_LOG_FILE="$(getFTLConfigValue files.log.ftl)"
+PIHOLE_LOG_FILE="$(getFTLConfigValue files.log.dnsmasq)"
+WEBSERVER_LOG_FILE="$(getFTLConfigValue files.log.webserver)"
+FTL_PID_FILE="${FTL_PID_FILE:-/run/pihole-FTL.pid}"
+FTL_LOG_FILE="${FTL_LOG_FILE:-/var/log/pihole/FTL.log}"
+PIHOLE_LOG_FILE="${PIHOLE_LOG_FILE:-/var/log/pihole/pihole.log}"
+WEBSERVER_LOG_FILE="${WEBSERVER_LOG_FILE:-/var/log/pihole/webserver.log}"

 # Ensure that permissions are set so that pihole-FTL can edit all necessary files
 mkdir -p /var/log/pihole
 chown -R pihole:pihole /etc/pihole/ /var/log/pihole/

 # allow all users read version file (and use pihole -v)
+touch /etc/pihole/versions
 chmod 0644 /etc/pihole/versions

 # allow pihole to access subdirs in /etc/pihole (sets execution bit on dirs)
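Editor's note (not part of the diff): the prestart script now falls back to the historical defaults when the config lookup returns nothing, via `${VAR:-default}`, which substitutes the default only when the variable is unset or empty:

```shell
#!/usr/bin/env bash
# Sketch: ":-" keeps an existing non-empty value, otherwise uses the default
FTL_PID_FILE=""                                      # e.g. config lookup came back empty
FTL_PID_FILE="${FTL_PID_FILE:-/run/pihole-FTL.pid}"
echo "${FTL_PID_FILE}"  # /run/pihole-FTL.pid

FTL_PID_FILE="/custom/pihole.pid"
FTL_PID_FILE="${FTL_PID_FILE:-/run/pihole-FTL.pid}"
echo "${FTL_PID_FILE}"  # /custom/pihole.pid
```

This keeps the later `install` calls working even if `getFTLConfigValue` fails or the keys are absent.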
@@ -28,7 +36,7 @@ chown root:root /etc/pihole/logrotate
 # Touch files to ensure they exist (create if non-existing, preserve if existing)
 [ -f "${FTL_PID_FILE}" ] || install -D -m 644 -o pihole -g pihole /dev/null "${FTL_PID_FILE}"
-[ -f /var/log/pihole/FTL.log ] || install -m 640 -o pihole -g pihole /dev/null /var/log/pihole/FTL.log
-[ -f /var/log/pihole/pihole.log ] || install -m 640 -o pihole -g pihole /dev/null /var/log/pihole/pihole.log
-[ -f /var/log/pihole/webserver.log ] || install -m 640 -o pihole -g pihole /dev/null /var/log/pihole/webserver.log
+[ -f "${FTL_LOG_FILE}" ] || install -m 640 -o pihole -g pihole /dev/null "${FTL_LOG_FILE}"
+[ -f "${PIHOLE_LOG_FILE}" ] || install -m 640 -o pihole -g pihole /dev/null "${PIHOLE_LOG_FILE}"
+[ -f "${WEBSERVER_LOG_FILE}" ] || install -m 640 -o pihole -g pihole /dev/null "${WEBSERVER_LOG_FILE}"
 [ -f /etc/pihole/dhcp.leases ] || install -m 644 -o pihole -g pihole /dev/null /etc/pihole/dhcp.leases

advanced/Templates/pihole-FTL.openrc (new file, 40 lines)
@@ -0,0 +1,40 @@
+#!/sbin/openrc-run
+# shellcheck shell=sh disable=SC2034
+
+: "${PI_HOLE_SCRIPT_DIR:=/opt/pihole}"
+
+command="/usr/bin/pihole-FTL"
+command_user="pihole:pihole"
+supervisor=supervise-daemon
+command_args_foreground="-f"
+command_background=true
+pidfile="/run/${RC_SVCNAME}_openrc.pid"
+extra_started_commands="reload"
+
+respawn_max=5
+respawn_period=60
+capabilities="^CAP_NET_BIND_SERVICE,^CAP_NET_RAW,^CAP_NET_ADMIN,^CAP_SYS_NICE,^CAP_IPC_LOCK,^CAP_CHOWN,^CAP_SYS_TIME"
+
+depend() {
+    want net
+    provide dns
+}
+
+checkconfig() {
+    $command -f test
+}
+
+start_pre() {
+    sh "${PI_HOLE_SCRIPT_DIR}/pihole-FTL-prestart.sh"
+}
+
+stop_post() {
+    sh "${PI_HOLE_SCRIPT_DIR}/pihole-FTL-poststop.sh"
+}
+
+reload() {
+    checkconfig || return $?
+    ebegin "Reloading ${RC_SVCNAME}"
+    start-stop-daemon --signal HUP --pidfile "${pidfile}"
+    eend $?
+}
@@ -17,15 +17,15 @@ StartLimitIntervalSec=60s
 [Service]
 User=pihole
-PermissionsStartOnly=true
 AmbientCapabilities=CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_NET_ADMIN CAP_SYS_NICE CAP_IPC_LOCK CAP_CHOWN CAP_SYS_TIME

-ExecStartPre=/opt/pihole/pihole-FTL-prestart.sh
+# Run prestart with elevated permissions
+ExecStartPre=+/opt/pihole/pihole-FTL-prestart.sh
 ExecStart=/usr/bin/pihole-FTL -f
 Restart=on-failure
 RestartSec=5s
 ExecReload=/bin/kill -HUP $MAINPID
-ExecStopPost=/opt/pihole/pihole-FTL-poststop.sh
+ExecStopPost=+/opt/pihole/pihole-FTL-poststop.sh

 # Use graceful shutdown with a reasonable timeout
 TimeoutStopSec=60s
@@ -1,51 +0,0 @@
|
|||||||
_pihole() {
|
|
||||||
local cur prev opts opts_checkout opts_debug opts_logging opts_query opts_update opts_version
|
|
||||||
COMPREPLY=()
|
|
||||||
cur="${COMP_WORDS[COMP_CWORD]}"
|
|
||||||
prev="${COMP_WORDS[COMP_CWORD-1]}"
|
|
||||||
prev2="${COMP_WORDS[COMP_CWORD-2]}"
|
|
||||||
|
|
||||||
case "${prev}" in
|
|
||||||
"pihole")
|
|
||||||
opts="allow allow-regex allow-wild deny checkout debug disable enable flush help logging query repair regex reloaddns reloadlists status tail uninstall updateGravity updatePihole version wildcard arpflush api"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"allow"|"deny"|"wildcard"|"regex"|"allow-regex"|"allow-wild")
|
|
||||||
opts_lists="\not \--delmode \--quiet \--list \--help"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_lists}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"checkout")
|
|
||||||
opts_checkout="core ftl web master dev"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_checkout}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"debug")
|
|
||||||
opts_debug="-a"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_debug}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"logging")
|
|
||||||
opts_logging="on off 'off noflush'"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_logging}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"query")
|
|
||||||
opts_query="--partial --all"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_query}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"updatePihole"|"-up")
|
|
||||||
opts_update="--check-only"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_update}" -- ${cur}) )
|
|
||||||
;;
|
|
||||||
"core"|"admin"|"ftl")
|
|
||||||
if [[ "$prev2" == "checkout" ]]; then
|
|
||||||
opts_checkout="master dev"
|
|
||||||
COMPREPLY=( $(compgen -W "${opts_checkout}" -- ${cur}) )
|
|
||||||
else
|
|
||||||
return 1
|
|
||||||
fi
|
|
||||||
;;
|
|
||||||
*)
|
|
||||||
return 1
|
|
||||||
;;
|
|
||||||
esac
|
|
||||||
return 0
|
|
||||||
}
|
|
||||||
complete -F _pihole pihole
|
|
||||||
advanced/bash-completion/pihole-ftl.bash — 9 lines (Normal file)
@@ -0,0 +1,9 @@
+#!/bin/bash
+#
+# Bash completion script for pihole-FTL
+#
+# This completion script provides tab completion for pihole-FTL CLI flags and commands.
+# It uses the `pihole-FTL --complete` command to generate the completion options.
+_complete_FTL() { mapfile -t COMPREPLY < <(pihole-FTL --complete "${COMP_WORDS[@]}"); }
+
+complete -F _complete_FTL pihole-FTL
advanced/bash-completion/pihole.bash — 59 lines (Normal file)
@@ -0,0 +1,59 @@
+#!/bin/bash
+#
+# Bash completion script for pihole
+#
+_pihole() {
+    local cur prev prev2 opts opts_lists opts_checkout opts_debug opts_logging opts_query opts_update opts_networkflush
+    COMPREPLY=()
+    cur="${COMP_WORDS[COMP_CWORD]}"
+    prev="${COMP_WORDS[COMP_CWORD-1]}"
+    prev2="${COMP_WORDS[COMP_CWORD-2]}"
+
+    case "${prev}" in
+        "pihole")
+            opts="allow allow-regex allow-wild deny checkout debug disable enable flush help logging query repair regex reloaddns reloadlists setpassword status tail uninstall updateGravity updatePihole version wildcard networkflush api"
+            mapfile -t COMPREPLY < <(compgen -W "${opts}" -- "${cur}")
+            ;;
+        "allow"|"deny"|"wildcard"|"regex"|"allow-regex"|"allow-wild")
+            opts_lists="\not \--delmode \--quiet \--list \--help"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_lists}" -- "${cur}")
+            ;;
+        "checkout")
+            opts_checkout="core ftl web master dev"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_checkout}" -- "${cur}")
+            ;;
+        "debug")
+            opts_debug="-a"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_debug}" -- "${cur}")
+            ;;
+        "logging")
+            opts_logging="on off 'off noflush'"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_logging}" -- "${cur}")
+            ;;
+        "query")
+            opts_query="--partial --all"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_query}" -- "${cur}")
+            ;;
+        "updatePihole"|"-up")
+            opts_update="--check-only"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_update}" -- "${cur}")
+            ;;
+        "networkflush")
+            opts_networkflush="--arp"
+            mapfile -t COMPREPLY < <(compgen -W "${opts_networkflush}" -- "${cur}")
+            ;;
+        "core"|"web"|"ftl")
+            if [[ "$prev2" == "checkout" ]]; then
+                opts_checkout="master development"
+                mapfile -t COMPREPLY < <(compgen -W "${opts_checkout}" -- "${cur}")
+            else
+                return 1
+            fi
+            ;;
+        *)
+            return 1
+            ;;
+    esac
+    return 0
+}
+complete -F _pihole pihole
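Both new completion scripts rely on the same bash pattern: read the current word from `COMP_WORDS`, generate candidates with `compgen -W`, and fill `COMPREPLY` via `mapfile -t` (which avoids the word-splitting pitfalls of the older `COMPREPLY=( $(compgen ...) )` form that the deleted script used). A minimal sketch with a hypothetical word list:

```shell
#!/bin/bash
# Hypothetical completer, same shape as the pihole.bash cases above
_demo_complete() {
    local cur opts
    cur="${COMP_WORDS[COMP_CWORD]}"
    opts="status start stop"
    # mapfile -t fills COMPREPLY with one candidate per line of compgen output
    mapfile -t COMPREPLY < <(compgen -W "${opts}" -- "${cur}")
}

# Simulate the shell completing "demo st<TAB>"
COMP_WORDS=(demo st)
COMP_CWORD=1
_demo_complete
printf '%s\n' "${COMPREPLY[@]}"
```

All three candidates match the prefix `st` here; in an interactive shell, `complete -F _demo_complete demo` would wire the function up.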
@@ -94,8 +94,8 @@ fresh_install=true
 
 adlistFile="/etc/pihole/adlists.list"
 # Pi-hole needs an IP address; to begin, these variables are empty since we don't know what the IP is until this script can run
-IPV4_ADDRESS=${IPV4_ADDRESS}
+IPV4_ADDRESS=
-IPV6_ADDRESS=${IPV6_ADDRESS}
+IPV6_ADDRESS=
 # Give settings their default values. These may be changed by prompts later in the script.
 QUERY_LOGGING=
 PRIVACY_LEVEL=

@@ -116,11 +116,11 @@ c=70
 PIHOLE_META_PACKAGE_CONTROL_APT=$(
 cat <<EOM
 Package: pihole-meta
-Version: 0.4
+Version: 0.6
 Maintainer: Pi-hole team <adblock@pi-hole.net>
 Architecture: all
 Description: Pi-hole dependency meta package
-Depends: awk,bash-completion,binutils,ca-certificates,cron|cron-daemon,curl,dialog,dnsutils,dns-root-data,git,grep,iproute2,iputils-ping,jq,libcap2,libcap2-bin,lshw,netcat-openbsd,procps,psmisc,sudo,unzip
+Depends: awk,bash-completion,binutils,ca-certificates,cron|cron-daemon,curl,dialog,bind9-dnsutils|dnsutils,dns-root-data,git,grep,iproute2,iputils-ping,jq,libcap2,libcap2-bin,lshw,procps,psmisc,sudo,unzip
 Section: contrib/metapackages
 Priority: optional
 EOM
@@ -130,12 +130,12 @@ EOM
 PIHOLE_META_PACKAGE_CONTROL_RPM=$(
 cat <<EOM
 Name: pihole-meta
-Version: 0.2
+Version: 0.3
 Release: 1
 License: EUPL
 BuildArch: noarch
 Summary: Pi-hole dependency meta package
-Requires: bash-completion,bind-utils,binutils,ca-certificates,chkconfig,cronie,curl,dialog,findutils,gawk,git,grep,iproute,jq,libcap,lshw,nmap-ncat,procps-ng,psmisc,sudo,unzip
+Requires: bash-completion,bind-utils,binutils,ca-certificates,chkconfig,cronie,curl,dialog,findutils,gawk,git,grep,iproute,jq,libcap,lshw,procps-ng,psmisc,sudo,unzip
 %description
 Pi-hole dependency meta package
 %prep

@@ -143,6 +143,9 @@ Pi-hole dependency meta package
 %files
 %install
 %changelog
+* Mon Jul 14 2025 Pi-hole Team - 0.3
+- Remove nmap-ncat from the list of dependencies
+
 * Wed May 28 2025 Pi-hole Team - 0.2
 - Add gawk to the list of dependencies
 
@@ -151,19 +154,61 @@ Pi-hole dependency meta package
 EOM
 )
 
+# List of required packages on APK based systems
+PIHOLE_META_VERSION_APK=0.2
+PIHOLE_META_DEPS_APK=(
+    bash
+    bash-completion
+    bind-tools
+    binutils
+    coreutils
+    cronie
+    curl
+    dialog
+    git
+    grep
+    iproute2-minimal # piholeARPTable.sh
+    iproute2-ss # piholeDebug.sh
+    jq
+    libcap
+    logrotate
+    lscpu # piholeDebug.sh
+    lshw # piholeDebug.sh
+    ncurses
+    procps-ng
+    psmisc
+    shadow
+    sudo
+    tzdata
+    unzip
+)
+
 ######## Undocumented Flags. Shhh ########
 # These are undocumented flags; some of which we can use when repairing an installation
 # The runUnattended flag is one example of this
 repair=false
 runUnattended=false
+skipFTL=false
 # Check arguments for the undocumented flags
 for var in "$@"; do
-    case "$var" in
+    case "${var}" in
         "--repair") repair=true ;;
         "--unattended") runUnattended=true ;;
+        "--skipFTL") skipFTL=true ;;
     esac
 done
 
+if [[ "${runUnattended}" == true ]]; then
+    # In order to run an unattended setup, a pre-seeded /etc/pihole/pihole.toml must exist
+    if [[ ! -f "${PI_HOLE_CONFIG_DIR}/pihole.toml" ]]; then
+        printf " %b Error: \"%s\" not found. Cannot run unattended setup\\n" "${CROSS}" "${PI_HOLE_CONFIG_DIR}/pihole.toml"
+        exit 1
+    fi
+    printf " %b Performing unattended setup, no dialogs will be displayed\\n" "${INFO}"
+    # also disable debconf-apt-progress dialogs
+    export DEBIAN_FRONTEND="noninteractive"
+fi
+
 # If the color table file exists,
 if [[ -f "${coltable}" ]]; then
     # source it
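The new `--skipFTL` flag slots into the existing pattern: a plain `for`/`case` scan over `"$@"` that flips booleans. Sketched in isolation:

```shell
#!/bin/bash
# Stand-alone sketch of the argument scan above
repair=false
runUnattended=false
skipFTL=false

# Simulate invoking the installer as: ./basic-install.sh --unattended --skipFTL
set -- --unattended --skipFTL
for var in "$@"; do
    case "${var}" in
        "--repair") repair=true ;;
        "--unattended") runUnattended=true ;;
        "--skipFTL") skipFTL=true ;;
    esac
done
echo "repair=${repair} runUnattended=${runUnattended} skipFTL=${skipFTL}"
```

Unknown arguments simply fall through, which is what keeps these flags "undocumented" without breaking normal invocations.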
@@ -268,7 +313,15 @@ package_manager_detect() {
         PKG_COUNT="${PKG_MANAGER} check-update | grep -E '(.i686|.x86|.noarch|.arm|.src|.riscv64)' | wc -l || true"
         # The command we will use to remove packages (used in the uninstaller)
         PKG_REMOVE="${PKG_MANAGER} remove -y"
-    # If neither apt-get or yum/dnf package managers were found
+    # If neither apt-get or yum/dnf package managers were found, check for apk.
+    elif is_command apk; then
+        PKG_MANAGER="apk"
+        UPDATE_PKG_CACHE="${PKG_MANAGER} update"
+        PKG_INSTALL="${PKG_MANAGER} add"
+        PKG_COUNT="${PKG_MANAGER} list --upgradable -q | wc -l"
+        PKG_REMOVE="${PKG_MANAGER} del"
+
     else
         # we cannot install required packages
         printf " %b No supported package manager found\\n" "${CROSS}"
@@ -279,13 +332,20 @@ package_manager_detect() {
 
 build_dependency_package(){
     # This function will build a package that contains all the dependencies needed for Pi-hole
+    if is_command apk ; then
+        local str="APK based system detected. Dependencies will be installed using a virtual package named pihole-meta"
+        printf " %b %s...\\n" "${INFO}" "${str}"
+        return 0
+    fi
+
     # remove any leftover build directory that may exist
     rm -rf /tmp/pihole-meta_*
 
     # Create a fresh build directory with random name
+    # Busybox Compat: `mktemp` long flags unsupported
+    # -d flag is short form of --directory
     local tempdir
-    tempdir="$(mktemp --directory /tmp/pihole-meta_XXXXX)"
+    tempdir="$(mktemp -d /tmp/pihole-meta_XXXXX)"
     chmod 0755 "${tempdir}"
 
     if is_command apt-get; then
@@ -573,7 +633,7 @@ Do you wish to continue with an IPv6-only installation?\\n\\n" \
             ;;
     esac
 
-    DNS_SERVERS="$DNS_SERVERS_IPV6_ONLY"
+    DNS_SERVERS="${DNS_SERVERS_IPV6_ONLY}"
     printf " %b Proceeding with IPv6 only installation.\\n" "${INFO}"
 }
 
@@ -646,6 +706,7 @@ chooseInterface() {
         status="OFF"
     done
     # Disable check for double quote here as we are passing a string with spaces
+    # shellcheck disable=SC2086
     PIHOLE_INTERFACE=$(dialog --no-shadow --keep-tite --output-fd 1 \
         --cancel-label "Exit" --ok-label "Select" \
         --radiolist "Choose An Interface (press space to toggle selection)" \
@@ -671,9 +732,9 @@ testIPv6() {
     # first will contain fda2 (ULA)
     printf -v first "%s" "${1%%:*}"
     # value1 will contain 253 which is the decimal value corresponding to 0xFD
-    value1=$(((0x$first) / 256))
+    value1=$(((0x${first}) / 256))
     # value2 will contain 162 which is the decimal value corresponding to 0xA2
-    value2=$(((0x$first) % 256))
+    value2=$(((0x${first}) % 256))
     # the ULA test is testing for fc00::/7 according to RFC 4193
     if (((value1 & 254) == 252)); then
         # echoing result to calling function as return value
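The ULA check above can be verified numerically: for an address starting with `fda2`, the first hextet is 0xFDA2 = 64930, whose high byte is 253 (0xFD); fc00::/7 per RFC 4193 means the high byte ANDed with 0xFE (254) must equal 0xFC (252):

```shell
#!/bin/bash
# Worked instance of the testIPv6 arithmetic above
addr="fda2:1234::1"
printf -v first "%s" "${addr%%:*}"   # first hextet: "fda2"
value1=$(((0x${first}) / 256))       # high byte: 253 (0xFD)
value2=$(((0x${first}) % 256))       # low byte:  162 (0xA2)
if (((value1 & 254) == 252)); then
    result="ULA"
else
    result="other"
fi
echo "${value1} ${value2} ${result}"
```

The `${first}` braces added in the diff are purely stylistic here; `0x$first` evaluates identically inside arithmetic expansion.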
@@ -698,7 +759,7 @@ find_IPv6_information() {
|
|||||||
# For each address in the array above, determine the type of IPv6 address it is
|
# For each address in the array above, determine the type of IPv6 address it is
|
||||||
for i in "${IPV6_ADDRESSES[@]}"; do
|
for i in "${IPV6_ADDRESSES[@]}"; do
|
||||||
# Check if it's ULA, GUA, or LL by using the function created earlier
|
# Check if it's ULA, GUA, or LL by using the function created earlier
|
||||||
result=$(testIPv6 "$i")
|
result=$(testIPv6 "${i}")
|
||||||
# If it's a ULA address, use it and store it as a global variable
|
# If it's a ULA address, use it and store it as a global variable
|
||||||
[[ "${result}" == "ULA" ]] && ULA_ADDRESS="${i%/*}"
|
[[ "${result}" == "ULA" ]] && ULA_ADDRESS="${i%/*}"
|
||||||
# If it's a GUA address, use it and store it as a global variable
|
# If it's a GUA address, use it and store it as a global variable
|
||||||
@@ -733,7 +794,7 @@ collect_v4andv6_information() {
|
|||||||
printf " %b IPv4 address: %s\\n" "${INFO}" "${IPV4_ADDRESS}"
|
printf " %b IPv4 address: %s\\n" "${INFO}" "${IPV4_ADDRESS}"
|
||||||
find_IPv6_information
|
find_IPv6_information
|
||||||
printf " %b IPv6 address: %s\\n" "${INFO}" "${IPV6_ADDRESS}"
|
printf " %b IPv6 address: %s\\n" "${INFO}" "${IPV6_ADDRESS}"
|
||||||
if [ "$IPV4_ADDRESS" == "" ] && [ "$IPV6_ADDRESS" != "" ]; then
|
if [ "${IPV4_ADDRESS}" == "" ] && [ "${IPV6_ADDRESS}" != "" ]; then
|
||||||
confirm_ipv6_only
|
confirm_ipv6_only
|
||||||
fi
|
fi
|
||||||
}
|
}
|
||||||
@@ -753,7 +814,7 @@ valid_ip() {
     local regex="^${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}${portelem}$"
 
     # Evaluate the regex, and return the result
-    [[ $ip =~ ${regex} ]]
+    [[ ${ip} =~ ${regex} ]]
 
     stat=$?
     return "${stat}"
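The `=~` test above yields the match status directly; `ipv4elem` and `portelem` are defined earlier in the script (not shown in this hunk), so this sketch substitutes a plausible octet pattern of its own:

```shell
#!/bin/bash
# Hedged sketch of valid_ip(); the octet pattern below is an assumption,
# not the script's actual ipv4elem definition, and the port part is omitted.
valid_ip_demo() {
    local ip="$1"
    local ipv4elem="(25[0-5]|2[0-4][0-9]|1[0-9][0-9]|[1-9]?[0-9])"
    local regex="^${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}\\.${ipv4elem}$"
    # Keep ${regex} unquoted so bash treats it as a regex, not a literal string
    [[ ${ip} =~ ${regex} ]]
}

valid_ip_demo "192.168.1.1" && good=yes || good=no
valid_ip_demo "999.10.0.1"  && bad=yes  || bad=no
echo "${good} ${bad}"
```

Quoting the right-hand side of `=~` would disable regex interpretation, which is why the pattern stays a bare `${regex}` expansion in both the script and this sketch.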
@@ -788,7 +849,7 @@ setDNS() {
     DNSChooseOptions=()
     local DNSServerCount=0
     # Save the old Internal Field Separator in a variable,
-    OIFS=$IFS
+    OIFS=${IFS}
     # and set the new one to newline
     IFS=$'\n'
     # Put the DNS Servers into an array

@@ -856,7 +917,7 @@ If you want to specify a port other than 53, separate it with a hash.\
     esac
 
     # Clean user input and replace whitespace with comma.
-    piholeDNS=$(sed 's/[, \t]\+/,/g' <<<"${piholeDNS}")
+    piholeDNS="${piholeDNS//[[:blank:]]/,}"
 
     # Separate the user input into the two DNS values (separated by a comma)
     printf -v PIHOLE_DNS_1 "%s" "${piholeDNS%%,*}"
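The sed call is replaced by a pure-bash parameter expansion. Worth noting: `${var//[[:blank:]]/,}` replaces each blank character individually, so it does not collapse runs of separators the way the old `sed 's/[, \t]\+/,/g'` did; for a single-space separator the two agree:

```shell
#!/bin/bash
piholeDNS="8.8.8.8 8.8.4.4"
# Replace each space/tab with a comma, as the new code does
piholeDNS="${piholeDNS//[[:blank:]]/,}"
# Split on the first comma, as the script does next; the second split
# shown here is an assumption for symmetry, not a line from the diff
printf -v PIHOLE_DNS_1 "%s" "${piholeDNS%%,*}"
printf -v PIHOLE_DNS_2 "%s" "${piholeDNS##*,}"
echo "${PIHOLE_DNS_1} | ${PIHOLE_DNS_2}"
```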
@@ -912,7 +973,7 @@ If you want to specify a port other than 53, separate it with a hash.\
         done
     else
         # Save the old Internal Field Separator in a variable,
-        OIFS=$IFS
+        OIFS=${IFS}
         # and set the new one to newline
         IFS=$'\n'
         for DNSServer in ${DNS_SERVERS}; do

@@ -1134,7 +1195,8 @@ installScripts() {
         install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./automated\ install/uninstall.sh
         install -o "${USER}" -Dm755 -t "${PI_HOLE_INSTALL_DIR}" ./advanced/Scripts/COL_TABLE
         install -o "${USER}" -Dm755 -t "${PI_HOLE_BIN_DIR}" pihole
-        install -Dm644 ./advanced/bash-completion/pihole /etc/bash_completion.d/pihole
+        install -Dm644 ./advanced/bash-completion/pihole.bash /etc/bash_completion.d/pihole
+        install -Dm644 ./advanced/bash-completion/pihole-ftl.bash /etc/bash_completion.d/pihole-FTL
         printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
 
     else
@@ -1173,7 +1235,12 @@ installConfigs() {
         # Load final service
         systemctl daemon-reload
     else
-        install -T -m 0755 "${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole-FTL.service" '/etc/init.d/pihole-FTL'
+        local INIT="service"
+        if is_command openrc; then
+            INIT="openrc"
+        fi
+
+        install -T -m 0755 "${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole-FTL.${INIT}" '/etc/init.d/pihole-FTL'
     fi
     install -T -m 0755 "${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole-FTL-prestart.sh" "${PI_HOLE_INSTALL_DIR}/pihole-FTL-prestart.sh"
     install -T -m 0755 "${PI_HOLE_LOCAL_REPO}/advanced/Templates/pihole-FTL-poststop.sh" "${PI_HOLE_INSTALL_DIR}/pihole-FTL-poststop.sh"
@@ -1197,10 +1264,6 @@ install_manpage() {
         # if not present, create man8 directory
         install -d -m 755 /usr/local/share/man/man8
     fi
-    if [[ ! -d "/usr/local/share/man/man5" ]]; then
-        # if not present, create man5 directory
-        install -d -m 755 /usr/local/share/man/man5
-    fi
     # Testing complete, copy the files & update the man db
     install -D -m 644 -T ${PI_HOLE_LOCAL_REPO}/manpages/pihole.8 /usr/local/share/man/man8/pihole.8
 
@@ -1262,6 +1325,8 @@ enable_service() {
     if is_command systemctl; then
         # use that to enable the service
         systemctl -q enable "${1}"
+    elif is_command openrc; then
+        rc-update add "${1}" "${2:-default}" &> /dev/null
     else
         # Otherwise, use update-rc.d to accomplish this
         update-rc.d "${1}" defaults >/dev/null

@@ -1277,7 +1342,10 @@ disable_service() {
     # If systemctl exists,
     if is_command systemctl; then
         # use that to disable the service
-        systemctl -q disable "${1}"
+        systemctl -q disable --now "${1}"
+    elif is_command openrc; then
+        rc-update del "${1}" "${2:-default}" &> /dev/null
+
     else
         # Otherwise, use update-rc.d to accomplish this
         update-rc.d "${1}" disable >/dev/null

@@ -1290,6 +1358,8 @@ check_service_active() {
     if is_command systemctl; then
         # use that to check the status of the service
         systemctl -q is-enabled "${1}" 2>/dev/null
+    elif is_command openrc; then
+        rc-status default boot | grep -q "${1}"
     else
         # Otherwise, fall back to service command
         service "${1}" status &>/dev/null
@@ -1391,8 +1461,27 @@ install_dependent_packages() {
         printf " %b Error: Unable to find Pi-hole dependency package.\\n" "${COL_RED}"
         return 1
     fi
+    # Install Alpine packages
+    elif is_command apk; then
+        local repo_str="Ensuring alpine 'community' repo is enabled."
+        printf "%b %b %s" "${OVER}" "${INFO}" "${repo_str}"
+
-    # If neither apt-get or yum/dnf package managers were found
+        local pattern='^\s*#(.*/community/?)\s*$'
+        sed -Ei "s:${pattern}:\1:" /etc/apk/repositories
+        if grep -Eq "${pattern}" /etc/apk/repositories; then
+            # Repo still commented out = Failure
+            printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${repo_str}"
+        else
+            printf "%b %b %s\\n" "${OVER}" "${TICK}" "${repo_str}"
+        fi
+        printf " %b %s..." "${INFO}" "${str}"
+        if { ${PKG_INSTALL} -q -t "pihole-meta=${PIHOLE_META_VERSION_APK}" "${PIHOLE_META_DEPS_APK[@]}" &>/dev/null; }; then
+            printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
+        else
+            printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}"
+            printf " %b Error: Unable to install Pi-hole dependency package.\\n" "${COL_RED}"
+            return 1
+        fi
     else
         # we cannot install the dependency package
         printf " %b No supported package manager found\\n" "${CROSS}"
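The sed/grep pair above can be exercised against a scratch file: the substitution uncomments any `.../community` line, and the follow-up grep only matches if a commented line survived (GNU sed is assumed for `-E`, `-i`, and `\s`, which matches the Alpine target):

```shell
#!/bin/bash
# Exercise the repo-enabling pattern on a throwaway file instead of
# /etc/apk/repositories
repofile=$(mktemp)
printf '%s\n' \
    'http://dl-cdn.alpinelinux.org/alpine/v3.21/main' \
    '#http://dl-cdn.alpinelinux.org/alpine/v3.21/community' > "${repofile}"

pattern='^\s*#(.*/community/?)\s*$'
# ":" as the s/// delimiter avoids clashing with "/" in the captured path
sed -Ei "s:${pattern}:\1:" "${repofile}"

if grep -Eq "${pattern}" "${repofile}"; then
    state="still_commented"   # the installer treats this as failure
else
    state="enabled"
fi
echo "${state}"
```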
@@ -1417,6 +1506,15 @@ installCron() {
     # Randomize update checker time
     sed -i "s/59 17/$((1 + RANDOM % 58)) $((12 + RANDOM % 8))/" /etc/cron.d/pihole
     printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
+
+    # Switch off of busybox cron on alpine
+    if is_command openrc; then
+        printf " %b Switching from busybox crond to cronie...\\n" "${INFO}"
+        stop_service crond
+        disable_service crond
+        enable_service cronie
+        restart_service cronie
+    fi
 }
 
 # Gravity is a very important script as it aggregates all of the domains into a single HOSTS formatted list,
@@ -1466,7 +1564,7 @@ create_pihole_user() {
         # then create and add her to the pihole group
         local str="Creating user 'pihole'"
         printf "%b %b %s..." "${OVER}" "${INFO}" "${str}"
-        if useradd -r --no-user-group -g pihole -s /usr/sbin/nologin pihole; then
+        if useradd -r --no-user-group -g pihole -s "$(command -v nologin)" pihole; then
             printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
         else
             printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}"

@@ -1481,7 +1579,7 @@ create_pihole_user() {
         # create and add pihole user to the pihole group
         local str="Creating user 'pihole'"
         printf "%b %b %s..." "${OVER}" "${INFO}" "${str}"
-        if useradd -r --no-user-group -g pihole -s /usr/sbin/nologin pihole; then
+        if useradd -r --no-user-group -g pihole -s "$(command -v nologin)" pihole; then
             printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
         else
             printf "%b %b %s\\n" "${OVER}" "${CROSS}" "${str}"
@@ -1633,9 +1731,9 @@ check_download_exists() {
     status=$(curl --head --silent "https://ftl.pi-hole.net/${1}" | head -n 1)
 
     # Check the status code
-    if grep -q "200" <<<"$status"; then
+    if grep -q "200" <<<"${status}"; then
         return 0
-    elif grep -q "404" <<<"$status"; then
+    elif grep -q "404" <<<"${status}"; then
         return 1
     fi
 
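The status test above greps the first response line for a code. A network-free sketch of the same classification, with the HTTP lines supplied as fixed strings:

```shell
#!/bin/bash
# Classify an HTTP status line the way check_download_exists() does,
# without the curl call
classify() {
    local status="$1"
    if grep -q "200" <<<"${status}"; then
        echo "exists"
    elif grep -q "404" <<<"${status}"; then
        echo "missing"
    else
        echo "unknown"
    fi
}

r200=$(classify "HTTP/2 200")
r404=$(classify "HTTP/2 404")
echo "${r200} ${r404}"
```

Matching on a bare `"200"` substring is deliberately loose in the original; it tolerates both `HTTP/1.1 200 OK` and `HTTP/2 200` first lines.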
@@ -1668,7 +1766,7 @@ get_available_branches() {
     # Get reachable remote branches, but store STDERR as STDOUT variable
     output=$({ git ls-remote --heads --quiet | cut -d'/' -f3- -; } 2>&1)
     # echo status for calling function to capture
-    echo "$output"
+    echo "${output}"
     return
 }
 
@@ -1701,9 +1799,9 @@ checkout_pull_branch() {
     oldbranch="$(git symbolic-ref HEAD)"
 
     str="Switching to branch: '${branch}' from '${oldbranch}'"
-    printf " %b %s" "${INFO}" "$str"
+    printf " %b %s" "${INFO}" "${str}"
     git checkout "${branch}" --quiet || return 1
-    printf "%b %b %s\\n" "${OVER}" "${TICK}" "$str"
+    printf "%b %b %s\\n" "${OVER}" "${TICK}" "${str}"
     # Data in the repositories is public anyway so we can make it readable by everyone (+r to keep executable permission if already set by git)
     chmod -R a+rX "${directory}"
 
@@ -1718,6 +1816,12 @@ clone_or_reset_repos() {
     # If the user wants to repair/update,
     if [[ "${repair}" == true ]]; then
         printf " %b Resetting local repos\\n" "${INFO}"
+
+        # import getFTLConfigValue from utils.sh
+        source "/opt/pihole/utils.sh"
+        # Use the configured Web repo location on repair/update
+        webInterfaceDir=$(getFTLConfigValue "webserver.paths.webroot")$(getFTLConfigValue "webserver.paths.webhome")
+
         # Reset the Core repo
         resetRepo ${PI_HOLE_LOCAL_REPO} ||
         {
@@ -1791,8 +1895,12 @@ FTLinstall() {
     # Before stopping FTL, we download the macvendor database
     curl -sSL "https://ftl.pi-hole.net/macvendor.db" -o "${PI_HOLE_CONFIG_DIR}/macvendor.db" || true
 
-    # Stop pihole-FTL service if available
-    stop_service pihole-FTL >/dev/null
+    # If the binary already exists in /usr/bin, then we need to stop the service
+    # If the binary does not exist (fresh installs), then we can skip this step.
+    if [[ -f /usr/bin/pihole-FTL ]]; then
+        stop_service pihole-FTL >/dev/null
+    fi
 
     # Install the new version with the correct permissions
     install -T -m 0755 "${binary}" /usr/bin/pihole-FTL
@@ -1906,7 +2014,7 @@ get_binary_name() {
         l_binary="pihole-FTL-riscv64"
     else
         # Something else - we try to use 32bit executable and warn the user
-        if [[ ! "${machine}" == "i686" ]]; then
+        if [[ "${machine}" != "i686" ]]; then
             printf "%b %b %s...\\n" "${OVER}" "${CROSS}" "${str}"
             printf " %b %bNot able to detect architecture (unknown: %s), trying x86 (32bit) executable%b\\n" "${INFO}" "${COL_RED}" "${machine}" "${COL_NC}"
             printf " %b Contact Pi-hole Support if you experience issues (e.g: FTL not running)\\n" "${INFO}"
@@ -1940,14 +2048,14 @@ FTLcheckUpdate() {
     local remoteSha1
     local localSha1
 
-    if [[ ! "${ftlBranch}" == "master" ]]; then
+    if [[ "${ftlBranch}" != "master" ]]; then
         # This is not the master branch
         local path
         path="${ftlBranch}/${binary}"
 
         # Check whether or not the binary for this FTL branch actually exists. If not, then there is no update!
         local status
-        if ! check_download_exists "$path"; then
+        if ! check_download_exists "${path}"; then
             status=$?
             if [ "${status}" -eq 1 ]; then
                 printf " %b Branch \"%s\" is not available.\\n" "${INFO}" "${ftlBranch}"
@@ -2050,11 +2158,11 @@ make_temporary_log() {
     TEMPLOG=$(mktemp /tmp/pihole_temp.XXXXXX)
     # Open handle 3 for templog
     # https://stackoverflow.com/questions/18460186/writing-outputs-to-log-file-and-console
-    exec 3>"$TEMPLOG"
+    exec 3>"${TEMPLOG}"
     # Delete templog, but allow for addressing via file handle
     # This lets us write to the log without having a temporary file on the drive, which
     # is meant to be a security measure so there is not a lingering file on the drive during the install process
-    rm "$TEMPLOG"
+    rm "${TEMPLOG}"
 }

 copy_to_install_log() {
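The `make_temporary_log` hunk above relies on a classic descriptor trick: the temp file is unlinked immediately after `exec 3>`, but writes through fd 3 still succeed because the open descriptor keeps the inode alive. A minimal sketch of the same pattern (Linux-specific, since it reads the deleted file back through `/proc`; names here are ours, not the installer's):

```shell
# Create a temp file, open fd 3 on it, then unlink the name immediately.
TEMPLOG=$(mktemp /tmp/demo_temp.XXXXXX)
exec 3>"${TEMPLOG}"
rm "${TEMPLOG}"              # no file on disk anymore, but fd 3 stays valid

echo "hello from fd 3" >&3   # writes still land in the (deleted) inode

# On Linux, /proc lets us reopen the deleted file via the descriptor:
out=$(cat "/proc/$$/fd/3")
echo "${out}"
```

This is why the installer can log through fd 3 during the whole run without leaving a file on disk.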
@@ -2227,21 +2335,18 @@ main() {

     # Check if there is a usable FTL binary available on this architecture - do
     # this early on as FTL is a hard dependency for Pi-hole
-    local funcOutput
-    funcOutput=$(get_binary_name) #Store output of get_binary_name here
-    # Abort early if this processor is not supported (get_binary_name returns empty string)
-    if [[ "${funcOutput}" == "" ]]; then
-        printf " %b Upgrade/install aborted\\n" "${CROSS}" "${DISTRO_NAME}"
-        exit 1
-    fi
-
-    if [[ "${fresh_install}" == false ]]; then
-        # if it's running unattended,
-        if [[ "${runUnattended}" == true ]]; then
-            printf " %b Performing unattended setup, no dialogs will be displayed\\n" "${INFO}"
-            # also disable debconf-apt-progress dialogs
-            export DEBIAN_FRONTEND="noninteractive"
+    # Allow the user to skip this check if they are using a self-compiled FTL binary from an unsupported architecture
+    if [ "${skipFTL}" != true ]; then
+        # Get the binary name for the current architecture
+        local funcOutput
+        funcOutput=$(get_binary_name) #Store output of get_binary_name here
+        # Abort early if this processor is not supported (get_binary_name returns empty string)
+        if [[ "${funcOutput}" == "" ]]; then
+            printf " %b Upgrade/install aborted\\n" "${CROSS}" "${DISTRO_NAME}"
+            exit 1
         fi
+    else
+        printf " %b %b--skipFTL set - skipping architecture check%b\\n" "${INFO}" "${COL_YELLOW}" "${COL_NC}"
     fi

     if [[ "${fresh_install}" == true ]]; then
@@ -2274,13 +2379,18 @@ main() {
     create_pihole_user

     # Download and install FTL
-    local binary
-    binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
-    local theRest
-    theRest="${funcOutput%pihole-FTL*}" # Print the rest of get_binary_name's output to display (cut out from first instance of "pihole-FTL")
-    if ! FTLdetect "${binary}" "${theRest}"; then
-        printf " %b FTL Engine not installed\\n" "${CROSS}"
-        exit 1
+    # Allow the user to skip this check if they are using a self-compiled FTL binary from an unsupported architecture
+    if [ "${skipFTL}" != true ]; then
+        local binary
+        binary="pihole-FTL${funcOutput##*pihole-FTL}" #binary name will be the last line of the output of get_binary_name (it always begins with pihole-FTL)
+        local theRest
+        theRest="${funcOutput%pihole-FTL*}" # Print the rest of get_binary_name's output to display (cut out from first instance of "pihole-FTL")
+        if ! FTLdetect "${binary}" "${theRest}"; then
+            printf " %b FTL Engine not installed\\n" "${CROSS}"
+            exit 1
+        fi
+    else
+        printf " %b %b--skipFTL set - skipping FTL binary installation%b\\n" "${INFO}" "${COL_YELLOW}" "${COL_NC}"
     fi

     # Install and log everything to a file
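The `binary`/`theRest` split above uses bash parameter expansion: `##*pihole-FTL` strips the longest prefix through the last occurrence of `pihole-FTL`, and `%pihole-FTL*` strips from that match to the end. A sketch with an assumed sample of `get_binary_name` output (the real output differs; only the "last line begins with pihole-FTL" property matters):

```shell
# Hypothetical multi-line output of get_binary_name; only the last line
# is the binary name, and it always begins with "pihole-FTL".
funcOutput=$'Checking processor...\nDetected x86_64 architecture\npihole-FTL-amd64'

# Keep the text after the last "pihole-FTL" and re-prefix it
binary="pihole-FTL${funcOutput##*pihole-FTL}"
# Keep the text before "pihole-FTL" (the status lines to display)
theRest="${funcOutput%pihole-FTL*}"

echo "${binary}"
```

So `binary` becomes `pihole-FTL-amd64` while `theRest` holds the progress lines that preceded it.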
@@ -2344,7 +2454,7 @@ main() {
     if [ -n "${PIHOLE_DNS_1}" ]; then
         local string="\"${PIHOLE_DNS_1}\""
         [ -n "${PIHOLE_DNS_2}" ] && string+=", \"${PIHOLE_DNS_2}\""
-        setFTLConfigValue "dns.upstreams" "[ $string ]"
+        setFTLConfigValue "dns.upstreams" "[ ${string} ]"
     fi

     if [ -n "${QUERY_LOGGING}" ]; then
@@ -2395,7 +2505,7 @@ main() {
 \\n\\nIPv4: ${IPV4_ADDRESS%/*}\
 \\nIPv6: ${IPV6_ADDRESS:-"Not Configured"}\
 \\nIf you have not done so already, the above IP should be set to static.\
-\\nView the web interface at http://pi.hole/admin:${WEBPORT} or http://${IPV4_ADDRESS%/*}:${WEBPORT}/admin\\n\\nYour Admin Webpage login password is ${pw}\
+\\nView the web interface at http://pi.hole:${WEBPORT}/admin or http://${IPV4_ADDRESS%/*}:${WEBPORT}/admin\\n\\nYour Admin Webpage login password is ${pw}\
 \\n
 \\n
 \\nTo allow your user to use all CLI functions without authentication,\
@@ -12,9 +12,7 @@
 source "/opt/pihole/COL_TABLE"
 # shellcheck source="./advanced/Scripts/utils.sh"
 source "/opt/pihole/utils.sh"
-
-ADMIN_INTERFACE_DIR=$(getFTLConfigValue "webserver.paths.webroot")$(getFTLConfigValue "webserver.paths.webhome")
-readonly ADMIN_INTERFACE_DIR
+# getFTLConfigValue() from utils.sh

 while true; do
     read -rp " ${QST} Are you sure you would like to remove ${COL_BOLD}Pi-hole${COL_NC}? [y/N] " answer
@@ -29,125 +27,179 @@ str="Root user check"
 if [[ ${EUID} -eq 0 ]]; then
     echo -e " ${TICK} ${str}"
 else
-    # Check if sudo is actually installed
-    # If it isn't, exit because the uninstall can not complete
-    if [ -x "$(command -v sudo)" ]; then
-        export SUDO="sudo"
-    else
-        echo -e " ${CROSS} ${str}
-        Script called with non-root privileges
-        The Pi-hole requires elevated privileges to uninstall"
-        exit 1
-    fi
+    echo -e " ${CROSS} ${str}
+    Script called with non-root privileges
+    The Pi-hole requires elevated privileges to uninstall"
+    exit 1
 fi

-readonly PI_HOLE_FILES_DIR="/etc/.pihole"
+# Get paths for admin interface, log files and database files,
+# to allow deletion where user has specified a non-default location
+ADMIN_INTERFACE_DIR=$(getFTLConfigValue "webserver.paths.webroot")$(getFTLConfigValue "webserver.paths.webhome")
+FTL_LOG=$(getFTLConfigValue "files.log.ftl")
+DNSMASQ_LOG=$(getFTLConfigValue "files.log.dnsmasq")
+WEBSERVER_LOG=$(getFTLConfigValue "files.log.webserver")
+PIHOLE_DB=$(getFTLConfigValue "files.database")
+GRAVITY_DB=$(getFTLConfigValue "files.gravity")
+MACVENDOR_DB=$(getFTLConfigValue "files.macvendor")

+PI_HOLE_LOCAL_REPO="/etc/.pihole"
+# Setting SKIP_INSTALL="true" to source the installer functions without running them
 SKIP_INSTALL="true"
 # shellcheck source="./automated install/basic-install.sh"
-source "${PI_HOLE_FILES_DIR}/automated install/basic-install.sh"
-# package_manager_detect() sourced from basic-install.sh
-package_manager_detect
+source "${PI_HOLE_LOCAL_REPO}/automated install/basic-install.sh"
+# Functions and Variables sources from basic-install:
+# package_manager_detect(), disable_service(), stop_service(),
+# restart service() and is_command()
+# PI_HOLE_CONFIG_DIR PI_HOLE_INSTALL_DIR PI_HOLE_LOCAL_REPO

 removeMetaPackage() {
     # Purge Pi-hole meta package
     echo ""
     echo -ne " ${INFO} Removing Pi-hole meta package...";
-    eval "${SUDO}" "${PKG_REMOVE}" "pihole-meta" &> /dev/null;
+    eval "${PKG_REMOVE}" "pihole-meta" &> /dev/null;
     echo -e "${OVER} ${INFO} Removed Pi-hole meta package";

 }

-removePiholeFiles() {
+removeWebInterface() {
     # Remove the web interface of Pi-hole
     echo -ne " ${INFO} Removing Web Interface..."
-    ${SUDO} rm -rf "${ADMIN_INTERFACE_DIR}" &> /dev/null
+    rm -rf "${ADMIN_INTERFACE_DIR:-/var/www/html/admin/}" &> /dev/null
     echo -e "${OVER} ${TICK} Removed Web Interface"
+}

-# Attempt to preserve backwards compatibility with older versions
-# to guarantee no additional changes were made to /etc/crontab after
-# the installation of pihole, /etc/crontab.pihole should be permanently
-# preserved.
-if [[ -f /etc/crontab.orig ]]; then
-    ${SUDO} mv /etc/crontab /etc/crontab.pihole
-    ${SUDO} mv /etc/crontab.orig /etc/crontab
-    ${SUDO} service cron restart
-    echo -e " ${TICK} Restored the default system cron"
-fi
-
-# Attempt to preserve backwards compatibility with older versions
-if [[ -f /etc/cron.d/pihole ]];then
-    ${SUDO} rm -f /etc/cron.d/pihole &> /dev/null
-    echo -e " ${TICK} Removed /etc/cron.d/pihole"
-fi
-
-${SUDO} rm -rf /var/log/*pihole* &> /dev/null
-${SUDO} rm -rf /var/log/pihole/*pihole* &> /dev/null
-${SUDO} rm -rf /etc/pihole/ &> /dev/null
-${SUDO} rm -rf /etc/.pihole/ &> /dev/null
-${SUDO} rm -rf /opt/pihole/ &> /dev/null
-${SUDO} rm -f /usr/local/bin/pihole &> /dev/null
-${SUDO} rm -f /etc/bash_completion.d/pihole &> /dev/null
-${SUDO} rm -f /etc/sudoers.d/pihole &> /dev/null
-echo -e " ${TICK} Removed config files"
-
-# Restore Resolved
-if [[ -e /etc/systemd/resolved.conf.orig ]] || [[ -e /etc/systemd/resolved.conf.d/90-pi-hole-disable-stub-listener.conf ]]; then
-    ${SUDO} cp -p /etc/systemd/resolved.conf.orig /etc/systemd/resolved.conf &> /dev/null || true
-    ${SUDO} rm -f /etc/systemd/resolved.conf.d/90-pi-hole-disable-stub-listener.conf
-    systemctl reload-or-restart systemd-resolved
-fi
-
-# Remove FTL
-if command -v pihole-FTL &> /dev/null; then
+removeFTL() {
+    # Remove FTL and stop any running FTL service
+    if is_command "pihole-FTL"; then
+        # service stop & disable from basic_install.sh
+        stop_service pihole-FTL
+        disable_service pihole-FTL
     echo -ne " ${INFO} Removing pihole-FTL..."
-    if [[ -x "$(command -v systemctl)" ]]; then
-        systemctl stop pihole-FTL
-    else
-        service pihole-FTL stop
-    fi
-    ${SUDO} rm -f /etc/systemd/system/pihole-FTL.service
+        rm -f /etc/systemd/system/pihole-FTL.service &> /dev/null
     if [[ -d '/etc/systemd/system/pihole-FTL.service.d' ]]; then
         read -rp " ${QST} FTL service override directory /etc/systemd/system/pihole-FTL.service.d detected. Do you wish to remove this from your system? [y/N] " answer
         case $answer in
             [yY]*)
                 echo -ne " ${INFO} Removing /etc/systemd/system/pihole-FTL.service.d..."
-                ${SUDO} rm -R /etc/systemd/system/pihole-FTL.service.d
+                rm -R /etc/systemd/system/pihole-FTL.service.d &> /dev/null
                 echo -e "${OVER} ${INFO} Removed /etc/systemd/system/pihole-FTL.service.d"
                 ;;
             *) echo -e " ${INFO} Leaving /etc/systemd/system/pihole-FTL.service.d in place.";;
         esac
     fi
-    ${SUDO} rm -f /etc/init.d/pihole-FTL
-    ${SUDO} rm -f /usr/bin/pihole-FTL
+    rm -f /etc/init.d/pihole-FTL &> /dev/null
+    rm -f /usr/bin/pihole-FTL &> /dev/null
     echo -e "${OVER} ${TICK} Removed pihole-FTL"

+        # Force systemd reload after service files are removed
+        if is_command "systemctl"; then
+            echo -ne " ${INFO} Restarting systemd..."
+            systemctl daemon-reload
+            echo -e "${OVER} ${TICK} Restarted systemd..."
+        fi
+    fi
+}
+removeCronFiles() {
+    # Attempt to preserve backwards compatibility with older versions
+    # to guarantee no additional changes were made to /etc/crontab after
+    # the installation of pihole, /etc/crontab.pihole should be permanently
+    # preserved.
+    if [[ -f /etc/crontab.orig ]]; then
+        mv /etc/crontab /etc/crontab.pihole
+        mv /etc/crontab.orig /etc/crontab
+        restart_service cron
+        echo -e " ${TICK} Restored the default system cron"
+        echo -e " ${INFO} A backup of the most recent crontab is saved at /etc/crontab.pihole"
     fi

-# If the pihole manpage exists, then delete and rebuild man-db
+    # Attempt to preserve backwards compatibility with older versions
+    if [[ -f /etc/cron.d/pihole ]];then
+        rm -f /etc/cron.d/pihole &> /dev/null
+        echo -e " ${TICK} Removed /etc/cron.d/pihole"
+    fi
+}
+
+removePiholeFiles() {
+    # Remove databases (including user specified non-default paths)
+    rm -f "${PIHOLE_DB:-/etc/pihole/pihole-FTL.db}" &> /dev/null
+    rm -f "${GRAVITY_DB:-/etc/pihole/gravity.db}" &> /dev/null
+    rm -f "${MACVENDOR_DB:-/etc/pihole/macvendor.db}" &> /dev/null
+
+    # Remove pihole config, repo and local files
+    rm -rf "${PI_HOLE_CONFIG_DIR:-/etc/pihole}" &> /dev/null
+    rm -rf "${PI_HOLE_LOCAL_REPO:-/etc/.pihole}" &> /dev/null
+    rm -rf "${PI_HOLE_INSTALL_DIR:-/opt/pihole}" &> /dev/null
+
+    # Remove log files (including user specified non-default paths)
+    # and rotated logs
+    # Explicitly escape spaces, in case of trailing space in path before wildcard
+    rm -f "$(printf '%q' "${FTL_LOG:-/var/log/pihole/FTL.log}")*" &> /dev/null
+    rm -f "$(printf '%q' "${DNSMASQ_LOG:-/var/log/pihole/pihole.log}")*" &> /dev/null
+    rm -f "$(printf '%q' "${WEBSERVER_LOG:-/var/log/pihole/webserver.log}")*" &> /dev/null
+
+    # remove any remnant log-files from old versions
+    rm -rf /var/log/*pihole* &> /dev/null
+
+    # remove log directory
+    rm -rf /var/log/pihole &> /dev/null
+
+    # remove the pihole command
+    rm -f /usr/local/bin/pihole &> /dev/null
+
+    # remove Pi-hole's bash completion
+    rm -f /etc/bash_completion.d/pihole &> /dev/null
+    rm -f /etc/bash_completion.d/pihole-FTL &> /dev/null
+
+    # Remove pihole from sudoers for compatibility with old versions
+    rm -f /etc/sudoers.d/pihole &> /dev/null
+
+    echo -e " ${TICK} Removed config files"
+}

+removeManPage() {
+    # If the pihole manpage exists, then delete
     if [[ -f /usr/local/share/man/man8/pihole.8 ]]; then
-        ${SUDO} rm -f /usr/local/share/man/man8/pihole.8 /usr/local/share/man/man8/pihole-FTL.8 /usr/local/share/man/man5/pihole-FTL.conf.5
-        ${SUDO} mandb -q &>/dev/null
+        rm -f /usr/local/share/man/man8/pihole.8 /usr/local/share/man/man8/pihole-FTL.8 /usr/local/share/man/man5/pihole-FTL.conf.5
+        # Rebuild man-db if present
+        if is_command "mandb"; then
+            mandb -q &>/dev/null
+        fi
         echo -e " ${TICK} Removed pihole man page"
     fi
+}

+removeUser() {
     # If the pihole user exists, then remove
     if id "pihole" &> /dev/null; then
-        if ${SUDO} userdel -r pihole 2> /dev/null; then
+        if userdel -r pihole 2> /dev/null; then
             echo -e " ${TICK} Removed 'pihole' user"
         else
             echo -e " ${CROSS} Unable to remove 'pihole' user"
         fi
     fi

     # If the pihole group exists, then remove
     if getent group "pihole" &> /dev/null; then
-        if ${SUDO} groupdel pihole 2> /dev/null; then
+        if groupdel pihole 2> /dev/null; then
             echo -e " ${TICK} Removed 'pihole' group"
         else
             echo -e " ${CROSS} Unable to remove 'pihole' group"
         fi
     fi
+}

+restoreResolved() {
+    # Restore Resolved from saved configuration, if present
+    if [[ -e /etc/systemd/resolved.conf.orig ]] || [[ -e /etc/systemd/resolved.conf.d/90-pi-hole-disable-stub-listener.conf ]]; then
+        cp -p /etc/systemd/resolved.conf.orig /etc/systemd/resolved.conf &> /dev/null || true
+        rm -f /etc/systemd/resolved.conf.d/90-pi-hole-disable-stub-listener.conf &> /dev/null
+        systemctl reload-or-restart systemd-resolved
+    fi
+}
+
+completionMessage() {
     echo -e "\\n We're sorry to see you go, but thanks for checking out Pi-hole!
     If you need help, reach out to us on GitHub, Discourse, Reddit or Twitter
     Reinstall at any time: ${COL_BOLD}curl -sSL https://install.pi-hole.net | bash${COL_NC}
@@ -158,5 +210,17 @@ removePiholeFiles() {
 }

 ######### SCRIPT ###########
+# The ordering here allows clean uninstallation with nothing
+# removed before anything that depends upon it.
+# eg removeFTL relies on scripts removed by removePiholeFiles
+# removeUser relies on commands removed by removeMetaPackage
+package_manager_detect
+removeWebInterface
+removeCronFiles
+restoreResolved
+removeManPage
+removeFTL
+removeUser
 removeMetaPackage
 removePiholeFiles
+completionMessage
gravity.sh
@@ -118,9 +118,12 @@ gravity_swap_databases() {

     # Swap databases and remove or conditionally rename old database
     # Number of available blocks on disk
-    availableBlocks=$(stat -f --format "%a" "${gravityDIR}")
+    # Busybox Compat: `stat` long flags unsupported
+    # -f flag is short form of --file-system.
+    # -c flag is short form of --format.
+    availableBlocks=$(stat -f -c "%a" "${gravityDIR}")
     # Number of blocks, used by gravity.db
-    gravityBlocks=$(stat --format "%b" "${gravityDBfile}")
+    gravityBlocks=$(stat -c "%b" "${gravityDBfile}")
     # Only keep the old database if available disk space is at least twice the size of the existing gravity.db.
     # Better be safe than sorry...
     oldAvail=false
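The free-space guard around those `stat` calls can be exercised standalone. This sketch uses placeholder paths (`/tmp` and a throwaway sample file) rather than the real gravity database; note that `%a` reports free blocks in the filesystem's own block size while `%b` reports 512-byte blocks allocated to the file, so the comparison is a coarse safety margin rather than an exact byte count:

```shell
# Placeholder paths for illustration only
sampleDir="/tmp"
sampleFile=$(mktemp /tmp/demo_db.XXXXXX)
echo "some data" > "${sampleFile}"

availableBlocks=$(stat -f -c "%a" "${sampleDir}")   # free blocks on the filesystem
usedBlocks=$(stat -c "%b" "${sampleFile}")          # 512B blocks used by the file

# Keep the old copy only if free space exceeds twice the file's footprint
oldAvail=false
if [ "${availableBlocks}" -gt "$((usedBlocks * 2))" ]; then
    oldAvail=true
fi
echo "${oldAvail}"
rm -f "${sampleFile}"
```

Using the short `-f -c` flags (rather than `--file-system --format`) is what makes this work on busybox `stat` as well as GNU coreutils.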
@@ -609,7 +612,7 @@ compareLists() {
 gravity_DownloadBlocklistFromUrl() {
     local url="${1}" adlistID="${2}" saveLocation="${3}" compression="${4}" gravity_type="${5}" domain="${6}"
     local listCurlBuffer str httpCode success="" ip customUpstreamResolver=""
-    local file_path permissions ip_addr port blocked=false download=true
+    local file_path ip_addr port blocked=false download=true
     # modifiedOptions is an array to store all the options used to check if the adlist has been changed upstream
     local modifiedOptions=()

@@ -718,36 +721,47 @@ gravity_DownloadBlocklistFromUrl() {
         fi
     fi

-    # If we are going to "download" a local file, we first check if the target
-    # file has a+r permission. We explicitly check for all+read because we want
-    # to make sure that the file is readable by everyone and not just the user
-    # running the script.
-    if [[ $url == "file://"* ]]; then
+    # If we "download" a local file (file://), verify read access before using it.
+    # When running as root (e.g., via pihole -g), check that the 'pihole' user can read the file
+    # to match the effective runtime user of FTL; otherwise, check the current user's read access
+    # (e.g., in Docker or when invoked by a non-root user). The target must
+    # resolve to a regular file and be readable by the evaluated user.
+    if [[ "${url}" == "file:/"* ]]; then
         # Get the file path
-        file_path=$(echo "$url" | cut -d'/' -f3-)
+        file_path=$(echo "${url}" | cut -d'/' -f3-)
         # Check if the file exists and is a regular file (i.e. not a socket, fifo, tty, block). Might still be a symlink.
-        if [[ ! -f $file_path ]]; then
+        if [[ ! -f ${file_path} ]]; then
             # Output that the file does not exist
             echo -e "${OVER} ${CROSS} ${file_path} does not exist"
             download=false
-        else
-            # Check if the file or a file referenced by the symlink has a+r permissions
-            permissions=$(stat -L -c "%a" "$file_path")
-            if [[ $permissions == *4 || $permissions == *5 || $permissions == *6 || $permissions == *7 ]]; then
-                # Output that we are using the local file
-                echo -e "${OVER} ${INFO} Using local file ${file_path}"
-            else
-                # Output that the file does not have the correct permissions
-                echo -e "${OVER} ${CROSS} Cannot read file (file needs to have a+r permission)"
-                download=false
-            fi
+        else
+            if [ "$(id -un)" == "root" ]; then
+                # If we are root, we need to check if the pihole user has read permission
+                # otherwise, we might read files that the pihole user should not be able to read
+                if sudo -u pihole test -r "${file_path}"; then
+                    echo -e "${OVER} ${INFO} Using local file ${file_path}"
+                else
+                    echo -e "${OVER} ${CROSS} Cannot read file (user 'pihole' lacks read permission)"
+                    download=false
+                fi
+            else
+                # If we are not root, we just check if the current user has read permission
+                if [[ -r "${file_path}" ]]; then
+                    # Output that we are using the local file
+                    echo -e "${OVER} ${INFO} Using local file ${file_path}"
+                else
+                    # Output that the file is not readable by the current user
+                    echo -e "${OVER} ${CROSS} Cannot read file (current user '$(id -un)' lacks read permission)"
+                    download=false
+                fi
+            fi
         fi
     fi

     # Check for allowed protocols
     if [[ $url != "http"* && $url != "https"* && $url != "file"* && $url != "ftp"* && $url != "ftps"* && $url != "sftp"* ]]; then
         echo -e "${OVER} ${CROSS} ${str} Invalid protocol specified. Ignoring list."
-        echo -e "Ensure your URL starts with a valid protocol like http:// , https:// or file:// ."
+        echo -e " Ensure your URL starts with a valid protocol like http:// , https:// or file:// ."
         download=false
     fi

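The non-root branch of the new check reduces to a scheme strip plus a plain `-r` test. A minimal sketch, using `/etc/hosts` as a hypothetical `file://` list path (the root branch with `sudo -u pihole test -r` is omitted here since it requires the `pihole` user to exist):

```shell
url="file:///etc/hosts"          # hypothetical local list
download=true

# Strip the scheme: fields 3+ of a '/'-split leave the absolute path
file_path=$(echo "${url}" | cut -d'/' -f3-)

if [[ ! -f ${file_path} ]]; then
    download=false               # not a regular file
elif [[ ! -r ${file_path} ]]; then
    download=false               # current user cannot read it
fi

echo "${file_path} ${download}"
```

Replacing the old octal-permission parse (`stat -L -c "%a"`) with `test -r` also correctly handles ACLs and group membership, which a literal mode-digit check cannot.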
@@ -855,7 +869,7 @@ gravity_Table_Count() {
     fi
 }

-# Output count of blacklisted domains and regex filters
+# Output count of denied and allowed domains and regex filters
 gravity_ShowCount() {
     # Here we use the table "gravity" instead of the view "vw_gravity" for speed.
     # It's safe to replace it here, because right after a gravity run both will show the exactly same number of domains.
@@ -948,7 +962,7 @@ database_recovery() {
     else
         echo -e "${OVER} ${CROSS} ${str} - the following errors happened:"
         while IFS= read -r line; do echo " - $line"; done <<<"$result"
-        echo -e " ${CROSS} Recovery failed. Try \"pihole -r recreate\" instead."
+        echo -e " ${CROSS} Recovery failed. Try \"pihole -g -r recreate\" instead."
         exit 1
     fi
     echo ""
@@ -1131,7 +1145,7 @@ fi

 if [[ "${forceDelete:-}" == true ]]; then
     str="Deleting existing list cache"
-    echo -ne "${INFO} ${str}..."
+    echo -ne " ${INFO} ${str}..."

     rm "${listsCacheDir}/list.*" 2>/dev/null || true
     echo -e "${OVER} ${TICK} ${str}"
@@ -105,9 +105,9 @@ Available commands and options:
 Flush the Pi-hole log
 .br

-\fB-r, reconfigure\fR
+\fB-r, repair\fR
 .br
-Reconfigure or Repair Pi-hole subsystems
+Repair Pi-hole subsystems
 .br

 \fB-t, tail\fR [arg]
@@ -268,7 +268,7 @@ Allow-/denylist manipulation

 \fBpihole --regex "ad.*\\.example\\.com$"\fR
 .br
-Adds "ad.*\\.example\\.com$" to the regex blacklist.
+Adds "ad.*\\.example\\.com$" to the regex denylist.
 Would block all subdomains of example.com which start with "ad"
 .br

@@ -317,9 +317,10 @@ Switching Pi-hole subsystem branches
 Switch to core development branch
 .br

-\fBpihole arpflush\fR
+\fBpihole networkflush\fR
 .br
-Flush information stored in Pi-hole's network tables
+Flush information stored in Pi-hole's network table
+Add '--arp' to additionally flush the ARP table
 .br

 \fBpihole api stats/summary\fR
pihole
@@ -96,8 +96,18 @@ flushFunc() {
     exit 0
 }

+# Deprecated function, should be removed in the future
+# use networkFlush instead
 arpFunc() {
-    "${PI_HOLE_SCRIPT_DIR}"/piholeARPTable.sh "$@"
+    shift
+    echo -e " ${INFO} The 'arpflush' command is deprecated, use 'networkflush' instead"
+    "${PI_HOLE_SCRIPT_DIR}"/piholeNetworkFlush.sh "$@"
+    exit 0
+}
+
+networkFlush() {
+    shift
+    "${PI_HOLE_SCRIPT_DIR}"/piholeNetworkFlush.sh "$@"
     exit 0
 }

@@ -115,7 +125,22 @@ repairPiholeFunc() {
     if [ -n "${DOCKER_VERSION}" ]; then
         unsupportedFunc
     else
-        /etc/.pihole/automated\ install/basic-install.sh --repair
+        local skipFTL additionalFlag
+        skipFTL=false
+        # Check arguments
+        for var in "$@"; do
+            case "$var" in
+                "--skipFTL") skipFTL=true ;;
+            esac
+        done
+
+        if [ "${skipFTL}" == true ]; then
+            additionalFlag="--skipFTL"
+        else
+            additionalFlag=""
+        fi
+
+        /etc/.pihole/automated\ install/basic-install.sh --repair ${additionalFlag}
         exit 0;
     fi
 }
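The argument scan added to repairPiholeFunc() is a common shell pattern: loop over `"$@"`, latch a boolean when a known flag appears, then rebuild that flag for the downstream script so unrecognized arguments are silently dropped. A standalone sketch:

```shell
#!/bin/bash
# Sketch of flag latching and forwarding, as in repairPiholeFunc().
skipFTL=false
for var in "$@"; do
    case "$var" in
        "--skipFTL") skipFTL=true ;;
        # anything else is ignored rather than passed through
    esac
done

if [ "${skipFTL}" == true ]; then
    additionalFlag="--skipFTL"
else
    additionalFlag=""
fi

# Stand-in for the basic-install.sh --repair invocation:
echo "repair ${additionalFlag}"
```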
@@ -147,10 +172,11 @@ uninstallFunc() {
 
 versionFunc() {
     exec "${PI_HOLE_SCRIPT_DIR}"/version.sh
+    exit 0
 }
 
 reloadDNS() {
-    local svcOption svc str output status pid icon FTL_PID_FILE
+    local svcOption svc str output status pid icon FTL_PID_FILE sigrtmin
     svcOption="${1:-reload}"
 
     # get the current path to the pihole-FTL.pid
@@ -169,7 +195,10 @@ reloadDNS() {
         str="FTL is not running"
         icon="${INFO}"
     else
-        svc="kill -RTMIN ${pid}"
+        sigrtmin="$(pihole-FTL sigrtmin 2>/dev/null)"
+        # Make sure sigrtmin is a number, otherwise fallback to RTMIN
+        [[ "${sigrtmin}" =~ ^[0-9]+$ ]] || unset sigrtmin
+        svc="kill -${sigrtmin:-RTMIN} ${pid}"
         str="Reloading DNS lists"
         icon="${TICK}"
     fi
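The sigrtmin fallback above can be isolated into a small sketch. `ftl_sigrtmin` below is a stand-in for the real `pihole-FTL sigrtmin` call; asking FTL for the number matters because the numeric value of SIGRTMIN differs between libcs (glibc vs musl), which is relevant now that Alpine is in the test matrix:

```shell
#!/bin/bash
# Sketch of the numeric-signal fallback added to reloadDNS().
# ftl_sigrtmin stands in for `pihole-FTL sigrtmin`.
ftl_sigrtmin() { echo "34"; }

sigrtmin="$(ftl_sigrtmin 2>/dev/null)"
# Keep the value only if it is purely numeric; otherwise unset it so the
# parameter expansion below falls back to the symbolic name RTMIN.
[[ "${sigrtmin}" =~ ^[0-9]+$ ]] || unset sigrtmin
svc="kill -${sigrtmin:-RTMIN} ${pid:-1234}"
echo "${svc}"
```

If the stand-in printed an error string instead of a number, the regex guard would reject it and `kill -RTMIN` would be used instead.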
@@ -264,6 +293,7 @@ Time:
     LogoutAPI
 
     echo -e "${OVER} ${TICK} ${str}"
+    exit 0
 }
 
 piholeLogging() {
@@ -519,7 +549,8 @@ Options:
   reloadlists         Update the lists WITHOUT flushing the cache or restarting the DNS server
   checkout            Switch Pi-hole subsystems to a different GitHub branch
                         Add '-h' for more info on checkout usage
-  arpflush            Flush information stored in Pi-hole's network tables";
+  networkflush        Flush information stored in Pi-hole's network tables
+                        Add '--arp' to additionally flush the ARP table ";
     exit 0
 }
@@ -528,7 +559,7 @@ if [[ $# = 0 ]]; then
 fi
 
 # functions that do not require sudo power
-need_root=1
+need_root=
 case "${1}" in
     "-h" | "help" | "--help" ) helpFunc;;
     "-v" | "version" ) versionFunc;;
@@ -536,31 +567,32 @@ case "${1}" in
     "-q" | "query" ) queryFunc "$@";;
     "status" ) statusFunc "$2";;
     "tricorder" ) tricorderFunc;;
+    "allow" | "allowlist" ) listFunc "$@";;
+    "deny" | "denylist" ) listFunc "$@";;
+    "--wild" | "wildcard" ) listFunc "$@";;
+    "--regex" | "regex" ) listFunc "$@";;
+    "--allow-regex" | "allow-regex" ) listFunc "$@";;
+    "--allow-wild" | "allow-wild" ) listFunc "$@";;
+    "enable" ) piholeEnable true "$2";;
+    "disable" ) piholeEnable false "$2";;
+    "api" ) shift; apiFunc "$@"; exit 0;;
 
     # we need to add all arguments that require sudo power to not trigger the * argument
-    "allow" | "allowlist" ) need_root=0;;
-    "deny" | "denylist" ) need_root=0;;
-    "--wild" | "wildcard" ) need_root=0;;
-    "--regex" | "regex" ) need_root=0;;
-    "--allow-regex" | "allow-regex" ) need_root=0;;
-    "--allow-wild" | "allow-wild" ) need_root=0;;
-    "-f" | "flush" ) ;;
-    "-up" | "updatePihole" ) ;;
-    "-r" | "repair" ) ;;
-    "-l" | "logging" ) ;;
-    "uninstall" ) ;;
-    "enable" ) need_root=0;;
-    "disable" ) need_root=0;;
-    "-d" | "debug" ) ;;
-    "-g" | "updateGravity" ) need_root=0;;
-    "reloaddns" ) ;;
-    "reloadlists" ) ;;
-    "setpassword" ) ;;
-    "checkout" ) ;;
-    "updatechecker" ) ;;
-    "arpflush" ) ;;
-    "-t" | "tail" ) ;;
-    "api" ) need_root=0;;
+    "-f" | "flush" ) need_root=true;;
+    "-up" | "updatePihole" ) need_root=true;;
+    "-r" | "repair" ) need_root=true;;
+    "-l" | "logging" ) need_root=true;;
+    "uninstall" ) need_root=true;;
+    "-d" | "debug" ) need_root=true;;
+    "-g" | "updateGravity" ) need_root=true;;
+    "reloaddns" ) need_root=true;;
+    "reloadlists" ) need_root=true;;
+    "setpassword" ) need_root=true;;
+    "checkout" ) need_root=true;;
+    "updatechecker" ) need_root=true;;
+    "arpflush" ) need_root=true;; # Deprecated, use networkflush instead
+    "networkflush" ) need_root=true;;
+    "-t" | "tail" ) need_root=true;;
     * ) helpFunc;;
 esac
 
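The need_root change above replaces the numeric 1/0 flag with an empty-vs-set one, so the later privilege check reduces to a simple non-empty test combined with the EUID check. A minimal sketch of the pattern (command names here are illustrative):

```shell
#!/bin/bash
# Empty-vs-set flag: commands needing sudo set need_root=true, everything
# else leaves it empty; the check is then just -n on the variable.
need_root=
case "${1:-status}" in
    "-g" | "updateGravity") need_root=true ;;
    "status")               : ;;   # read-only command, no root required
esac

if [[ $EUID -ne 0 && -n "${need_root}" ]]; then
    echo "needs root"
else
    echo "ok without root"
fi
```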
@@ -572,36 +604,29 @@ fi
 
 # Check if the current user is not root and if the command
 # requires root. If so, exit with an error message.
-if [[ $EUID -ne 0 && need_root -eq 1 ]];then
-    echo -e " ${CROSS} The Pi-hole command requires root privileges, try:"
+# Add an exception for the user "pihole" to allow the webserver running gravity
+if [[ ( $EUID -ne 0 && ${USER} != "pihole" ) && -n "${need_root}" ]]; then
+    echo -e " ${CROSS} This Pi-hole command requires root privileges, try:"
     echo -e " ${COL_GREEN}sudo pihole $*${COL_NC}"
     exit 1
 fi
 
 # Handle redirecting to specific functions based on arguments
 case "${1}" in
-    "allow" | "allowlist" ) listFunc "$@";;
-    "deny" | "denylist" ) listFunc "$@";;
-    "--wild" | "wildcard" ) listFunc "$@";;
-    "--regex" | "regex" ) listFunc "$@";;
-    "--allow-regex" | "allow-regex" ) listFunc "$@";;
-    "--allow-wild" | "allow-wild" ) listFunc "$@";;
     "-d" | "debug" ) debugFunc "$@";;
     "-f" | "flush" ) flushFunc "$@";;
     "-up" | "updatePihole" ) updatePiholeFunc "$@";;
-    "-r" | "repair" ) repairPiholeFunc;;
+    "-r" | "repair" ) repairPiholeFunc "$@";;
     "-g" | "updateGravity" ) updateGravityFunc "$@";;
     "-l" | "logging" ) piholeLogging "$@";;
     "uninstall" ) uninstallFunc;;
-    "enable" ) piholeEnable true "$2";;
-    "disable" ) piholeEnable false "$2";;
     "reloaddns" ) reloadDNS "reload";;
     "reloadlists" ) reloadDNS "reload-lists";;
     "setpassword" ) SetWebPassword "$@";;
     "checkout" ) piholeCheckoutFunc "$@";;
     "updatechecker" ) shift; updateCheckFunc "$@";;
-    "arpflush" ) arpFunc "$@";;
+    "arpflush" ) arpFunc "$@";; # Deprecated, use networkflush instead
+    "networkflush" ) networkFlush "$@";;
     "-t" | "tail" ) tailFunc "$2";;
-    "api" ) shift; apiFunc "$@";;
     * ) helpFunc;;
 esac
--- /dev/null
+++ b/test/_alpine_3_21.Dockerfile
@@ -0,0 +1,18 @@
+FROM alpine:3.21
+
+ENV GITDIR=/etc/.pihole
+ENV SCRIPTDIR=/opt/pihole
+RUN sed -i 's/#\(.*\/community\)/\1/' /etc/apk/repositories
+RUN apk --no-cache add bash coreutils curl git jq openrc shadow
+
+RUN mkdir -p $GITDIR $SCRIPTDIR /etc/pihole
+ADD . $GITDIR
+RUN cp $GITDIR/advanced/Scripts/*.sh $GITDIR/gravity.sh $GITDIR/pihole $GITDIR/automated\ install/*.sh $GITDIR/advanced/Scripts/COL_TABLE $SCRIPTDIR/
+ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$SCRIPTDIR
+
+RUN true && \
+    chmod +x $SCRIPTDIR/*
+
+ENV SKIP_INSTALL=true
+
+#sed '/# Start the installer/Q' /opt/pihole/basic-install.sh > /opt/pihole/stub_basic-install.sh && \
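The `sed` line in the Alpine Dockerfiles uncomments the community repository entry in `/etc/apk/repositories` (several of the packages installed on the next line live there). A quick demonstration of that substitution on a sample repositories file:

```shell
#!/bin/bash
# Demonstrate the community-repo uncommenting sed from the Alpine
# Dockerfiles on a throwaway sample file.
f="$(mktemp)"
printf '%s\n' \
  "https://dl-cdn.alpinelinux.org/alpine/v3.21/main" \
  "#https://dl-cdn.alpinelinux.org/alpine/v3.21/community" > "$f"

# Capture group keeps everything after the '#'; only community lines match.
sed -i 's/#\(.*\/community\)/\1/' "$f"
cat "$f"
```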
--- /dev/null
+++ b/test/_alpine_3_22.Dockerfile
@@ -0,0 +1,18 @@
+FROM alpine:3.22
+
+ENV GITDIR=/etc/.pihole
+ENV SCRIPTDIR=/opt/pihole
+RUN sed -i 's/#\(.*\/community\)/\1/' /etc/apk/repositories
+RUN apk --no-cache add bash coreutils curl git jq openrc shadow
+
+RUN mkdir -p $GITDIR $SCRIPTDIR /etc/pihole
+ADD . $GITDIR
+RUN cp $GITDIR/advanced/Scripts/*.sh $GITDIR/gravity.sh $GITDIR/pihole $GITDIR/automated\ install/*.sh $GITDIR/advanced/Scripts/COL_TABLE $SCRIPTDIR/
+ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$SCRIPTDIR
+
+RUN true && \
+    chmod +x $SCRIPTDIR/*
+
+ENV SKIP_INSTALL=true
+
+#sed '/# Start the installer/Q' /opt/pihole/basic-install.sh > /opt/pihole/stub_basic-install.sh && \
--- /dev/null
+++ b/test/_alpine_3_23.Dockerfile
@@ -0,0 +1,18 @@
+FROM alpine:3.23
+
+ENV GITDIR=/etc/.pihole
+ENV SCRIPTDIR=/opt/pihole
+RUN sed -i 's/#\(.*\/community\)/\1/' /etc/apk/repositories
+RUN apk --no-cache add bash coreutils curl git jq openrc shadow
+
+RUN mkdir -p $GITDIR $SCRIPTDIR /etc/pihole
+ADD . $GITDIR
+RUN cp $GITDIR/advanced/Scripts/*.sh $GITDIR/gravity.sh $GITDIR/pihole $GITDIR/automated\ install/*.sh $GITDIR/advanced/Scripts/COL_TABLE $SCRIPTDIR/
+ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$SCRIPTDIR
+
+RUN true && \
+    chmod +x $SCRIPTDIR/*
+
+ENV SKIP_INSTALL=true
+
+#sed '/# Start the installer/Q' /opt/pihole/basic-install.sh > /opt/pihole/stub_basic-install.sh && \
--- /dev/null
+++ b/test/_debian_13.Dockerfile
@@ -0,0 +1,16 @@
+FROM buildpack-deps:trixie-scm
+
+ENV GITDIR=/etc/.pihole
+ENV SCRIPTDIR=/opt/pihole
+
+RUN mkdir -p $GITDIR $SCRIPTDIR /etc/pihole
+ADD . $GITDIR
+RUN cp $GITDIR/advanced/Scripts/*.sh $GITDIR/gravity.sh $GITDIR/pihole $GITDIR/automated\ install/*.sh $GITDIR/advanced/Scripts/COL_TABLE $SCRIPTDIR/
+ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$SCRIPTDIR
+
+RUN true && \
+    chmod +x $SCRIPTDIR/*
+
+ENV SKIP_INSTALL=true
+
+#sed '/# Start the installer/Q' /opt/pihole/basic-install.sh > /opt/pihole/stub_basic-install.sh && \
--- /dev/null
+++ b/test/_fedora_43.Dockerfile
@@ -0,0 +1,17 @@
+FROM fedora:43
+RUN dnf install -y git initscripts
+
+ENV GITDIR=/etc/.pihole
+ENV SCRIPTDIR=/opt/pihole
+
+RUN mkdir -p $GITDIR $SCRIPTDIR /etc/pihole
+ADD . $GITDIR
+RUN cp $GITDIR/advanced/Scripts/*.sh $GITDIR/gravity.sh $GITDIR/pihole $GITDIR/automated\ install/*.sh $GITDIR/advanced/Scripts/COL_TABLE $SCRIPTDIR/
+ENV PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:$SCRIPTDIR
+
+RUN true && \
+    chmod +x $SCRIPTDIR/*
+
+ENV SKIP_INSTALL=true
+
+#sed '/# Start the installer/Q' /opt/pihole/basic-install.sh > /opt/pihole/stub_basic-install.sh && \
@@ -51,29 +51,19 @@ def mock_command(script, args, container):
     in unit tests
     """
    full_script_path = "/usr/local/bin/{}".format(script)
-    mock_script = dedent(
-        r"""\
+    mock_script = dedent(r"""\
     #!/bin/bash -e
     echo "\$0 \$@" >> /var/log/{script}
-    case "\$1" in""".format(
-            script=script
-        )
-    )
+    case "\$1" in""".format(script=script))
     for k, v in args.items():
-        case = dedent(
-            """
+        case = dedent("""
         {arg})
         echo {res}
         exit {retcode}
-        ;;""".format(
-                arg=k, res=v[0], retcode=v[1]
-            )
-        )
+        ;;""".format(arg=k, res=v[0], retcode=v[1]))
         mock_script += case
-    mock_script += dedent(
-        """
-    esac"""
-    )
+    mock_script += dedent("""
+    esac""")
     container.run(
         """
         cat <<EOF> {script}\n{content}\nEOF
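mock_command ultimately writes a stub like the following onto the container, earlier in PATH than the real binary, so installer code under test hits the mock. This is a hand-expanded example of the generated script for `apt-get` with `{"update": ("", "1")}`; the paths here are local stand-ins (the real stub goes to /usr/local/bin and logs to /var/log):

```shell
#!/bin/bash
# Install a PATH-shadowing mock of apt-get, mirroring what
# mock_command's heredoc produces, then exercise it.
mkdir -p /tmp/mockbin
cat <<'EOF' > /tmp/mockbin/apt-get
#!/bin/bash -e
echo "$0 $@" >> /tmp/mockbin/apt-get.log
case "$1" in
update)
echo
exit 1
;;
esac
EOF
chmod +x /tmp/mockbin/apt-get

# The mocked subcommand logs its invocation and returns the configured
# exit code (1), without touching the real package manager.
PATH="/tmp/mockbin:$PATH" apt-get update || echo "mock exited $?"
```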
@@ -94,37 +84,23 @@ def mock_command_passthrough(script, args, container):
     """
     orig_script_path = container.check_output("command -v {}".format(script))
     full_script_path = "/usr/local/bin/{}".format(script)
-    mock_script = dedent(
-        r"""\
+    mock_script = dedent(r"""\
     #!/bin/bash -e
     echo "\$0 \$@" >> /var/log/{script}
-    case "\$1" in""".format(
-            script=script
-        )
-    )
+    case "\$1" in""".format(script=script))
     for k, v in args.items():
-        case = dedent(
-            """
+        case = dedent("""
         {arg})
         echo {res}
         exit {retcode}
-        ;;""".format(
-                arg=k, res=v[0], retcode=v[1]
-            )
-        )
+        ;;""".format(arg=k, res=v[0], retcode=v[1]))
         mock_script += case
-    mock_script += dedent(
-        r"""
+    mock_script += dedent(r"""
     *)
     {orig_script_path} "\$@"
-    ;;""".format(
-            orig_script_path=orig_script_path
-        )
-    )
-    mock_script += dedent(
-        """
-    esac"""
-    )
+    ;;""".format(orig_script_path=orig_script_path))
+    mock_script += dedent("""
+    esac""")
     container.run(
         """
         cat <<EOF> {script}\n{content}\nEOF
@@ -141,29 +117,19 @@ def mock_command_run(script, args, container):
     in unit tests
     """
     full_script_path = "/usr/local/bin/{}".format(script)
-    mock_script = dedent(
-        r"""\
+    mock_script = dedent(r"""\
     #!/bin/bash -e
     echo "\$0 \$@" >> /var/log/{script}
-    case "\$1 \$2" in""".format(
-            script=script
-        )
-    )
+    case "\$1 \$2" in""".format(script=script))
     for k, v in args.items():
-        case = dedent(
-            """
+        case = dedent("""
         \"{arg}\")
         echo {res}
         exit {retcode}
-        ;;""".format(
-                arg=k, res=v[0], retcode=v[1]
-            )
-        )
+        ;;""".format(arg=k, res=v[0], retcode=v[1]))
         mock_script += case
-    mock_script += dedent(
-        """
-    esac"""
-    )
+    mock_script += dedent(r"""
+    esac""")
     container.run(
         """
         cat <<EOF> {script}\n{content}\nEOF
@@ -180,29 +146,19 @@ def mock_command_2(script, args, container):
     in unit tests
     """
     full_script_path = "/usr/local/bin/{}".format(script)
-    mock_script = dedent(
-        r"""\
+    mock_script = dedent(r"""\
     #!/bin/bash -e
     echo "\$0 \$@" >> /var/log/{script}
-    case "\$1 \$2" in""".format(
-            script=script
-        )
-    )
+    case "\$1 \$2" in""".format(script=script))
     for k, v in args.items():
-        case = dedent(
-            """
+        case = dedent("""
         \"{arg}\")
         echo \"{res}\"
         exit {retcode}
-        ;;""".format(
-                arg=k, res=v[0], retcode=v[1]
-            )
-        )
+        ;;""".format(arg=k, res=v[0], retcode=v[1]))
         mock_script += case
-    mock_script += dedent(
-        """
-    esac"""
-    )
+    mock_script += dedent(r"""
+    esac""")
     container.run(
         """
         cat <<EOF> {script}\n{content}\nEOF
@@ -1,6 +1,6 @@
-pyyaml == 6.0.2
-pytest == 8.4.1
+pyyaml == 6.0.3
+pytest == 9.0.2
 pytest-xdist == 3.8.0
 pytest-testinfra == 10.2.2
-tox == 4.27.0
+tox == 4.35.0
 pytest-clarity == 1.0.1
@@ -6,10 +6,8 @@ from .conftest import (
     info_box,
     cross_box,
     mock_command,
-    mock_command_run,
     mock_command_2,
     mock_command_passthrough,
-    run_script,
 )
 
 FTL_BRANCH = "development"
@@ -22,12 +20,11 @@ def test_supported_package_manager(host):
     # break supported package managers
     host.run("rm -rf /usr/bin/apt-get")
     host.run("rm -rf /usr/bin/rpm")
-    package_manager_detect = host.run(
-        """
+    host.run("rm -rf /sbin/apk")
+    package_manager_detect = host.run("""
     source /opt/pihole/basic-install.sh
     package_manager_detect
-    """
-    )
+    """)
     expected_stdout = cross_box + " No supported package manager found"
     assert expected_stdout in package_manager_detect.stdout
     # assert package_manager_detect.rc == 1
@@ -37,13 +34,11 @@ def test_selinux_not_detected(host):
     """
     confirms installer continues when SELinux configuration file does not exist
     """
-    check_selinux = host.run(
-        """
+    check_selinux = host.run("""
     rm -f /etc/selinux/config
     source /opt/pihole/basic-install.sh
     checkSelinux
-    """
-    )
+    """)
     expected_stdout = info_box + " SELinux not detected"
     assert expected_stdout in check_selinux.stdout
     assert check_selinux.rc == 0
@@ -77,14 +72,24 @@ def test_installPihole_fresh_install_readableFiles(host):
         },
         host,
     )
+    mock_command_2(
+        "rc-service",
+        {
+            "rc-service pihole-FTL enable": ("", "0"),
+            "rc-service pihole-FTL restart": ("", "0"),
+            "rc-service pihole-FTL start": ("", "0"),
+            "*": ('echo "rc-service call with $@"', "0"),
+        },
+        host,
+    )
     # try to install man
     host.run("command -v apt-get > /dev/null && apt-get install -qq man")
     host.run("command -v dnf > /dev/null && dnf install -y man")
     host.run("command -v yum > /dev/null && yum install -y man")
+    host.run("command -v apk > /dev/null && apk add mandoc man-pages")
     # Workaround to get FTLv6 installed until it reaches master branch
     host.run('echo "' + FTL_BRANCH + '" > /etc/pihole/ftlbranch')
-    install = host.run(
-        """
+    install = host.run("""
     export TERM=xterm
     export DEBIAN_FRONTEND=noninteractive
     umask 0027
@@ -93,8 +98,7 @@ def test_installPihole_fresh_install_readableFiles(host):
     runUnattended=true
     main
     /opt/pihole/pihole-FTL-prestart.sh
-    """
-    )
+    """)
     assert 0 == install.rc
     maninstalled = True
     if (info_box + " man not installed") in install.stdout:
@@ -103,7 +107,7 @@ def test_installPihole_fresh_install_readableFiles(host):
         maninstalled = False
     piholeuser = "pihole"
     exit_status_success = 0
-    test_cmd = 'su --shell /bin/bash --command "test -{0} {1}" -p {2}'
+    test_cmd = 'su -s /bin/bash -c "test -{0} {1}" -p {2}'
     # check files in /etc/pihole for read, write and execute permission
     check_etc = test_cmd.format("r", "/etc/pihole", piholeuser)
     actual_rc = host.run(check_etc).rc
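The switch from `--shell`/`--command` to `-s`/`-c` presumably accommodates BusyBox `su` on Alpine, which accepts only the short option forms. The formatted command the test builds, mirrored in shell with `printf`:

```shell
#!/bin/bash
# Reproduce the test's su command template with shell printf instead of
# Python str.format; the three %s slots are mode, path and user.
test_cmd='su -s /bin/bash -c "test -%s %s" -p %s'
printf -v cmd "$test_cmd" r /etc/pihole pihole
echo "$cmd"
```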
@@ -150,12 +154,6 @@ def test_installPihole_fresh_install_readableFiles(host):
     check_man = test_cmd.format("r", "/usr/local/share/man/man8", piholeuser)
     actual_rc = host.run(check_man).rc
     assert exit_status_success == actual_rc
-    check_man = test_cmd.format("x", "/usr/local/share/man/man5", piholeuser)
-    actual_rc = host.run(check_man).rc
-    assert exit_status_success == actual_rc
-    check_man = test_cmd.format("r", "/usr/local/share/man/man5", piholeuser)
-    actual_rc = host.run(check_man).rc
-    assert exit_status_success == actual_rc
     check_man = test_cmd.format(
         "r", "/usr/local/share/man/man8/pihole.8", piholeuser
     )
@@ -189,13 +187,11 @@ def test_update_package_cache_success_no_errors(host):
     """
     confirms package cache was updated without any errors
     """
-    updateCache = host.run(
-        """
+    updateCache = host.run("""
     source /opt/pihole/basic-install.sh
     package_manager_detect
     update_package_cache
-    """
-    )
+    """)
     expected_stdout = tick_box + " Update local cache of available packages"
     assert expected_stdout in updateCache.stdout
     assert "error" not in updateCache.stdout.lower()
@@ -206,13 +202,11 @@ def test_update_package_cache_failure_no_errors(host):
     confirms package cache was not updated
     """
     mock_command("apt-get", {"update": ("", "1")}, host)
-    updateCache = host.run(
-        """
+    updateCache = host.run("""
     source /opt/pihole/basic-install.sh
     package_manager_detect
     update_package_cache
-    """
-    )
+    """)
     expected_stdout = cross_box + " Update local cache of available packages"
     assert expected_stdout in updateCache.stdout
     assert "Error: Unable to update package cache." in updateCache.stdout
@@ -248,16 +242,14 @@ def test_FTL_detect_no_errors(host, arch, detected_string, supported):
         host,
     )
     host.run('echo "' + FTL_BRANCH + '" > /etc/pihole/ftlbranch')
-    detectPlatform = host.run(
-        """
+    detectPlatform = host.run("""
     source /opt/pihole/basic-install.sh
     create_pihole_user
     funcOutput=$(get_binary_name)
     binary="pihole-FTL${funcOutput##*pihole-FTL}"
     theRest="${funcOutput%pihole-FTL*}"
     FTLdetect "${binary}" "${theRest}"
-    """
-    )
+    """)
     if supported:
         expected_stdout = info_box + " FTL Checks..."
         assert expected_stdout in detectPlatform.stdout
@@ -277,22 +269,18 @@ def test_FTL_development_binary_installed_and_responsive_no_errors(host):
     confirms FTL development binary is copied and functional in installed location
     """
     host.run('echo "' + FTL_BRANCH + '" > /etc/pihole/ftlbranch')
-    host.run(
-        """
+    host.run("""
     source /opt/pihole/basic-install.sh
     create_pihole_user
     funcOutput=$(get_binary_name)
     binary="pihole-FTL${funcOutput##*pihole-FTL}"
     theRest="${funcOutput%pihole-FTL*}"
     FTLdetect "${binary}" "${theRest}"
-    """
-    )
-    version_check = host.run(
-        """
+    """)
+    version_check = host.run("""
     VERSION=$(pihole-FTL version)
     echo ${VERSION:0:1}
-    """
-    )
+    """)
     expected_stdout = "v"
     assert expected_stdout in version_check.stdout
 
@@ -307,12 +295,10 @@ def test_IPv6_only_link_local(host):
         {"-6 address": ("inet6 fe80::d210:52fa:fe00:7ad7/64 scope link", "0")},
         host,
     )
-    detectPlatform = host.run(
-        """
+    detectPlatform = host.run("""
     source /opt/pihole/basic-install.sh
     find_IPv6_information
-    """
-    )
+    """)
     expected_stdout = "Unable to find IPv6 ULA/GUA address"
     assert expected_stdout in detectPlatform.stdout
 
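The IPv6 tests here and below distinguish link-local (fe80::/10), ULA and GUA addresses: link-local is unusable as a listen address, ULA/GUA are. A toy classifier sketching the distinction (not Pi-hole's actual detection logic, which parses mocked `ip -6 address` output; simplified in that ULA is formally fc00::/7 but deployed ULAs are fd00::/8, and GUA is 2000::/3):

```shell
#!/bin/bash
# Toy classifier for the address classes exercised by the IPv6 tests.
classify_v6() {
    case "$1" in
        fe80:*) echo "link-local" ;;   # fe80::/10, not globally routable
        fd*)    echo "ULA" ;;          # fd00::/8 in practice
        2*|3*)  echo "GUA" ;;          # 2000::/3: leading 2 or 3
        *)      echo "unknown" ;;
    esac
}

classify_v6 "fe80::d210:52fa:fe00:7ad7"
```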
@@ -332,12 +318,10 @@ def test_IPv6_only_ULA(host):
         },
         host,
     )
-    detectPlatform = host.run(
-        """
+    detectPlatform = host.run("""
     source /opt/pihole/basic-install.sh
     find_IPv6_information
-    """
-    )
+    """)
     expected_stdout = "Found IPv6 ULA address"
     assert expected_stdout in detectPlatform.stdout
 
@@ -357,12 +341,10 @@ def test_IPv6_only_GUA(host):
         },
         host,
     )
-    detectPlatform = host.run(
-        """
+    detectPlatform = host.run("""
     source /opt/pihole/basic-install.sh
     find_IPv6_information
-    """
-    )
+    """)
     expected_stdout = "Found IPv6 GUA address"
     assert expected_stdout in detectPlatform.stdout
 
@@ -383,12 +365,10 @@ def test_IPv6_GUA_ULA_test(host):
         },
         host,
     )
-    detectPlatform = host.run(
-        """
+    detectPlatform = host.run("""
     source /opt/pihole/basic-install.sh
     find_IPv6_information
-    """
-    )
+    """)
     expected_stdout = "Found IPv6 ULA address"
     assert expected_stdout in detectPlatform.stdout
 
@@ -409,12 +389,10 @@ def test_IPv6_ULA_GUA_test(host):
         },
         host,
     )
-    detectPlatform = host.run(
-        """
+    detectPlatform = host.run("""
     source /opt/pihole/basic-install.sh
     find_IPv6_information
-    """
-    )
+    """)
     expected_stdout = "Found IPv6 ULA address"
     assert expected_stdout in detectPlatform.stdout
 
@@ -425,14 +403,10 @@ def test_validate_ip(host):
|
|||||||
"""
|
"""
|
||||||
|
|
||||||
def test_address(addr, success=True):
|
def test_address(addr, success=True):
|
||||||
output = host.run(
|
output = host.run("""
|
||||||
"""
|
|
||||||
source /opt/pihole/basic-install.sh
|
source /opt/pihole/basic-install.sh
|
||||||
valid_ip "{addr}"
|
valid_ip "{addr}"
|
||||||
""".format(
|
""".format(addr=addr))
|
||||||
addr=addr
|
|
||||||
)
|
|
||||||
)
|
|
||||||
|
|
||||||
assert output.rc == 0 if success else 1
|
assert output.rc == 0 if success else 1
|
||||||
|
|
||||||
@@ -467,15 +441,13 @@ def test_validate_ip(host):
|
|||||||
def test_package_manager_has_pihole_deps(host):
|
def test_package_manager_has_pihole_deps(host):
|
||||||
"""Confirms OS is able to install the required packages for Pi-hole"""
|
"""Confirms OS is able to install the required packages for Pi-hole"""
|
||||||
mock_command("dialog", {"*": ("", "0")}, host)
|
mock_command("dialog", {"*": ("", "0")}, host)
|
||||||
output = host.run(
|
output = host.run("""
|
||||||
"""
|
|
||||||
source /opt/pihole/basic-install.sh
|
source /opt/pihole/basic-install.sh
|
||||||
package_manager_detect
|
package_manager_detect
|
||||||
update_package_cache
|
update_package_cache
|
||||||
build_dependency_package
|
build_dependency_package
|
||||||
install_dependent_packages
|
install_dependent_packages
|
||||||
"""
|
""")
|
||||||
)
|
|
||||||
|
|
||||||
assert "No package" not in output.stdout
|
assert "No package" not in output.stdout
|
||||||
assert output.rc == 0
|
assert output.rc == 0
|
||||||
@@ -484,21 +456,17 @@ def test_package_manager_has_pihole_deps(host):
|
|||||||
def test_meta_package_uninstall(host):
|
def test_meta_package_uninstall(host):
|
||||||
"""Confirms OS is able to install and uninstall the Pi-hole meta package"""
|
"""Confirms OS is able to install and uninstall the Pi-hole meta package"""
|
||||||
mock_command("dialog", {"*": ("", "0")}, host)
|
mock_command("dialog", {"*": ("", "0")}, host)
|
||||||
install = host.run(
|
install = host.run("""
|
||||||
"""
|
|
||||||
source /opt/pihole/basic-install.sh
|
source /opt/pihole/basic-install.sh
|
||||||
package_manager_detect
|
package_manager_detect
|
||||||
update_package_cache
|
update_package_cache
|
||||||
build_dependency_package
|
build_dependency_package
|
||||||
install_dependent_packages
|
install_dependent_packages
|
||||||
"""
|
""")
|
||||||
)
|
|
||||||
assert install.rc == 0
|
assert install.rc == 0
|
||||||
|
|
||||||
uninstall = host.run(
|
uninstall = host.run("""
|
||||||
"""
|
|
||||||
source /opt/pihole/uninstall.sh
|
source /opt/pihole/uninstall.sh
|
||||||
removeMetaPackage
|
removeMetaPackage
|
||||||
"""
|
""")
|
||||||
)
|
|
||||||
assert uninstall.rc == 0
|
assert uninstall.rc == 0
|
||||||
|
@@ -1,31 +1,25 @@
 def test_key_val_replacement_works(host):
     """Confirms addOrEditKeyValPair either adds or replaces a key value pair in a given file"""
-    host.run(
-        """
+    host.run("""
     source /opt/pihole/utils.sh
     addOrEditKeyValPair "./testoutput" "KEY_ONE" "value1"
     addOrEditKeyValPair "./testoutput" "KEY_TWO" "value2"
     addOrEditKeyValPair "./testoutput" "KEY_ONE" "value3"
     addOrEditKeyValPair "./testoutput" "KEY_FOUR" "value4"
-    """
-    )
-    output = host.run(
-        """
+    """)
+    output = host.run("""
     cat ./testoutput
-    """
-    )
+    """)
     expected_stdout = "KEY_ONE=value3\nKEY_TWO=value2\nKEY_FOUR=value4\n"
     assert expected_stdout == output.stdout


 def test_getFTLPID_default(host):
     """Confirms getFTLPID returns the default value if FTL is not running"""
-    output = host.run(
-        """
+    output = host.run("""
     source /opt/pihole/utils.sh
     getFTLPID
-    """
-    )
+    """)
     expected_stdout = "-1\n"
     assert expected_stdout == output.stdout

@@ -36,8 +30,7 @@ def test_setFTLConfigValue_getFTLConfigValue(host):
     Requires FTL to be installed, so we do that first
     (taken from test_FTL_development_binary_installed_and_responsive_no_errors)
     """
-    host.run(
-        """
+    host.run("""
     source /opt/pihole/basic-install.sh
     create_pihole_user
     funcOutput=$(get_binary_name)
@@ -45,15 +38,12 @@ def test_setFTLConfigValue_getFTLConfigValue(host):
     binary="pihole-FTL${funcOutput##*pihole-FTL}"
     theRest="${funcOutput%pihole-FTL*}"
     FTLdetect "${binary}" "${theRest}"
-    """
-    )
+    """)

-    output = host.run(
-        """
+    output = host.run("""
     source /opt/pihole/utils.sh
     setFTLConfigValue "dns.upstreams" '["9.9.9.9"]' > /dev/null
     getFTLConfigValue "dns.upstreams"
-    """
-    )
+    """)

     assert "[ 9.9.9.9 ]" in output.stdout
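The key-value test in the hunk above asserts that re-adding `KEY_ONE` replaces its earlier value in place while unseen keys are appended. A minimal Python model of that add-or-replace semantics (the real implementation is the shell function `addOrEditKeyValPair` in `utils.sh`; this sketch only mirrors the behavior the test asserts):

```python
# Model of addOrEditKeyValPair's semantics: replace an existing KEY=VALUE
# line in place, otherwise append a new one at the end of the file.
def add_or_edit(lines, key, value):
    for i, line in enumerate(lines):
        if line.split("=", 1)[0] == key:
            lines[i] = f"{key}={value}"  # key exists: replace in place
            return lines
    lines.append(f"{key}={value}")  # key absent: append
    return lines

lines = []
for key, value in [("KEY_ONE", "value1"), ("KEY_TWO", "value2"),
                   ("KEY_ONE", "value3"), ("KEY_FOUR", "value4")]:
    add_or_edit(lines, key, value)

# Same content the test expects to read back with `cat ./testoutput`:
assert "\n".join(lines) + "\n" == "KEY_ONE=value3\nKEY_TWO=value2\nKEY_FOUR=value4\n"
```

This is why the test's `expected_stdout` lists `KEY_ONE=value3` first: the replacement preserves the key's original position in the file.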
@@ -15,14 +15,10 @@ def mock_selinux_config(state, host):
     # getenforce returns the running state of SELinux
     mock_command("getenforce", {"*": (state.capitalize(), "0")}, host)
     # create mock configuration with desired content
-    host.run(
-        """
+    host.run("""
     mkdir /etc/selinux
     echo "SELINUX={state}" > /etc/selinux/config
-    """.format(
-        state=state.lower()
-    )
-    )
+    """.format(state=state.lower()))


 def test_selinux_enforcing_exit(host):
@@ -30,12 +26,10 @@ def test_selinux_enforcing_exit(host):
     confirms installer prompts to exit when SELinux is Enforcing by default
     """
     mock_selinux_config("enforcing", host)
-    check_selinux = host.run(
-        """
+    check_selinux = host.run("""
     source /opt/pihole/basic-install.sh
     checkSelinux
-    """
-    )
+    """)
     expected_stdout = cross_box + " Current SELinux: enforcing"
     assert expected_stdout in check_selinux.stdout
     expected_stdout = "SELinux Enforcing detected, exiting installer"
@@ -48,12 +42,10 @@ def test_selinux_permissive(host):
     confirms installer continues when SELinux is Permissive
     """
     mock_selinux_config("permissive", host)
-    check_selinux = host.run(
-        """
+    check_selinux = host.run("""
     source /opt/pihole/basic-install.sh
     checkSelinux
-    """
-    )
+    """)
     expected_stdout = tick_box + " Current SELinux: permissive"
     assert expected_stdout in check_selinux.stdout
     assert check_selinux.rc == 0
@@ -64,12 +56,10 @@ def test_selinux_disabled(host):
     confirms installer continues when SELinux is Disabled
     """
     mock_selinux_config("disabled", host)
-    check_selinux = host.run(
-        """
+    check_selinux = host.run("""
     source /opt/pihole/basic-install.sh
     checkSelinux
-    """
-    )
+    """)
     expected_stdout = tick_box + " Current SELinux: disabled"
     assert expected_stdout in check_selinux.stdout
     assert check_selinux.rc == 0
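Every Python hunk above makes the same mechanical change: the triple-quoted shell snippet now opens on the same line as the `host.run(` call and closes with `""")`, rather than spending separate lines on the opening `"""` and the closing `)`. A standalone sketch of the two styles (`run` is a hypothetical stand-in for testinfra's `host.run`, which needs a container fixture; the address is an arbitrary example):

```python
# Stand-in for testinfra's host.run: just return the script string so the
# two call styles can be compared without a container.
def run(script):
    return script

# Old style: opening quotes, closing quotes, and closing paren on their own lines.
output_before = run(
    """
source /opt/pihole/basic-install.sh
valid_ip "{addr}"
""".format(
        addr="192.168.1.1"
    )
)

# New style from the diff: the string opens on the call line and closes
# together with the paren / .format() call.
output_after = run("""
source /opt/pihole/basic-install.sh
valid_ip "{addr}"
""".format(addr="192.168.1.1"))

# Both styles build byte-for-byte identical shell scripts.
assert output_before == output_after
```

Because the string contents are unchanged, the diff is purely a formatting change with no effect on what the tests execute.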

test/tox.alpine_3_21.ini (new file)
@@ -0,0 +1,10 @@
+[tox]
+envlist = py3
+
+[testenv:py3]
+allowlist_externals = docker
+deps = -rrequirements.txt
+setenv =
+    COLUMNS=120
+commands = docker buildx build --load --progress plain -f _alpine_3_21.Dockerfile -t pytest_pihole:test_container ../
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py

test/tox.alpine_3_22.ini (new file)
@@ -0,0 +1,10 @@
+[tox]
+envlist = py3
+
+[testenv:py3]
+allowlist_externals = docker
+deps = -rrequirements.txt
+setenv =
+    COLUMNS=120
+commands = docker buildx build --load --progress plain -f _alpine_3_22.Dockerfile -t pytest_pihole:test_container ../
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py

test/tox.alpine_3_23.ini (new file)
@@ -0,0 +1,10 @@
+[tox]
+envlist = py3
+
+[testenv:py3]
+allowlist_externals = docker
+deps = -rrequirements.txt
+setenv =
+    COLUMNS=120
+commands = docker buildx build --load --progress plain -f _alpine_3_23.Dockerfile -t pytest_pihole:test_container ../
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py

test/tox.debian_13.ini (new file)
@@ -0,0 +1,10 @@
+[tox]
+envlist = py3
+
+[testenv:py3]
+allowlist_externals = docker
+deps = -rrequirements.txt
+setenv =
+    COLUMNS=120
+commands = docker buildx build --load --progress plain -f _debian_13.Dockerfile -t pytest_pihole:test_container ../
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py

test/tox.fedora_43.ini (new file)
@@ -0,0 +1,10 @@
+[tox]
+envlist = py3
+
+[testenv]
+allowlist_externals = docker
+deps = -rrequirements.txt
+setenv =
+    COLUMNS=120
+commands = docker buildx build --load --progress plain -f _fedora_43.Dockerfile -t pytest_pihole:test_container ../
+    pytest {posargs:-vv -n auto} ./test_any_automated_install.py ./test_any_utils.py ./test_centos_fedora_common_support.py