Vercel Update and Metadata fixes #316

Merged · 2 commits · Jun 17, 2024
2 changes: 1 addition & 1 deletion README.md
@@ -95,7 +95,7 @@ in following locations.
|---|---|---
| `PROTONVPN_SERVER` | REQUIRED | (String) ProtonVPN server to connect to.
| `WIREGUARD_PRIVATE_KEY` | Required if not specified via mount or secrets | (String) Wireguard Private key
-| `IPCHECK_URL` | https://protonwire-api.vercel.app/v1/client/ip | (String) URL to check client IP.
+| `IPCHECK_URL` | https://icanhazip.com/ | (String) URL to check client IP.
| `IPCHECK_INTERVAL` | `60` | (Integer) Interval between internal health-checks in seconds. Set this to `0` to disable IP checks.
| `SKIP_DNS_CONFIG` | false | (Boolean) Set this to `1` or `true` to skip configuring DNS.
| `KILL_SWITCH` | false | (Boolean) Enable KillSwitch (Experimental)
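The boolean-style variables above (`SKIP_DNS_CONFIG`, `KILL_SWITCH`) accept `1` or `true`. A minimal sketch of how such values might be normalized; the helper name `is_enabled` is illustrative and is not the script's actual parser:

```shell
#!/usr/bin/env bash
# Hypothetical normalizer for boolean-style environment variables.
# The real protonwire script may parse these differently.
is_enabled() {
    # Lowercase the value, then accept only "1" or "true".
    case "${1,,}" in
        1 | true) return 0 ;;
        *) return 1 ;;
    esac
}

SKIP_DNS_CONFIG="${SKIP_DNS_CONFIG:-false}"
if is_enabled "${SKIP_DNS_CONFIG}"; then
    echo "Skipping DNS configuration"
else
    echo "Configuring DNS"  # printed when SKIP_DNS_CONFIG is unset
fi
```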
2 changes: 1 addition & 1 deletion docs/faq.md
@@ -126,7 +126,7 @@ Also these addresses cannot belong to any __other__ interfaces on the machine/co
## IP check endpoint URLs

You can use any of the following services for verification. They **MUST RETURN ONLY your public IP address**.
-* https://protonwire-api.vercel.app/v1/client/ip (default)
+* https://icanhazip.com/ (default)
* https://icanhazip.com/
* https://checkip.amazonaws.com/
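Since these endpoints must return only the public IP, a response can be sanity-checked before use. A minimal sketch (IPv4 only; the function name is illustrative and not part of protonwire):

```shell
#!/usr/bin/env bash
# Hypothetical check that a response body is a bare IPv4 address
# (assumes any trailing newline was already stripped by the caller).
is_bare_ipv4() {
    [[ "$1" =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}$ ]]
}

is_bare_ipv4 "203.0.113.42" && echo "looks like a bare IP"
is_bare_ipv4 "<html>203.0.113.42</html>" || echo "rejected: extra markup"
```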

6 changes: 3 additions & 3 deletions docs/help.md
@@ -14,10 +14,10 @@ Script should take care of that by adding IPs of servers in the same pool to list
[ERROR] Failed to verify connection!
```

-## Unable to verify connection/resolve DNS at https://protonwire-api.vercel.app/v1/client/ip
+## Unable to verify connection/resolve DNS at https://protonwire-api.vercel.app/v1/client/ip or https://icanhazip.com/

-It appears that ProtonVPN DNS servers are blocking connection to `https://protonwire-api.vercel.app/v1/client/ip` when the Netshield option is set to `Block malware, ads and trackers`.
-This endpoint simply redirects to a valid IP-check endpoint that works for most users, currently set to `https://icanhazip.com`. It is [controlled by Cloudflare and is hosted on Cloudflare Workers](https://major.io/p/a-new-future-for-icanhazip/). It is not a malware/tracker. Please ask Proton Support to either remove it from their blocklist, use another `IPCHECK_URL` endpoint, or set the Netshield option to `Block malware only`.
+It appears that ProtonVPN DNS servers are blocking connections to `https://protonwire-api.vercel.app/v1/client/ip` and `https://icanhazip.com/` when the Netshield option is set to `Block malware, ads and trackers`.
+Neither is a malware/tracker. Please ask Proton Support to either remove them from their blocklist, use another `IPCHECK_URL` endpoint, or set the Netshield option to `Block malware only`.
The following `IPCHECK_URL` endpoints can be used:

- `https://checkip.amazonaws.com/` (may not work with IPv6 servers)
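When the default endpoint is blocked, a script could probe candidates in order and keep the first that returns a bare IP. A hedged sketch, kept offline for demonstration: `pick_ipcheck_url` is not a real protonwire function, and `fetch` is a stand-in you would replace with something like `curl -sf --max-time 5 "$url"`.

```shell
#!/usr/bin/env bash
# Return the first URL whose (mocked) response is a bare IPv4 address.
pick_ipcheck_url() {
    local url response
    for url in "$@"; do
        response="$(fetch "$url")" || continue
        if [[ "$response" =~ ^[0-9]{1,3}(\.[0-9]{1,3}){3}$ ]]; then
            printf '%s\n' "$url"
            return 0
        fi
    done
    return 1
}

# Offline demo: a fake fetcher that only "works" for one endpoint.
fetch() {
    if [[ "$1" == "https://checkip.amazonaws.com/" ]]; then
        echo "198.51.100.7"
    else
        return 1
    fi
}

pick_ipcheck_url "https://icanhazip.com/" "https://checkip.amazonaws.com/"
# → https://checkip.amazonaws.com/
```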
29 changes: 19 additions & 10 deletions protonwire
@@ -498,7 +498,7 @@ function __run_checks() {
done
fi

-if [[ $IPCHECK_URL != "https://protonwire-api.vercel.app/v1/client/ip" ]]; then
+if [[ $IPCHECK_URL != "https://icanhazip.com/" ]]; then
if ! __is_valid_ipcheck_url; then
((++errs))
fi
@@ -1562,12 +1562,16 @@ function __protonvpn_pre_connect_get_endpoints_and_keys() {
<<<"${__PROTONWIRE_SRV_INFO}" 2>/dev/null)
if [[ ${#endpoint_keys[@]} -gt 1 ]]; then
log_warning "Endpoint($endpoint) has multiple pub keys, only using first key"
-__PROTONWIRE_KEY_MAP["$endpoint"]="${endpoint_keys[0]}"
+__PROTONWIRE_KEY_MAP["${endpoint}"]="${endpoint_keys[0]}"
elif [[ ${#endpoint_keys[@]} -eq 1 ]]; then
-log_debug "Endpoint($endpoint) has pubkey - ${endpoint_keys[0]}"
-__PROTONWIRE_KEY_MAP["$endpoint"]="${endpoint_keys[0]}"
+if [[ -z ${endpoint} ]]; then
+log_error "Endpoint IP is empty!"
+else
+log_debug "Endpoint($endpoint) has pubkey - ${endpoint_keys[0]}"
+fi
+__PROTONWIRE_KEY_MAP["${endpoint}"]="${endpoint_keys[0]}"
else
-log_error "Endpoint($endpoint) for server ${PROTONVPN_SERVER} returned no pubkeys"
+log_error "Endpoint(${endpoint}) for server ${PROTONVPN_SERVER} returned no pubkeys"
return 1
fi
done
@@ -2379,12 +2383,17 @@ function server_lookup_cmd() {
'[.Nodes[] | select(.Endpoint==$endpoint)] | .[].PublicKey' \
<<<"${__PROTONWIRE_SRV_INFO}" 2>/dev/null)
if [[ ${#endpoint_keys[@]} -gt 1 ]]; then
-log_warning "Endpoint($endpoint) has multiple pub keys, only using first key"
-__PROTONWIRE_KEY_MAP["$endpoint"]="${endpoint_keys[0]}"
+log_warning "Endpoint(${endpoint}) has multiple pub keys, only using first key"
+__PROTONWIRE_KEY_MAP["${endpoint}"]="${endpoint_keys[0]}"
elif [[ ${#endpoint_keys[@]} -eq 1 ]]; then
-__PROTONWIRE_KEY_MAP["$endpoint"]="${endpoint_keys[0]}"
+if [[ -z ${endpoint} ]]; then
+log_error "Endpoint IP is empty!"
+else
+log_debug "Endpoint($endpoint) has pubkey - ${endpoint_keys[0]}"
+fi
+__PROTONWIRE_KEY_MAP["${endpoint}"]="${endpoint_keys[0]}"
else
-log_error "Endpoint($endpoint) for server ${PROTONVPN_SERVER} returned no pubkeys"
+log_error "Endpoint(${endpoint}) for server ${PROTONVPN_SERVER} returned no pubkeys"
return 1
fi
done
@@ -2655,7 +2664,7 @@ function main() {
fi

if [[ -z ${IPCHECK_URL} ]]; then
-IPCHECK_URL="https://protonwire-api.vercel.app/v1/client/ip"
+IPCHECK_URL="https://icanhazip.com/"
log_variable "IPCHECK_URL"
fi

181 changes: 80 additions & 101 deletions scripts/generate-server-metadata
@@ -798,94 +798,99 @@ def write_metadata(
)
exit_ips.append(__srv["ExitIP"])

logical_node = LogicalNode(
name=logical_server_name,
dns_name=logical_server_domain,
exit_country=logical_server["ExitCountry"],
tier=logical_server["Tier"],
p2p="P2P" in features,
streaming="STREAMING" in features,
secure_core="SECURE_CORE" in features,
tor="TOR" in features,
nodes=servers,
exit_ips=sorted(exit_ips),
server_status=logical_server_status,
)

logical_node_lowercase_name = LogicalNode(
name=logical_server_name.lower(),
dns_name=logical_server_domain,
exit_country=logical_server["ExitCountry"],
tier=logical_server["Tier"],
p2p="P2P" in features,
streaming="STREAMING" in features,
secure_core="SECURE_CORE" in features,
tor="TOR" in features,
nodes=servers,
exit_ips=sorted(exit_ips),
server_status=logical_server_status,
)
# For https://github.com/tprasadtp/protonvpn-docker/issues/304
# This ensures all logical nodes without nodes are ignored.
if len(servers) > 0:
logical_node = LogicalNode(
name=logical_server_name,
dns_name=logical_server_domain,
exit_country=logical_server["ExitCountry"],
tier=logical_server["Tier"],
p2p="P2P" in features,
streaming="STREAMING" in features,
secure_core="SECURE_CORE" in features,
tor="TOR" in features,
nodes=servers,
exit_ips=sorted(exit_ips),
server_status=logical_server_status,
)

logical_node_list.append(logical_node)
try:
logging.debug("Writing(JSON) - %s", logical_node.Name)
logical_node_file_upper = metadata_dir_json_v1_srv / Path(
logical_node.Name.replace("#", "-").upper()
logical_node_lowercase_name = LogicalNode(
name=logical_server_name.lower(),
dns_name=logical_server_domain,
exit_country=logical_server["ExitCountry"],
tier=logical_server["Tier"],
p2p="P2P" in features,
streaming="STREAMING" in features,
secure_core="SECURE_CORE" in features,
tor="TOR" in features,
nodes=servers,
exit_ips=sorted(exit_ips),
server_status=logical_server_status,
)
with open(logical_node_file_upper, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node, indent=2, default=vars))

logical_node_list.append(logical_node)
try:
logging.debug("Writing(JSON) - %s", logical_node.Name)
logical_node_file_upper = metadata_dir_json_v1_srv / Path(
logical_node.Name.replace("#", "-").upper()
)
stat_file_count += 1
logical_node_file_lc = metadata_dir_json_v1_srv / Path(
logical_node_lowercase_name.Name.replace("#", "-")
)
with open(logical_node_file_lc, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node_lowercase_name, indent=2, default=vars))
with open(logical_node_file_upper, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node, indent=2, default=vars))
)
stat_file_count += 1
logical_node_file_lc = metadata_dir_json_v1_srv / Path(
logical_node_lowercase_name.Name.replace("#", "-")
)
stat_file_count += 1
except Exception:
logging.exception(
"Failed to write JSON - %s",
logical_node.Name.replace("#", "-"),
)
sys.exit(1)

try:
logging.debug("Writing(JSON) - %s", logical_node.DNS)
logical_node_file_dns = metadata_dir_json_v1_srv / Path(
logical_node.DNS
)
with open(logical_node_file_dns, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node, indent=2, default=vars))
with open(logical_node_file_lc, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node_lowercase_name, indent=2, default=vars))
)
stat_file_count += 1
except Exception:
logging.exception(
"Failed to write JSON - %s",
logical_node.Name.replace("#", "-"),
)
stat_file_count += 1
except Exception:
logging.exception("Failed to write JSON - %s", logical_node.DNS)
sys.exit(1)
sys.exit(1)

# IP Mappings
for srv_node in logical_node.Nodes:
try:
logical_node_single_endpoint = copy.deepcopy(logical_node)
logical_node_single_endpoint.Nodes = [srv_node]
if len(logical_node_single_endpoint.Nodes) != 1:
logging.error("More than one node found for : %s", srv_node.Endpoint)
logging.debug("Writing(IP) - %s", srv_node.Endpoint)
srv_ip_file_dns = metadata_dir_json_v1_srv / Path(
srv_node.Endpoint
logging.debug("Writing(JSON) - %s", logical_node.DNS)
logical_node_file_dns = metadata_dir_json_v1_srv / Path(
logical_node.DNS
)
with open(srv_ip_file_dns, "w", encoding="utf-8") as f:
with open(logical_node_file_dns, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node_single_endpoint, indent=2, default=vars))
(json.dumps(obj=logical_node, indent=2, default=vars))
)
stat_file_count += 1
except Exception:
logging.exception("Failed to write JSON - %s", srv_node.Endpoint)
logging.exception("Failed to write JSON - %s", logical_node.DNS)
sys.exit(1)

# IP Mappings
for srv_node in logical_node.Nodes:
try:
logical_node_single_endpoint = copy.deepcopy(logical_node)
logical_node_single_endpoint.Nodes = [srv_node]
if len(logical_node_single_endpoint.Nodes) != 1:
logging.error("More than one node found for : %s", srv_node.Endpoint)
logging.debug("Writing(IP) - %s", srv_node.Endpoint)
srv_ip_file_dns = metadata_dir_json_v1_srv / Path(
srv_node.Endpoint
)
with open(srv_ip_file_dns, "w", encoding="utf-8") as f:
f.writelines(
(json.dumps(obj=logical_node_single_endpoint, indent=2, default=vars))
)
stat_file_count += 1
except Exception:
logging.exception("Failed to write JSON - %s", srv_node.Endpoint)
sys.exit(1)
else:
logging.error("No server/endpoint nodes found for logical server: %s", logical_server_name)

if generate_list:
logical_node_list_file = metadata_dir_json_v1 / Path("list")
try:
@@ -915,7 +920,7 @@ def write_metadata(
logging.info("Writing - %s", metadata_ts_file)
metadata_ts = {
"info": "Protonwire - Metadata API",
"repo": "https://github.com/tprasadtp/protonwire",
"repo": "https://github.com/tprasadtp/protonvpn-docker",
"stat": {
"files": stat_file_count,
},
@@ -931,32 +936,6 @@
json.dump(metadata_ts, f, indent=2)
logging.info("Server files - %d", stat_file_count)

# Generate hashes for all the server metadata files.
# There may be duplicates, so merge them into a list.
# Because leaking server names is not desired, each configuration
# file is hashed and the hash is added to the list and written to disk.
# slsa generators then hash the list of hashes and generate slsa provenance for it.
# Other builder stages will pick up the provenance and hash list and merge it into
# a single json file which may be uploaded to api endpoint. This is required
# to avoid race conditions where fetched hash list may not correspond to the provenance.
hash_list_sha256: List[str] = []
logging.info("Generating metadata hashes")
with os.scandir(metadata_dir_json_v1_srv) as items:
for item in items:
if item.is_file():
logging.debug("hashing file - %s", item.path)
with open(item.path, "rb") as f:
sha256_hex_digest = hashlib.sha256(f.read()).hexdigest()
if sha256_hex_digest not in hash_list_sha256:
logging.debug("adding hash(%s) to list", sha256_hex_digest)
hash_list_sha256.append(sha256_hex_digest)
else:
logging.debug("hash(%s) is already in the list", sha256_hex_digest)
hash_list_file = metadata_dir_json_v1_slsa_srv / Path("hash-list")
logging.info("Generating hash list: %s", hash_list_file)
with open(hash_list_file, "w", encoding="utf-8") as f:
f.write('\n'.join(hash_list_sha256))


if __name__ == "__main__":
parser = argparse.ArgumentParser(description=__doc__)