This server could not prove that it is downloads.vyos.io; its security certificate is from *.dnsmadeeasy.com. This may be caused by a misconfiguration or an attacker intercepting your connection.
But it gets the XML error that xrobau posted above (which is the same error I get if I type garbage into that path, so I presume a 404 rather than a permissions issue).
I’ll just change my script to check the downloads page for the latest URL for now.
Sure, it possibly breaks all sorts of style and best-practice rules, but it works for me. By default it just echoes the URL (for testing); switch the command assignment if you want it to actually add the image.
#!/usr/bin/env python3
import os
import re
import requests

url = "https://vyos.net/get/nightly-builds/"
# Grab the first S3-hosted .iso link quoted in the page source
regex = r'"(https://s3\.[^"]*?\.iso)"'
command = "echo This is the ISO URL: "
#command = "/opt/vyatta/bin/vyatta-op-cmd-wrapper add system image"

try:
    contents = requests.get(url, timeout=30)
    if contents:
        match = re.search(regex, contents.text, re.I)
        if match:
            os.system(command + " " + match.group(1))
        else:
            print("Error: Cannot find ISO URL")
    else:
        print("Error: Cannot retrieve web page")
except requests.RequestException as exc:
    print("Error: Cannot fetch page:", exc)
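If you want to sanity-check the regex without hitting the site, you can run it against a snippet of page source. A minimal sketch, where the S3 URL is made up for illustration, not a real nightly build:

```python
import re

# Hypothetical fragment of the nightly-builds page; this URL is invented
sample = '<a href="https://s3.example.amazonaws.com/vyos-rolling-amd64.iso">ISO</a>'

regex = r'"(https://s3\.[^"]*?\.iso)"'
match = re.search(regex, sample, re.I)
print(match.group(1))
# → https://s3.example.amazonaws.com/vyos-rolling-amd64.iso
```

The lazy `[^"]*?` keeps the match from running past the closing quote if several links appear on one line.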
I’m always reluctant to do anything that scrapes a web page, partly because of the load it puts on the site. But this will only run once per update anyway, just as if I’d browsed to the page myself and copied the latest URL.