I've known about httrack for more than 15 years. Back then I only used it for trying out tools on Linux. I found it because I wanted to download a whole website; an internet connection was a privilege at the time, at least here in Indonesia.
The funny thing is, I totally can't remember the exact command to actually download all the pages of a website. I had to google a bit harder to find the right one. Below is an example, and I'll put it here as a reference.
httrack https://www.targetwebsite.com/ -v -s0 -F "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/74.0.3729.169 Safari/537.36" -%c2 -%B -%s -%v -A0
*Replace targetwebsite.com with the actual website domain.
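If I'm reading the httrack options right, the important bits in that command are: -v for verbose output, -s0 to not follow robots.txt rules, -F to set the user-agent string sent to the server, -%c2 to cap connections at 2 per second, and -A0 to lift the transfer rate limit. If you just want a plain mirror into a local folder with the defaults, a more minimal sketch (the ./mirror-of-site folder name is just my placeholder) would be something like:

httrack "https://www.targetwebsite.com/" -O ./mirror-of-site -v

Here -O sets where the mirror, logs, and cache end up.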
Why would I download a whole website in this era? Well, I want to take down one of my websites but still keep a local archive of it.