I want to save this fantastic website (a project that one of my lecturers and tutors was involved in) to my computer just in case they ever decide to take it down from the internet. I naively tried just using the “Save Page As…” option on Firefox, but all it saved was the single page I was viewing. This website has a whole stack of pages I want to save all at once…can it be done?
Thanks for your help John.
Andrew
You want HTTrack. It’s freeware. Does exactly what you want. It’s also one of the programs listed in The 46 Best-ever Freeware Utilities.
HTTrack has a lot of options to set up when copying a web site. It will probably take you a couple of tries to get the right set of options to fully copy the site. Keep an eye on what it’s downloading: if you set the options wrong it can copy a lot of excess pages and download many hundreds of MB more than you expected.
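As a rough sketch, the same kind of mirror can be set up from HTTrack’s command line (the URL and the filter pattern here are placeholders, not the site from this thread; the Windows GUI exposes the same options through its dialogs):

```shell
# Mirror the site into ./mirror, and use a filter ("+*.example.com/*")
# so the crawl stays on the site's own domain instead of wandering off
# and downloading far more than you expected.
httrack "https://www.example.com/" -O ./mirror "+*.example.com/*" -v
```

The filter is the part worth getting right first; it is what keeps the download from ballooning.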
Re: Saving an entire website to my computer?
I knew I could count on you, thanks once again. I can see this will take a little bit of fiddling around to get what I want but it looks like a great program.
Andrew
Success…this is absolutely brilliant, thanks so much John.
I told you it was exactly what you wanted.
It is a handy program for snagging bits off of web pages.
Andrew how about saving Unicycling Tips & Tutorials as well…seeing as it’s using free web design programmes etc, and given the whole nature of computers and the internet, you never know what could happen. It would be nice to have a backup because we have A LOT of hard work on there now and it would really be a shame (that’s a bit of an understatement) to lose it all…
I’ve been trying to think of a good way for a while…would this work?
I guess so, but I don’t know how it all works with however you make the website…would all the files be split up the same way?
Andrew
It’ll do it. The pages hosted at different sites will get put in different directories. Everything will be organized like it is online.
Read through the help documents here: HTTrack Documentation. The program is complex in what it does and to use it right you need to know what it’s doing.
One option you’ll want to change is the footer that HTTrack adds to the HTML of every page it downloads. By default it says something to the effect of:
<!-- Mirrored from www.example.com by HTTrack Website Copier/3.x [XR&CO’2004], Thu, 17 Mar 2005 10:13:11 GMT -->
If you’re archiving your own site you may or may not want that added. If you ever have to restore from the archive you’ll need to manually strip out that footer before re-uploading everything. On the other hand, the footer does let you know when you made the archive.
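If you do end up needing to strip the footer before re-uploading, it’s easy to script. A minimal sketch (the helper name and the sample page are made up for illustration; you’d loop this over every `.html` file in the mirror):

```python
import re

# Matches the comment HTTrack appends, e.g.:
#   <!-- Mirrored from www.example.com by HTTrack Website Copier/3.x ... -->
# DOTALL lets the match span a line break if the comment wraps.
HTTRACK_COMMENT = re.compile(
    r"\s*<!--\s*Mirrored from .*?by HTTrack Website Copier.*?-->",
    re.DOTALL,
)

def strip_httrack_footer(html: str) -> str:
    """Return the page HTML with any HTTrack mirror comment removed."""
    return HTTRACK_COMMENT.sub("", html)

# Hypothetical sample page with the footer appended:
page = (
    "<html><body>Hello</body></html>\n"
    "<!-- Mirrored from www.example.com by HTTrack Website Copier/3.x "
    "[XR&CO'2004], Thu, 17 Mar 2005 10:13:11 GMT -->"
)
print(strip_httrack_footer(page))  # prints the page without the comment
```

Run it over the whole archive with something like `pathlib.Path("mirror").rglob("*.html")`, rewriting each file in place.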