You come to a web page that has links to a bunch of pictures, or videos, or documents that you want to download. How do you go about it? Personally, I use wget for anything that will take a while to download. It's wonderful: it accepts http, https, ftp and so on, has options to resume and retry, and it never fails.
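For anything sizeable, the resume and retry options are what make it dependable. A typical invocation (the url is just a placeholder):

```bash
# -c resumes a partial file, -t 0 keeps retrying until the transfer succeeds
wget -c -t 0 http://example.com/big/video.avi
```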
I could just use Firefox, and if it's small files then I do just that: click all the links in one fell swoop, then let them all download on their own. But if it's larger files then it's not practical. You don't want to download 20 videos of 200mb each in parallel, that's no good. And Firefox doesn't have a resume function (there is a button but it doesn't do anything :rolleyes: ), so if Firefox crashes within the next few hours (which it probably will), you'll likely end up with not even one file successfully downloaded.

So there is a fallback option: copy all the links from Firefox and queue them up for wget. Right click in document, Copy Link Location, right click in terminal window. This is painful, and I last about 4-5 links before I get sick of it, download the web page and start parsing it instead. That always works, but I have to rig up a new chain of grep, sed, tr and xargs wget (or a for loop) for every page; I can never reuse it, so the effort doesn't go a long way.
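To give an idea of what I mean, here is a throwaway chain of that sort, assuming the page uses plain absolute href links (the url and the file extension are placeholders):

```bash
# scrape the page, pull out the .avi links, queue them up for wget
wget -q -O - http://example.com/videos/ \
  | grep -o 'href="[^"]*\.avi"' \
  | sed 's/^href="//;s/"$//' \
  | xargs -n1 wget -c
```

And next week, for the next page, none of it carries over.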
I could use a Firefox extension for this; there are some for this purpose. But some of them don't work, some only work for certain types of files, some still require a fair amount of manual effort to pick the right urls, and some don't support resuming a download after Firefox crashes.
Not to mention that every new extension slows down Firefox and adds another upgrade cycle you have to worry about.
Want to run Firefox 3? Oh sorry, your download extension isn't compatible. Most limiting of all, these extensions aren't Unix-y. They assume they know what you want, and they take you from start to end. There's no way you can plug in grep somewhere in the chain to filter out things you don't want, for example. So the problem is eventually reduced to: how can I still use wget? Well, browsers being as lenient as they are, it's difficult to guarantee that you can parse every page, but you can at least try.
spiderfetch, whose name describes its function (spider a page for links, then fetch them), attacks this common scenario. You find a page that links to a bunch of media files. It will download the page and find all the links (as best it can), then download the files one by one. Internally it uses wget, so you still get the desired functionality and the familiar output.
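In its simplest form the invocation could look something like this; the argument order here is my reading of it, so treat this as a sketch rather than the definitive syntax:

```bash
# spider the index page, then fetch every .ogg file it links to
# (the page url is a placeholder; the pattern is a Ruby regex)
./spiderfetch.rb http://example.com/lectures/ '.*\.ogg'
```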
If the urls on the page require additional post-processing, say they are .asx files you have to download one by one, grab the mms:// url inside, and mplayer -dumpstream, you at least get the first half of the chain. (Unlikely scenario? If you wanted to download these freely available lectures on compilers from the University of Washington, you would have little choice.) You could even chain spiderfetch to do both: first spider the index page and download all the .asx files, then spider each .asx file for the mms:// url, print it to the screen and let mplayer take it from there.
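A sketch of that chain, with the caveat that the way -useindex and -dump combine here (both are described in the summary below) is my assumption:

```bash
# pass 1: fetch all the .asx stubs linked from the index page (placeholder url)
./spiderfetch.rb http://example.com/talks/ '.*\.asx'

# pass 2: spider each local stub for its mms:// url, then let mplayer
# capture the stream to disk
for stub in *.asx; do
  url=$(./spiderfetch.rb -useindex "$stub" -dump 'mms://.*' | head -n1)
  mplayer -dumpstream -dumpfile "${stub%.asx}.wmv" "$url"
done
```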
In summary, spiderfetch:

* Spiders the page for anything that looks like a url. Not guaranteed to find every last one, although the matching is pretty lenient.
* Can filter the urls with a regular expression (keep in mind this is Ruby's regex, so .* to match any characters, not * as in file globbing, (true|false) for choice, and so on).
* Downloads all the urls serially, or just outputs them to screen (with -dump) if you want to filter/sort/etc.
* Can use an existing index file (with -useindex), but then if there are relative links among the urls, they will need post-processing, because the path of the index page on the server is not known once it has been stored locally.
* Uses wget internally and relays its output as well. Thus: it does not re-download completed files, it resumes downloads, and it retries interrupted transfers. The semantics are consistent with `for url in urls; do wget $url; done`.

A couple of caveats: if you can't match a certain url, you're still stuck with grep and sed. And if you have to authenticate yourself somehow in the browser to be able to download your media files, spiderfetch won't be able to download them (as with wget in general).
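The -dump mode is what keeps it Unix-y: print the urls, plug grep back into the chain, and do the fetching yourself. The option name comes from the list above; the rest of the pipeline is just an illustration:

```bash
# print the matching urls, drop the thumbnails, fetch the rest with wget
./spiderfetch.rb -dump http://example.com/gallery/ '.*\.jpg' \
  | grep -v thumb \
  | sort -u \
  | xargs -n1 wget -c
```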
To make the use a bit clearer, let's see some concrete examples. Recipe: download the 2008 lectures from Fosdem by spidering the index page for the videos. If the urls are ftp, or the web server uses simple authentication, you can still post-process them to include the login; the same goes for http.
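Something along these lines, where the Fosdem index url and the patterns are stand-ins rather than the exact locations, and the credential rewrite assumes the server accepts user:password@ urls:

```bash
# recipe: download the 2008 Fosdem lectures (index url is a stand-in)
./spiderfetch.rb http://ftp.belnet.be/mirror/FOSDEM/2008/ '.*\.ogg'

# ftp or simple-auth http: dump the urls, splice in the credentials,
# then hand them over to wget (user:password is a placeholder, obviously)
./spiderfetch.rb -dump ftp://example.com/pub/lectures/ '.*\.avi' \
  | sed 's|://|://user:password@|' \
  | xargs -n1 wget -c
```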