Repository with examples of downloading HTML from a URL

This repository contains a couple of examples of how to download HTML from a URL
using two different Perl modules:

* LWP::UserAgent
* Mojo::UserAgent
Both of these approaches require knowing the Session ID of a valid cookie for
the site you are trying to download from.  To find the Session ID, first log
into the site in either Chrome or Firefox and then launch the "Developer Tools"
or "Web Developer Tools".  Pressing `Ctrl+Shift+I` should work in both browsers.
From there, go to "Storage", then "Cookies", and click on the cookie for the URL
you want to download.  Find the entry whose "Name" equals `session` and note its
"Value" field, which should be something like:
`90ipwx7093le8uu5jjaiva12mdhdfftyb8ig44eydhvimjva9roqwmiutpwzphekeje82qr6469pt719r86gmnp2z5ja4sxjbokvyj8pilaweo17tdcvhidayvzyt4yc`
You will need to provide that value in the hash reference for the `cookie_jar`.
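A minimal sketch of the LWP::UserAgent approach might look like the following. The URL and session value are placeholders you would substitute with your own; passing an empty hash reference to `cookie_jar` makes LWP create an `HTTP::Cookies` jar, which we then populate with the session cookie.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use LWP::UserAgent;

# Placeholders -- substitute your own values:
my $url     = 'https://example.com/page';   # page you want to download
my $session = 'YOUR_SESSION_ID';            # "Value" of the `session` cookie

# The hash reference passed to cookie_jar is used to construct an
# HTTP::Cookies object; add the session cookie to it.
my $ua = LWP::UserAgent->new(cookie_jar => {});
$ua->cookie_jar->set_cookie(
    0, 'session', $session,     # version, name, value
    '/', 'example.com', undef,  # path, domain, port
    0, 1, 86400, 0,             # path_spec, secure, maxage, discard
);

my $res = $ua->get($url);
die $res->status_line, "\n" unless $res->is_success;
print $res->decoded_content;
```

Note that the cookie's domain must match the host in `$url`, or the jar will not send it with the request.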
Both of these examples attempt to replicate the `curl` command:

`curl --cookie "session=90ip...pt71" -o "output.txt" <URL>`
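For comparison, a Mojo::UserAgent sketch can send the cookie as a plain header, much as `curl --cookie` does. Again, the URL and session value below are placeholders, not values from this repository.

```perl
#!/usr/bin/perl
use strict;
use warnings;
use Mojo::UserAgent;

# Placeholders -- substitute your own values:
my $url     = 'https://example.com/page';
my $session = 'YOUR_SESSION_ID';

# Mojo::UserAgent accepts a hash reference of extra headers with the
# request, so the cookie is sent exactly as `curl --cookie` would send it.
my $ua = Mojo::UserAgent->new;
my $tx = $ua->get($url => {Cookie => "session=$session"});

# result() dies on connection errors and returns the response otherwise.
print $tx->result->body;
```

Redirect the output to a file (or use `Mojo::File`) to match curl's `-o "output.txt"` behaviour.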