curl -s "http://suicidegirls.com/members/username/blog/page[x-y]/" | sed -n '/timestamp/,/blogContentEdit/p' | html2text > path/to/save/filename
Where: username is your SG user name (obviously); x is the first page of your blog, i.e. 1 (the most recent posts); and y is the last page number (mine was 50).
So the actual command I used, as an example, was:
curl -s "http://suicidegirls.com/members/Rook/blog/page[1-50]/" | sed -n '/timestamp/,/blogContentEdit/p' | html2text > /home/rook/Documents/SG_Blog.txt
Some caveats, however:
1 - You need to have curl and html2text installed on your system. Without curl you won't be able to scrape the pages; without html2text you'll get an almost unreadable mess of HTML mark-up (I suppose you could save that to an HTML file, open it in a browser, and scrape it manually). There's an install one-liner after this list.
2 - Your blog needs to be public (although you might be able to get around that with curl's username and password switches; I don't know, I didn't try it, but there's a sketch of what that might look like after this list).
3 - The output file will need a little manual clean-up, unless you like spending as much time fine-tuning a sed or awk command as it would take to do it the old-fashioned way (though the quick blank-line squeeze after this list helps).
4 - This command does not grab the comments on your posts. You'll get a count of comments for each post, but there's no way to expand every post to expose your comments' source for the command to scrape. Sorry about that.
5 - If anyone knows a better/cleaner way of doing this, I'd love to hear from you!
6 - For you people in the iCult: you could probably get this working on your shiny new MacBook with a little tinkering to get the required programs in place (see the last example below).
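
On installing the tools from caveat 1: if you're on a Debian or Ubuntu type system, one command should cover both (package names may differ on other distros):

sudo apt-get install curl html2text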
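
On caveat 2: if you do want to try the authenticated route, curl's -u switch is the one to look at. Purely as a sketch, and assuming the site accepts plain HTTP authentication (it may well want a cookie-based login instead, in which case this won't help), it would look something like:

curl -s -u "yourusername:yourpassword" "http://suicidegirls.com/members/username/blog/page[1-50]/" | sed -n '/timestamp/,/blogContentEdit/p' | html2text > path/to/save/filename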
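
On caveat 3: a quick and dirty first pass is to squeeze the runs of blank lines html2text leaves behind down to single blank lines. cat's -s switch does exactly that (adjust the paths to suit your own output file):

cat -s /home/rook/Documents/SG_Blog.txt > /home/rook/Documents/SG_Blog_clean.txt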
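
And for caveat 6: curl already ships with OS X, so the only missing piece is html2text, which Homebrew can supply (assuming you have Homebrew installed and the formula is available):

brew install html2text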