quick n dirty script to backup contents of saved posts #52
I made a short, ugly bash script that creates a new HTML file based on the contents of export_saved.html. It parses out the links to saved posts, fetches each post's data in JSON format, writes the data to a local directory, then adds an href to the local JSON copy right after the href to the actual reddit URL. It probably isn't the best way to do this, since it should probably happen inside the actual python script itself (not to mention putting the backed-up data inside the main HTML file instead of a separate directory), but it does the job. A trivial edit that might make the backup more comprehensive would be to change the cURL request for the JSON data into a recursive wget with a depth of 1; that ought to grab any images or other files a post contains, but it might also blow up the backup size. It would be pretty easy to do tho.
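For anyone curious, the core of the approach could look roughly like the sketch below. This is a minimal illustration, not the actual script from the comment: the file names (`export_saved.html`, `backup/`) and the inline sample HTML are assumptions, and the real network fetch is left commented out. It relies on reddit serving JSON for any permalink with `.json` appended.

```shell
set -eu

mkdir -p backup

# sample input standing in for the real export_saved.html (assumption:
# the export is a flat HTML list of <a href="..."> links to saved posts)
cat > export_saved.html <<'EOF'
<ul>
<li><a href="https://www.reddit.com/r/bash/comments/abc123/some_post/">some post</a></li>
</ul>
EOF

# pull each reddit permalink out of the export file
grep -o 'https://www\.reddit\.com/r/[^"]*' export_saved.html | while read -r url; do
    # build a filesystem-safe name for the local JSON copy
    name=$(printf '%s' "$url" | tr -c 'A-Za-z0-9' '_')
    echo "would fetch: ${url%/}.json -> backup/${name}.json"
    # curl -s "${url%/}.json" -o "backup/${name}.json"   # the real fetch
done
```

Swapping the commented-out `curl` line for something like `wget -r -l 1 -P backup "$url"` is the depth-1 recursive variant mentioned above, which would also pull in images and other linked files at the cost of a much larger backup.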