Allow downloading more content from a webpage and index it #215
Comments
Commits referencing this issue (…p#215):

* Added a new table that contains the information about assets for link bookmarks; created migration code that transfers the existing data into the new table
* Removed the old asset columns from the database; updated the UI to use the data from the linkBookmarkAssets array
* Improved the mapping to be more easily extensible; extracted out some duplicated code
* Generalized the assets table so it is not tied specifically to links
* Fixed migrations post merge
* Fixed missing asset IDs in the getBookmarks call

Co-authored-by: MohamedBassem <me@mbassem.com>
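The migration steps listed above (new assets table, data transfer, old columns dropped) can be sketched roughly as follows. This is an illustrative sketch only: the table and column names are hypothetical, not Hoarder's actual schema.

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Starting point: asset references stored as columns on the links table
# (hypothetical names, for illustration).
conn.executescript("""
CREATE TABLE bookmarkLinks (
    id TEXT PRIMARY KEY,
    url TEXT NOT NULL,
    screenshotAssetId TEXT,
    fullPageArchiveAssetId TEXT
);
INSERT INTO bookmarkLinks VALUES ('b1', 'https://example.com', 's1', 'a1');
""")

# 1. New generalized assets table, not tied specifically to links.
conn.execute("""
CREATE TABLE assets (
    id TEXT PRIMARY KEY,
    bookmarkId TEXT NOT NULL,
    assetType TEXT NOT NULL
)
""")

# 2. Transfer the existing per-column data into the new table.
conn.execute("""
INSERT INTO assets (id, bookmarkId, assetType)
SELECT screenshotAssetId, id, 'screenshot' FROM bookmarkLinks
WHERE screenshotAssetId IS NOT NULL
""")
conn.execute("""
INSERT INTO assets (id, bookmarkId, assetType)
SELECT fullPageArchiveAssetId, id, 'fullPageArchive' FROM bookmarkLinks
WHERE fullPageArchiveAssetId IS NOT NULL
""")

# 3. Remove the old asset columns via SQLite's table-rebuild pattern
#    (portable to SQLite builds without ALTER TABLE ... DROP COLUMN).
conn.executescript("""
CREATE TABLE bookmarkLinks_new (id TEXT PRIMARY KEY, url TEXT NOT NULL);
INSERT INTO bookmarkLinks_new SELECT id, url FROM bookmarkLinks;
DROP TABLE bookmarkLinks;
ALTER TABLE bookmarkLinks_new RENAME TO bookmarkLinks;
""")

print([r[0] for r in conn.execute("SELECT assetType FROM assets ORDER BY assetType")])
# ['fullPageArchive', 'screenshot']
```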
…p#215: Added a worker that allows downloading videos depending on the environment variables; refactored the code a bit; added a new video asset; updated the documentation
…p#215: Rebased onto master; replaced the Redis queue with the new DB queue; fixed some async/await issues
…p#215: Fixed additional async/await issues
True. I was planning a long (one-month) trip for over half a year, and even in that "short" period one or two of the reels no longer existed. Not to mention my collection of receipts going back 10+ years. So linking, but also downloading and indexing, would be a huge benefit; I'd love to see it as well.
For media downloading, I currently use ArchiveBox.
I would love to see it index the transcripts of YouTube links. I often bookmark interesting videos and interviews, and I am always frustrated when I try to find that particular resource again later.
Allow downloading more content from a webpage and index it (hoarder-app#215, …app#525):

* Added a worker that allows downloading videos depending on the environment variables; refactored the code a bit; added a new video asset; updated the documentation
* Some tweaks
* Dropped the dependency on the yt-dlp wrapper
* Updated the OpenAPI specs
* Don't log an error when the URL is not supported
* Better handle supported websites that don't download anything

Co-authored-by: Mohamed Bassem <me@mbassem.com>
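The merged PR gates video downloading behind environment variables. A minimal sketch of such an opt-in gate, assuming a variable named `CRAWLER_VIDEO_DOWNLOAD` (the name is a guess at Hoarder's configuration style, not confirmed from the source; check the project documentation for the actual keys):

```python
import os

def video_download_enabled() -> bool:
    # CRAWLER_VIDEO_DOWNLOAD is a hypothetical variable name used for
    # illustration; Hoarder's docs define the real configuration keys.
    return os.environ.get("CRAWLER_VIDEO_DOWNLOAD", "false").lower() == "true"

def max_video_size_mb(default: int = 50) -> int:
    # Hypothetical size cap, in megabytes, with a fallback default.
    return int(os.environ.get("CRAWLER_VIDEO_DOWNLOAD_MAX_SIZE", str(default)))

os.environ["CRAWLER_VIDEO_DOWNLOAD"] = "true"
print(video_download_enabled())  # True
```

Keeping the feature off by default and size-capped matches the PR's intent that large video downloads remain an explicit opt-in.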
I regularly bookmark YouTube videos, Instagram videos, and other videos.
It is not guaranteed that those videos will stay online forever, so I prefer to download the important ones (yes, I am a real hoarder).
It would be great if you could enable downloading videos and serving them from Hoarder for later viewing (file size does not matter to me, but I guess it matters for some).
It would also be great if the subtitles were downloaded and indexed, so that searching is also possible across the video content.
In the long run, it would also be cool to transcribe the video contents and make them searchable that way.
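Indexing subtitles as requested above is mostly a text-extraction problem: WebVTT subtitle files (the format yt-dlp commonly produces) interleave timestamps and styling tags with the caption text. A minimal sketch of reducing a `.vtt` payload to plain text suitable for a full-text index; the function name and approach are illustrative, not Hoarder's actual implementation:

```python
import re

def vtt_to_text(vtt: str) -> str:
    """Reduce WebVTT subtitle content to plain text for full-text indexing."""
    lines = []
    seen = set()
    for line in vtt.splitlines():
        line = line.strip()
        # Skip the header, blank lines, numeric cue identifiers,
        # and timestamp lines like "00:00:01.000 --> 00:00:03.000".
        if not line or line == "WEBVTT" or line.isdigit() or "-->" in line:
            continue
        # Drop inline styling tags such as <c>...</c>.
        line = re.sub(r"<[^>]+>", "", line)
        # Auto-generated captions often repeat lines; keep the first copy.
        if line not in seen:
            seen.add(line)
            lines.append(line)
    return " ".join(lines)

sample = """WEBVTT

1
00:00:01.000 --> 00:00:03.000
Hello and welcome

2
00:00:03.000 --> 00:00:05.000
to this <c>video</c>
"""
print(vtt_to_text(sample))  # Hello and welcome to this video
```

Feeding the resulting text into the same index as page content would make video bookmarks searchable by what is said in them, which is exactly the use case described in the comments above.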