Statistics
Ideas for Angular frontend definitions should go to https://github.com/shownotes/snotes20-angular-webapp/wiki/Statistics
All final definitions should go to https://github.com/shownotes/snotes20-restapi/wiki/API#apistatistics
Perform analyses of all publications on shownot.es and generate statistics from that data. To do this we need to generate word frequency tables and tf-idf tables for each episode of each podcast. We also need to build an overall corpus and generate the same tables for it. These analyses can be run on all text and on all URLs separately. First of all, the tables must be defined in the database. Once the structure is clear, the statistics feature must be implemented as a separate Django application named statistic.
Later on, the extracted features must be reachable via REST. To achieve this, we need to build a proper API that generates and delivers the data for each graph to the Angular frontend. The graphs will be implemented with the d3.js library, so it is necessary to work out which JSON data each graph needs.
We need:
- Word frequency and tf-idf analysis on
  - all episodes
  - all episodes from a podcast (combined result)
  - the whole corpus
The application must not affect the running web application. Assuming that most episodes are quite short, we could calculate the word frequencies in threads after a publication is generated or updated; the last state_id must be used for this. That way the data is immediately usable by the search feature (wordcloud). The calculation for the whole corpus and for the combined podcast tables could be done by a cronjob once per day.
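A minimal sketch of that background computation, using a plain thread so the request is not blocked (the function and parameter names here are assumptions for illustration, not the actual snotes20 code):

```python
import threading
from collections import Counter

def compute_word_frequencies(text, state_id, results):
    # Naive whitespace tokenization; the real implementation would
    # use NLTK tokenizers and strip stopwords before counting.
    results[state_id] = Counter(text.lower().split())

def on_publication_saved(text, state_id, results):
    # Run the analysis off the request thread so the running
    # web application is not affected by the computation.
    worker = threading.Thread(
        target=compute_word_frequencies,
        args=(text, state_id, results),
    )
    worker.start()
    return worker

# Demonstration: frequencies appear shortly after the save returns.
results = {}
worker = on_publication_saved("foo bar foo", state_id=42, results=results)
worker.join()  # only for demonstration; a web request would not wait
```

In a real Django app this would more likely hang off a post_save signal or a task queue, but the principle of keeping the computation off the request path is the same.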
- Build a database model for word frequencies and tf-idf values
- Implement word frequencies and tf-idf as algorithms
  - tf-idf as a task in manage.py
  - external update mechanism should be discussed
  - word frequencies on publication create
- Design serializers with Django REST Framework for each frontend feature
  - definitions of the needed data must be provided
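Since the needed data is not yet defined, here is only a sketch of the JSON shape such a serializer might produce for a frequency graph (all field names are assumptions until the wiki definitions are final; a Django REST Framework serializer would mirror this structure):

```python
def serialize_word_frequencies(rows):
    """Build a graph payload from (word, absolute, relative) rows.

    The field names "words", "word", "absolute" and "relative" are
    placeholders, not the agreed API format.
    """
    return {
        "words": [
            {"word": word, "absolute": absolute, "relative": relative}
            for word, absolute, relative in rows
        ]
    }

payload = serialize_word_frequencies([
    ("podcast", 10, 0.05),
    ("audio", 4, 0.02),
])
```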
- Implementation using the NLTK and scikit-learn packages
  - NLTK could be used to build the word frequency tables
    - stopword removal is included
  - scikit-learn could be used for tf-idf
    - TfidfVectorizer is faster than the NLTK version of tf-idf
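The computations themselves are standard; a stdlib-only sketch of what NLTK's FreqDist and a tf-idf vectorizer would do (note that scikit-learn's TfidfVectorizer uses a smoothed variant of the idf formula shown here, and the stopword list is a stand-in for NLTK's full corpus):

```python
import math
from collections import Counter

STOPWORDS = {"the", "a", "an", "and", "of"}  # NLTK ships full lists per language

def word_frequencies(text):
    # Absolute frequencies with stopwords removed,
    # comparable to nltk.FreqDist over filtered tokens.
    tokens = [t for t in text.lower().split() if t not in STOPWORDS]
    return Counter(tokens)

def tf_idf(episodes):
    # episodes: list of shownote texts; returns one table per episode.
    freqs = [word_frequencies(text) for text in episodes]
    n = len(episodes)
    # Document frequency: in how many episodes each word appears.
    df = Counter(word for freq in freqs for word in freq)
    tables = []
    for freq in freqs:
        total = sum(freq.values())
        tables.append({
            word: (count / total) * math.log(n / df[word])
            for word, count in freq.items()
        })
    return tables

tables = tf_idf(["the cat sat", "the dog sat", "the cat ran"])
```

Words that occur in every episode get an idf of log(1) = 0, which is exactly why the overall corpus table is needed for tf-idf.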
- A table for each publication
- A combined table for each podcast
- A combined table over all shownotes from all publications as corpus
  - needed by tf-idf
- Columns
  - word
  - absolute frequency
  - relative frequency to all shownotes in the publication
  - rank?
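A possible Django model for the per-publication table, following the columns above (model and field names are assumptions, and rank is left nullable since it is still an open question; this is a declarative sketch, not the final schema):

```python
from django.db import models

class WordFrequency(models.Model):
    # One row per distinct word per publication.
    publication = models.ForeignKey('Publication', on_delete=models.CASCADE)
    word = models.CharField(max_length=100, db_index=True)
    absolute = models.PositiveIntegerField()
    # Relative to all words in this publication's shownotes.
    relative = models.FloatField()
    # Open question from the column list above; nullable for now.
    rank = models.PositiveIntegerField(null=True, blank=True)

    class Meta:
        unique_together = ('publication', 'word')
```

The combined per-podcast and corpus tables could reuse the same column layout with a different foreign key (or none, for the corpus).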
Generate a timeline at the top of the page which lets the user manually select a time period. A user could use this to search for specific publications by time.
- Selecting a time range in the overall overview will show a list of podcasts which have publications in that time period.
- Selecting a time range in the overview of a specific podcast updates the list of episodes in the same manner.
- The selected time range in the overall overview should persist when the user clicks on a podcast to visit its detailed publication list.
- All episodes have a created_date, but not necessarily a date
  - date is the date when the episode was live (discovered via the hoersuppe API)
  - created_date is the date when the episode was manually created by a user
- Maybe we should use the number of publications per month as the X value on the graph
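Bucketing publications per month for the timeline can be sketched with the stdlib (in Django this would be an aggregate over the date fields discussed above; the helper name is an assumption):

```python
from collections import Counter
from datetime import date

def publications_per_month(dates):
    # Map each publication date to a "YYYY-MM" bucket and count,
    # giving the per-month values for the timeline's X axis.
    return Counter(d.strftime("%Y-%m") for d in dates)

counts = publications_per_month([
    date(2014, 5, 1),
    date(2014, 5, 20),
    date(2014, 6, 3),
])
```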
- Should be implemented in
- /statistic/wordcloud?top=50
  - top = the n most frequent words to return
Returns JSON as described in https://github.com/shownotes/snotes20-angular-webapp/wiki/Statistics#required-json-for-graph-2
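The view logic behind that endpoint could look like the following sketch (the actual JSON shape is defined on the linked wiki page; the structure used here is an assumption):

```python
def wordcloud(frequencies, top=50):
    # frequencies: word -> absolute count for the current corpus.
    # Returns the `top` most frequent words, most frequent first.
    ranked = sorted(frequencies.items(), key=lambda kv: kv[1], reverse=True)
    return [{"word": word, "count": count} for word, count in ranked[:top]]

payload = wordcloud({"podcast": 12, "audio": 7, "the": 3}, top=2)
```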