https://devpost.com/software/hear-podcasts-made-accessible
What inspired us was the frustrating experience of trying to find transcriptions for podcasts we regularly listened to. That got us thinking: if we were having trouble finding transcriptions, how are people who are deaf or hard of hearing consuming this form of media at all? With all of the tools available today, we wondered how and why there was no cost-effective, user-friendly solution for making podcasts accessible to everyone.
We provide a rich podcast experience for deaf and hard of hearing people (and everyone else too) by offering an automated, on-demand transcription and annotation service for audio files. You can read along while listening, learn more about the topics mentioned, or skip straight to your favorite parts of any podcast.
We built it using these technologies:
Google Cloud Natural Language
Google Cloud Speech-to-Text
Wikipedia
Firebase Realtime Database
Express.js
Prettier
Swagger
ESLint
React hooks
Node.js
Google Cloud Platform
GitHub
Figma
TypeScript
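As a rough sketch of what our transcription step could look like with the Cloud Speech-to-Text Node.js client; the encoding, sample rate, bucket URI, and function name here are illustrative placeholders rather than the project's actual configuration:

```typescript
// Sketch of the transcription step, assuming the official
// @google-cloud/speech Node.js client. The encoding, sample rate and
// URI below are placeholders, not the project's real configuration.
import { SpeechClient } from '@google-cloud/speech';

async function transcribeEpisode(gcsUri: string): Promise<string> {
  const client = new SpeechClient();

  // Podcast episodes are long, so use the asynchronous long-running API.
  const [operation] = await client.longRunningRecognize({
    config: {
      encoding: 'FLAC',
      sampleRateHertz: 44100,
      languageCode: 'en-US',
      enableAutomaticPunctuation: true,
    },
    audio: { uri: gcsUri }, // e.g. a file uploaded to a Cloud Storage bucket
  });

  const [response] = await operation.promise();

  // Join the best alternative from each result into one transcript.
  return (response.results ?? [])
    .map((r) => r.alternatives?.[0]?.transcript ?? '')
    .join('\n');
}
```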
We ran into time constraints and hurdles chaining several APIs in series to reach a presentable, deliverable result. We also ran into scope challenges, wanting to accomplish so much in such a small amount of time.
We delivered a functional demo with a full backend and a working database, and built a product we are proud of. We made something we did not think possible in less than 20 hours!
We learned how to run APIs in series and tackled many implementation challenges while trying to deliver a product that is both polished and efficient.
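To give a feel for what running those APIs in series means, here is a hedged sketch of the annotation step that follows transcription: Cloud Natural Language pulls entities from the transcript, and each salient entity is enriched with a short Wikipedia summary. The salience cutoff is an arbitrary illustrative value, and the standard Wikipedia REST summary endpoint stands in here for the futef Wikipedia API listed in our tags.

```typescript
// Sketch of the annotation step run in series after transcription,
// assuming the @google-cloud/language client and Node 18+ global fetch.
import { LanguageServiceClient } from '@google-cloud/language';

interface Annotation {
  name: string;
  summary: string;
}

async function annotateTranscript(transcript: string): Promise<Annotation[]> {
  const nlClient = new LanguageServiceClient();

  // Pull named entities (people, places, works, etc.) out of the transcript.
  const [result] = await nlClient.analyzeEntities({
    document: { content: transcript, type: 'PLAIN_TEXT' },
  });

  const annotations: Annotation[] = [];
  for (const entity of result.entities ?? []) {
    // Skip low-salience entities; 0.01 is an arbitrary illustrative cutoff.
    if (!entity.name || (entity.salience ?? 0) < 0.01) continue;

    // Look up a short summary for each salient entity.
    const res = await fetch(
      `https://en.wikipedia.org/api/rest_v1/page/summary/${encodeURIComponent(entity.name)}`
    );
    if (!res.ok) continue;

    const page = (await res.json()) as { extract?: string };
    if (page.extract) {
      annotations.push({ name: entity.name, summary: page.extract });
    }
  }
  return annotations;
}
```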
We would like to integrate live transcription and language translation, as well as build a community that connects people.
react-hook
react
react-native
google-web-speech-api
natural-language-processing
google-cloud
figma
ai-applied-sentiment-analysis
firebase
futef-wikipedia-api
firebase-realtime-database
express.js
swagger
prettier
eslint
node.js
github
typescript