Online personal website generator written in Node.js/Kraken
The goal is to gather information from the web and group it into a single website, enriched with your own data. The lib currently contains three "scrappers":
- Github: grabs every repository you worked on.
- LinkedIn: gets all the information from your profile.
- Twitter: gets your home timeline and your own timeline.
ResumeJS also contains a very simple theme module that lets you display this information.
- Get this repository's sources
- Copy /config/customConfig.sample.json to /config/customConfig.json
- Copy /lib/scrappers/.sample.json to /lib/scrappers/.json
- Type
npm install
You can use npm start
or node index.js
or, of course, a process manager such as supervisor or forever.
If you need to host your website on a server that can't run Node.js, you can generate static files from your current resumeJS instance. Just type grunt generate. A script will parse all the pages of your site and generate the corresponding HTML files. The files can be found in the <project_root>/build directory. You can control which pages are generated by editing the config file of your current theme.
As described before, scrappers grab data from remote APIs and serve it to your website. It is really simple to write your own scrapper, or to extend the existing ones for your needs. Scrappers must follow this template:
var scrapper = {
    isOauth: true, // determines whether this scrapper needs authentication to grab data
    getData(params, callback) { ... }, // if the scrapper uses OAuth, params is the current session; otherwise params is the login to use
    getStoredData() { ... }, // returns the data the scrapper has stored in the file <scrapperName>Data.json, or false
    storeData(data, callback) { ... }, // writes data to the file <scrapperName>Data.json
    auth(req, res) { ... }, // handles scrapper authentication; unused if scrapper.isOauth is false (see the Twitter scrapper for a PassportJS implementation)
    authCallback(req, res) { ... } // second function used to complete authentication in the case of an OAuth scrapper
}
Scrappers live in the <project_root>/lib/scrappers directory. Scrapper data is transmitted to themes in the dustJS variable named scrappers. For example, if you want to display the login from the facebook scrapper's data, you have to write:
{.scrappers.facebookData.login}
ResumeJS natively contains two themes that serve these scrappers. These themes are written in dust.js, and you can easily write your own or adapt an existing HTML theme to use scrapper data. It is also possible to use resumeJS as an API and build your theme in, for example, angularJS.
Themes contain the following directories:
- components: global libs such as jquery or bootstrap
- css: style files; can be css or less files
- data: must contain model.json, which describes what the theme needs to work (required scrappers and custom fields to display on the admin page)
- fonts: fonts to use
- images: self-explanatory!
- js: javascript files to load in the theme
- templates: the files to load when the front part of the site is requested; can be dustjs or html files
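For illustration, a theme's data/model.json might look roughly like the sketch below. The exact keys are assumptions made for this example; the real schema is defined by the resumeJS version and the theme you start from:

```json
{
  "scrappers": ["github", "twitter"],
  "customFields": [
    { "name": "jobTitle", "label": "Job title" },
    { "name": "location", "label": "Location" }
  ]
}
```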
The test library is the Kraken one; you can run the tests by typing npm test
or grunt test
Test files can be found in the <project_root>/test directory.