https://cloudinary.com/console/welcome
prints W-2's
https://www.mass.gov/doc/w-2-electronic-filing-specifications-handbook/download
https://www.realtaxtools.com/state-w2-efile-frequently-asked-questions/Massachusetts.html
pulled data from GL for W-2 and EFW2 https://www.ssa.gov/employer/EFW2%26EFW2C.htm
maybe location of stuff is bad
added multiline queries to env. persons has /current, which will probably never be used, but scratch has a multiline query with almost enough info to write checks with
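A minimal sketch of one way those multiline queries could live in env.json. The array-of-lines format is an assumption, not something the log confirms; the SQL lines are the approved-tcardwk query from this entry.

```javascript
// assumption: env.json stores each multiline query as an array of lines,
// joined into one string at load time
const env = {
  queries: {
    scratch: [
      "SELECT t.*, c.*",
      "FROM `timecards`.`tcardwk` t",
      "JOIN `timecards`.`cureffective` c ON c.emailid = t.emailid",
      "WHERE t.status='approved' AND t.coid='reroo'"
    ]
  }
}
const scratchSql = env.queries.scratch.join('\n')
```

In a real env.json the array would sit in the JSON file and the join would happen wherever the config is required.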
/**query to get current persons and rates for approved tcardwks***/
DROP TABLE IF EXISTS `timecards`.`cureff` ;
CREATE TABLE `timecards`.`cureff`
SELECT p.emailid , MAX(p.effective) AS curedate
FROM `timecards`.`persons` p
WHERE effective < CURDATE()
AND p.coid ='reroo'
GROUP BY p.emailid;
DROP TABLE IF EXISTS `timecards`.`cureffective` ;
CREATE TABLE `timecards`.`cureffective`
SELECT p.*
FROM `timecards`.`cureff` c
JOIN `timecards`.`persons` p
ON c.emailid=p.emailid
AND c.curedate=p.effective;
SELECT t.*, c.*
FROM `timecards`.`tcardwk` t
JOIN `timecards`.`cureffective` c
ON c.emailid = t.emailid
WHERE t.status='approved'
AND t.coid= 'reroo';
approved tcardwks with current rate records from persons
TODO: except for status
combinPuJc now creates a blank week padded with a week's punch and jcost data. inout hours are summed.
OK, so now get (a week), put (a day), and delete (a day) all work
This is what now comes from the database. The server combines the results of tcardpu and tcardjc into records for each day of the week that has tcardpu data. If there is no tcardpu data but there is tcardjc data, it leaves that data on the server and returns [].
parr:
{ wkarr:
[ { wdprt: '2018-W34-5',
hrs: 0,
inout: [],
jcost: [],
jchrs: 0,
idx: 0 },
{ wdprt: '2018-W35-1',
hrs: 8.25,
inout: '["7:30", "15:45"]',
jcost: [Array],
jchrs: 8.25,
idx: 1 },
{ wdprt: '2018-W35-2',
hrs: 8.5,
inout: '["7:15", "15:45"]',
jcost: [Array],
jchrs: 8.5,
idx: 2 },
{ wdprt: '2018-W35-3',
hrs: 0,
inout: [],
jcost: [],
jchrs: 0,
idx: 3 },
{ wdprt: '2018-W35-4',
hrs: 7.25,
inout: '["8:30", "15:45"]',
jcost: [Array],
jchrs: 7.25,
idx: 4 },
{ wdprt: '2018-W34-6',
hrs: 0,
inout: [],
jcost: [],
jchrs: 0,
idx: 5 },
{ wdprt: '2018-W34-7',
hrs: 0,
inout: [],
jcost: [],
jchrs: 0,
idx: 6 } ],
hrs: [ 8.25, 8.5, 7.25, 9.5 ],
jchrs: [ 8.25, 8.5, 7.25, 9.5 ] }
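The padding can be sketched like this. This is a hypothetical reconstruction, not the real combinPuJc: the helper names and punch-row shape are invented, and it pads a plain ISO week (weekdays 1..7) whereas the real week above evidently starts at W34-5. Only the record fields (wdprt, hrs, inout, jcost, jchrs, idx) come from the dump.

```javascript
// hypothetical sketch of combinPuJc: build a blank 7-day week keyed by ISO
// week-date strings, then overlay any punch rows that exist for those days
const blankWeek = (isoWeek) =>
  Array.from({ length: 7 }, (_, i) => ({
    wdprt: `${isoWeek}-${i + 1}`,   // ISO weekday 1..7
    hrs: 0, inout: [], jcost: [], jchrs: 0, idx: i
  }))

// sum inout pairs like ["7:30","15:45"] into decimal hours
const inoutHours = (inout) => {
  const toDec = (t) => { const [h, m] = t.split(':').map(Number); return h + m / 60 }
  let sum = 0
  for (let i = 0; i + 1 < inout.length; i += 2) sum += toDec(inout[i + 1]) - toDec(inout[i])
  return sum
}

const combinPuJc = (isoWeek, punches) => {
  const wk = blankWeek(isoWeek)
  punches.forEach((p) => {
    const day = wk.find((d) => d.wdprt === p.wdprt)
    if (day) { day.inout = p.inout; day.hrs = inoutHours(p.inout) }
  })
  return wk
}
```

For example, combinPuJc('2018-W35', [{ wdprt: '2018-W35-1', inout: ['7:30', '15:45'] }]) yields a 7-record week where only day 1 has hours (8.25).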
crud for app AddJobs and SortJobs
Currently emailid and appid are the level of identification offered by soauth and the associated api. What if you wanted to have multiple companies using these apps? You could just encode the company in env.json, and that will have to do for now. The problem rears itself when an emailid and appid are used by more than one company.
Possibly bearerTokenApp has access to params and can then return a single record for emailid, appid, coid.
SOLUTION: encode extra data in the appid part of the registration string. Get it from cfg and concatenate it like reroo-jobs. In soauth the data gets encrypted in the apikey, and then on auth it is compared to what's in whoapp.
If all that works out, it comes back to the spa app as a token which from then on gets sent out as a bearer token inside every page req. That bearerTokenApp slips some data (coid, appid, email) back to the route in req.tokData, which can then inform those queries in terms of who and for what company.
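A minimal sketch of that flow with a faked decode step. The hardcoded tokData values and the SQL string are illustrative assumptions; a real bearerTokenApp would verify the bearer token before setting req.tokData.

```javascript
// bearerTokenApp-style middleware: stash (coid, appid, email) on req.tokData
const bearerTokenApp = (req, res, next) => {
  // assumption: the verified token decodes to these fields
  req.tokData = { coid: 'reroo', appid: 'reroo-jobs', email: 'tim@sitebuilt.net' }
  next()
}

// a route can then scope its query to the right person and company
const jobsRoute = (req, res) => {
  const { coid, email } = req.tokData
  res.body = `SELECT * FROM jobs WHERE coid='${coid}' AND emailid='${email}'`
}

const req = {}, res = {}
bearerTokenApp(req, res, () => jobsRoute(req, res))
```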
todo: /jobs/post/wk, probably using delete and insert, minor updates of coid
Made it work with the reroox api. There was a problem in routes:211 with the url for the email link not working.
var baseURL = req.protocol+"s://"+req.headers.host+cfg.base
//var baseURL = req.headers.origin+cfg.base
Social auth verifies your identity, then checks with the app's api to see if you are allowed to play. The route app.use('/api/reg', regtokau); now looks in reroo.whoapp for an api/reg/auth match and sets the auth field for emailid+appid. In this case it is the jobs appid. Not taken into account is the coid that you would need to deal with to generalize this for other companies.
server for restoring roots
reroox/sql
what happens in authentication?
- social auth gets an api url in the params after you press the register button
- once you prove you are who you say you are, social auth queries the app's api to see if you are listed among the valid users of that app.
mysqldump -uroot -pthepass --opt reroo > reroo.sql
post a whole week of scheds
https://stackoverflow.com/questions/7745609/sql-select-only-rows-with-max-value-on-a-column Some cool SQL (the max-value-per-group pattern from that link), plus code added to combineScheds() for the day the hold is over. The hold ends at a time, at which point the normal schedule for that day resumes.
const combineScheds = (holdsched, resumesched, daymin) => {
  let holdarr = JSON.parse(holdsched)
  let resuarr = JSON.parse(resumesched)
  let hrmin = daymin.split(':')
  let hr = hrmin[0] * 1                 // hour the hold ends
  let min = hrmin[1] * 1                // minute the hold ends
  let srdlen = holdarr[0].length - 2    // length of the settings part of a prog
  let res = holdarr.slice()             // the day starts with the hold in play
  resuarr.forEach((prg, i) => {
    if (prg[0] == hr) {
      // a regular prog starts in the hold-end hour: replace its h:m
      let np = [hr, min].concat(prg.slice(-srdlen))
      res.push(np)
    } else if (prg[0] > hr) {
      // the prog that was running when the hold ends resumes at hold end;
      // guard i-1 so the first resume prog can't index resuarr[-1]
      let tmp = i > 0 ? resuarr[i - 1].slice() : null
      if (tmp && tmp[0] < hr) {
        tmp[0] = hr
        tmp[1] = min
        res.push(tmp)
      }
      res.push(prg)
    }
  })
  return JSON.stringify(res)
}
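A worked run of combineScheds, restated compactly so the snippet executes standalone. The guard for the first resume prog (i === 0) is my addition, and the sample schedule arrays are invented.

```javascript
// combineScheds for the day a hold ends: start from the hold array, then push
// resume progs at/after the hold-end time, pulling the prog that was running
// forward so it starts the moment the hold is over
const combineScheds = (holdsched, resumesched, daymin) => {
  const holdarr = JSON.parse(holdsched)
  const resuarr = JSON.parse(resumesched)
  const [hr, min] = daymin.split(':').map(Number)
  const srdlen = holdarr[0].length - 2
  const res = holdarr.slice()
  resuarr.forEach((prg, i) => {
    if (prg[0] === hr) {
      res.push([hr, min].concat(prg.slice(-srdlen)))
    } else if (prg[0] > hr) {
      const tmp = i > 0 ? resuarr[i - 1].slice() : null  // guard the first prog
      if (tmp && tmp[0] < hr) {
        tmp[0] = hr
        tmp[1] = min
        res.push(tmp)
      }
      res.push(prg)
    }
  })
  return JSON.stringify(res)
}

// hold of [[0,0,55,53]] ending at 10:15 over a three-prog regular day
const merged = combineScheds(
  '[[0,0,55,53]]',
  '[[6,0,68,66],[9,0,62,60],[16,30,68,66]]',
  '10:15'
)
```

The 9:00 prog, which would have been running at 10:15, gets pulled forward to start at 10:15, and the 16:30 prog follows unchanged.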
All tests are in IOTbroker2.0/tests/atest.spec.js. Just run mocha
my.getTodaysSched
SELECT * FROM scheds
WHERE (devid,senrel,dow)
IN (
SELECT devid, senrel, MAX(dow)
FROM scheds
WHERE (dow=0 OR dow=1 OR dow=8)
AND (until = '0000-00-00 00:00' OR '2018-03-12' <= until)
GROUP BY devid, senrel
)
AND devid = 'CYURD001'
Device gets the schedule every day by calling my.getTodaysSched, which searches for the schedule with the max dow. It doesn't work yet. It should return the appropriate schedule record for each senrel of a device.
Inserting a hold should look something like this
INSERT INTO scheds
SET
`devid` = 'CYURD001',
`senrel` = 0,
`dow` = 8,
`sched` = '[[0,0,55,53]]',
`until` = '2018-06-02 10:15'
ON DUPLICATE KEY UPDATE
`sched` = '[[0,0,55,53]]',
`until` = '2018-06-02 10:15'
cleaned up and replaced getTime.
Holds are now incorporated into the sched table and queries.
It's the day the hold is over. Move in the schedule that will come back into force now that the hold is done. Figuring that the day started with the hold still in play, start with the hold array. Push on all the regularly scheduled progs that start after the hold is over. But before that: if there is a regularly scheduled prog for the hold-end hour, replace its hours and minutes. For the regularly scheduled prog that would have been running when the hold ends, plug in the later start time of the hold end so it can start to run as soon as the hold is over.
TODO: need a final mod to my.getTodaysSched to incorporate holds. TODO: implement sched.modSchedIfHoldEndsToday.
Testing: mocha now uses mocha-clean and npm run test in IOTbroker2.0. Reformulating processMessage getTime into gettimeandsched. Need to have sendSchedule use the mapping as in sched.js:132.
starting to think about changing both to get holds
Normalized databases are in place and queries have been written for version 2.0
Recording srstate data takes place on a cassandra cluster via iotb.index.js.processIncoming.srstate cassClient.execute(q1). Reco.count is a mongo query against the collection that stores which dev/senrels are currently being recorded.
iotex handles saving a program through dedata/prg but iotb is the server that loads the current programs for each device into the device on every timecheck
Avoiding the now-working publish callback: moved sched.js.getTime's my.getTodaysSched call out of the sched.js.getTime mosca.publish callback.
I'm running mosca: 2.3.0 and wanted to do some work in the service publish callback but it never seems to fire even though the message gets delivered. Here I just published a simple message after mosca is ready but console.log('done! SUPER SIMPLE') never runs. Any ideas?
moserver.on('ready', setup);
function setup() {
console.log('Mosca server is up and running')
console.log('device mqtt running on port '+cfg.port.mqtt)
console.log('browser mqtt over ws port '+cfg.port.ws)
var message = {
topic: '/hello/world',
payload: 'abcde', // or a Buffer
qos: 0, // 0, 1, or 2
retain: false // or true
};
moserver.publish(message, function() {
console.log('done! SUPER SIMPLE');
});
}
Actually the callback works on a new simple server; something in the code I wrote is interfering with it.
and tells you if they can publish. Right now it lets everyone publish, but the nuts and bolts are almost in place.
SELECT * FROM devuserapp WHERE devid='CYURD001' AND appid='pahoRawSB' AND userid='tim@sitebuilt.net'
This should return a role for that user for that app on that device. Then mysqldb.js/dbPublish should be able to a) make sure there is an entry, b) check if the role listed is user or admin. If anybody can view an app then there should be an anybody user with an obs role. Register has to be jiggered so that's OK.
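That rule can be distilled to a predicate. This is an assumption about dbPublish's intended logic, not its actual code: the devuserapp row must exist and carry a user or admin role.

```javascript
// hypothetical distillation: may this user publish for this app on this device?
// `row` is the devuserapp record from the SELECT above, or undefined if none
const canPublish = (row) => !!row && (row.role === 'user' || row.role === 'admin')
```

canPublish({ role: 'obs' }) is false, as is canPublish(undefined) when no row exists.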
TODO move cascada over to api2
https://www.bountysource.com/issues/41088112-client-connection-closed-if-publish-is-not-authorized
also..
IOTexpress got modified to add get dedata/users/:devid and post dedata/users. The dedata/apps query changed to only get admin and super appids.
To record or stop recording a sensor or relay
- you make a post/delete to iotex/api/dedata/rec with a body that looks like {"id":"CYURD001:1"}
- the express router iotex/api/dedata/rec makes a mongoose database call, Reco.update or Reco.remove, which uses mongo/demiot/recos and adds/updates/removes that id.
- Now, in IOTbroker/index.js, whenever a packet comes in, moserver.published runs mq.selectAction(packet.topic) to find the device and job, and then mq.processIncoming(packet.payload). If the job is srstate and payload.new=true (the data is new, not just a request for a copy (inside device C code)), then it does a quick check for devid:id in mongo/recos, and if it is there the data for that sensor is saved to cassandra.
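The decision in that flow boils down to something like this sketch. shouldRecord is an invented name, and the Set stands in for the mongo/demiot/recos lookup.

```javascript
// record srstate data only when the payload is new and the dev:senrel id
// has been registered for recording
const shouldRecord = (job, payload, recoIds, devid) =>
  job === 'srstate' && payload.new === true && recoIds.has(`${devid}:${payload.id}`)

const recos = new Set(['CYURD001:1'])   // stands in for mongo/demiot/recos
```

In the real broker the membership test would be the quick mongo read, and a true result would trigger the cassandra insert.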
In order to use programs with these devices one must:
- to save a program, the client must make a post to IOTexpress/api/dedata/prg with a body like this: {"devid":"CYURD001","dow":4,"senrel":2,"sched":"[[12,20,77,75]]"}
- The program is saved to mysql/geniot/sched.
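The request body above can be built like this. prgBody is an invented helper; the endpoint path and field names come from the note.

```javascript
// build the POST body for IOTexpress/api/dedata/prg;
// sched goes over the wire as a JSON string, not a nested array
const prgBody = (devid, dow, senrel, sched) =>
  ({ devid, dow, senrel, sched: JSON.stringify(sched) })

const body = prgBody('CYURD001', 4, 2, [[12, 20, 77, 75]])
// then e.g. POST JSON.stringify(body) to <host>/api/dedata/prg
// with Content-Type: application/json
```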
- copy mysql
- copy mongo -no
- redeploy social-auth - from sitebuilt.net/services/social-auth:
rm -r *
mkdir app views
then from wamp64/www/services/social-auth run ./deploy.sh
- deploy IOTexpress - ./deploy.sh in wamp64/www/services/IOTexpress
- deploy IOTbroker - ./deploy.sh in wamp64/www/services/IOTbroker
no frontend yet for post and remove recorder (done in mysql.spec.js)
summary of database stuff
https://blog.serverdensity.com/checking-if-a-document-exists-mongodb-slow-findone-vs-find/
mongodb
- mosca for internal use
- social-auth soauth for users and apps
- IOTbroker for checking if srstates should be saved (quick read of mongo.demiot.recos {id:"CYURD001:0"})
- IOTexpress dedata/index.js /dedata/rec for posting mongo.demiot.recos {id:"CYURD001:0"} as to be recorded, or delete/remove it
cassandra
- IOTbroker in index.js:mq.processIncoming/srstate writes to either tstat_by_day or timr_by_month
mysql
- IOTexpress
devices
devuserapps
dedata/prg scheds
Cassandra is going to be used for devices and senrels that need their time series data stored. So far, there is a tstat_by_day table and a timr_by_month table
- in sitebuilt and parleyvale but not on iotup.stream
- tested in /services/testservice/cassandra.js.
- cql experiments in services/IOTbroker/cql.js
- Config is in /etc/cassandra/cassandra.yaml
- Use single quotes in WHERE clauses.
- 16.04 didn't need java setup but 14.04 did
https://academy.datastax.com/resources/getting-started-time-series-data-modeling
https://hostpresto.com/community/tutorials/how-to-install-apache-cassandra-on-ubuntu-14-04/
http://www.datastax.com/wp-content/uploads/2012/01/DS_CQL_web.pdf cheat sheet
cqlsh 162.217.250.109 9042
cqlsh>USE geniot;
cqlsh:geniot>SELECT * FROM tstat_by_day ;
cqlsh:geniot> truncate timr_by_month ;
You can start Cassandra with sudo service cassandra start and stop it with sudo service cassandra stop. However, normally the service will start automatically. For this reason be sure to stop it if you need to make any configuration changes. Verify that Cassandra is running by invoking nodetool status from the command line. The default location of configuration files is /etc/cassandra. The default location of log and data directories is /var/log/cassandra/ and /var/lib/cassandra. Start-up options (heap size, etc) can be configured in /etc/default/cassandra.
For some sensors, storing the data would be cool. In https://www.mongodb.com/presentations/mongodb-time-series-data-part-2-analyzing-time-series-data-using-aggregation-framework (~20 min in) they suggest...
- deviceid:sensorid:timeperiod as _id: use regular expressions to do a range query. Regex must be left-anchored&case-sensitive /^CYURD001:2:1403/
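A pure-JS illustration of that left-anchored regex range scan over deviceid:sensorid:timeperiod ids; the sample ids are invented.

```javascript
// _id scheme deviceid:sensorid:timeperiod; a left-anchored, case-sensitive
// regex selects one device/sensor/period range
const ids = ['CYURD001:2:1402', 'CYURD001:2:1403', 'CYURD001:3:1403', 'CYURD002:2:1403']
const hits = ids.filter((id) => /^CYURD001:2:1403/.test(id))
// in mongo this would be db.coll.find({ _id: /^CYURD001:2:1403/ }),
// and the left anchor is what lets it use the _id index
```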
Tested in mysql.spec.js and implemented in spas/apoj/rawPaho/utility/saveProg
Every time you get time you also get that day's programs from the mysql scheds table. I think that is the end of using devid/progs.
var getTime = function(devid, mosca, cb){
//var spot="America/Los_Angeles"
my.dbGetTimezone(devid, function(spot){
console.log(spot)
var dow = moment().tz(spot).isoWeekday()
var nynf = parseInt(moment().tz(spot).format("X"))
var nyf = moment().tz(spot).format('LLLL')
var nyz = parseInt(moment().tz(spot).format('Z'))
var pkt = {
unix: nynf,
LLLL: nyf,
zone: nyz,
};
console.log(JSON.stringify(pkt))
var topi = devid+'/devtime'
var oPacket = {
topic: topi,
payload: JSON.stringify(pkt),
retain: false,
qos: 0
};
console.log(topi)
mosca.publish(oPacket, function(){
console.log('dow is ', dow)
//var topic = devid+'/prg'
my.getTodaysSched(devid,dow,function(results){
results.map((res)=>{
cons.log(res)
var payload = `{"id":${res.senrel},"pro":${res.sched}}`
cons.log(payload)
setTimeout(function(){
sendSchedule(devid, mosca, payload, (payload)=>{
cons.log(payload, ' sent')
})
}, 1000)
})
})
});
})
}
var sendSchedule= function(devid, mosca, payload, cb){
var topi = devid+'/prg'
var oPacket = {
topic: topi,
payload: payload,
retain: false,
qos: 0
};
mosca.publish(oPacket, ()=>{
cons.log('prg sent')
});
}
sched.js now uses my.dbGetTimezone(devid, cb) every time time is checked for a device
fixes super duplicates, and makes super authorized
both admind and pahoRaw can register
take2 callback true unless topic is prg or cmd
Regarding authorizePublish: if it's a device, it's OK. For clients, return false for cmd and prg when role = obs or any.
let allow = true
if(res.role=='obs' || res.role=='any'){
  allow = false
}
if(allow){
  callback(null, true)
}else{
  if(topic=='cmd' || topic=='prg'){
    callback(null, false)   // observers may not publish cmd or prg
  }else{
    callback(null, true)
  }
}
https://github.com/mcollina/mosca/wiki/Authentication-&-Authorization two types: if the client id starts with 'CY' then it is a device, else it is a client app
const dbAuth = (client, username, password, cb) => {
  console.log(client.id, username, password)
  if (client.id.substr(0, 2) == "CY") {
    // device: check against the Devices table
  } else {
    // client app: check the token against devuserapp
  }
}
You can add data to a client but I ended up not using it (line 35: client.appId = tokdata.appId); instead I get it from client.id, ala var appId = client.id.split('0.')[0].
For publishing I don't know; to handle the observer case maybe I just need to block certain topics, TBD.
client.connect sends username and token; simple is authenticate, harder is authorize.
For subscribing, if it's a device then go ahead and subscribe. If it's a client then check for auth in devuserapp in dbPubSub.
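The split('0.') trick as a one-liner. This assumes client ids look like '<appId>0.<random>', and it would break if an appId itself contained '0.'.

```javascript
// pull the appId off the front of a client id like 'pahoRawSB0.4711'
const appIdOf = (clientId) => clientId.split('0.')[0]
```

appIdOf('pahoRawSB0.4711') gives 'pahoRawSB'.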
Where are tables populated from? In theory...
- unauthorized users can be created by either the superuser or the owner of a device.
- once they log in, they are authorized.
- devices are added by the super and given to an owner. At the same time the owner is added to devuserapp for that device and each of its apps.
- anybody can try an app, and if they are not registered, they can register. When they come back from registering and are authenticated as who they be, if they are not authorized for that app then they won't be able to use it.
- putting it another way... tim@sitebuilt.net wants to use pahoRawLo on CYURD007. He can because he is the owner and has had a devuserapp record entered for that app and device. But social-auth enters a record with appid and userid no matter what!!! Is that right??? No, it should only continue with the registration if that user and that app are already registered for some device. If not, social-auth should fail!!! It shouldn't return a token to the spa!!! The spa should deal with it. Maybe the spa lets anybody observe the device from that app??? Maybe there is an observer role for listed users and an any role for anybody to observe???
broker will authenticate with client.id, using username=(owner or user) and password=(devpwd or token)
todo: levels of authentication. FIX social-auth so expdays comes from the spa app.
- you get userName via authenticate on connection
- the topic set needs to be restricted from all users and can only be published from the server
- clients can only subscribe and publish to/from devices in their devappuser records
- some clients are only observers and can't change anything. This should be done somehow through the server (they can only req)
dbAuth - two kinds of client.id's, devid or appid.
- devid uses the Devices database for auth
- appid uses devuserapp for auth
set in /spa/:appid, read in /profile
But Facebook and GitHub have req.headers.referer in the callback, so you can dig out appId with mf.parseBetween(req.headers.referer, '/spa/', '?'). Too bad Google and Twitter don't have that header. For Facebook you can also use the query strategy described here: http://blog.pingzhang.io/javascript/2016/09/22/passport-facebook/ and https://stackoverflow.com/questions/15513427/can-the-callback-for-facebook-pasport-be-dynamically-constructed
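mf.parseBetween isn't shown in the log, so this is a guess at its behavior: the substring between the first start marker and the next end marker after it.

```javascript
// hypothetical parseBetween: text between `start` and the following `end`;
// if `end` never appears, return everything after `start`
const parseBetween = (str, start, end) => {
  const i = str.indexOf(start)
  if (i < 0) return null
  const from = i + start.length
  const j = str.indexOf(end, from)
  return j < 0 ? str.slice(from) : str.slice(from, j)
}
```

parseBetween('https://x.net/spa/pahoRawSB?code=1', '/spa/', '?') gives 'pahoRawSB'.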
HOORAY deployed as https://services.sitebuilt.net/soauth
Up on forever; note the cd first so all the app redirects work
from sitebuilt.net root/forgone.sh
fuser -KILL -k -n tcp 7080 # hello
cd /home/services/social-auth
export NODE_ENV=production&&forever --uid soauth -a start server.js
sleep 2
Deals with the fact that Facebook and Twitter don't need email to register. Decided for the moment not to allow a separate user record with emailid as the Twitter or Facebook userid.
passport can't handle done(err, null); it causes a server fault (500), but it will failure-redirect if you use done(null, null). So failure redirects to cfg.base+'message', which has res.render('message.ejs', {message: mf.getMessage()}).
OK, fooled around with the base path in env.json and package.json. req.headers.origin was the only way to get the proper protocol (https).
- pass cfg.base into all the forms to catch the case when it's soauth: <%=base %>signup/
- added a reverse deploy, copyFromDeploy, for those touchups to the code base that allow you to run multiple services on https://services.sitebuilt.net
ugh. Got rid of passport.use(new ApikeyStrategy).
social-auth creates a record for a user in devuserapp for the spa that used it. That spa also gets back a token created using the api's secret. Since all the heavy lifting and passport stuff has been shuttled off to social-auth, modules/regtokau/strategy/bearerToken just has to intercept a request, grab its token, and tell the endware what's up. It works like so. Here is what the middleware looks like:
var bearerToken = function(req,res, next){
...check token (req.tokenAuth = what's up)
next()
}
router.get('/appsa', bearerToken, function(req,res){
... now you can do whatever to the request if req.tokenAuth.auth
res.jsonp(req.tokenAuth)
})
It is just like this kind of callback thingy
router.get('/appsa', function(req,res){
bearerToken(req,res, function(){
cons.log(req.tokenAuth)
res.jsonp(req.tokenAuth)
})
})
but cooler
jwt.decode croaks on errors like bad or expired tokens, so you have to catch them:
try {
var tokdata = jwt.decode(toka[1], cfg.secret)
cons.log(tokdata)
} catch(e){
cons.log(e.message)
req.tokenAuth = {auth: false, message: e.message}
next()
return
}
Connect social-auth to geniot.