Commit 599d6cf

Author: bitmage
Message: updated README
Parent: e91daff

1 file changed: README.md (+65, -32 lines)

@@ -41,10 +41,7 @@ collector = new particle.Collector
   identity:
     sessionId: 'foo'

-  # I should receive a delta event
-  onData: (data, event) =>
-    console.log {data, event}
-
+# I should receive delta events
collector.on '**', (data, event) ->

collector.register (err) ->
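
For comparison with the removed onData option, the same logging can be expressed with the new wildcard subscription. A minimal sketch using only the calls shown in this hunk:

```coffee-script
# logs every delta event, mirroring the removed onData callback
collector.on '**', (data, event) ->
  console.log {data, event}
```
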
@@ -56,32 +53,46 @@ collector.register (err) ->
particle = require 'particle'
MongoWatch = require 'mongo-watch'
watcher = new MongoWatch {format: 'normal'}
-users = # a collection from mongo driver or mongoose

stream = new particle.Stream
  #onDebug: console.log
+  adapter: watcher

-  identityLookup: (identity, done) ->
-    done null, {accountId: 1}
-
-  dataSources:
-
-    users:
-      manifest: # limit what fields should be allowed
-        email: true
-        todo: {list: true}
+  # Identity Lookup, performed once upon registration.
+  # identityLookup: (identity, done) ->
+  #   done null, {accountId: 1}

-      payload: # get initial data for users
-        (identity, done) ->
-          users.find().toArray (err, data) ->
-            done err, {data: data, timestamp: new Date}
+  # Cache Config, used to alias many-to-many lookups or otherwise force caching of fields
+  cacheConfig:
+    userstuffs:
+      userId: 'users._id'
+      stuffId: 'stuffs._id'

-      delta: # wire up deltas for users
-        (identity, listener) ->
-          watcher.watch "test.users", listener
-
-      disconnect: ->
-        watcher.stopAll()
+  # Data Sources, the data each connected client will have access to. Any fields used in
+  # the criteria will be automatically cached.
+  dataSources:
+    myProfile:
+      collection: 'users' # the source collection (in mongo or other adapter)
+      criteria: {_id: '@userId'} # limit which records come back
+      manifest: true # limit which fields come back
+    myStuff:
+      collection: 'stuffs'
+      criteria: {_id: '@userId|users._id>userstuffs._id>stuffs._id'}
+      manifest: true
+    visibleUsers:
+      collection: 'users'
+      criteria: {accountId: '@accountId'}
+      manifest:
+        name: true
+        _id: true
+    notFound:
+      collection: 'users'
+      criteria: {notFound: true}
+      manifest: true
+    allUsers:
+      collection: 'users'
+      criteria: undefined
+      manifest: true
```

## Configuring Your Particle Stream
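
The hunk below notes that each data source corresponds to a root property on the client-side data model (collector.data). Assuming that holds for the config above, a registered Collector might read the visibleUsers source roughly like this (a sketch, not code from the commit):

```coffee-script
collector.on '**', (data, event) ->
  # visibleUsers holds records matching {accountId: '@accountId'},
  # trimmed to the {name, _id} fields by its manifest
  console.log collector.data.visibleUsers
```
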
@@ -94,21 +105,43 @@ The fields below are required unless you see a * after the name.

Each data source corresponds to a root property on the client side data model (collector.data). The data source configuration controls where data will come from and how it will be filtered.

-* Manifest
+* Collection

-The manifest is a data structure which controls what fields get pushed out to a Collector. It treats arrays and objects the same, and does not care about data types. It will be applied to any data returned by the Payload or Delta functions. It is comprised of nested objects and boolean values. Data is not represented by default, so only the value 'true' has any meaning. You can use 'false' for explicit documentation if you would like.
+The MongoDB collection you want to monitor.

-You can assign 'true' at any point in the data structure. For instance, if you put 'true' at the root then no data will be filtered.
+* Criteria

-* Payload
+The recordset you want to pull back. This generally looks like a Mongo query, except it supports looking up data in the user's identity and Particle's internal cache. For instance:

-This function is responsible for retrieving initial data when a new Collector registers. It is passed the Identity associated with this Collector, and a callback function (err, data) for returning the data. You can use the Identity to limit the fields returned. The data returned by a Mongo query is suitable for this. Essentially it needs to be a collection of documents, where each document is comprised of a root object and nested objects, arrays, and data values.
+```coffee-script
+criteria: {_id: '@userId|users._id>userstuffs._id>stuffs._id'}
+```

-* Delta
+This means "Grab the 'userId' field from the identity, and pipe it through a relationship to find any related stuffs._id's."

-This function is passed the Identity of the Collector, and a receiver function. It is responsible for wiring up some kind of event source which will be forwarded by the Stream to any registered Collectors.
+You can also supply a regular string or number:

-I wrote a library for listening to the mongo oplog which can be used for this purpose. You can find it [here](https://github.com/torchlightsoftware/mongo-watch). You want to use the 'normal' format to get a message format compatible with Particle. Note that this will not work for shared DB hosting solutions. If you are interested in getting something working for that sort of platform I have some ideas... please contact me.
+```coffee-script
+criteria: {name: 'Joe'}
+```
+
+Or a lookup by string:
+
+```coffee-script
+criteria: {colorId: 'red|colors.name>colors._id'}
+```
+
+Or just an identity:
+
+```coffee-script
+criteria: {colorId: '@myColorId'}
+```
+
+* Manifest
+
+The manifest controls what fields get pushed out to a Collector. It acts like a Mongo 'project' query modifier.
+
+You can assign 'true' at any point in the data structure. For instance, if you put 'true' at the root then no data will be filtered.

### Identity Lookup*

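One last illustration of the Manifest semantics described above, using the visibleUsers manifest from the example config. The sample document and its fields are hypothetical:

```coffee-script
# Manifest from the visibleUsers data source above:
manifest:
  name: true
  _id: true

# Applied to a hypothetical document
#   {_id: 1, name: 'Joe', email: 'joe@example.com', accountId: 7}
# the Collector would receive only the whitelisted fields:
#   {_id: 1, name: 'Joe'}
```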