`debugging_with_intellij_idea.md`

# Debugging with IntelliJ IDEA

Unfortunately, it is no longer possible to debug your template with IntelliJ as of recent PredictionIO releases. The following instructions will work for older versions of PredictionIO and IntelliJ. Please complain on the Apache PredictionIO mailing list if this causes you as much pain as it does us.

# Deprecated Debugging Instructions

It is possible to run your template engine with IntelliJ IDEA. This makes the engine-specific commands accessible for debugging, like `pio train`, `pio deploy`, and queries made to a deployed engine.
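
For reference, these are the engine commands the IntelliJ run configurations wrap. Run from the template's directory, the usual workflow looks roughly like this (a sketch only, not specific to any template):

```
# build the engine, train a model, then deploy it as a query server
pio build
pio train
pio deploy
```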

ActionML supports and directly commits code to Apache PredictionIO beginning with Apache PredictionIO-0.10.0 and continuing to the present release, PredictionIO-{{> pioversionnum}}. We may have some extra how-tos, documents, and certainly templates, but check the [Apache site](http://predictionio.incubator.apache.org/) for more information.

For help installing Apache PredictionIO-{{> pioversionnum}} please follow [these instructions](/docs/install) to install or upgrade. Be aware that upgrading can **erase your data**, so please back up before any upgrade.

For a description of past versions see the [history](/docs/pio_versions).

`pio_start_stop.md`

```
/usr/local/hadoop/sbin/stop-dfs.sh
```

## PIO Events Accumulate Forever

PIO by default will continue to accumulate events forever, which will eventually make even Big Data fans balk at storage costs and will cause model training to take longer and longer. The answer to this is to trim and/or compress the PIO EventStore for a specific dataset. This can be done by using a template made for this purpose called the [DB Cleaner](/docs/db_cleaner_template).
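
To get a sense of how fast events are accumulating, you can page through them with the EventServer REST API. A minimal sketch, assuming an EventServer running on the default port 7070 and a placeholder access key:

```
# list up to 20 stored events for the app identified by the access key
curl -i -X GET "http://localhost:7070/events.json?accessKey=YOUR_ACCESS_KEY&limit=20"
```
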
## Monitoring
See [**Monitoring PredictionIO**](/docs/pio_monitoring)

ActionML is a direct contributor to the Apache PredictionIO project. The current stable release is {{> pioversion}}. Install from [here](/docs/install) or use one of several methods described on the [Apache PredictionIO site](http://predictionio.incubator.apache.org/install/).

`ur.md`

# The Universal Recommender

The Universal Recommender (UR) is a new type of collaborative filtering recommender based on an algorithm that can use data from a wide variety of user taste indicators: the Correlated Cross-Occurrence (CCO) algorithm. Unlike the matrix factorization embodied in things like MLlib's ALS, the UR's CCO algorithm is able to **ingest any number of user actions, events, profile data, and contextual information**. It then serves results in a fast and scalable way. It also supports item properties for filtering and boosting recommendations and can therefore be considered a hybrid collaborative filtering and content-based recommender.

The use of multiple **types** of data fundamentally changes the way a recommender is used and, when employed correctly, will provide a significant increase in the quality of recommendations vs. using only one "conversion event". Most recommenders, for instance, can only use one indicator of user taste, such as a "purchase" event. Using all we know about a user and their context allows us to predict their preferences much more accurately.

Not only does this data give lift to recommendation quality, but it also allows users who have little or no conversion history to get recommendations. Therefore it can be used in places where conversions are not as common. It also allows us to enrich preference indicators by extracting entities from text or learning topics and inferring preferences when users read something from a topic.

Even though this may sound complex, the Universal Recommender can be used well in more typical cases with no complex setup.

## Quick Start

* **Personalized Recommendations**: "just for you", when you have user history
* **Similar Item Recommendations**: "people who liked this also like these"
* **Shopping Cart Recommendations**: more generally, item-set recommendations. This can be applied to wishlists, watchlists, likes, or any set of items that may go together. Some also call this "complementary purchase" recommendations.
* **Popular Items**: These can even be the primary form of recommendation for some applications if desired, since several forms are supported. By default, if a user has no recommendations, popular items will backfill to achieve the number required.
* **Hybrid Collaborative Filtering and Content-based Recommendations**: since item properties can boost or filter recommendations, a smooth blend of usage and content can be achieved.
* **Recommendations with Business Rules**: The UR allows filters and boosts based on user-defined properties that can be attached to items, so things like availability, categories, tags, location, or other user-defined properties can be used to rule items in or out of the recommendations.

## Simple Configuration

All of the above use cases can be very simple to configure and set up. If you have an E-Commerce application, you may be able to get away with one type of input data and some item properties to get all of the benefits. If you have more complex needs, read the [Use Cases](ur_use_cases.md) section for tips.
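
To make that concrete, here is a heavily trimmed sketch of what a simple `engine.json` might look like. The field names follow the UR's published examples, but the values (`handmade`, `purchase`, `view`, the engine factory class) are placeholders; consult your template's own `engine.json` for the authoritative layout:

```
{
  "id": "default",
  "description": "Minimal UR settings (illustrative sketch)",
  "engineFactory": "org.template.RecommendationEngine",
  "datasource": {
    "params": {
      "appName": "handmade",
      "eventNames": ["purchase", "view"]
    }
  },
  "algorithms": [
    {
      "name": "ur",
      "params": {
        "appName": "handmade",
        "indexName": "urindex",
        "typeName": "items",
        "eventNames": ["purchase", "view"]
      }
    }
  ]
}
```

In the UR's examples the first event in `eventNames` is the primary "conversion" event that recommendations are made for; the rest are secondary indicators.
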
## The Correlated Cross-Occurrence Algorithm (CCO)

For most of the history of recommenders, data science could only find ways to use one type of user-preference indicator. To be sure, this was one type per application, but there is so much more we know from user behavior that was going unused. Correlated Cross-Occurrence (CCO) was developed to discover which behavior of a given user correlates with the type of action you want to recommend. If you want to recommend ***buy***, ***play***, ***watch***, or ***read***, is it possible that other things known about a user correlate with this recommended action? Things like a ***pageview***, a ***like***, a ***category preference***, the ***location*** logged in from, the ***device*** used, item detail ***views***, or ***anything else*** known about the user might. Furthermore, how would we test for correlation?

Enter the Log-Likelihood Ratio (LLR): a probabilistic test for correlation between 2 events. This is super important because there is no linear relationship between the **event-types**. The correlation is at the individual user and event level, and this is where LLR excels. To illustrate, ask yourself in an E-commerce situation: is a product view 1/2 of a buy? You might think so, but if the user viewed 2 things and bought one of them, the correlation is 100% for one of the views and 0% for the other. So some view data is useful in predicting purchases and some is useless. LLR is a very well-respected test for this type of correlation.
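
For the mathematically inclined, a standard form of this test (Dunning's G-squared statistic, widely used for LLR scoring of co-occurrences) works on the 2x2 table of counts: users who did both events, one but not the other, or neither. As a sketch in LaTeX notation, with k_ij the observed counts, R_i and C_j the row and column totals, and N the grand total:

```
\mathrm{LLR} = 2 \sum_{i,j} k_{ij} \, \ln\frac{k_{ij}}{E_{ij}},
\qquad E_{ij} = \frac{R_i \, C_j}{N}
```

Terms with k_ij = 0 contribute nothing. A large LLR means the co-occurrence is very unlikely to be chance, so the event pair is kept as a correlated indicator; a small LLR means it is discarded as noise.
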

# Business Rules

Everyone has seen apps like Amazon and Netflix, which show user recommendations but may also narrow recommendations down to a specific category or genre based on the user's location in the app, or to fill a special row in the UI. This is done by applying Business Rules based on item properties. Most recommenders do not have this ability, so an app must get many, many recommendations and then filter out the ones that have the wrong properties. This is built into the UR in a most efficient and simple way.

First, the input can be as simple as a "buy" where we know a user-id and an item-id. This is sufficient to make recommendations, but to use business rules we must also set properties on items, like category or genre in this use case. We send this JSON:

```
{
   "event": "buy",
   "entityType": "user",
   "entityId": "John Doe",
   "targetEntityType": "item",
   "targetEntityId": "some-item",
   "eventTime": "ISO-encoded-datetime"
}
```
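
Events like this are sent to the PredictionIO EventServer. A minimal sketch, assuming an EventServer on the default port 7070, a placeholder access key, and an example ISO 8601 timestamp:

```
curl -i -X POST "http://localhost:7070/events.json?accessKey=YOUR_ACCESS_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "event": "buy",
    "entityType": "user",
    "entityId": "John Doe",
    "targetEntityType": "item",
    "targetEntityId": "some-item",
    "eventTime": "2017-01-31T21:39:45.618-07:00"
  }'
```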

To set the item property so that "some-item" has category = "Electronics" we send this JSON:

```
{
   "event": "$set", <-- special reserved event name
   "entityType": "item", <-- must be "item"
   "entityId": "some-item", <-- same type of id as in the "buy"
   "properties": { <-- an object that may have several properties
      "category": ["Electronics"] <-- an array allows several categories
   },
   "eventTime": "ISO-encoded-datetime"
}
```

## Inclusion Business Rule
Once the UR has trained on this data a simple query will return recommendations for "John Doe" that are all in the "Electronics" category:

```
{
   "user": "John Doe",
   "fields": [{
      "name": "category",
      "values": ["Electronics"],
      "bias": -1
   }]
}
```
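
Queries like this go to the deployed engine rather than to the EventServer. A minimal sketch, assuming the engine was started with `pio deploy` on the default port 8000:

```
curl -H "Content-Type: application/json" \
  -d '{
    "user": "John Doe",
    "fields": [{
      "name": "category",
      "values": ["Electronics"],
      "bias": -1
    }]
  }' http://localhost:8000/queries.json
```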

This is called an *inclusion rule* since no item will be returned as a recommendation unless it has the correct category. The `"bias": -1` tells the recommender to include no other recommendations.

## Exclusion Business Rule
Imagine that instead of including recs from the "Electronics" category all you want to do is **exclude** "Toys":

```
{
   "user": "John Doe",
   "fields": [{
      "name": "category",
      "values": ["Toys"],
      "bias": 0
   }]
}
```

This is called an *exclusion rule* since `"bias": 0` excludes matching items.
## Boost Business Rules with Logical ANDs and ORs

Inclusion and exclusion rules are dangerous because they can lead to no recommendations being returned. They do not act on the whole catalog, they act on the recommended items, so if there is not enough data to return recommendations with category = "Electronics" while excluding everything with category = "Toys", we might use a boost instead. The `bias` will be > 1.0 for a positive boost, and a value between 0 and 1.0 will de-boost or disfavor something by its properties.

To boost "Electronics" AND de-boost "Toys" we would send the query:

```
{
   "user": "John Doe",
   "fields": [{
      "name": "category",
      "values": ["Electronics"],
      "bias": 10.0
   }, {
      "name": "category",
      "values": ["Toys"],
      "bias": 0.001
   }]
}
```

This does 2 things:
- Any recommendation matching category "Electronics" will have its score multiplied by 10. This will greatly increase its rank and may raise it above all other items, but if there are no recommendations with category "Electronics", other recommendations will still be returned.
- AND if a recommended item matches the category "Toys" it will have its score multiplied by 0.001, greatly decreasing its overall rank so that it may not be returned within the number of recs requested.

This query shows how to create queries that will not disqualify all recommendations. You are not guaranteed that the recs match "Electronics" and do not match "Toys", but where the strict rules with `"bias": -1` and `"bias": 0` would have led to nothing being returned, this query will fall back to other items, avoiding returning no recs at all. Use inclusion and exclusion rules only if you know for sure that you don't want non-matching recommendations.

The other thing this does is show how to combine rules. Including 2 rules as different fields will AND them logically.

If we wanted to include recs from "Electronics" OR "Toys" we would send:

```
{
   "user": "John Doe",
   "fields": [{
      "name": "category",
      "values": ["Electronics", "Toys"],
      "bias": 10.0
   }]
}
```

This will boost by 10 the score of any recommended item that matches either category, and boost by 20 anything matching both. Given enough possible recommendations, this will return recommendations matching both categories if it can.

# WARNING

Business rules can be very effective in broadening recommendations when showing them from several categories. They can be used to exclude items that are not "Available" or "In-stock". But be aware that you are creating a bias in the recs: you are bending the rules used to find the best things for the user. Unless there is a hard rule for not showing something, try to use boosts. And when using boosts, try to find a prominent place to show un-biased recommendations. That way you are using the rules in such a way that they do not exclude what the recommender thinks are the best items for the user.