[minor] Fix grammar + typo issues
Closes twitter#557, closes twitter#678, closes twitter#748, closes twitter#806, closes twitter#818, closes twitter#842, closes twitter#866, closes twitter#948, closes twitter#1024, closes twitter#1313, closes twitter#1458, closes twitter#1461, closes twitter#1465, closes twitter#1491, closes twitter#1503, closes twitter#1539, closes twitter#1611
twitter-team committed Apr 4, 2023
1 parent 36588c6 commit bb09560
Showing 20 changed files with 138 additions and 158 deletions.
14 changes: 7 additions & 7 deletions README.md
@@ -1,6 +1,6 @@
-# Twitter Recommendation Algorithm
+# Twitter's Recommendation Algorithm

-The Twitter Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the
+Twitter's Recommendation Algorithm is a set of services and jobs that are responsible for constructing and serving the
Home Timeline. For an introduction to how the algorithm works, please refer to our [engineering blog](https://blog.twitter.com/engineering/en_us/topics/open-source/2023/twitter-recommendation-algorithm). The
diagram below illustrates how major services and jobs interconnect.

@@ -13,24 +13,24 @@ These are the main components of the Recommendation Algorithm included in this r
| Feature | [SimClusters](src/scala/com/twitter/simclusters_v2/README.md) | Community detection and sparse embeddings into those communities. |
| | [TwHIN](https://github.com/twitter/the-algorithm-ml/blob/main/projects/twhin/README.md) | Dense knowledge graph embeddings for Users and Tweets. |
| | [trust-and-safety-models](trust_and_safety_models/README.md) | Models for detecting NSFW or abusive content. |
-| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict likelihood of a Twitter User interacting with another User. |
+| | [real-graph](src/scala/com/twitter/interaction_graph/README.md) | Model to predict the likelihood of a Twitter User interacting with another User. |
| | [tweepcred](src/scala/com/twitter/graph/batch/job/tweepcred/README) | Page-Rank algorithm for calculating Twitter User reputation. |
| | [recos-injector](recos-injector/README.md) | Streaming event processor for building input streams for [GraphJet](https://github.com/twitter/GraphJet) based services. |
| | [graph-feature-service](graph-feature-service/README.md) | Serves graph features for a directed pair of Users (e.g. how many of User A's following liked Tweets from User B). |
| Candidate Source | [search-index](src/java/com/twitter/search/README.md) | Find and rank In-Network Tweets. ~50% of Tweets come from this candidate source. |
| | [cr-mixer](cr-mixer/README.md) | Coordination layer for fetching Out-of-Network tweet candidates from underlying compute services. |
-| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG)| Maintains an in memory User to Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet based features and candidate sources are located [here](src/scala/com/twitter/recos) |
+| | [user-tweet-entity-graph](src/scala/com/twitter/recos/user_tweet_entity_graph/README.md) (UTEG)| Maintains an in memory User to Tweet interaction graph, and finds candidates based on traversals of this graph. This is built on the [GraphJet](https://github.com/twitter/GraphJet) framework. Several other GraphJet based features and candidate sources are located [here](src/scala/com/twitter/recos). |
| | [follow-recommendation-service](follow-recommendations-service/README.md) (FRS)| Provides Users with recommendations for accounts to follow, and Tweets from those accounts. |
-| Ranking | [light-ranker](src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md) | Light ranker model used by search index (Earlybird) to rank Tweets. |
+| Ranking | [light-ranker](src/python/twitter/deepbird/projects/timelines/scripts/models/earlybird/README.md) | Light Ranker model used by search index (Earlybird) to rank Tweets. |
| | [heavy-ranker](https://github.com/twitter/the-algorithm-ml/blob/main/projects/home/recap/README.md) | Neural network for ranking candidate tweets. One of the main signals used to select timeline Tweets post candidate sourcing. |
-| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md) |
+| Tweet mixing & filtering | [home-mixer](home-mixer/README.md) | Main service used to construct and serve the Home Timeline. Built on [product-mixer](product-mixer/README.md). |
| | [visibility-filters](visibilitylib/README.md) | Responsible for filtering Twitter content to support legal compliance, improve product quality, increase user trust, protect revenue through the use of hard-filtering, visible product treatments, and coarse-grained downranking. |
| | [timelineranker](timelineranker/README.md) | Legacy service which provides relevance-scored tweets from the Earlybird Search Index and UTEG service. |
| Software framework | [navi](navi/navi/README.md) | High performance, machine learning model serving written in Rust. |
| | [product-mixer](product-mixer/README.md) | Software framework for building feeds of content. |
| | [twml](twml/README.md) | Legacy machine learning framework built on TensorFlow v1. |

-We include Bazel BUILD files for most components, but not a top level BUILD or WORKSPACE file.
+We include Bazel BUILD files for most components, but not a top-level BUILD or WORKSPACE file.

## Contributing

2 changes: 1 addition & 1 deletion ann/src/main/python/dataflow/faiss_index_bq_dataset.py
@@ -91,7 +91,7 @@ def parse_metric(config):
elif metric_str == "linf":
return faiss.METRIC_Linf
else:
-  raise Exception(f"Uknown metric: {metric_str}")
+  raise Exception(f"Unknown metric: {metric_str}")


def run_pipeline(argv=[]):
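The corrected `parse_metric` above maps a config string to a metric constant and raises on unknown values. A minimal stand-alone sketch of the same pattern — using placeholder string constants rather than faiss's real enum values, so it runs without faiss installed:

```python
# Sketch of the parse_metric pattern from the diff above: look up a
# config string in a table of metric constants, raising on unknown input.
# The constant values are placeholders, not faiss's actual enums.
_METRICS = {
    "l2": "METRIC_L2",
    "ip": "METRIC_INNER_PRODUCT",
    "linf": "METRIC_Linf",
}


def parse_metric(metric_str: str) -> str:
    try:
        return _METRICS[metric_str.lower()]
    except KeyError:
        # Mirrors the fixed error message in the commit.
        raise Exception(f"Unknown metric: {metric_str}") from None
```

A table lookup with a single raise site avoids the long `if`/`elif` chain while keeping the same error behavior.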
4 changes: 2 additions & 2 deletions cr-mixer/README.md
@@ -2,6 +2,6 @@

CR-Mixer is a candidate generation service proposed as part of the Personalization Strategy vision for Twitter. Its aim is to speed up the iteration and development of candidate generation and light ranking. The service acts as a lightweight coordinating layer that delegates candidate generation tasks to underlying compute services. It focuses on Twitter's candidate generation use cases and offers a centralized platform for fetching, mixing, and managing candidate sources and light rankers. The overarching goal is to increase the speed and ease of testing and developing candidate generation pipelines, ultimately delivering more value to Twitter users.

-CR-Mixer act as a configurator and delegator, providing abstractions for the challenging parts of candidate generation and handling performance issues. It will offer a 1-stop-shop for fetching and mixing candidate sources, a managed and shared performant platform, a light ranking layer, a common filtering layer, a version control system, a co-owned feature switch set, and peripheral tooling.
+CR-Mixer acts as a configurator and delegator, providing abstractions for the challenging parts of candidate generation and handling performance issues. It will offer a 1-stop-shop for fetching and mixing candidate sources, a managed and shared performant platform, a light ranking layer, a common filtering layer, a version control system, a co-owned feature switch set, and peripheral tooling.

CR-Mixer's pipeline consists of 4 steps: source signal extraction, candidate generation, filtering, and ranking. It also provides peripheral tooling like scribing, debugging, and monitoring. The service fetches source signals externally from stores like UserProfileService and RealGraph, calls external candidate generation services, and caches results. Filters are applied for deduping and pre-ranking, and a light ranking step follows.
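The four-step pipeline described above can be sketched as a simple composition. This is an illustrative sketch only; the stage functions are hypothetical stand-ins, not Twitter's actual APIs:

```python
# Illustrative sketch of the CR-Mixer pipeline: source signal extraction,
# candidate generation, filtering, then (light) ranking. All stage
# functions are caller-supplied hypothetical callables.
def cr_mixer_pipeline(user_id, extract, generate, filters, rank):
    signals = extract(user_id)        # 1. source signal extraction
    candidates = generate(signals)    # 2. candidate generation
    for f in filters:                 # 3. filtering (e.g. dedupe, pre-rank)
        candidates = f(candidates)
    return rank(candidates)           # 4. light ranking
```

Modeling filters as a list of callables mirrors the "common filtering layer" idea: new filters slot in without changing the pipeline shape.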
24 changes: 10 additions & 14 deletions recos-injector/README.md
@@ -1,13 +1,10 @@
-# recos-injector
-Recos-Injector is a streaming event processor for building input streams for GraphJet based services.
-It is general purpose in that it consumes arbitrary incoming event stream (e.x. Fav, RT, Follow, client_events, etc), applies
-filtering, combines and publishes cleaned up events to corresponding GraphJet services.
-Each GraphJet based service subscribes to a dedicated Kafka topic. Recos-Injector enables a GraphJet based service to consume any
-event it wants
+# Recos-Injector

+Recos-Injector is a streaming event processor used to build input streams for GraphJet-based services. It is a general-purpose tool that consumes arbitrary incoming event streams (e.g., Fav, RT, Follow, client_events, etc.), applies filtering, and combines and publishes cleaned up events to corresponding GraphJet services. Each GraphJet-based service subscribes to a dedicated Kafka topic, and Recos-Injector enables GraphJet-based services to consume any event they want.

-## How to run recos-injector-server tests
+## How to run Recos-Injector server tests

-Tests can be run by using this command from your project's root directory:
+You can run tests by using the following command from your project's root directory:

$ bazel build recos-injector/...
$ bazel test recos-injector/...
@@ -28,17 +25,16 @@ terminal:
$ curl -s localhost:9990/admin/ping
pong

-Run `curl -s localhost:9990/admin` to see a list of all of the available admin
-endpoints.
+Run `curl -s localhost:9990/admin` to see a list of all available admin endpoints.

-## Querying recos-injector-server from a Scala console
+## Querying Recos-Injector server from a Scala console

-Recos Injector does not have a thrift endpoint. It reads Event Bus and Kafka queues and writes to recos_injector kafka.
+Recos-Injector does not have a Thrift endpoint. Instead, it reads Event Bus and Kafka queues and writes to the Recos-Injector Kafka.

## Generating a package for deployment

-To package your service into a zip for deployment:
+To package your service into a zip file for deployment, run:

$ bazel bundle recos-injector/server:bin --bundle-jvm-archive=zip

-If successful, a file `dist/recos-injector-server.zip` will be created.
+If the command is successful, a file named `dist/recos-injector-server.zip` will be created.
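The consume-filter-publish behavior the rewritten README describes can be sketched in a few lines. This is an illustrative sketch only, not Twitter's actual code; the topic names and event fields are hypothetical:

```python
# Sketch of Recos-Injector-style routing: consume heterogeneous engagement
# events, drop those no downstream GraphJet service subscribes to, and
# group the rest by per-service Kafka topic. Topic names are hypothetical.
from dataclasses import dataclass


@dataclass
class Event:
    kind: str       # e.g. "fav", "retweet", "follow"
    user_id: int
    target_id: int


TOPIC_BY_KIND = {
    "fav": "recos_injector_tweet_events",
    "retweet": "recos_injector_tweet_events",
    "follow": "recos_injector_user_events",
}


def route(events):
    """Filter out unsubscribed event kinds; bucket the rest by topic."""
    out = {}
    for e in events:
        topic = TOPIC_BY_KIND.get(e.kind)
        if topic is None:
            continue  # no subscriber: filtered out
        out.setdefault(topic, []).append(e)
    return out
```

The per-kind topic table is what lets each GraphJet-based service "consume any event it wants": a service subscribes to a topic, and the injector decides which cleaned-up events land there.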
2 changes: 1 addition & 1 deletion simclusters-ann/README.md
@@ -15,7 +15,7 @@ SimClusters from the Linear Algebra Perspective discussed the difference between
However, calculating the cosine similarity between two Tweets is pretty expensive in Tweet candidate generation. In TWISTLY, we scan at most 15,000 (6 source tweets * 25 clusters * 100 tweets per clusters) tweet candidates for every Home Timeline request. The traditional algorithm needs to make API calls to fetch 15,000 tweet SimCluster embeddings. Consider that we need to process over 6,000 RPS, it’s hard to support by the existing infrastructure.


-## SimClusters Approximate Cosine Similariy Core Algorithm
+## SimClusters Approximate Cosine Similarity Core Algorithm

1. Provide a source SimCluster Embedding *SV*, *SV = [(SC1, Score), (SC2, Score), (SC3, Score) …]*

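As context for the algorithm named above: a SimClusters embedding is a sparse list of (cluster, score) pairs, and the exact cosine similarity being approximated can be sketched directly. This sketch is illustrative only, not the repo's Scala implementation:

```python
# Exact cosine similarity between two sparse SimClusters embeddings,
# each given as a list of (cluster_id, score) pairs as in the section
# above. The approximate algorithm trades this exact computation for
# cheaper per-request work.
import math


def cosine(sv1, sv2):
    d1, d2 = dict(sv1), dict(sv2)
    dot = sum(score * d2.get(cluster, 0.0) for cluster, score in d1.items())
    n1 = math.sqrt(sum(s * s for s in d1.values()))
    n2 = math.sqrt(sum(s * s for s in d2.values()))
    return dot / (n1 * n2) if n1 and n2 else 0.0
```

Because embeddings are sparse (a Tweet sits in few clusters), the dot product only touches clusters present in both vectors; the expense the README describes comes from fetching thousands of these embeddings per request, not from the arithmetic itself.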
@@ -513,12 +513,12 @@ public static void buildRetweetAndReplyFields(
Optional<Long> inReplyToUserId = Optional.of(inReplyToUserIdVal).filter(x -> x > 0);
Optional<Long> inReplyToStatusId = Optional.of(inReplyToStatusIdVal).filter(x -> x > 0);

-    // We have six combinations here. A tweet can be
+    // We have six combinations here. A Tweet can be
// 1) a reply to another tweet (then it has both in-reply-to-user-id and
// in-reply-to-status-id set),
// 2) directed-at a user (then it only has in-reply-to-user-id set),
// 3) not a reply at all.
-    // Additionally, it may or may not be a retweet (if it is, then it has retweet-user-id and
+    // Additionally, it may or may not be a Retweet (if it is, then it has retweet-user-id and
// retweet-status-id set).
//
// We want to set some fields unconditionally, and some fields (reference-author-id and
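The six combinations enumerated in the comment above are three reply cases crossed with retweet-or-not. A minimal sketch of that classification, with hypothetical field names and the same "positive id means set" convention as the `filter(x -> x > 0)` calls in the diff:

```python
# Sketch of the 3 x 2 = 6 cases from the comment above. An id of 0 (or
# less) means the field is unset, matching the Optional.filter(x -> x > 0)
# pattern in the Java code.
def classify(in_reply_to_user_id, in_reply_to_status_id, retweet_status_id):
    if in_reply_to_user_id > 0 and in_reply_to_status_id > 0:
        reply = "reply_to_tweet"      # 1) reply to another Tweet
    elif in_reply_to_user_id > 0:
        reply = "directed_at_user"    # 2) directed-at a user
    else:
        reply = "not_a_reply"         # 3) not a reply at all
    kind = "retweet" if retweet_status_id > 0 else "original"
    return reply, kind
```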
@@ -22,13 +22,13 @@
/**
* Loads the scoring models for tweets and provides access to them.
*
- * This class relies on a list ModelLoader objects to retrieve the objects from them. It will
+ * This class relies on a list of ModelLoader objects to retrieve the objects from them. It will
* return the first model found according to the order in the list.
*
* For production, we load models from 2 sources: classpath and HDFS. If a model is available
* from HDFS, we return it, otherwise we use the model from the classpath.
*
- * The models used in for default requests (i.e. not experiments) MUST be present in the
+ * The models used for default requests (i.e. not experiments) MUST be present in the
* classpath, this allows us to avoid errors if they can't be loaded from HDFS.
* Models for experiments can live only in HDFS, so we don't need to redeploy Earlybird if we
* want to test them.
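The loader-list behavior the Javadoc above describes — try each source in order and return the first model found, so HDFS can override the classpath fallback — can be sketched as:

```python
# Sketch of the "first model found wins" lookup described in the Javadoc
# above. Each loader is a callable returning a model or None; listing the
# HDFS loader before the classpath loader lets HDFS models override the
# classpath copies without a redeploy.
def load_model(name, loaders):
    for loader in loaders:
        model = loader(name)
        if model is not None:
            return model
    return None
```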
