Example Streaming App 🚀🚀

This repository serves as a point of reference when developing a streaming application with Memgraph and a message broker such as Kafka.

(Architecture diagram: Example Streaming App)

The KafkaProducer represents the source of your data: transactions, queries, metadata, or something else entirely. In this minimal example we propose a simple string format that is easy to parse. The producer sends the data to Kafka under a topic aptly named topic. The Backend implements a KafkaConsumer: it consumes messages from Kafka and also queries Memgraph for graph analysis, feature extraction, or storage.
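As an illustration, here is what such easy-to-parse messages could look like. The field layout below is a hypothetical example for this README, not necessarily the exact format used in this repository:

```python
# Hypothetical message layout (the exact format in this repository may differ):
# one record type for nodes and one for edges, both trivially split on commas.
node_message = "node,Person,1"   # create a Person node with id 1
edge_message = "edge,1,2"        # connect node 1 and node 2 with CONNECTED_WITH
```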

Installation

Install Kafka and Memgraph using the instructions in the directories of the same name. Then choose a programming language from the list of supported languages and follow the instructions given there.

List of supported programming languages

How does it work exactly?

KafkaProducer

The KafkaProducer in ./kafka/producer creates nodes labeled Person that are connected by edges of type CONNECTED_WITH. This repository provides a static producer that reads entries from a file and a stream producer that emits a new entry every X seconds.
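A minimal sketch of a stream producer, assuming the kafka-python client, a broker on localhost:9092, and the hypothetical message format shown above; the actual producer in ./kafka/producer may be structured differently:

```python
import random
import time

from kafka import KafkaProducer  # pip install kafka-python

producer = KafkaProducer(bootstrap_servers="localhost:9092")

node_id = 0
while True:
    node_id += 1
    # Announce a new Person node, then connect it to a random earlier node.
    producer.send("topic", f"node,Person,{node_id}".encode("utf-8"))
    if node_id > 1:
        other_id = random.randint(1, node_id - 1)
        producer.send("topic", f"edge,{node_id},{other_id}".encode("utf-8"))
    producer.flush()
    time.sleep(1)  # produce a new entry every X seconds (here X = 1)
```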

Backend

The backend takes one message at a time from Kafka, parses it as a CSV line, converts it into an openCypher query, and sends it to Memgraph. After storing a node in Memgraph, the backend asks Memgraph how many adjacent nodes that node has and prints the result to the terminal.
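A minimal sketch of such a backend, assuming the kafka-python and pymgclient clients, a Kafka broker on localhost:9092, Memgraph on localhost:7687, and the hypothetical message format above; the real backend for each supported language lives in its own directory and may differ:

```python
import csv

import mgclient                  # pip install pymgclient
from kafka import KafkaConsumer  # pip install kafka-python

consumer = KafkaConsumer("topic", bootstrap_servers="localhost:9092")
db = mgclient.connect(host="127.0.0.1", port=7687)
db.autocommit = True
cursor = db.cursor()

for message in consumer:
    # Parse the comma-separated payload, e.g. "node,Person,1" or "edge,1,2".
    fields = next(csv.reader([message.value.decode("utf-8")]))
    if fields[0] == "node":
        node_id = int(fields[2])
        cursor.execute("CREATE (:Person {id: $id});", {"id": node_id})
        # Ask Memgraph how many adjacent nodes the new node has.
        cursor.execute("MATCH (p:Person {id: $id})--(n) RETURN count(n);",
                       {"id": node_id})
        print("neighbors:", cursor.fetchall()[0][0])
    elif fields[0] == "edge":
        cursor.execute("MATCH (a:Person {id: $a}), (b:Person {id: $b}) "
                       "CREATE (a)-[:CONNECTED_WITH]->(b);",
                       {"a": int(fields[1]), "b": int(fields[2])})
```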

Memgraph

You can think of Memgraph as two separate components: a storage engine and an algorithm execution engine. First, we create a trigger: an algorithm that runs every time a node is inserted. After each query executes, the trigger calculates and updates the number of neighbors of every affected node.
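A sketch of how such a trigger could be created through pymgclient, assuming Memgraph's CREATE TRIGGER syntax with the predefined createdVertices variable; the trigger name and the neighbors property below are illustrative, and the trigger used in this repository may look different:

```python
import mgclient  # pip install pymgclient

db = mgclient.connect(host="127.0.0.1", port=7687)
db.autocommit = True
cursor = db.cursor()

# After every committed node creation, recount the neighbors of each created
# node and store the count on the node itself.
cursor.execute("""
    CREATE TRIGGER update_neighbor_count ON () CREATE AFTER COMMIT EXECUTE
    UNWIND createdVertices AS v
    OPTIONAL MATCH (v)--(n)
    WITH v, count(n) AS neighbor_count
    SET v.neighbors = neighbor_count;
""")
```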
