Cornac-AB is an open-source solution for A/B testing with integration from the Cornac framework.
This tool lets you experiment with different recommendation models, visualize A/B test results, and analyze user interactions.
Screenshots: User Interaction Solution, Recommendations Dashboard, and Feedback Dashboard.
- OpenSearch Integration: Provides robust data indexing, retrieval, and visualization.
- Easy Experiment Setup: Effortlessly create A/B tests and collect user feedback, leveraging Cornac’s comprehensive evaluation mechanisms.
- Interactive Dashboards: Analyze model behavior, user interactions, and A/B test outcomes with visually rich dashboards.
Cornac is an open-source Python library designed for multimodal recommender systems.
It offers a wide variety of models for collaborative filtering, content-based, explainable, and next-item or next-basket recommendation.
Cornac is endorsed by ACM RecSys for evaluating and reproducing recommendation algorithms.
The architecture consists of the following components:
- Cornac-AB Backend Server (Backend source code): Handles API endpoints and business logic.
- Books-AB User Interaction Frontend (User Interaction Frontend source code): Provides a frontend for user interactions.
- Cornac-AB Frontend (Frontend source code): Offers a user interface to setup, track and evaluate models for the A/B tests.
- OpenSearch & OpenSearch Dashboards (Official site): Data indexing, search, and visualization.
- GoodReads 10k Dataset (Goodbooks-10k repository): Preloaded data for demonstration purposes.
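The preloaded Goodbooks-10k data is plain CSV; its ratings file stores one (user_id, book_id, rating) triple per row. As a minimal sketch of working with that layout (the column names follow the public goodbooks-10k repository; everything else here is illustrative):

```python
import csv
import io

# Sample rows in the goodbooks-10k ratings.csv layout:
# columns are user_id, book_id, rating (1-5 stars).
SAMPLE = """user_id,book_id,rating
100,258,5
100,4081,4
101,260,3
"""

def parse_ratings(text):
    """Parse ratings CSV into a list of (user_id, book_id, rating) tuples."""
    reader = csv.DictReader(io.StringIO(text))
    return [(int(r["user_id"]), int(r["book_id"]), int(r["rating"]))
            for r in reader]

ratings = parse_ratings(SAMPLE)
```

Triples in this form are what both Cornac's training routines and the OpenSearch demo indices are ultimately built from.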
To get started with Cornac-AB, you need Docker. After installing Docker, run the following command to set up the solution:
docker compose up
To run the solution without the sample dataset:
docker compose -f docker-compose-nodata.yml up
This command will start all the required components and load the GoodReads dataset into OpenSearch for A/B testing and visualization.
Once the containers are running, you can access the various parts of the solution via the following URLs:
- Books-AB User Interaction Frontend
localhost:8082
- Cornac-AB Frontend
localhost:8081
- OpenSearch API
localhost:9200
- OpenSearch Dashboards
localhost:5601
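Once the containers are up, you can quickly verify that each service responds. A small Python sketch, using only the standard library (the service names and the "any HTTP response means up" heuristic are assumptions for illustration, not part of the solution's API):

```python
import urllib.request
import urllib.error

# Default ports from the docker-compose setup.
SERVICES = {
    "Books-AB User Interaction Frontend": "http://localhost:8082",
    "Cornac-AB Frontend": "http://localhost:8081",
    "OpenSearch API": "http://localhost:9200",
    "OpenSearch Dashboards": "http://localhost:5601",
}

def check(url, timeout=3):
    """Return True if the service answers with any HTTP response at all."""
    try:
        urllib.request.urlopen(url, timeout=timeout)
        return True
    except urllib.error.HTTPError:
        return True  # got an HTTP response, so the service is up
    except (urllib.error.URLError, OSError):
        return False

if __name__ == "__main__":
    for name, url in SERVICES.items():
        print(f"{name}: {'up' if check(url) else 'down'}")
```

If any service shows as down, check the container logs with `docker compose logs`.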
This backend server is built on Spring, a production-grade, scalable framework for building web applications.
The solution connects to a local H2 database, which can easily be replaced with most SQL databases supported by Spring.
Cornac instances (based on Flask) run in this container and are restarted automatically if they go down.
The Spring Data OpenSearch library connects the backend to the OpenSearch service.
Included in this solution is a sample frontend that showcases how the solution receives user interactions.
Users first enter their User ID. To keep the sample data small, only the past book history of user IDs 100-199 is included.
Click Explore Books. The backend allocates a particular model to the user, and that model provides the recommendations shown on the explore page. These recommendation records are also stored in OpenSearch.
Users can view more details about a book by clicking on it.
When a user clicks on a book, we register that click as feedback, recorded with a click action in OpenSearch.
In this view, users can also rate the book by clicking the star icons; the rating is recorded in OpenSearch as feedback with a rate action.
User interactions in this frontend are recorded in OpenSearch as recommendation and feedback records. These can be viewed in real time in the Cornac-AB frontend dashboards, as shown in the next section.
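The click and rate events described above end up as documents in OpenSearch. The exact schema is defined by the backend; as a hedged illustration only (the field names here are assumptions, not the solution's actual mapping), a feedback document might be built like this:

```python
import json
from datetime import datetime, timezone

def make_feedback(user_id, item_id, action, value=None):
    """Build a feedback document for indexing into OpenSearch.

    action is 'click' or 'rate'; a 'rate' action carries a 1-5 star value.
    Field names are illustrative, not the backend's real mapping.
    """
    doc = {
        "userId": user_id,
        "itemId": item_id,
        "action": action,
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }
    if action == "rate":
        doc["value"] = value
    return doc

click = make_feedback(101, 4081, "click")
rating = make_feedback(101, 4081, "rate", value=4)
# Documents serialize to JSON for the OpenSearch index/bulk API.
payload = json.dumps(rating)
```

Storing both action types in one index makes it straightforward for the dashboards to filter by action when comparing models.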
You will be greeted by a welcome screen.
Going to the dashboard screen will show you multiple dashboards, including the Users, Recommendations and Feedback dashboards. Sample data based on the Goodbooks 10k dataset has already been generated and inserted for you.
Under the Feedback Dashboard section, you can filter data and compare your models further using Cornac's evaluation features by selecting the Run Cornac Evaluation button.
A summary of the data that will be put through Cornac's evaluation services is shown. You can add more metrics by selecting the Add Metric button.
You will then be shown the metric results, along with p-values for the individual models, to evaluate their performance.
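The reported p-values quantify whether an observed difference between models is likely due to chance. As an independent, simplified sketch (not the statistic Cornac-AB itself computes), a two-sided two-proportion z-test comparing the click-through rates of two models can be done in pure Python:

```python
import math

def two_proportion_p_value(clicks_a, n_a, clicks_b, n_b):
    """Two-sided two-proportion z-test on click-through rates.

    Returns the p-value: small values suggest the two models'
    CTRs genuinely differ rather than varying by chance.
    """
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    pooled = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF.
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Model A: 120 clicks out of 1000 impressions; Model B: 90 out of 1000.
p = two_proportion_p_value(120, 1000, 90, 1000)
```

A p-value below a chosen threshold (commonly 0.05) indicates the difference in CTR is statistically significant; identical CTRs yield a p-value of 1.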
Cornac-AB is a solution that showcases how A/B testing can be conducted and visualized as a forward-testing experiment. Feel free to contribute, or fork the repository and extend it to your own application needs.
This project welcomes contributions and suggestions. Before contributing, please see our contribution guidelines.
If you use Cornac in a scientific publication, we would appreciate citations to the following papers:
Cornac-AB: An Open-Source Recommendation Framework with Native A/B Testing Integration, Ong et al., In Companion Proceedings of the ACM Web Conference 2024.
@inproceedings{ong2024cornacab,
title={Cornac-AB: An Open-Source Recommendation Framework with Native A/B Testing Integration},
author={Ong, Darryl and Truong, Quoc-Tuan and Lauw, Hady W.},
booktitle={Companion Proceedings of the ACM on Web Conference 2024},
pages={1027--1030},
year={2024}
}
Cornac: A Comparative Framework for Multimodal Recommender Systems, Salah et al., Journal of Machine Learning Research, 21(95):1–5, 2020.
@article{salah2020cornac,
title={Cornac: A Comparative Framework for Multimodal Recommender Systems},
author={Salah, Aghiles and Truong, Quoc-Tuan and Lauw, Hady W},
journal={Journal of Machine Learning Research},
volume={21},
number={95},
pages={1--5},
year={2020}
}