This project aims to predict the success of marketing campaigns using multimodal data. By combining modalities such as text and images with commonsense knowledge from Knowledge Graphs, we build a predictive model that helps identify the most effective marketing strategies.
📄 Related Paper:
Our research on this topic has been published on arXiv. You can read the full paper here:
🔗 Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning
In this project, we leverage multimodal data to predict the success of marketing campaigns. By analyzing textual content, images, and other relevant features, we aim to build a model that can accurately predict the performance of different marketing strategies.
The dataset used in this project consists of a combination of textual data, image data, and numerical features. The dataset is collected from various marketing campaigns and contains information such as campaign text, campaign images, target demographics, and campaign success metrics.
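To make the data layout concrete, a single campaign record might look like the sketch below. The field names are illustrative assumptions, not the dataset's actual schema:

```python
# Illustrative sketch of one campaign record; field names are assumptions,
# not the dataset's actual schema.
campaign = {
    "campaign_text": "Help us launch an eco-friendly water bottle ...",
    "image_path": "data/images/campaign_001.jpg",              # campaign image
    "target_demographics": {"age_range": "18-34", "region": "US"},
    "success": 1,  # success metric: 1 = campaign met its goal, 0 = it did not
}
```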
The project has the following structure:
- `data/`: Directory containing the sample dataset files and images.
- `src/`: Directory containing the different models used in the project, such as text models, image models, and the multimodal model.
- `utils/`: Directory containing utility functions for data preprocessing and evaluation.
- `notebooks/`: Directory containing Jupyter notebooks for data analysis, text modeling, image processing, and multimodal modeling.
- `README.md`: The file you're currently reading.
- `requirements.txt`: File specifying the project dependencies.
To set up the project, follow these steps:
- Clone the repository:
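For example (the URL below is a placeholder; substitute this repository's actual clone URL):

```bash
git clone <repository-url>
cd <repository-directory>
```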
- Install the required dependencies:
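Since the dependencies are listed in `requirements.txt`, a standard pip install covers them:

```bash
pip install -r requirements.txt
```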
- Download the necessary dataset files and place them in the `data/` directory.
To run the project, use the provided Jupyter notebooks in the `notebooks/` directory. Each notebook focuses on a specific aspect of the project, such as data analysis, text modeling, image processing, or multimodal modeling. Follow the instructions in the notebooks to execute the code and reproduce the results.
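For example, Jupyter can be launched from the project root so all notebooks appear in the dashboard (assuming Jupyter is installed alongside the project requirements):

```bash
jupyter notebook notebooks/
```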
To train the multimodal model from the command line, run:

```bash
python mmbt/train.py --batch_sz 4 --gradient_accumulation_steps 40 \
  --savedir /path/to/savedir/ --name mmbt_model_run \
  --data_path /path/to/datasets --task food101 --task_type classification \
  --model mmbt --num_image_embeds 3 --freeze_txt 5 --freeze_img 3 \
  --patience 5 --dropout 0.1 --lr 5e-05 --warmup 0.1 --max_epochs 100 --seed 1
```

Point `--data_path` at your dataset directory and `--savedir` at the directory where results should be written.
The project aims to produce accurate predictions of marketing campaign success from multimodal data. The final model's performance is evaluated using standard classification metrics, and the results are presented in the notebooks or in a separate evaluation report.
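As a minimal sketch of that evaluation (the labels below are dummy placeholders, not project results), classification metrics such as accuracy and F1 can be computed with scikit-learn:

```python
from sklearn.metrics import accuracy_score, f1_score

# Dummy placeholder labels; in practice these would be the held-out
# ground-truth success labels and the trained model's predictions.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]

print("Accuracy:", accuracy_score(y_true, y_pred))  # fraction of correct predictions
print("F1 score:", f1_score(y_true, y_pred))        # harmonic mean of precision and recall
```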
Contributions to this project are welcome! If you have any ideas, suggestions, or improvements, please create an issue or submit a pull request.
If you find our work useful, please consider citing our paper:
@inproceedings{padhi2024enhancing,
author = {Trilok Padhi and others},
title = {Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning},
booktitle = {2024 IEEE International Conference on Big Data (BigData)},
publisher = {IEEE},
year = {2024}
}
This project is licensed under the MIT License.