Neurosymbolic Multimodal Marketing (Accepted at IEEE BigData 2024)


Marketing-AI

Multimodal Marketing Success Prediction

This project predicts the success of marketing campaigns using multimodal data. By combining data such as text and images with commonsense knowledge from Knowledge Graphs, we build a predictive model that helps identify the most effective marketing strategies.

📄 Related Paper:
Our research on this topic has been published on arXiv. You can read the full paper here:
🔗 Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning

Table of Contents

  • Introduction
  • Data
  • Project Structure
  • Setup
  • Usage
  • Results
  • Contributing
  • Citation
  • License

Introduction

In this project, we leverage multimodal data to predict the success of marketing campaigns. By analyzing textual content, images, and other relevant features, we aim to build a model that can accurately predict the performance of different marketing strategies.
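The fusion idea above can be sketched as follows. This is a minimal illustration only: the embedding sizes, the random vectors standing in for encoder outputs, and the linear scorer are all assumptions for demonstration, not the trained model from this repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for learned encoder outputs (dimensions are illustrative).
text_emb = rng.standard_normal(768)    # e.g., a transformer text embedding
image_emb = rng.standard_normal(512)   # e.g., a CNN image embedding
kg_emb = rng.standard_normal(128)      # commonsense Knowledge Graph features

# Late fusion by concatenation, then a linear score squashed to [0, 1].
fused = np.concatenate([text_emb, image_emb, kg_emb])
weights = rng.standard_normal(fused.shape[0])
score = 1.0 / (1.0 + np.exp(-(weights @ fused)))  # sigmoid -> success probability

print(f"predicted success probability: {score:.3f}")
```

In the actual model, the random vectors are replaced by trained text, image, and knowledge encoders, and the linear scorer by a learned classification head.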

Data

The dataset used in this project consists of a combination of textual data, image data, and numerical features. The dataset is collected from various marketing campaigns and contains information such as campaign text, campaign images, target demographics, and campaign success metrics.
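As a rough illustration of the record structure described above, one campaign might look like the following. The field names and values here are hypothetical; the actual files in data/ may use different columns.

```python
from dataclasses import dataclass

@dataclass
class Campaign:
    """Hypothetical layout of one campaign record."""
    text: str            # campaign text
    image_path: str      # path to the campaign image
    demographics: dict   # target demographics
    succeeded: bool      # campaign success label

sample = Campaign(
    text="Back our solar-powered backpack!",
    image_path="data/images/campaign_001.jpg",
    demographics={"age_range": "25-34", "region": "US"},
    succeeded=True,
)
```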

Project Structure

The project has the following structure:

  • data/: Directory containing the sample dataset files and images.
  • src/: Directory containing the different models used in the project, such as text models, image models, and the multimodal model.
  • utils/: Directory containing utility functions for data preprocessing and evaluation.
  • notebooks/: Directory containing Jupyter notebooks for data analysis, text modeling, image processing, and multimodal modeling.
  • README.md: The file you are currently reading.
  • requirements.txt: File specifying the project dependencies.

Setup

To set up the project, follow these steps:

  1. Clone the repository:

  2. Install the required dependencies:

  3. Download the necessary dataset files and place them in the data/ directory.
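Assuming a standard GitHub workflow, steps 1 and 2 might look like this (adjust the Python environment management to your setup):

```shell
# Clone the repository and enter it
git clone https://github.com/SWAN-AI/Marketing-AI.git
cd Marketing-AI

# Install the project dependencies
pip install -r requirements.txt
```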

Usage

To run the project, you can use the provided Jupyter notebooks in the notebooks/ directory. Each notebook focuses on a specific aspect of the project, such as data analysis, text modeling, image processing, and multimodal modeling. Follow the instructions in the notebooks to execute the code and reproduce the results.

To run training from the command line, use:

```shell
python mmbt/train.py --batch_sz 4 --gradient_accumulation_steps 40 \
  --savedir /path/to/savedir/ --name mmbt_model_run \
  --data_path /path/to/datasets --task food101 --task_type classification \
  --model mmbt --num_image_embeds 3 --freeze_txt 5 --freeze_img 3 \
  --patience 5 --dropout 0.1 --lr 5e-05 --warmup 0.1 \
  --max_epochs 100 --seed 1
```

Results

The project aims to achieve accurate predictions of marketing campaign success using multimodal data. The final model's performance is evaluated using appropriate metrics, and the results are presented in the notebooks or in a separate evaluation report.

Contributing

Contributions to this project are welcome! If you have any ideas, suggestions, or improvements, please create an issue or submit a pull request.

Citation 📖

If you find our work useful, please consider citing our paper:

```bibtex
@inproceedings{padhi2024enhancing,
  author    = {Trilok Padhi and others},
  title     = {Enhancing Cross-Modal Contextual Congruence for Crowdfunding Success using Knowledge-infused Learning},
  booktitle = {2024 IEEE International Conference on Big Data (BigData)},
  publisher = {IEEE},
  year      = {2024}
}
```

License

This project is licensed under the MIT License.
