JoyType: A Robust Design for Multilingual Visual Text Creation

[Figure: examples]

News

[2024.07.01] - Inference code is now available.
[2024.07.01] - The Hugging Face online demo is available here!
[2024.06.30] - Our online demo is available here!

TODOS

  • Release online demo
  • Release our latest checkpoint
  • Release model and training code
  • Support JoyType in ComfyUI
  • Release our research paper

Methodology

The figures below illustrate the overall framework of our method, covering data collection, the training pipeline, and the inference pipeline.

In the data collection phase, we leveraged the open-source CapOnImage2M dataset and selected a subset of 1M images. For each selected image, we employed a visual language model (e.g., CogVLM) to generate a textual description, thereby obtaining a prompt associated with the image. We then applied the Canny algorithm to extract edges from the text regions within each image, producing a canny map.

The training pipeline comprises three primary components: the latent diffusion module, the Font ControlNet module, and the loss design module. During training, the raw image, the canny map, and the prompt are fed into the Variational Autoencoder (VAE), Font ControlNet, and the text encoder, respectively. The loss is split into two parts: one in latent space and one in pixel space. In latent space, we use the loss $L_{LDM}$ of Latent Diffusion Models, as defined in the original paper. The latent features are then decoded back into images via the VAE decoder. In pixel space, the text regions of the predicted image and the ground-truth image are cropped and passed through an OCR model independently; we extract the convolutional-layer features from the OCR model and compute the mean squared error (MSE) between the features of each layer, which constitutes the loss $L_{ocr}$.

During inference, the image prompt, the textual content, and the specified areas for text generation are fed into the text encoder and Font ControlNet, respectively. The final image is then generated by the VAE decoder.

[Figure: framework_1]
[Figure: framework_2]
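
To make the canny-map step concrete, here is a minimal sketch using OpenCV. The helper name, the bounding-box format, the Canny thresholds, and the idea of keeping edges only inside text regions on an otherwise blank canvas are assumptions for illustration, not the released preprocessing code.

import cv2
import numpy as np

def build_canny_map(image_bgr, text_boxes, low=100, high=200):
    # image_bgr:  H x W x 3 uint8 image (OpenCV BGR order)
    # text_boxes: list of (x1, y1, x2, y2) pixel boxes around text
    #             (box format and thresholds are illustrative assumptions)
    h, w = image_bgr.shape[:2]
    canny_map = np.zeros((h, w), dtype=np.uint8)          # blank canvas
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    for x1, y1, x2, y2 in text_boxes:
        edges = cv2.Canny(gray[y1:y2, x1:x2], low, high)  # glyph edges
        canny_map[y1:y2, x1:x2] = edges                   # paste back in place
    return canny_map

The pixel-space term $L_{ocr}$ described above is a perceptual-style loss on OCR features. Below is a minimal PyTorch sketch; the choice of OCR backbone, which convolutional layers to tap, and the unweighted sum over layers are assumptions, since the training code has not been released yet.

import torch.nn.functional as F

def ocr_feature_loss(pred_crops, gt_crops, ocr_conv_layers):
    # pred_crops, gt_crops: (N, C, H, W) tensors of cropped text regions
    #                       from the predicted and ground-truth images
    # ocr_conv_layers:      list of frozen conv blocks from an OCR model
    #                       (which layers to use is an assumption here)
    loss = 0.0
    feat_pred, feat_gt = pred_crops, gt_crops
    for layer in ocr_conv_layers:
        feat_pred = layer(feat_pred)
        feat_gt = layer(feat_gt)
        loss = loss + F.mse_loss(feat_pred, feat_gt)      # per-layer MSE
    return loss

The overall training objective then combines the two terms; a natural form is $L = L_{LDM} + \lambda L_{ocr}$ for some weight $\lambda$, though the exact weighting is not stated in this README.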

Installation

# Create and activate a conda environment
conda create -n joytype python=3.9
conda activate joytype
# Clone joytype repo
git clone ...
cd JoyType
# Install requirements
pip install -r requirements.txt

Inference

[Recommended]: We have already released demos on JDHealth and Hugging Face!

You can run inference with the following command:

python infer.py --prompt "a card" --input_yaml examples/test.yaml --img_name test
  • prompt corresponds to the text description of the image you want to generate
  • input_yaml corresponds to the layout information of the texts in the generated image (a hypothetical layout sketch follows this list)
  • img_name corresponds to the file name of the generated image
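
The exact schema of the layout file is defined by examples/test.yaml in the repository; the snippet below is only a hypothetical illustration of the kind of information such a file carries. The field names and coordinate conventions are invented for this sketch.

# hypothetical layout sketch - see examples/test.yaml for the real schema
texts:
  - content: "Hello"          # text to render
    box: [50, 40, 300, 120]   # region for this text (illustrative format)
  - content: "World"
    box: [50, 160, 300, 240]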

You can see more arguments by:

python infer.py --help

Please note that the model is pulled from Hugging Face by default. If you want to load it locally, pre-download the model from here and set the --load_path argument.
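
For example, a local run could look like this (the path is only a placeholder for wherever you downloaded the weights):

python infer.py --prompt "a card" --input_yaml examples/test.yaml --img_name test --load_path /path/to/local/model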

Gallery

[Figure: gallery]
