
Multimodal emo and transformer #202

Merged: 9 commits merged from multimodal-emo-and-transformer into master on Apr 9, 2020

Conversation

xhyandwyy (Collaborator):

Thank you for submitting a pull request! Please provide the following information for code review:

Pull Request Summary

Refactor the transformer, and add a multimodal attention alignment model.

Test Plan

Add relevant tests.
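
For context on the "multimodal attention align" idea, here is a minimal sketch of a cross-modal attention layer in which text queries attend over audio frames. This is an illustrative assumption about the design, not the layer merged in this PR; the class name, shapes, and choice of modalities are hypothetical:

```python
import tensorflow as tf


class MultimodalAlignAttention(tf.keras.layers.Layer):
  """Illustrative cross-modal attention: text queries attend over audio.

  Sketch only; not the implementation merged in this PR.
  """

  def __init__(self, units, **kwargs):
    super().__init__(**kwargs)
    self.query_proj = tf.keras.layers.Dense(units)
    self.key_proj = tf.keras.layers.Dense(units)
    self.value_proj = tf.keras.layers.Dense(units)

  def call(self, text, audio):
    # text:  [batch, text_len, d_text]; audio: [batch, audio_len, d_audio]
    q = self.query_proj(text)                   # [batch, text_len, units]
    k = self.key_proj(audio)                    # [batch, audio_len, units]
    v = self.value_proj(audio)                  # [batch, audio_len, units]
    scores = tf.matmul(q, k, transpose_b=True)  # [batch, text_len, audio_len]
    scores /= tf.math.sqrt(tf.cast(tf.shape(k)[-1], scores.dtype))
    weights = tf.nn.softmax(scores, axis=-1)    # soft alignment over audio
    return tf.matmul(weights, v)                # audio features aligned to text
```

The softmax weights give, for each text step, a soft alignment over audio frames, which is the sense in which such a layer "aligns" the two modalities.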

```diff
@@ -109,8 +109,8 @@ def generate_data(self):
         lambda x: compute_sen_lens(x, padding_token=utils.PAD_IDX),
         num_parallel_calls=self.num_parallel_calls)

-    src_ds = src_ds.map(
-        self.exclude_padding, num_parallel_calls=self.num_parallel_calls)
+    # src_ds = src_ds.map(
```
Collaborator:

Why not remove it instead of commenting it out?
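
For reference, the pattern under discussion is a standard tf.data map stage. Here is a self-contained sketch of the length-computation step, where the toy dataset and the `PAD_IDX` value are assumptions standing in for DELTA's actual data and `utils.PAD_IDX`:

```python
import tensorflow as tf

PAD_IDX = 0  # assumed padding id, standing in for utils.PAD_IDX


def compute_sen_lens(ids, padding_token=PAD_IDX):
  # Sentence length = number of non-padding tokens in one sequence.
  return tf.reduce_sum(
      tf.cast(tf.not_equal(ids, padding_token), tf.int32))


# Toy dataset of already-padded token-id sequences.
src_ds = tf.data.Dataset.from_tensor_slices(
    [[5, 3, 9, 0, 0],
     [7, 2, 0, 0, 0]])

# Each map stage can run its elementwise transform in parallel.
len_ds = src_ds.map(
    lambda x: compute_sen_lens(x, padding_token=PAD_IDX),
    num_parallel_calls=tf.data.experimental.AUTOTUNE)

for sen_len in len_ds:
  print(sen_len.numpy())  # 3, then 2
```

Under this structure, the `exclude_padding` stage would simply be another `src_ds.map(...)` call, which is why leaving it commented out rather than deleting it drew the question above.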

```diff
@@ -42,6 +43,10 @@ def _load_text(text_path):
   return text


+def _process_text(text):
```
Collaborator:

You can make it a static method of the class.
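
Concretely, since `_process_text` needs no instance state, it can live on the owning class as a `@staticmethod` instead of as a module-level helper. A minimal sketch, where the class name `TextTask` and the processing body are hypothetical:

```python
class TextTask:
  """Hypothetical owner class for the text-loading helpers."""

  @staticmethod
  def _process_text(text):
    # Needs no self/cls: a static method keeps the helper
    # namespaced under the class instead of at module level.
    return text.strip().lower()

  def load_text(self, text_path):
    with open(text_path, encoding="utf-8") as f:
      text = f.read()
    return self._process_text(text)
```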

@applenob merged commit fda4e59 into master on Apr 9, 2020, and deleted the multimodal-emo-and-transformer branch the same day at 06:42.
Jacke pushed a commit to NocturnalGlory/delta that referenced this pull request on Sep 29, 2022:
Resolve "update requirements.txt"

Closes Delta-ML#202

See merge request DELTA_Group/delta!25