
Feature extraction for sequential labelling #64

Closed
zhaoxy92 opened this issue Nov 29, 2018 · 47 comments
Labels: Discussion, wontfix

@zhaoxy92

Hi, I have a question about using BERT for a sequence labeling task.
Please correct me if I'm wrong.
My understanding is:

  1. Use BertModel loaded with pretrained weights instead of BertForMaskedLM.
  2. In that case, given a sequence of tokens as input, BertModel outputs a list of hidden states, and I only use the top layer's hidden states as the embedding for that sequence.
  3. Then, to fine-tune the model, add a fully connected linear layer and a softmax to make the final decision.

Is this entire process correct? I followed this procedure but could not get any reasonable results.

Thank you!

@thomwolf (Member)

Well that seems like a good approach. Maybe you can find some inspiration in the code of the BertForQuestionAnswering model? It is not exactly what you are doing but maybe it can help.

@zhaoxy92 (Author)

Thanks. It worked. However, an interesting issue with BERT is that it's highly sensitive to the learning rate, which makes it very difficult to combine with other models.

@bheinzerling commented Nov 30, 2018

@zhaoxy92 what sequence labeling task are you doing? I've got CoNLL'03 NER running with the bert-base-cased model, and also found the same sensitivity to hyper-parameters.

The best dev F1 score I've gotten after a day of trying some parameters is 94.6, which is a bit lower than the 96.4 dev score for BERT_base reported in the paper. I guess more tuning will increase the score some more.

The best configuration for me so far is:

  • Batch size: 160 (on four P40 GPUs with 24GB RAM each). Smaller batch sizes that fit on one or two GPUs give bad results.
  • Optimizer: Adam with learning rate 1e-4. Tried BertAdam with learning rate 1e-5, but it didn't seem to converge.
  • fp16/fp32: Only fp32 works. Tried fp16 (half precision) to allow larger batch sizes, but this gave really low scores, with and without loss scaling.

Also, properly averaging the loss is important: not just loss /= batch_size. You need to take into account padding and word pieces without predictions (google-research/bert#33 (comment)). If you have a mask tensor that indicates which BERT inputs correspond to tagged tokens, then the proper averaging is loss /= mask.float().sum().
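A minimal sketch of that masked averaging, assuming pred_logits of shape (batch, seq_len, n_labels) and labels padded with -1 at positions without a prediction (the names and shapes are illustrative, not taken from the code in this thread):

import torch
import torch.nn.functional as F

def masked_average_loss(pred_logits, labels):
    # positions labelled -1 (padding / word pieces without predictions) carry no loss
    mask = labels != -1
    per_token_loss = F.cross_entropy(
        pred_logits.view(-1, pred_logits.size(-1)),  # (batch * seq_len, n_labels)
        labels.clamp(min=0).view(-1),                # clamp so -1 becomes a valid (but masked) index
        reduction="none",
    )
    per_token_loss = per_token_loss * mask.view(-1).float()  # zero out masked positions
    return per_token_loss.sum() / mask.float().sum()         # average over real predictions only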

Another tip: truncating the input (#66) enables much larger batch sizes. Without it the largest possible batch size was 56, but with truncation 160 is possible.

@zhaoxy92 (Author)

I am also working on CoNLL'03 and got similar results to yours.

@srslynow commented Dec 3, 2018

@bheinzerling with the risk of going off topic here, would you mind sharing your code? I'd love to read and adapt it for a similar sequential classification task.

@bheinzerling commented Dec 3, 2018

I have some code for preparing batches here:

https://github.com/bheinzerling/dougu/blob/2f54b14d588f17d77b7a8bca9f4e5eb38d6a2805/dougu/bert.py#L98

The important methods are subword_tokenize_to_ids and subword_tokenize, you can probably ignore the other stuff.

With this, feature extraction for each sentence, i.e. a list of tokens, is simply:

bert = dougu.bert.Bert.Model("bert-base-cased")
featurized_sentences = []
for tokens in sentences:
    features = {}
    features["bert_ids"], features["bert_mask"], features["bert_token_starts"] = bert.subword_tokenize_to_ids(tokens)
    featurized_sentences.append(features)

Then I use a custom collate function for a DataLoader that turns featurized_sentences into batches:

def collate_fn(featurized_sentences_batch):
    bert_batch = [
        torch.cat([features[key] for features in featurized_sentences_batch], dim=0)
        for key in ("bert_ids", "bert_mask", "bert_token_starts")]
    return bert_batch

A simple sequence tagger module would look something like this:

class SequenceTagger(torch.nn.Module):
    def __init__(self, data_parallel=True):
        super().__init__()
        bert = BertModel.from_pretrained("bert-base-cased").to(device=torch.device("cuda"))
        if data_parallel:
            self.bert = torch.nn.DataParallel(bert)
        else:
            self.bert = bert
        bert_dim = 768  # (or get the dim from BertEmbeddings)
        n_labels = 5  # need to set this for your task
        self.out = torch.nn.Linear(bert_dim, n_labels)
        ...  # dropout, log_softmax...

    def forward(self, bert_batch, true_labels):
        bert_ids, bert_mask, bert_token_starts = bert_batch
        # truncate to longest sequence length in batch (usually much smaller than 512) to save GPU RAM
        max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item()
        if max_length < bert_ids.shape[1]:
            bert_ids = bert_ids[:, :max_length]
            bert_mask = bert_mask[:, :max_length]

        segment_ids = torch.zeros_like(bert_mask)  # dummy segment IDs, since we only have one sentence
        bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]
        # select the states representing each token start, for each instance in the batch
        bert_token_reprs = [
            layer[starts.nonzero().squeeze(1)]
            for layer, starts in zip(bert_last_layer, bert_token_starts)]
        # need to pad because sentence length varies
        padded_bert_token_reprs = pad_sequence(
            bert_token_reprs, batch_first=True, padding_value=-1)
        # output/classification layer: input bert states and get log probabilities for cross entropy loss
        pred_logits = self.log_softmax(self.out(self.dropout(padded_bert_token_reprs)))
        mask = true_labels != -1  # I did set label = -1 for all padding tokens somewhere else
        loss = cross_entropy(pred_logits, true_labels)
        # average/reduce the loss according to the actual number of predictions (i.e. one prediction per token)
        loss /= mask.float().sum()
        return loss

Wrote this without checking if it runs (my actual code is tied into some other things so I cannot just copy&paste it), but it should help you get started.
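For what it's worth, a sketch of how the collate function above could be plugged into a DataLoader; the batch size and shuffling here are arbitrary choices, not from the comment above:

from torch.utils.data import DataLoader

# featurized_sentences is the list of feature dicts built in the loop above
loader = DataLoader(
    featurized_sentences,
    batch_size=32,          # arbitrary; use whatever fits your GPUs
    shuffle=True,
    collate_fn=collate_fn,  # stacks bert_ids, bert_mask, bert_token_starts into batch tensors
)

for bert_batch in loader:
    bert_ids, bert_mask, bert_token_starts = bert_batch  # pass these (plus your label tensors) to SequenceTagger.forward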

@rremani commented Jan 12, 2019

@bheinzerling Thanks a lot for the starter, got awesome results!

@nijianmo

Thanks for sharing these tips here! It helps a lot.

I tried to fine-tune BERT on several imbalanced datasets and found the results quite unstable... By an imbalanced dataset, I mean one with many more O labels than the others under the {B,I,O} tagging scheme. I tried a weighted cross-entropy loss, but the performance is still not as expected. Has anyone run into the same issue?

Thanks!
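In case it helps others reading along: the weighted cross-entropy mentioned above is usually done through the weight argument of PyTorch's loss. A minimal sketch with made-up weights, assuming five {B,I,O}-style classes with O in the majority:

import torch
import torch.nn as nn

# hypothetical class order: [B-PER, I-PER, O, B-LOC, I-LOC]; down-weight the frequent O class
class_weights = torch.tensor([1.0, 1.0, 0.1, 1.0, 1.0])
criterion = nn.CrossEntropyLoss(weight=class_weights, ignore_index=-1)

# logits: (batch * seq_len, n_labels), labels: (batch * seq_len,) with -1 on padding positions
# loss = criterion(logits, labels)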

@kugwzk commented Jan 27, 2019

Hi @bheinzerling,
I used batch size 16 and lr 2e-5, and got dev F1 = 0.951 and test F1 = 0.914, which is lower than ELMo. What is your result now?

@bheinzerling

@kugwzk I didn't do any more CoNLL'03 runs since the numbers reported in the BERT paper were apparently achieved by using document context, which is different from the standard sentence-based evaluation. You can find more details here: allenai/allennlp#2067 (comment)

@kugwzk commented Jan 28, 2019

Hmmm... I think they should mention that in the paper... And do you know where to find out that they used document context?

@bheinzerling

That's what the folks over at allennlp said. I don't know where they got this information, maybe personal communication with one of the BERT authors?

@kugwzk commented Jan 28, 2019

Anyway, thank you very much for telling me that.

@kamalkraj (Contributor)

https://github.com/kamalkraj/BERT-NER
Replicated results from BERT paper

@JianLiu91

https://github.com/JianLiu91/bert_ner gives a solution that is very easy to understand.
However, I still wonder whether it is the best practice.

@dangal95 commented May 14, 2019

Hi all,

I am trying to train the BERT model on some data that I have. However, I am having trouble understanding how to adjust the labels following tokenization. I am trying to perform word-level classification (similar to NER).

If I have the following tokenized sentence and its labels:

original_tokens = ['The', '<start>', 'eng-30-01258617-a', '<end>', 'frailty']
original_labels = [0, 2, 3, 4, 1]

Then after using the BERT tokenizer I get the following:
bert_tokens = ['[CLS]', 'the', '<start>', 'eng-30-01258617-a', '<end>', 'frail', '##ty', '[SEP]']

Also, I adjust my label array as follows:
bert_labels = [0, 2, 3, 4, 1, 1]

N.B. Tokens such as eng-30-01258617-a are not tokenized further, as I included an ignore list containing words and tokens that I do not want tokenized, and I swapped them with the [unusedXXX] tokens found in the vocab.txt file.

Notice how the last word 'frailty' is transformed into ['frail', '##ty'] and the label '1' which was used for the whole word is now placed under each word piece. Is this the correct way of doing it? If you would like a more in-depth explanation of what I am trying to achieve you can read the following: https://stackoverflow.com/questions/56129165/how-to-handle-labels-when-using-the-berts-wordpiece-tokenizer

Any help would be greatly appreciated! Thanks in advance

@bheinzerling

@dangal95, adjusting the original labels is probably not the best way. A simpler method that works well is described in this issue, here #64 (comment)

stale bot commented Jul 14, 2019

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label on Jul 14, 2019
@g-jing commented Jul 18, 2019

@nijianmo Hi, I am recently considering using a weighted loss in an NER task. I wonder if you have tried a weighted CRF or weighted softmax in a PyTorch implementation. If so, did you get good performance? Thanks in advance.

stale bot removed the wontfix label on Jul 18, 2019
@srslynow

@zhaoxy92 @thomwolf @bheinzerling @srslynow @rremani
Sorry for tagging all of you. I wonder how to set the weight decay for parameters outside the BERT encoder, for example the CRF parameters after the BERT output. Should I set it to 0.01 or 0? Sorry again for tagging all of you, but it is kind of urgent.

This repository does not use a CRF for NER classification. Anyway, the parameters of a CRF depend on the data distribution you have. These links might be useful: https://towardsdatascience.com/conditional-random-field-tutorial-in-pytorch-ca0d04499463 and https://pytorch.org/tutorials/beginner/nlp/advanced_tutorial.html

@g-jing commented Sep 10, 2019

@srslynow Thanks for your answer! I am familiar with CRFs, but I am somewhat confused about how to set the weight decay when a CRF is connected to BERT. The authors and Hugging Face do not seem to have mentioned how to set weight decay for parameters outside the BERT encoder.
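For readers with the same question: a common pattern (also used in the library's example scripts) is to build optimizer parameter groups so that biases and LayerNorm weights get no weight decay. A hedged sketch of extending that grouping to a hypothetical BERT+CRF model whose CRF submodule is named crf; whether CRF transitions should be decayed at all is a judgment call this thread does not settle:

from torch.optim import AdamW  # or the AdamW shipped with the transformers library

no_decay = ["bias", "LayerNorm.weight"]
grouped_parameters = [
    {   # encoder weights: apply weight decay
        "params": [p for n, p in model.named_parameters()
                   if "crf" not in n and not any(nd in n for nd in no_decay)],
        "weight_decay": 0.01,
    },
    {   # biases and LayerNorm weights: no decay
        "params": [p for n, p in model.named_parameters()
                   if "crf" not in n and any(nd in n for nd in no_decay)],
        "weight_decay": 0.0,
    },
    {   # CRF parameters: shown here with no decay as one reasonable default
        "params": [p for n, p in model.named_parameters() if "crf" in n],
        "weight_decay": 0.0,
    },
]
optimizer = AdamW(grouped_parameters, lr=2e-5)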

@chnsh commented Oct 14, 2019

Thanks to #64 (comment), I could get the implementation to work. For anyone else that's struggling to reproduce the results: https://github.com/chnsh/BERT-NER-CoNLL

@kamalkraj (Contributor)

BERT-NER in Tensorflow 2.0
https://github.com/kamalkraj/BERT-NER-TF

@antgr commented Nov 1, 2019

(quoting @bheinzerling's batch-preparation code and SequenceTagger sketch from earlier in this thread, in particular the bert_last_layer line)

Hi, I am trying to make your code work, and here is my setup: I re-declare everything that is needed as free functions and constants.

import numpy as np
import torch
from pytorch_transformers import BertModel, BertTokenizer
tokenizer = BertTokenizer.from_pretrained('bert-base-uncased')
SEP = "[SEP]"
MASK = '[MASK]'
CLS = "[CLS]"
max_len = 100
def flatten(list_of_lists):
    for list in list_of_lists:
        for item in list:
            yield item
def convert_tokens_to_ids(tokens, pad=True):
        token_ids = tokenizer.convert_tokens_to_ids(tokens)
        ids = torch.tensor([token_ids]).to(device="cpu")
        assert ids.size(1) < max_len
        if pad:
            padded_ids = torch.zeros(1, max_len).to(ids)
            padded_ids[0, :ids.size(1)] = ids
            mask = torch.zeros(1, max_len).to(ids)
            mask[0, :ids.size(1)] = 1
            return padded_ids, mask
        else:
            return ids
    
def subword_tokenize(tokens):
        """Segment each token into subwords while keeping track of
        token boundaries.
        Parameters
        ----------
        tokens: A sequence of strings, representing input tokens.
        Returns
        -------
        A tuple consisting of:
            - A list of subwords, flanked by the special symbols required
                by Bert (CLS and SEP).
            - An array of indices into the list of subwords, indicating
                that the corresponding subword is the start of a new
                token. For example, [1, 3, 4, 7] means that the subwords
                1, 3, 4, 7 are token starts, while all other subwords
                (0, 2, 5, 6, 8...) are in or at the end of tokens.
                This list allows selecting Bert hidden states that
                represent tokens, which is necessary in sequence
                labeling.
        """
        subwords = list(map(tokenizer.tokenize, tokens))
        print ("subwords: ", subwords)
        subword_lengths = list(map(len, subwords))
        subwords = [CLS] + list(flatten(subwords)) + [SEP]
        print ("subwords: ", subwords)
        token_start_idxs = 1 + np.cumsum([0] + subword_lengths[:-1])
        return subwords, token_start_idxs

def subword_tokenize_to_ids(tokens):
        """Segment each token into subwords while keeping track of
        token boundaries and convert subwords into IDs.
        Parameters
        ----------
        tokens: A sequence of strings, representing input tokens.
        Returns
        -------
        A tuple consisting of:
            - A list of subword IDs, including IDs of the special
                symbols (CLS and SEP) required by Bert.
            - A mask indicating padding tokens.
            - An array of indices into the list of subwords. See
                doc of subword_tokenize.
        """
        subwords, token_start_idxs = subword_tokenize(tokens)
        subword_ids, mask = convert_tokens_to_ids(subwords)
        token_starts = torch.zeros(1, 100).to(subword_ids)
        token_starts[0, token_start_idxs] = 1
        return subword_ids, mask, token_starts

and then I try to add your extra code.
I try to understand the code for this simple case:

sentences = [["the", "rolerationing", "ends"], ["A", "sequence", "of", "strings" ,",", "representing", "input", "tokens", "."]]

In this case, max_length = (bert_mask != 0).max(0)[0].nonzero()[-1].item(), which is 11.

Some questions:

1) Is bert(bert_ids, segment_ids) the same as bert(bert_ids)? In that case the following would not be needed: segment_ids = torch.zeros_like(bert_mask)  # dummy segment IDs, since we only have one sentence
Also, I do not understand what that comment means (# dummy segment IDs, since we only have one sentence).

2) bert_last_layer = self.bert(bert_ids, segment_ids)[0][-1]
Why do you take the last one? Here -1 is the last sentence. Why do we say "last layer"?
Also, for the simple example above its size is torch.Size([11, 768]). Is this what we want?

@antgr commented Nov 2, 2019

Does this development make this conversation outdated? Can you please clarify?

def convert_examples_to_features(examples,

@ghost commented Nov 12, 2019

I guess so

stale bot commented Jan 11, 2020

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions.

stale bot added the wontfix label on Jan 11, 2020
stale bot closed this as completed on Jan 19, 2020
@imayachita

(quoting @nijianmo's comment above about unstable fine-tuning on imbalanced {B,I,O} datasets)

Hi @nijianmo, did you find any workaround for this? Thanks!

@AlxndrMlk

Hi everyone!

Thanks for your posts! I was wondering: could anyone post an explicit example of what properly formatted data for NER with BERT would look like? It is not entirely clear to me from the paper and the comments I've found.

Let's say we have a following sentence and labels:

sent = "John Johanson lives in Ramat Gan."
labels = ['B-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC']

Would data that we input to the model be something like this:

sent = ['[CLS]', 'john', 'johan',  '##son', 'lives',  'in', 'ramat', 'gan', '.', '[SEP]']
labels = ['O', 'B-PERS', 'I-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC', 'O', 'O']
attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0]
sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

?

Thank you!

@Single430

labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC']
labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4}
sent = ['[CLS]', 'john', 'johan',  '##son', 'lives',  'in', 'ramat', 'gan', '.', '[SEP]']
labels = [2, 0, 1, 1, 2, 2, 3, 4, 2, 2]
attention_mask = [0, 1, 1, 1, 1, 1, 1, 1, 1, 0]
sentence_id = [0, 0, 0, 0, 0, 0, 0, 0, 0, 0]

@AlxndrMlk
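A small sketch of the alignment shown in the example above, expanding word-level labels to word pieces and continuing a B- label as I- on the trailing pieces (the helper name is made up; tokenizer is any BERT word-piece tokenizer):

def align_labels_to_wordpieces(words, word_labels, tokenizer):
    # words:       ['John', 'Johanson', 'lives', 'in', 'Ramat', 'Gan', '.']
    # word_labels: ['B-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC', 'O']
    pieces, piece_labels = ["[CLS]"], ["O"]
    for word, label in zip(words, word_labels):
        subwords = tokenizer.tokenize(word)
        # first piece keeps the word's label; trailing pieces continue as I-
        continuation = "I-" + label[2:] if label != "O" else "O"
        pieces.extend(subwords)
        piece_labels.extend([label] + [continuation] * (len(subwords) - 1))
    pieces.append("[SEP]")
    piece_labels.append("O")
    return pieces, piece_labels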

@sougata-fiz commented Mar 29, 2020

(quoting @bheinzerling's batch-preparation code and SequenceTagger sketch from earlier in this thread)

@bheinzerling
The line bert_last_layer = bert_layers[0][-1] just takes the hidden representation of the last training example in the batch. Is this intended?

@bheinzerling

@sougata-fiz

When I wrote that code, self.bert(bert_ids, segment_ids) returned a tuple, of which the first element contained all hidden states. I think this changed at some point. What BertModel's forward returns now is described here: https://github.com/huggingface/transformers/blob/master/src/transformers/modeling_bert.py#L648, so you would have to make the appropriate changes.
Alternatively, you could also try the TokenClassification models, which have since been added: https://huggingface.co/transformers/v2.5.0/model_doc/auto.html#automodelfortokenclassification
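A minimal sketch of that token-classification route; the model name and label count are placeholders, and the exact return values depend on your transformers version, so check the linked docs:

import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bert-base-cased")
model = AutoModelForTokenClassification.from_pretrained("bert-base-cased", num_labels=5)

input_ids = tokenizer.encode("John Johanson lives in Ramat Gan.", return_tensors="pt")
labels = torch.zeros_like(input_ids)       # dummy word-piece labels, just to show the call
outputs = model(input_ids, labels=labels)
loss, logits = outputs[0], outputs[1]      # logits: (batch, seq_len, num_labels)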

@chutaklee (Contributor) commented Apr 5, 2020

@dangal95, adjusting the original labels is probably not the best way. A simpler method that works well is described in this issue, here #64 (comment)

Hi, could you explain why adjusting the original labels is not suggested? It seems quite easy and straightforward.

# reference: https://github.com/huggingface/transformers/issues/64#issuecomment-443703063
def flatten(list_of_lists):
    for list in list_of_lists:
        for item in list:
            yield item

def subword_tokenize(tokens, labels):
    assert len(tokens) == len(labels)

    subwords = list(map(tokenizer.tokenize, tokens))
    subword_lengths = list(map(len, subwords))
    subwords = [CLS] + list(flatten(subwords)) + [SEP]
    token_start_idxs = 1 + np.cumsum([0] + subword_lengths[:-1])
    bert_labels = [[label] + (sublen-1) * ["X"] for sublen, label in zip(subword_lengths, labels)]
    bert_labels = ["O"] + list(flatten(bert_labels)) + ["O"]

    assert len(subwords) == len(bert_labels)
    return subwords, token_start_idxs, bert_labels
>> tokens = tokenizer.basic_tokenizer.tokenize("John Johanson lives in Ramat Gan.")
>> print(tokens)
['john', 'johanson', 'lives', 'in', 'ramat', 'gan', '.']
>> labels = ['B-PERS', 'I-PERS', 'O', 'O', 'B-LOC', 'I-LOC', 'O']
>> subword_tokenize(tokens, labels)
(['[CLS]',   'john',   'johan',   '##son',   'lives',   'in',   'rama',   '##t',   'gan',   '.',   '[SEP]'],  
array([1, 2, 4, 5, 6, 8, 9]),  
['O', 'B-PERS', 'I-PERS', 'X', 'O', 'O', 'B-LOC', 'X', 'I-LOC', 'O', 'O'])

@zhouyongjie commented Aug 25, 2020

(quoting @Single430's labeled example above)

Hello, if we have the following sentence:

sent = "Johanson lives in Ramat Gan."
labels = ['B-PERS', 'O', 'O', 'B-LOC', 'I-LOC']

Would “Johanson” be processed like this?

'johan',  '##son'  
'B-PERS'    'I-PERS'

or like this?

'johan',  '##son'  
'B-PERS'   'B-PERS'      

thank you!

@Single430

(quoting @zhouyongjie's question above)

The one with 'I-PERS' is right; you need to add the label 'I-PERS' to labels.

@hkmztrk commented Mar 31, 2021

(quoting @Single430's labeled example above)

Hello, I'm confused about the labels for the [CLS] and [PAD] tokens. Assume that I originally have the labels [0, 1, 2, 3, 4] for the words; should I add [CLS] and [PAD] as another label? I see that in the example here [CLS] and [SEP] take the label '2'. Does making the attention mask 0 for those positions solve this?

@shushanxingzhe

This repository shows how to add a CRF layer on top of transformers to get better performance on token classification tasks:
https://github.com/shushanxingzhe/transformers_ner

@linhlt-it-ee

Thanks a lot, @shushanxingzhe

@linhlt-it-ee

@shushanxingzhe: I think you are using the label 'O' as the padding label in your code. From my point of view, you should have a separate 'PAD' label for padding instead of using the 'O' label.

@linhlt-it-ee

Could someone please tell me how to use a CRF with decoding and padding? When I code it as below, I always get the error "expected seq=18 but got 13" on the next line, tags = torch.Tensor(tags):

if labels is not None:
    log_likelihood, tags = self.crf(logits, labels, attn_mask), self.crf.decode(logits, attn_mask)
    loss = 0 - log_likelihood
else:
    tags = self.crf.decode(logits, attn_mask)
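If the CRF here is pytorch-crf (torchcrf), crf.decode returns one Python list per sentence, each only as long as its unmasked positions, so the lists cannot be stacked into a tensor directly; a hedged sketch of padding them first (pad_label and the helper name are arbitrary choices):

import torch

def pad_decoded_tags(tags_list, pad_label=-1):
    # tags_list: e.g. the output of self.crf.decode(logits, attn_mask),
    # a list of per-sentence label lists with different lengths
    max_len = max(len(tags) for tags in tags_list)
    padded = [tags + [pad_label] * (max_len - len(tags)) for tags in tags_list]
    return torch.tensor(padded)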

@zycalice commented Mar 14, 2022

Can we just remove the non-first subtokens during feature processing if we are treating the NER problem as a classification problem?

Example:
labels = ['B-PERS', 'I-PERS', 'O', 'B-LOC', 'I-LOC']
labels2id = {'B-PERS': 0, 'I-PERS': 1, 'O': 2, 'B-LOC': 3, 'I-LOC': 4}
sent = ['[CLS]', 'john', 'johan', '##son', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]']

cleaned_sent = ['[CLS]', 'john', 'johan', 'lives', 'in', 'ramat', 'gan', '.', '[SEP]']
