ED mini-batches currently consist of single documents. As a consequence, GPU utilization is limited: the number of mentions per document varies and is often small. A potential line of improvement is to:
Introduce an additional dimension for the number of documents that are processed per mini-batch. This would result in the following dimensions for a given mini-batch: (n_documents, n_mentions, n_features).
This will require the dimensions to align across documents, meaning that padding is needed along the n_mentions dimension. We need to investigate roughly how much padding would be required, and what the variance in the number of mentions per document is.
During training, it is essential that there is still randomness across batches, so grouping documents by their number of mentions is assumed to be suboptimal there. During inference, however, this is no longer an issue. If our goal is to improve inference (which I believe it is), we can group documents by their number of mentions to reduce the amount of padding required.
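A minimal sketch of the inference-time idea, using NumPy: sort documents by mention count, batch neighbours together, and pad each batch to its own maximum so the padding stays small. All names here (`build_batches`, the toy mention counts) are illustrative, not taken from the codebase.

```python
import numpy as np

def build_batches(docs, batch_size, n_features):
    """docs: list of 2-D arrays, each of shape (n_mentions_i, n_features)."""
    # Sort by mention count so documents in the same batch have similar
    # lengths. Fine at inference time, where shuffling is not needed.
    order = sorted(range(len(docs)), key=lambda i: docs[i].shape[0])
    batches = []
    for start in range(0, len(order), batch_size):
        idx = order[start:start + batch_size]
        max_mentions = max(docs[i].shape[0] for i in idx)
        # Zero-padded batch of shape (n_documents, n_mentions, n_features),
        # plus a boolean mask marking the real (non-padding) mentions.
        batch = np.zeros((len(idx), max_mentions, n_features), dtype=np.float32)
        mask = np.zeros((len(idx), max_mentions), dtype=bool)
        for row, i in enumerate(idx):
            n = docs[i].shape[0]
            batch[row, :n] = docs[i]
            mask[row, :n] = True
        batches.append((idx, batch, mask))
    return batches

# Toy example: four documents with 1, 7, 2 and 8 mentions.
docs = [np.ones((n, 4), dtype=np.float32) for n in (1, 7, 2, 8)]
batches = build_batches(docs, batch_size=2, n_features=4)
# Sorting groups (1, 2) and (7, 8): 2 padded rows in total, versus 12
# for an in-order (1, 7) / (2, 8) split.
```

The original document indices are returned alongside each batch so predictions can be mapped back to the unsorted input order.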
Related to #90