
Restructure TextCorpus code to share multiprocessing and preprocessing logic. #1478

Closed
wants to merge 23 commits

Conversation

macks22 (Contributor) commented Jul 9, 2017

Implements #1477.

macks22 (Contributor Author) commented Jul 16, 2017

@piskvorky @menshikh-iv I believe the build failures on this PR are only due to the imports in the `__main__` guard of the test_word2vec module, which appear to be there for a good reason based on the comment there. Is there anything else that should be changed for this PR? Thanks!

macks22 (Contributor Author) commented Jul 16, 2017

I tested WikiCorpus on the full Wikipedia corpus before and after this change, by building the Dictionary, to verify that the speed remains comparable to the previous implementation.

  • Before: built Dictionary(2012943 unique tokens: [u'tripolitan', u'ftdna', u'fi\u0250', u'soestdijk', u'farmobil']...) from 4265002 documents (total 2338950602 corpus positions)
    • 206m26.452s
  • After: built Dictionary(2014666 unique tokens: [u'tripolitan', u'ftdna', u'fi\u0250', u'soestdijk', u'farmobil']...) from 3765193 documents (total 1379989194 corpus positions)
    • 211m35.043s

The speed is comparable. This implementation also performs deaccenting and stopword removal, which I suspect is why it takes a few minutes longer. The difference in number of documents comes from the removal of stopwords, which results in many more empty documents which are pruned. I think the difference in number of terms is due to the Dictionary pruning encountering different documents.

piskvorky (Owner) left a comment

A quick shallow scan of coding style; I didn't have time to verify the actual logic.

        self.init_state(state_kwargs)

    def init_state(self, state_kwargs):
        for name, value in state_kwargs.items():
Owner:

Why not simply self.__dict__.update?

Contributor Author:

good suggestion; done
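
In short, the suggested simplification, as a tiny self-contained sketch (the State class and its attributes are illustrative, not code from this PR):

class State(object):
    def __init__(self, **state_kwargs):
        # one bulk assignment instead of a per-attribute setattr loop
        self.__dict__.update(state_kwargs)

s = State(tokenizer=str.split, processes=2)
print(s.tokenizer, s.processes)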

@@ -0,0 +1,160 @@
import multiprocessing as mp
Owner:

Missing file header.

Contributor Author:

added file header

# So just split the token sequence arbitrarily into sentences of length
# `max_sentence_length`.
sentence, rest = [], b''
with utils.smart_open(self.source) as fin:
Owner:

Best to open files in binary mode (rb), and convert to text explicitly where needed.

Contributor Author:

The default is 'rb', but I updated to set mode explicitly to future-proof.

break

last_token = text.rfind(b' ') # last token may have been split in two... keep for next iteration
words, rest = (text[:last_token].split(),
Owner:

No vertical indent -- please use hanging indent.

Contributor Author:

done

# no need to lowercase and unicode, because the tokenizer already does that.
character_filters = [textcorpus.deaccent, textcorpus.strip_multiple_whitespaces]
super(WikiCorpus, self).__init__(source, dictionary, metadata, character_filters, tokenizer,
                                 token_filters, processes)
Owner:

No vertical indent.

Contributor Author:

done

@@ -20,7 +19,7 @@
import numpy as np

 from gensim.corpora import (bleicorpus, mmcorpus, lowcorpus, svmlightcorpus,
-                            ucicorpus, malletcorpus, textcorpus, indexedcorpus)
+                            ucicorpus, malletcorpus, indexedcorpus)
Owner:

No vertical indent please.

Contributor Author:

done


def test_texts_file():
    fpath = os.path.join(tempfile.gettempdir(), 'gensim_corpus.tst')
    with open(fpath, 'w') as f:
Owner:

smart_open + binary mode please.

Contributor Author:

this function was actually not being used, so I just removed it


def corpus_from_lines(self, lines):
    fpath = tempfile.mktemp()
    with codecs.open(fpath, 'w', encoding='utf8') as f:
Owner:

Ditto.

Contributor Author:

changed mode to 'wb'

gensim/utils.py Outdated
@@ -1263,3 +1269,45 @@ def _iter_windows(document, window_size, copy=False, ignore_below_size=True):
    else:
        for doc_window in doc_windows:
            yield doc_window.copy() if copy else doc_window


def walk_with_depth(top, topdown=True, onerror=None, followlinks=False, depth=0):
Owner:

Is this really needed? The depth can be deduced easily from normal walk(), by comparing the root directories.

Contributor Author:

This is a very good point; I replaced this with a wrapper on os.walk that just deduces the depth in the manner you suggested.
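
A sketch of what such a wrapper can look like (illustrative, not the PR's exact code):

import os

def walk_with_depth(top, topdown=True, onerror=None, followlinks=False):
    # Plain os.walk, with each directory's depth deduced from its path
    # relative to the normalized root.
    top = os.path.abspath(top)
    for dirpath, dirnames, filenames in os.walk(top, topdown, onerror, followlinks):
        rel = os.path.relpath(dirpath, top)
        # the root itself comes back as '.', i.e. depth 0;
        # each os.sep below it adds one more level
        depth = 0 if rel == '.' else rel.count(os.sep) + 1
        yield depth, dirpath, dirnames, filenames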

piskvorky (Owner):

This looks like a massive PR; are the changes 100% backward compatible?

If not, what is the upgrade plan for users, i.e. how do they modify their existing code so it continues to work?

util.debug('worker exiting after %d tasks' % completed)


class _PatchedPool(mp.pool.Pool):
Owner:

What is this for, what is being patched (and why)?

Needs a clear comment.

Contributor Author:

I've added documentation throughout this module to clarify. Some added context: when I initially implemented this refactor, I was serializing the token_filters, tokenizer, and character_filters used in TextCorpus for text preprocessing on every task. This pickling overhead was causing a significant slowdown, so I wanted to load them into each worker process once at startup to speed things up. Doing so ruled out the use of the builtin multiprocessing.Pool class.

Rather than write a complicated custom pool, I decided that reuse via patching of the existing pool would be more robust and probably useful elsewhere in the code (for instance, in the text_analysis module used by the probability_estimation module). That is why this module came about.
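
For context, the standard library's own hook for shipping state to workers once per process (rather than pickling it with every task) is Pool's initializer/initargs. A minimal self-contained sketch of that general technique (not the PR's _PatchedPool approach; all names are illustrative):

import multiprocessing as mp

_worker_state = {}  # populated once in each worker process

def _init_worker(character_filters, tokenizer, token_filters):
    # Runs once per worker at startup, so the preprocessing callables are
    # pickled and shipped a single time instead of once per submitted task.
    _worker_state['character_filters'] = character_filters
    _worker_state['tokenizer'] = tokenizer
    _worker_state['token_filters'] = token_filters

def _process_text(text):
    for character_filter in _worker_state['character_filters']:
        text = character_filter(text)
    tokens = _worker_state['tokenizer'](text)
    for token_filter in _worker_state['token_filters']:
        tokens = token_filter(tokens)
    return tokens

def lowercase(text):
    return text.lower()

def tokenize(text):
    return text.split()

if __name__ == '__main__':
    pool = mp.Pool(2, initializer=_init_worker,
                   initargs=([lowercase], tokenize, []))
    print(pool.map(_process_text, ['Some Raw Text', 'Another Document']))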

Owner:

Aha, thanks, that's interesting. @gojomo @menshikh-iv can you please review this extended multiprocessing logic?

I'm curious whether others do it this way too, since this seems a very common use-case.

"""
for i in range(self._processes - len(self._pool)):
w = self.Process(args=(self._inqueue, self._outqueue,
self._initializer,
Owner:

No vertical indent in gensim.

Contributor Author:

done

util.debug('added worker')


class TextProcessingPool(object):
Owner:

What is this wrapper for? Why not use the default Pool?

I'd prefer to stick to built-ins, unless absolutely necessary. And if absolutely necessary, will need a better documentation describing the rationale.

Contributor Author:

Added documentation with rationale. See comment on _PatchedPool above for additional context.

macks22 (Contributor Author) commented Jul 17, 2017

@piskvorky I've addressed your review comments; thank you for the quick feedback! If I can add anything else to make your review of the logic easier or otherwise clarify things, I will gladly do so.

gensim/utils.py Outdated
path = os.path.abspath(top)
for dirpath, dirnames, filenames in os.walk(path, topdown, onerror, followlinks):
    sub_path = dirpath.replace(path, '')
    depth = sub_path.count(os.sep)
Owner:

Is this safe even with unnormalized paths (/tmp vs /tmp/ etc)? How does walk handle symlinks?

os.path.relpath/commonprefix may be safer, I'm not sure.

Contributor Author:

docs say this on symlinks:

By default, os.walk does not follow symbolic links to subdirectories on
systems that support them. In order to get this functionality, set the
optional argument 'followlinks' to true.

I did look at both of the functions you referenced (which I was not familiar with), but I believe the current code handles unnormalized paths correctly. I've added tests to verify this.

Sweeney, Mack added 15 commits July 26, 2017 07:53
…ove tests for text corpora classes to `test_textcorpus` module.
…that serializes all preprocessing functions once on initialization and then only passes the documents to the workers and the tokens back to the master.
…`textcorpus.walk` to `walk_with_depth` and move to `utils` module. Update tests and other referencing modules to adjust to the moves, resolving some circular references that arose in the process.
…r to provide multiprocessing and additional preprocessing options.
…agical ways. Also, adjust `LineSentence` default kwargs to use single process and allow other preprocessing options.
…ove tests for text corpora classes to `test_textcorpus` module.
…that serializes all preprocessing functions once on initialization and then only passes the documents to the workers and the tokens back to the master.
…r to provide multiprocessing and additional preprocessing options.
@macks22 macks22 force-pushed the text_corpus_restructure branch from 6c12b54 to 2db0aaa Compare July 26, 2017 12:26
macks22 (Contributor Author) commented Jul 26, 2017

@piskvorky I believe this is fully backwards-compatible in terms of interfaces. The only thing I expect will be different is the default preprocessing used for the WikiCorpus. In particular, it is now removing stopwords and deaccenting non-ascii text. It is also removing tokens shorter than 3 characters, instead of just those shorter than 2. I expect this will be a happy change for most users, but it is also possible to achieve the old behavior by initializing with WikiCorpus(fname, character_filters=[], token_filters=[]).

Also, I have updated the PR to address your most recent comments; thank you for your review. I believe you'd asked for thoughts from @gojomo and @menshikh-iv regarding the modified multiprocessing pool; I'm also curious to know if the approach I took here has been used elsewhere and if any alternative approaches might be more suitable for this problem. Thanks!

path = os.path.abspath(top)
for dirpath, dirnames, filenames in os.walk(path, topdown, onerror, followlinks):
    sub_path = dirpath.replace(path, '')
    depth = sub_path.count(os.sep)
Owner:

This construct still makes me a little uneasy. Can we at least os.path.normpath, to get rid of any double/trailing/leading slashes? Or does os.walk normalize the dirpath somehow? Although in that case, we'd have to normalize path and dirpath in exactly the same way, so that the .replace() above works.

macks22 (Contributor Author) Jul 27, 2017:

Ah, I see your concern. I think os.path.abspath (called as the first line of that function) handles the situation you're worried about:

In [4]: os.path.abspath('/test/path/')
Out[4]: '/test/path'

In [5]: os.path.abspath('/test/path')
Out[5]: '/test/path'

In [6]: os.path.abspath('/test/path//')
Out[6]: '/test/path'

piskvorky (Owner) Jul 27, 2017:

What if walk hits a symlinked dir -- does it return dirpath as a canonical path (de-sym-linked), or is path still its prefix?

Contributor Author:

tree tmp

tmp
├── subdir
│   └── test
└── symlink -> subdir

2 directories, 1 file
In [56]: list(os.walk('tmp'))
Out[56]: [('tmp', ['subdir', 'symlink'], []), ('tmp/subdir', [], ['test'])]
In [58]: list(os.walk('tmp', followlinks=True))
Out[58]:
[('tmp', ['subdir', 'symlink'], []),
 ('tmp/subdir', [], ['test']),
 ('tmp/symlink', [], ['test'])]

piskvorky (Owner) commented Jul 27, 2017

@macks22 thanks!

"In particular, it is now removing stopwords and deaccenting non-ascii text. It is also removing tokens shorter than 3 characters." Why this change?

Does the new code support custom tokenization / text normalization? That sounds really useful. Same defaults (backward compatibility), but allow injecting your own function to normalize and tokenize a text.

We had a recent ticket where a Thai user complained our wiki processing returns rubbish. Which is 100% true -- not only do/did we not support custom text processing, we didn't even notice where our hardwired processing didn't make sense, and happily produced garbage output without any error/warning.

…dd the `tokenizer` argument to allow users to override the default lemmatizer/tokenizer functions.
macks22 (Contributor Author) commented Jul 27, 2017

@piskvorky I had mainly made those changes so the preprocessing defaults would be as close to the default for the TextCorpus as possible. I've added a commit changing the defaults back to the way they were before. And yes, the new code does support custom tokenization, text normalization, and any other preprocessing desired by the user. The preprocessing pipeline is the same as that used for TextCorpus, which consists of 0+ character_filters, 1 tokenizer, and 0+ token_filters. Here is the relevant excerpt from the TextCorpus.__init__ docstring:

character_filters (iterable of callable): each will be applied to the text of each
    document in order, and should return a single string with the modified text.
    For Python 2, the original text will not be unicode, so it may be useful to
    convert to unicode as the first character filter. The default character filters
    lowercase, convert to unicode (strict utf8), perform ASCII-folding, then collapse
    multiple whitespaces.
tokenizer (callable): takes as input the document text, preprocessed by all filters
    in `character_filters`; should return an iterable of tokens (strings).
token_filters (iterable of callable): each will be applied to the iterable of tokens
    in order, and should return another iterable of tokens. These filters can add,
    remove, or replace tokens, or do nothing at all. The default token filters
    remove tokens less than 3 characters long and remove stopwords using the list
    in `gensim.parsing.preprocessing.STOPWORDS`.
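
To make this concrete, a usage sketch against this PR's branch (the filter and tokenizer bodies and the input filename are stand-ins; only the parameter names come from the docstring above):

from gensim.corpora.textcorpus import TextCorpus

def lowercase(text):
    # character filter: takes the full document text, returns modified text
    return text.lower()

def whitespace_tokenizer(text):
    # stand-in for any (e.g. language-specific) tokenizer
    return text.split()

def min_length_3(tokens):
    # token filter: takes an iterable of tokens, returns another iterable
    return [token for token in tokens if len(token) >= 3]

corpus = TextCorpus(
    'my_corpus.txt',                  # hypothetical input file
    character_filters=[lowercase],    # 0+ callables, applied in order
    tokenizer=whitespace_tokenizer,   # exactly 1 callable
    token_filters=[min_length_3],     # 0+ callables, applied in order
)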

piskvorky (Owner):

Nice! Should tokenizer return byte strings, or unicode strings?

piskvorky (Owner):

@macks22 is there a way to reach you privately (email)? Please ping me at radim@rare-technologies.com.

…g within `TextCorpus`. Update docstring for `TextCorpus` for new parameters. Convert `PathLineSentences` to a `TextDirectoryCorpus` subclass and adjust the tests to account for this.
macks22 (Contributor Author) commented Jul 28, 2017

@piskvorky Updated to improve the docstrings around the tokenizer. I also moved the PathLineSentences corpus into textcorpus and made it inherit from TextDirectoryCorpus so it can share the same preprocessing/multiprocessing and filename-filtering functionality.

logger.debug("sorting filepaths")
paths = list(paths)
paths.sort(key=lambda path: os.path.basename(path))
logger.debug("found {} files: {}".format(len(paths), paths))
Owner:

The rest of the code uses C-style formatting -- best keep it consistent.

Contributor Author:

The intention here was to get the auto-formatting for the list of paths, as opposed to having to do my own '[' + ', '.join(paths) + ']', which seemed much messier. Should I still change it to do this instead?

piskvorky (Owner) Jul 29, 2017:

All these formatting alternatives should work identically (call str/repr on their arguments), so I'm not sure what you mean. Are you seeing a difference?

One advantage of a C-style format is that the argument types will be immediately apparent to the reader (%d and %s or %r in this case).

Unrelated: the arguments should be passed to logger.debug as arguments, to avoid formatting the string in case the message is not emitted by logging (doesn't pass the log level threshold etc). We want to leave the string formatting (which can sometimes be expensive) for the last moment possible.

Contributor Author:

Ah, I see; I simply wasn't aware of '%r' as an option to get the repr. Updated to use the C-style formatting.
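
In short, the resulting pattern (a sketch; `paths` stands in for the variable from the diff above):

import logging

logging.basicConfig(level=logging.DEBUG)
logger = logging.getLogger(__name__)

paths = ['b.txt', 'a.txt']  # illustrative stand-in

# Eager (avoid): the string is built even when DEBUG is disabled.
# logger.debug("found {} files: {}".format(len(paths), paths))

# Lazy C-style: interpolation happens only if the record is actually
# emitted, and %d / %r make the argument types explicit to the reader.
logger.debug("found %d files: %r", len(paths), paths)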


-logging.info('files read into PathLineSentences:' + '\n'.join(self.input_files))
+logger.debug("finished reading %d files", num_files)
Owner:

Why not info?

token_filters (iterable of callable): each will be applied to the iterable of tokens
    in order, and should return another iterable of tokens. These filters can add,
    remove, or replace tokens, or do nothing at all. The default token filters
    remove tokens less than 3 characters long and remove stopwords using the list
    in `gensim.parsing.preprocessing.STOPWORDS`.
processes (int): number of processes to use for text preprocessing. The default is
    -1, which will use (number of virtual CPUs - 1) worker processes, in addition
piskvorky (Owner) Jul 28, 2017:

What happens when number of virtual CPUs == 1?

Contributor Author:

No worker pool is used; all preprocessing occurs in the master process. I've updated the docstring to inform on this.
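
A sketch of how that default could be resolved (illustrative only; `resolve_processes` is not a function from this PR):

import multiprocessing as mp

def resolve_processes(processes):
    # -1 means "all virtual CPUs but one". On a 1-CPU machine that yields 0,
    # i.e. no worker pool: preprocessing stays in the master process.
    if processes == -1:
        processes = mp.cpu_count() - 1
    return processes

print(resolve_processes(-1), resolve_processes(4))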

-For Python 2, the original text will not be unicode, so it may be useful to
-convert to unicode as the first character filter. The default character filters
-lowercase, convert to unicode (strict utf8), perform ASCII-folding, then collapse
+For Python 2, the original text will not be unicode (unless you modify your
Owner:

I propose dropping the (unless ... bracket. This is already very complicated as it is.

Also, why single out Python 2? Is the behaviour different between Python 2 vs Python 3?
If so, I'd consider that a bug.

Let's keep the API as simple as possible: getstream returns unicode (no matter the Python version); all filters expect unicode.

Contributor Author:

Done; I've put the unicode conversion in the master process. For the sake of speed, it may make sense to have getstream return bytes in all versions, move the encoding parameters to the workers, and have them do the unicode conversion. Based on the ongoing Phrases refactor, that seems to be more of a bottleneck than I would've expected. Despite these considerations, I think it is sensible to do it in the master for the sake of simplicity for now.

For Python 2, the original text will not be unicode (unless you modify your
`getstream` method to convert it to unicode), so it may be useful to convert to
unicode as the first character filter. The default character filters lowercase,
convert to unicode (strict utf8), perform ASCII-folding, then collapse
Owner:

Why lowercase before converting to unicode? Could lead to bugs for non-ASCII capitals.
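
A quick demonstration of the concern (bytes.lower only affects ASCII, so lowercasing before decoding misses non-ASCII capitals):

raw = u'É'.encode('utf8')          # b'\xc3\x89'
print(raw.lower())                 # b'\xc3\x89' -- bytes unchanged, 'É' not lowercased
print(raw.decode('utf8').lower())  # 'é' -- correct after converting to unicode first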

processes (int): number of processes to use for text preprocessing. The default is
    -1, which will use (number of virtual CPUs - 1) worker processes, in addition
    to the master process. If set to a number greater than the number of virtual
    CPUs available, the value will be reduced to (number of virtual CPUs - 1).
Owner:

-1 on this: why override a user's explicit request?

Contributor Author:

you're right; this mistrust of users is not suitable for a Python code base! Modified to remove the upper bound.

…ry filtering arguments to discard no tokens by default.
@@ -552,7 +554,7 @@ def getstream(self):
         """
         for path in self.iter_filepaths():
             logging.debug("reading file: %s", path)
-            with utils.smart_open(path) as f:
+            with utils.smart_open(path, 'rt') as f:
Owner:

This looks fragile. Best to always open files in binary mode rb, and convert to text (unicode) explicitly, with an explicit encoding, where needed.

Contributor Author:

changed to 'rb' followed by explicit unicode conversion
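
As a sketch of the agreed pattern (the `read_lines` helper is illustrative, not code from the PR):

from gensim import utils

def read_lines(path, encoding='utf-8'):
    # open in binary mode and decode explicitly with a known encoding,
    # instead of relying on text-mode behavior
    with utils.smart_open(path, 'rb') as f:
        for line in f:
            yield line.decode(encoding)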

…nd add unicode decoding arguments to `TextCorpus`. Also open source files in 'rb' mode. Lowercase after deaccenting to prevent deaccent confusion. Do not upper bound the number of processes the user passes to `TextCorpus` constructor.
macks22 (Contributor Author) commented Aug 5, 2017

@piskvorky thank you for your many reviews! I believe I have addressed all your comments and requests for changes. From what I can tell, the build check failures are only due to the imports in the __main__ block of the word2vec module, which seem to be necessary. Is there anything else you'd like me to change for this PR? Thanks!

piskvorky (Owner):

Thanks for all the fixes and good work :)

I'll defer to @menshikh-iv for a final thorough review and decision (and fixing the unrelated build errors).

@menshikh-iv menshikh-iv added breaks backward-compatibility Change breaks backward compatibility and removed breaks backward-compatibility Change breaks backward compatibility labels Sep 13, 2017
menshikh-iv (Contributor) left a comment

Beautiful work!
Please resolve the merge conflict & fix the small issues; this code LGTM.

self._pool.terminate()


if __name__ == "__main__":
Contributor:

This code isn't needed here (either remove it, or, better, refactor it and add it as a test).

Contributor Author:

moved to test class

else:
yield f.read().strip()
num_texts += 1
# endclass TextDirectoryCorpus
Contributor:

No need for # endclass ...; please remove it.

Contributor Author:

done

menshikh-iv (Contributor):

@macks22 please pay attention to the AppVeyor problems; a lot of tests break, but it all looks like one underlying problem.

menshikh-iv (Contributor):

Ping @macks22, what's the status here?

menshikh-iv (Contributor):

Ping @macks22

macks22 (Contributor Author) commented Oct 2, 2017

@menshikh-iv I'm hoping to update this in the coming weeks; I'm having trouble finding time to put towards it on the weekends. I'm planning to refactor it according to some discussion I had with @michaelwsherman regarding #1506. He had proposed a decomposition of responsibilities into something like a TextCorpusLoader and a TextPreprocessor. I think splitting the preprocessing logic out into its own class, instead of dynamically generating classes and transplanting methods (as I'm doing now), will resolve the errors on Windows.

menshikh-iv (Contributor):

@macks22 thanks for the clarification, good luck :)

macks22 (Contributor Author) commented Oct 14, 2017

@menshikh-iv hope all is well; I'm still working to find time to update this to fix the tests on Windows in the manner I described above. Hopefully next weekend.

menshikh-iv (Contributor):

ping @macks22, do you have time to finish this now?

menshikh-iv (Contributor):

Ping @macks22, we are waiting for you :)

macks22 (Contributor Author) commented Nov 7, 2017

@menshikh-iv sorry for the delayed reply. I haven't had sufficient time to finish this. It's still on my todo list, but TBH I may not have time again until the end-of-December holidays.

menshikh-iv (Contributor):

Ping @macks22, December has come; a friendly reminder from us :)

menshikh-iv (Contributor):

ping @macks22, a reminder about this PR :)

menshikh-iv (Contributor):

I'm sorry, but I'm closing this PR.
@macks22, feel free to re-open when you have time to finish it.

@menshikh-iv menshikh-iv closed this Jan 8, 2018
macks22 (Contributor Author) commented Jan 12, 2018

Sorry for the delay in responding; I have been busier than expected. I will try to re-open and finish when I can.
