
Added support for German DNS provider Core Networks #494

Open · wants to merge 98 commits into master
Conversation

@MasinAD commented Apr 24, 2020

I implemented support for the German DNS provider Core Networks. They provide a simple API with a few caveats (most importantly rate limiting logins).

My implementation passes all the tests:

===================== 27 passed, 2 skipped, 6 warnings in 105.94s (0:01:45) ======================

One major problem might be that the API requires changes to be committed, which is a slow operation. That's why I decided to put the commit call in the destructor. During my tests I found that the destructor sometimes wasn't called because Python was already shutting down. Maybe there's a better solution.

Forgot some stuff

Reduced API calls by 1

Removed debug output

Finalized provider

Removed wrongly added cassette
@MasinAD (Author) commented Apr 24, 2020

Okay, I don't get it. My _filter_post_data_parameters override seems to be ignored, as the POST data is still included in the recording. Without the real login data the simulated tests always fail; with login data both the simulated and live tests succeed.

OTOH, I got it running with Python 2.7.
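One possible explanation, assuming the login request sends a JSON body: vcrpy's `filter_post_data_parameters` only rewrites form-encoded POST data, so credentials in a JSON payload pass through into the cassette unchanged. A custom `before_record_request` hook can scrub them instead. A sketch of that idea (`scrub_login_body` and the field names are hypothetical, not part of this PR):

```python
import json

def scrub_login_body(request):
    # Intended as a vcrpy before_record_request hook: redact credentials in
    # JSON request bodies, which filter_post_data_parameters does not touch.
    if not getattr(request, "body", None):
        return request
    try:
        data = json.loads(request.body)
    except ValueError:
        # Not JSON; leave the request untouched.
        return request
    for key in ("login", "password"):
        if key in data:
            data[key] = "REDACTED"
    request.body = json.dumps(data).encode()
    return request
```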

new test data (removed)

removed some overrides

removed test data, debug functions in provider
@chkpnt left a comment:

I didn't know this project until now, and there is even already a pull request for the provider I'm using! 🤗

I've tested it locally on my Mac in a Python 2.7 environment and can confirm: it's working. 🥳

But as I don't know this project, I won't approve or decline this PR; I'll just leave some comments.

Review comments on lexicon/providers/corenetworks.py (outdated, resolved).
records = self._list_records(rtype=rtype, name=self._relative_name(name))
if len(records) > 0:
    for record in records:
        return self._update_record(record['id'], rtype, name, content)
Mh, I haven't tried it out, but it looks like only the first match will be updated, right?

@adferrand (Collaborator) commented May 4, 2020:

This is a design decision: either you update all matching records, or only the first one, or fail entirely.

Usually we update only the first one, but emit a log warning saying that more than one record was found and only the first will be updated.
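The update-first-with-a-warning behavior described above could be sketched like this (`update_matching` and its arguments are illustrative stand-ins, not Lexicon's actual API):

```python
import logging

logger = logging.getLogger(__name__)

def update_matching(records, updater):
    # Update only the first matching record; warn when several match.
    if not records:
        return False
    if len(records) > 1:
        logger.warning(
            "Multiple records found; only the first one will be updated."
        )
    return updater(records[0]["id"])
```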

@adferrand (Collaborator) left a comment:

On top of the comments from @chkpnt, here are mine. Thanks a lot!

self.auth_file_path = self._get_provider_option('auth_file') or '/tmp/corenetworks_auth.json'
self.api_endpoint = 'https://beta.api.core-networks.de'

def __del__(self):
@adferrand (Collaborator):

I see your comment on the PR about slow operations. Which delay are we talking about?

In theory, when Lexicon returns from the current operation one would assume that the changes are committed. So for the sake of simplicity I would just commit right after the actual modification, and hang until it is done.

If it is really a big problem, we can find a better way than relying on the destructor. Using the atexit standard module, you can register handlers that will be invoked when Python closes the process. It gives a consistent way to ensure the commit is executed, since the only way to skip these handlers is to kill -9 the process.

Still, the Python process will hang on exit until the commit is effective.
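For reference, a minimal sketch of the atexit approach mentioned above (the `commit_changes` function and the `pending` flag are hypothetical; the real provider would POST to the commit endpoint instead of setting a flag):

```python
import atexit

pending = {"need_commit": False}

def commit_changes():
    # Runs when the interpreter shuts down normally (but not on kill -9).
    # The real provider would POST /dnszones/<domain>/records/commit here.
    if pending["need_commit"]:
        pending["committed"] = True

# Register once; the handler fires at interpreter exit.
atexit.register(commit_changes)

# ... later, after a modifying operation:
pending["need_commit"] = True
```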

@adferrand (Collaborator) commented May 4, 2020:

Thinking about it again: the slow operation is probably still acceptable if done after each effective modification (create, update, delete).

If you want a Pythonic approach to ensure that something is always done at the end of some logic, you can use a contextmanager. It is basically a try/finally, but more elegant. Here is an implementation I propose for your use case:

from contextlib import contextmanager


class _CommitTrigger:
    def __init__(self):
        self._need_commit = False

    @property
    def need_commit(self):
        return self._need_commit

    @need_commit.setter
    def need_commit(self, set_need_commit):
        self._need_commit = set_need_commit


class Provider(BaseProvider):
    @contextmanager
    def ensure_commit(self):
        commit_trigger = _CommitTrigger()
        try:
            yield commit_trigger
        finally:
            if commit_trigger.need_commit:
                self._post("/dnszones/{0}/records/commit".format(self.domain))

    def _list_records(self, rtype=None, name=None, content=None):
        with self.ensure_commit() as commit_trigger:
            # Do logic ...

    def _create_record(self, rtype, name, content):
        with self.ensure_commit() as commit_trigger:
            # Do logic ...
            commit_trigger.need_commit = True
            
    # Add logic ...

Here the contextmanager ensures that the commit is done whatever happens in _create_record, as soon as commit_trigger.need_commit = True has been set. Conversely, _list_records will not commit at the end, since commit_trigger.need_commit is never set to True.

@MasinAD (Author):

> I see your comment on the PR about slow operations. Which delay are we talking about?

I don't know. When I asked the provider about the limits they only mentioned the login limit. The commit limit seems to work differently: the API responds with a variable delay; the more often the commit function is called, the slower it gets. For normal operations it's almost unnoticeable, but when deleting a lot of records, e.g. after running the integration tests, committing each single delete takes much longer than committing all deletions at once.

I will take a closer look at your proposal. As I'm new to Python I'll need a little bit more time to understand what's happening there … 😃

I've already worked in all the other comments. I guess I'll be done tomorrow evening CEST.

@MasinAD (Author):

@adferrand: Okay, finally got to this point. We have decorators and contextmanagers, with and yield statements, generators and iterables. Let me just say that I can wrap my head around the last two, but with, yield and decorators are still beyond me 😆

I'm trying to slowly understand what's happening here, but it's too much at once. I guess I have to live with that for now.
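For anyone else puzzled by the with/yield combination: a contextmanager generator runs up to its yield, hands control to the body of the with block, and then resumes in the finally clause even if the body raised. A stripped-down version of the pattern above (using a plain dict and an events list for illustration instead of the _CommitTrigger class):

```python
from contextlib import contextmanager

events = []

@contextmanager
def ensure_commit():
    trigger = {"need_commit": False}
    try:
        # Everything inside the `with` block runs while we are paused here.
        yield trigger
    finally:
        # This always runs afterwards, even if the body raised an exception.
        if trigger["need_commit"]:
            events.append("commit")

with ensure_commit() as trigger:
    events.append("create record")
    trigger["need_commit"] = True
```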

With those changes I run again into this error in two tests:

requests.exceptions.HTTPError: 429 Client Error: Too Many Requests for url: https://beta.api.core-networks.de/dnszones/fluchtkapsel.de/records/commit

The rest works as intended.
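Since the commit endpoint rate-limits with HTTP 429, one possible workaround (a sketch, not part of this PR; `do_commit` is a hypothetical callable wrapping the provider's commit POST) is to retry with exponential backoff:

```python
import time

def commit_with_backoff(do_commit, retries=3, base_delay=1.0):
    # `do_commit` is assumed to raise an exception whose message contains
    # "429" when the API rate-limits the request.
    for attempt in range(retries):
        try:
            return do_commit()
        except Exception as exc:
            if "429" not in str(exc) or attempt == retries - 1:
                raise
            # Exponential backoff: base_delay, 2*base_delay, 4*base_delay, ...
            time.sleep(base_delay * 2 ** attempt)
```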

Review comments on lexicon/providers/corenetworks.py and lexicon/tests/providers/test_corenetworks.py (outdated, resolved).
@adferrand (Collaborator):

Hello @MasinAD, are you still willing to finish the PR?

@MasinAD (Author) commented Jun 2, 2020:

> Hello @MasinAD, are you still willing to finish the PR?

Oh, sorry, yes, I am. I will finish it at the weekend.

@adferrand (Collaborator):

Hello @MasinAD, any update here?

@adferrand (Collaborator):

Hello @MasinAD, quick reminder if you have the time to finish your PR :)

@MasinAD (Author) commented Jan 7, 2021:

> Hello @MasinAD, quick reminder if you have the time to finish your PR :)

Sorry for the late answer. I now have time to work on this again, and I'm motivated to do so.

@renne (Contributor) commented Mar 28, 2022:

> Sorry for the late answer. I now have time to work on this again, and I'm motivated to do so.

@MasinAD Hi, what is the current state of the PR?

dependabot bot and others added 11 commits March 29, 2022 19:48
Bumps [pytest](https://github.com/pytest-dev/pytest) from 6.2.5 to 7.0.0.
- [Release notes](https://github.com/pytest-dev/pytest/releases)
- [Changelog](https://github.com/pytest-dev/pytest/blob/main/CHANGELOG.rst)
- [Commits](pytest-dev/pytest@6.2.5...7.0.0)

---
updated-dependencies:
- dependency-name: pytest
  dependency-type: direct:development
  update-type: version-update:semver-major
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Updated valid record types for dreamhost integration

* Forgot remaining
Bumps [black](https://github.com/psf/black) from 21.12b0 to 22.1.0.
- [Release notes](https://github.com/psf/black/releases)
- [Changelog](https://github.com/psf/black/blob/main/CHANGES.md)
- [Commits](https://github.com/psf/black/commits/22.1.0)

---
updated-dependencies:
- dependency-name: black
  dependency-type: direct:development
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps [types-pyyaml](https://github.com/python/typeshed) from 6.0.3 to 6.0.4.
- [Release notes](https://github.com/python/typeshed/releases)
- [Commits](https://github.com/python/typeshed/commits)

---
updated-dependencies:
- dependency-name: types-pyyaml
  dependency-type: direct:development
  update-type: version-update:semver-patch
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Bumps dnspython from 2.1.0 to 2.2.0.

---
updated-dependencies:
- dependency-name: dnspython
  dependency-type: direct:production
  update-type: version-update:semver-minor
...

Signed-off-by: dependabot[bot] <support@github.com>

Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* Add Webgo Provider

* Update Tests for Webgo

* Change Formatting

* Correction for Flake8

* Update Handling for Main DNS Entry

Co-authored-by: mb <michael.bruenker@nextevolution.de>
@MasinAD force-pushed the master branch 4 times, most recently from 10d5380 to 219455d on March 29, 2022 at 19:29.
@MasinAD (Author) commented Mar 29, 2022:

> @MasinAD Hi, what is the current state of the PR?

It fails 2 tests that I cannot figure out how to fix, but otherwise it seems to work. I updated it to the latest upstream.

@MasinAD MasinAD requested a review from adferrand March 30, 2022 18:48
@adferrand (Collaborator):

Hello @MasinAD! Thanks a lot for your update. Have you fixed the remaining tests? If so, the only thing left to finish the PR is to update the cassettes for the integration tests, and we are good to go!

@MasinAD (Author) commented Nov 7, 2022:

I thought I updated the cassettes. Are they outdated?

@renne (Contributor) commented Feb 2, 2023:

@adferrand @MasinAD Why is the pull request failing?

@adferrand (Collaborator):

I think there was an issue with tox no longer supporting the whitelist_externals parameter; I just hit that one myself. The master branch is updated, so we should rebase the PR and see if that solves the issues.

@MasinAD (Author) commented Feb 9, 2023:

Umm, is there anything I need to do? How do I rebase the PR?
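For reference, rebasing a PR branch typically looks like this (assuming the fork's master is the PR branch, as in this PR; the upstream remote URL is a placeholder for the main lexicon repository):

```shell
# Add the upstream remote once, if not already present:
git remote add upstream <main-repo-url>
git fetch upstream

# Replay the PR branch's commits on top of the updated upstream master:
git rebase upstream/master

# Resolve any conflicts, then update the PR:
git push --force-with-lease origin master
```

--force-with-lease is safer than a plain --force because it refuses to overwrite the remote branch if someone else pushed to it in the meantime.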

@renne (Contributor) commented Jun 28, 2023:

@adferrand @MasinAD Are there any plans to go on with this PR?

@adferrand (Collaborator):

Well, this PR fell into limbo because the tests are failing and have never been fixed.

Either the original contributor can work on the issue, or I can, if someone provides me valid credentials to actually test the provider.

@renne (Contributor) commented Jun 28, 2023:

@adferrand How can I send you a private message with credentials?

@MasinAD (Author) commented Jun 28, 2023:

If someone can tell me how to rebase the PR I can work on fixing stuff.
