URL validation should use urlparse #189

@waynew

Description

Python already parses URLs, and does it correctly:

>>> from urllib.parse import urlparse
>>> urlparse('https://google.com')
ParseResult(scheme='https', netloc='google.com', path='', params='', query='', fragment='')
>>> urlparse('gopher://gopher.waynewerner.com')
ParseResult(scheme='gopher', netloc='gopher.waynewerner.com', path='', params='', query='', fragment='')
>>> urlparse('tel://555-555-5555')
ParseResult(scheme='tel', netloc='555-555-5555', path='', params='', query='', fragment='')
>>> urlparse('file:///path-to-some-file')
ParseResult(scheme='file', netloc='', path='/path-to-some-file', params='', query='', fragment='')
>>> urlparse('missing-scheme.com')
ParseResult(scheme='', netloc='', path='missing-scheme.com', params='', query='', fragment='')

I had to chase this library down because click-params uses validators to validate URLs, but totally valid URLs are rejected simply because this library didn't expect their scheme 😞
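The interpreter session above suggests what a stdlib-based check could look like. The sketch below is illustrative, not this library's API: `is_url` and its acceptance rules (require a scheme, plus either a netloc or a path) are assumptions about what "valid" should mean here.

```python
from urllib.parse import urlparse

def is_url(value: str) -> bool:
    """Rough URL check built on the stdlib parser: require a scheme,
    and either a netloc (network URLs) or a path (e.g. file:// URLs).
    Illustrative only -- not the validators library's actual rule."""
    result = urlparse(value)
    return bool(result.scheme) and bool(result.netloc or result.path)

print(is_url("https://google.com"))               # True
print(is_url("gopher://gopher.waynewerner.com"))  # True
print(is_url("file:///path-to-some-file"))        # True
print(is_url("missing-scheme.com"))               # False (no scheme)
```

Because `urlparse` accepts any scheme, schemes like `gopher://` and `tel://` pass without the validator needing a hard-coded allow-list.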

Metadata

Labels: enhancement (Issue/PR: A new feature), outdated (Issue/PR: Open for more than 3 months)