DP-3047: Present additional metadata items (#91)
* update form with updated fields

* update form with updated fields

* implement custom property search

* update pre-commit
- add sync_with_poetry hook to keep pre-commit in sync with pyproject.toml
- config flake8
- config isort to use black via pyproject.toml
- add .secrets.baseline for detect-secrets
- move testing libraries to dev.dependencies in pyproject.toml

* linting

* linting and formatting for readability

* revert domains (plural) -> domain (singular) changes (may be implemented in later PR)

* add descriptions and code annotations for added helper functions
add default for dict.get() [linting]

* keep original indent for domain filters

* linting

* add pre-commit instructions/details to readme

* Changed value for selection option to an empty string and updated where_to_access choice value to 'Analytical_Platform' to avoid spaces.

* Make domain a single choice field

* Removed admin.py as not required

* Moved filter code to new partial template

* refactor domain and subdomain choices to get the urn value
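
  For illustration, a minimal sketch of that refactor; the facet option object and its attributes are assumptions, not the catalogue client's actual API:

  ```python
  # Hypothetical: build (urn, label) choice tuples for the form,
  # e.g. ("urn:li:domain:HMPPS", "HMPPS"). The .value/.label attributes
  # on the facet options are assumed for illustration only.
  def choices_from_facet_options(options) -> list[tuple[str, str]]:
      return [("", "All Domains")] + [(opt.value, opt.label) for opt in options]
  ```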

* build filter strings method to create filter search strings
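
  A hedged sketch of what such a method might look like; the `field=value` syntax and the field name used below are illustrative assumptions, not the project's real query format:

  ```python
  def build_filter_strings(field: str, selected: list[str]) -> list[str]:
      """Turn a form field and its selected values into filter search strings."""
      return [f"{field}={value}" for value in selected if value]


  # e.g. build_filter_strings("classifications", ["OFFICIAL", "SECRET"])
  # -> ["classifications=OFFICIAL", "classifications=SECRET"]
  ```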

* refactor classifications and where to access string generation, remove full query builder

* refactor where to access href creation

* add get_keys and format_label template filter functions
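
  A minimal sketch of what these Django template filters might do, assuming `get_keys` exposes a dict's keys and `format_label` prettifies snake_case values (both behaviours are assumptions, not taken from the diff):

  ```python
  from django import template

  register = template.Library()


  @register.filter
  def get_keys(value: dict) -> list:
      """Let templates iterate over a dict's keys."""
      return list(value.keys())


  @register.filter
  def format_label(value: str) -> str:
      """Turn a raw value such as 'where_to_access' into a display label."""
      return value.replace("_", " ").title()
  ```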

* Update broken tests

* add selected filters partial

* update packages

* refactor encode_without_filter function

* remove redundant code

* update tests with singular domain, classifications and where_to_access

* Update singular domain and add linting fixes

* linter updates

* add service for dataproductdetails

* update templates - data product details and search

* url for search link and details redirect in views

* add tests for data product details and view

* add selenium test for data product detail page

* update poetry

* fix selenium tests that sometimes hit a dead end

* actually make the tests work for data product details

* this template will always be data product and align case

* rename view, service and add blank dataset template

* markdown and trim for table descriptions in data product detail page

* remove self.data_product_name from dataproductservice

* try reinstalling deps for selenium tests

* revert workflow change

* Add javascript for domain filter widget

Domain will have top level and subdomain selections, and work similarly
to "Topic/Sub-Topic" on GOV.UK search.

The form will submit domain and subdomain as separate fields, so we
need to combine them on the backend.

If javascript is not enabled, then the subdomain field is not displayed
and it will work as before.

This is tested using jest and jest-dom.
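
A minimal sketch of the backend combination step described above, assuming the subdomain (when selected) should win over the top-level domain; the field names follow the form fields mentioned in this change:

```python
def combine_domains(cleaned_data: dict) -> str:
    """Prefer the more specific subdomain if the user picked one."""
    return cleaned_data.get("subdomain", "") or cleaned_data.get("domain", "")
```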

* Revert template - will add backend later

* Add test

* Install a compatible version of chrome/chromedriver

The chromedriver library updates more frequently than the chrome
distributed in ubuntu-latest, but these need to be the same version,
otherwise axe-core breaks.

As a workaround, try to install a version that matches whatever chrome
is on the path.

See also dequelabs/axe-core-npm#401 (comment)
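
A hedged Python illustration of the version-matching idea (the real workflow step may well be shell; the binary name and version-string format are assumptions):

```python
import re
import subprocess


def chrome_major_version(binary: str = "google-chrome") -> str:
    """Return the major version of the Chrome on the PATH, e.g. '121'."""
    output = subprocess.run(
        [binary, "--version"], capture_output=True, text=True, check=True
    ).stdout  # e.g. "Google Chrome 121.0.6167.85"
    return re.search(r"(\d+)\.", output).group(1)
```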

* update form with updated fields

* update form with updated fields

* implement custom property search

* revert domains (plural) -> domain (singular) changes (may be implemented in later PR)

* Make domain a single choice field

* refactor classifications and where to access string generation, remove full query builder

* refactor where to access href creation

* update packages

* linter updates

* remove duplicated function

* update search result ui fields

* fix result type in conftest

* remove official-sensitive classification

* add Custom Properties match display and rename 'list' to value_list
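
  An illustrative sketch of surfacing a custom-property match in a search result; the result shape and key names are assumptions, not the catalogue client's schema:

  ```python
  def matched_custom_properties(result: dict, query: str) -> dict[str, str]:
      """Return custom properties whose values contain the search query."""
      properties = result.get("custom_properties", {})
      return {
          key: value
          for key, value in properties.items()
          if query.lower() in str(value).lower()
      }
  ```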

---------

Co-authored-by: Tom Webber <thomas.webber@digital.justice.gov.uk>
Co-authored-by: LavMatt <mattlaverty@gmail.com>
Co-authored-by: Mat Moore <MatMoore@users.noreply.github.com>
4 people authored Feb 26, 2024
1 parent 6048171 commit 91ed261
Showing 32 changed files with 2,005 additions and 180 deletions.
26 changes: 23 additions & 3 deletions .pre-commit-config.yaml
@@ -11,17 +11,30 @@ repos:
exclude: tf$|j2$

- repo: https://github.com/psf/black
rev: 23.11.0
rev: 23.12.1
hooks:
- id: black
name: black formatting
# args: [--config=./pyproject.toml]

- repo: https://github.com/PyCQA/flake8
rev: 6.1.0
rev: 7.0.0
hooks:
- id: flake8
name: flake8 lint
args: [
'--ignore=E203,E266,W503,F403',
'--exclude=".git, .mypy_cache, .pytest_cache, build, dist"',
'--max-line-length=89',
'--max-complexity=18',
'--select="B,C,E,F,W,T4,B9"'
]
additional_dependencies:
- flake8-broken-line
- flake8-bugbear
- flake8-comprehensions
- flake8-debugger
- flake8-string-format

- repo: https://github.com/Yelp/detect-secrets
rev: v1.4.0
@@ -31,7 +44,14 @@ repos:
exclude: package.lock.json

- repo: https://github.com/pycqa/isort
rev: 5.12.0
rev: 5.13.2
hooks:
- id: isort
name: isort (python)
additional_dependencies: ["toml"]

- repo: https://github.com/floatingpurr/sync_with_poetry
rev: "1.1.0"
hooks:
- id: sync_with_poetry
args: []
122 changes: 122 additions & 0 deletions .secrets.baseline
@@ -0,0 +1,122 @@
{
"version": "1.4.0",
"plugins_used": [
{
"name": "ArtifactoryDetector"
},
{
"name": "AWSKeyDetector"
},
{
"name": "AzureStorageKeyDetector"
},
{
"name": "Base64HighEntropyString",
"limit": 4.5
},
{
"name": "BasicAuthDetector"
},
{
"name": "CloudantDetector"
},
{
"name": "DiscordBotTokenDetector"
},
{
"name": "GitHubTokenDetector"
},
{
"name": "HexHighEntropyString",
"limit": 3.0
},
{
"name": "IbmCloudIamDetector"
},
{
"name": "IbmCosHmacDetector"
},
{
"name": "JwtTokenDetector"
},
{
"name": "KeywordDetector",
"keyword_exclude": ""
},
{
"name": "MailchimpDetector"
},
{
"name": "NpmDetector"
},
{
"name": "PrivateKeyDetector"
},
{
"name": "SendGridDetector"
},
{
"name": "SlackDetector"
},
{
"name": "SoftlayerDetector"
},
{
"name": "SquareOAuthDetector"
},
{
"name": "StripeDetector"
},
{
"name": "TwilioKeyDetector"
}
],
"filters_used": [
{
"path": "detect_secrets.filters.allowlist.is_line_allowlisted"
},
{
"path": "detect_secrets.filters.common.is_ignored_due_to_verification_policies",
"min_level": 2
},
{
"path": "detect_secrets.filters.heuristic.is_indirect_reference"
},
{
"path": "detect_secrets.filters.heuristic.is_likely_id_string"
},
{
"path": "detect_secrets.filters.heuristic.is_lock_file"
},
{
"path": "detect_secrets.filters.heuristic.is_not_alphanumeric_string"
},
{
"path": "detect_secrets.filters.heuristic.is_potential_uuid"
},
{
"path": "detect_secrets.filters.heuristic.is_prefixed_with_dollar_sign"
},
{
"path": "detect_secrets.filters.heuristic.is_sequential_string"
},
{
"path": "detect_secrets.filters.heuristic.is_swagger_file"
},
{
"path": "detect_secrets.filters.heuristic.is_templated_secret"
}
],
"results": {
"templates/base/base.html": [
{
"type": "Base64 High Entropy String",
"filename": "templates/base/base.html",
"hashed_secret": "f6538b22f89b1e2b05570de751f2932c6bca9969",
"is_verified": false,
"line_number": 40
}
]
},
"generated_at": "2024-02-21T10:23:47Z"
}
14 changes: 12 additions & 2 deletions README.md
@@ -4,8 +8,8 @@ You will need npm (for javascript dependencies) and poetry (for python dependenc

1. Run `poetry install` to install python dependencies
2. Copy `.env.example` to `.env`.
3. You will need to obtain an access token from Datahub catalogue and populate the `CATALOGUE_TOKEN` var in .env to be able to retrieve search data.
https://datahub.apps-tools.development.data-platform.service.justice.gov.uk/settings/tokens
3. You will need to obtain an access token from Datahub catalogue and populate the
`CATALOGUE_TOKEN` var in .env to be able to retrieve search data.
4. Run `poetry run python manage.py runserver`

Run `npm install` and then `npm run sass` to compile the stylesheets.
@@ -16,6 +16,16 @@ Run `npm install` and then `npm run sass` to compile the stylesheets.

![Screenshot of the service showing the search page](image.png)

## Contributing

Run `pre-commit install` from inside the poetry environment to set up pre-commit hooks.

- Linting and formatting handled by `black`, `flake8`, `pre-commit`, and `isort`
- `isort` is configured in `pyproject.toml`
- `detect-secrets` is used to prevent leakage of secrets
- `sync_with_poetry` ensures the versions of the modules in the pre-commit specification
are kept in line with those in the `pyproject.toml` config.

## Testing

- Python unit tests: `pytest -m 'not slow'`
2 changes: 1 addition & 1 deletion core/__init__.py
@@ -1,2 +1,2 @@
# -*- coding: utf-8 -*-
from .settings import *
from .settings import * # noqa: F401
11 changes: 4 additions & 7 deletions core/settings.py
@@ -67,7 +67,7 @@

AUTH_PASSWORD_VALIDATORS = [
{
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator",
"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator", # noqa: E501
},
{
"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator",
@@ -86,10 +86,8 @@

LANGUAGE_CODE = "en-gb"

TIME_ZONE = "UTC"

USE_I18N = True

TIME_ZONE = "Europe/London"
USE_I18N = False
USE_TZ = True


@@ -113,8 +111,7 @@
"django.contrib.staticfiles.finders.AppDirectoriesFinder",
)

SAMPLE_SEARCH_RESULTS_FILENAME = BASE_DIR / \
"sample_data/sample_search_page.yaml"
SAMPLE_SEARCH_RESULTS_FILENAME = BASE_DIR / "sample_data/sample_search_page.yaml"

with open(SAMPLE_SEARCH_RESULTS_FILENAME) as f:
SAMPLE_SEARCH_RESULTS = yaml.safe_load(f)
1 change: 1 addition & 0 deletions core/urls.py
@@ -14,6 +14,7 @@
1. Import the include() function: from django.urls import include, path
2. Add a URL to urlpatterns: path('blog/', include('blog.urls'))
"""

from django.contrib import admin
from django.urls import include, path

3 changes: 0 additions & 3 deletions home/admin.py

This file was deleted.

47 changes: 39 additions & 8 deletions home/forms/search.py
@@ -4,12 +4,13 @@
from django import forms


def get_domain_choices():
def get_domain_choices() -> list[tuple[str, str]]:
"""Make API call to obtain domain choices"""
# TODO: pull in the domains from the catalogue client
# facets = client.search_facets()
# domain_list = facets.options("domains")
return [
("", "All Domains"),
("urn:li:domain:HMCTS", "HMCTS"),
("urn:li:domain:HMPPS", "HMPPS"),
("urn:li:domain:HQ", "HQ"),
@@ -69,6 +70,18 @@ def get_sort_choices():
]


def get_classification_choices():
return [
("OFFICIAL", "Official"),
("SECRET", "Secret"),
("TOP-SECRET", "Top-Secret"),
]


def get_where_to_access_choices():
return [("analytical_platform", "Analytical Platform")]


class SearchForm(forms.Form):
"""Django form to represent data product search page inputs"""

@@ -78,9 +91,27 @@ class SearchForm(forms.Form):
required=False,
widget=forms.TextInput(attrs={"class": "govuk-input search-input"}),
)
domains = forms.MultipleChoiceField(
domain = forms.ChoiceField(
choices=get_domain_choices,
required=False,
widget=forms.Select(
attrs={
"form": "searchform",
"class": "govuk-select",
"aria-label": "domain",
}
),
)
classifications = forms.MultipleChoiceField(
choices=get_classification_choices,
required=False,
widget=forms.CheckboxSelectMultiple(
attrs={"class": "govuk-checkboxes__input", "form": "searchform"}
),
)
where_to_access = forms.MultipleChoiceField(
choices=get_where_to_access_choices,
required=False,
widget=forms.CheckboxSelectMultiple(
attrs={"class": "govuk-checkboxes__input", "form": "searchform"}
),
@@ -103,13 +134,13 @@ def __init__(self, *args, **kwargs):
super().__init__(*args, **kwargs)
self.initial["sort"] = "relevance"

def encode_without_filter(self, filter_to_remove):
def encode_without_filter(self, filter_name: str, filter_value: str):
"""Preformat hrefs to drop individual filters"""
# Deepcopy the cleaned data dict to avoid modifying it inplace
query_params = deepcopy(self.cleaned_data)

query_params["domains"].remove(filter_to_remove)
if len(query_params["domains"]) == 0:
query_params.pop("domains")

value = query_params.get(filter_name)
if isinstance(value, list) and filter_value in value:
value.remove(filter_value)
elif isinstance(value, str) and filter_value == value:
query_params.pop(filter_name)
return f"?{urlencode(query_params, doseq=True)}"
2 changes: 1 addition & 1 deletion home/helper.py
@@ -8,5 +8,5 @@ def filter_seleted_domains(domain_list, domains):

def get_domain_list(client):
facets = client.search_facets()
domain_list = facets.options("domains")
domain_list = facets.options("domain")
return domain_list
