
Relative uncertainty does not result in successful properties unless relative_uncertainty > 1.0 #592

Open · lilyminium (Contributor) opened this issue Jan 5, 2025 · 0 comments

Describe the bug

When determining whether a property has been successfully computed with the SimulationLayer, Evaluator checks whether the uncertainty/error of the property falls within either the user-specified absolute_tolerance or relative_tolerance. However, as written, the relative_tolerance check can only terminate successfully if relative_tolerance > 1. I think this is because the wrong quantity is used in the comparison.
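A minimal sketch (plain Python, not Evaluator code) of why the comparison can never pass for a tolerance below 1:

```python
# The convergence check effectively compares the computed uncertainty against
# relative_tolerance times *itself*. For any positive uncertainty this
# inequality holds whenever relative_tolerance < 1, so the property is never
# marked as successful.
relative_tolerance = 0.5

for uncertainty in (0.001, 0.1, 10.0):
    still_unconverged = relative_tolerance * uncertainty < uncertainty
    print(still_unconverged)  # True for every positive uncertainty
```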

To Reproduce

#!/usr/bin/env python

from openff.units import unit
from openff.evaluator.backends import ComputeResources
from openff.evaluator.backends.dask import DaskLocalCluster
from openff.evaluator.datasets import (
    MeasurementSource,
    PhysicalPropertyDataSet,
    PropertyPhase,
)
from openff.evaluator.client import EvaluatorClient, RequestOptions, ConnectionOptions
from openff.evaluator.server.server import EvaluatorServer
from openff.evaluator.forcefield import SmirnoffForceFieldSource
from openff.evaluator.properties import Density
from openff.evaluator.substances import Substance
from openff.evaluator.thermodynamics import ThermodynamicState


dataset = PhysicalPropertyDataSet()
thermodynamic_state = ThermodynamicState(
    temperature=298.15 * unit.kelvin,
    pressure=101.325 * unit.kilopascal,
)
dataset.add_properties(
    Density(
        thermodynamic_state=thermodynamic_state,
        phase=PropertyPhase.Liquid,
        value=1.0 * Density.default_unit(),
        uncertainty=1.0 * Density.default_unit(),
        source=MeasurementSource(doi=" "),
        substance=Substance.from_components("CO"),
    ),
)
for i, prop in enumerate(dataset.properties, 1):
    prop.id = str(i)


options = RequestOptions()
options.calculation_layers = ["SimulationLayer"]
schema = Density.default_simulation_schema(
    n_molecules=256,
    relative_tolerance=0.5
)

options.add_schema(
    "SimulationLayer",
    "Density",
    schema,
)


force_field_path = "openff-2.1.0.offxml"
force_field_source = SmirnoffForceFieldSource.from_path(
    force_field_path
)



with DaskLocalCluster(
    # a GPU makes the simulation much faster than CPU-only; remove the GPU
    # settings below to run on CPU alone.
    number_of_workers=1,
    resources_per_worker=ComputeResources(
        number_of_threads=1,
        number_of_gpus=1,
        preferred_gpu_toolkit=ComputeResources.GPUToolkit.CUDA,
    ),
) as calculation_backend:
    server = EvaluatorServer(
        calculation_backend=calculation_backend,
        working_directory=".",
        port=8998,
        delete_working_files=False
    )
    with server:
        client = EvaluatorClient(connection_options=ConnectionOptions(server_port=8998))

        request, error = client.request_estimate(
            dataset,
            force_field_source,
            options,
        )

        # block until computation finished
        results, exception = request.results(synchronous=True, polling_interval=30)
        assert exception is None


print(len(results.queued_properties))

print(len(results.estimated_properties))

print(len(results.unsuccessful_properties))
print(len(results.exceptions))

Output

0
0
1
0

Computing environment (please complete the following information):

  • Operating system
  • Output of running conda list
  • Evaluator: 0.4.10

Additional context

The particular check is here:

elif (
    options.relative_tolerance != UNDEFINED
    and options.relative_tolerance * uncertainty < uncertainty
):
    continue

IMO it makes more sense to compare the measured uncertainty scaled by relative_tolerance against the computed uncertainty.
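A sketch of the fix suggested above, using hypothetical variable names (`measured_uncertainty` standing for the uncertainty attached to the experimental data point, `computed_uncertainty` for the simulation estimate):

```python
relative_tolerance = 0.5
measured_uncertainty = 1.0   # uncertainty of the reference (experimental) value
computed_uncertainty = 0.3   # uncertainty of the simulation estimate

# Accept the property once the simulation's uncertainty drops below the
# requested fraction of the measured uncertainty.
converged = computed_uncertainty < relative_tolerance * measured_uncertainty
print(converged)  # True: 0.3 < 0.5 * 1.0
```

With this form, relative_tolerance = 0.5 behaves as one would expect: the calculation terminates once the computed uncertainty is within half of the measured uncertainty.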
