
various model evaluation fixes and enhancements #5123

Merged
1 commit merged on Nov 18, 2024

Conversation

imanjra
Contributor

@imanjra imanjra commented Nov 15, 2024

What changes are proposed in this pull request?

(Please fill in changes proposed in this fix)

How is this patch tested? If it is not, please explain why.

(Details)

Release Notes

Is this a user-facing change that should be mentioned in the release notes?

  • No. You can skip the rest of this section.
  • Yes. Give a description of this change to be included in the release
    notes for FiftyOne users.

(Details in 1-2 sentences. You can just refer to another PR with a description
if this PR is part of a larger change.)

What areas of FiftyOne does this PR affect?

  • App: FiftyOne application changes
  • Build: Build and test infrastructure changes
  • Core: Core fiftyone Python library changes
  • Documentation: FiftyOne documentation changes
  • Other

Summary by CodeRabbit

Release Notes

  • New Features

    • Enhanced evaluation metrics display with improved configurability for class performance and confusion matrices.
    • New filtering capabilities for evaluation metrics.
    • Additional methods for generating and sorting confusion matrices.
  • Bug Fixes

    • Increased maximum height for notes display to improve readability.
  • Improvements

    • Simplified user interactions by limiting options in the plot mode bar.
    • Streamlined logic for rendering evaluation components, enhancing overall user experience.
  • Chores

    • Refactored internal logic for clarity and maintainability across various components.

@imanjra imanjra requested a review from a team November 15, 2024 14:05
Contributor

coderabbitai bot commented Nov 15, 2024

Walkthrough

The changes in this pull request primarily focus on enhancing the Evaluation component and related components within the Native Model Evaluation View. Key updates include the introduction of new state variables for managing configurations of class performance and confusion matrices, improved filtering capabilities, and adjustments to rendering logic. Additionally, modifications to the EvaluationNotes, EvaluationPlot, and EvaluationPanel classes enhance user interaction, caching behavior, and metrics handling. Overall, these changes improve the configurability and functionality of the evaluation metrics display and associated components.

Changes

  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx: Added new state variables for class performance and confusion matrix configurations; updated rendering logic and added a new useActiveFilter hook.
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationNotes.tsx: Increased MAX_NOTES_HEIGHT from 72 to 128; refactored color logic and rendering conditions for clarity.
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationPlot.tsx: Removed "select2d" and "toImage" from modeBarButtonsToRemove, refining user interaction options.
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx: Updated logic for showEmptyOverview to require both computedEvaluations.length and pending_evaluations.length to be zero.
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts: Renamed parameters in getNumericDifference for clarity; updated internal logic accordingly.
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py: Added caching constants and methods in EvaluationPanel; replaced get_confusion_matrix with get_confusion_matrix_colorscale and added get_confusion_matrices method.

Possibly related PRs

  • fix model evaluation panel name #5090: The changes in this PR involve modifications to the EvaluationPanel class, specifically updating the config property and enhancing error handling, which relates to the changes made in the main PR regarding the Evaluation component and its handling of evaluation data.

Suggested labels

cleaning

Suggested reviewers

  • Br2850
  • ritch

Poem

🐰 In the meadow where metrics bloom,
New states and filters dispel the gloom.
With colors and configs, we take our flight,
Evaluations shine, oh what a sight!
Hopping through data, with joy we play,
Analyzing models in a brighter way! 🌼



@imanjra imanjra force-pushed the bugfig/model-evaluation-x1 branch 5 times, most recently from eee487d to f9c7665 on November 16, 2024 at 20:33
@imanjra imanjra force-pushed the bugfig/model-evaluation-x1 branch from f9c7665 to b43cfc3 on November 17, 2024 at 15:01
@imanjra imanjra marked this pull request as ready for review on November 17, 2024 at 15:01
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 7

🧹 Outside diff range and nitpick comments (9)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts (1)

25-31: Improve error handling and type safety

The function has several areas for improvement:

  1. With proper TypeScript types, the runtime type check becomes redundant
  2. Missing handling for division by zero when calculating percentage
  3. Return type is not explicitly defined

Consider this improved implementation:

export function getNumericDifference(
  value: number,
  compareValue: number,
  percentage = false,
  fractionDigits?: number
): number {
-  if (typeof value === "number" && typeof compareValue === "number") {
    const difference = value - compareValue;
    if (percentage) {
+     if (compareValue === 0) {
+       throw new Error("Cannot calculate percentage difference with zero as compare value");
+     }
      const percentageDifference = (difference / compareValue) * 100;
      return formatValue(percentageDifference, fractionDigits);
    }
    return formatValue(difference, fractionDigits);
-  }
}
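For a sense of the resulting behavior, a few illustrative calls (illustrative values only, and assuming formatValue simply rounds to the requested precision):

getNumericDifference(12, 10);          // 2 (absolute difference)
getNumericDifference(12, 10, true, 1); // 20 (percentage difference with one fraction digit)
getNumericDifference(12, 0, true);     // throws, given the zero compare-value guard above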
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationPlot.tsx (1)

Line range hint 1-89: Consider enhancing component robustness and flexibility

The component implementation is solid, but could benefit from these improvements:

  1. Make the plot mode bar configuration customizable through props
  2. Add error boundaries to handle Plot rendering failures gracefully
  3. Consider using a more specific type for configDefaults instead of Partial<PlotParams["config"]>

Example implementation:

import { ErrorBoundary } from 'react-error-boundary';

type ModeBarButton = 
  | "select2d" 
  | "toImage" 
  | "autoScale2d" 
  | "lasso2d" 
  | "pan2d" 
  | "resetScale2d" 
  | "zoom2d" 
  | "zoomIn2d" 
  | "zoomOut2d";

interface PlotConfig extends Partial<PlotParams["config"]> {
  modeBarButtonsToRemove?: ModeBarButton[];
}

interface EvaluationPlotProps extends Omit<PlotParams, "layout" | "style" | "config"> {
  layout?: Partial<PlotParams["layout"]>;
  style?: React.CSSProperties;
  config?: PlotConfig;
  disabledModeBarButtons?: ModeBarButton[];
}

export default function EvaluationPlot(props: EvaluationPlotProps) {
  const { 
    layout = {}, 
    data, 
    style = {}, 
    disabledModeBarButtons = ["select2d", "toImage", ...], 
    ...otherProps 
  } = props;
  
  // ... existing code ...

  const configDefaults: PlotConfig = useMemo(() => ({
    displaylogo: false,
    scrollZoom: false,
    modeBarButtonsToRemove: disabledModeBarButtons,
  }), [disabledModeBarButtons]);

  return (
    <ErrorBoundary fallback={<div>Error loading plot</div>}>
      <Plot
        config={configDefaults}
        layout={{ ...layoutDefaults, ...layout }}
        style={{ height: "100%", width: "100%", zIndex: 1, ...style }}
        data={data}
        {...otherProps}
      />
    </ErrorBoundary>
  );
}
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationNotes.tsx (1)

67-67: Consider explicit padding values for better maintainability

While the shorthand padding of 0.5 works, explicit values might be more maintainable and less prone to misinterpretation.

Consider using explicit values or theme spacing:

-p: 0.5,
+p: (theme) => theme.spacing(0.5),
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (2)

Line range hint 8-8: Add TypeScript interface for component props

The component lacks proper TypeScript types for its props, which reduces type safety and IDE support.

Consider adding an interface:

interface NativeModelEvaluationViewProps {
  data?: {
    evaluations?: Array<{ key: string; id: string }>;
    view?: Record<string, unknown>;
    statuses?: Record<string, unknown>;
    notes?: Record<string, unknown>;
    permissions?: Record<string, unknown>;
    pending_evaluations?: Array<unknown>;
  };
  schema: {
    view: {
      on_change_view: unknown;
      on_evaluate_model: unknown;
      load_evaluation: unknown;
      load_evaluation_view: unknown;
      set_status: unknown;
      set_note: unknown;
      load_view: unknown;
    };
  };
  onChange: (path: string, value: unknown) => void;
  layout?: {
    height?: number;
  };
}

Line range hint 8-107: Consider splitting component for better maintainability

The component handles multiple responsibilities including state management, navigation, and rendering different views. This could make it harder to maintain as complexity grows.

Consider:

  1. Extracting the view state logic into a custom hook
  2. Moving the event handlers into a separate utility
  3. Splitting the component into smaller, focused components

Example structure:

// useEvaluationState.ts
export const useEvaluationState = (evaluations, onChange) => {
  // State management logic
};

// EvaluationEventHandlers.ts
export const createEvaluationHandlers = (triggerEvent, onChange) => {
  // Event handler logic
};

// NativeModelEvaluationView.tsx
const NativeModelEvaluationView: React.FC<NativeModelEvaluationViewProps> = (props) => {
  const state = useEvaluationState(props.evaluations, props.onChange);
  const handlers = createEvaluationHandlers(triggerEvent, props.onChange);
  
  return (
    <Box>
      <EvaluationRouter
        page={state.page}
        handlers={handlers}
        {...props}
      />
    </Box>
  );
};
fiftyone/operators/builtins/panels/model_evaluation/__init__.py (2)

309-309: Use logging instead of printing exception tracebacks

Using traceback.print_exc() directly prints the traceback to standard error, which may not be appropriate in production environments. Consider using the logging module to log exceptions, providing better control over logging levels and destinations.

Apply this diff to use logging:

-import traceback
+import traceback
+import logging

...

except Exception:
-    traceback.print_exc()
+    logging.exception("Error caching evaluation data")

318-321: Catch specific exceptions instead of broad 'Exception'

Catching a broad Exception can mask unexpected errors and make debugging more difficult. It's better to catch specific exceptions anticipated in this context, such as KeyError or AttributeError.

Apply this diff to catch specific exceptions:

try:
    return bool(dataset._doc.evaluations[eval_key].results)
-except Exception:
+except (KeyError, AttributeError):
    return False
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (2)

73-80: State variables lack initial types and values.

While declaring state variables with useState, it's best practice to provide explicit type annotations and initial values for better type safety and code clarity.

Apply this diff to specify types and initial values:

 const [classPerformanceConfig, setClassPerformanceConfig] =
-  useState<PLOT_CONFIG_TYPE>({});
+  useState<PLOT_CONFIG_TYPE>({ sortBy: "az", limit: undefined });

 const [classPerformanceDialogConfig, setClassPerformanceDialogConfig] =
-  useState<PLOT_CONFIG_DIALOG_TYPE>(DEFAULT_BAR_CONFIG);
+  useState<PLOT_CONFIG_DIALOG_TYPE>({ ...DEFAULT_BAR_CONFIG, open: false });

 const [confusionMatrixConfig, setConfusionMatrixConfig] =
-  useState<PLOT_CONFIG_TYPE>({ log: true });
+  useState<PLOT_CONFIG_TYPE>({ sortBy: "az", limit: undefined, log: true });

 const [confusionMatrixDialogConfig, setConfusionMatrixDialogConfig] =
-  useState<PLOT_CONFIG_DIALOG_TYPE>(DEFAULT_BAR_CONFIG);
+  useState<PLOT_CONFIG_DIALOG_TYPE>({ ...DEFAULT_BAR_CONFIG, open: false });

1485-1505: Add TypeScript type annotations to function parameters for better type safety.

The functions formatPerClassPerformance, getMatrix, getConfigLabel, and useActiveFilter lack type annotations for their parameters. Adding types improves code readability and helps prevent bugs.

Example:

-function formatPerClassPerformance(perClassPerformance, barConfig) {
+function formatPerClassPerformance(
+  perClassPerformance: PerformanceMetric[],
+  barConfig: PLOT_CONFIG_TYPE
+): PerformanceMetric[] {

...

-function getMatrix(matrices, config) {
+function getMatrix(
+  matrices: ConfusionMatrices,
+  config: PLOT_CONFIG_TYPE
+): ConfusionMatrixResult | undefined {

...

-function getConfigLabel({ config, type, dashed }) {
+function getConfigLabel({
+  config,
+  type,
+  dashed,
+}: {
+  config: PLOT_CONFIG_TYPE;
+  type?: string;
+  dashed?: boolean;
+}): string {

...

-function useActiveFilter(evaluation, compareEvaluation) {
+function useActiveFilter(
+  evaluation: EvaluationType,
+  compareEvaluation: EvaluationType
+): ActiveFilter | undefined {

Please define appropriate types like PerformanceMetric, ConfusionMatrices, ConfusionMatrixResult, EvaluationType, and ActiveFilter based on the expected input and output.

Also applies to: 1507-1515, 1517-1528, 1530-1555
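For illustration, minimal shapes these types could take, inferred only from the usages visible in this review; they are assumptions, not definitions from the codebase:

// hypothetical type sketches; adjust to the actual data shapes used by the panel
type PerformanceMetric = { class: string; value: number };
type ConfusionMatrixResult = { labels: string[]; matrix: number[][] };
type ConfusionMatrices = Record<string, ConfusionMatrixResult>;
// ActiveFilter mirrors the objects returned by useActiveFilter later in this file
type ActiveFilter =
  | { type: "label"; value: string }
  | { type: "metric"; value: string; isCompare: boolean };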

📜 Review details

Configuration used: .coderabbit.yaml
Review profile: CHILL

📥 Commits

Reviewing files that changed from the base of the PR, between commits 9a13da7 and b43cfc3.

📒 Files selected for processing (6)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (26 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationNotes.tsx (4 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationPlot.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1 hunks)
  • app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts (1 hunks)
  • fiftyone/operators/builtins/panels/model_evaluation/__init__.py (6 hunks)
🧰 Additional context used
📓 Path-based instructions (5)
The same instruction applies to all five TypeScript files under review (Evaluation.tsx, EvaluationNotes.tsx, EvaluationPlot.tsx, index.tsx, and utils.ts):

Pattern **/*.{ts,tsx}: Review the Typescript and React code for conformity with best practices in React, Recoil, Graphql, and Typescript. Highlight any deviations.

🪛 Biome
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx

[error] 1318-1318: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.

See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.

(lint/suspicious/noGlobalIsNan)


[error] 1391-1391: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.

See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.

(lint/suspicious/noGlobalIsNan)

🔇 Additional comments (7)
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/utils.ts (1)

20-31: Verify usage of getNumericDifference across the codebase

The parameter renaming and behavior changes might affect existing usages. Let's verify all call sites to ensure they're updated accordingly.

✅ Verification successful

Function usage is localized and consistent

The getNumericDifference function is only used within the NativeModelEvaluationView component, specifically in Evaluation.tsx. All usages follow the same parameter pattern, passing value and compareValue consistently. The function is properly scoped to its specific use case in the model evaluation view.

  • All 4 usages are in app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx
  • The function is defined in the adjacent utils.ts file in the same directory
  • Parameter usage is consistent across all call sites
🏁 Scripts executed

The following scripts were executed for the analysis:

Script:

#!/bin/bash
# Search for all usages of getNumericDifference
rg "getNumericDifference\(" --type ts --type tsx -A 2

Length of output: 83


Script:

#!/bin/bash
# Search for all usages of getNumericDifference without file type restrictions
rg "getNumericDifference\(" -A 2

Length of output: 2004


Script:

#!/bin/bash
# Find all TypeScript files that might contain the function
fd "\.tsx?$" | xargs rg "getNumericDifference\(" -A 2

Length of output: 2025


Script:

#!/bin/bash
# Search for the function definition to understand its location
ast-grep --pattern 'function getNumericDifference($_) { $$ }'

Length of output: 63


Script:

#!/bin/bash
# Get more context around the usage in Evaluation.tsx
rg "getNumericDifference\(" app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx -B 2 -A 2

Length of output: 936

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationPlot.tsx (1)

53-60: Verify the removal of plot interaction capabilities

The addition of "select2d" and "toImage" to modeBarButtonsToRemove removes important plot interaction capabilities:

  • Data point selection functionality ("select2d")
  • Plot export functionality ("toImage")

These features are commonly used in data analysis workflows. Could you please:

  1. Confirm if removing these capabilities aligns with user requirements
  2. Document the rationale for removing these interactions

If these features are being removed to simplify the UI, consider:

  • Making these features configurable through props
  • Documenting alternative ways for users to achieve these functionalities
app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/EvaluationNotes.tsx (3)

8-8: Verify layout impact of increased MAX_NOTES_HEIGHT

The maximum height for notes has been increased from 72px to 128px. While this allows more content to be visible by default, please verify that this change doesn't negatively impact the overall layout or cause unwanted scrolling in various viewport sizes.

Consider making this value configurable through theme constants or component props to allow for more flexible customization based on different use cases or viewport sizes.
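A minimal sketch of the prop-driven alternative suggested here; the prop name maxHeight and its handling are assumptions, not code from this PR:

// hypothetical prop; the component currently hard-codes MAX_NOTES_HEIGHT = 128
type EvaluationNotesProps = {
  notes?: string;
  maxHeight?: number;
};

function resolveNotesMaxHeight(props: EvaluationNotesProps): number {
  // fall back to the current constant when no override is provided
  return props.maxHeight ?? 128;
}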


38-39: LGTM! Improved code readability

The introduction of the isOverview variable and refactored color logic improves code readability and maintainability while maintaining the same functionality.
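As a rough picture of the pattern being described (identifier names other than isOverview are assumptions):

const isOverview = variant === "overview";
// a single boolean now drives both the color choice and the rendering branches
const notesColor = isOverview ? "text.secondary" : "text.primary";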


57-57: LGTM! Enhanced conditional rendering

The addition of !isOverview check appropriately prevents the expand button from showing in overview mode, which aligns with the component's intended behavior in different variants.

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/index.tsx (1)

43-44: LGTM! Enhanced empty state logic

The updated condition properly considers both computed and pending evaluations before showing the empty state, which is more accurate.
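A minimal sketch of the condition described here, using the names from the change summary (the surrounding component code is not shown in this review, so treat this as illustrative):

// show the empty overview only when nothing is computed and nothing is pending
const showEmptyOverview =
  computedEvaluations.length === 0 && pending_evaluations.length === 0;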

app/packages/core/src/plugins/SchemaIO/components/NativeModelEvaluationView/Evaluation.tsx (1)

94-95: Ensure getMatrix handles undefined inputs safely.

When calling getMatrix, if evaluation?.confusion_matrices or compareEvaluation?.confusion_matrices is undefined, it may lead to unintended errors inside getMatrix.

Consider adding checks before calling getMatrix:

 if (evaluation?.confusion_matrices) {
   return getMatrix(evaluation.confusion_matrices, confusionMatrixConfig);
 } else {
   return null;
 }

Alternatively, ensure getMatrix function handles undefined inputs appropriately.

Also applies to: 97-101

Comment on lines +20 to 24
  value,
  compareValue,
  percentage = false,
  fractionDigits?: number
) {

🛠️ Refactor suggestion

Add type annotations to function parameters

The parameters lack explicit type annotations, which could lead to type-related issues at runtime. Consider adding proper TypeScript types to improve type safety and code maintainability.

export function getNumericDifference(
-  value,
-  compareValue,
+  value: number,
+  compareValue: number,
  percentage = false,
  fractionDigits?: number
)
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-  value,
-  compareValue,
+  value: number,
+  compareValue: number,
   percentage = false,
   fractionDigits?: number
 ) {

from fiftyone import ViewField as F
from fiftyone.operators.categories import Categories
from fiftyone.operators.panel import Panel, PanelConfig
from fiftyone.operators.utils import is_new
import fiftyone.operators.types as types
from fiftyone.core.plots.plotly import _to_log_colorscale

🛠️ Refactor suggestion

Avoid importing private functions from other modules

Importing _to_log_colorscale from fiftyone.core.plots.plotly accesses a private function (indicated by the leading underscore), which may not be intended for external use and could change without notice. It's better to use a public API or refactor the code to avoid relying on internal functions.

Comment on lines +332 to +349
updated_pending_evaluations_for_dataset_in_stored = []
update_store = False
for pending in pending_evaluations_for_dataset_in_store:
    pending_eval_key = pending.get("eval_key")
    if pending_eval_key in eval_keys:
        update_store = True
    else:
        pending_evaluations.append(pending)
        updated_pending_evaluations_for_dataset_in_stored.append(
            pending
        )
for key in eval_keys:
    if not self.has_evaluation_results(ctx.dataset, key):
        pending_evaluations.append({"eval_key": key})
if update_store:
    pending_evaluations_in_store[
        dataset_id
    ] = updated_pending_evaluations_for_dataset_in_stored

⚠️ Potential issue

Correct typo in variable name: '..._in_stored' should be '..._in_store'

The variable updated_pending_evaluations_for_dataset_in_stored is misspelled. It should be updated_pending_evaluations_for_dataset_in_store to reflect the intended meaning. This typo occurs in multiple places and could lead to NameError exceptions and incorrect behavior.

Apply this diff to correct the variable name:

-updated_pending_evaluations_for_dataset_in_stored = []
+updated_pending_evaluations_for_dataset_in_store = []

...

-updated_pending_evaluations_for_dataset_in_stored.append(
+updated_pending_evaluations_for_dataset_in_store.append(

...

 pending_evaluations_in_store[
     dataset_id
-] = updated_pending_evaluations_for_dataset_in_stored
+] = updated_pending_evaluations_for_dataset_in_store
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-updated_pending_evaluations_for_dataset_in_stored = []
+updated_pending_evaluations_for_dataset_in_store = []
 update_store = False
 for pending in pending_evaluations_for_dataset_in_store:
     pending_eval_key = pending.get("eval_key")
     if pending_eval_key in eval_keys:
         update_store = True
     else:
         pending_evaluations.append(pending)
-        updated_pending_evaluations_for_dataset_in_stored.append(
+        updated_pending_evaluations_for_dataset_in_store.append(
             pending
         )
 for key in eval_keys:
     if not self.has_evaluation_results(ctx.dataset, key):
         pending_evaluations.append({"eval_key": key})
 if update_store:
     pending_evaluations_in_store[
         dataset_id
-    ] = updated_pending_evaluations_for_dataset_in_stored
+    ] = updated_pending_evaluations_for_dataset_in_store

setClassPerformanceDialogConfig((state) => {
  return {
    ...state,
    limit: isNaN(newLimit) ? undefined : newLimit,

⚠️ Potential issue

Replace isNaN with Number.isNaN to avoid unsafe type coercion.

Using isNaN can lead to unexpected type coercion. It's safer to use Number.isNaN, which checks if the value is NaN without type coercion.

Apply this diff to fix the issue at both locations:

-        limit: isNaN(newLimit) ? undefined : newLimit,
+        limit: Number.isNaN(newLimit) ? undefined : newLimit,

Also applies to: 1391-1391
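A standalone illustration of the coercion difference (not code from this PR):

const raw: unknown = "foo";
isNaN(raw as number);                  // true: the string is coerced to NaN first
Number.isNaN(raw as number);           // false: no coercion is performed
Number.isNaN(Number.parseInt("", 10)); // true: parseInt("") genuinely yields NaN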

🧰 Tools
🪛 Biome

[error] 1318-1318: isNaN is unsafe. It attempts a type coercion. Use Number.isNaN instead.

See the MDN documentation for more details.
Unsafe fix: Use Number.isNaN instead.

(lint/suspicious/noGlobalIsNan)

Comment on lines +1010 to +1013
if (selectedPoints?.[0] === points[0]?.pointIndices[0]) {
  return loadView("clear", {});
}
setViewState({
  type: "class",
  view: {
    x,
    selectedClasses: isCompare ? [] : [index],
    selectedCompareClasses: isCompare ? [index] : [],
  },
});
loadView("class", { x });
loadView("class", { x: points[0]?.x });

⚠️ Potential issue

Handle possible undefined values in event handlers.

When accessing points[0], it's good practice to ensure that the points array is not empty to prevent runtime errors.

Apply this diff to add a safety check:

 const firstPoint = points[0];
+if (!firstPoint) return;
 if (selectedPoints?.[0] === firstPoint.pointIndices[0]) {
   return loadView("clear", {});
 }
 loadView("class", { x: firstPoint?.x });
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
-if (selectedPoints?.[0] === points[0]?.pointIndices[0]) {
-  return loadView("clear", {});
-}
-setViewState({
-  type: "class",
-  view: {
-    x,
-    selectedClasses: isCompare ? [] : [index],
-    selectedCompareClasses: isCompare ? [index] : [],
-  },
-});
-loadView("class", { x });
-loadView("class", { x: points[0]?.x });
+const firstPoint = points[0];
+if (!firstPoint) return;
+if (selectedPoints?.[0] === firstPoint.pointIndices[0]) {
+  return loadView("clear", {});
+}
+loadView("class", { x: firstPoint?.x });

@@ -1163,7 +1240,7 @@
 </Stack>
 <TextField
   multiline
-  rows={3}
+  rows={10}

🛠️ Refactor suggestion

Set defaultValue to a controlled value in TextField.

Using defaultValue with onChange can lead to uncontrolled component warnings. It's better to use value and manage the state accordingly.

Apply this diff to control the TextField component:

<TextField
   multiline
-  rows={10}
-  defaultValue={evaluationNotes}
+  rows={10}
+  value={editNoteState.note}
   placeholder="Note (markdown) for the evaluation..."
   onChange={(e) => {
     setEditNoteState((note) => ({ ...note, note: e.target.value }));
   }}
/>

Also, initialize editNoteState.note with evaluationNotes when opening the dialog.
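One way to seed that state when the dialog opens, as suggested above; the handler name and the open flag are assumptions about the surrounding component:

const handleOpenEditNoteDialog = () => {
  // hypothetical handler: copy the current notes into the controlled field before showing the dialog
  setEditNoteState((state) => ({ ...state, open: true, note: evaluationNotes }));
};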

Committable suggestion skipped: line range outside the PR's diff.

Comment on lines +1530 to +1555
function useActiveFilter(evaluation, compareEvaluation) {
  const evalKey = evaluation?.info?.key;
  const compareKey = compareEvaluation?.info?.key;
  const [stages] = useRecoilState(view);
  if (stages?.length === 1) {
    const stage = stages[0];
    const { _cls, kwargs } = stage;
    if (_cls.endsWith("FilterLabels")) {
      const [_, filter] = kwargs;
      const filterEq = filter[1].$eq;
      const [filterEqLeft, filterEqRight] = filterEq;
      if (filterEqLeft === "$$this.label") {
        return { type: "label", value: filterEqRight };
      } else if (filterEqLeft === `$$this.${evalKey}`) {
        return {
          type: "metric",
          value: filterEqRight,
          isCompare: false,
        };
      } else if (filterEqLeft === `$$this.${compareKey}`) {
        return {
          type: "metric",
          value: filterEqRight,
          isCompare: true,
        };
      }

🛠️ Refactor suggestion

Use useRecoilValue instead of useRecoilState when only reading state.

Since you are only reading the state from Recoil and not setting it, using useRecoilValue is more appropriate and prevents unnecessary re-renders.

Apply this diff to fix the issue:

-import { useRecoilState } from "recoil";
+import { useRecoilValue } from "recoil";

...

-function useActiveFilter(evaluation, compareEvaluation) {
-  const [stages] = useRecoilState(view);
+function useActiveFilter(evaluation, compareEvaluation) {
+  const stages = useRecoilValue(view);

Committable suggestion skipped: line range outside the PR's diff.

Contributor

@Br2850 Br2850 left a comment


👍🏽

Labels: None yet
Projects: None yet
2 participants