Allow selecting multiple metrics on compare page #133

Draft
wants to merge 10 commits into base: main
Conversation


@dbutenhof dbutenhof commented Nov 13, 2024

Type of change

  • Refactor
  • New feature
  • Bug fix
  • Optimization
  • Documentation Update

Description

Support selection of multiple metrics using the pulldown on the comparison page. The update occurs when the pulldown closes.

To simplify the management of "available metrics" across multiple selected runs, which might have entirely different metrics, the reducer no longer tries to store a separate metric selection list for each run. This also means that the "default" metrics selection remains in effect when adding another comparison run or expanding another row.
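A minimal sketch of this flow, for orientation only: the action type and handler name below (`SET_COMPARISON_METRICS`, `handleMetricsSelectClose`) are assumptions for illustration and may not match the PR's actual reducer or component code.

    // Hypothetical reducer slice: one shared metric selection list for all
    // selected runs, instead of a per-run map keyed by run uid.
    const initialState = { comparisonMetrics: [] };

    const comparisonReducer = (state = initialState, action) => {
      switch (action.type) {
        case "SET_COMPARISON_METRICS":
          // Replace the shared selection; no per-run bookkeeping needed.
          return { ...state, comparisonMetrics: action.payload };
        default:
          return state;
      }
    };

    // Hypothetical handler: dispatch only when the pulldown closes, so a
    // burst of checkbox clicks produces a single update (and refetch).
    const handleMetricsSelectClose = (dispatch, selectedMetrics) => {
      dispatch({ type: "SET_COMPARISON_METRICS", payload: selectedMetrics });
    };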

This is chained from #122 (crucible) -> #123 (ilab API) -> #124 (ilab UI) -> #125 (multi-run API) -> #127 (multi-run UI) -> #129 (statistics aggregation) -> #130 (statistics display) -> #131 (metadata flyover) -> #132 (multiple metrics selection) -> #133 (compare multiple metrics)

Related Tickets & Documents

PANDA-645 support multiple metrics selection in compare view

Checklist before requesting a review

  • I have performed a self-review of my code.
  • If it is a core feature, I have added thorough tests.

Testing

Manual testing on local deployment.
(screenshot attached)

dbutenhof and others added 9 commits October 18, 2024 12:51
This encapsulates substantial logic for interpreting the Crucible Common Data Model OpenSearch schema for use by CPT dashboard API components. By itself, it does nothing.
This builds on the `crucible_svc` layer in cloud-bulldozer#122 to add a backend API.
This relies on the ilab API in cloud-bulldozer#123, which in turn builds on the crucible
service in cloud-bulldozer#122.
When graphing metrics from two runs, the timestamps rarely align, so we add a
`relative` option to convert the absolute metric timestamps into relative
delta seconds from each run's start (see the sketch after this commit list).
This adds the basic UI to support comparison of the metrics of two InstructLab
runs. This compares only the primary metrics of the two runs, in a relative
timeline graph.

This is backed by cloud-bulldozer#125, which is backed by cloud-bulldozer#124, which is backed by cloud-bulldozer#123,
which is backed by cloud-bulldozer#122. These represent a series of steps towards a complete
InstructLab UI and API, and will be reviewed and merged from cloud-bulldozer#122 forward.
This PR is primarily CPT dashboard backend API (and Crucible service) changes
to support pulling and displaying multiple Crucible metric statistics. Only
minor UI changes are included to support API changes. The remaining UI changes
to pull and display statistics will be pushed separately.
Add statistics charts for selected metric in row expansion and comparison
views.
Extract the "Metadata" into a separate component, which allows it to be reused
as an info flyover on the comparison page to help in identifying target runs
to be compared.
Modify the metrics pulldown to allow multiple selection. The statistical
summary chart and graph will show all selected metrics in addition to the
benchmark's inherent primary metric (for the primary period).
Support selection of multiple metrics using the pulldown in the comparison
page. The update occurs when the pulldown closes.

To simplify the management of "available metrics" across multiple selected
runs, which might have entirely different metrics, the reducer no longer
tries to store separate metric selection lists for each run. This also means
that the "default" metrics selection remains when adding another comparison
run, or expanding another row.
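As a rough illustration of the `relative` option described in the commit above (the helper name and sample shape here are assumptions, not the API's actual code), the conversion amounts to subtracting each run's start time and scaling to seconds:

    // Hypothetical helper: shift each sample's absolute timestamp (ms) to
    // seconds elapsed since that run began, so two runs share an x-axis.
    const toRelativeSamples = (samples, runStartMs) =>
      samples.map((s) => ({
        ...s,
        relativeSec: (s.timestampMs - runStartMs) / 1000,
      }));

    // Two runs starting at different wall-clock times both begin at 0 seconds:
    toRelativeSamples([{ timestampMs: 1729250005000, value: 42 }], 1729250000000);
    // => [{ timestampMs: 1729250005000, value: 42, relativeSec: 5 }]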
@jaredoconnell jaredoconnell (Member) left a comment


Reviewed the compare commit. Looks fine overall.

try {
  if (getState().ilab.metrics?.find((i) => i.uid == uid)) {
    return;

Is this the case for when it already has the data synced? If so I would just add a simple comment like this:

     return; // already fetched

This also applies to the other instances of this.

Comment on lines 138 to 157
periods?.periods?.forEach((p) => {
  if (p.is_primary) {
    summaries.push({
      run: uid,
      metric: p.primary_metric,
      periods: [p.id],
    });
  }
  if (metrics) {
    metrics.forEach((metric) => {
      if (
        avail_metrics.find((m) => m.uid == uid)?.metrics?.includes(metric)
      ) {
        summaries.push({
          run: uid,
          metric,
          aggregate: true,
          periods: [p.id],
        });
      }
    });
  }
});
const response = await API.post(
  `/api/v1/ilab/runs/multisummary`,
  summaries
);
if (response.status === 200) {
  dispatch({
    type: TYPES.SET_ILAB_SUMMARY_DATA,
    payload: { uid, data: response.data },
  });
}

Can some basic comments be included to differentiate these two? Looking closely I can see that the bottom one is aggregate.
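For illustration only, the distinction could be marked with comments along these lines; the metric names and context variables below are invented, and this is a sketch rather than the code actually merged:

    // Hypothetical context for illustration.
    const uid = "run-1";
    const p = { id: "period-1", primary_metric: "train-samples-sec", is_primary: true };
    const metric = "mpstat::Busy-CPU";
    const summaries = [];

    // The benchmark's primary metric for this period, charted as-is.
    summaries.push({ run: uid, metric: p.primary_metric, periods: [p.id] });

    // An additionally selected metric, aggregated across its sources.
    summaries.push({ run: uid, metric, aggregate: true, periods: [p.id] });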

@dbutenhof dbutenhof self-assigned this Nov 18, 2024
@jaredoconnell jaredoconnell (Member) left a comment


All of the new changes look good.
