fix(sequencer): allow benchmarks to run on macOS and fix issues #1842
base: main
Conversation
Can we rearchitect our benchmarks to move away from Divan toward Criterion instead? I am thinking of breaking out the code parts that are benchmarked into free-standing crates, for example. I understand one advantage is that Divan makes it relatively easy to bench code that's not in the public API. On the other hand, we seem to need a lot of workarounds for it.
Yes, but I'm not sure we should in the short term. The workarounds for Divan are constrained to just the benchmarking code, except possibly for one which would be required for Criterion anyway, so I'm not too worried about them. Most of the workarounds will hopefully no longer be required in the future according to Divan's roadmap, though whether progress is actually made on that is debatable; it does not seem to be under particularly active development. I'm not convinced that breaking out benchmarked sections of the code into standalone crates is worthwhile though, to be honest; I don't see a compelling argument for it. We could instead look to feature gate public access to the required modules behind the existing feature flag.
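To illustrate the feature-gating idea mentioned above (a minimal sketch only; the module and feature names here are assumptions, not the crate's actual layout):

```rust
// In the crate's lib.rs: keep internals crate-private for normal builds,
// but expose them when the benchmarking feature is enabled so an external
// harness (e.g. a Criterion bench target) could reach them.
// The module name "mempool" and feature name "benchmark" are hypothetical.
#[cfg(not(feature = "benchmark"))]
pub(crate) mod mempool;

#[cfg(feature = "benchmark")]
pub mod mempool;
```

The two declarations are mutually exclusive via cfg, so the public surface only widens when building with the benchmarking feature.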
// ensure we have enough balance to cover inclusion
for (denom, cost) in tx.cost() {
Nice refactor 👍
This LGTM, great that it runs on Macs now :)
Summary
This updates the sequencer benchmarks to actually run on macOS, and fixes a couple of issues around the mempool benchmarks.
Background
The benchmarks could previously only be run on Linux, and since they are not run regularly, they had also broken at some point.
Since I was working on fixing them anyway, I took the opportunity to improve the initialization of the test data. This has resulted in the total time to execute the full mempool suite dropping from ~6.5 mins to ~1.5 mins and means all tests are now able to complete 100 iterations (several could only manage one iteration previously).
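To illustrate the kind of initialization change involved, here is a minimal Divan sketch (not the actual benchmark code; the fixture type, sizes, and names are made up): the expensive test data is built once via a OnceLock, so every registered benchmark and every iteration reuses it instead of rebuilding it.

```rust
use std::sync::OnceLock;

use divan::Bencher;

// Hypothetical fixture standing in for the real mempool test data.
struct TestData {
    transactions: Vec<u64>,
}

// Build the expensive fixture exactly once and share it across all
// benchmarks and iterations.
fn test_data() -> &'static TestData {
    static DATA: OnceLock<TestData> = OnceLock::new();
    DATA.get_or_init(|| TestData {
        transactions: (0..10_000u64).collect(),
    })
}

#[divan::bench]
fn insert_transactions(bencher: Bencher) {
    let data = test_data();
    bencher.bench(|| {
        // The work under measurement would go here; this sketch just sums.
        data.transactions.iter().sum::<u64>()
    });
}

fn main() {
    divan::main();
}
```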
Changes
- Updated the benchmark's main function in the astria-sequencer crate to resolve the issue of the tests not running on macOS.
- Fixed the mempool benchmarks failing with an InsertionError::AlreadyPresent error.
- Refactored PendingTransactionsForAccount::find_demotables to use an existing method, deduplicating code.

Testing
Confirmed the benchmarks ran on macOS locally. The app ones take a very long time to run and this should be investigated and fixed. To run only the mempool ones:
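The exact command was not captured above; assuming the benchmarks sit behind a benchmark Cargo feature and using Divan's name filtering, the invocation would look something like:

```sh
cargo bench -p astria-sequencer --features=benchmark -- mempool
```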
Changelogs

No updates required.
Related Issues
Closes #1355.