
Discussion about API #4

Open
Mart-Bogdan opened this issue Feb 10, 2015 · 2 comments

Comments

@Mart-Bogdan
Contributor

I'd like to add Action benchmarks in addition to Func<T> benchmarks.

I guess there should be an alternative class for that, or perhaps just an overloaded method?

In most cases a void method is enough for benchmarking. Additional values can be fed to "Compiler.ConsumeValue".

I'd like to implement this, but I'd prefer we discuss it first.

P.S. I'm teaching first-year students and would like them to use this to benchmark sorting algorithms, data structures, etc.
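A minimal sketch of what such an overload pair plus a consume method might look like (the names `Bench`, `Measure`, and `Consume` are placeholders I'm assuming here, not the project's actual API):

```csharp
using System;

public static class Bench
{
    // A volatile sink: writing the value here counts as an observable use,
    // so the JIT cannot eliminate the computation that produced it.
    private static volatile int _sink;

    public static void Consume(int value) => _sink = value;

    // Existing style: the return value itself keeps the computation alive.
    public static T Measure<T>(Func<T> f) => f();

    // Proposed overload for void-returning benchmarks.
    public static void Measure(Action a) => a();
}
```

A void benchmark would then read `Bench.Measure(() => { Array.Sort(xs); Bench.Consume(xs[0]); });`.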

@biboudis
Owner

Back then, I needed only Func, but this is clearly not enough for a general-purpose lambda-testing framework.

Off the top of my head:

  • We should consider encapsulating the lambda under test in a separate data structure (something like JMH's BenchmarkListEntry). In our case we would store the lambda under test. What about the different overloads? Should we include all delegates?
  • What is the overhead of measuring lambdas instead of their corresponding functions? Should we measure lambdas as they are, or should we discover the generated methods and measure those? Should we use the Roslyn compiler to provide a tool that rewrites methods, as JMH does?
  • Will dead-code elimination hit us if we use a thunk for all measured lambdas as an Action delegate? We need to examine the compiled and JITted code. JMH uses several techniques, some of which are about fighting dead-code elimination: one is that the return value is always used; another is Blackholes, whose applicability should be tested and measured in our scenario.
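On the second point, the generated methods can be discovered through plain reflection; a small probe (names and structure assumed for illustration) shows how a delegate exposes its compiler-generated backing method:

```csharp
using System;
using System.Reflection;

class LambdaInspection
{
    static void Main()
    {
        Func<int, int> square = x => x * x;

        // Every delegate exposes the method that backs it; for a lambda this
        // is the compiler-generated method, so we can locate (and in principle
        // measure) the generated code directly instead of the delegate.
        MethodInfo backing = square.Method;
        Console.WriteLine(backing.DeclaringType);
        Console.WriteLine(backing.Name);

        // Target is the closure object the delegate is bound to (null when the
        // backing method is static). Calling through the delegate adds an
        // indirection that calling the backing method itself would not.
        Console.WriteLine((int)backing.Invoke(square.Target, new object[] { 6 }));
    }
}
```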

Our goal at this point would be to provide a library for micro-benchmarking automation with a minimal set of requirements, and to improve its precision gradually by learning best practices from industrial-strength tools like JMH.

@Mart-Bogdan, to answer your question: I say we start improving the API of the library incrementally, after careful thought, and try to measure whether test code validates (or not) our corresponding hypotheses for each decision we make. So, for now, yes! Write down a snippet of how you would like to use this part, including the declaration of the lambda. This will help us with the data-type decisions.
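For reference, a snippet of the kind requested might look like the following; everything here (the `Benchmark.Run` and `Benchmark.Consume` names, and the one-shot timing) is a hypothetical stand-in, not the library's actual API:

```csharp
using System;
using System.Diagnostics;

static class Benchmark
{
    private static volatile int _sink;

    // Observable sink so the measured work cannot be removed as dead code.
    public static void Consume(int value) => _sink = value;

    // Stand-in harness: times a single run. A real harness would warm up,
    // repeat, and report statistics.
    public static void Run(string name, Action body)
    {
        var sw = Stopwatch.StartNew();
        body();
        sw.Stop();
        Console.WriteLine("{0}: {1:F3} ms", name, sw.Elapsed.TotalMilliseconds);
    }
}

class SortBenchmark
{
    static void Main()
    {
        var rng = new Random(42);
        var data = new int[10000];
        for (int i = 0; i < data.Length; i++) data[i] = rng.Next();

        // A void-returning benchmark in the style under discussion.
        Benchmark.Run("Array.Sort", () =>
        {
            var copy = (int[])data.Clone();
            Array.Sort(copy);
            Benchmark.Consume(copy[0]);
        });
    }
}
```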

@biboudis
Owner

BTW, take a look at this interesting Reddit discussion about a similar project for Java 8 lambdas. It contains useful comments.

http://www.reddit.com/r/java/comments/299vlz/nanobench_tiny_benchmarking_framework_for_java_8/

I am renaming this issue to "Discussion about API", as your question is not only valid but may also generate comments about the API in general.

ping @Mart-Bogdan.

@biboudis biboudis changed the title [Proposal] Add void returning benchmarks Discussion about API Feb 11, 2015