
Attempt to associate benchmarks with readily available computational materials #3

Open · davidlmobley opened this issue Sep 11, 2016 · 0 comments

@davidlmobley (Member):

This is the computational analog of #2 -- one would like to make it easy to run new studies on existing benchmark systems for a variety of possible tests, as detailed in the paper, including things like:

- Test a new method on systems already studied with an existing force field and method
- Test a new force field
- Cross-compare simulation packages
- Test sampling methods

etc.

We need to plan how to facilitate this. We'll need to sort out how to make computational materials available - structures, input files, etc. Ultimately, we will likely even want a way to specify particular order parameters to analyze for convergence (e.g., something machine-readable which can tell automated analysis to be sure to check sampling of Val103 in lysozyme L99A; a rough sketch follows below).
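As a rough illustration only (the field names, file names, and structure here are hypothetical, not a format anyone has agreed on), such a machine-readable specification might look like a small Python structure that an automated analysis tool could consume:

```python
# Hypothetical sketch of a machine-readable benchmark specification.
# All field names, file names, and structure are illustrative only; no
# format has been agreed on for this repository.
benchmark_spec = {
    "system": "lysozyme-L99A",
    "inputs": {
        "structure": "lysozyme_L99A.pdb",    # starting coordinates
        "topology": "lysozyme_L99A.prmtop",  # parameters/topology
    },
    "order_parameters": [
        {
            # A slow degree of freedom whose sampling automated
            # analysis should be sure to check.
            "type": "dihedral",
            "residue": "Val103",
            "angle": "chi1",
            "check": "convergence",
        },
    ],
}


def order_parameters_to_check(spec):
    """Return the order parameters automated analysis should verify."""
    return spec["order_parameters"]


for op in order_parameters_to_check(benchmark_spec):
    print(f"Check {op['check']} of {op['angle']} in {op['residue']}")
```

A YAML or JSON file with the same structure would work equally well; the point of the sketch is that the slow degrees of freedom to monitor would travel with the benchmark inputs rather than living only in the paper.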
