
More metadata

Collecting more metadata will give us more power in representing what unit tests actually represent in a project. A test could be classified as a plain unit test, a contract test, an integration test, or a performance test. Moreover, the code it stresses could live in the same package/module/component or in a different one. This distinction could be very important from an Arquillian perspective.
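
As a sketch of how such classification metadata could be attached to tests, assuming a hypothetical @TestMetadata annotation (the name and the enums below are invented for illustration, not part of any existing framework):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;

/**
 * Hypothetical annotation classifying a test and declaring whether
 * the code it stresses lives in the same module or in a different one.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
public @interface TestMetadata {

    /** What kind of test this is. */
    enum Kind { UNIT, CONTRACT, INTEGRATION, PERFORMANCE }

    /** Where the stressed code lives relative to the test. */
    enum Scope { SAME_MODULE, CROSS_MODULE }

    Kind kind() default Kind.UNIT;

    Scope scope() default Scope.SAME_MODULE;
}
```

A test class would then carry something like @TestMetadata(kind = TestMetadata.Kind.INTEGRATION, scope = TestMetadata.Scope.CROSS_MODULE), and tooling could read these values at runtime via reflection.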

Metadata enriched with test run information

Another point of view could focus on the tests that are run most often. In an application under active development it can be considered normal for some tests to be run many times in a development cycle, because the class under test is under heavy development, so it changes and is recompiled frequently. In a stable application, however, the same pattern is suspicious. In other words, in a stable application a test that is run very often could reveal a design error either in the tests (stressing too many classes or behaviours) or in the CUT, exposing a tightly coupled architecture.
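
A minimal sketch of how run counts could be collected, assuming JUnit 4's RunListener API; persisting the counters between builds is left out:

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

import org.junit.runner.Description;
import org.junit.runner.notification.RunListener;

/**
 * Sketch of a JUnit 4 RunListener that counts how many times each test
 * method is executed. Storage between runs (file, database, ...) is
 * intentionally omitted.
 */
public class RunCountListener extends RunListener {

    private final Map<String, Integer> runsPerTest = new ConcurrentHashMap<String, Integer>();

    @Override
    public void testStarted(Description description) {
        String key = description.getClassName() + "#" + description.getMethodName();
        Integer previous = runsPerTest.get(key);
        runsPerTest.put(key, previous == null ? 1 : previous + 1);
    }

    public Map<String, Integer> getRunsPerTest() {
        return runsPerTest;
    }
}
```

The listener would be registered with JUnitCore.addListener(...) (or by the build tool's test runner) and the counters flushed to storage at the end of each run, so that the history survives across builds.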

It would also be interesting to know which tests have failed most often in recent runs for (more or less) the same reasons. (To be elaborated: what more could this information reveal?)

It would also be interesting to know test performance (how long does a test take to run?) and to be able to monitor how it evolves. Moreover, by instrumenting the entry points of the CUT (or of whole modules), we could measure how much of a test's execution time is spent in a particular class, revealing which component(s)/class(es) are degrading overall test performance.
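
A minimal sketch of per-test timing, assuming the JUnit 4 TestRule API (4.9+); per-class attribution would additionally require bytecode instrumentation, which is only hinted at in the comments:

```java
import org.junit.rules.TestRule;
import org.junit.runner.Description;
import org.junit.runners.model.Statement;

/**
 * Sketch of a JUnit 4 TestRule measuring wall-clock time per test.
 * Attributing the elapsed time to individual classes of the CUT would
 * require timing probes at method entry points (bytecode instrumentation),
 * which is not shown here.
 */
public class TimingRule implements TestRule {

    @Override
    public Statement apply(final Statement base, final Description description) {
        return new Statement() {
            @Override
            public void evaluate() throws Throwable {
                long start = System.nanoTime();
                try {
                    base.evaluate();
                } finally {
                    long elapsedMillis = (System.nanoTime() - start) / 1000000L;
                    System.out.println(description.getDisplayName() + " took " + elapsedMillis + " ms");
                }
            }
        };
    }
}
```

In a test class this would be enabled with @Rule public TimingRule timing = new TimingRule(); and the recorded timings could be stored alongside the other per-run metadata to track their evolution.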

For the above reasons we need to keep history, not rely only on a clean, current snapshot.

We could add the expected stressed classes/modules as metadata on a test and verify them with instrumentation.
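
One possible shape for this, with a hypothetical @ExpectedStressedClasses annotation and a comparison against the classes actually touched, as reported by whatever instrumentation or coverage agent is used (both names below are made up for illustration):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.util.HashSet;
import java.util.Set;

/**
 * Hypothetical annotation declaring which classes a test is expected
 * to stress.
 */
@Retention(RetentionPolicy.RUNTIME)
@Target({ElementType.TYPE, ElementType.METHOD})
@interface ExpectedStressedClasses {
    Class<?>[] value();
}

class StressedClassVerifier {

    /**
     * Returns the classes touched at runtime (as reported by the
     * instrumentation layer) that were NOT declared as expected,
     * i.e. the candidates for a design or coupling problem.
     */
    static Set<String> unexpectedClasses(ExpectedStressedClasses expected, Set<String> actuallyTouched) {
        Set<String> unexpected = new HashSet<String>(actuallyTouched);
        for (Class<?> declared : expected.value()) {
            unexpected.remove(declared.getName());
        }
        return unexpected;
    }
}
```

A non-empty result would flag either an incomplete declaration on the test or unexpected coupling in the code under test.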