
Revise use of fails in tests #85

Closed
srid opened this issue Jan 10, 2022 · 1 comment


srid commented Jan 10, 2022

This doesn't seem right. Per the semantics of fails, it accepts either a user error (EvaluationError) or an engine error/bug (EvaluationException). Even EvaluationError is not precise:

data EvaluationError user internal
    = InternalEvaluationError internal
      -- ^ Indicates bugs.
    | UserEvaluationError user
      -- ^ Indicates user errors.

What we want is to test that the result is exactly a perror (and not a general evaluation error or exception).
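A minimal sketch of the distinction, assuming only the EvaluationError shape quoted above (the type is redeclared locally here; isUserError is an illustrative helper, not Plutarch/Plutus API): a precise check should match the user-error constructor only, so it cannot be satisfied by an internal engine bug.

```haskell
-- Redeclared locally from the snippet above, for a self-contained sketch.
data EvaluationError user internal
    = InternalEvaluationError internal
      -- ^ Indicates bugs.
    | UserEvaluationError user
      -- ^ Indicates user errors.

-- Hypothetical predicate: True only for a user-level error (a perror),
-- never for an internal engine error.
isUserError :: EvaluationError user internal -> Bool
isUserError (UserEvaluationError _) = True
isUserError _                       = False

main :: IO ()
main = do
    print (isUserError (UserEvaluationError "perror" :: EvaluationError String ()))
    print (isUserError (InternalEvaluationError () :: EvaluationError String ()))
```

A test built on such a predicate fails loudly when the evaluator reports an internal error, instead of treating any failure as success the way a blanket fails does.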

Originally posted by @srid in #84 (comment)


srid commented Feb 14, 2022

We need IntersectMBO/plutus#4270

srid added a commit that referenced this issue Feb 14, 2022
@L-as L-as closed this as completed in 3c07d71 Feb 16, 2022