| (xref:OP_IS[`is`] (`PREDICATE` `EXPECTED` `ACTUAL`)) | check that, according to `PREDICATE`, `ACTUAL` is the same as `EXPECTED`
| (xref:OP_IS[`is-true`] VALUE) | check that a value is non-NIL
| (xref:OP_RUN![`run!`] TEST-NAME) | run one (or more) tests and print the results
|================================
See the xref:API_REFERENCE[api] for details.
Lather, rinse, repeat:
--------------------------------
CL-USER> (run!)
..
Did 2 checks.
Pass: 2 (100%)
Where `TEST-NAME` is either a test object (as returned by `get-test`)
or a symbol naming a single test or a test suite.
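For instance, both of the following are equivalent ways of running a
single test; the test name `my-test` here is hypothetical:

--------------------------------
;; by symbol: the test is looked up at run time
(run! 'my-test)

;; by test object: pass the result of get-test directly
(run! (get-test 'my-test))
--------------------------------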
=== Running Tests at Test Definition Time ===
Often enough, especially when fixing regression bugs, we'll want to
run a test every time we change its definition. If the variable
`*run-test-when-defined*` is true, each test is run (via `run!`) as
soon as it is defined or redefined. Since this only takes effect for
tests compiled after the variable is set, for obvious reasons you have
to set this variable manually after having loaded your test suite.
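As a minimal sketch (the test name `addition` is hypothetical),
enabling the variable before (re)defining tests looks like this:

--------------------------------
(setf *run-test-when-defined* t)

;; from now on, compiling a test definition also runs it
(def-test addition ()
  (is (= 4 (+ 2 2))))
--------------------------------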
=== Debugging failures and errors ===
`*debug-on-error*`::
  When this variable is true, an error signaled while running a test
  invokes the debugger instead of simply being recorded as a failure;
  when it is NIL, FiveAM catches the error and moves on to the next
  check.
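For interactive development it is often convenient to toggle this at
the REPL; a minimal sketch, assuming FiveAM is loaded and `my-test` is
a previously defined (hypothetical) test:

--------------------------------
;; drop into the debugger on errors instead of recording them
(setf fiveam:*debug-on-error* t)

;; any error signaled while running the test now lands in the debugger
(fiveam:run! 'my-test)
--------------------------------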
== Random Testing (QuickCheck) ==
Sometimes it's hard to come up with edge cases for tests, or there are
so many that it's hard to list them all one by one. Random testing is
a way to tell the test suite how to generate input and how to check
that certain conditions always hold for that input. One issue when
writing random tests is that you usually can't test for specific
results; you have to test that certain relationships hold.

For example, if we had a function which reverses a list, we could
define a relationship like this:

--------------------------------
(equalp the-list (reverse (reverse the-list)))
--------------------------------

or
--------------------------------
(equalp (length the-list) (length (reverse the-list)))
--------------------------------

Random tests are defined via `def-test`, but the random part is then
wrapped in a xref:OP_FOR-ALL[`for-all`] macro, which runs its body
`*num-trials*` times with different inputs:

--------------------------------
(for-all ((the-list (gen-list :length (gen-integer :min 0 :max 37)
                              :elements (gen-integer :min -10 :max 10))))
  (is (equalp the-list (reverse (reverse the-list))))
  (is (= (length the-list) (length (reverse the-list)))))
--------------------------------
== Fixtures ==
include::docstrings/OP_RUN.txt[]
================================
[[OP_DEF-FIXTURE]]
=== DEF-FIXTURE ===