1.0.15.12: better scaling in the PCL cache
* When the cache reaches its maximum size and entries need to be
dropped, drop a random 50% of them, instead of the more
deterministic set of "ones that don't fit": this avoids getting
stuck in an "add A dropping B, add B dropping A, ..." cycle which
eats up ginormous amounts of time. Additionally, dropping 50%
seems to be the best ratio -- experimentally, at least -- but it
would be nice to have a proper analysis...
Note: there is a point (possibly even before our current maximum
cache size) where the allowed probe-depth grows so large that a
tree would work better than a table. It would be good to gracefully
replace the table-based cache with a tree when it grows that large.
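The random-eviction policy above can be sketched as follows -- a minimal
Python illustration only, not the actual cache code; `MAX_ENTRIES`,
`evict_random_half`, and `cache_put` are all hypothetical names, and the
real cache is a probing hash table rather than a dict:

```python
import random

MAX_ENTRIES = 8  # hypothetical maximum cache size


def evict_random_half(cache):
    # Drop a randomly chosen 50% of the entries. A deterministic
    # "drop the ones that don't fit" policy can thrash: adding A
    # evicts B, re-adding B evicts A, and so on indefinitely.
    victims = random.sample(list(cache), len(cache) // 2)
    for key in victims:
        del cache[key]


def cache_put(cache, key, value):
    # On overflow, clear half the cache in one sweep instead of
    # making room for just the one new entry.
    if key not in cache and len(cache) >= MAX_ENTRIES:
        evict_random_half(cache)
    cache[key] = value


cache = {}
for k in range(100):
    cache_put(cache, k, k * k)
# The cache never exceeds MAX_ENTRIES, and the most recent
# insertion always survives the sweep.
```

Because each sweep frees roughly half the capacity at once, many
subsequent insertions succeed without any eviction work, which is what
breaks the add/drop cycle.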