
I believe this reveals a bigkey bug in our version of libdb2.  The bug reproduces when feeding the attached input to "dbtest -i -psize=512 -o out btree input".  This input inserts a small key/value pair into the database, and then a pair with key size 225 and value size 26.  This second pair is just big enough to require both a bigkey and a bigdata representation for the database's 512-byte page size.  Bigkey and bigdata representations replace the key and data values with eight-byte {pgno, size} references, where the actual value is stored in one or more overflow pages starting at the given pgno.

When inserting the large key and value, __bt_put()'s invocation of __bt_search() compares the eight-byte {pgno, size} bigkey reference against the existing key in the page to determine the insertion point.  When we later look up the 225-byte key, __bt_get()'s call to __bt_search() compares the actual 225-byte key against the first entry in the page, gets the opposite result, and consequently never examines the matching entry.  I believe the search performed by __bt_put() should use the actual key, not the bigkey reference.

This seems like a pretty low-priority bug.  It could manifest if people used really long principal names (around 2000 bytes, since the default block size is typically 4K), but people don't generally do that.  It is of course an annoyance that the test suite can fail based on what happens to be lying around in /usr/dict/words on the host system.  Possibly run.test should be modified to always use the internal dictionary.