Sorry I'm late to the game, but I've seen a similar issue when retrieving large documents. The report was 80k pages (with large object support turned on), and an end user would retrieve the first chunk of the large object with the thick client, then issue a find command. The client then looped: fetch 100 pages, decompress them, search them, fail to find the text, fetch the next 100 pages, and so on. The snag was that each round trip (including search time) took several seconds, and if the search term never appeared, that meant 800 round trips. It routinely took 20 to 30 minutes to return zero results.
We simply increased the large object size to 1000 pages, and performance improved dramatically. Not only were there ten times fewer round trips to the server for large object 'chunks', but compression was also more efficient, so each chunk was smaller, meaning less data to send across the network and quicker round trips.
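The effect of chunk size on worst-case search time can be sketched with some back-of-the-envelope arithmetic. This is just an illustrative model, not the actual client code; the ~2 seconds per round trip is an assumption taken from the "several seconds" figure above, and it ignores the compression savings, which make the real improvement even bigger.

```python
# Illustrative cost model only -- the function name and the per-round-trip
# figure are assumptions, not taken from the actual client.

def search_time_seconds(total_pages: int, chunk_pages: int,
                        seconds_per_round_trip: float) -> float:
    """Worst case: the search term never appears, so every chunk is fetched."""
    round_trips = -(-total_pages // chunk_pages)  # ceiling division
    return round_trips * seconds_per_round_trip

# 80k-page report, 100-page chunks, ~2 s per fetch/decompress/search pass:
print(search_time_seconds(80_000, 100, 2.0) / 60)    # ~26.7 minutes
# Same report with 1000-page chunks -- ten times fewer round trips:
print(search_time_seconds(80_000, 1_000, 2.0) / 60)  # ~2.7 minutes
```

Since the time scales linearly with the number of round trips, any reduction in chunk count pays off directly, on top of the smaller per-chunk payloads.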
Consider trying it with larger object sizes?
-JD.