Sounds fantastic. From a review by Brian Hayes in American Scientist:
In William Goldbloom Bloch’s mathematical companion to “The Library of Babel,” the first task is to calculate the number of distinct books that can be created in this way. There’s not much to it. Borges tells us that the alphabet of the books is restricted to 25 symbols (22 letters, the comma, the period and the word space). He also mentions that each book has 410 pages, with 40 lines of 80 characters on each page. Thus a book consists of 410 × 40 × 80 = 1,312,000 symbols. [...]
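The count follows directly from the figures Hayes quotes: 25 choices for each of the 1,312,000 positions gives 25^1,312,000 distinct books. A quick sketch (the digit-count trick via logarithms is my addition, not from the review):

```python
from math import log10

# Each book: 410 pages x 40 lines x 80 characters per line.
symbols_per_book = 410 * 40 * 80
assert symbols_per_book == 1_312_000

# With 25 possible symbols per position, the library holds
# 25 ** 1_312_000 distinct books. The number itself is far too
# large to print, but its length in decimal digits is easy:
digits = int(symbols_per_book * log10(25)) + 1
print(digits)  # about 1.8 million decimal digits
```

So the total isn't merely astronomical: just writing the number down would take a fair-sized book of its own.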
These combinatorial exercises are the most obvious instances where mathematics can illuminate the Borges story, but Bloch finds much else to comment on as well. [...] Borges describes the library as a close-packed array of hexagonal rooms. Four walls of each hexagon are lined with bookshelves, holding a total of 640 books; the other two walls provide portals to adjacent hexagons. Vertically, the levels of the library are connected by ventilation shafts and spiral stairways. This design has some curious consequences. For example, Bloch points out that somewhere in the library there must be at least one hexagon whose shelves are not full. The reason is that 25^1,312,000 is not evenly divisible by 640.
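You can check the divisibility claim without ever constructing the ~1.8-million-digit number; modular exponentiation (my sketch, not Bloch's argument) does it instantly:

```python
# 640 = 2**7 * 5, while 25**n = 5**(2n) contains no factor of 2,
# so the remainder can never be zero. Python's three-argument pow
# confirms this by modular exponentiation, never building the
# full number:
remainder = pow(25, 1_312_000, 640)
print(remainder != 0)  # True: some hexagon's shelves are not full
```

The parity argument alone settles it, of course: 25^1,312,000 is odd and 640 is even.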
For me the biggest surprise in Bloch’s commentary comes in a chapter that applies ideas from graph theory to the layout of the library. [...]
The first thing that jumps out is how absurdly fast our processors are. Most simple instructions on the Core 2 take one clock cycle to execute, hence a third of a nanosecond at 3.0 GHz. For reference, light only travels ~4 inches (10 cm) in the time taken by a clock cycle. It’s worth keeping this in mind when you’re thinking about optimization - instructions are comically cheap to execute nowadays. [...]
Normally when the CPU needs to touch the contents of a memory region, they must either already be in the L1/L2 caches or be brought in from main system memory. Here we see our first major hit: a massive ~250 cycles of latency that often leads to a stall, leaving the CPU with no work to do while it waits. To put this into perspective, reading from L1 cache is like grabbing a piece of paper from your desk (3 seconds), L2 cache is picking up a book from a nearby shelf (14 seconds), and main system memory is taking a 4-minute walk down the hall to buy a Twix bar. [...]
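The office analogy maps one clock cycle to one human second, which makes it easy to extend. A small sketch, using cycle counts that match the analogy's numbers (L1 ≈ 3 cycles, L2 ≈ 14 cycles, main memory ≈ 250 cycles; actual figures vary by part):

```python
# One clock cycle = one "human" second in the office analogy.
latencies_cycles = {
    "L1 cache":    3,    # grab a paper from your desk (3 s)
    "L2 cache":    14,   # fetch a book from a nearby shelf (14 s)
    "main memory": 250,  # walk down the hall for a Twix (~4 min)
}
for level, cycles in latencies_cycles.items():
    print(f"{level:12s} {cycles:4d} cycles -> {cycles} s "
          f"({cycles / 60:.1f} min)")
```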
Sadly the southbridge hosts some truly sluggish performers, for even main memory is blazing fast compared to hard drives. Keeping with the office analogy, waiting for a hard drive seek is like leaving the building to roam the earth for one year and three months. This is why so many workloads are dominated by disk I/O and why database performance can drive off a cliff once the in-memory buffers are exhausted.
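The "year and three months" figure follows from the same one-cycle-per-second scaling. A sketch, assuming a ~13 ms seek (a plausible worst-case figure for the era; typical seeks are single-digit to low-double-digit milliseconds):

```python
# A hard-drive seek measured in 3 GHz clock cycles, then mapped
# onto the one-cycle-per-second office analogy.
seek_s = 13e-3                          # assumed ~13 ms seek
clock_hz = 3.0e9
cycles = seek_s * clock_hz              # ~39 million cycles
analogy_years = cycles / (60 * 60 * 24 * 365)
print(f"{cycles:.0f} cycles ~= {analogy_years:.2f} analogy-years")
```

Tens of millions of cycles per seek is why buffering, caching, and sequential access dominate storage-bound performance work.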