Hash functions.

© Copyright 2004-2008 by Paul Hsieh

Why look at hash functions?

In a previous job, I was asked to look at hashing functions and got into a dispute with my boss about how such functions should be designed. I had advocated the use of LFSRs or CRCs customized to the size of the table, as discussed in "Numerical Recipes". My boss advocated simply performing a modulo-by-prime operation and cited Knuth's 30-year-old "The Art of Computer Programming". I showed him examples where modulo by prime had extremely bad collisions, but this did not seem to sway him at all. It seems Knuth's rewrites have come too late.

A coworker "settled" the dispute by discovering Bob Jenkins' hash function. This outperformed both of our suggestions while being based on better analysis regarding collisions. I bookmarked the webpage and occasionally referred to it in later projects, and noticed the additions of the "One-at-a-Time Hash" and "FNV Hash" as updates to the page over time. The thing about Bob Jenkins' function is that the code is messy and uses a large number of mystery constants whose construction I did not understand. Both the "One-at-a-Time Hash" and the "FNV Hash" are extremely elegant, with very few magic constants.

Bob Jenkins himself indicated that FNV outperformed his own function, so at first I simply took his word for it and started using the FNV hash blindly on all occasions. Later, I finally had occasion to measure its real performance inside a project. After a number of miscalculations and mismeasurements, I decided that I needed to study the problem for real.

Analyzing existing hash functions

The first thing I noticed was that on my system (an Athlon XP) the Bob Jenkins function outperformed basically everything else (including the FNV function) by a large margin. How do we resolve this contradiction with Bob Jenkins' claim? Simple -- he was measuring on a "Pentium". Intel's latest Pentium 4 is known to have very slow shifters, which slows down every hash function except FNV, which uses a multiply instead. The Opteron/Athlon 64 architecture has a vastly improved integer multiplier (unchallenged by any other architecture, save perhaps the Itanium), which suggests that the FNV hash should do well on that system as well.
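For reference, here is the 32-bit FNV-1 hash; its per-byte mixing is a single multiply and xor, which is why a slow shifter doesn't hurt it. The constants are the published FNV offset basis and prime; the function name is my own.

#include <stdint.h>
#include <stddef.h>

/* The 32-bit FNV-1 hash, shown for reference.  Each byte is folded
   in with one multiply and one xor, avoiding the shift-heavy inner
   loops of the other functions discussed here. */
uint32_t fnv1_32 (const char * data, size_t len) {
    uint32_t hash = 2166136261u;      /* FNV offset basis */
    size_t i;

    for (i = 0; i < len; i++) {
        hash *= 16777619u;            /* FNV prime */
        hash ^= (uint8_t) data[i];
    }
    return hash;
}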

But at a more fundamental level, I wanted to understand the real performance limiters of these functions to see if recoding them might help. (I performed a similar exercise for some reference MD5 code and got a dramatic performance boost, so I was optimistic.) Bob Jenkins' code is too convoluted, and it seemed that the compiler or out-of-order CPU architectures could easily find what parallelism was there (and there is some to be had).

But the CRCs and the One-at-a-Time Hash are completely serial, each instruction dependent on the one before it. So I split the input data into odd and even bytes and calculated two parallel CRCs and One-at-a-Time Hashes, then at the end combined one into the other as if it were more data. This markedly improved the performance of these functions, but not quite to the point of outperforming the Bob Jenkins hash, so I did not fully pursue the task of proving the suitability of these modified functions.
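To illustrate the idea, here is a sketch of the odd/even split applied to the One-at-a-Time hash (the CRC case works the same way). This illustrates the technique and is not the exact code I benchmarked; the combining step at the end is one of several reasonable choices.

#include <stdint.h>
#include <stddef.h>

/* One-at-a-Time per-byte mixing step. */
static uint32_t mix_byte (uint32_t h, uint8_t b) {
    h += b;
    h += h << 10;
    h ^= h >> 6;
    return h;
}

/* Two independent accumulators consume alternating bytes, breaking
   the serial dependency chain; h2 is then fed into h1 at the end as
   if it were four more bytes of input. */
uint32_t SplitOneAtATime (const char * data, size_t len) {
    uint32_t h1 = 0, h2 = 0;
    size_t i;

    for (i = 0; i + 1 < len; i += 2) {
        h1 = mix_byte (h1, (uint8_t) data[i]);     /* even bytes */
        h2 = mix_byte (h2, (uint8_t) data[i+1]);   /* odd bytes  */
    }
    if (i < len) h1 = mix_byte (h1, (uint8_t) data[i]);

    /* Combine h2 into h1 as if it were more input data. */
    for (i = 0; i < 4; i++)
        h1 = mix_byte (h1, (uint8_t) (h2 >> (8 * i)));

    /* Standard One-at-a-Time finalization. */
    h1 += h1 << 3;
    h1 ^= h1 >> 11;
    h1 += h1 << 15;
    return h1;
}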

If I was going to try to outperform Bob Jenkins' function, I would have to take a step back and understand the functional nature of the bottleneck in these functions. The functions other than Bob Jenkins' basically operate on the idea of consuming one 8-bit byte of input data at a time and mixing each in some injective way into a 32-bit accumulator which, after possible post-processing, is simply output. One can see the motivation of this idea -- each input byte can be mixed twice, with a large degree of freedom in the 32-bit accumulator, without self-overlapping. Thus in successive steps it's only really required to "spread out" consecutive sequences of at most 8 bits in such a way that previous bytes don't obviously cancel out.

This is seen explicitly in the "One-at-a-Time Hash" function: for each byte only a few very simple operations are performed, and a final short sequence of operations is required at the end. These operations at the end are required to make sure the bits in the last few bytes fully "avalanche" to all the output bytes. Avalanching is the property between an input and output bit where the output bit will flip with a probability p ("close" to 0.5) if the input bit is flipped relative to any random input data. A good hash function requires avalanching from all input bits to all the output bits. (Incidentally, Bob Jenkins overly chastises CRCs for their lack of avalanching -- CRCs are not supposed to be truncated to fewer bits as other, more general hash functions are; you are supposed to construct a custom CRC for each number of bits you require.)
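For reference, here is Bob Jenkins' One-at-a-Time hash as he published it (transcribed to stdint types); note the three-operation per-byte step and the short avalanche sequence after the loop.

#include <stdint.h>
#include <stddef.h>

/* Bob Jenkins' One-at-a-Time hash.  Each byte is folded in with just
   three operations; the three lines after the loop force the final
   bytes to avalanche across all the output bits. */
uint32_t OneAtATimeHash (const char * data, size_t len) {
    uint32_t hash = 0;
    size_t i;

    for (i = 0; i < len; i++) {
        hash += (uint8_t) data[i];
        hash += hash << 10;
        hash ^= hash >> 6;
    }
    hash += hash << 3;
    hash ^= hash >> 11;
    hash += hash << 15;
    return hash;
}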

Creating a new hash function

Using the One-at-a-Time hash function as a model, the next obvious question to ask is "why not use fewer operations between data fragments?" The idea would be to rely more heavily on fixups at the end to produce the final avalanching, which adds a constant-time overhead, in hopes of reducing the linear overhead. I.e., the mixing function would in fact operate much more slowly relative to the stream of input bytes, but this would not matter to the bulk of the early bytes, because they would eventually reach a maximal point of avalanching anyway.

So my thought was to use fewer instructions per input fragment and to increase the size of the input fragment from 8 bits to 16 bits. On the x86 this latter idea has a particularly high impact on performance, since these architectures have hardware support for unaligned 16-bit word accesses. Using Bob Jenkins' definition of avalanching, I chose an inner-loop instruction sequence that I thought might work by interleaving two 16-bit words, then wrote a program to search for parameters which gave the greatest amount of avalanching before requiring a fixup at the end. I then added instructions equivalent to unrolling the inner loop, corresponding to padding the input with a fixed number of zeros, and scanned for the set of parameters which could complete the avalanching for all the real input bits.
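As an illustration of what such a search evaluates, here is a minimal sketch of an avalanche measurement: over many random inputs, flip each input bit in turn and tally how often each output bit flips; good mixing shows probabilities near 0.5 everywhere. This sketch tests the finished SuperFastHash given below (link it in) rather than searching over candidate parameters, and the trial count and input size are arbitrary choices.

#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

uint32_t SuperFastHash (const char * data, int len);  /* defined below */

#define INPUT_BYTES 8
#define TRIALS      100000

int main (void) {
    static long flips[INPUT_BYTES * 8][32];
    char buf[INPUT_BYTES];
    int i, b, o;
    long t;

    for (t = 0; t < TRIALS; t++) {
        for (i = 0; i < INPUT_BYTES; i++) buf[i] = rand () & 0xff;
        uint32_t base = SuperFastHash (buf, INPUT_BYTES);

        for (b = 0; b < INPUT_BYTES * 8; b++) {
            buf[b >> 3] ^= 1 << (b & 7);            /* flip one input bit */
            uint32_t diff = base ^ SuperFastHash (buf, INPUT_BYTES);
            buf[b >> 3] ^= 1 << (b & 7);            /* restore it */
            for (o = 0; o < 32; o++)
                if (diff & (1u << o)) flips[b][o]++;
        }
    }

    for (b = 0; b < INPUT_BYTES * 8; b++)
        for (o = 0; o < 32; o++)
            printf ("in %2d -> out %2d : %.3f\n",
                    b, o, (double) flips[b][o] / TRIALS);
    return 0;
}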

I was shocked to find that there were no significant impediments to this exercise, and I easily found a hash function with all these properties after a few hours of work. I then subjected all realistic sub-bit patterns of the hash output to a simple statistical test and verified that it had a distribution equivalent to a uniformly random map.

The moment of truth came with the performance test -- but given the architecture, it was a foregone conclusion. My hash function performs around 66% faster than Bob Jenkins' function, as tested with various compilers.

Below is the code:

IMPORTANT NOTE: Since there has been a lot of interest in the code below, I have decided to additionally provide it under the LGPL 2.1 license. This provision applies to the code below only, and not to any other code, including other source archives or listings from this site, unless otherwise specified.

The LGPL 2.1 is not necessarily a more liberal license than my derivative license, but this additional licensing makes the code available to more developers. Note that this does not give you multi-licensing rights. You can only use the code under one of the licenses at a time.

#include "pstdint.h" /* Replace with <stdint.h> if appropriate */
#undef get16bits
#if (defined(__GNUC__) && defined(__i386__)) || defined(__WATCOMC__) \
  || defined(_MSC_VER) || defined (__BORLANDC__) || defined (__TURBOC__)
#define get16bits(d) (*((const uint16_t *) (d)))
#endif

#if !defined (get16bits)
#define get16bits(d) ((((uint32_t)(((const uint8_t *)(d))[1])) << 8)\
                       +(uint32_t)(((const uint8_t *)(d))[0]) )
#endif

uint32_t SuperFastHash (const char * data, int len) {
uint32_t hash = len, tmp;
int rem;

    if (len <= 0 || data == NULL) return 0;

    rem = len & 3;
    len >>= 2;

    /* Main loop */
    for (;len > 0; len--) {
        hash  += get16bits (data);
        tmp    = (get16bits (data+2) << 11) ^ hash;
        hash   = (hash << 16) ^ tmp;
        data  += 2*sizeof (uint16_t);
        hash  += hash >> 11;
    }

    /* Handle end cases */
    switch (rem) {
        case 3: hash += get16bits (data);
                hash ^= hash << 16;
                hash ^= ((signed char)data[sizeof (uint16_t)]) << 18;
                hash += hash >> 11;
                break;
        case 2: hash += get16bits (data);
                hash ^= hash << 11;
                hash += hash >> 17;
                break;
        case 1: hash += (signed char)*data;
                hash ^= hash << 10;
                hash += hash >> 1;
    }

    /* Force "avalanching" of final 127 bits */
    hash ^= hash << 3;
    hash += hash >> 5;
    hash ^= hash << 4;
    hash += hash >> 17;
    hash ^= hash << 25;
    hash += hash >> 6;

    return hash;
}
Below are the results of a benchmark:
                 AMD Athlon XP 1.620 GHz                                   Power4 1 GHz   UltraSparc III 1.2 GHz
                 Intel C/C++     MSVC          WATCOM C/C++   GCC          GCC            CC 32bit
                 /O2 /G6 /Qaxi   /O2 /Ot       /otexan /6r    -O3 -march=  -O3            -O
                 /Qxi /Qip       /Og /G6                      athlon-xp    -mpowerpc64
CRC32            6.42            5.66          5.66           5.67         14.06          8.75
One at a Time    5.76            5.66          5.66           5.69         12.79          5.57
Alpha Numeric    3.29            4.06          4.06           5.67         10.26          5.52
FNV Hash         4.88            4.84          4.83           4.87          8.92         11.98
Bob Jenkins      2.08            2.36          2.03           2.07          6.16          3.08
SuperFastHash    1.54            1.92          1.59           1.34          3.71          2.15

The data is the time in seconds taken to hash a random buffer of 256 bytes 5 million times. MSVC seems to have a hard time optimizing the two fastest hash functions, and, surprisingly, the open source gcc is able to turn in the outright fastest result. Well done!

For the hash function to have the correct properties, it is assumed that CHAR_BIT is 8 and that computations use two's complement arithmetic.
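If desired, the first of these assumptions can be verified at compile time with a guard like this (a minimal sketch; the error message is arbitrary):

#include <limits.h>

#if CHAR_BIT != 8
#error "SuperFastHash assumes 8-bit chars"
#endif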

I was initially worried that using a portable way of accessing 16 bits at a time would erode the performance significantly. Although none of the x86 compilers reduced the portable version to the direct version (something worth complaining about), subsequent testing showed that this did not lead to the drastic performance drop that I expected (only about 20%). This leads me to believe that even on RISC architectures this function should perform very well versus Bob Jenkins' or other hashes.

Update(1): David C. wrote: "I tested your hash function against all of the popular ones, including Bob Burtles. It turns out it was not only the quickest but had the best distribution (I created histograms of the chain lengths). The architecture I work on is IBM z/OS (S/390 mainframes). Well done mate, will be using your code from now on!" Ok, not exactly RISC, but at least this demonstrates that this function is good beyond the x86 architecture.

Update(2): I have recently gained access to a Power4-based Linux machine, as can be seen in the updated performance table above. (Interestingly, even normalizing for clock rate differences, the Athlon XP is 35-40% faster than the Power4 architecture.) I did not see any appreciable performance difference between gcc and the native cc compiler, so I just reported the results from gcc. The performance ratio between SuperFastHash and Bob Jenkins is only slightly less impressive, so the main point about its advantage still holds.

Update(3): Feedback from Tim Rentsch suggested that, to be fair, Bob Jenkins' hash should leverage the x86's unaligned access feature as well (this helps it even more than my function, because it accesses 32 bits at once). I have also rescheduled the operations of SuperFastHash to maximally leverage the pipelines of modern CPUs. I have made the code more uniform in its treatment of integers, so besides being portable from a compilation point of view, it is now more portable from a semantic point of view. And finally, I have added results from an alphanumeric hash that has been discussed on USENET.

Update(4): It seems that this hash function has gotten a little bit of attention lately. The function has been tested and implemented in Apple's open source WebKit (used in Safari). Presumably this may mean that the code will eventually make its way back into Konqueror as well. In addition, Macromedia appears to have adopted it and will use it in a future deployment. I guess people think there's something to this. (Update (4a): Google's Chrome browser is based on WebKit and continues to use this hash function.)

Update(5): People have been asking for an incremental version of the hash function. Here is a simple diff sequence that shows how to do this:
13,14c13,14
< uint32_t SuperFastHash (const char * data, int len) {
< uint32_t hash = len, tmp;
---
> uint32_t SuperFastHash (const char * data, int len, uint32_t hash) {
> uint32_t tmp;

You initialize it by setting hash to some constant (0, or some other number if you want to avoid 0 sinking), and you keep it going by feeding the output of each call into the hash parameter of the next. This makes the algorithm functionally comparable to Bob Jenkins' hash from an incremental point of view. You can improve this by splitting the final avalanching code out into a kind of "finalization step". But it should be pointed out that if you split the same data in two different partitionings, the hash values you get out will not necessarily be the same (Bob Jenkins' hash has the same problem). If you wish to use an incremental hash which is partition independent and has very low per-call overhead, then Bob Jenkins' "One-at-a-Time" hash is recommended.
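For example, here is a usage sketch of the incremental variant, assuming the modified signature from the diff above; the chunk parameters, function name, and seed value are illustrative choices of mine.

#include <stdint.h>

uint32_t SuperFastHash (const char * data, int len, uint32_t hash);

/* Hash two chunks incrementally: each call's output seeds the next.
   The seed 1 is an arbitrary nonzero constant, chosen to avoid the
   "0 sink" mentioned above. */
uint32_t hash_two_chunks (const char * chunk1, int len1,
                          const char * chunk2, int len2) {
    uint32_t h = 1;
    h = SuperFastHash (chunk1, len1, h);
    h = SuperFastHash (chunk2, len2, h);
    return h;   /* note: depends on this particular partitioning */
}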

Update(6): In Google's open source "sparse hash table" project, the documentation makes the following observation: " ... in one test of the default SGI STL string hash function against the Hsieh hash function ..., for a particular set of string keys, the Hsieh function resulted in hashtable lookups that were 20 times as fast as the STLPort hash function. The string keys were chosen to be "hard" to hash well, so these results may not be typical, but they are suggestive."

Future work to be considered

The newest generation of CPUs is capable of 64-bit computation, and certainly in a few years we can expect widespread development and tool availability for 64-bit software. So should this idea work by reading 32 bits at a time into a 64-bit accumulator? Probably, and we could expect roughly twice the asymptotic performance.

There's also the question of the in-line dependencies. There are 6 dependent instructions in the inner loop, so it's quite possible that the odd and even word splits and recombination might lead to a substantial performance boost. (Rescheduling the operations actually saturates even the 3-pipeline Athlon, so unrolling is not necessary.)

Update(1): There have been some requests for an incremental version of SuperFastHash. This is straightforward enough to do by accepting the hash value as a parameter and initializing it with some non-zero constant (since the point is to assume that the length is not available until all the data has been read). The only sticking issue is which constant to choose.

Update(2): Tim Rentsch has noticed that the bit avalanching probability of SuperFastHash deviates from 50% more than Bob Jenkins' hash does -- this is true; in fact it is between 5/12 and 7/12 (by design), while Bob Jenkins' hash appears to be far closer to 50%. There are some easy hacks limited to adding to the avalanche code at the end (for example, appending hash += (hash << 16) | (hash >> 16)) that make all the bits of my hash function avalanche with a probability between .485 and .515; however, it's probably best that I revisit this to see how to achieve it with the least additional impact. (See the next note.)

Update(3): Following comments by Tim Rentsch and Bob Jenkins, I have revisited the final tempering code and have updated it to reduce the bit correlation range to between .485 and .515 without introducing other anomalies. It now also directly passes Bob Jenkins' lookup2 test.

