Query Routing for the Gnutella Network

Version 0.4

 

Christopher Rohrs

 Lime Wire LLC

December 18, 2001

 

 

Abstract: We present a novel scheme for avoiding broadcast queries on the Gnutella network.  In this scheme, hosts create query route tables by hashing file keywords and regularly exchange them with their neighbors.  Standard compression techniques minimize the cost of exchanging tables.  Our scheme could dramatically reduce Gnutella’s bandwidth requirements, though careful study and simulation are needed before widespread deployment.

 

Introduction

 

The Gnutella protocol uses broadcasts to query for files.  A back-of-the-envelope calculation shows that this quickly limits the number of hosts that can be searched.  Despite popular rumor, this does not necessarily limit the overall size of the network [Bil].  Still, it is clearly desirable to eliminate broadcast queries.  Reducing query traffic increases scalability and leaves more bandwidth for uploads and downloads.

 

Many ideas have been proposed to eliminate broadcasts.  Freenet, for example, avoids broadcasts through a clever lexical routing scheme [Lan].  However, this scheme is imperfect.  First, it requires you to know the exact name of the file you wish to download.  For example, “constitution.txt” will not match “constitution.html”.  Secondly, it only returns a single match for your query.  Users searching for “book” are bound to be disappointed.

 

Recently Prinkey presented a clever scheme for routing queries based on keywords [Pri].  The idea behind this scheme is to have hosts periodically exchange keyword route tables with their neighbors.  To reduce bandwidth, Prinkey describes an efficient way of encoding these keywords through hashing and bitmaps.  The result is that queries are normally only sent to hosts that can respond to them.  Furthermore, Prinkey’s scheme does not suffer from any of Freenet’s problems.  Unfortunately it requires organizing the network in a tree-like fashion.

 

This paper generalizes Prinkey’s routing scheme for use on the Gnutella network.  Our key ideas are to include distance information in routing tables and to compress the tables with well-known algorithms.  First we review Prinkey’s original keyword-based routing scheme.  Then we describe the changes that are necessary for non-tree networks.  Next, we propose a new Gnutella message for this purpose.  We also discuss hashing in some depth.  Finally we conclude with remarks about the effects of this routing scheme on the network.

 

Prinkey’s Scheme

 

This section reviews Prinkey’s query routing scheme for rooted tree topologies.  We present this scheme through an example and a series of refinements.  Readers familiar with the scheme may choose to skip to the next section.

 

Consider four hosts in a tree-like hierarchy rooted at host A, as shown below.  Hosts B, C, and D are serving files named “bad joke”, “good book”, and “good joke”, respectively.  For simplicity assume that only node A is interested in querying the network.  With Gnutella’s current search scheme, any query from A must be sent to all of the other hosts.  For example, a query for “bad joke” will needlessly be sent to hosts C and D.  This is inefficient.

Figure 1: Route tables in a rooted tree.

Keywords are not hashed for simplicity.

 

Broadcasts can be avoided if hosts maintain some sort of routing table for each connection.  One possibility is to have hosts propagate all file keywords [1] up the tree.  For example, host B would know that keywords {“good”, “book”} are reachable through C, and A would know that {“good”, “book”, “bad”, “joke”} are (indirectly) reachable through B.  More formally, the routing table for a connection to a host X is the union of all routing tables for the connections from X to its children.  This is illustrated above.  With this information, a query is only sent along a connection if all keywords of the query are found in that connection’s routing table.  For example, a query for “bad joke” would be sent from A to B, but not from B to C or D, since neither of these connections contains the keyword “bad”.   Likewise, a query for “good joke” would be sent from A to B to D, but not to C.
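
This forwarding rule is straightforward to express in code.  The sketch below models the unhashed variant with sets of keyword strings; the names KeywordRouteTable and shouldForward are ours, for illustration only.

import java.util.HashSet;
import java.util.Set;

/** Sketch of an unhashed per-connection route table. */
class KeywordRouteTable {
    private final Set<String> keywords = new HashSet<String>();

    /** Adds keywords reachable through this connection (set union). */
    void addAll(Set<String> moreKeywords) {
        keywords.addAll(moreKeywords);
    }

    /** Returns true iff a query should be sent along this connection,
     *  i.e., iff ALL of its keywords appear in this table. */
    boolean shouldForward(String[] queryKeywords) {
        for (String k : queryKeywords)
            if (!keywords.contains(k))
                return false;  //one missing keyword vetoes the send
        return true;
    }
}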

 

This scheme guarantees there are never false negatives.  That is, a query will always be sent along a connection if some host reachable via that connection can respond to the query.  However it does not eliminate false positives.  For example, a query for “joke book” or “bad book” may be sent from A to B even though no host can respond to the query.  It is hard to avoid these false positives, but one can reasonably assume that they are rare.

 

Propagating route tables can potentially consume a lot of bandwidth.  To solve this problem, Prinkey advocates hashing each keyword and storing the resulting numbers in a bitmap.  For example, if “good” hashes to 2 and “book” to 5, the routing table for {“good”, “book”} can be compressed to the bitmap 001001….  Note that the second and fifth bits (starting from zero at the left) are set in this bitmap.  In other words, each multibyte string is compressed to a single bit.  The resulting tables are an example of Bloom filters.  Table propagation is done by logically OR’ing bitmaps instead of unioning sets of keywords.  Of course there is a chance that two strings could hash to the same value, but this only causes harmless false positives.  
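
A minimal sketch of such a table, using java.util.BitSet.  The hash below is a placeholder so the example is self-contained; the protocol itself uses the function from the “Hashing Details” section.

import java.util.BitSet;

/** Sketch of a hashed route table: a Bloom filter with one hash function. */
class BitmapRouteTable {
    private final BitSet bits;
    private final int size;  //number of table entries

    BitmapRouteTable(int size) {
        this.bits = new BitSet(size);
        this.size = size;
    }

    /** Each multibyte keyword is compressed to a single bit. */
    void addKeyword(String keyword) {
        bits.set(hash(keyword));
    }

    /** Table propagation: OR'ing bitmaps is analogous to unioning keyword sets. */
    void or(BitmapRouteTable child) {
        bits.or(child.bits);
    }

    /** False positives are possible (hash collisions); false negatives are not. */
    boolean mightMatch(String keyword) {
        return bits.get(hash(keyword));
    }

    private int hash(String keyword) {
        //Placeholder hash onto 0..size-1; not the protocol's hash function.
        return (keyword.hashCode() & 0x7FFFFFFF) % size;
    }
}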

 

However, this scheme has two serious problems.  First, it requires a tree topology with a designated root.  This is difficult to form in a distributed fashion, especially if the tree is to account for the underlying structure of the network.  Secondly, if a host other than the root wants to query the network, it must send the query up the tree to the root.  This forces the nodes at the top of the tree to handle a disproportionate amount of traffic.  Modifications are needed before this scheme can be adapted for Gnutella.

 

Routing Without Trees

 

One way of routing queries in a non-tree network like Gnutella is to simply propagate route tables along all connections.  That is, at every time step, a host sets its route tables to be the logical OR of its neighbors’ route tables and its own files. [2]   So after T time steps, all hosts within T hops of a host X are aware of X’s files.  The problem is that X’s routing information will continue to propagate long after X has left the network.  This will dramatically increase the false positive rate, resulting in much wasted bandwidth.

 

For this reason, it is critical to limit the span of a host’s routing information.  This can be done by associating a hops value with each keyword.  Routing tables are now an array of numbers instead of an array of bits, where each number is the minimum distance to a matching file.  More formally, for any connection C, RT_C[i] is the number of hops to a file with a keyword that hashes to i, or infinity if no such file exists.  Note that RT_C[i] ≥ 1 for all i.  Queries are forwarded to those connections whose route tables have entries for all query keywords.  That is, a query with TTL N and keywords K_1, ..., K_M is only sent along a connection C if RT_C[HASH(K_i)] ≤ N for all 1 ≤ i ≤ M.
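
In code, the forwarding test might look like the sketch below.  The names are ours, and the INFINITY value anticipates the choice suggested in the protocol section.

/** Sketch of a route table of hop counts rather than bits. */
class HopsRouteTable {
    static final byte INFINITY = 11;  //one greater than a typical maximum TTL
    private final byte[] hops;        //hops[i] = min distance to a file with a
                                      //keyword hashing to i, or INFINITY

    HopsRouteTable(int size) {
        hops = new byte[size];
        java.util.Arrays.fill(hops, INFINITY);
    }

    /** Forward a query iff every keyword is reachable within TTL hops,
     *  i.e., hops[HASH(k)] <= ttl for every keyword k of the query. */
    boolean shouldForward(int[] hashedKeywords, int ttl) {
        for (int h : hashedKeywords)
            if (hops[h] > ttl)
                return false;
        return true;
    }
}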

 

Hosts now update their tables via dynamic programming; at every time step, hosts exchange all their tables with neighbors and update them accordingly. [3]   Let host X have connections to hosts Y_1, ..., Y_M, and let RT_{Y_i→X} be the route table received by X from host Y_i.  If X is not sharing any files with keywords hashing to h, the table RT_{X→Y_i} propagated by X to each host Y_i is given by RT_{X→Y_i}[h] = min_{j≠i}(RT_{Y_j→X}[h]) + 1.  Otherwise, RT_{X→Y_i}[h] = 1.
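
The update rule translates directly into code.  In the sketch below (our names), received[j] is the table X last received from neighbor j, and local[h] is true iff X itself shares a file with a keyword hashing to h; infinity plus one is clamped back to infinity by the initial value of best.

/** Sketch: compute the table X propagates to neighbor i. */
static byte[] tableToSend(byte[][] received, boolean[] local, int i,
                          byte infinity) {
    byte[] out = new byte[local.length];
    for (int h = 0; h < out.length; h++) {
        if (local[h]) {         //X shares a matching file itself
            out[h] = 1;
            continue;
        }
        int best = infinity;    //min over all neighbors j != i, plus one hop
        for (int j = 0; j < received.length; j++) {
            if (j == i) continue;  //never echo a table back to its source
            best = Math.min(best, received[j][h] + 1);
        }
        out[h] = (byte)best;
    }
    return out;
}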

 

As an example, consider the network described in the previous section.  The fact that this network is a tree is a coincidence; it need not have a root and may have cycles.  After one time step, hosts have exchanged routing tables for files one hop away.  This is illustrated below on the left-hand side.  Here sets of keywords are shown instead of arrays of hashed values for simplicity.  For example, the table “{bad/1, joke/2}” would really be represented as the array [∞, 2, ∞, 1, ∞, ...] if “bad” hashed to 3 and “joke” hashed to 1.  After a second time step, hosts have exchanged routing tables for files two hops away.  This is illustrated below on the right-hand side.  Note that A now has routing entries for all files.  On the other hand, B is aware that no files are reachable through A.

 

Figure 2: Propagation of route tables by TTL.

Keywords are not hashed for simplicity.

 

At first this scheme seems wasteful.  Instead of exchanging one bit per keyword, hosts exchange a lg(M)-bit number, where M is the maximum TTL.  So with a typical TTL of 10, neighbors exchange 4 times more data than in Prinkey’s original scheme.  However, experiments show that traditional data compression algorithms can reduce the size of these route tables quite substantially.  As an example, a very sparse table (one where most entries are infinity) contains almost no information, and hence can be encoded with few bits.  Furthermore, one expects an uneven distribution of numbers, which is ideal for certain kinds of statistical compression schemes. [4]   Hence we need not design our own encoding scheme; this greatly simplifies our update protocol. [5]
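
For instance, the stock ZLIB implementation in java.util.zip compresses such sparse tables well.  A minimal sketch, assuming tables are byte arrays:

import java.util.zip.Deflater;

/** Sketch: compress a (mostly-infinity, hence sparse) route table. */
static byte[] compress(byte[] table) {
    Deflater deflater = new Deflater();
    deflater.setInput(table);
    deflater.finish();
    //ZLIB output is at most slightly larger than the input, so one
    //adequately sized buffer and one deflate() call suffice here.
    byte[] buf = new byte[table.length + 64];
    int len = deflater.deflate(buf);
    deflater.end();
    return java.util.Arrays.copyOf(buf, len);
}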

 

An important question is when to update these tables.  Ideally a host would send a route table update after adding or removing connections, or after changing its shared file list.  However, several improvements can be made that greatly reduce the amount of bandwidth used.  First, hosts should only propagate route table information from connections that have been up for a few minutes.   This means that it may take a few minutes for a host to become searchable, but it tends to weed out the majority of short-lived connections. [6]   More importantly, hosts should send at most one update message per connection every T minutes.  In other words, if a host receives several update messages from its neighbors within a T-minute window, they are consolidated into a single message that is forwarded on.  So the number of update messages sent per connection is independent of the network size and the number of the host’s connections.

 

If the route table RT to be sent to a host is very similar to the last table RT’ sent, it may be advantageous to send an incremental update.  First, the sender creates a “patch” by subtracting each element of RT’ from the corresponding element of RT.  Note that this patch contains all zeroes if the table has not changed, and positive or negative values otherwise.  Then this patch is compressed as discussed above and sent along the connection.  The receiver then decompresses this patch and adds it to the route table for that connection.  In a relatively stable network, such patches may save a considerable amount of bandwidth.
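
A sketch of both ends of this process, with our own names; entries and patch values stay within a byte’s range, since distances never exceed INFINITY.

/** Sender: patch = element-wise difference of new and last-sent tables. */
static byte[] makePatch(byte[] newTable, byte[] lastSent) {
    byte[] patch = new byte[newTable.length];
    for (int i = 0; i < patch.length; i++)
        patch[i] = (byte)(newTable[i] - lastSent[i]);  //all zeroes if unchanged
    return patch;
}

/** Receiver: adding the patch reproduces the sender's current table. */
static void applyPatch(byte[] table, byte[] patch) {
    for (int i = 0; i < table.length; i++)
        table[i] += patch[i];
}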

 

Gnutella Protocol Changes

 

To implement the above scheme for the Gnutella network, we propose the addition of a single extensible message to the protocol, called ROUTE_TABLE_UPDATE.  The proposed function code of this message is 0x20, since Gnutella uses even function codes for broadcast messages.

 

ROUTE_TABLE_UPDATE messages come in several variants.  The length of the payload is defined in the Gnutella general message header and must always be at least 1. The first byte of the payload determines the message’s variant.  Only two variants are currently defined: RESET (0x0) and PATCH (0x1).  The format of these messages is shown below.  More variants could be defined later if desired. This allows us to modify the route table mechanism without adding new Gnutella function codes.

 

VARIANT (1 byte): The message variant.  Always 0x0 for RESET.

TABLE_LENGTH (4 bytes): The length of the sender’s route table, i.e., the number of entries.  (Earlier versions of this document incorrectly stated the meaning of this value.)  For hashing reasons, this must be a power of 2.

INFINITY (1 byte): The route table value for infinity, i.e., one greater than the maximum distance to any file in the table.

Format of the RESET variant payload

 

VARIANT (1 byte): The message variant.  Always 0x1 for PATCH.

SEQ_NO (1 byte): The position of this message in the update sequence.

SEQ_SIZE (1 byte): The total number of messages in this update sequence.

COMPRESSOR (1 byte): The algorithm to use when decompressing DATA.  Currently defined values: 0x0 (no compression) and 0x1 (ZLIB compression).

ENTRY_BITS (1 byte): The number of bits per uncompressed patch entry, including the sign bit.  Must be 4 or 8.

DATA (to end of payload): The compressed table patch.

Format of the PATCH variant payload

 

A RESET variant must precede any PATCH variants.  This message resets the receiver’s route table to a completely empty table of TABLE_LENGTH elements.  For hashing reasons discussed below, TABLE_LENGTH must be a power of two.  Each element of this table has a numerical value of INFINITY.  Typically INFINITY is chosen to be one greater than the maximum TTL, e.g., 11.  Hosts may send more RESET messages at any point in the future, e.g., if there has been a drastic change in the route table.  After receiving a RESET message along a connection C, a host is encouraged to disregard the route table for C and forward all queries along C until the table has been fully updated via PATCH messages. [7]   This means, for example, that a client will broadcast all queries for a few seconds after startup.
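
A sketch of RESET payload serialization.  The field sizes follow the table above; we assume TABLE_LENGTH is encoded little-endian, matching the convention for multi-byte fields elsewhere in the Gnutella protocol.

/** Sketch: build the 6-byte RESET payload. */
static byte[] encodeReset(int tableLength, byte infinity) {
    byte[] payload = new byte[6];
    payload[0] = 0x0;                                //VARIANT: RESET
    payload[1] = (byte)(tableLength         & 0xFF); //TABLE_LENGTH,
    payload[2] = (byte)((tableLength >> 8)  & 0xFF); //little-endian
    payload[3] = (byte)((tableLength >> 16) & 0xFF);
    payload[4] = (byte)((tableLength >> 24) & 0xFF);
    payload[5] = infinity;                           //INFINITY, e.g., 11
    return payload;
}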

 

The PATCH variant incrementally changes the receiver’s route table in the manner described in the previous section.  To prepare a patch, the sender first subtracts the last route table sent (or INFINITY if no table has been sent) from the current route table, yielding an array of signed numbers.  (Note that the sender must therefore store two route tables per connection.)  Then the sender interprets this array as a binary string and compresses it with a well-known algorithm.  Finally, the sender chunks this data into PATCH messages and sends these messages one-by-one.  Each message has a TTL of 1 and hops value of 0, and all messages of the sequence share the same SEQ_SIZE, COMPRESSOR, and ENTRY_BITS fields.  (These are discussed later.)  The receiver inverts this process to update its route table. This is illustrated below. 

Converting route tables to PATCH messages

 

One may ask why multiple PATCH messages are sent for each table update; an alternative is to send a single giant PATCH every T minutes.   Unfortunately this would stall other Gnutella traffic for unacceptable periods of time.  Interleaving PATCH messages with normal traffic requires negligible extra bandwidth and decreases the worst-case latency of normal queries.  A maximum message size of 1KB might be a good choice.  This would block query traffic for at most 200 ms on a modem with 5KB/s bandwidth.  Clients are encouraged to spread the PATCH messages equally over the T minute window if possible. 
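
The chunking step might look like the sketch below (makePatchMessages is our name).  SEQ_NO is taken as 1-based so that the final message satisfies SEQ_NO==SEQ_SIZE, per the completion rule in the next paragraph.

import java.util.ArrayList;
import java.util.List;

/** Sketch: chunk a compressed patch into PATCH payloads of at most 1 KB
 *  of DATA each, using the field layout from the PATCH table above. */
static List<byte[]> makePatchMessages(byte[] data, byte compressor,
                                      byte entryBits) {
    final int CHUNK = 1024;
    int count = (data.length + CHUNK - 1) / CHUNK;
    if (count > 255)
        throw new IllegalArgumentException("patch too large for one sequence");
    List<byte[]> messages = new ArrayList<byte[]>();
    for (int seq = 1; seq <= count; seq++) {
        int off = (seq - 1) * CHUNK;
        int len = Math.min(CHUNK, data.length - off);
        byte[] m = new byte[5 + len];
        m[0] = 0x1;               //VARIANT: PATCH
        m[1] = (byte)seq;         //SEQ_NO
        m[2] = (byte)count;       //SEQ_SIZE, identical in every message
        m[3] = compressor;        //0x0 none, 0x1 ZLIB
        m[4] = entryBits;         //4 or 8
        System.arraycopy(data, off, m, 5, len);
        messages.add(m);
    }
    return messages;
}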

 

The receiver uses the SEQ_NO and SEQ_SIZE fields to reassemble the compressed patch from a sequence of PATCH messages.  The SEQ_SIZE field is the number of PATCH messages in this sequence.  The SEQ_NO field is the message’s position in the sequence.  Messages in the sequence must be sent in order, so the SEQ_NO fields should increase by one in each message of the sequence.  If either of these conditions is violated (e.g., because the sender is improperly dropping buffered messages [8] ), the receiver should close the connection; because table updates are generally incremental, there is no way to recover from a missing message.  Note that the sequence numbers are intended only to provide a “sanity check”, not to make the use of a reliable transport layer (i.e., TCP) obsolete.  For this reason, no provisions are made for sequence restart.  The sequence is completed upon receiving the message whose SEQ_NO equals SEQ_SIZE.  Note that sequences are limited to 255 messages, since only one unsigned byte is used for the SEQ_SIZE field.  Also remember that the sender may send a RESET message at any time to start a new patch sequence.
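
The receiver’s checks can be modeled as below (our names); throwing here corresponds to closing the connection.

import java.io.ByteArrayOutputStream;

/** Sketch: reassemble a compressed patch from in-order PATCH messages. */
class PatchAssembler {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
    private int expectedSeqNo = 1;

    /** Returns the complete compressed patch, or null if more messages
     *  are expected.  Throws on a gap, since there is no way to recover
     *  from a missing message. */
    byte[] add(int seqNo, int seqSize, byte[] data) {
        if (seqNo != expectedSeqNo)
            throw new IllegalStateException("PATCH message missing or reordered");
        buf.write(data, 0, data.length);
        expectedSeqNo++;
        return (seqNo == seqSize) ? buf.toByteArray() : null;
    }
}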

 

Once the receiver has reassembled the compressed patch, it must decompress it with the algorithm specified in the COMPRESSOR field.  Currently two algorithms are specified: 0x0 for no algorithm or 0x1 for the ZLIB algorithm [9] .  All implementations must support these algorithms.  New algorithms could be added in the future, though an out-of-band mechanism must be used to discover any such additional capabilities.
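
Decompression for COMPRESSOR==0x1 is a direct use of java.util.zip.Inflater.  A minimal sketch, assuming the caller knows the uncompressed size from TABLE_LENGTH and ENTRY_BITS:

import java.util.zip.DataFormatException;
import java.util.zip.Inflater;

/** Sketch: inflate a reassembled ZLIB patch to its known size. */
static byte[] decompress(byte[] data, int uncompressedBytes)
        throws DataFormatException {
    Inflater inflater = new Inflater();
    inflater.setInput(data);
    byte[] out = new byte[uncompressedBytes];
    int len = inflater.inflate(out);  //full input + exact-size buffer:
    inflater.end();                   //one call suffices for this sketch
    if (len != out.length)
        throw new IllegalStateException("patch has unexpected length");
    return out;
}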

 

The output of the decompressor is simply a string of bits.  The receiver must interpret these bits as an array of signed numbers.  It does so by breaking the bits into elements that are ENTRY_BITS bits long.  The length of the resulting patch array must equal the TABLE_LENGTH field passed in the latest RESET message.  For simplicity, ENTRY_BITS is restricted to 4 or 8.  For example, the uncompressed string 001011110000 would be interpreted as the patch [2, -1, 0] if ENTRY_BITS=4.  One could argue that a decent compression algorithm should eliminate the need to vary this value, but our experiments show that setting ENTRY_BITS to 4 when possible can reduce table sizes by around 10%.
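
Unpacking the 4-bit case might look like the sketch below; we assume the high nibble of each byte comes first, so the example above, as the bytes {0x2F, 0x00}, unpacks to [2, -1, 0, 0].

/** Sketch: unpack 4-bit signed patch entries (ENTRY_BITS == 4). */
static byte[] unpackNibbles(byte[] data) {
    byte[] out = new byte[data.length * 2];
    for (int i = 0; i < data.length; i++) {
        out[2*i]   = (byte)(data[i] >> 4);               //high nibble, sign-extended
        out[2*i+1] = (byte)((byte)(data[i] << 4) >> 4);  //low nibble, sign-extended
    }
    return out;
}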

 

In this scheme, table updates are not necessarily atomic.  That is, the receiver need not wait until receiving the last PATCH message before modifying or even propagating part of its route table for that connection.  We believe that this is actually a good thing.  First, it reduces the memory requirements of the client, as there is no need to buffer received PATCH messages.  Second, it may reduce the worst-case latency for table propagations.

 

Older clients present a difficulty for any routing scheme, as they will not propagate route tables, creating “black holes” in the network.  For example, consider a string of clients A-B-X-C.  If X is an older client, queries from A will rarely reach C, though queries from C will reach A.  One option is for B to propagate a fake routing table from X with entries for all keywords, but this would undo the benefits of routing.  A better choice is for older clients to prefer newer clients whenever possible.  This can be implemented through host discovery services like router.limewire.com and GUID versioning schemes. [10]   Whether or not a client connects to older clients (at the cost of wasted bandwidth) should be user configurable.  In the above example, A connects to X only if it wants to search older Gnutella clients.  Note that older clients will still be able to search new clients.  This is illustrated below.

 

Partitioning the network into new and old clients

 

 

Hashing Details

 

A query string contains one or more arbitrarily long keywords.  These keywords should be separated by the ASCII value for a space (‘ ‘), i.e., the byte 0x20.  For example, the query string “graphics book” contains the keywords “graphics” and “book”.  A consistent hash function h(k,m) is needed to map each keyword string k to an entry of a route table of length m.  This function must satisfy three properties:

 

  1. It hashes keys of arbitrary length reasonably uniformly.
  2. It is implemented easily and efficiently on most platforms.
  3. It allows one to convert between hash tables of different sizes.  More formally, given h(k, m) and m—but not k—it is possible to compute h(k, m’) for an arbitrary m’.  This allows clients with different route tables to interoperate, an important feature for upwards compatibility.

 

Any standard hash function should satisfy the first two properties.  The third property, however, is harder to satisfy.  For example, it effectively rules out division methods, i.e., those based on congruence modulo m.  Thankfully other methods suffice.  To start, we need a way of converting an arbitrary-length string k into a 32-bit natural number n.  We recommend calculating n from k as follows:

 

n = k[3]k[2]k[1]k[0] ^ k[7]k[6]k[5]k[4] ^ ...

 

Here wxyz denotes the 32-bit number formed by concatenating bytes w, x, y, and z, with z being the least significant byte, and ^ is the bitwise XOR operator.  In other words, one interprets each keyword as a sequence of 32-bit little-endian numbers, padded with 1 to 3 zero bytes if needed, and XORs the numbers together.
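
As a worked example (ours, for illustration): the keyword “good” consists of the bytes 0x67 0x6F 0x6F 0x64, so n = k[3]k[2]k[1]k[0] = 0x646F6F67.  The five-byte keyword “books” folds two chunks, 0x6B6F6F62 ^ 0x00000073 = 0x6B6F6F11, where the second chunk is the byte for ‘s’ padded with three zeroes.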

 

Now one must map the natural number n onto the range 0 to (m-1).  An ideal function might be

 

h’(n, m) = ⌊m ((n A) mod 1)⌋

where A = (√5-1)/2 ≈ 0.6180339887...

 

Here ⌊x⌋ means the floor of x, and “x mod 1” means the fractional part of x, i.e., x-⌊x⌋.  This function is discussed in depth in Introduction to Algorithms by Cormen, Leiserson, and Rivest.  The value of A is suggested by Knuth to avoid collisions for a wide range of data.  Unfortunately this function is bound to produce different values on different platforms because of rounding errors.  However, if we restrict m to a power of two, the function can be computed using integer operations as follows: first one multiplies the 32-bit value of n by the 32-bit constant 0x4F1BBCDC, producing a 64-bit product.  The hash is then bits 32-b through 31 of the product, inclusive, where b=log2(m).  (Bits are labelled 0, the least significant, through 63.)  A Java implementation of this function is shown in the appendix, along with sample outputs.

 

It is easy to interpolate/extrapolate this function when converting between tables of different sizes.  Assume we wish to interpolate a table T of size m into a table T’ of size m’.  If we visualize the tables as number lines, it becomes clear that an element T’[i’] should be set to the minimum of all overlapping elements of T.  This is illustrated below, along with pseudocode.  Here ‘X’ is the symbol for infinity.  In the pseudocode, we assume C-style indices (from 0 to m-1), assume that all elements of T’ are initialized to infinity, and assume that T and T’ use the same value for infinity.

 

for all i, where 0 ≤ i < m,

    for all i’, where ⌊i·m’/m⌋ ≤ i’ < ⌈(i+1)·m’/m⌉,

        T’[i’] = min(T’[i’], T[i])

 

Scaling route tables.  ‘X’ means infinity.
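
For concreteness, a Java rendering of the pseudocode above (scale is our name); it handles both growing and shrinking tables.

/** Sketch: scale route table t of size m to a new table of size m2. */
static byte[] scale(byte[] t, int m2, byte infinity) {
    int m = t.length;
    byte[] t2 = new byte[m2];
    java.util.Arrays.fill(t2, infinity);  //all elements start at infinity
    for (int i = 0; i < m; i++) {
        int lo = (int)((long)i * m2 / m);                 //floor(i*m2/m)
        int hi = (int)(((long)(i + 1) * m2 + m - 1) / m); //ceil((i+1)*m2/m)
        for (int j = lo; j < hi; j++)
            t2[j] = (byte)Math.min(t2[j], t[i]);  //min of overlapping elements
    }
    return t2;
}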

 

Hosts should canonicalize all keywords in a human language-specific manner before adding them to route tables or sending them in queries.  For English, this means ensuring all letters are lower case.

Conclusion

 

The key assumption behind routing is that the savings in query reduction outweigh the cost of exchanging route tables.  The size of query route tables depends on two factors: the number of distinct keywords in the horizon and the efficiency with which these can be encoded.  The number of keywords, in turn, depends on the number of users within the horizon and the number of distinct items they are sharing.  The latter is difficult to estimate.

 

However, preliminary evidence suggests that these tables can be encoded quite efficiently.  As an experiment, we recorded query replies over several hours and built the routing tables corresponding to those replies.  We found that a 65536 element route table with 12000 keywords and INFINITY=7 could be transmitted in just over 12 KB if ENTRY_BITS=4—one byte per keyword—or 13 KB if ENTRY_BITS=8.

 

As with the existing Gnutella protocol, this scheme limits your search radius to N hops.  That is not necessarily a problem.  Because routing allows you to hold more connections with the same amount of bandwidth, the diameter of the network will be greatly decreased.  In fact, Gnutella may already form a “small world” network, meaning that most hosts are reachable in a small number of hops.

 

Query routing has the potential to reduce Gnutella’s bandwidth requirements by orders of magnitude, achieving true scalability.  However, query routing should be carefully studied in simulation before wider deployment.

 

Appendix: Java Implementation of Hash Function

 

/**
 * The official platform-independent hashing function for query routing.  The
 * key property is that it allows interpolation of hash tables of different
 * sizes.  More formally, k*hash(x,n)<=hash(x, kn)<=k*hash(x,n)+1.
 */
public class HashFunction {
    private static final int A_INT=0x4F1BBCDC;

    /**
     * Returns the n-<b>bit</b> hash of x.toLowerCase(), where n="bits".  That
     * is, the returned value can fit in "bits" unsigned bits, and is
     * between 0 and (2^bits)-1.
     *
     * @param x the string to hash
     * @param bits the number of bits to use in the resulting answer
     * @return the hash value
     */
    public static int hash(String x, byte bits) {
        return hash(x, 0, x.length(), bits);
    }

    /**
     * Returns the same value as hash(x.substring(start, end), bits),
     * but tries to avoid allocations.  Note that x is lower-cased
     * when hashing.
     *
     * @param x the string to hash
     * @param start the start offset of the substring to hash
     * @param end just PAST the end of the substring to hash
     * @param bits the number of bits to use in the resulting answer
     * @return the hash value
     */
    public static int hash(String x, int start, int end, byte bits) {
        //1. First turn x[start...end-1] into a number by treating all 4-byte
        //chunks as a little-endian quadword, and XOR'ing the result together.
        //We pad x with zeroes as needed.
        //    To avoid having to deal with special cases, we do this by XOR'ing
        //a rolling value one byte at a time, taking advantage of the fact that
        //x XOR 0==x.
        int xor=0;  //the running total
        int j=0;    //the byte position in xor.  INVARIANT: j==(i-start)%4
        for (int i=start; i<end; i++) {
            int b=Character.toLowerCase(x.charAt(i)) & 0xFF;
            b=b<<(j*8);
            xor=xor^b;
            j=(j+1)%4;
        }
        //2. Now map the number to the range 0 - (2^bits-1).
        return hashFast(xor, bits);
    }

    /**
     * Returns the n-<b>bit</b> hash of x, where n="bits".  That is, the
     * returned value can fit in "bits" unsigned bits, and is
     * between 0 and (2^bits)-1.
     */
    private static int hashFast(int x, byte bits) {
        //Multiplication-based hash function.  See Chapter 12.3.2 of CLR.
        long prod= (long)x * (long)A_INT;
        long ret= prod << 32;              //keep the low 32 bits of the product...
        ret = ret >>> (32 + (32 - bits));  //...and take their top "bits" bits
        return (int)ret;
    }

    /** Unit tests.  (Assert.that is LimeWire's simple assertion utility.) */
    public static void main(String[] args) {
        //1. Basic hash tests.  These unit tests were generated by the reference
        //implementation of HashFunction.  Some I've checked manually.
        Assert.that(hash("", (byte)13)==0);
        Assert.that(hash("eb", (byte)13)==6791);
        Assert.that(hash("ebc", (byte)13)==7082);
        Assert.that(hash("ebck", (byte)13)==6698);
        Assert.that(hash("ebckl", (byte)13)==3179);
        Assert.that(hash("ebcklm", (byte)13)==3235);
        Assert.that(hash("ebcklme", (byte)13)==6438);
        Assert.that(hash("ebcklmen", (byte)13)==1062);
        Assert.that(hash("ebcklmenq", (byte)13)==3527);
        Assert.that(hash("", (byte)16)==0);
        Assert.that(hash("n", (byte)16)==65003);
        Assert.that(hash("nd", (byte)16)==54193);
        Assert.that(hash("ndf", (byte)16)==4953);
        Assert.that(hash("ndfl", (byte)16)==58201);
        Assert.that(hash("ndfla", (byte)16)==34830);
        Assert.that(hash("ndflal", (byte)16)==36910);
        Assert.that(hash("ndflale", (byte)16)==34586);
        Assert.that(hash("ndflalem", (byte)16)==37658);
        Assert.that(hash("ndflaleme", (byte)16)==45559);
        Assert.that(hash("ol2j34lj", (byte)10)==318);
        Assert.that(hash("asdfas23", (byte)10)==503);
        Assert.that(hash("9um3o34fd", (byte)10)==758);
        Assert.that(hash("a234d", (byte)10)==281);
        Assert.that(hash("a3f", (byte)10)==767);
        Assert.that(hash("3nja9", (byte)10)==581);
        Assert.that(hash("2459345938032343", (byte)10)==146);
        Assert.that(hash("7777a88a8a8a8", (byte)10)==342);
        Assert.that(hash("asdfjklkj3k", (byte)10)==861);
        Assert.that(hash("adfk32l", (byte)10)==1011);
        Assert.that(hash("zzzzzzzzzzz", (byte)10)==944);

        //2. Case tests.
        Assert.that(hash("3nja9", (byte)10)==581);
        Assert.that(hash("3NJA9", (byte)10)==581);
        Assert.that(hash("3nJa9", (byte)10)==581);
    }
}

 



[1] Generally speaking, it is better to propagate keywords than full filenames.  First, it generally saves bandwidth when propagating tables.  For example, the files “good book” and “bad book” can be compressed into the keyword set {“good”, “bad”, “book”}.  Secondly, it makes query processing easier, since hosts don’t need to check whether a query is a substring of each filename.  Finally, this scheme allows hosts greater flexibility in query responses.  For example, it allows a host to match queries with the contents of a file (not just its name) if it desires.

[2] Logically OR’ing bitmaps is analogous to unioning sets of keywords.

[3] This is similar to the improved horizon counting algorithm described by Rohrs [Roh].

[4] Intuitively, one expects more files available at N+1 hops than N hops, since a query of TTL N+1 can search a much larger horizon.  Hence smaller hops values should be much less common than large hop values in our routing tables.  Statistical compression schemes such as Huffman encodings take advantage of precisely this.

[5] An original draft of our protocol had separate sparse and dense update messages for use with different kinds of route tables, as suggested by Prinkey.  Now such distinctions are not needed; the compression scheme relieves us of this burden.

[6] A study of 300 connections over several days showed that the average uptime of a connection was 24 minutes.   The average uptime of a connection surviving 1 minute was 88 minutes.  The average uptime of a connection surviving 5 minutes was 133 minutes—more than two hours.  So the longer a connection is up, the longer one can expect it to remain up.

[7] An alternative is for RESET messages to initialize the receiver’s route table to be a completely full route table, i.e., one for which all elements are 0.  Then no special rules are needed when forwarding queries.  The problem is that the client must not propagate the RESET’ed route table along other connections until the last PATCH message has been received.  Doing so would quickly flood the network with full tables, making query routing useless.

[8] Flow control schemes should make sure that route table messages are never dropped.

[9] Revision 0.1 of our proposal used GZIP instead of ZLIB.  GZIP uses ZLIB internally but also adds a header and checksum.  Hence using ZLIB saves a few bytes.  We also found Java’s support for ZLIB to be better than its support for GZIP.  C programmers can use the zlib library to create and read ZLIB data.  Revision 0.3 of this document mistakenly referred to the “DEFLATE” algorithm instead of ZLIB.  While the two are related, they are not identical.

[10] All “modern” Gnutella clients use marked GUIDs for identification purposes.  The ninth byte of these GUIDs (guid[8]) is all 1’s to distinguish them from the Gnutella 0.56 client.  The last byte is a version number to allow Gnutella protocol versioning.  The current version number is 0.  The remaining bytes are typically random, though this is not required.



[Bil] Bildson, Greg.  Gnutella: Setting the Record Straight.  http://www.limewire.com/index.jsp/setting_record.

[Lan] Langley, Adam.  “Freenet.”  Peer-to-Peer: Harnessing the Power of Disruptive Technologies.  Ed. Andy Oram.  O’Reilly, 2001.  123-132.

[Pri] Prinkey, Michael T.  An Efficient Scheme for Query Processing on Peer-to-Peer Networks.  http://aeolusres.homestead.com/files/index.html.

[Roh] Rohrs, Christopher.  Improving Gnutella’s Ping/Pong Scheme.  http://www.limewire.com/index.jsp/pingpong.

 

 

 
