ALGORITHMS FOR PAGERANK SENSITIVITY DISSERTATION

and it is the latter approach that is most appropriate for the PageRank context. One of the simplest techniques is called compensated summation. The compensated summation algorithm requires storing one extra floating-point value to accumulate an approximation of the error in the current summation. As given in Higham [2002, section 4.3], the following algorithm is due to Kahan.

    sumx = 0; err = 0;
    for i=1:numel(x)
        temp = sumx;          % save the current sum
        y = x(i) + err;       % add the error to the summand
        sumx = temp + y;      % increase the sum by the summand and the error
        err = (temp - sumx);  % compute the exact difference after adding y
        err = err + y;        % err should be -y, add y to find the true error
    end

Instead of nε error, this computation has error 2ε + O(nε²). For most conceivable PageRank problems, this accuracy should be sufficient.

This difference is not academic. If we call the simple summation algorithm ssum and the compensated summation algorithm csum, then the difference appears even for 10^7 summands.

    rand('state',1);     % ensure repeatable results
    x = rand(1e7,1);
    y = x./ssum(x);      % normalize for comparison
    z = x./csum(x);
    ssum(y)              % 1.0000000000000302
    csum(y)              % 0.9999999999999633
    ssum(z)              % 1.0000000000000664
    csum(z)              % 1.0000000000000000

The problem with ssum is that it cannot reproduce its own normalization! Normalization is an important part of a PageRank code (as we'll see in a moment). We therefore need compensated summation for these computations.

Wills and Ipsen [2009] originally highlighted the need for compensated summation in their discussion of the stability of the power method for PageRank. The LAW codes for PageRank [Vigna et al., 2008] implement this feature as well. We have not checked for any further history on this aspect of PageRank computation. Quite amazingly, the problem of accurate summation is still studied [Zhu and Hayes, 2009].
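To make the comparison concrete, here is a sketch of the same experiment in Python (the listings above are MATLAB). The function names csum and ssum mirror the text; the data size and seed are illustrative assumptions, not the original experiment.

```python
import math
import random

def csum(x):
    """Compensated (Kahan) summation, following the listing above."""
    sumx = 0.0
    err = 0.0
    for xi in x:
        temp = sumx          # save the current sum
        y = xi + err         # add the accumulated error to the summand
        sumx = temp + y      # increase the sum by the summand and the error
        err = (temp - sumx)  # exact difference after adding y (should be -y)
        err = err + y        # add y back to recover the true rounding error
    return sumx

def ssum(x):
    """Simple left-to-right summation, for comparison."""
    sumx = 0.0
    for xi in x:
        sumx = sumx + xi
    return sumx

random.seed(1)                              # ensure repeatable results
x = [random.random() for _ in range(10**6)]
z = [xi / csum(x) for xi in x]              # normalize by the compensated sum
```

With this setup, csum(z) agrees with 1 to near machine precision, while ssum(z) typically drifts by many units in the last place, illustrating the "cannot reproduce its own normalization" point.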
2.4.2 The power method

For an eigenvalue problem Ax = λx, the power method is a classic algorithm to compute the eigenvector with largest eigenvalue (in magnitude) [Golub and van Loan, 1996]. Given an almost arbitrary x(0),^2 then

    x(k+1) = ρ(k+1) A x(k),    ρ(k+1) = 1 / ‖A x(k)‖        (2.17)

converges to the eigenvector with maximum eigenvalue when the largest eigenvalue (in magnitude) of A is unique and real. The scalar quantities ρ normalize the iterates so that they do not grow arbitrarily large.

^2 Theory requires us to state that x(0) must not be orthogonal to the desired eigenvector. Experience with floating-point approximations tells us exact orthogonality is not required. This is one of the rare cases when round-off actually helps. An exactly orthogonal vector quickly loses its orthogonality during the floating-point computation. After that happens, the power method succeeds.
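As a sketch, iteration (2.17) can be written in a few lines of Python. The infinity norm is used for the normalization ρ and the 2-by-2 matrix is an illustrative assumption; any vector norm and any matrix with a unique real dominant eigenvalue would do.

```python
def power_method(A, x0, maxit=1000, tol=1e-12):
    """Sketch of (2.17): x(k+1) = rho(k+1) * A x(k), rho(k+1) = 1/||A x(k)||.
    A is a dense square matrix as a list of rows; x0 is the starting vector."""
    n = len(A)
    x = list(x0)
    for _ in range(maxit):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # A x
        rho = 1.0 / max(abs(v) for v in y)          # 1/||A x|| (infinity norm)
        xnew = [rho * v for v in y]                 # normalized iterate
        if max(abs(a - b) for a, b in zip(xnew, x)) < tol:
            return xnew                             # iterates have converged
        x = xnew
    return x

# Dominant eigenvalue 3 with eigenvector proportional to (1, 1);
# the starting vector (1, 0) is not orthogonal to it.
A = [[2.0, 1.0], [1.0, 2.0]]
x = power_method(A, [1.0, 0.0])
```

Because the iterate is renormalized at every step, the entries of x stay bounded even though the dominant eigenvalue exceeds 1 in magnitude.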