This difference is not academic. If we call the simple summation algorithm ssum and the compensated summation algorithm csum, then the difference appears even for 10^7 summands.

    rand('state',1);   % ensure repeatable results
    x = rand(1e7,1);
    y = x./ssum(x);    % normalize for comparison
    z = x./csum(x);
    ssum(y)            % 1.0000000000000302
    csum(y)            % 0.9999999999999633
    ssum(z)            % 1.0000000000000664
    csum(z)            % 1.0000000000000000

The problem with ssum is that it cannot reproduce its own normalization! Normalization is an important part of a PageRank code (as we'll see in a moment). We therefore need compensated summation for these computations.

Wills and Ipsen [2009] originally highlighted the need for compensated summation in their discussion of the stability of the power method for PageRank. The LAW codes for PageRank [Vigna et al., 2008] implement this feature as well. We have not checked for any further history on this aspect of PageRank computation. Quite amazingly, the problem of accurate summation is still studied [Zhu and Hayes, 2009].

2.4.2 The power method

For an eigenvalue problem Ax = λx, the power method is a classic algorithm to compute the eigenvector with the largest eigenvalue (in magnitude) [Golub and van Loan, 1996]. Given an almost arbitrary x^(0),[2] the iteration

    x^(k+1) = ρ^(k+1) A x^(k),   ρ^(k+1) = 1/‖A x^(k)‖        (2.17)

converges to the eigenvector with maximum eigenvalue when the largest eigenvalue (in magnitude) of A is unique and real. The scalar quantities ρ normalize the iterates so that they do not grow arbitrarily large. Consider the power method on the PageRank eigensystem Mx = x.

[2] Theory requires us to state that x^(0) must not be orthogonal to the desired eigenvector. Experience with floating-point approximations tells us exact orthogonality is not required. This is one of the rare cases when round-off actually helps: an exactly orthogonal vector quickly loses its orthogonality during the floating-point computation. After that happens, the power method succeeds.
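As a concrete illustration of iteration (2.17), here is a minimal sketch in Python (the thesis examples use MATLAB); the function name power_method and the small symmetric test matrix are invented for illustration:

```python
def power_method(A, x0, max_iter=200):
    """Power method (2.17): x <- rho * A*x with rho = 1/||A*x||,
    here using the 1-norm for scaling. Converges to the dominant
    eigenvector when the largest eigenvalue (in magnitude) of A
    is unique and real."""
    n = len(x0)
    x = x0[:]
    for _ in range(max_iter):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]  # y = A*x
        rho = 1.0 / sum(abs(yi) for yi in y)                           # 1/||A*x||_1
        x = [rho * yi for yi in y]
    return x

# Dominant eigenpair of this matrix: eigenvalue 3 with eigenvector (1, 1).
A = [[2.0, 1.0],
     [1.0, 2.0]]
x = power_method(A, [1.0, 0.0])  # converges to [0.5, 0.5] under 1-norm scaling
```

The component along the subdominant eigenvector (eigenvalue 1) shrinks by a factor 1/3 per step, so 200 iterations reduce it far below machine precision.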
The largest eigenvalue is unique and equal to 1, so normalization at each step is not required. Eliminating that step and expanding M (with (2.2)) yields the iteration

    x^(k+1) = α P x^(k) + (1 − α) v e^T x^(k).        (2.18)

Recall that e^T x = 1. A quick computation shows that when e^T x^(0) = 1, then e^T x^(k) = 1 for all iterations, and thus

    x^(k+1) = α P x^(k) + (1 − α) v.        (2.19)
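Putting the pieces together, the following Python sketch implements iteration (2.19) and uses compensated (Kahan) summation, a stand-in for the csum routine above, to check that the iterates keep e^T x = 1. The tiny three-node graph, the names ksum and pagerank, and the default α = 0.85 are illustrative assumptions, not the thesis's code:

```python
def ksum(xs):
    """Compensated (Kahan) summation: carry the rounding error of
    each addition in a correction term c (a stand-in for csum)."""
    s = c = 0.0
    for x in xs:
        y = x - c
        t = s + y
        c = (t - s) - y
        s = t
    return s

def pagerank(P, v, alpha=0.85, tol=1e-12, max_iter=1000):
    """Iteration (2.19): x <- alpha*P*x + (1-alpha)*v.
    P is column-stochastic and sum(v) = 1, so e'*x stays 1 and no
    explicit normalization step is needed."""
    n = len(v)
    x = v[:]
    for _ in range(max_iter):
        y = [alpha * ksum(P[i][j] * x[j] for j in range(n)) + (1 - alpha) * v[i]
             for i in range(n)]
        if sum(abs(y[i] - x[i]) for i in range(n)) < tol:
            return y
        x = y
    return x

# Tiny 3-node web graph; column j of P spreads node j's rank over its out-links.
P = [[0.0, 0.5, 1.0],
     [0.5, 0.0, 0.0],
     [0.5, 0.5, 0.0]]
v = [1.0/3, 1.0/3, 1.0/3]
x = pagerank(P, v)
drift = abs(ksum(x) - 1.0)  # e'*x = 1 is preserved up to rounding
```

Because e^T x^(k) = 1 is maintained by the iteration itself, the compensated sum of the converged vector should differ from 1 only at the level of round-off, which is exactly the reproducibility property the ssum/csum experiment above demonstrates.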