ALGORITHMS FOR PAGERANK SENSITIVITY DISSERTATION

Text from PDF Page: 062

3 · The PageRank Derivative

3.1 Formulations

Given that the introduction to the chapter mentions that the derivative of the PageRank vector exists, we first address the burning question: what is it? Recall figure 2.2 and all the different ways of looking at the PageRank problem from the previous chapter:

[Figure 2.2, reproduced: the formulations of the PageRank problem. Theory: graph or web graph, substochastic matrix, other transformations, PseudoRank, PageRank. Algorithms: strongly preferential PageRank, weakly preferential PageRank, sink preferential PageRank, eigensystems, linear systems.]

Must we compute a derivative for all of these formulations? As the top of the figure hints, only a few formulations are theoretically relevant. The differences among strongly, weakly, and sink preferential PageRank are irrelevant for the derivative: all that matters is P.³ With P from any of these variations, the derivative vector satisfies the same formulation in terms of P.⁴ Thus, it suffices to study the derivative of the core PageRank problem alone. The core problem is still either a linear system or an eigensystem, so that choice might matter. Though, as shown shortly, it does not.

The core PageRank problem has enough structure to support the following lemma. This lemma about the derivative is important because it uses only properties of the PageRank problem and nothing else. It could tell us if a formulation were wrong, for instance.

Lemma 6. Let x(α) be the solution of a PageRank problem (problem 1) for P, v, and α. Then the derivative of PageRank with respect to α, denoted x′(α), sums to 0.

Proof. By definition,

    x′(α) = lim_{ω→0} (x(α + ω) − x(α)) / ω.

Because the limit of each component exists, we can move the summation inside the limit operation:

    eᵀ x′(α) = lim_{ω→0} eᵀ (x(α + ω) − x(α)) / ω.

But x(α + ω) and x(α) are both distribution vectors, which implies eᵀ x(α + ω) = eᵀ x(α) = 1. The difference of these scalars is eᵀ (x(α + ω) − x(α)) = 0. Consequently, the derivative sums to 0. ∎

³ The matrix P is the fully column-stochastic matrix in the definition of PageRank.
⁴ This statement should not be surprising. All the variations converted P̄ to P and did not involve α. The conversion has no effect on differentiating with respect to α.
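A short numerical sketch can illustrate the lemma. The 4-node column-stochastic matrix P and uniform teleportation vector v below are hypothetical stand-ins for a real web graph; PageRank is solved via the linear-system formulation x = αPx + (1 − α)v. The closed form (I − αP)x′ = Px − v used for comparison is obtained here by differentiating that linear system with respect to α; it is an assumption of this sketch, not a statement from the text above.

```python
import numpy as np

# Hypothetical 4-node column-stochastic matrix P and uniform teleportation
# vector v -- illustrative stand-ins, not data from the dissertation.
P = np.array([
    [0.0, 0.5, 0.0, 0.25],
    [0.5, 0.0, 0.0, 0.25],
    [0.5, 0.5, 0.0, 0.25],
    [0.0, 0.0, 1.0, 0.25],
])
n = P.shape[0]
v = np.full(n, 1.0 / n)

def pagerank(alpha):
    """Core PageRank via the linear-system formulation (I - alpha P) x = (1 - alpha) v."""
    return np.linalg.solve(np.eye(n) - alpha * P, (1 - alpha) * v)

alpha, omega = 0.85, 1e-6

# Finite-difference approximation to x'(alpha), mirroring the limit in the proof.
dx_fd = (pagerank(alpha + omega) - pagerank(alpha)) / omega

# Closed form from differentiating the linear system: (I - alpha P) x' = P x - v.
dx = np.linalg.solve(np.eye(n) - alpha * P, P @ pagerank(alpha) - v)

print(dx_fd.sum(), dx.sum())  # both are numerically 0, as the lemma asserts
```

Both derivative estimates agree and sum to 0 up to rounding, exactly as the proof predicts: since every x(α) sums to 1, the difference quotient sums to 0 for any ω.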


Original File Name Searched:

gleich.pdf
