ALGORITHMS FOR PAGERANK SENSITIVITY DISSERTATION


Text from PDF Page: 065

which follows because (I − αP̄) is an M-matrix (with a non-negative inverse) and P̄y(α) is positive. Second, PseudoRank has the property that the derivative of PseudoRank is another PseudoRank system, albeit for a possibly different value of θ.⁹ This second property is exploitable and is a component in an algorithm to compute the PageRank derivative given in the next section.

3.2 algorithms

So far, PageRank has been differentiated to construct the PageRank derivative algebraically. The next step is to analyze these derivatives, but experimentation is a powerful technique to suggest analysis. To experiment with the derivative requires computing a derivative, that is, an algorithm. Although the derivative vector in (3.1) is the solution of a linear system, can we solve the system more efficiently than a standard problem? It seems likely. After all, PageRank solves

    (I − αP)x(α) = (1 − α)v,

whereas the derivative solves

    (I − αP)x′(α) = Px(α) − v.    (3.1)

These systems differ only in the right-hand side. Surely something efficient must be possible. It is. As shown towards the end of this section, a strongly personalized PageRank solver suffices to compute the derivative. Despite the similarities of (3.1) to PageRank, an algorithm with this property is not entirely trivial, unless the goal is merely to approximate the PageRank derivative, in which case consider the following.

a trivial idea

A first idea for computing the derivative vector is a central finite difference method,

    x′(α) ≈ (x(α + ε) − x(α − ε)) / (2ε)

for small ε. While attractive for its simplicity, this method requires computing a PageRank vector for a value of α larger than the value of α at the derivative. Additionally, it yields only an approximation to the derivative vector. A first-order backward finite difference formula,

    x′(α) ≈ (x(α) − x(α − ε)) / ε,

avoids computing PageRank at a larger value of α, but is less accurate, and dangerously so, when using inexact solutions x(α). Rather simple analysis shows that using the first-order difference is unwise unless the PageRank problems are solved accurately.¹⁰

⁹ If P̄ is taken to be a stochastic matrix instead, then θ does not change.

¹⁰ If the PageRank problems are solved to a tolerance of γ, then each computed vector is roughly x(α) + γe for an error vector e. Both the central and backward differences yield an error of γ/ε, and this suggests using a large ε. To get an accurate solution with a large ε requires central differencing.
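The finite-difference discussion above is easy to check numerically. Below is a minimal Python sketch, not from the dissertation: the 4-node column-stochastic matrix P, teleportation vector v, and the values of α and ε are illustrative assumptions, and dense solves stand in for a real PageRank solver. It solves (3.1) directly and compares the result with the central and backward difference approximations.

import numpy as np

def pagerank(P, v, alpha):
    # Solve (I - alpha P) x(alpha) = (1 - alpha) v for the PageRank vector.
    n = P.shape[0]
    return np.linalg.solve(np.eye(n) - alpha * P, (1 - alpha) * v)

def pagerank_derivative(P, v, alpha):
    # Solve (I - alpha P) x'(alpha) = P x(alpha) - v, i.e. equation (3.1).
    n = P.shape[0]
    x = pagerank(P, v, alpha)
    return np.linalg.solve(np.eye(n) - alpha * P, P @ x - v)

# Hypothetical 4-node graph; columns of P sum to 1 (column-stochastic).
P = np.array([[0.0, 0.5, 0.5, 0.0],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.0, 0.0, 0.5],
              [1/3, 0.5, 0.5, 0.0]])
v = np.full(4, 0.25)          # uniform teleportation vector
alpha, eps = 0.85, 1e-4

exact    = pagerank_derivative(P, v, alpha)
central  = (pagerank(P, v, alpha + eps) - pagerank(P, v, alpha - eps)) / (2 * eps)
backward = (pagerank(P, v, alpha) - pagerank(P, v, alpha - eps)) / eps

print("exact    :", exact)
print("central  :", central)    # error O(eps^2) when the solves are exact
print("backward :", backward)   # error O(eps) when the solves are exact

With exact solves the central difference tracks the true derivative much more closely than the backward difference, which is consistent with footnote 10: once the PageRank solves themselves carry error γ, both differences add an error of roughly γ/ε, so only the central formula tolerates the larger ε that keeps that term small.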
