# Ranking Algorithms¶

Let’s assume that you ran a set X of community discovery algorithms on a given graph G and that, for each of the obtained clusterings, you computed a set Y of fitness scores.

• Is there a way to rank the obtained clusterings by their quality as expressed by Y?
• Is it possible to validate the statistical significance of the obtained ranking?
• Can we do the same while comparing different clusterings (e.g., using NMI, NF1, ARI, AMI…)?

Don’t worry, cdlib has you covered!

(Yes, we are aware that community detection is an ill-posed problem for which no free lunch can be expected… however, we are not aiming at a general ranking here!)

## Ranking by Fitness Scores¶

class FitnessRanking(graph: networkx.Graph, partitions: list)
bonferroni_post_hoc() → list

Performs a Bonferroni-Dunn post-hoc test using the pivot quantities obtained by a ranking test. Tests the hypothesis that the ranking of the control method (the best-ranked clustering) differs from that of each of the other methods.

Returns: a list of named tuples reporting the pairwise statistically significant comparisons between the best-ranked clustering and the others (in terms of z-value, p-value, and adjusted p-value)
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.demon(g, 0.25)
>>> coms3 = algorithms.label_propagation(g)
>>> coms4 = algorithms.angel(g, 0.6)
>>> rk = evaluation.FitnessRanking(g, [coms2, coms, coms3, coms4])
>>> rk.rank(evaluation.fraction_over_median_degree)
>>> rk.rank(evaluation.edges_inside)
>>> rk.rank(evaluation.cut_ratio)
>>> rk.rank(evaluation.erdos_renyi_modularity)
>>> rk.rank(evaluation.newman_girvan_modularity)
>>> rk.rank(evaluation.modularity_density)
>>> rnk, p_value = rk.friedman_ranking()
>>> pc = rk.bonferroni_post_hoc()


O.J. Dunn, Multiple comparisons among means, Journal of the American Statistical Association 56 (1961) 52–64.
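For intuition, the statistic behind a Bonferroni-Dunn comparison can be sketched in a few lines of plain Python. The helper name `bonferroni_dunn`, the rank values, and the named-tuple fields below are illustrative assumptions, not cdlib's actual internals: the difference between two average Friedman ranks is divided by the standard error sqrt(k(k+1)/6N), and each resulting p-value is multiplied by the number of comparisons.

```python
# Illustrative sketch of a Bonferroni-Dunn comparison against a control method.
# The helper name, rank values, and named-tuple fields are hypothetical;
# cdlib derives the average ranks from its Friedman ranking step.
import math
from collections import namedtuple

from scipy.stats import norm

Comparison = namedtuple("Comparison", ["pair", "z_value", "p_value", "adj_p_value"])

def bonferroni_dunn(avg_ranks: dict, n_blocks: int) -> list:
    """Compare the best-ranked method (the control) with every other method."""
    k = len(avg_ranks)
    ordered = sorted(avg_ranks, key=avg_ranks.get)  # lowest average rank = best
    control, others = ordered[0], ordered[1:]
    # Standard error of a rank difference under the Friedman setting.
    se = math.sqrt(k * (k + 1) / (6.0 * n_blocks))
    results = []
    for name in others:
        z = (avg_ranks[name] - avg_ranks[control]) / se
        p = 2 * norm.sf(abs(z))           # two-sided p-value
        adj_p = min(1.0, p * (k - 1))     # Bonferroni adjustment over k-1 tests
        results.append(Comparison((control, name), z, p, adj_p))
    return results

# Hypothetical average ranks over n_blocks = 6 fitness functions.
ranks = {"louvain": 1.0, "label_prop": 2.25, "demon": 3.0, "angel": 3.75}
for comp in bonferroni_dunn(ranks, n_blocks=6):
    print(comp)
```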

friedman_ranking() → Tuple[list, float]

Performs a Friedman ranking test. Tests the hypothesis that, in a set of k dependent sample groups (where k >= 2), at least two of the groups represent populations with different median values.

Returns: a tuple whose first element is a dictionary assigning a rank to each Clustering object, while the second is the p-value associated with the ranking.
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.demon(g, 0.25)
>>> coms3 = algorithms.label_propagation(g)
>>> coms4 = algorithms.angel(g, 0.6)
>>> rk = evaluation.FitnessRanking(g, [coms2, coms, coms3, coms4])
>>> rk.rank(evaluation.fraction_over_median_degree)
>>> rk.rank(evaluation.edges_inside)
>>> rk.rank(evaluation.cut_ratio)
>>> rk.rank(evaluation.erdos_renyi_modularity)
>>> rk.rank(evaluation.newman_girvan_modularity)
>>> rk.rank(evaluation.modularity_density)
>>> rnk, p_value = rk.friedman_ranking()

1. M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, Journal of the American Statistical Association 32 (1937) 674–701.
2. D.J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press, 2003, Test 25: The Friedman Two-Way Analysis of Variance by Ranks.
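The statistics underlying this test can be sketched with SciPy on a hypothetical score matrix (in cdlib the matrix is filled in by repeated `rank()` calls; the numbers below are made up for illustration):

```python
# Sketch of the statistics behind a Friedman ranking, on a hypothetical
# score matrix: rows are fitness functions (blocks), columns are clusterings.
import numpy as np
from scipy.stats import friedmanchisquare, rankdata

scores = np.array([
    [0.42, 0.38, 0.35, 0.30],
    [0.55, 0.50, 0.40, 0.45],
    [0.60, 0.58, 0.52, 0.49],
    [0.47, 0.44, 0.46, 0.41],
])

# Average rank of each clustering across fitness functions
# (scores are negated so that higher scores get better, i.e. lower, ranks).
avg_ranks = rankdata(-scores, axis=1).mean(axis=0)

# Friedman test: do at least two of the k dependent groups come from
# populations with different medians?
stat, p_value = friedmanchisquare(*scores.T)
print(avg_ranks, p_value)
```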
rank(scoring_function: Callable[[networkx.Graph, object], object]) → Tuple[str, dict]

Applies the specified scoring function to all the Clustering objects for which a ranking is required.

Parameters: scoring_function – a fitness function from cdlib.evaluation

Returns: a tuple whose first element is the scoring_function name, while the second is a dictionary having the clustering name as key and the computed score as value.
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.demon(g, 0.25)
>>> coms3 = algorithms.label_propagation(g)
>>> coms4 = algorithms.angel(g, 0.6)
>>> rk = evaluation.FitnessRanking(g, [coms2, coms, coms3, coms4])
>>> rk.rank(evaluation.fraction_over_median_degree)

topsis() → Tuple[list, None]

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method. TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution (PIS) and the longest geometric distance from the negative ideal solution (NIS).

Returns: a tuple whose first element is the ranking dictionary assigning a TOPSIS score to each Clustering object, while the second is None (to maintain coherence with friedman_ranking).
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.demon(g, 0.25)
>>> coms3 = algorithms.label_propagation(g)
>>> coms4 = algorithms.angel(g, 0.6)
>>> rk = evaluation.FitnessRanking(g, [coms2, coms, coms3, coms4])
>>> rk.rank(evaluation.fraction_over_median_degree)
>>> rk.rank(evaluation.edges_inside)
>>> rk.rank(evaluation.cut_ratio)
>>> rk.rank(evaluation.erdos_renyi_modularity)
>>> rk.rank(evaluation.newman_girvan_modularity)
>>> rk.rank(evaluation.modularity_density)
>>> rnk, _ = rk.topsis()

1. Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag.
2. Yoon, K. (1987). “A reconciliation among discrete compromise situations”. Journal of the Operational Research Society. 38 (3): 277–286. doi:10.1057/jors.1987.44.
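The TOPSIS procedure described above can be sketched in a few lines of NumPy over a (clusterings × criteria) score matrix. The matrix, the `topsis_scores` helper, and the benefit flags are made-up illustrations, not cdlib output:

```python
# Minimal TOPSIS sketch: vector-normalise each criterion, find the positive
# and negative ideal solutions, and score alternatives by relative closeness.
import numpy as np

def topsis_scores(scores: np.ndarray, benefit: np.ndarray) -> np.ndarray:
    """Rows are alternatives, columns are criteria; benefit[j] is True
    when a higher value is better for criterion j."""
    normed = scores / np.linalg.norm(scores, axis=0)                 # normalise columns
    pis = np.where(benefit, normed.max(axis=0), normed.min(axis=0))  # positive ideal
    nis = np.where(benefit, normed.min(axis=0), normed.max(axis=0))  # negative ideal
    d_pos = np.linalg.norm(normed - pis, axis=1)                     # distance to PIS
    d_neg = np.linalg.norm(normed - nis, axis=1)                     # distance to NIS
    return d_neg / (d_pos + d_neg)                                   # higher = better

scores = np.array([
    [0.42, 0.55, 0.10],
    [0.38, 0.50, 0.20],
    [0.30, 0.45, 0.35],
])
benefit = np.array([True, True, False])  # last column: lower is better (cut_ratio-like)
print(topsis_scores(scores, benefit))
```

The first alternative dominates on every criterion here, so it coincides with the positive ideal solution and gets the maximum closeness score of 1.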

### Methods¶

| Method | Description |
| --- | --- |
| FitnessRanking.rank(scoring_function, …) | Applies the specified scoring function to all the Clustering objects for which a ranking is required. |
| FitnessRanking.topsis() | Ranks clusterings with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a multi-criteria decision analysis method. |
| FitnessRanking.friedman_ranking() | Performs a Friedman ranking test. |
| FitnessRanking.bonferroni_post_hoc() | Performs a Bonferroni-Dunn post-hoc test using the pivot quantities obtained by a ranking test. |

## Ranking by Clustering Similarity¶

class ComparisonRanking(partitions: list)
bonferroni_post_hoc() → list

Performs a Bonferroni-Dunn post-hoc test using the pivot quantities obtained by a ranking test. Tests the hypothesis that the ranking of the control method (the best-ranked clustering) differs from that of each of the other methods.

Returns: a list of named tuples reporting the pairwise statistically significant comparisons between the best-ranked clustering and the others (in terms of z-value, p-value, and adjusted p-value)
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.kclique(g, 2)
>>> coms3 = algorithms.label_propagation(g)
>>> rk = evaluation.ComparisonRanking([coms, coms2, coms3])
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_LFK)
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_MGH)
>>> rk.rank(evaluation.omega)
>>> rnk, p_value = rk.friedman_ranking()
>>> pc = rk.bonferroni_post_hoc()


O.J. Dunn, Multiple comparisons among means, Journal of the American Statistical Association 56 (1961) 52–64.

friedman_ranking() → Tuple[list, float]

Performs a Friedman ranking test. Tests the hypothesis that, in a set of k dependent sample groups (where k >= 2), at least two of the groups represent populations with different median values.

Returns: a tuple whose first element is a dictionary assigning a rank to each Clustering object, while the second is the p-value associated with the ranking.
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.kclique(g, 2)
>>> coms3 = algorithms.label_propagation(g)
>>> rk = evaluation.ComparisonRanking([coms, coms2, coms3])
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_LFK)
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_MGH)
>>> rk.rank(evaluation.omega)
>>> rnk, p_value = rk.friedman_ranking()

1. M. Friedman, The use of ranks to avoid the assumption of normality implicit in the analysis of variance, Journal of the American Statistical Association 32 (1937) 674–701.
2. D.J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures. CRC Press, 2003, Test 25: The Friedman Two-Way Analysis of Variance by Ranks.
rank(comparison_function: Callable[[object, object], object])

Applies the specified comparison function to all the Clustering objects for which a ranking is required.

Parameters: comparison_function – a clustering similarity function from cdlib.evaluation

Returns: a tuple whose first element is the comparison_function name, while the second is a dictionary having the clustering name as key and the computed score as value.
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.kclique(g, 2)
>>> coms3 = algorithms.label_propagation(g)
>>> rk = evaluation.ComparisonRanking([coms, coms2, coms3])
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_LFK)

topsis() → Tuple[list, None]

The Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS) is a multi-criteria decision analysis method. TOPSIS is based on the concept that the chosen alternative should have the shortest geometric distance from the positive ideal solution (PIS) and the longest geometric distance from the negative ideal solution (NIS).

Returns: a tuple whose first element is the ranking dictionary assigning a TOPSIS score to each Clustering object, while the second is None (to maintain coherence with friedman_ranking).
>>> import networkx as nx
>>> from cdlib import evaluation
>>> from cdlib import algorithms
>>> g = nx.karate_club_graph()
>>> coms = algorithms.louvain(g)
>>> coms2 = algorithms.kclique(g, 2)
>>> coms3 = algorithms.label_propagation(g)
>>> rk = evaluation.ComparisonRanking([coms, coms2, coms3])
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_LFK)
>>> rk.rank(evaluation.overlapping_normalized_mutual_information_MGH)
>>> rk.rank(evaluation.omega)
>>> rnk, _ = rk.topsis()

1. Hwang, C.L.; Yoon, K. (1981). Multiple Attribute Decision Making: Methods and Applications. New York: Springer-Verlag.
2. Yoon, K. (1987). “A reconciliation among discrete compromise situations”. Journal of the Operational Research Society. 38 (3): 277–286. doi:10.1057/jors.1987.44.

### Methods¶

| Method | Description |
| --- | --- |
| ComparisonRanking.rank(comparison_function, …) | Applies the specified comparison function to all the Clustering objects for which a ranking is required. |
| ComparisonRanking.topsis() | Ranks clusterings with the Technique for Order of Preference by Similarity to Ideal Solution (TOPSIS), a multi-criteria decision analysis method. |
| ComparisonRanking.friedman_ranking() | Performs a Friedman ranking test. |
| ComparisonRanking.bonferroni_post_hoc() | Performs a Bonferroni-Dunn post-hoc test using the pivot quantities obtained by a ranking test. |