Thus, we have been invited to advertise in their QS Top University Guide 2013 (with discounts if we opt to advertise in more than one language) and in other publications, to attend seminars and conferences (with registration fees, of course), and so on.

Can we rely primarily on reputations to decide ranks? Academics all over the world are asked their opinion of the top institutions globally and in their own country. The chances of a US professor including an IIT's name are quite slim. The number of respondents is proportional to the number of institutions available for selection in that country, so the responses are heavily weighted in favour of developed countries. Respondents are not asked to give their inputs for each of the listed universities (it may be impractical to do so, as there are a large number of them). Instead, each respondent is asked to give a list of 5-10 universities he or she thinks are well known globally and within their country. This method perpetuates the existing ranks.
Four, consider the categories CF (citations per faculty) and citations. QS divides the total number of citations over the last five years by the number of faculty in the most recent year. IIT-G had 323 faculty members in 2013, but only 220 in 2009, so its numbers clearly cannot be compared with those of institutions like Cambridge and Oxford, where faculty numbers are almost constant. Further, since a five-year total is used, one or two star papers can make a huge difference to the numbers. For example, the review paper "The Hallmarks of Cancer", authored by two professors from the University of California, San Francisco, and the Massachusetts Institute of Technology, has about 10,000 citations. This paper alone will have boosted the CF figure of both these institutions significantly.

THE uses a different method for citations and probably does not remove self-citations. This could explain the high scores of Panjab and IIT-G vis-à-vis IIT-D. Panjab University's high energy physics group (and, to a lesser extent, IIT-G's) is part of global experiments at CERN and Fermilab, and papers from those projects have very high citations. Thus, a small group of international collaborations is providing a high score. Isn't the median number of citations per faculty a better measure than the average? (There are other issues; for example, citations in the sciences are usually much more than