World University Rankings: Are They Necessary?


The Quacquarelli Symonds (QS) ranking of world universities has just been released for the year 2022. There is a lot of jubilation in India at the Indian Institute of Science (IISc) being ranked the best research university in the world. I am delighted that the IISc is ranked at the top, because one can now look at the rankings critically without being accused of sour grapes.

So, we should take a hard look at the ranking process and ask ourselves how justified the IISc's ranking is compared to institutions ranked at par with or below it in research. These include well-known education and research centers such as MIT, Harvard University, Caltech, Stanford University, the University of California, Berkeley, and the University of Cambridge. It is therefore essential to analyze the validity of research rankings, and not to place so much importance on the IISc's ranking that we become complacent about our research performance.

It is quite ridiculous that the performance of an entire institute can be quantified and reduced to a single number, much like an individual's abilities being reduced to an IQ score. After all, education and research involve the interplay of creativity, innovation, and mentorship, and are far too complex to boil down to a single measure of performance. It is human nature to compare and quantify, and as long as people are willing to value rankings, there will be companies to do the job. But it is up to decision-makers to look at rankings critically and not to make important choices on the basis of rankings alone.

The QS ranking system works by considering six criteria: 1) academic reputation, 2) reputation as perceived by employers, 3) faculty-student ratio, 4) citations per faculty, 5) ratio of international faculty to national faculty, and 6) ratio of international students to national students.

The first criterion, academic reputation, is decided by a survey of around 100,000 teaching and research experts. The subjective perception of these experts governs the score, yet this criterion carries the largest weight, 40 percent, in determining rankings. A similar survey of around 50,000 employer responses, covering the second criterion, contributes 10 percent. Thus, intangible perceptions decide half of the ranking points.
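To make the arithmetic concrete, here is a minimal sketch in Python of how such a weighted composite collapses six indicator scores into a single number. The institute and its indicator scores are invented for illustration; the weights are those published by QS (the two international criteria carry 5 percent each).

```python
# Illustrative sketch, not QS's actual code: the published QS weights
# applied as a simple weighted sum of indicator scores (each 0-100).
WEIGHTS = {
    "academic_reputation":    0.40,
    "employer_reputation":    0.10,
    "faculty_student_ratio":  0.20,
    "citations_per_faculty":  0.20,
    "international_faculty":  0.05,
    "international_students": 0.05,
}

def overall_score(indicators):
    """Collapse six indicator scores into one composite number."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

# A hypothetical institute: outstanding on citations, modest on
# internationalization -- roughly the pattern described in this article.
example = {
    "academic_reputation":    90.0,
    "employer_reputation":    80.0,
    "faculty_student_ratio":  70.0,
    "citations_per_faculty": 100.0,
    "international_faculty":  20.0,
    "international_students": 15.0,
}
print(overall_score(example))  # 79.75
```

Note how a perfect citations score and a weak internationalization score net out into one opaque figure; everything questioned below happens inside those six inputs.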

The remaining four criteria can be quantified on the basis of data provided by the universities themselves. A university's faculty-student ratio carries a weight of 20 percent. However, a higher faculty-to-student ratio does not always translate into better education for students. In many research institutes, support staff such as science and technology managers are counted as faculty, although they are only marginally involved in teaching. And in many universities and institutes, well-established senior professors are less accessible to students: they are often less involved in the academic activities of the institution because they are busy serving on various committees, nationally and internationally.


Another criterion is the number of citations per faculty, and this parameter constitutes 20 percent of the ranking points. Citations are the number of times other publications cite a published article. However, in scientific circles, to increase citations to one's papers, one needs to circulate among scientists by attending conferences, inviting other scientists to seminars, and personally promoting one's ideas.

Thus, the number of citations may reflect the visibility of an article more than the true scientific impact of the work. It is not uncommon to see a work cited in order to discredit it; that still counts as a citation. Despite these well-known drawbacks, the number of citations remains one of the main measures used by the research community and funding agencies to judge research performance. For the QS rankings, citations received by faculty over a five-year window beginning seven years before the assessment are taken into account. Citations that count toward the ranking exclude self-citations, that is, publications in which authors cite their own articles. The count is also normalized against the total number of citations in the field, since not all fields have the same number of researchers or the same volume of publications.
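The counting rules in that paragraph are mechanical enough to sketch in code. The following is a minimal illustration, with invented names (Citation, qs_style_count, field_normalized), of a window-limited, self-citation-excluding, field-normalized count; it is not QS's actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Citation:
    year: int                 # year the citing paper appeared
    citing_authors: set[str]  # authors of the citing paper
    cited_authors: set[str]   # authors of the cited paper

def qs_style_count(citations, start_year, end_year):
    """Count citations inside the assessment window, dropping self-citations
    (any overlap between the citing and cited author lists)."""
    return sum(
        1
        for c in citations
        if start_year <= c.year <= end_year
        and not (c.citing_authors & c.cited_authors)
    )

def field_normalized(raw_count, field_total):
    """Normalize against total citations in the field, so small fields are
    not automatically outscored by citation-heavy ones."""
    return raw_count / field_total if field_total else 0.0
```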

Citations can sometimes lead to wrong conclusions. One example is Panjab University, which was ranked the top Indian university/institute in the Times Higher Education ranking of Asian institutes in 2013, surpassing other institutes that had consistently outperformed it on almost every metric. The anomaly has been attributed to the numerous citations received by a few professors and their students who were co-authors of articles on the discovery of the Higgs boson at the Large Hadron Collider (LHC) run by CERN in Switzerland. A few thousand authors from a few hundred universities and institutes co-authored those articles.

Apart from such anomalies, the impact of an article is generally not realized within a few years of its publication; it is best judged by the longevity of its citations. Thus, restricting the count to citations of recent publications is misguided. Instead, the total number of citations received in the past five years by all articles an institute or university has published since its inception may be a better metric. While I am generally opposed to the idea of ranking, this way of counting citations would give due weight to trailblazing publications that have remained influential over long periods of time. In another ranking system, the number of articles published by an institute was used as a parameter to decide rankings. This led to an anomaly in 2010 involving the University of Alexandria, which was surprisingly placed very high in that year's ranking. The anomaly has been attributed to a single professor abusing his position as editor-in-chief of a journal to publish a large number of his own articles in it.
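To see how this alternative differs from the window-limited count, here is a small sketch; the record structure and function names are invented, and the "current" scheme is one plausible reading of the window rule described above.

```python
from dataclasses import dataclass

@dataclass
class Paper:
    published_year: int
    citation_years: list[int]  # year in which each citation was received

def window_limited_count(papers, start, end):
    """One reading of the current scheme: citations received in [start, end]
    to papers that also appeared in that window -- old classics drop out."""
    return sum(
        sum(start <= y <= end for y in p.citation_years)
        for p in papers
        if start <= p.published_year <= end
    )

def all_papers_recent_citations(papers, start, end):
    """The proposal above: citations received in [start, end] by every paper
    the institute has ever published, so long-lived work still counts."""
    return sum(
        sum(start <= y <= end for y in p.citation_years) for p in papers
    )
```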

The remaining two measures, the fractions of international faculty and international students, are determined by the sociological, financial, and geographic circumstances of an institute's location. Although these two criteria carry the least weight, they can unfairly favor one institute over another for largely non-academic reasons.

Whether it is necessary and desirable to rank universities and institutes is a moot point. Many universities dress up their data to improve their rankings. They often have staff whose job is to find ways to improve the rankings and embellish the achievements of students and faculty. Scarce resources that could be used to develop the institute are often wasted in the process.

A critical appraisal of the ranking system has become important because funding agencies find it easy to base their decisions on it. The ranking process is further encouraged by the scientific publishing industry (already a lucrative business), which has much to gain when researchers pad their publication counts to improve their institute's ranking, in addition to boosting their own individual metrics.

I think that, just as we should do away with student grades and exam rankings (which I wrote about in the Deccan Herald on May 16, 2019), we should also do away with university rankings. Instead, it is enough to divide universities into broad tiers and let those who must decide dig deeper to find what is right for them. Such tiering can also be done for specific categories such as best teaching, best campus, best peer group, and so on. That would help future students and their parents make decisions based on the categories they care about most, instead of giving institutes and universities bragging rights.

(The author is Professor Emeritus and Principal Investigator at INSA in the Solid State and Structural Chemistry Unit, Indian Institute of Science)

Disclaimer: The opinions expressed above are those of the author. They do not necessarily reflect the views of DH.

