Why the QS World University Rankings should be ditched

Faisal Wali

Choosing which university to attend is by no means an easy decision

Quacquarelli Symonds, a study-abroad and education specialist company, is well known for publishing an annual ranking of universities based on a set of criteria. The QS World University Rankings, as it is more popularly known, ranks universities on their overall performance according to five criteria, each of which carries a different weightage.

The five criteria are academic peer review with a weightage of 40%, recruiter review at 10%, faculty-to-student ratio at 20%, citations per faculty at 20% and international orientation at 10%. The international orientation criterion is itself split evenly, with 5% for international staff and 5% for international students.
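To see how these weights combine, here is a minimal sketch in Python of how such a weighted composite score might be assembled. The indicator names and example scores are invented for illustration only; this is not QS's actual data or scoring code.

```python
# Illustrative only: a QS-style weighted composite score.
# The weights follow the article; the indicator scores below are
# invented purely for demonstration.

WEIGHTS = {
    "academic_peer_review": 0.40,
    "recruiter_review": 0.10,
    "faculty_student_ratio": 0.20,
    "citations_per_faculty": 0.20,
    "international_staff": 0.05,
    "international_students": 0.05,
}

def composite_score(indicators: dict) -> float:
    """Weighted sum of indicator scores (each assumed to be on a 0-100 scale)."""
    return sum(WEIGHTS[name] * indicators[name] for name in WEIGHTS)

# A hypothetical university with strong survey results but weaker citations:
example = {
    "academic_peer_review": 90.0,
    "recruiter_review": 85.0,
    "faculty_student_ratio": 80.0,
    "citations_per_faculty": 60.0,
    "international_staff": 95.0,
    "international_students": 95.0,
}

print(composite_score(example))  # 82.0 -- the survey-heavy weights dominate
```

Note how a mediocre citations score barely dents the total when the survey-based components carry most of the weight, a point the rankings discussed below bear out.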

It is not surprising that the QS rankings include a criterion on international orientation, given the company's position as a study-abroad specialist. In terms of methodology, the company surveys the world's academics and graduate employers. Survey responses from academics have a greater bearing on the rankings than those from employers, owing to the higher weightage accorded to the former.

QS also ranks universities in five broad subject areas: 1) arts and humanities, 2) engineering and IT, 3) natural sciences, 4) life sciences and medicine, and 5) social sciences. The rankings for these five areas use the same five criteria highlighted earlier.

The latest QS rankings for medicine shed interesting light on the methodology. Firstly, universities that do not have a medical faculty, that is, one that produces medical graduates, are included in the rankings. The Massachusetts Institute of Technology (MIT) and the California Institute of Technology (Caltech) come to mind; they are ranked 3rd and 11th in the world respectively.

The probable reason for their inclusion is the broad definition of medicine adopted by QS. Medicine, broadly construed, comprises biomedical science domains such as anatomy, biochemistry, physiology, pharmacology and pathology. These sub-fields are researched and taught at some institutions even if those institutions do not graduate future doctors.

However, the inclusion of institutions that do not graduate future doctors is open to criticism. The main objection is that it amounts to comparing apples to oranges when we should be comparing only apples. Medical schools have a clinical faculty to train students in clinical skills, the real-life skills of interacting with, monitoring and treating patients, which are the things doctors do.

Institutions without medical schools, by contrast, are geared towards producing graduates versed in the sub-fields or scientific aspects of medicine; those graduates are not clinicians. We can therefore surmise that QS interprets the discipline of medicine as one that produces not only doctors but also anatomists, biochemists, physiologists and pharmacologists, a very broad definition indeed.

However, such ranking information is not useful for an aspiring doctor, and it exposes the way the ranking surveys and research are conducted to further criticism. For one, the survey of academics will likely include non-clinicians such as anatomists, physiologists and pharmacologists alongside the clinicians.

In addition, the survey of employers will likely include those that hire non-clinician graduates with backgrounds in the biomedical sciences. If we are interested purely in the views of employers who hire doctors, it would make more sense to limit the survey to responses from clinical establishments such as hospitals and clinics.

Thus, the all-inclusive definition of medicine obscures the information that a prospective medical student actually wants: whether the medical programme adequately prepares him for a career in medicine.

At this point, he may not be interested in becoming a biochemist, physiologist or pharmacologist. This is not to downplay the importance of achievements in sub-fields of medicine like biochemistry, pharmacology or physiology.

A medical student will certainly benefit from a medical programme at a school with strong teaching and research achievements in a field like physiology.

The second criticism is that the way the rankings are compiled may not do justice to the reputation and achievements of institutions. A good example is the ranking of the National University of Singapore (NUS) at 18th in the world for medicine, which puts NUS above the University of Pennsylvania (21st), Cornell University (23rd), University College London (25th) and the Karolinska Institute (26th).

Without any disrespect intended to NUS, such a ranking borders on absurdity. NUS is a decent university with a good international reputation, and the same can be said of the hospitals affiliated with its medical school.

However, the claim that NUS is better than Cornell University in medicine is hard to believe, especially when Cornell's Weill Cornell Medical College is affiliated with world-renowned medical institutions such as NewYork-Presbyterian Hospital and the Memorial Sloan-Kettering Cancer Center, a cancer treatment and research institution.

The same goes for University College London, which is affiliated with the world-renowned Great Ormond Street Hospital, a children's teaching and research hospital.

The explanation for the ranking clearly lies in the criteria used. While employer surveys and feedback are relevant, 70% of the ranking score comes from the academic survey, the faculty-to-student ratio and international orientation. Achievements in the field of medicine, as reflected in citations per faculty, account for only 20% of the score.

Shanghai Jiao Tong University (SJTU) also publishes its own set of world university rankings, including rankings by field. The criteria for SJTU's rankings are based purely on achievements within those fields, such as alumni or staff winning Nobel Prizes and Fields Medals, highly cited researchers, research output and publications in top journals.

In SJTU's 2010 rankings for clinical medicine and pharmacy, University College London and the Karolinska Institute performed well at 9th and 11th in the world respectively, while the University of Pennsylvania and Cornell University were ranked 22nd and 26th. NUS, on the other hand, did not make the top 100.

The SJTU ranking more or less demonstrates that universities' reputations are consistent with their achievements in the various subject areas. As for the disparity in NUS's performance between the two rankings, it can be attributed to the higher weight that QS accords to areas outside achievement in the field of medicine.

In other words, NUS did well in areas other than field achievements, such as positive survey responses from academics and employers and its international orientation.

This is why the QS rankings have been the subject of criticism from academics worldwide. Andrew Oswald, an economics professor at the University of Warwick, pointed out the implausibility of a QS ranking that one year placed Stanford University 19th in the world while putting Oxford and Cambridge at joint 2nd, despite the fact that Stanford had garnered more Nobel Prizes over the preceding two decades than Oxford and Cambridge combined!

The most damning criticism, however, came from Fred L. Bookstein, Horst Seidler, Martin Fieder and Georg Winckler in the journal Scientometrics, where they pointed out the unreliability of QS's methods. According to the quartet, the QS overall score, the reported staff-to-student ratio and the academic peer ratings demonstrated unacceptably high fluctuations from year to year.

The fluctuations can perhaps be attributed to problems in the ranking research and improper methods of surveying academic peers. The faculty-to-student ratio and the academic peer rating together already account for 60% of the ranking score.

The issue is that if there are problems in the ranking methodology (the research and the surveys), especially in components that account for the bulk of the score, then the whole exercise generates an unreliable ranking list.
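The arithmetic behind this concern can be sketched quickly (again a hypothetical illustration, not QS data): because the composite is a weighted sum, a year-to-year swing in any indicator shifts the total by the size of the swing times that indicator's weight, so noise in the heavily weighted survey and ratio components feeds almost directly into the final rank order.

```python
# Hypothetical illustration of how noise in heavily weighted indicators
# moves a weighted composite score. All numbers are invented.

weights = {
    "peer_review": 0.40,
    "staff_student_ratio": 0.20,
    "citations": 0.20,
    "employer_review": 0.10,
    "international": 0.10,
}

def shift(indicator: str, swing: float) -> float:
    """Change in the composite when one indicator swings by `swing` points."""
    return weights[indicator] * swing

# A 10-point year-to-year swing in the peer review survey moves the
# composite by 4.0 points; the same swing in citations moves it by only 2.0.
print(shift("peer_review", 10))          # 4.0
print(shift("citations", 10))            # 2.0
```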

This is why the QS rankings should be ditched. The sentiment echoes that of Alex Usher, vice president of Higher Education Strategy Associates in Canada, who was quoted as saying that the QS ranking is an inferior product compared with SJTU's Academic Ranking of World Universities.

Photo courtesy of King’s College, Cambridge, Flickr Commons.