University Ranking and The Arts

By Don Bowyer • 03 March 2018 •

Recently, I had the opportunity to complete a survey for one of the university ranking organizations. At the end, there was a box for additional thoughts. These were mine:

I believe that the focus on STEM publication makes the ____ university rankings nearly useless for fields in the arts. I understand that assessing arts contributions is not easily reduced to quantifiable, objective measurements. But that makes it a difficult undertaking, not one that should be discounted or ignored.

As humans, we mix objective and subjective data in decision-making on a daily basis. If this were not true, we would never eat unhealthy foods. I encourage you to be a leader in exploring ways to measure subjective data and incorporate them into the rankings. Otherwise, the outsize influence that your organization has on higher education will continue to have a negative impact on how funding, hiring, promotion, and curriculum decisions are made at universities around the world.

In short, in addition to teaching, universities contribute to the public good through the creation of knowledge – contributing to the sum total of human accomplishment. This knowledge can include the results of a study on how mice react to toothpaste, a patent for gene-splicing, or a new opera. All have value. The study may be assessed through peer-reviewed publication. The gene-splicing may be assessed through patent applications and market value. How is the opera assessed? At present, it appears to have value in university rankings only if an article is published about the opera. This is absurd enough that someone should compose an opera about college rankings.


After posting the above on Facebook, I was asked if I have “any ideas for incorporating subjective evaluations into the largely objective whole?” My response:

We use subjective assessment in the arts all the time. Music students perform a jury at the end of every semester. We don’t assess it by counting right notes and wrong notes, or by measuring the dynamic contrast. All these could be measured by a computer, but they would be meaningless in assessing artistic quality. Instead, we use recognized musical experts as jurors, who listen and provide subjective feedback. To reduce the risk of personal bias, we insist on multiple jurors.

We do something similar when considering faculty promotion and tenure in the arts. The department committee includes those best able to subjectively assess the work of the applicant. Was the new opera performed locally, regionally, nationally, internationally? Was it selected for performance through a rigorous peer-review process? As with the other scholarship examples above, the thing we are ultimately trying to measure is impact. We want to get an idea of how much this work contributes to “the sum total of human accomplishment.”

In the sciences, this impact is often measured by ranking the journals that publish scientific articles. Then we can count the number of articles one publishes in a tier-one journal, the number of times one's articles are cited in those journals, and so on. The actual peer-review process for these journals, however, is very similar to the way the local committee assesses the opera: peers are asked to subjectively evaluate the article. The difference is that an opera is rarely published; it is performed.

Unfortunately, the ranking business (and it is that) wants to make everything black-and-white. The journal rankings work (sort of) in the sciences and business. But that model doesn’t fit the arts or the humanities (where books are more valued than journal articles).