Citation analysis provides quantifiable measures of academic output. This guide is designed to help faculty members and librarians understand and use the citation-metrics tools available to us. We are fortunate to have access to the leading subscription resources for citation metrics: Web of Science, Scopus, and Journal Citation Reports.
Users need to be aware of the limitations and inconsistencies of citation metrics. The databases listed above, as well as Google Scholar, do not correct errors in citing papers, so a single paper may be cited in several different ways and appear as separate entries in these tools. Inconsistencies in author and institution names further complicate these analyses.
Direct comparisons between these tools should be avoided. The databases draw on different sources to generate their data, and some are more comprehensive than others. In addition, the literature suggests that these tools are skewed toward STEM disciplines.
The recommended methods for citation analysis are detailed in this guide. Another useful metric is the h-index, which can be generated in Web of Science, Scopus, and Google Scholar. The h-index is defined as:
A scientist has index h if h of [his/her] Np papers have at least h citations each, and the other (Np − h) papers have at most h citations each.
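The definition above can be illustrated in code. The sketch below is a minimal, generic implementation for a list of per-paper citation counts; it is not tied to how Web of Science, Scopus, or Google Scholar compute the figure internally, and the function name and sample numbers are illustrative only.

```python
def h_index(citations):
    """Compute the h-index from a list of per-paper citation counts."""
    # Sort citation counts in descending order.
    counts = sorted(citations, reverse=True)
    h = 0
    # h is the largest rank such that the paper at that rank
    # still has at least that many citations.
    for rank, count in enumerate(counts, start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: six papers with these citation counts yield
# an h-index of 4, because four papers each have at least 4 citations
# and the remaining papers have at most 4 citations each.
print(h_index([10, 8, 5, 4, 3, 0]))  # prints 4
```

Because the three databases index different pools of publications, the same author will typically have a different citation list, and therefore a different h-index, in each one.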
Useful data can be found in each tool, but direct comparisons across databases are problematic. These resources use different pools of data and date ranges, and may interpret citations differently. Incorrect attribution of authorship can also cause reporting errors. Take control of your scholarly output: check your author profiles and register for an ORCID iD.
The chart below illustrates these reporting differences. Exercising as much consistency as possible, the same author was profiled (November 2012) in each resource.