The BRR offers the following indicators of the scientific impact of an organization:
Citations are counted until the end of 2013 in the above indicators. Author self-citations are excluded. Both the MNCS indicator and the PP(top 10%) indicator correct for differences in citation practices between scientific fields. 828 fields are distinguished. These fields are defined at the level of individual publications: using a computer algorithm, each publication in the Web of Science database has been assigned to a field based on its citation relations with other publications. Because the PP(top 10%) indicator is more stable than the MNCS indicator, the PP(top 10%) indicator is regarded as the most important impact indicator of the BRR.
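To make the field normalization concrete, the following sketch (not the official CWTS implementation) shows how the MNCS and PP(top 10%) indicators could be computed for one organization. The data structures, field baselines (mean citations and top-10% citation thresholds per field), and numbers are hypothetical, and they are assumed to be precomputed over the full database.

```python
# Minimal sketch, assuming precomputed field baselines; not the official
# CWTS implementation of the MNCS and PP(top 10%) indicators.

from dataclasses import dataclass

@dataclass
class Publication:
    citations: int   # citation count, author self-citations already removed
    field_id: int    # one of the ~828 algorithmically defined fields

def mncs(pubs, field_mean_citations):
    """Mean Normalized Citation Score: average of citations / field mean."""
    scores = [p.citations / field_mean_citations[p.field_id] for p in pubs]
    return sum(scores) / len(scores)

def pp_top10(pubs, field_top10_threshold):
    """PP(top 10%): share of publications at or above the field-specific
    citation threshold marking the 10% most cited publications.
    (Ties at the threshold are handled more carefully in the real
    indicator; this is a simplification.)"""
    in_top = [p.citations >= field_top10_threshold[p.field_id] for p in pubs]
    return sum(in_top) / len(in_top)

# Hypothetical example data
pubs = [Publication(12, 1), Publication(3, 1), Publication(40, 2)]
field_means = {1: 6.0, 2: 10.0}
field_thresholds = {1: 20, 2: 35}
print(mncs(pubs, field_means))           # (12/6 + 3/6 + 40/10) / 3 ≈ 2.17
print(pp_top10(pubs, field_thresholds))  # 1 of 3 publications in top 10% ≈ 0.33
```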
The following indicators of scientific collaboration are provided in the BRR:
A journal is considered a core journal if it meets the following two conditions:
In the calculation of the BRR indicators, only publications in core journals are included. The MNCS and PP(top 10%) indicators become significantly more accurate by excluding publications in non-core journals. About 16% of the publications in the Web of Science database are excluded because they have appeared in non-core journals. A list of core and non-core journals is available in this Excel file.
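As a simple illustration of this filtering step, the sketch below restricts a hypothetical set of publications to those appearing in core journals. The journal names and data structures are placeholders, not the actual BRR data or journal list.

```python
# Minimal sketch: restrict the publication set to core journals before
# computing any indicators. The journal names below are placeholders.

core_journals = {"Journal A", "Journal B"}   # e.g. loaded from the journal list

publications = [
    {"title": "Paper 1", "journal": "Journal A"},
    {"title": "Paper 2", "journal": "Journal C"},   # non-core, excluded
]

core_publications = [p for p in publications if p["journal"] in core_journals]
print(len(core_publications))  # 1
```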
The BRR by default reports size-independent indicators. These indicators provide average statistics per publication, such as an organization's average number of citations per publication. The advantage of size-independent indicators is that they enable comparisons between smaller and larger organizations. As an alternative, the BRR can also report size-dependent indicators, which provide overall statistics of the publications of an organization, for example the total (rather than the average) number of citations of an organization's publications. Size-dependent indicators are strongly influenced by the size of an organization (i.e., its total publication output) and therefore tend to be less useful for comparison purposes.
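The following sketch illustrates the distinction using hypothetical citation counts for a single organization; field normalization is omitted so that the contrast between the two reporting modes stays visible.

```python
# Minimal sketch of size-dependent versus size-independent indicators,
# using hypothetical citation counts for one organization.

citations = [10, 0, 4, 25, 1]   # citation counts of the organization's publications

# Size-dependent indicators: overall totals, strongly driven by output volume.
total_publications = len(citations)    # 5
total_citations = sum(citations)       # 40

# Size-independent indicators: averages per publication, comparable across
# organizations of very different sizes.
mean_citations = total_citations / total_publications   # 8.0

print(total_publications, total_citations, mean_citations)
```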
The impact indicators included in the BRR can be calculated using either a full counting method or a fractional counting method. The full counting method gives equal weight to all publications of an organization. The fractional counting method gives less weight to collaborative publications than to non-collaborative ones. For instance, if the address list of a publication contains five addresses and two of these addresses belong to a particular university, then the publication has a weight of 2 / 5 = 0.4 in the calculation of the indicators for this university. The fractional counting method leads to a more proper field normalization of impact indicators and to fairer comparisons between organizations active in different fields. Fractional counting is therefore regarded as the preferred counting method in the BRR. Collaboration indicators are always calculated using the full counting method.
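The sketch below reproduces the weighting example from the text in code; the address list is hypothetical.

```python
# Minimal sketch contrasting full and fractional counting for a single
# publication: five addresses in the address list, two of which belong
# to the university in question.

addresses = ["Univ X", "Univ X", "Univ Y", "Univ Z", "Institute W"]
university = "Univ X"

full_weight = 1.0 if university in addresses else 0.0
fractional_weight = addresses.count(university) / len(addresses)

print(full_weight)        # 1.0  (full counting: the publication counts once)
print(fractional_weight)  # 0.4  (fractional counting: 2 / 5)
```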
A stability interval indicates the range of values of an indicator that are likely to be observed when the underlying set of publications changes. For instance, the MNCS indicator may be equal to 1.50 for a particular university, with a stability interval from 1.40 to 1.65. This means that the MNCS indicator equals 1.50 for the university's current set of publications, but that changes in this set of publications may relatively easily lead to MNCS values anywhere in the range from 1.40 to 1.65. The BRR employs 95% stability intervals constructed using a statistical technique known as bootstrapping.
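The following sketch shows one way such a bootstrapped 95% stability interval could be computed for the MNCS indicator. The normalized citation scores are hypothetical, and the sketch does not reproduce the exact CWTS procedure.

```python
# Minimal sketch of a bootstrapped 95% stability interval for the MNCS:
# resample the organization's publications with replacement many times,
# recompute the indicator for each resample, and take the 2.5th and 97.5th
# percentiles of the resulting values as the interval bounds.

import random
import statistics

def mncs(normalized_scores):
    return statistics.mean(normalized_scores)

def bootstrap_stability_interval(normalized_scores, n_resamples=1000, seed=42):
    rng = random.Random(seed)
    values = []
    for _ in range(n_resamples):
        resample = rng.choices(normalized_scores, k=len(normalized_scores))
        values.append(mncs(resample))
    values.sort()
    lower = values[int(0.025 * n_resamples)]
    upper = values[int(0.975 * n_resamples)]
    return lower, upper

# Hypothetical field-normalized citation scores of one organization
scores = [0.2, 3.1, 0.0, 1.4, 2.5, 0.8, 4.0, 1.0, 0.6, 1.9]
print(mncs(scores))                          # observed MNCS (1.55 here)
print(bootstrap_stability_interval(scores))  # lower and upper bounds of the interval
```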
The BRR is based on the principles developed for the Leiden Ranking. More information on the Leiden Ranking methodology can be found in a number of publications by CWTS researchers. An extensive discussion of the Leiden Ranking is offered by Waltman et al. (2012). This publication relates to the 2011/2012 edition of the Leiden Ranking; although no longer entirely up to date, it still provides much relevant information on the Leiden Ranking. The bottom-up approach taken in the Leiden Ranking to define scientific fields is described in detail by Waltman and Van Eck (2012). The methodology adopted in the Leiden Ranking for identifying core journals is outlined by Waltman and Van Eck (2013a, 2013b).