For some time now, bibliometric methods and methods of expert evaluation have been considered the two prevailing groups of methods in the area of scientific performance evaluation. Even though significant differences exist between them in nature, some degree of agreement in their results and characteristics can be expected, since both attempt to measure the same thing, i.e. scientific quality. To determine the actual relationship, we thoroughly examine both methods and formulate some basic assumptions concerning the issue. In the next phase, we examine this relationship by performing a statistical analysis of the results of the most recent call for research project proposals. The analysis revealed that the smallest differences between bibliometric and expert ratings exist in the natural sciences and humanities, whereas the micro-level analysis showed that the strongest connection exists between the indicators "normalised number of pure citations" and "assessment of the project leader's outstanding research achievements". A subsequent analysis suggests that these coefficients could have been more favourable if reviewers had shown a greater level of agreement about the quality of the research project proposals. This disagreement is the main weakness of the peer review system under observation, so reflection on some proposals for reducing it is welcome, such as the number of reviewers assigned per grant, the way proposals are evaluated, the way reviewers are assigned, etc. With a small exception in the field of technical sciences, we also found that applicants whose research proposals were eventually approved did not, on average, have significantly better bibliometric indicators than applicants who did not succeed in receiving a grant.