The Scientific World Journal
Volume 2014 (2014), Article ID 135812, 6 pages
http://dx.doi.org/10.1155/2014/135812
Review Article

Rating and Ranking the Role of Bibliometrics and Webometrics in Nursing and Midwifery

1Johns Hopkins University (JHU), Baltimore, MD 21218, USA
2University of Technology, Sydney (UTS), Sydney, NSW 2007, Australia

Received 16 August 2013; Accepted 25 September 2013; Published 6 January 2014

Academic Editors: K. Finlayson, S. Read, and M. A. Rose

Copyright © 2014 Patricia M. Davidson et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Background. Bibliometrics are an essential aspect of measuring academic and organizational performance. Aim. This review seeks to describe methods for measuring bibliometrics, identify the strengths and limitations of these methodologies, outline strategies for interpretation, summarise the evaluation of nursing and midwifery performance, and consider the implications of these metrics, and of social networking, for nursing and midwifery and for measures of individual performance. Method. A review of the electronic databases CINAHL, Medline, and Scopus was undertaken using search terms such as bibliometrics, nursing, and midwifery. The reference lists of retrieved articles, Internet sources, and social media platforms were also examined. Results. A number of well-established, formal methods of assessment were identified, including the h- and c-indices. Changes in publication practices and the use of the Internet have challenged traditional metrics of influence. Moreover, measuring impact beyond citation metrics is an increasing focus, with social media representing newer ways of establishing performance and impact. Conclusions. Although a number of measures exist, no single bibliometric measure is perfect; therefore, multiple approaches to evaluation are recommended. However, bibliometric approaches should not be the only measures upon which academic and scholarly performance are evaluated.

1. Introduction

Increasingly, individual researchers and academic institutions are required to rate and rank publications as a metric of both individual researcher and organisational performance [1, 2]. This trend is international, and while bibliometrics are not new, increased surveillance of the outputs of academic sectors through evaluation exercises such as Excellence in Research for Australia (ERA), the Research Assessment Exercise (RAE) in the United Kingdom, and the Performance-Based Research Fund (PBRF) in New Zealand has spurred the interest and attention of academic nurses and midwives, who seek to ensure that these metrics adequately represent the quality of their research [3].

The term “bibliometrics” describes a mathematical method for counting the number of academic publications and related citations and is based on authorship. Measures such as citations, impact factors (IFs), and the h- and c-indices are commonly calculated. Data to inform a bibliometric analysis can be extracted from a range of online databases such as Thomson Reuters’ Web of Science or Elsevier’s Scopus.

Bibliometrics are one way of measuring the impact of research, although the notion of impact is often difficult to quantify. Another common criticism is that historical approaches to bibliometrics, such as impact factors and citations, disadvantage some disciplines. The reasons for this are complex and include the competing demands of a practice-based discipline, in which the impact of published research findings may be seen in changed and improved clinical practice rather than in citations [4, 5]. In addition, social networking and the World Wide Web are changing the way the influence of researchers and organisations is profiled. As a consequence, the influence of infometrics is being considered more broadly [6].

2. Methods

The electronic databases CINAHL, Medline, and Scopus were interrogated using search terms including bibliometrics, nursing, and midwifery. The reference lists of retrieved articles, Internet sources, and social media platforms were also examined. The initial search yielded 367 articles. Following review of titles and abstracts, 167 articles were identified as providing information relevant to the aims of the review. These articles were not only descriptive of bibliometrics but also addressed implementation issues and the benefits and shortcomings of the various approaches. The articles were synthesized, and the methodological approaches, together with their strengths and limitations, were identified.

3. Current Approaches to Bibliometrics

3.1. Impact Factors

Journal impact factors (IFs) measure the frequency with which an “average article” in a journal has been cited over a defined period [7]. They are calculated by Thomson Reuters’ Institute for Scientific Information (ISI) and published each June in Journal Citation Reports. Since their inception in 1955, improvements have been made, including the addition of a 5-year IF and an increase in the number of non-English journals included in the analysis. Data are also available for ranking journals by Immediacy Index, which measures the number of times an article was cited in the year in which it was published [8].
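The standard two-year IF reduces to a simple ratio, illustrated in the sketch below. The counts are hypothetical; real values are computed from Journal Citation Reports data.

```python
def impact_factor(citations, citable_items):
    """Two-year journal impact factor: citations received in the census year
    to items the journal published in the two preceding years, divided by
    the number of citable items published in those two years."""
    return citations / citable_items

# Hypothetical journal: 300 citations this year to the 150 citable items
# published in the previous two years.
print(impact_factor(300, 150))  # 2.0
```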

However, impact factors have been subject to ongoing criticism from academics and scholars for both methodological and procedural imperfections. There is also debate about how IFs should be used. Whilst a higher impact factor may indicate a journal that is considered more prestigious, it does not necessarily reflect the quality or impact of an individual article or researcher. Other metrics, such as the Journal Evaluation Tool, have therefore been developed to provide alternatives to impact factors [1, 9–11].

3.2. h- and c-Indices

The (Hirsch) h-index was developed in 2005 to estimate the importance, significance, and broad impact of a researcher’s cumulative research contributions [12], and to overcome the limitations of previous measures of researcher quality and productivity. It is a single number: an author has an h-index of h if h of their papers have each been cited at least h times [13]. For example, a researcher with an h-index of 5 has published at least 5 papers that have each been cited 5 times or more. To obtain a high h-index, a researcher needs to be productive (quantity), but these papers also need to be highly cited (quality). This is likely one factor driving individuals to publish in open access journals.
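The definition translates directly into a short computation; the citation counts below are hypothetical.

```python
def h_index(citation_counts):
    """Largest h such that the author has h papers cited at least h times each."""
    ranked = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank  # the rank-th best paper still has >= rank citations
        else:
            break
    return h

# Matches the example in the text: at least 5 papers cited 5 or more times.
print(h_index([9, 7, 6, 5, 5, 2, 1]))  # 5
```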

Papers can be cited for many reasons other than being of high quality, such as proposing contentious positions, and the h-index does not consider the quality of the journals in which papers appear. Hirsch openly acknowledges that a single number cannot truly reflect the multifaceted profile of an individual author, along with other limitations of the h-index [12], such as duration of publishing career: stage of career is a factor, so more junior researchers will inevitably have a lower h-index. Several recent studies have quantified the h-index for leading nurse academics and researchers in Canada, the United Kingdom, and Australia. These findings show considerable diversity in the h-indices of nurse researchers across countries, with reported scores between 4 and 26 [13–15].

Similar to the Immediacy Index for journals, the c-index reports the number of articles that have been cited more than once by other researchers in the most recent calendar year and therefore provides information about the current research impact of an article. The c-index has been proposed as an addition to the h-index [16].
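Read literally, the c-index as defined here is a simple count over the most recent year's citations. The sketch below is one interpretation of that definition, with hypothetical per-article counts.

```python
def c_index(recent_year_citations):
    """Number of an author's articles cited more than once by other
    researchers in the most recent calendar year (per the definition above)."""
    return sum(1 for cites in recent_year_citations if cites > 1)

# Per-article citation counts received during the latest calendar year.
print(c_index([4, 2, 1, 0, 3]))  # 3
```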

3.3. The Performance of Nursing and Midwifery

A number of studies have tracked the growing output of both nursing journals and individual researchers. Wilkes and Jackson analysed a total of 530 articles from five Australian and five USA and UK journals and found an increase in output compared with prior analyses from 2000 [17]. Publication analyses of Canadian [13], UK [15], and Australian nurses have been undertaken. Hack and colleagues observed that an h-index of 10–14 indicated an excellent publication record for a nurse researcher [13]. Thompson and Clark cite the five top bad reasons nurses do not publish in high impact journals, among them the need to influence nurse clinicians and to reach a particular audience [18]. They argue that ignoring bibliometrics is folly and that we should strive to publish in journals that are highly influential across disciplines. Discrete specialties have also undertaken reviews demonstrating trends in citation rates and publishing patterns [19].

The notion of impact is not easy to measure, as is the case in many disciplines. In health care, impact can be construed as scholarly impact (where citation measures are very useful) or as impact on clinical practice [20]. The latter may be of greater importance in a practice-based discipline, but it can be difficult to evidence because so many factors influence the uptake of research findings into health care. This is itself an area of increasing research and scholarly interest [21].

3.4. New Approaches to Measuring Performance

In order to deal with the complexity of citation impact analysis, a range of approaches has been introduced, including percentile rank scores as indicators of relative performance [22]. The Integrated Impact Indicator (I3) has been suggested as a congruous indicator of absolute performance that overcomes other measurement issues [22].

Globally, the use of the Internet is increasing exponentially [23]. Web 2.0 allows Internet users to independently create and publish content rapidly. Never before has it been easier for academics to respond rapidly to media requests and to publicly provide opinion and commentary on both current affairs and scientific findings. The influence of social media is changing the academic publishing landscape, so much so that there is increasing recognition that measures of scholarly impact can be drawn from Web 2.0 data [24].

The World Wide Web has not only revolutionized how information is gathered, stored, and shared but also provided a mechanism for measuring access to information. The current debate over online publishing, the importance of access to information, and the challenge to traditional gatekeepers of knowledge are critical considerations [25]. Moreover, blogs have been introduced as scholarly sources [26]. This broadens the assessment of performance from traditional bibliometrics toward a more germane view of infometrics [6].

3.5. Webometrics

Webometrics refers to the quantitative analysis of activity on the World Wide Web, such as downloads, and draws on infometric methods [27, 28]. Webometrics recognises that the Internet is a repository for a vast number of documents and a powerful vehicle for knowledge dissemination and access [29, 30]. Ranking involves measuring the volume, visibility, and impact of web pages published by universities, with special emphasis on scientific output (refereed papers, conference contributions, preprints, monographs, theses, and reports). It also examines other materials (courseware, seminar or workshop documentation, digital libraries, databases, multimedia, and personal pages and blogs) as well as general information on the institution, its departments, research groups, supporting services, and the people working there or attending courses. Ranking can be undertaken using a number of approaches.

Thus it can be seen that measurement of scholarship and impact can occur using a range of metrics. Within both traditional and evolving approaches it is useful to review the performance of nursing and midwifery according to established measures.

3.6. Innovations in Bibliometrics

Criticisms of traditional approaches to bibliometrics, such as impact factors and citations, have included a perceived disadvantage for certain disciplines. Conversely, it can be said that citations in nursing and midwifery, as in other areas of health, can accumulate relatively quickly. This is attributable to the large number of journals, the volume of research being conducted, the rapidly changing nature of the field, and the increasing representation of nurses and midwives in research. It is particularly so when compared with disciplines that have fewer journals or in which change or evidence of impact is achieved more slowly, such as mathematics.

3.7. Journal Evaluation Tool

Sponsored by the Council of Deans of Nursing and Midwifery in Australia and New Zealand, the Journal Evaluation Tool (JET) rates journals according to four quality band scores [9]. The JET involves peer ranking of journals and was designed to overcome some of the limitations of traditional metrics such as impact factors, which have been said to disadvantage groups such as nurses, midwives, and general practitioners. One significant drawback of the JET is that it has no standing outside Australia and New Zealand. Clearly, researchers and scholars operate in an international environment and so need to be mindful of internationally (rather than locally) recognised measures.

3.8. Web 2.0 and Social Media

Twitter is a microblogging platform that allows users to “tweet” text of up to 140 characters and is publicly available to anyone with online access. Twitter is commonly used for personal communications; however, it is rapidly being adopted for work-related purposes, particularly scholarly communication, as a method of sharing and disseminating information, which is central to the work of an academic [31]. Recently, there has been rapid growth in the uptake of Twitter by nursing and midwifery academics to network, share ideas and common interests, and promote their scientific findings.

3.9. Twitter Citation, Twimpact Factor, and Twindex

A study conducted by Eysenbach [32] investigated the predictive ability of Twitter for “citations,” defined as “direct or indirect links from a tweet to a peer-reviewed scholarly article online” [33]. Eysenbach developed a metric he termed the “twimpact factor” and suggested that it may be useful and timely for measuring the uptake of research findings and for filtering, in real time, research findings that resonate with the public [32].

The twimpact factor (twn) is a novel metric of immediate impact in social media, defined as the cumulative number of tweetations within n days of publication (e.g., tw7 is the total number of tweetations within 7 days of publication). If the concept is applied to other social media platforms, URL mentions play the role of tweetations [32].

The twindex is a metric ranging from 0 to 100 indicating the relative standing of an article compared with other articles. The twindex7 of a specific article is its rank percentile when all comparator articles are ranked by the twimpact factor tw7. For example, if an article has the highest tw7 among its comparators, it has a twindex of 100. In Eysenbach’s seminal work on the ability of tweets to predict citations, articles with a twindex above 75 often turned out to be the most cited [32].
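The two definitions above can be sketched in a few lines. The percentile convention used here (fraction of comparators at or below the article) is an assumption; Eysenbach's exact ranking procedure may differ, and all counts are hypothetical.

```python
def twimpact(daily_tweetations, n=7):
    """tw_n: cumulative number of tweetations within n days of publication."""
    return sum(daily_tweetations[:n])

def twindex(article_tw, comparator_tws):
    """Rank percentile (0-100) of an article's tw_n among comparator articles
    (assumed convention: share of comparators at or below the article)."""
    at_or_below = sum(1 for tw in comparator_tws if tw <= article_tw)
    return 100 * at_or_below / len(comparator_tws)

tws = [3, 10, 25, 40, 80]                    # tw7 for five comparator articles
print(twimpact([5, 9, 4, 3, 2, 1, 1, 30]))   # 25 (the day-8 count is excluded)
print(twindex(80, tws))                      # 100.0: highest tw7 -> twindex 100
print(twindex(40, tws))                      # 80.0
```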

Whilst the study identified that the buzz of the blogosphere is measurable, many limitations were also noted, including the fundamental question of whether the number of hits is a meaningful metric of success. The authors also observed that correlation is not causation: it is difficult to determine whether additional citations are a result of the social media buzz or whether the underlying quality of the article, or its newsworthiness, drives both the buzz and the citations; it is most likely a combination of both [32]. This novel study warrants further investigation into the sensitivity and specificity of such metrics for predicting citations, particularly in nursing and midwifery.

3.10. Forecasting Popularity in Social Media

A preliminary study by Yan and Kaziunas identified that merely measuring the dominance of an academic institution on Twitter is not a comprehensive measure of the true worth of a tweet, and that users in academic institutions are more likely to derive value from the quality of the content; the results are limited by the small sample size [34]. Bandari and colleagues [35] suggested that one of the most significant predictors of popularity in social media is the news source of an article, consistent with the observation that readers are often influenced by the source disseminating it [35]. While popularity, or the number of hits or tweets, may not be directly related to quality and impact, one could extrapolate that a hit or tweet does at least indicate interest, with the possibility that the article will be read and may be used in some way to inform clinical practice or scholarly work.

3.11. Klout!, PeerIndex, and Kred

A range of online services, such as Klout!, PeerIndex, and Kred, attempt to measure influence in social media using various (undisclosed) algorithms and metrics; all are available free of charge. Klout! (http://www.klout.com/) uses 35 variables to compile “influence” scores, including the number of active followers a user has on Twitter, number of responses or retweets, and how influential the audience is. A higher Klout score indicates a stronger influence of the individual on the social media community [36]. A Klout score begins at 40.

Similarly, PeerIndex (http://www.peerindex.com/) calculates a score that is a relative measure of a user’s online authority, reflecting the impact of a user’s online activities and the extent to which they have built up social and reputational capital on the Internet [37]. There are three components to a PeerIndex score: authority, audience, and activity. Authority is a measure of trust, calculated from how much other users rely on the user’s recommendations and opinions. Audience is an indication of a user’s reach, accounting for the relative size of the user’s audience. Activity is a measure of how much the user does in the topic communities of which they are part [37]. Lastly, Kred (http://www.kred.com/) measures influence on, and outreach to, a user’s social communities in real time [38]. Influence scores range from 1 to 1000, where influence is measured by the user’s ability to persuade others to take action, such as retweets or replies on Twitter, or Facebook “likes” or “shares.” Outreach points are combined into levels. Kred scans the Twittersphere for topics trending in communities and looks through a user’s followers to identify communities and content those followers have not yet published [38]. Of these three online influence calculators, Kred claims to have the most transparent measures of influence and outreach in social media, through the generation of unique scores for every domain of expertise [38].

These are a taste of the tools available to measure and examine impact in the social media and online world. Others exist including Twitter Grader and Social Bro. The main disadvantage with such tools is that they merely measure activity and engagement. However, central to an academic’s work are credibility and peer review.

4. Discussion

In reducing impact to a quantitative, numerical score, it could be argued that bibliometrics are highly reductionist and, when viewed in isolation, are not representative of a researcher’s performance or capacity. On this view, bibliometric measures are only one aspect of performance upon which academic and scientific standing can be judged. However, bibliometrics have high utility, and this is likely to continue because, in pragmatic terms, they represent a relatively simple and, notwithstanding their weaknesses, accurate data source.

As we have suggested earlier in this paper, there are various sources of bibliometric data, each with its strengths and limitations. What is needed is broad agreement on the most useful indices. Bibliometric measures are best applied in combination with measures of impact and esteem, but these other measures are far more difficult to quantify. Measures of esteem are defined as the recognition of researchers by their peers for their achievements, leadership, and contribution to a field of research [39]. This is most easily demonstrated through the award of prizes and prestigious invitations such as international keynote addresses, editorial roles, or membership of peak bodies. However, these measures are also controversial, and in some quarters such activities may be viewed as indicative of an individual’s personal network rather than real evidence of wider professional standing or esteem. Increasingly, other measures of influence, drawing on social media criteria, are being identified.

5. Where to from Here?

5.1. Open Access

Expanding access to research findings is paramount for scientific progress. Debate continues concerning the public’s right to access publicly funded research findings. The National Institutes of Health (NIH) in the United States now mandates that the published results of all NIH-funded research be archived in the National Library of Medicine’s PubMed Central and made available to the public no later than 12 months after publication [40]. Other public funding bodies around the world have begun to adopt a similar approach. Increasing access to scientific work has seen many institutions create open access repositories of articles published by their staff, in a manner consistent with relevant copyright laws; this normally involves making available a copy of the electronic version of the final, peer-reviewed manuscript. Finding a model that is acceptable to the scientific community, funding agencies, governments, and publishers is, however, proving difficult.

Open access journals are increasingly in evidence, and their presence presents new options for scholars seeking to disseminate their work. Their open access status is a major advantage: because these papers are widely (and freely) available, citations should be optimised. A limitation, however, is that many of these journals charge authors a publishing fee, applied in addition to the usual stringent peer review process. A recent survey of 11 peer-reviewed, English-language open access nursing journals reported that only five of the 11 journals had h-indices on Scopus and five had a listed JIF (range: 0.21–2.00), and that publication fees ranged from zero to AU$1945 [41].

5.2. The Individual or the Organisation?

Ranking universities as single entities may not be the most appropriate way to identify where the best discipline-based research is performed, and it is unlikely that any single university will excel in all disciplinary areas. The ranking of disciplines (as independent entities) may therefore have some broader utility, although an unintended consequence may be the stifling of interdisciplinary research, which is clearly an important goal within constrained funding environments.

6. Conclusions

Nurse and midwife researchers can no longer choose to avoid the process and politics of bibliometrics and measures of impact. The productivity and quality of research produced by individual researchers, research groups, and universities are an important metric of their success and contribution to the economy. Despite the criticism and acknowledged weaknesses of bibliometric measures, they perform a vital function in this equation. Like most measures, these indices should be scrutinised for validity and fitness for purpose. This will require ongoing development and evaluation as new opportunities emerge, particularly through online media.

Conflict of Interests

The authors declare that they do not have any financial conflict of interests.

References

  1. D. F. Polit and S. Northam, “Impact factors in nursing journals,” Nursing Outlook, vol. 59, no. 1, pp. 18–28, 2011.
  2. R. D. Shelton and L. Leydesdorff, “Publish or patent: bibliometric evidence for empirical trade-offs in national funding strategies,” Journal of the American Society for Information Science and Technology, vol. 63, no. 3, pp. 498–511, 2012.
  3. H. McKenna, J. Daly, P. Davidson, C. Duffield, and D. Jackson, “RAE and ERA-spot the difference,” International Journal of Nursing Studies, vol. 49, no. 4, pp. 375–377, 2012.
  4. M. C. Dougherty, S.-Y. Lin, H. P. McKenna, K. Seers, and S. Keeney, “Analysis of international content of ranked nursing journals in 2005 using ex post facto design,” Journal of Advanced Nursing, vol. 67, no. 6, pp. 1358–1369, 2011.
  5. S. Ketefian and M. C. Freda, “Impact factors and citation counts: a state of disquiet,” International Journal of Nursing Studies, vol. 46, no. 6, pp. 751–752, 2009.
  6. M. Goltaji and M. S. Shirazi, “The situation of top research centers' websites in the Islamic world countries: a webometric study,” International Journal of Information Science and Management (IJISM), vol. 10, pp. 71–85, 2012.
  7. E. Garfield, “The agony and the ecstasy: the history and meaning of the journal impact factor,” in Proceedings of the International Congress on Peer Review and Biomedical Publication, Chicago, Ill, USA, 2005.
  8. C. Schloegl and J. Gorraiz, “Global usage versus global citation metrics: the case of pharmacology journals,” Journal of the American Society for Information Science and Technology, vol. 62, no. 1, pp. 161–170, 2011.
  9. P. A. Crookes, S. L. Reis, and S. C. Jones, “The development of a ranking tool for refereed journals in which nursing and midwifery researchers publish their work,” Nurse Education Today, vol. 30, no. 5, pp. 420–427, 2010.
  10. D. R. Smith, “A longitudinal analysis of bibliometric and impact factor trends among the core international journals of nursing, 1977–2008,” International Journal of Nursing Studies, vol. 47, no. 12, pp. 1491–1499, 2010.
  11. M.-J. Johnstone, “Journal impact factors: implications for the nursing profession,” International Nursing Review, vol. 54, no. 1, pp. 35–40, 2007.
  12. J. E. Hirsch, “An index to quantify an individual's scientific research output,” Proceedings of the National Academy of Sciences of the United States of America, vol. 102, no. 46, pp. 16569–16572, 2005.
  13. T. F. Hack, D. Crooks, J. Plohman, and E. Kepron, “Research citation analysis of nursing academics in Canada: identifying success indicators,” Journal of Advanced Nursing, vol. 66, no. 11, pp. 2542–2549, 2010.
  14. G. E. Hunt, M. Cleary, D. Jackson, R. Watson, and D. R. Thompson, “Citation analysis: focus on leading Australian nurse authors,” Journal of Clinical Nursing, vol. 20, no. 23-24, pp. 3273–3275, 2011.
  15. D. R. Thompson and R. Watson, “H-indices and the performance of professors of nursing in the UK,” Journal of Clinical Nursing, vol. 19, no. 21-22, pp. 2957–2958, 2010.
  16. D. F. Taber, “Quantifying publication impact,” Science, vol. 309, no. 5744, p. 2166, 2005.
  17. L. Wilkes and D. Jackson, “Trends in publication of research papers by Australian-based nurse authors,” Collegian, vol. 18, no. 3, pp. 125–130, 2011.
  18. D. R. Thompson and A. M. Clark, “The five top bad reasons nurses don't publish in impactful journals,” Journal of Advanced Nursing, vol. 68, pp. 1675–1678, 2012.
  19. G. E. Hunt, B. Happell, S. W. C. Chan, and M. Cleary, “Citation analysis of mental health nursing journals: how should we rank thee?” International Journal of Mental Health Nursing, vol. 21, no. 6, pp. 576–580, 2012.
  20. S. Payne, J. Seymour, G. Grande et al., “An evaluation of research capacity building from the Cancer Experiences Collaborative,” BMJ Supportive and Palliative Care, vol. 2, pp. 280–285, 2012.
  21. P. M. Ironside, “Advancing the science of nursing education: rethinking the meaning and significance of impact factors,” Journal of Continuing Education in Nursing, vol. 38, no. 3, pp. 99–100, 2007.
  22. L. Leydesdorff and L. Bornmann, “Integrated impact indicators compared with impact factors: an alternative research design with policy implications,” Journal of the American Society for Information Science and Technology, vol. 62, no. 11, pp. 2133–2146, 2011.
  23. Miniwatts Marketing Group, “Internet World Stats,” 2012.
  24. J. Priem and B. M. Hemminger, “Scientometrics 2.0: toward new metrics of scholarly impact on the social Web,” First Monday, vol. 15, no. 7, 2010.
  25. C. Graf, “What IJCP authors think about open access: exploring one possible future for publishing clinical research in a general and internal medicine journal,” International Journal of Clinical Practice, vol. 66, no. 2, pp. 116–118, 2012.
  26. L. Armstrong, M. Berry, and R. Lamshed, “Blogs as electronic learning journals,” E-Journal of Instructional Science and Technology, vol. 7, no. 1, 2012.
  27. L. Björneborn and P. Ingwersen, “Toward a basic framework for webometrics,” Journal of the American Society for Information Science and Technology, vol. 55, no. 14, pp. 1216–1227, 2004.
  28. K. Kousha, M. Thelwall, and S. Rezaie, “Using the Web for research evaluation: the Integrated Online Impact indicator,” Journal of Informetrics, vol. 4, no. 1, pp. 124–135, 2010.
  29. T. C. Almind and P. Ingwersen, “Informetric analyses on the world wide web: methodological approaches to ‘webometrics’,” Journal of Documentation, vol. 53, no. 4, pp. 404–426, 1997.
  30. M. Thelwall, “Bibliometrics to webometrics,” Journal of Information Science, vol. 34, no. 4, pp. 605–621, 2008.
  31. A. Java, X. Song, T. Finin, and B. Tseng, “Why we twitter: understanding microblogging usage and communities,” in Proceedings of the 9th WebKDD and 1st SNA-KDD Workshop on Web Mining and Social Network Analysis, pp. 56–65, San Jose, Calif, USA, August 2007.
  32. G. Eysenbach, “Can tweets predict citations? Metrics of social impact based on Twitter and correlation with traditional metrics of scientific impact,” Journal of Medical Internet Research, vol. 13, no. 4, Article ID e123, 2011.
  33. J. Priem and K. L. Costello, How and Why Scholars Cite on Twitter, ASIST, Pittsburgh, Pa, USA, 2010.
  34. J. L. S. Yan and E. Kaziunas, “What is a tweet worth? Measuring the value of social media for an academic institution,” in Proceedings of the iConference: Culture, Design, Society (iConference '12), pp. 565–566, Ontario, Canada, February 2012.
  35. R. Bandari, S. Asur, and B. Huberman, “The pulse of news in social media: forecasting popularity,” in Proceedings of the 6th International AAAI Conference on Weblogs and Social Media, Dublin, Ireland, June 2012.
  36. Klout, “Klout: the standard for influence,” 2012, http://klout.com.
  37. PeerIndex, “PeerIndex Scoring Methodology,” 2012, http://www.peerindex.com/help/scores.
  38. Kred, “The Kred Guide,” 2012.
  39. S. McKay, “Social policy excellence-peer review or metrics? Analyzing the 2008 research assessment exercise in social work and social policy and administration,” Social Policy and Administration, vol. 46, no. 5, pp. 526–543, 2012.
  40. National Institutes of Health, “The NIH Public Access Policy,” 2012, http://publicaccess.nih.gov/.
  41. R. Watson, M. Cleary, D. Jackson, and G. E. Hunt, “Open access and online publishing: a new frontier in nursing?” Journal of Advanced Nursing, vol. 68, pp. 1905–1908, 2012.