Wednesday, 18 November 2009
Article-Level Metrics And The Evolution Of Scientific Impact
Formally published papers that have been through a traditional prepublication peer review process remain the most important means of communicating science today. Researchers depend on them to learn about the latest advances in their fields and to report their own findings. The intentions of traditional peer review are certainly noble: ... . In principle, this system enables science to move forward on the collective confidence of previously published work. Unfortunately, the traditional system has inspired methods of measuring impact that are suboptimal for their intended use.
Measuring Impact
Peer-reviewed journals have served an important purpose in evaluating submitted papers and readying them for publication. In theory, one could browse the pages of the most relevant journals to stay current with research on a particular topic. But as the scientific community has grown, so has the number of journals—to the point where over 800,000 new articles appeared in PubMed in 2008 ... and the total is now over 19 million ... . The sheer number makes it impossible for any scientist to read every paper relevant to their research, and a difficult choice has to be made about which papers to read. Journals help by categorizing papers by subject, but in most fields there remain far too many journals and papers to follow.
As a result, we need good filters for quality, importance, and relevance to apply to scientific literature. There are many we could use, but the majority of scientists filter by preferentially reading articles from specific journals—those they view as the highest quality and the most important. These selections are highly subjective, but the authors' personal experience is that most scientists, when pressed, will point to the Thomson ISI Journal Impact Factor [1] as an external and “objective” measure for ranking the impact of specific journals and the individual articles within them.
Yet the impact factor, which averages the number of citations per eligible article in each journal, is deeply flawed both in principle and in practice as a tool for filtering the literature. It is mathematically problematic ... with around 80% of a journal impact factor attributable to around 20% of the papers, even for journals like Nature ... . It is very sensitive to the categorisation of papers as “citeable” ... and it is controlled by a private company that does not have any obligation to make the underlying data or processes of analysis available. [snip]
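To see concretely why averaging over a skewed citation distribution misleads, consider a minimal, invented example (none of these numbers are real data): the mean that an impact-factor-style calculation reports says very little about the typical article in the journal.

```python
# A purely hypothetical illustration: mean vs. median of a skewed
# citation distribution. All numbers are invented.

citations = [0, 0, 1, 1, 2, 2, 3, 5, 40, 146]  # citations per article in one journal

mean = sum(citations) / len(citations)           # the quantity an impact factor reports
median = sorted(citations)[len(citations) // 2]  # a "typical" article

print(mean)    # 20.0 -- driven almost entirely by two outliers
print(median)  # 2

# Here the top 20% of articles (2 of 10) account for
# (40 + 146) / 200 = 93% of all citations, mirroring the skew
# described above.
```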
Though the impact factor is flawed, it may be useful for evaluating journals in some contexts, and other more sophisticated metrics for journals are emerging ... . But for the job of assessing the importance of specific papers, the impact factor—or any other journal-based metric for that matter—cannot escape an even more fundamental problem: it is simply not designed to capture qualities of individual papers.
Article-Level Metrics
If choosing which articles to read on the basis of journal-level metrics is not effective, then we need a measure of importance that tells us about the article. It makes sense that when choosing which of a set of articles to read, we should turn to “article-level metrics,” yet in practice data on individual articles are rarely considered, let alone seriously measured.
Perhaps the main reason for this absence is a practical one. Accurately determining the importance of an article takes years and is very difficult to do objectively. The “gold standard” of article impact is formal citations in the scholarly literature, but citation metrics have their own challenges. One is that citation metrics do not take the “sentiment” of the citation into account, so while an article that is heavily cited for being wrong is perhaps important in its own way ... , using citation counts without any context can be misleading. The biggest problem, though, is the time-delay inherent in citations. [snip]
The Trouble with Comments
A common solution proposed for getting rapid feedback on scientific publications is inspired by the success of many Web-based commenting forums. Sites like Stack Overflow, Wikipedia, and Hacker News each have an expert community that contributes new information and debates its value and accuracy. It is not difficult to imagine translating this dynamic into a scholarly research setting where scientists discuss interesting papers. A spirited, intelligent comment thread can also help raise the profile of an article and engage the broader community in a conversation about the science.
Unfortunately, commenting in the scientific community simply hasn't worked, at least not generally. [snip]
[snip]
Part of this resistance to commenting may relate to technical issues, but the main reason is likely social. For one thing, researchers are unsure how to behave in this new space. We are used to criticizing articles in the privacy of offices and local journal clubs, not in a public, archived forum. [snip]
Another issue is that the majority of people making hiring and granting decisions do not consider commenting a valuable contribution. [snip]
Then there is simply the size of the community. [snip] But it also means that if only 100 people read a paper, it will be lucky if even one of them leaves a comment.
Technical Solutions to Social Problems
Given the lack of incentive, are there ways of capturing article-level metrics from what researchers do anyway? A simple way of measuring interest in a specific paper might be via usage and download statistics; for example, how many times a paper has been viewed or downloaded, how many unique users have shown an interest, or how long they lingered. [snip] These statistics may not be completely accurate but they are consistent, comparable, and considered sufficiently immune to cheating to be the basis for a billion-dollar Web advertising industry.
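As a rough sketch of what collecting such statistics might involve, the following aggregates a hypothetical access log into per-article counts; the log format, field names, and values are all assumptions for illustration, not any publisher's actual schema.

```python
# A sketch of turning raw access logs into article usage statistics.
# The log format (article_id, user_id, action, seconds_on_page) is an
# assumption for illustration; real publisher logs will differ.

from collections import defaultdict

log = [
    ("pbio.1000242", "u1", "view", 95),
    ("pbio.1000242", "u2", "view", 12),
    ("pbio.1000242", "u1", "download", 0),
    ("pbio.1000242", "u3", "view", 240),
]

stats = defaultdict(lambda: {"views": 0, "downloads": 0, "users": set(), "seconds": 0})

for article, user, action, seconds in log:
    s = stats[article]
    s["users"].add(user)     # unique users who showed interest
    s["seconds"] += seconds  # total time spent on the page
    if action == "view":
        s["views"] += 1
    elif action == "download":
        s["downloads"] += 1

for article, s in stats.items():
    mean_dwell = s["seconds"] / max(s["views"], 1)  # mean time per view
    print(article, s["views"], s["downloads"], len(s["users"]), mean_dwell)
```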
A more important criticism of download statistics is that they are a crude measure of actual use. How many of the downloaded papers are even read, let alone digested in detail and acted upon? What we actually want to measure is how much influence an article has, not how many people clicked on the download button thinking they “might read it later.” A more valuable metric might be the number of people who have actively chosen to include the paper in their own personal library. [snip]
Examples of such tools are Zotero, CiteULike, Connotea, and Mendeley, which all allow the researcher to collect papers into their library while they are browsing on the Web, often in a single click using convenient “bookmarklets.” The user usually has the option of adding tags, comments, or ratings as part of the bookmarking process. [snip]
Metrics collected by reference management software are especially intriguing because they offer a measure of active interest without requiring researchers to do anything more than what they are already doing. Scientists collect the papers they find interesting, take notes on them, and store the information in a place that is accessible and useful to them. [snip]
Part of the solution to encouraging valuable contributions, then, may simply be that the default settings involve sharing and that people rarely change them. A potentially game-changing incentive, however, may be the power to influence peers. [snip]
It is too early to tell whether any specific tools will last, but they already demonstrate an important principle: a tool that works within the workflow that researchers are already using can more easily capture and aggregate useful information. [snip]
The Great Thing about Metrics…Is That There Are So Many to Choose From
There are numerous article-level metrics ... and each has its own advantages and problems. Citation counts are an excellent measure of influence and impact but are very slow to collect. Download statistics are rapid to collect but may be misleading. Comments can provide valuable and immediate feedback, but are currently sparse ... . Bookmarking statistics can be rapid to collect and contain high-quality information, but are largely untested and require the widespread adoption of unfamiliar tools. Alongside these we have “expert ratings” by services such as Faculty of 1000 and simple rating schemes.
[snip]
“Other Indicators of Impact” include ratings and comments, which, like page views, are immediate but may offer more insight because users are more likely to have read the article and found it compelling enough to respond. Further indicators are bookmarks, used by some people to keep track of articles of interest to them, and blog posts and trackbacks, which indicate where else on the Web the article has been mentioned and can be useful for linking to a broader discussion. It is clear that all of these types of data provide different dimensions, which together can give a clearer picture of an article's impact.
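To make the “different dimensions” point concrete, here is a hedged sketch that presents an article's metrics as a normalized profile rather than collapsing them into a single number; every metric value and field-wide maximum below is invented for illustration.

```python
# An illustrative sketch of presenting several article-level metrics
# side by side as a normalized profile rather than a single number.
# All metrics, values, and field-wide maxima are invented.

article = {
    "citations": 14,    # slow to accumulate but authoritative
    "downloads": 2300,  # fast but crude
    "bookmarks": 37,    # active interest via reference managers
    "comments": 2,      # immediate but sparse
    "blog_posts": 4,    # broader discussion on the Web
}

# Invented field-wide maxima, used only to normalize each dimension.
field_max = {"citations": 120, "downloads": 15000, "bookmarks": 200,
             "comments": 25, "blog_posts": 30}

profile = {m: article[m] / float(field_max[m]) for m in article}

for metric, score in sorted(profile.items(), key=lambda kv: -kv[1]):
    print("%10s: %.2f" % (metric, score))
```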
[snip] As recently shown ... , scientific impact is not a simple concept that can be described by a single number. The key point is that journal impact factor is a very poor measure of article impact. And, obviously, the fact that an article is highly influential by any measure does not necessarily mean it should be.
Many researchers will continue to rely on journals as filters, but the more you can incorporate effective filtering tools into your research process, the more you will stay up-to-date with advancing knowledge. The question is not whether you should take article-level metrics seriously but how you can use them most effectively to assist your own research endeavours. We need sophisticated metrics to ask sophisticated questions about different aspects of scientific impact and we need further research into both the most effective measurement techniques and the most effective uses of these in policy and decision making. For this reason we strongly support efforts to collect and present diverse types of article-level metrics without any initial presumptions as to which metric is most valuable. [snip]
As Clay Shirky famously said ... , you can complain about information overload but the only way to deal with it is to build and use better filters. It is no longer sufficient to depend on journals as your only filter; instead, it is time to start evaluating papers on their own merits. Our only options are to publish less or to filter more effectively, and any response that favours publishing less doesn't make sense, either logistically, financially, or ethically. The issue is not how to stop people from publishing, it is how to build better filters, both systematically and individually. At the same time, we can use available tools, networks, and tools built on networks to help with this task.
So in the spirit of science, let's keep learning and experimenting, and keep the practice and dissemination of science evolving for the times.
Source
[http://www.plosbiology.org/article/info%3Adoi%2F10.1371%2Fjournal.pbio.1000242]
!!! Thanks To / Garrett Eastman / Librarian / Rowland Institute at Harvard / For The HeadsUp !!!
>>> While These Insights and Suggestions Are An Important Contribution To The Conversation, In Many Ways The Views And Recommendations Are Far From Radical <<<
See My Presentation Delivered At the Workshop On Peer Review, Trieste, Italy, May 23-24, 2003
"Alternative Peer Review: Quality Management for 21st Century Scholarship"
[http://www.public.iastate.edu/~gerrymck/APR-1.ppt]
>>> See In Particular > 'Seize The E!' Section >>> Embrace the potential of the digital environment to facilitate access, retrieval, use, and navigation of electronic scholarship.
>>It's A Large PPT (200+ Slides) But IMHO ... Well Worth The Experience [:-)]<<
AND
The Big Picture(sm): Visual Browsing in Web and non-Web Databases
[http://www.public.iastate.edu/~CYBERSTACKS/BigPic.htm]
To ReQuote T.S. Eliot >
"Where is the wisdom we have lost in knowledge? Where is the knowledge that we have lost in information?"/ T.S. Eliot / The Rock (1934) pt.1
To Quote Me >
"It's Not About Publication, It's About Ideas"
>>> We Now Have The Computational Power To Make Real-Time Conceptual Navigation An EveryDay Occurrence <<<
!!! Let Us Use It To Navigate Ideas !!!
Indeed Let Us Continue "... experimenting, and keep the practice and dissemination of science evolving for the times."
See Also/Related
[http://scholarship20.blogspot.com/2009/09/article-level-metrics-at-plos-addition.html]
[http://scholarship20.blogspot.com/2009/06/article-level-metrics-at-plos-and.html]
Monday, 9 November 2009
Representing Reality with Integrity
Few television shows have received as much academic praise as HBO’s The Wire. From City Hall to the classroom to the street corner, The Wire brilliantly captured the heart and soul of urban America—the same heart and soul meticulously detailed in countless academic studies of urban inequality.
Yet the scholars who research urban America rarely come into contact with the actors who portray urban America in television or film. That is, until two weeks ago when three actors from The Wire—Andre Royo (Bubbles the junkie), Sonja Sohn (Kima the detective), and Michael Williams (Omar the stickup artist)—sat down with three scholars—Harvard sociologists William Julius Wilson and Larry Bobo, and Yale graduate student/Baltimore native Brandon Terry—to discuss the social policy implications and lessons from the show.
After the panel, Royo, Sohn, and Williams were escorted to a private after-party at The Harvard Lampoon, a building that houses the Harvard undergraduate humor magazine of the same name. Since the panel’s moderator is also my colleague in the Sociology department, I received a highly coveted (and much appreciated) invite to the party.
As a huge fan of the show, I could barely contain myself. I tried my best to keep it cool, keeping my camera in my pocket and resisting the urge to ask for an autograph. But when I sparked a conversation with Andre Royo, my composure started to fade. See, Bubs was my favorite character on the show, and Royo’s brilliant and careful portrayal of the homeless drug addict made my admiration for the character that much stronger. After a geek-out session about the show’s integration of professional actors with actual Baltimore natives (we went back and forth for a good five minutes listing each and every B-more local that appeared on the show), we started to talk about the inherent difficulty of portraying a junkie.
Of course, playing a junkie isn’t that difficult, if the whole twitching-and-randomly-scratching-yourself caricature is your thing. But fans of The Wire know that Bubbles wasn’t your average junkie. There was an art to the way Royo played Bubs, a unique take on a classic character that fundamentally changed the way we approach “the junkie:” We never pitied him when he fell, but rather rooted for him to rise back up. In a strange way, the down-and-out junkie was the show’s most consistent hero.
According to Royo, a few unpaid consultants helped him develop the character. These consultants showed up at his trailer each morning, followed him and the production crew throughout Baltimore, and advised him through each and every scene. When he threw away a cigarette before smoking it to the filter, for example, they were quick to correct his mistake. How were they privy to the intimate details of life as a junkie? Because these consultants weren’t really consultants at all: They were the men and women that lived their lives on the streets of Baltimore. The very men and women Royo’s character was based on.
After each day, he’d retire to his trailer and remove his makeup. “And when I’d walk out,” he told me, “I looked at these people that had helped me all day, and I could see the betrayal in their eyes. At the end of the day, I could get cleaned up and go home, while they spent the night on the street. They looked at me like a sellout. It [messed] with my head for a long time.” Royo would feel depressed, and after particularly long days he often needed to spend time alone, away from everyone, to gather his thoughts. Here were men and women that could barely get by, struggling with addiction, and Royo was exploiting their lived experiences to get a paycheck. I asked Royo how he dealt with the guilt. “I just tried to portray the character—their world—with humanity. That’s all I could really do,” he replied. “But it was hard for me, emotionally.”
Detailing the lives of marginalized and disadvantaged communities requires a profound responsibility—a responsibility to be humble, compassionate, and above all else, honest. When Royo waxed philosophical, introspectively analyzing his role as both actor and representative, I couldn’t help but connect. As a qualitative researcher of urban inequality, I’m constantly dealing with the label of “privileged white guy that studies poor black people.” On the one hand, I feel a moral obligation to fight for those who are systematically disadvantaged. But on the other hand, such analyses can quickly become deeply paternalistic. Those who are familiar with my work and ideas know that I reject fetishizing “the other,” taking a comprehensive approach that avoids a singular focus on poor people of color. Still, the risk of exploitation never leaves the back of my mind. I never forget that I am making a career out of someone else’s life. I never forget who I am, constantly problematizing my ability to ever fully understand someone else’s world. And I never forget why I do it in the first place or why these issues matter. Like Royo, I constantly question myself, my work, and my role as a researcher.
Representing reality—be it on television or in academic research—requires a commitment to the craft of storytelling. But more importantly, it requires integrity. It was that integrity that made The Wire so powerful, and it is that critical honesty that makes good research. Urban polemics notwithstanding, this was the show’s most valuable lesson.