While industries such as music, newspapers, film and publishing have seen radical changes in their business models and practices as a direct result of new technologies, higher education has so far resisted the wholesale changes we have seen elsewhere. However, a gradual and fundamental shift in the practice of academics is taking place. Every aspect of scholarly practice is seeing changes effected by the adoption and possibilities of new technologies. This book will explore these changes, their implications for higher education, the possibilities for new forms of scholarly practice and what lessons can be drawn from other sectors.
Table of Contents
Acknowledgements
Digital, Networked and Open
Is the Revolution Justified?
Lessons from Other Sectors
The Nature of Scholarship
Researchers and New Technology
Interdisciplinarity and Permeable Boundaries
Public Engagement as Collateral Damage
A Pedagogy of Abundance
Openness in Education
Network Weather
Reward and Tenure
Publishing
The Medals of Our Defeats
Digital Resilience
References
Martin Weller / September 2011 / 256 pp. / http://dx.doi.org/10.5040/9781849666275
Available At
[http://bit.ly/pvz7pz]
Sunday, 29 April 2012
PressForward > Scholarship and Publication, The Web Way
Bringing together the best scholarship from across the web, producing vital, open publications scholarly communities can gather around.
The Roy Rosenzweig Center for History and New Media at George Mason University created PressForward to explore and produce the best means for collecting, screening, and drawing attention to the vast expanse of scholarship that is currently decentralized across the web or does not fit into traditional genres such as the journal article or the monograph.
The web beyond academia has had to develop mechanisms for filtering for quantity, on sites such as Techmeme and The Browser; the academy has honed a set of methods of filtering for quality, through peer review. PressForward aims to marry these old and new methods to expose and disseminate the very best in online scholarship.
PressForward will pioneer new methods for capturing and highlighting presently orphaned or underappreciated scholarship—including “gray literature” such as conference papers, white papers, reports, scholarly blogs, and digital projects—in ways that are useful to scholarly communities. Through a structured study of existing methods and by modeling a new kind of active publication, PressForward will collect data to provide the project and other organizations with improved knowledge about open-web scholarly curation. Meanwhile, PressForward will release an open-source platform for scholarly communities and organizations to create their own trusted, high-value streams of relevant content. All data and code produced by PressForward will be freely available on this site.
Source
[http://pressforward.org/]
Synthesizing New Forms of Scholarly Communication
Keynote Speaker: Dan Cohen, Associate Professor, Department of History and Art History and Director, Roy Rosenzweig Center for History and New Media at George Mason University
Scholarship and scholarly discussion are increasingly occurring outside of the normal channels of scholarly communication, and in new shapes and sizes. Moreover, through social media scholars are able to more rapidly disseminate their work and engage their colleagues and a broader audience. Yet much of this scholarship languishes in these more informal venues. Might it be possible to aggregate, curate, and highlight this content, currently in circulation on blogs and personal, institutional, and conference websites? The PressForward project is exploring methods for filtering and building audiences for this new scholarship that may not fit into the form of an article or monograph. For instance, Digital Humanities Now, one of the PressForward publications, highlights work that has come to the attention of the digital humanities community through distribution via social media and open peer review. Dan Cohen will elaborate on the nature of this new "grey literature," its circulation, and the possible platforms for organizing and distributing it to scholarly communities.
Source
The Future of Peer Review
This guest post was written by Richard Price, founder and CEO of Academia.edu — a site that serves as a platform for academics to share their research papers and to interact with each other.
Instant distribution
Many academics are excited about the future of instant distribution of research. Right now the time lag between finishing a paper and the relevant worldwide research community seeing it is between 6 months and 2 years. [snip].
Many platforms are springing up which enable research distribution to be instant ... [snip]. Some of the strong platforms are Academia.edu, arXiv, Mendeley, ResearchGate and SSRN.
What about peer review?
One question many academics have is: in a future where research is distributed instantly, what happens to peer review? [snip].
Content discovery on the web
Instant distribution is a characteristic of web content, and the web has thrived without a system of formal peer review in place. [snip].
The web has thrived because powerful discovery systems have sprung up that separate the wheat from the chaff for users. The main two systems that people use to discover content on the web are:
- Search engines (Google, Bing)
- Social platforms (mainly sites like Facebook and Twitter, but also generic communication platforms like email, IM, etc.)
Both search engines and social platforms are peer review systems in different ways. One can think of these two systems as “Crowd Review” and “Social Review” respectively:
- Crowd Review: Google’s PageRank algorithm looks at the link structure of the entire web, and extracts a number (PageRank) that represents how positively the web thinks about a particular website (see the sketch after this list).
- Social Review: Twitter and Facebook show you links that have been shared explicitly by your friends, and people you follow.
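To make the “Crowd Review” idea concrete, here is a toy sketch of a PageRank-style calculation: each page's score is built up from the scores of the pages that link to it. The miniature link graph, site names and damping factor below are assumptions for illustration only; the real algorithm runs over the web's entire link structure and includes many refinements.

```python
# Toy illustration of "Crowd Review": PageRank over a tiny, made-up link graph.
# The site names and damping factor are illustrative assumptions, not data
# from the post; real PageRank operates on the whole web's link structure.

def pagerank(links, damping=0.85, iterations=50):
    """links maps each page to the list of pages it links to."""
    pages = list(links)
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / len(pages) for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages.
                share = damping * rank[page] / len(pages)
                for p in pages:
                    new_rank[p] += share
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

links = {
    "site_a": ["site_b", "site_c"],
    "site_b": ["site_c"],
    "site_c": ["site_a"],
    "site_d": ["site_c"],
}
print(pagerank(links))  # site_c accumulates the most "votes" from the crowd
```

The same intuition reappears later in the post: papers "vote" for other papers by citing them, just as sites vote for sites by linking to them.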
One can think of the peer review system in the journal industry as “two person review”:
- Two Person review: Two people are selected to review the paper on behalf of the entire possible audience for that paper.
The drawbacks of the Two Person review process are that it is:
- expensive [snip].
- slow: [snip].
- of questionable quality [snip].
- unchanging [snip].
- a lot of work for the reviewers: [snip].
More and more, academics are discovering research papers nowadays via the web, and in particular, via search engines and social platforms:
- Search engines: Google, Google Scholar, Pubmed
- Social platforms: Academia.edu, arXiv, Mendeley, ResearchGate, blogs, conversations with colleagues over email or IM, Facebook and Twitter.
As research distribution has mostly moved to the web, the discovery engines for research content are becoming the same as those for general web content. The peer review mechanism is evolving from the Two Person review process to the Crowd Review and Social Review processes.
But has the research been done to a high standard?
People often say that the formal peer review process helps ensure that all the accessible research is above a certain minimum quality. [snip].
The experience of the web is that this fear is over-blown. There is no quality floor for content on the web. There is bad content on the web, and there is great content. The job of search engines and social platforms is to ensure that the content that you discover, either via Google or Facebook, is of the good kind. [snip]
Discovery and credit systems are powered by the same metrics
Peer review in the journal industry has historically played another interesting role, other than powering research discovery. [snip].
The peer review system has historically played this dual role, in powering both the discovery system and the credit system, because ultimately research discovery and research credit are about the same issue: which is the good research? [snip].
One new metric of academic credit that has emerged over the last few years is the citation count. Google Scholar makes citation counts public for papers, and so now everyone can see them easily. Citations between papers are like links between websites, and citation counts are an instance of the Crowd Review process.
Legend has it that Larry Page came up with the idea of PageRank after reflecting on the analogy between citations and links. Citation counts nowadays play the dual role of driving discovery on Google Scholar, as they determine the ordering of the search results, and help to determine academic credit.
Academic credit from social platforms
In the case of social platforms, the metric that drives discovery is how much interaction there is with your content on the social platform in question. Examples of such interaction include:
- the number of followers you have
- the number of times your content is shared, liked, commented on, or viewed.
These metrics show how much interest there is in your papers, and how widely they are read right now, and thus provide a sense of their level of impact.
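As a rough sketch of how a social platform might turn these interaction figures into discovery, the snippet below ranks a small feed of papers by a combined interaction score; the field names and weights are illustrative assumptions, not any particular platform's method.

```python
# Hypothetical sketch: ranking a feed of papers by a simple interaction score.
# Field names and weights are invented for illustration.

from typing import Dict, List

def interaction_score(paper: Dict) -> float:
    """Combine views, shares, likes and comments into one attention figure."""
    return (paper.get("views", 0) * 0.1
            + paper.get("shares", 0) * 2.0
            + paper.get("likes", 0) * 1.0
            + paper.get("comments", 0) * 3.0)

def rank_feed(papers: List[Dict]) -> List[Dict]:
    """Order a feed so the most-interacted-with papers surface first."""
    return sorted(papers, key=interaction_score, reverse=True)

feed = [
    {"title": "Paper A", "views": 500, "shares": 3, "likes": 10, "comments": 1},
    {"title": "Paper B", "views": 120, "shares": 20, "likes": 40, "comments": 6},
]
print([p["title"] for p in rank_feed(feed)])  # Paper B surfaces first
```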
One drawback of citation counts as a metric of academic credit is that they are a lagging indicator, in that they take a while to build up. [snip].
The advantage of the kinds of metrics that social platforms like Academia.edu, Mendeley, and SSRN provide is that they are real time, and they fill this credit gap. Academics are increasingly including these real time metrics in their applications for jobs and for grants. [snip].
[snip]
Instant Distribution and Peer Review
The prospect of instant distribution of research is tremendously exciting. If you can tap the global brain of your research community in close to real time, as opposed to waiting 6 to 24 months to distribute your ideas, there could be a wonderful acceleration in the rate of idea generation.
[snip]
The web is also an incredible place for new ideas to be invented and to take hold. No doubt new peer review mechanisms will emerge in the future that will advance beyond Crowd Review and Social Review.
Source and Fulltext Available At
[http://techcrunch.com/2012/02/05/the-future-of-peer-review/]
Tuesday, 24 April 2012
altmetrics12: An ACM Web Science Conference 2012 Workshop > Evanston, IL > 21 June 2012
About the Workshop
Increasing scholarly use of Web2.0 tools like CiteULike, Mendeley, Twitter, and blog-style article commenting presents an opportunity to track scholarly impact in novel ways. Metrics based on this diverse set of Web sources could yield broader, richer, and more timely assessments of current and potential scholarly impact. Realizing this, many authors have begun to investigate these altmetrics.
altmetrics12 encourages continued investigation into the properties of these metrics: their validity, their potential value and flaws, and their relationship to established measures. Submissions are invited from a variety of areas:
- New metrics based on social media
- Tracking science communication on the Web
- Relation between traditional metrics and altmetrics
- Peer-review and altmetrics
- Tools for gathering, analyzing, disseminating altmetrics
This workshop is a follow-up to the successful altmetrics11 workshop hosted by WebSci’11.
- 2-page abstracts due > May 18, 2012
- Acceptance and abstract publication > May 22, 2012
- Open pre-workshop discussion > May 22 – June 18, 2012
- Workshop at WebSci 12 > June 21, 2012
- Discussion closed > June 30, 2012
- Invitations for post-workshop proceedings > TBA
[more]
Source and Links Available At
[http://altmetrics.org/altmetrics12/]
Thursday, 19 April 2012
Post-Publication Peer Review: What Value Do Usage-Based Metrics Offer?
Posted by David Crotty ⋅ Apr 19, 2012 ⋅ 13 Comments
A PLoS ONE article recently went viral, hitting the front page of Reddit and garnering an amazing amount of reader interest. This was great news for the journal and the paper’s authors, but it also raises questions about the notion of post-publication peer review.
As Kent Anderson recently discussed, the idea of post-publication peer review is nothing new — it’s called “science”. Publication of the paper is an end of one process but the beginning of another. [snip].
The proposed revolution, then, is not in the concept but in the tools available: ways to open that conversation up worldwide and to track the life of a paper after it has been published, to better measure its true impact. Despite initial momentum, movement toward implementation of these new technologies seems to have hit a stalling point.
[snip]
Doing away with pre-publication peer review and replacing it entirely with post-publication review seems to have garnered little support in the research community. F1000 Research will be the biggest test of whether this has any viability. Their approach seems more a strategy meant to increase publisher revenue than one meant to benefit researchers. [snip].
[snip]
That leaves the search for new metrics (“altmetrics”) as perhaps the greatest hope for near-term improvement in our post-publication understanding of a paper’s value. The Impact Factor is a reasonable, if flawed, measurement of a journal, but a terrible method for measuring the quality of work in individual papers or from individual researchers. [snip]
[snip]
Metrics based on social media coverage of an article tell us more about the author’s ability to network than about their actual experiments. Metrics based on article usage are even harder to interpret as they offer information on reader interest and subject popularity, rather than quality of the article itself.
[snip]
For the mainstream of science journals, usage-based metrics don’t seem to offer the much-desired replacement for the Impact Factor. There is value in understanding the interest drawn by research, but that value is not the same as measuring the quality of that research.
So far we’re mining all the easy and obvious metrics we can find. But they don’t offer us the information we really need. Until better metrics that truly deliver meaningful data on impact are offered, the altmetrics approach is in danger of stalling out. This points to a major crossroads for the field.
Like so many new technologies, there’s an initial rush of enthusiasm as we think about how it could fit with scholarly publishing. But then we hit a point where the easy and obvious approaches are exhausted without much return. Now the hard work begins.
Source and Fulltext Available At
[http://bit.ly/IQ9W2N]
Is Google Scholar Useful for Bibliometrics? A Webometric Analysis
Aguillo, I.F. / (2012) Scientometrics, 91 (2), pp. 343-351.
Abstract
Google Scholar, the academic bibliographic database provided free-of-charge by the search engine giant Google, has been suggested as an alternative or complementary resource to the commercial citation databases like Web of Knowledge (ISI/Thomson) or Scopus (Elsevier). In order to check the usefulness of this database for bibliometric analysis, and especially research evaluation, a novel approach is introduced. Instead of names of authors or institutions, a webometric analysis of academic web domains is performed.
The bibliographic records for 225 top-level web domains (TLDs), 19,240 university and 6,380 research centre institutional web domains have been collected from the Google Scholar database. About 63.8% of the records are hosted in generic domains like .com or .org, confirming that most of the Scholar data come from large commercial or non-profit sources. Considering only institutions with at least one record, one-third of the other items (10.6% of the global total) are hosted by the 10,442 universities, while 3,901 research centres account for an additional 7.9% of the total.
The individual analysis shows that universities from China, Brazil, Spain, Taiwan or Indonesia are far better ranked than expected. In some cases, large international or national databases, or repositories are responsible for the high numbers found. However, in many others, the local contents, including papers in low-impact journals, popular scientific literature, and unpublished reports or teaching support materials, are clearly overrepresented. Google Scholar lacks the quality control needed for its use as a bibliometric tool; the larger coverage it provides consists in some cases of items not comparable with those provided by other similar databases.
Source
[http://bit.ly/IyJTPD]
The Next Revolution in Science ... Open Access Will Open New Ways to Measure Scientific Output
Posted on April 19, 2012 by Tom Olijhoek
Open Access will not only change the way that science is done, it will also change the way that science is judged. The way that scientific output is measured today centers around citations. Essentially, on an author level this means the number of publications and citations of an author’s articles (author-level metrics). On a journal level, it means the average number of citations that articles published in that journal have received in a given time period (journal-level metrics).
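As a rough illustration of the two levels described above, here is a minimal sketch that computes the simple author-level figures (number of publications and total citations) and an impact-factor-style journal average; the citation numbers are invented and the handling of time windows is deliberately simplified.

```python
# Minimal sketch of the two levels of citation-based metrics described above.
# All citation figures below are made-up illustrations.

def journal_level_metric(citation_counts):
    """Average citations per article for one journal over a fixed time window."""
    if not citation_counts:
        return 0.0
    return sum(citation_counts) / len(citation_counts)

def author_level_metrics(citations_per_paper):
    """The simple author-level figures: number of publications and total citations."""
    return {
        "publications": len(citations_per_paper),
        "total_citations": sum(citations_per_paper),
    }

print(journal_level_metric([2, 18, 11, 3, 22, 9]))  # ~10.83 citations/article
print(author_level_metrics([4, 0, 12, 7]))           # 4 papers, 23 citations
```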
For author-level metrics, the Author Citation Index has now been replaced by the H-Index, which was introduced in 2005 by J.E. Hirsch. Here the criterion is the number of articles [n] that have received ≥ n citations at a fixed date. [snip]
[snip]
The H-Index, Google Scholar metrics and the JIF are all rather good indicators of scientific quality. However, in measuring real-world impact they are seriously flawed. Think for a moment of how impact is felt for whatever random topic you can think of. Every one of us will consider the publication itself, but probably also downloads, pageviews, blogs, comments, Twitter, different kinds of media and social network activity (Google+, Facebook), among other things. In other words, all of the “talking” that can be measured through social media and other online activity can be used to give a more realistic impression of the real impact of a given research article. Since talking about articles depends on actually being able to read them, this is where open access comes into play. The use of the proposed kind of article-level metrics only makes sense when many people are able to discuss the actual content of published articles, which in turn is only possible when articles are open access. The optimal conditions for using altmetrics would be when all articles are published as open access, but even with the current growth of open-access publishing the method is already starting to make sense.
A number of article-level metrics services are currently in the start-up phase. A company called Altmetric is a small London-based start-up focused on making article-level metrics easy. They do this by watching social media sites, newspapers and magazines for any mentions of scholarly articles. The result is an “altmetric” score, which is a quantitative measure of the quality and quantity of attention that a scholarly article has received.
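The sketch below illustrates the general shape of such a score: a weighted tally of mentions across source types. The source list and weights are invented for the example; they are not Altmetric's actual algorithm or data.

```python
# Hypothetical sketch of an attention score: a weighted tally of mentions
# across source types. Sources and weights are invented for illustration and
# are NOT Altmetric's actual algorithm.

SOURCE_WEIGHTS = {
    "news_outlet": 8.0,
    "blog": 5.0,
    "twitter": 1.0,
    "facebook": 0.25,
}

def attention_score(mentions):
    """mentions maps a source type to how many times the article was mentioned there."""
    return sum(SOURCE_WEIGHTS.get(source, 0.0) * count
               for source, count in mentions.items())

print(attention_score({"news_outlet": 1, "blog": 2, "twitter": 15}))  # 33.0
```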
[snip] Finally, Total-Impact also makes extensive use of social media analysis and other online statistics to provide a tool for measuring the total impact of a given collection of scientific articles, datasets and other outputs. Their focus on collections represents yet another approach to the problem of evaluating scientific output.
The previous overview is probably far from complete, so please feel free to add other possibilities in your comments to this post. However, I do think that the description above is an accurate reflection of the fact that the field of bibliometrics is moving fast and that Open Access will provide the key to the development and implementation of better ways to evaluate scientific output. Compared with the current practices, all of which are based on citations only, the inclusion of altmetrics plus online usage statistics and post-publication peer-review in an open access world will represent a true revolution in the way that science is perceived by all, scientists included.
Source and Fulltext Available At
[http://bit.ly/J7VjK1]
Sunday, 15 April 2012
1st International Workshop on Mining Scientific Publications
1. INTRODUCTION
Digital libraries that store scientific publications are becoming increasingly central to the research process. They are not only used for traditional tasks, such as finding and storing research outputs, but also as a source for discovering new research trends or evaluating research excellence. With the current growth of scientific publications deposited in digital libraries, it is no longer sufficient to provide only access to content. To aid research, it is increasingly important to improve the process by which research is done.
Recent developments in natural language processing, information retrieval and the semantic web make it possible to transform the way we work with scientific publications. However, in order to be able to improve these technologies and carry out experiments, researchers need to be able to easily access and use large databases of scientific publications.
This workshop aims to bring together people from different backgrounds who:
(a) are interested in analysing and mining databases of scientific publications,
(b) develop systems that enable such analysis and mining of scientific databases or
(c) develop novel technologies that improve the way research is being done.
2. TOPICS
The topics of the workshop will be organised around the following three themes:
- Infrastructures, systems, datasets or APIs that enable analysis of large volumes of scientific publications.
- Semantic enrichment of scientific publications by means of text-mining, crowdsourcing or other methods.
- Analysis of large databases of scientific publications to identify research trends, high impact, cross-fertilisation between disciplines, research excellence etc.
Topics of interest relevant to theme 1 include, but are not limited to:
- Systems, services, datasets or APIs for accessing scientific publications and/or research data. The existence of datasets, services, systems and APIs (in particular those that are open) providing access to large volumes of scientific publications and their metadata is an essential prerequisite for being able to research and develop new technologies that can transform the way people do research. We invite papers presenting new systems, services, APIs or datasets that enable people to access databases of scientific publications and carry out their analysis. Papers addressing Open Access are of special interest. We also invite papers that discuss issues and current challenges in the design of these systems or address the issues of accessing and managing scientific publications and/or research datasets.
Topics of interest relevant to theme 2 include, but are not limited to:
- Novel information extraction and text-mining approaches to semantic enrichment of publications. This might range from mining publication structure, such as title, abstract, authors, citation information etc. to more challenging tasks, such as extracting names of applied methods, research questions (or scientific gaps), identifying parts of the scholarly discourse structure etc.
- Automatic categorization and clustering of scientific publications. Methods that can automatically categorize publications according to an established subject-based classification/taxonomy (such as Library of Congress classification, UNESCO thesaurus, DOAJ subject classification, Library of Congress Subject Headings) are of particular interest. Other approaches might involve automatic clustering or classification of research publications according to various criteria.
- New methods and models for connecting and interlinking scientific publications. Scientific publications in digital libraries are not isolated islands. Connecting publications using explicitly defined citations is very restrictive and has many disadvantages. We are interested in innovative technologies that can automatically connect and interlink publications or parts of publications, according to various criteria, such as semantic similarity, contradiction, argument support or other relationship types.
- Models for semantically representing and annotating publications. This topic is related to aspects of semantically modeling publications and scholarly discourse. Models that are practical with respect to the state-of-the-art in Natural Language Processing (NLP) technologies are of special interest.
- Semantically enriching/annotating publications by crowdsourcing. Crowdsourcing can be used in innovative ways to annotate publications with richer metadata or to approve/disapprove annotations created using text-mining or other approaches. We welcome papers that address the following questions: (a) what incentives should be provided to motivate users in contributing metadata, (b) how to apply crowdsourcing in the specialized domains of scientific publications, (c) what tasks in the domain of organising scientific publications is crowdsourcing suitable for and where it might fail, (d) other relevant crowdsourcing topics relevant to the domain of scientific publications.
Topics of interest relevant to theme 3 include, but are not limited to:
- New methods, models and innovative approaches for measuring impact of publications. The most widely used metrics for measuring impact are based on citations. However, counting citations does not take into account the publication content and the qualitative nature of the citation. In addition, there is a delay between the publication and the measurable impact in citations. We in particular encourage papers addressing new ways of evaluating publications’ impact beyond standard citation measures.
- New methods for measuring the performance of researchers. Methods for assessing the impact of a publication can often be extended to methods that can assess the impact of individual researchers. However, there are also other criteria for measuring impact in addition to publications, such as the development and publication of research data or economic and market impact, that should also be taken into account. We welcome papers addressing these aspects.
- New methods for measuring the impact of research groups. The considerations that apply to measuring the impact of individual researchers also hold for research groups and communities.
- Methods for identifying research trends and cross-fertilization between research disciplines. Identifying research trends should allow discovering newly emerging disciplines or it should help to explain why certain fields are attracting the attention of a wider research community. Such monitoring is important for research funders and governments in order to be able to quickly respond to new developments. We invite papers discussing new methods for identifying trends and cross-fertilization between research disciplines using methods ranging from social network analysis and text- and data-mining to innovative visualization approaches.
- Applications of mining scientific databases. New methods and models developed for mining scientific publications can be applied in many different scenarios, such as improving access to scientific publications, providing exploratory search in digital collections, or identifying experts. We encourage papers describing innovative approaches that use scientific publications and data to solve real-world problems.
[snip]
5. IMPORTANT DATES
April 30, 2012 - Submission deadline
May 12, 2012 - Notification of acceptance
May 25, 2012 - Camera-ready
June 14, 2012 - Workshop
[snip]
Source and Fulltext Available At
[http://core-project.kmi.open.ac.uk/jcdl2012/]
Google Starts Ranking Journals
On April 1, 2012, Google announced a new feature of its Scholar service ... called Google Scholar Metrics. The service follows the same principle that has made Google's web search engine so successful: when you are unsure what a user is looking for, give them a list of options ranked by a metric of popularity. In this instance, the users are academics ready to submit their next breakthrough but uncertain which journal to choose. The solution Scholar Metrics offers is a database summarizing the sway of the distributors of scholarship "to help authors as they consider where to publish their new research".
Here's how it works. Google creates a list of all the articles a journal has published in a specified period of time. The citations to each article are counted in order to determine the publication's h-index, which is the largest number h such that each of a set of h articles has been cited h or more times. As an example of how the h-index is calculated, consider a publication that has published six articles with 2, 18, 11, 3, 22, and 9 citations, respectively. This gives the journal an h-index of four. Articles meeting the h-index criterion constitute the h-core. In the example, the core is the articles with 18, 11, 22 and 9 citations. Within the h-core, the median of the citation counts is used to assess the typical influence among the most highly cited set and is reported as the h-median.
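As a check on the worked example, here is a minimal Python sketch that computes the h-index, h-core and h-median from a list of citation counts using the definitions above; the citation figures are the ones from the example, and the printed h-median simply applies the median definition to that h-core.

```python
# Reproducing the worked example above: h-index, h-core and h-median for a
# journal's citation counts, following the definitions given in the post.

def scholar_metrics(citations):
    ranked = sorted(citations, reverse=True)
    # h-index: the largest h such that h articles have at least h citations each.
    h = 0
    for i, c in enumerate(ranked, start=1):
        if c >= i:
            h = i
        else:
            break
    h_core = ranked[:h]  # the h most highly cited articles
    # h-median: median citation count within the h-core.
    if h == 0:
        h_median = 0
    else:
        mid = h // 2
        h_median = h_core[mid] if h % 2 else (h_core[mid - 1] + h_core[mid]) / 2
    return h, h_core, h_median

print(scholar_metrics([2, 18, 11, 3, 22, 9]))
# (4, [22, 18, 11, 9], 14.5)
```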
[more]
Source and Links Available At
[http://bit.ly/HxHipX]
Wednesday, 11 April 2012
Investigation of Peer Review Processes For Digital Humanities Monographs
April Loebick / April 10, 2012 / 1:57 pm
Project participants from the North Georgia College & State University community are BJ Robinson, UPNG Director and NEH Grant Project Director; April Loebick, UPNG Managing Editor; Markus Hitz, Professor of Computer Sciences; Chris Jespersen, Dean of the School of Arts & Letters; and Denise Young, Executive Director of Institutional Effectiveness. Other participants include the Directors of the University Press of Akron, the University Press of Florida, and Wayne State University Press. Advisory Board project members are Kathleen Fitzpatrick, Director of Scholarly Communication, Modern Language Association; Kathy Kiloh, Director of Athabasca University Press; Saskia deVries, Director of Amsterdam University Press; and members of the Open Textbook Consortium.
NEH Digital Start-Up grants are designed to encourage innovations in the digital humanities. Level I funding supports brainstorming sessions, data gathering, and initial planning. With this consortium of peers and publishing groups, UPNG will develop and pilot a model for peer review and eventual electronic publishing of single-author digital monographs; this model will involve sharing resources among small university presses to ensure economic viability and to help alleviate the pressures facing academic publishing.
[more]
Source and Fulltext Available At
[http://upnorthgeorgia.org/?p=804]