FIT Project turns to interdisciplinarity to understand injury factors in youth football

To address the alarming injury rate in youth footballers in Sweden, the project Injury-Free Children and Adolescents: Towards Better Practice in Swedish Football (FIT project) seeks to fill in the knowledge gaps by bringing biomedical and social science together.

With its multi-angled, interdisciplinary approach, the project involves a sample of male and female Swedish football players aged 10 to 19, and aims to provide concrete, evidence-based recommendations on injury prevention strategies for sporting federations, sport education institutions, coaches, sport support staff and players.

Strength sub-study

Having received funding from the Swedish Research Council for Sport Science, the grant proposal is published in the open-access journal Research Ideas and Outcomes (RIO Journal). Currently finalising its data collection stage, the FIT project is being conducted at the Department of Food and Nutrition, and Sport Science, University of Gothenburg, Sweden, by PhD student Solveig Hausken, Dr Natalie Barker-Ruchti, Dr Astrid Schubring and Prof Stefan Grau.

While injuries in youth athletes can lead to further injuries later in their careers, or even force players to drop out of sport, research has so far focused almost exclusively on the biomedical perspective and the identification of clinical and mechanical risk factors. Little is known about the role of socio-cultural risk factors.

In contrast, the FIT project turns simultaneously to the disciplines of biomechanics, sport medicine, sport coaching and sport sociology. The researchers conducted laboratory tests to determine the physical and sport-specific dispositions of each player; handed out questionnaires to register details about experienced injuries; interviewed both coaches and players to shed light on the coaching-training dynamics; and made direct observations of coaching methods and coach-athlete relationships within the sporting context. Each of these sub-studies is meant to produce a separate dataset to be subjected to an interdisciplinary analysis.

“The FIT project is a rare example of how injury research can integrate biomedical and social science disciplines to produce multiple data sets and an interdisciplinary data analysis procedure,” the team explains.

The researchers expect to identify injury risk factors including: growth and maturation; injury history and general health; biomechanical and clinical parameters; training factors, such as training intensity and recovery time between training sessions; and contextual factors, such as pressure to perform, athletic ideals and coaches' knowledge of injury prevention.

Movement analysis sub-study

Starting in January 2019, a pilot analysis including the multiple datasets will be conducted. The team will be publishing updates on the FIT project’s progress on its website.

###

Original source:

Hausken S, Barker-Ruchti N, Schubring A, Grau S (2018) Injury-Free Children and Adolescents: Towards Better Practice in Swedish Football (FIT project). Research Ideas and Outcomes 4: e30729. https://doi.org/10.3897/rio.4.e30729

Advancing the science and management of European intermittent rivers and ephemeral streams

A COST Action set to bring together scientists and stakeholders from across 14 countries within Europe

Intermittent rivers and ephemeral streams (IRES) are waterways that cease to flow and sometimes dry up completely. However, there is much left to learn about them, including their occurrence in the landscape, their ecology, their economic and societal values and their remarkable biodiversity. For efficient and adequate management and protection actions, these knowledge gaps need to be closed sooner rather than later.

In a call for a better understanding of IRES and their vital role in nature, a large international team, led by Dr. Thibault Datry, a freshwater ecologist working at IRSTEA, Lyon, France, has initiated the “Science and Management of Intermittent Rivers and Ephemeral Streams (SMIRES)” project. Their grant proposal, as approved for funding by the European framework COST, is published in the open science journal Research Ideas and Outcomes (RIO).

This COST Action brings together hydrologists, biogeochemists, ecologists, modellers, environmental economists, social researchers and stakeholders from 14 countries across the continent. The aim of the interdisciplinary team is to develop a research network for synthesising the fragmented knowledge on IRES. In turn, improved understanding of IRES will translate into science-based, sustainable management of river networks.

Along with networking between scientists and stakeholders, the Action will accommodate a whole set of good practices, including data sharing, technology development and citizen science.

The Calavon River, a Mediterranean IRES, during flowing (left) and dry phases (right).

Amongst the goals of the project are the creation of two meta-databases providing open data about IRES research activities and flow stations with intermittent flows; a proposal for novel indicators and technologies to assess changes and associated ecological responses in IRES; and the development of a European-scale network of citizen scientists to monitor, locate and map river flow states with the help of smartphone technology.

Having long been ignored in conservation policies and initiatives, IRES are being degraded at an alarming rate. Water extraction, flood harvesting, river impoundment, channel modification, land-use change and mining are only some of the threats faced by IRES across Europe. In many areas, they are even used as waste disposal sites. Elsewhere, they are channelled underground or connected to larger water bodies as a means of flow augmentation, which could potentially lead to the spread of invasive species.

The researchers point out that this lack of recognition and understanding leads to the rapid degradation of IRES.

###

Original source:

Datry T, Singer G, Sauquet E, Jorda-Capdevila D, Von Schiller D, Stubbington R, Magand C, Pařil P, Miliša M, Acuña V, Alves M, Augeard B, Brunke M, Cid N, Csabai Z, England J, Froebrich J, Koundouri P, Lamouroux N, Martí E, Morais M, Munné A, Mutz M, Pesic V, Previšić A, Reynaud A, Robinson C, Sadler J, Skoulikidis N, Terrier B, Tockner K, Vesely D, Zoppini A (2017) Science and Management of Intermittent Rivers and Ephemeral Streams (SMIRES). Research Ideas and Outcomes 3: e21774. https://doi.org/10.3897/rio.3.e21774

Pilot Project provides findings and advice on data sharing in development research

Launched just ahead of last year's FORCE11 conference, which was devoted to harnessing technological and open science advances for scholarly communication, the data sharing pilot project of Prof. Cameron Neylon, Centre for Culture and Technology, Curtin University, Australia, and his collaborators has published its final outcomes days before FORCE2017.

The project made use of the innovative open science journal RIO, which allows various types of research outcomes to be published and sorted into dedicated collections. The collection, "Exploring the opportunities and challenges of implementing open research strategies within development institutions: A project of the International Development Research Center", now features the project's grant proposal as approved by the IDRC, a review article, several data management plans and case studies, and the final report as a research article.

The pilot project looked into the current state and challenges of data management and sharing policies and practices, as shown by case studies of seven IDRC-funded development research projects.

Working over the course of 16 months with projects selected from across a range of geographies, scales and subjects, the data sharing initiative began by introducing data management and sharing concepts and helping each project develop its own data management plan. The team then carefully monitored and observed the implementation of those plans.

Over the course of the project, it became apparent that simply developing and implementing funder policies is not enough to change research culture. The question of how funder policy and implementation could support culture change both within research communities and within the funder itself became the focus of the initiative.

Data management plans have become a mandatory part of grant submission for many funders. However, they are often not utilised by researchers or funders later in the project, becoming “at best neutral and likely counter productive in supporting change in research culture.”

While the pilot project identified a number of significant bottlenecks that impede efficient data sharing practices, both within research institutions and for grantees, including the expected lack of resources and expertise, the researchers specifically point to structural issues at the funder level.

“The single most productive act to enhance policy implementation may be to empower and support Program Officers,” says the author.

“This could be achieved through training and support of individual POs, through the creation of a group of internal experts who can support others, or via provision of external support, for instance, by expanding the services provided by the pilot project into an ongoing support mechanism for both internal staff and grantees.”

The pilot project's findings also highlight the importance of language barriers and the need for better-suited data management platforms and tools.

Furthermore, the study identified differences in how "data" is understood across cultures, pointing out that the concept of data is "part of a western scientific discourse which may be both incompatible with other cultures, particularly indigenous knowledge systems."

In conclusion, the research article outlines a set of recommendations for funders, particularly those with a focus on development, as well as recommendations specific to the IDRC.

Original source:

Neylon C (2017) Building a Culture of Data Sharing: Policy Design and Implementation for Research Data Management in Development Research. Research Ideas and Outcomes 3: e21773. https://doi.org/10.3897/rio.3.e21773

The first microbial supertree from figure-mining thousands of papers

With recent reports estimating more than 114,000,000 published scholarly documents, finding ways to improve access to this knowledge and to synthesise it efficiently is an increasingly pressing issue.

Seeking to address the problem through their PLUTo workflow, British scientists Ross Mounce and Peter Murray-Rust, University of Cambridge, and Matthew Wills, University of Bath, present the world's first attempt at automated supertree construction using data extracted exclusively by machines from published figure images. Their results are published in the open science journal Research Ideas and Outcomes (RIO).

For their study, the researchers picked the International Journal of Systematics and Evolutionary Microbiology (IJSEM) – the sole repository hosting all new validly described prokaryote taxa and, therefore, an excellent choice against which to test systems for the automated and semi-automated synthesis of published phylogenies. According to the authors, IJSEM publishes a greater number of phylogenetic tree figure images a year than any other journal.

An eleven-year span of articles dating back to January 2003 was systematically downloaded so that all image files of phylogenetic tree figures could be extracted for analysis. Computer vision techniques then allowed the images to be automatically converted back into re-usable, computable phylogenetic data, which was then used for a formal supertree synthesis of all the evidence.
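
To give a feel for what "re-usable, computable phylogenetic data" means in practice, here is a minimal, purely illustrative Python sketch. It assumes the figure-mining step has already yielded trees as Newick strings (the two example trees below are invented) and simply tallies how often pairs of taxa are grouped together across them. This naive tally is not the formal supertree method used in the paper; it only shows how machine-extracted trees become data that can be pooled and queried.

```python
# A minimal sketch of pooling machine-extracted trees, assuming the figure-mining
# step has already produced Newick strings (the example trees below are invented).
# This simple pair-grouping tally is only an illustration, not the paper's own
# supertree synthesis method.
from io import StringIO
from itertools import combinations
from collections import Counter

from Bio import Phylo  # Biopython

# Hypothetical Newick strings standing in for trees recovered from figure images
newick_trees = [
    "((Bacillus_subtilis,Bacillus_cereus),Escherichia_coli);",
    "((Bacillus_subtilis,Bacillus_cereus),(Escherichia_coli,Salmonella_enterica));",
]

pair_counts = Counter()
for nwk in newick_trees:
    tree = Phylo.read(StringIO(nwk), "newick")
    # Count every pair of taxa that share a clade below the root
    for clade in tree.get_nonterminals():
        if clade is tree.root:
            continue
        taxa = sorted(t.name for t in clade.get_terminals())
        pair_counts.update(combinations(taxa, 2))

for pair, count in pair_counts.most_common():
    print(f"{pair[0]} + {pair[1]}: grouped together in {count} tree(s)")
```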

During their research, the scientists had to overcome various challenges posed by copyright, which formally covers almost all of the documents they needed to mine for their work. Here they faced quite a paradox: while easy access to and re-use of data published in the scientific literature is generally supported and strongly promoted, common copyright practices make it difficult for a scientist to be confident when incorporating previously compiled data into their own work. The authors discuss recent changes to UK copyright law that allowed their work to see the light of day. As a result, they provide their output as facts and assign it to the public domain using the CC0 waiver of Creative Commons, to enable worry-free re-use by anyone.

“We are now at the stage where no individual has the time to read even just the titles of all published papers, let alone the abstracts,” comment the authors.

“We believe that machines are now essential to enable us to make sense of the stream of published science, and this paper addresses several of the key problems inherent in doing this.”

“We have deliberately selected a subsection of the literature (limited to one journal) to reduce the volume, velocity and variety, concentrating primarily on validity. We ask whether high-throughput machine extraction of data from the semistructured scientific literature is possible and valuable.”  

 

Original source:

Mounce R, Murray-Rust P, Wills M (2017) A machine-compiled microbial supertree from figure-mining thousands of papers. Research Ideas and Outcomes 3: e13589. https://doi.org/10.3897/rio.3.e13589

 

Additional information:

The research has been funded by the BBSRC (grant BB/K015702/1 awarded to MAW and supporting RM).

Legitimacy of reusing images from scientific papers addressed

It goes without saying that scientific research has to build on previous breakthroughs and publications. However, it feels quite counter-intuitive for data and their re-use to be legally restricted. Yet, that is what happens when copyright restrictions are placed on many scientific papers.

The discipline of taxonomy is highly reliant on previously published photographs, drawings and other images as biodiversity data. Inspired by the uncertainty among taxonomists, a team, representing both taxonomists and experts in rights and copyright law, has traced the role and relevance of copyright when it comes to images with scientific value. Their discussion and conclusions are published in the latest paper added in the EU BON Collection in the open science journal Research Ideas and Outcomes (RIO).

Taxonomic papers, by definition, cite a large number of previous publications, for instance, when comparing a new species to closely related ones that have already been described. Often it is necessary to use images to demonstrate characteristic traits and morphological differences or similarities. In this role, the images are best seen as biodiversity data rather than artwork. According to the authors, this puts them outside the scope, purposes and principles of Copyright. Moreover, such images are most useful when they are presented in a standardized fashion, and lack the artistic creativity that would otherwise make them ‘copyrightable works’.

“It follows that most images found in taxonomic literature can be re-used for research or many other purposes without seeking permission, regardless of any copyright declaration,” says Prof. David J. Patterson, affiliated with both Plazi and the University of Sydney.

Nonetheless, the authors point out that, “in observance of ethical and scholarly standards, re-users are expected to cite the author and original source of any image that they use.” Such practice is “demanded by the conventions of scholarship, not by legal obligation,” they add.

However, the authors underline that there are genuinely copyrightable visuals that might also make their way into a scientific paper. These include wildlife photographs, drawings and artwork produced in a distinctive individual form and intended for purposes other than comparison, as well as collections of images that qualify as databases in the sense of the European Protection of Databases directive.

In their paper, the scientists also provide an updated version of the Blue List, originally compiled in 2014 and comprising the copyright exemptions applicable to taxonomic works. In their Extended Blue List, the authors expand the list to include five extra items relating specifically to images.

“Egloff, Agosti, et al. make the compelling argument that taxonomic images, as highly standardized ‘references for identification of known biodiversity,’ by necessity, lack sufficient creativity to qualify for copyright. Their contention that ‘parameters of lighting, optical and specimen orientation’ in biological imaging must be consistent for comparative purposes underscores the relevance of the merger doctrine for photographic works created specifically as scientific data,” comments on the publication Ms. Gail Clement, Head of Research Services at the Caltech Library.

“In these cases, the idea and expression are the same and the creator exercises no discretion in complying with an established convention. This paper is an important contribution to the literature on property interests in scientific research data – an essential framing question for legal interoperability of research data,” she adds.

###

Original source:

Egloff W, Agosti D, Kishor P, Patterson D, Miller J (2017) Copyright and the Use of Images as Biodiversity Data. Research Ideas and Outcomes 3: e12502. https://doi.org/10.3897/rio.3.e12502

Additional information:

The present study is a research outcome of the European Union’s FP7-funded project EU BON, grant agreement No 308454.

Guidelines for scholarly publishing of biodiversity data from Pensoft and EU BON

While the development and implementation of data publishing and sharing practices and tools have long been among the core activities of the academic publisher Pensoft, it is well understood that open data practices, as part of scholarly publishing, are currently in transition and hence require sustained, collaborative effort to establish.

Based on Pensoft's experience, and elaborated and updated during the Framework Programme 7 EU BON project, a new paper published in the dedicated EU BON collection in the open science journal Research Ideas and Outcomes (RIO) outlines policies and guidelines for scholarly publishing of biodiversity and biodiversity-related data. Newly accumulated knowledge from large-scale international efforts, such as FORCE11 (Future of Research Communication and e-Scholarship), CODATA (The Committee on Data for Science and Technology), RDA (Research Data Alliance) and others, is also included in the Guidelines.

The present paper discusses some general concepts, including a definition of datasets, incentives to publish data and licences for data publishing. Furthermore, it defines and compares several routes for data publishing, namely: providing supplementary files to research articles; uploading data to specialised open data repositories, where they are linked to the research article; publishing standalone data papers; or making use of integrated narrative and data publishing through online import/download of data into/from manuscripts, such as the workflow provided by the Biodiversity Data Journal. The guidelines also include comprehensive instructions on the preparation and peer review of data intended for publication.

Although currently available for journals using the ARPHA journal publishing platform developed by Pensoft, these strategies and guidelines could be of use to anyone interested in biodiversity data publishing.

Apart from paving the way for a whole new approach in data publishing, the present paper is also a fine example of science done in the open, having been published along with its two pre-submission public peer reviews. The reviews by Drs. Robert Mesibov and Florian Wetzel are both citable via their own Digital Object Identifiers (DOIs).

###

Original source:

Penev L, Mietchen D, Chavan V, Hagedorn G, Smith V, Shotton D, Ó Tuama É, Senderov V, Georgiev T, Stoev P, Groom Q, Remsen D, Edmunds S (2017) Strategies and guidelines for scholarly publishing of biodiversity data. Research Ideas and Outcomes 3: e12431. https://doi.org/10.3897/rio.3.e12431

New collection in RIO Journal devoted to neuroscience projects from 2016 Brainhack events

A new collection devoted to neuroscience projects from 2016 Brainhack events has been launched in the open access journal Research Ideas and Outcomes (RIO). At current count, the “Brainhack 2016 Project Reports” collection features eight Project Reports, whose authors are applying open science and collaborative research to advance our understanding of the brain.

Seeking to provide a forum for open, collaborative projects in brain science, the Brainhack organization has found a like-minded partner in the innovative open science journal RIO. The editor of the series is Dr. R. Cameron Craddock, Computational Neuroimaging Lab, Child Mind Institute and Nathan S. Kline Institute for Psychiatric Research, USA. He is joined by co-editors Dr. Pierre Bellec, Unité de neuroimagerie fonctionnelle, Centre de recherche de l'institut de gériatrie de Montréal, Canada, Dr. Daniel S. Margulies, Max Planck Research Group "Neuroanatomy & Connectivity", Max Planck Institute for Human Cognitive and Brain Sciences, Dr. Nolan Nichols, Genentech, USA, and Dr. Jörg Pfannmöller, University of Greifswald, Germany.

The first project description published in the collection is a Software Management Plan presenting a comprehensive set of neuroscientific software packages that demonstrate the huge potential of Gentoo Linux in neuroscience. The team of Horea-Ioan Ioanas, Dr. Bechara John Saab and Prof. Dr. Markus Rudin, affiliated with ETH and the University of Zürich, Switzerland, make use of the flexibility of Gentoo's environment to address many of the challenges in neuroscience software management, including system replicability, system documentation, data analysis reproducibility, fine-grained dependency management, easy control over compilation options, and seamless access to cutting-edge software releases. The packages are available for the wide family of Gentoo distributions and derivatives. "Via Gentoo-prefix, these neuroscientific software packages are, in fact, also accessible to users of many other operating systems," explain the researchers.

While robust lesion quantification is fundamental for studying the effects of neuroanatomical changes in the recovering post-stroke brain, manual lesion segmentation has been found to be a challenging and often subjective process. This is where the Semi-automated Robust Quantification of Lesions (SRQL) Toolbox comes in. Developed at the University of Southern California, Los Angeles, it optimizes the quantification of lesions across research sites. "Specifically, this toolbox improves the performance of statistical analysis on lesions through standardizing lesion masks with white matter adjustment, reporting descriptive lesion statistics, and normalizing adjusted lesion masks to standard space," explain scientists Kaori L. Ito, Julia M. Anglin, and Dr. Sook-Lei Liew.
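
As a rough illustration of the "descriptive lesion statistics" mentioned above, the hedged sketch below computes lesion volume and centre of mass from a binary lesion mask using nibabel and NumPy. The file name is hypothetical and the code is not taken from the SRQL Toolbox itself; it only mirrors the kind of per-lesion summary such a tool reports.

```python
# A minimal sketch of descriptive lesion statistics from a binary lesion mask,
# in the spirit of what the SRQL Toolbox reports. The file name and threshold
# are hypothetical; this is not the toolbox's own implementation.
import numpy as np
import nibabel as nib

mask_img = nib.load("sub-01_lesion_mask.nii.gz")  # hypothetical input file
mask = mask_img.get_fdata() > 0.5                 # binarise the mask

# Voxel volume in mm^3, derived from the image header's voxel sizes
voxel_volume = np.prod(mask_img.header.get_zooms()[:3])

n_voxels = int(mask.sum())
print(f"Lesion voxels: {n_voxels}")
print(f"Lesion volume: {n_voxels * voxel_volume:.1f} mm^3")

# Centre of mass of the lesion in voxel coordinates, a simple descriptive statistic
coords = np.argwhere(mask)
print(f"Centre of mass (voxel indices): {coords.mean(axis=0).round(1)}")
```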

Mindcontrol is an open-source, web-based dashboard application that lets users collaboratively quality-control and curate neuroimaging data. Developed by the team of Anisha Keshavan and Esha Datta, both of the University of California, San Francisco, Dr. Christopher R. Madan, Boston College, and Dr. Ian M. McDonough, The University of Alabama, Mindcontrol provides an easy-to-use interface and allows users to annotate points and curves on the volume, edit voxels, and assign tasks to other users. "We hope to build an active open-source community around Mindcontrol to add new features to the platform and make brain quality control more efficient and collaborative," note the researchers.

At the University of California, San Francisco, Anisha Keshavan, Dr. Arno Klein, and Dr. Ben Cipollini created the open-source Mindboggle package, which serves to improve the labeling and morphometry estimates of brain imaging data. Using inspiration and feedback from a Brainhack hackathon, they built on Mindboggle to develop a web-based, interactive 3D visualization of the brain shapes in its outputs. Now, they are looking to expand the visualization so that it covers other data besides shape information and enables the visual evaluation of thousands of brains.

Processing neuroimaging data on the cortical surface has traditionally required dedicated heavy-weight software suites. However, a team from the Max Planck Institute for Human Cognitive and Brain Sciences, Free University Berlin, and the NeuroSpin Research Institute, France, has come up with an alternative. Operating within the neuroimaging data processing toolbox Nilearn, their Python package provides loading and plotting functions for different surface data formats with minimal dependencies, along with examples of their application. "The functions are easy to use, flexibly adapt to different use cases," explain authors Julia M. Huntenburg, Alexandre Abraham, Joao Loula, Dr. Franziskus Liem, and Dr. Gaël Varoquaux. "While multiple features remain to be added and improved, this work presents a first step towards the support of cortical surface data in Nilearn."
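
For readers curious what such lightweight surface handling looks like, the sketch below uses Nilearn's publicly documented surface API to project a volumetric statistical map onto a cortical mesh and plot it. The exact functions contributed at the Brainhack may differ from what is shown here, and the fetched example data are simply Nilearn's standard tutorial datasets.

```python
# A minimal sketch of lightweight cortical surface plotting with Nilearn, of the
# kind the paragraph describes. It uses Nilearn's public surface API; the exact
# functions contributed at the Brainhack may differ.
from nilearn import datasets, plotting, surface

# Fetch a low-resolution fsaverage mesh and an example statistical map
fsaverage = datasets.fetch_surf_fsaverage()
motor_images = datasets.fetch_neurovault_motor_task()

# Project the volumetric statistical map onto the left-hemisphere pial surface
texture = surface.vol_to_surf(motor_images.images[0], fsaverage.pial_left)

# Plot the projected map on the inflated surface, with sulcal depth as background
plotting.plot_surf_stat_map(
    fsaverage.infl_left, texture,
    hemi="left", bg_map=fsaverage.sulc_left,
    threshold=1.0, colorbar=True,
)
plotting.show()
```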

To further address the growing need for tools specialised in processing large, high-resolution brain imaging data in full anatomical detail, Julia M. Huntenburg gathered a separate team to work on another Python-based package. A user-friendly standalone package, this subset of CBSTools requires no additional installations and allows for interactive data exploration at each processing stage.

Developed at the University of California, San Francisco, Cluster-viz is a web application that provides a platform for cluster-based interactive quality control of tractography algorithm outputs, explain the team of Kesshi M. Jordan, Anisha Keshavan, Dr. Maria Luisa Mandelli, and Dr. Roland G. Henry.

A project from the University of Warwick, United Kingdom, aims to extend the functionalities of the FSL neuroimaging software package in order to generate and report peak and cluster tables for voxel-wise inference. Dr. Camille Maumet and Prof. Thomas E. Nichols believe that the resulting extension “will be useful in the development of standardized exports of task-based fMRI results.”

More 2016 Brainhack projects are to be added to the collection.

Novel genetic tools for bioassessment of European aquatic ecosystems, COST grant proposal

Often referred to as "the blue planet", the Earth is covered mostly by aquatic ecosystems. Human land-use change, over-exploitation and pollution have severely impacted these ecosystems over the past decades.

Conservation actions have been proposed in order to protect and maintain central ecosystem services obtained from aquatic ecosystems, such as clean water and food. Bioassessment and continuous monitoring are the central tools for evaluating the success of conservation management actions; however, they are not yet efficient enough.

The DNAqua-Net project, funded under the European framework COST, is set to gather a large international professional community from across disciplines and fields in order to develop best practice strategies for using novel genetic tools in real-world bioassessment and monitoring of aquatic ecosystems in Europe and beyond. The grant proposal, authored by a large international team, is published in the open access journal Research Ideas and Outcomes (RIO).

Currently, biodiversity assessment relies on morpho-taxonomy, meaning species are identified based on studying the morphology of collected and manually sorted specimens. However, this approach has major drawbacks: it is time-consuming, limited in temporal and spatial resolution, and dependent on the varying taxonomic expertise of individual analysts.

In contrast, novel genomic tools, meant to be researched and developed over the course of DNAqua-Net, offer new solutions. They rely on DNA barcoding to identify species, even those not yet described, and to assess the biodiversity of water ecosystems using standardised genetic markers.

DNA barcoding is a modern taxonomic tool which uses short, standardised gene fragments of organisms to allow an unequivocal assignment to species level based on sequence data. Standardised DNA-barcode libraries, generated by the international Barcode of Life project (iBOL), and its associated and validated databases, such as BOLD and R-Syst, provide reference data that make it possible to analyse multiple environmental samples within a few days.
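
To make the barcoding principle concrete, here is a toy sketch in Python: a query fragment is matched against a tiny, invented reference library by simple per-site identity and assigned to the best-matching species if it clears a threshold. The sequences and the 97% cut-off are made up for illustration; real pipelines rely on curated reference databases such as BOLD or R-Syst and on proper alignment and search tools rather than this naive comparison.

```python
# A toy sketch of the DNA barcoding idea: match a query fragment against a small
# reference library by per-site identity. Sequences and the 97% threshold are
# invented for illustration; real workflows use curated references (e.g. BOLD,
# R-Syst) and proper alignment/search tools, not this naive comparison.
REFERENCE_BARCODES = {
    "Baetis rhodani":    "ACTATACCTAATTTTCGGCGCATGAGCTGGAATAGT",
    "Gammarus fossarum": "ACTCTATTTAGTATTTGGTGCTTGATCAGGAATAGT",
}

def identity(a: str, b: str) -> float:
    """Fraction of matching sites over the compared length (assumes aligned fragments)."""
    length = min(len(a), len(b))
    matches = sum(x == y for x, y in zip(a[:length], b[:length]))
    return matches / length

def assign_species(query: str, threshold: float = 0.97) -> str:
    best_species, best_score = max(
        ((species, identity(query, ref)) for species, ref in REFERENCE_BARCODES.items()),
        key=lambda item: item[1],
    )
    if best_score >= threshold:
        return f"{best_species} (identity {best_score:.2%})"
    return f"no assignment (best match {best_species}, identity {best_score:.2%})"

# Example query read from an environmental sample (invented)
print(assign_species("ACTATACCTAATTTTCGGCGCATGAGCTGGAATAGT"))
```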

So far, a major problem in developing and adopting genomic tools has been that scientists have been working independently in different institutions, largely disconnected from end-users. The DNAqua-Net team's aim, however, is to establish a cross-discipline, international network of scientists, managers, governmental institutions, manufacturers, and emerging service providers. Together, they will be able to identify the challenges in DNA-based bioassessment and provide standardised best-practice solutions.

Furthermore, as technology progresses, DNA does not necessarily have to be extracted from tissue, but can also be collected from sediments, biofilms, or the water itself. Such 'environmental DNA' (eDNA) can provide information on much more than a handful of specifically targeted species: it could deliver data on the entire biodiversity of micro-, meio- and macro-organisms living in an aquatic environment. While being far less invasive than traditional sampling techniques, the combined eDNA metabarcoding approach could also detect alien species and thus act as an early warning for management.

“Novel DNA-based approaches currently emerge, possibly acting as a “game-changer” in environmental diagnostics and bioassessments by providing high-resolution pictures of biodiversity from micro to macro scales,” comment the authors.

###

Original source:

Leese F, Altermatt F, Bouchez A, Ekrem T, Hering D, Meissner K, Mergen P, Pawlowski J, Piggott J, Rimet F, Steinke D, Taberlet P, Weigand A, Abarenkov K, Beja P, Bervoets L, Björnsdóttir S, Boets P, Boggero A, Bones A, Borja Á, Bruce K, Bursić V, Carlsson J, Čiampor F, Čiamporová-Zatovičová Z, Coissac E, Costa F, Costache M, Creer S, Csabai Z, Deiner K, DelValls Á, Drakare S, Duarte S, Eleršek T, Fazi S, Fišer C, Flot J, Fonseca V, Fontaneto D, Grabowski M, Graf W, Guðbrandsson J, Hellström M, Hershkovitz Y, Hollingsworth P, Japoshvili B, Jones J, Kahlert M, Kalamujic Stroil B, Kasapidis P, Kelly M, Kelly-Quinn M, Keskin E, Kõljalg U, Ljubešić Z, Maček I, Mächler E, Mahon A, Marečková M, Mejdandzic M, Mircheva G, Montagna M, Moritz C, Mulk V, Naumoski A, Navodaru I, Padisák J, Pálsson S, Panksep K, Penev L, Petrusek A, Pfannkuchen M, Primmer C, Rinkevich B, Rotter A, Schmidt-Kloiber A, Segurado P, Speksnijder A, Stoev P, Strand M, Šulčius S, Sundberg P, Traugott M, Tsigenopoulos C, Turon X, Valentini A, van der Hoorn B, Várbíró G, Vasquez Hadjilyra M, Viguri J, Vitonytė I, Vogler A, Vrålstad T, Wägele W, Wenne R, Winding A, Woodward G, Zegura B, Zimmermann J (2016) DNAqua-Net: Developing new genetic tools for bioassessment and monitoring of aquatic ecosystems in Europe. Research Ideas and Outcomes 2: e11321. https://doi.org/10.3897/rio.2.e11321

Celebrating RIO’s first birthday

Exactly a year ago, on 2 Nov 2015, we opened the Research Ideas and Outcomes (RIO) journal for submissions, hopeful that we would find fellow open-minded people across the community to support our cause.

Little did we know back then that RIO would become one of our most successful ventures, widely known as an innovator and carrier of change within the open science publishing community. Put in the spotlight, RIO has received attention and positive feedback from outlets such as Science Magazine and Times Higher Education.

In just one year, RIO has accumulated a total of 76 published articles, most of them in our innovative categories aimed at opening up the research cycle for non-conventional outputs (see chart). Published articles include Research Ideas, Grant Proposals, Workshop Reports, Data Management Plans, Research Posters, Conference Abstracts and PhD Project plans, to name just a few.

The innovative option to create research collections has already been actively utilized for an ongoing PhD Project, one Workshop, and a number of Data Management Plans, alongside a dedicated and ever-growing collection for the large FP7 Project EU BON.

Just half a year after our first publications, we were thrilled to receive the news that RIO had joined the prestigious club of SPARC Innovators. This distinction was more than an award for RIO; it meant that the journal was firmly on course towards its initial goal of bringing innovation to the scientific publishing field.

RIO now enjoys a growing community around its cause, not only among its authors, but also among a number of projects and funders, who are showing increasing interest.

One year of RIO has brought 12 months of success and good news. Now it is time to celebrate!

Happy Birthday, RIO!

In a nutshell: The four peer review stages in RIO explained

Having received a number of requests to further clarify our peer review process, we hereby provide a concise summary of the four author- and journal-organised peer review stages applicable to all research article publications submitted to RIO.

 

Stage 1: Author-organised pre-submission review

Optional. This review process can take place in the ARPHA Writing Tool (AWT) during the authoring process, BEFORE the manuscript is submitted to the journal. It works much like discussion of a manuscript within an institutional department, akin to soliciting comments and changes on a collaborative Google Doc. The author can invite reviewers via the "+Reviewers" button located on the upper horizontal bar of the AWT. The author(s) and the reviewers can then work together in the ARPHA online environment through an inline comment/reply interface. The reviewers are expected to submit a concise evaluation form and a final statement.

The pre-submission review is not mandatory, but we strongly encourage it. Pre-submission reviews will be published along with the article and will bear a DOI and citation details. Articles reviewed before submission are labelled “Reviewed” when published. Manuscripts that have not been peer-reviewed before submission can be published on the basis of in-house editorial and technical checks, and will be labelled “Reviewable”.

If there is no pre-submission review, the authors have to provide a public statement explaining why they do not have, or do not need, a pre-submission review for this work (e.g. the manuscript has been previously reviewed; a grant proposal has already been accepted for funding, etc.).

 

Stage 2: Pre-submission technical and editorial check with in-house editors or relevant members of RIO’s editorial board

Mandatory. Provided by the journal’s editorial office within the ARPHA Writing Tool when a manuscript is submitted to the journal. If necessary, it can take several rounds, until the manuscript is improved to the level appropriate for direct submission and publication in the journal. This stage ensures format compliance with RIO’s requirements, as well as relevant funding-body and discipline-specific requirements.

 

Stage 3: Community-sourced post-publication peer review

Continuously available. All articles published in RIO are open for post-publication review, regardless of whether they underwent pre-submission review and regardless of their review status (Reviewable, Reviewed, or RIO-validated). The author may decide to publish a revised version of an article at any time, based on feedback received from the community. Even years after publication of the original work, our system allows a review to be published alongside the paper.

 

Stage 4: Journal-organized post-publication peer review

Optional. If the author(s) request it, the journal can additionally organize a formal peer review by discipline-specific researchers in a timely manner. Authors may suggest reviewers during the submission process, but RIO does not necessarily invite the suggested reviewers.

Once an editor and reviewers are invited by the journal, the review process happens much like the conventional peer review in many other journals, but is entirely open and transparent. It is also subject to a small additional fee, in order to cover the management of this process. When this review stage is successfully completed and the editors have decided to validate the article, the revised article version is labelled “RIO-validated”.