New collection in RIO Journal devoted to neuroscience projects from 2016 Brainhack events

A new collection devoted to neuroscience projects from 2016 Brainhack events has been launched in the open access journal Research Ideas and Outcomes (RIO). At current count, the “Brainhack 2016 Project Reports” collection features eight Project Reports, whose authors are applying open science and collaborative research to advance our understanding of the brain.

Seeking to provide a forum for open, collaborative projects in brain science, the Brainhack organization has found a like-minded partner in the innovative open science journal RIO. The editor of the series is Dr. R. Cameron Craddock, Computational Neuroimaging Lab, Child Mind Institute and Nathan S. Kline Institute for Psychiatric Research, USA. He is joined by co-editors Dr. Pierre Bellec, Unité de neuroimagerie fonctionnelle, Centre de recherche de l’institut de gériatrie de Montréal, Canada, Dr. Daniel S. Margulies, Max Planck Research Group “Neuroanatomy & Connectivity”, Max Planck Institute for Human Cognitive and Brain Sciences, Germany, Dr. Nolan Nichols, Genentech, USA, and Dr. Jörg Pfannmöller, University of Greifswald, Germany.

The first project description published in the collection is a Software Management Plan presenting a comprehensive set of neuroscientific software packages that demonstrates the huge potential of Gentoo Linux in neuroscience. The team of Horea-Ioan Ioanas, Dr. Bechara John Saab and Prof. Dr. Markus Rudin, affiliated with ETH Zurich and the University of Zürich, Switzerland, makes use of the flexibility of Gentoo’s environment to address many of the challenges in neuroscience software management, including system replicability, system documentation, data analysis reproducibility, fine-grained dependency management, easy control over compilation options, and seamless access to cutting-edge software releases. The packages are available for the wide family of Gentoo distributions and derivatives. “Via Gentoo-prefix, these neuroscientific software packages are, in fact, also accessible to users of many other operating systems,” explain the researchers.

Quantifying lesions in a robust manner is fundamental for studying the effects of neuroanatomical changes in the recovering post-stroke brain, yet manual lesion segmentation has proven a challenging and often subjective process. This is where the Semi-automated Robust Quantification of Lesions (SRQL) Toolbox comes in. Developed at the University of Southern California, Los Angeles, it optimizes the quantification of lesions across research sites. “Specifically, this toolbox improves the performance of statistical analysis on lesions through standardizing lesion masks with white matter adjustment, reporting descriptive lesion statistics, and normalizing adjusted lesion masks to standard space,” explain scientists Kaori L. Ito, Julia M. Anglin, and Dr. Sook-Lei Liew.
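
To make the reported statistics concrete, here is a minimal Python sketch of the kind of descriptive lesion statistics such a toolbox outputs, assuming a binary NIfTI lesion mask and using nibabel and NumPy. The file name is hypothetical; this is an illustration in spirit, not the SRQL code itself.

```python
import nibabel as nib
import numpy as np
from nibabel.affines import apply_affine

# Illustrative only: descriptive statistics for a binary lesion mask
# (hypothetical file name), in the spirit of SRQL's lesion reports
mask_img = nib.load('lesion_mask.nii.gz')
mask = mask_img.get_fdata() > 0

# Lesion volume: voxel count times the voxel volume from the header
voxel_volume_mm3 = float(np.prod(mask_img.header.get_zooms()[:3]))
lesion_voxels = int(mask.sum())
lesion_volume_mm3 = lesion_voxels * voxel_volume_mm3

# Centre of mass in voxel coordinates, mapped to world (scanner) space
com_vox = np.array(np.nonzero(mask)).mean(axis=1)
com_world = apply_affine(mask_img.affine, com_vox)

print(f'Lesion voxels: {lesion_voxels}')
print(f'Lesion volume: {lesion_volume_mm3:.1f} mm^3')
print(f'Centre of mass (mm): {com_world.round(1)}')
```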

Mindcontrol is an open-source, web-based dashboard application that lets users collaboratively quality control and curate neuroimaging data. Developed by the team of Anisha Keshavan and Esha Datta, both of the University of California, San Francisco, Dr. Christopher R. Madan, Boston College, and Dr. Ian M. McDonough, The University of Alabama, Mindcontrol provides an easy-to-use interface that allows users to annotate points and curves on the volume, edit voxels, and assign tasks to other users. “We hope to build an active open-source community around Mindcontrol to add new features to the platform and make brain quality control more efficient and collaborative,” note the researchers.

At the University of California, San Francisco, Anisha Keshavan, Dr. Arno Klein, and Dr. Ben Cipollini created the open-source Mindboggle package, which serves to improve the labeling and morphometry estimates of brain imaging data. Using inspiration and feedback from a Brainhack hackathon, they built upon Mindboggle to develop a web-based, interactive 3D visualization of its brain shape outputs. Now, they are looking to expand the visualization so that it covers other data besides shape information and enables the visual evaluation of thousands of brains.

Processing neuroimaging data on the cortical surface has traditionally required dedicated, heavy-weight software suites. However, a team from the Max Planck Institute for Human Cognitive and Brain Sciences, the Free University of Berlin, and the NeuroSpin Research Institute, France, has come up with an alternative. Operating within the neuroimaging data processing toolbox Nilearn, their Python package provides loading and plotting functions for different surface data formats with minimal dependencies, along with examples of their application. “The functions are easy to use and flexibly adapt to different use cases,” explain authors Julia M. Huntenburg, Alexandre Abraham, Joao Loula, Dr. Franziskus Liem, and Dr. Gaël Varoquaux. “While multiple features remain to be added and improved, this work presents a first step towards the support of cortical surface data in Nilearn.”
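
For a flavour of the interface, here is a minimal sketch using the surface loading and plotting API later documented in Nilearn; the exact fetcher name and data keys are version-dependent assumptions, not a quotation of the authors' code.

```python
from nilearn import datasets, plotting, surface

# Fetch the fsaverage meshes bundled with nilearn's dataset fetchers
fsaverage = datasets.fetch_surf_fsaverage()

# Load per-vertex curvature values and render them on the inflated mesh,
# with sulcal depth used as background shading
curv = surface.load_surf_data(fsaverage['curv_left'])
plotting.plot_surf_stat_map(fsaverage['infl_left'], curv, hemi='left',
                            bg_map=fsaverage['sulc_left'], colorbar=True)
plotting.show()
```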

Addressing the growing need for tools specialised to process huge, high-resolution brain imaging data in full anatomical detail, Julia M. Huntenburg gathered a separate team to work on another Python-based package. A user-friendly standalone subset of CBSTools, it requires no additional installations and allows for interactive data exploration at each processing stage.

Cluster-viz, developed at the University of California, San Francisco, is a web application that provides a platform for cluster-based interactive quality control of tractography algorithm outputs, explain the team of Kesshi M. Jordan, Anisha Keshavan, Dr. Maria Luisa Mandelli, and Dr. Roland G. Henry.

A project from the University of Warwick, United Kingdom, aims to extend the functionality of the FSL neuroimaging software package so that it can generate and report peak and cluster tables for voxel-wise inference. Dr. Camille Maumet and Prof. Thomas E. Nichols believe that the resulting extension “will be useful in the development of standardized exports of task-based fMRI results.”
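
As a rough illustration of what such a peak and cluster table contains, the sketch below labels supra-threshold clusters in a z-statistic map and reports each cluster's size and peak coordinate. The file name and threshold are hypothetical, and this stands in for, rather than reproduces, the authors' FSL extension.

```python
import nibabel as nib
from nibabel.affines import apply_affine
from scipy import ndimage

# Load a z-statistic map (hypothetical file name) and threshold it
img = nib.load('zstat1.nii.gz')
zmap = img.get_fdata()

# Label connected supra-threshold clusters
labels, n_clusters = ndimage.label(zmap > 3.1)

# Report one row per cluster: size, peak z-value, peak location in mm
for i in range(1, n_clusters + 1):
    size = int((labels == i).sum())
    peak_vox = ndimage.maximum_position(zmap, labels, i)
    peak_mm = apply_affine(img.affine, peak_vox)
    print(f'cluster {i}: {size} voxels, '
          f'peak z = {zmap[peak_vox]:.2f} at {peak_mm.round(1)} mm')
```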

More 2016 Brainhack projects are to be added to the collection.

Novel genetic tools for bioassessment of European aquatic ecosystems, COST grant proposal

Often referred to as “the blue planet”, the Earth consists mostly of aquatic ecosystems. Human land-use change, over-exploitation and pollution have severely impacted these ecosystems over the past decades.

Conservation actions have been proposed to protect and preserve our planet’s water ecosystems and to maintain the central ecosystem services they provide, such as clean water and food. Bioassessment and continuous monitoring are the central tools for evaluating the success of such conservation management actions; at the moment, however, they are not efficient enough.

The DNAqua-Net project, funded under the European framework COST, is set to gather a large international professional community from across disciplines and fields in order to develop best practice strategies for using novel genetic tools in real-world bioassessment and monitoring of aquatic ecosystems in Europe and beyond. The grant proposal, authored by a large international team, is published in the open access journal Research Ideas and Outcomes (RIO).

Currently, biodiversity assessment relies on morpho-taxonomy, meaning species are identified by studying the morphology of collected and manually sorted specimens. This approach has serious limitations, however: it is time-consuming, limited in temporal and spatial resolution, and dependent on the varying individual taxonomic expertise of the analysts.

In contrast, novel genomic tools, to be researched and developed over the course of DNAqua-Net, offer new solutions. They rely on DNA barcoding to identify species, even those not yet described, and to assess the biodiversity of water ecosystems using standardised genetic markers.

DNA barcoding is a modern taxonomic tool that uses short, standardised gene fragments to assign organisms unequivocally to species level based on sequence data. Standardised DNA-barcode libraries, generated by the International Barcode of Life project (iBOL), and its associated validated databases, such as BOLD and R-Syst, provide reference data that make it possible to analyse multiple environmental samples within a few days.
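
In spirit, barcode-based identification reduces to matching a query sequence against a curated reference library. The deliberately simplified sketch below uses made-up barcode fragments and naive per-base identity scoring in place of the proper alignment and curated references (e.g., BOLD) a real pipeline would use.

```python
# Toy reference library: species mapped to short, hypothetical COI fragments
REFERENCE_LIBRARY = {
    'Gammarus fossarum': 'ACTTTATATTTTATTTTTGG',
    'Baetis rhodani':    'ACTCTTTACTTGATTTTTGG',
}

def identity(a: str, b: str) -> float:
    """Fraction of matching bases over the shorter sequence."""
    n = min(len(a), len(b))
    return sum(x == y for x, y in zip(a[:n], b[:n])) / n

def assign(query: str, threshold: float = 0.97) -> str:
    """Return the best-matching species, or 'unassigned' below the threshold."""
    species, score = max(
        ((sp, identity(query, ref)) for sp, ref in REFERENCE_LIBRARY.items()),
        key=lambda item: item[1])
    return species if score >= threshold else 'unassigned'

print(assign('ACTTTATATTTTATTTTTGG'))  # exact match: 'Gammarus fossarum'
```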

So far, a major problem in developing and adopting genomic tools has been that scientists have worked independently in different institutions, largely disconnected from end users. The DNAqua-Net team’s aim, however, is to establish a cross-discipline, international network of scientists, managers, governmental institutions, manufacturers, and emerging service providers. Together, they would be able to identify the challenges in DNA-based bioassessment and provide standardised best-practice solutions.

Furthermore, as technological progress continues, DNA no longer has to be extracted from tissue; it can also be collected from sediments, biofilms, or the water itself. This so-called ‘environmental DNA’ (eDNA) can provide information on much more than a handful of specifically targeted species: it could deliver data on the entire biodiversity of micro-, meio- and macro-organisms living in an aquatic environment. While far less invasive than traditional sampling techniques, the combined eDNA metabarcoding approach could also detect alien species and thus act as an early warning for management.

“Novel DNA-based approaches currently emerge, possibly acting as a “game-changer” in environmental diagnostics and bioassessments by providing high-resolution pictures of biodiversity from micro to macro scales,” comment the authors.

###

Original source:

Leese F, Altermatt F, Bouchez A, Ekrem T, Hering D, Meissner K, Mergen P, Pawlowski J, Piggott J, Rimet F, Steinke D, Taberlet P, Weigand A, Abarenkov K, Beja P, Bervoets L, Björnsdóttir S, Boets P, Boggero A, Bones A, Borja Á, Bruce K, Bursić V, Carlsson J, Čiampor F, Čiamporová-Zaťovičová Z, Coissac E, Costa F, Costache M, Creer S, Csabai Z, Deiner K, DelValls Á, Drakare S, Duarte S, Eleršek T, Fazi S, Fišer C, Flot J, Fonseca V, Fontaneto D, Grabowski M, Graf W, Guðbrandsson J, Hellström M, Hershkovitz Y, Hollingsworth P, Japoshvili B, Jones J, Kahlert M, Kalamujic Stroil B, Kasapidis P, Kelly M, Kelly-Quinn M, Keskin E, Kõljalg U, Ljubešić Z, Maček I, Mächler E, Mahon A, Marečková M, Mejdandzic M, Mircheva G, Montagna M, Moritz C, Mulk V, Naumoski A, Navodaru I, Padisák J, Pálsson S, Panksep K, Penev L, Petrusek A, Pfannkuchen M, Primmer C, Rinkevich B, Rotter A, Schmidt-Kloiber A, Segurado P, Speksnijder A, Stoev P, Strand M, Šulčius S, Sundberg P, Traugott M, Tsigenopoulos C, Turon X, Valentini A, van der Hoorn B, Várbíró G, Vasquez Hadjilyra M, Viguri J, Vitonytė I, Vogler A, Vrålstad T, Wägele W, Wenne R, Winding A, Woodward G, Zegura B, Zimmermann J (2016) DNAqua-Net: Developing new genetic tools for bioassessment and monitoring of aquatic ecosystems in Europe. Research Ideas and Outcomes 2: e11321. https://doi.org/10.3897/rio.2.e11321

Celebrating RIO’s first birthday

Exactly a year ago, on 2 November 2015, we opened the Research Ideas and Outcomes (RIO) journal for submissions, hopeful that we would find fellow open-minded people across the community to support our cause.

Little did we know back then that RIO would become one of our most successful ventures, widely known as an innovator and carrier of change within the open science publishing community. In the spotlight, RIO has received attention and positive feedback from outlets such as Science Magazine and Times Higher Education.

In just one year, RIO has accumulated a total of 76 published articles, most of them in our innovative categories aimed at opening up the research cycle for non-conventional outputs (see chart). Published articles include Research Ideas, Grant Proposals, Workshop Reports, Data Management Plans, Research Posters, Conference Abstracts and PhD Project plans, to name just a few.

[Chart: RIO publications by article type]

The innovative option to create research collections has already been actively utilized for an ongoing PhD Project, one Workshop, and a number of Data Management Plans, alongside a dedicated and ever-growing collection for the large FP7 Project EU BON.

Just half a year after our first publications, we were thrilled to receive the news that RIO had joined the prestigious club of SPARC Innovators. This distinction meant more than an award for RIO: it meant that the journal was firmly following its initial goal to innovate the scientific publishing field.


RIO now enjoys a growing community around its cause, not only among its authors, but also among the projects and funders showing increasing interest in the journal.

One year of RIO has brought 12 months of success and good news. Now it is time to celebrate!

Happy Birthday, RIO!

In a nutshell: The four peer review stages in RIO explained

Having received a number of requests to further clarify our peer review process, we hereby provide a concise summary of the four author- and journal-organised peer review stages applicable to all research article publications submitted to RIO.

 

Stage 1: Author-organised pre-submission review

Optional. This review process can take place in the ARPHA Writing Tool (AWT) during the authoring process, BEFORE the manuscript is submitted to the journal. It works much like the discussion of a manuscript within an institutional department, akin to soliciting comments and changes on a collaborative Google Doc file. The author can invite reviewers via the “+Reviewers” button located on the upper horizontal bar of the AWT. The author(s) and the reviewers are then able to work together in the ARPHA online environment through an inline comment/reply interface. Finally, the reviewers are expected to submit a concise evaluation form and a final statement.

The pre-submission review is not mandatory, but we strongly encourage it. Pre-submission reviews will be published along with the article and will bear a DOI and citation details. Articles reviewed before submission are labelled “Reviewed” when published. Manuscripts that have not been peer-reviewed before submission can be published on the basis of in-house editorial and technical checks, and will be labelled “Reviewable”.

If there is no pre-submission review, the authors have to provide a public statement explaining why they do not have, or do not need, a pre-submission review for this work (e.g. the manuscript has been previously reviewed, or a grant proposal has already been accepted for funding).

 

Stage 2: Pre-submission technical and editorial check with in-house editors or relevant members of RIO’s editorial board

Mandatory. Provided by the journal’s editorial office within the ARPHA Writing Tool when a manuscript is submitted to the journal. If necessary, it can take several rounds, until the manuscript is improved to the level appropriate for direct submission and publication in the journal. This stage ensures format compliance with RIO’s requirements, as well as relevant funding-body and discipline-specific requirements.

 

Stage 3: Community-sourced post-publication peer review

Continuously available. All articles published in RIO are available for post-publication review, regardless of whether they have undergone pre-submission review or of their review status (Reviewable, Reviewed, or RIO-validated). The author may decide to publish a revised version of an article at any time, based on feedback received from the community. Even years after publication of the original work, our system allows a review to be published alongside the paper.

 

Stage 4: Journal-organised post-publication peer review

Optional. If the author(s) request it, the journal can additionally organize a formal peer review from discipline-specific researchers in a timely manner. Authors may suggest reviewers during the submission process, but RIO may not necessarily invite suggested reviewers.

Once an editor and reviewers are invited by the journal, the review process happens much like the conventional peer review in many other journals, but is entirely open and transparent. It is also subject to a small additional fee, in order to cover the management of this process. When this review stage is successfully completed and the editors have decided to validate the article, the revised article version is labelled “RIO-validated”.

RIO supports Publons in finding this year’s Sentinels of Science

Peer Review Week is coming and this year’s topic “Recognition for Review” is rather close to our hearts!

In developing the concept of RIO, one of our central goals was to create a workflow that allows scientists – authors, reviewers and editors alike – to get the maximum credit for their work.


RIO implements one of the most transparent peer review processes, allowing authors to choose from several peer-review options.

The journal offers the unique opportunity of pre-submission peer review, where authors can invite mentors, colleagues and fellow scientists to review the manuscript and contribute while it is still being authored.

Additionally, we power post-publication peer review, where all registered users have the option to publicly review journal articles.

RIO, alongside all other Pensoft journals, is also proud to highlight its partnership with Publons and showcase our commitment to honouring the efforts of our expert peer reviewers.

As part of Peer Review Week 2016, we support Publons in their excellent Sentinels of Science Awards initiative, which will put the spotlight on the most prolific heroes of peer review over the past year.

Just like us, Publons recognize that bad science slows down the rate of discovery. Expert peer reviewers protect us from bad science. These efforts on the front line help us find cures, develop innovative technologies and realise human potential.

To get recognised for your contributions to speeding up science this Peer Review Week, sign up to Publons now and effortlessly track, verify and showcase every review you do for RIO, all other Pensoft titles, and the rest of the world’s journals.

*

For more information visit: https://blog.publons.com/unveiling-the-sentinels-of-science-for-prw16/

Scattered marine cave biodiversity data to find home in new database WoRCS, Project Report

Considered “biodiversity reservoirs,” underwater caves remain largely unexplored, with only a few thoroughly researched areas in the world. Furthermore, species diversity and distributional data are currently so scattered that they seriously hinder conservation status assessments, which are urgently needed in the face of planned and uncontrolled coastal urbanization.

To address this, a large international team of scientists, led by Dr Vasilis Gerovasileiou, Hellenic Centre for Marine Research, Greece, has undertaken the World Register of marine Cave Species (WoRCS) initiative, meant to aggregate ecological and geographical data and eventually provide information vital for evidence-based conservation. Their Project Report is published in the open access journal Research Ideas and Outcomes (RIO).

With more than 20,000 existing records of underwater cave-dwelling species spread across several platforms, the authors have identified the need for a new database, where a standard glossary based on existing terminology binds together all available ecological data, such as type of environment, salinity regimes, and cave zone, as well as geographical information on the distribution of species in these habitats.
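
To make such a record concrete, here is a hypothetical sketch of the fields described above as a simple data structure; the field names are illustrative, not the actual WoRCS schema.

```python
from dataclasses import dataclass

@dataclass
class CaveSpeciesRecord:
    """One occurrence record, with illustrative (not official) field names."""
    species: str           # accepted name, ideally linked to a WoRMS AphiaID
    environment: str       # e.g. 'marine cave' or 'anchialine system'
    salinity_regime: str   # e.g. 'euhaline', 'brackish'
    cave_zone: str         # e.g. 'entrance', 'semi-dark', 'dark'
    latitude: float        # decimal degrees, enabling OBIS georeferencing
    longitude: float
    source: str            # publication, museum collection, or field notes
```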


In their project, which has already produced a dynamic webpage, the scientists work within the context of the World Register of Marine Species (WoRMS) to combine records already published in peer-reviewed outlets with reliable, case-by-case verified unpublished data available from offline databases, museum collections and field notes, as well as the findings of the WoRCS thematic editors themselves.

Eventually, these presence records could be georeferenced for submission to the Ocean Biogeographic Information System (OBIS) and constitute an important dataset for biogeographical and climate change studies on marine caves and anchialine systems.

To engage both the marine biology scientific community and citizen scientists, WoRCS is meant to adopt a number of strategies.

Short and mid-term plans to engage the scientific community include development of common projects on poorly known marine and anchialine caves; projects that use WoRCS data; initiation of a fellowship programme to engage young researchers; and work with societies.

In the meantime, WoRCS is also intended to develop educational, citizen science and conservation activities by creating products (e.g., maps, guides, courses) for the public, engaging volunteers to encode data, and developing tools for MPA managers and the conservationist community.

“In particular, each time that a project about caves is funded, a work package or module or deliverable about WoRCS should be included to employ students and young researchers for data encoding, or to facilitate new types of data, or new links to other e-infrastructures and data tools,” suggest the WoRCS thematic editors.

###

Original source:

Gerovasileiou V, Martínez A, Álvarez F, Boxshall G, Humphreys W, Jaume D, Becking L, Muricy G, van Hengstum P, Dekeyzer S, Decock W, Vanhoorne B, Vandepitte L, Bailly N, Iliffe T (2016) World Register of marine Cave Species (WoRCS): a new Thematic Species Database for marine and anchialine cave biodiversity. Research Ideas and Outcomes 2: e10451. https://doi.org/10.3897/rio.2.e10451

Robot to find and connect medical scientists working on the same research via Open Data

Sharing research data, a core practice of Open Science, aims to accelerate scientific discovery, which is of particular importance in the case of new medicines and treatments. A grant proposal by an international research team, led by Dr Chase C. Smith, MCPHS University, and submitted to the Open Science Prize, suggests the development of what the authors call the SCience INtroDuction Robot (SCINDR). The project’s proposal is available in the open access journal Research Ideas and Outcomes (RIO).

Building on an open source electronic lab notebook (ELN) developed by the same team, the robot would discover and alert scientists from around the world who are working on similar molecules in real time. Finding each other and engaging in open and collaborative research could accelerate and enhance medical discoveries.

Already running and constantly updated, the electronic lab notebook serves to store researchers’ open data in a machine-readable and openly accessible format. The scientists’ next step is to adapt the open source notebook to run SCINDR, as exemplified in their prototype.

“The above mentioned ELN is the perfect platform for the addition of SCINDR since it is already acting as a repository of open drug discovery information that can be mined by the robot,” explain the authors.

Once a researcher has their data stored in the ELN, or in any similar open database for that matter, SCINDR would be able to detect whether similar molecules, chemical reactions, biological assays or other features of importance in health research have been entered by someone else. If the robot identifies another scientist looking into similar features, it will suggest introducing the two to each other, so that they can start working together and combine their efforts and knowledge for the good of both science and the public.
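
The proposal excerpt does not spell out how similarity is computed. For the molecule case, one plausible approach is fingerprint-based Tanimoto similarity, sketched below with RDKit; the threshold and the helper function are hypothetical, not SCINDR's actual matching logic.

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem

def similar(smiles_a: str, smiles_b: str, threshold: float = 0.7) -> bool:
    """Flag two molecules as 'similar' via Tanimoto on Morgan fingerprints."""
    fps = []
    for smi in (smiles_a, smiles_b):
        mol = Chem.MolFromSmiles(smi)
        fps.append(AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048))
    return DataStructs.TanimotoSimilarity(fps[0], fps[1]) >= threshold

# Two entries that different researchers might have deposited
print(similar('CCO', 'CCN'))  # ethanol vs ethylamine
```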

Because of its ability to parse information and interests from around the globe, the authors liken SCINDR to online advertisements and music streaming services, which have long targeted content based on a person’s writing, reading and listening habits, or other search history.

“The potential for automatically connecting relevant people and/or matching people with commercial content currently dominates much of software development, yet the analogous idea of automatically connecting people who are working on similar science in real time does not exist,” stress the authors.

“This extraordinary fact arises in part because so few people work openly, meaning almost all the research taking place in laboratories around the world remains behind closed doors until publication (or in a minority of cases deposition to a preprint server), by which time the project may have ended and researchers have moved on or shelved a project.”

“As open science gathers pace, and as thousands of researchers start to use open records of their research, we will need a way to discover the most relevant collaborators, and encourage them to connect. SCINDR will solve this problem,” they conclude.

The system is intended to be tested initially by a community of researchers known as Open Source Malaria (OSM), a consortium funded to carry out drug discovery and development for new medicines for the treatment of malaria.

###

Original source:

Smith C, Todd M, Patiny L, Swain C, Southan C, Williamson A, Clark A (2016) SCINDR – The SCience INtroDuction Robot that will Connect Open Scientists. Research Ideas and Outcomes 2: e9995. https://doi.org/10.3897/rio.2.e9995

Guiding EU researchers along the ‘last mile’ to Open Digital Science

Striving to address societal challenges in sectors including Health, Energy and the Environment, the European Union is developing the European Open Science Cloud, a complete socio-technical environment, including robust e-infrastructures capable of providing data and computational solutions where publicly funded research data are Findable, Accessible, Interoperable and Re-usable (FAIR).

Since 2007, the European Commission (EC) has invested more than €740 million in e-infrastructures through Horizon 2020 (the European Union Research and Innovation programme for 2014-2020) and FP7 (the European Union’s Seventh Framework Programme for Research and Technological Development). The Commission wants to see this investment exploited in full.

Many research communities are, however, struggling to benefit from this investment. The authors call for greater emphasis on Virtual Research Environments (VREs) as the only way for researchers to capitalise on EC advances in networking and high performance computing. The authors characterise this as a “last mile” problem, a term borrowed from telecommunications networks and once coined to emphasise the importance (and difficulty) of connecting the broader network to each customer’s home or office. Without the last mile of connectivity, a network won’t generate a cent of value.

Some concerns around the transition to Open Digital Science relate to attribution and quality assurance, as well as to limited awareness of open science and its implications for research. Most difficulties, however, stem from many e-infrastructure services being too technical for most users, lacking easy-to-use interfaces, and not integrating easily into day-to-day research practices.

Trustworthy and interoperable Virtual Research Environments (VREs) are layers of software that hide technical details and facilitate communication between scientists and computer infrastructures. They serve as friendly environments for the scientists to work with complicated computer infrastructures, while being able to use their own set of concepts, ways of doing things and working protocols.

By helping to solve the difficulties noted above, VREs could guide sceptical research communities along the ‘last mile’ towards Open Digital Science, according to an international team of scientists who have published their Policy Brief in the open access journal Research Ideas and Outcomes (RIO).

The authors state “These domain-specific solutions can support communities in gradually bridging technical and socio-cultural gaps between traditional and open digital science practice, better diffusing the benefits of European e-infrastructures”. They also recognise that “different e-infrastructure audiences require different approaches.”

“Intuitive user interface experience, seamless data ingestion, and collaboration capabilities are among the features that could empower users to better engage with provided services,” stress the authors.

###

Original source:

Koureas D, Arvanitidis C, Belbin L, Berendsohn W, Damgaard C, Groom Q, Güntsch A, Hagedorn G, Hardisty A, Hobern D, Marcer A, Mietchen D, Morse D, Obst M, Penev L, Pettersson L, Sierra S, Smith V, Vos R (2016) Community engagement: The ‘last mile’ challenge for European research e-infrastructures. Research Ideas and Outcomes 2: e9933. https://doi.org/10.3897/rio.2.e9933

Biodiversity data import from historical literature assessed in an EMODnet Workshop Report

While biodiversity loss is an indisputable issue concerning everyone on a global scale, data about species distributions and numbers through the centuries are crucial for adopting adequate and timely measures.

However, as abundant as this information currently is, large parts of the actual data are locked up in scanned documents, or not digitized at all. Far from being machine-readable knowledge, this information is left effectively inaccessible. This is particularly the case for data from marine systems.

To address this, data managers who implement data archaeology and rescue activities, together with external experts in data mobilization and data publication, were brought together in Crete for the European Marine Observation and Data network (EMODnet) Workshop, which is now reported in the open access journal Research Ideas and Outcomes (RIO).

“In a time of global change and biodiversity loss, information on species occurrences over time is crucial for the calculation of ecological models and future predictions”, explain the authors. “But while data coverage is sufficient for many terrestrial areas and areas with high scientific activity, large gaps exist for other regions, especially concerning the marine systems.”

Aiming to fill both spatial and temporal gaps in the availability of European marine species occurrence data by implementing data archaeology and rescue activities, the workshop took place on 8 and 9 June 2015 at the Hellenic Centre for Marine Research (HCMR), Heraklion, Crete, Greece. There, the participants joined forces to assess possible mechanisms and guidelines for mobilizing legacy biodiversity data.

Together, the attendees reviewed the current issues associated with the manual extraction of occurrence data. They also used the occasion to test tools and mechanisms that could potentially support a semi-automated process of data extraction. Matters surrounding data re-publication that have long been disputed in the scholarly community, such as open access to data and author attribution, were also discussed. As a result, a list of recommendations and conclusions was compiled at the end of the event, openly available in the Workshop Report publication.

Ahead of the workshop, curators compiled a list of old faunistic reports based on certain criteria and extracted legacy data from them, noting the time taken and the problems encountered along the way. They thereby set the starting point for the workshop, where participants would get the chance to practice data extraction themselves in organised hands-on sessions.
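
To give a feel for what 'semi-automated' extraction means in practice, here is a deliberately naive Python sketch that flags candidate Latin binomials in digitized text for a curator to verify; the example line is invented, and the tools actually tested at the workshop are considerably more sophisticated.

```python
import re

# A line of OCR'd legacy text (hypothetical example)
ocr_text = 'Serpula vermicularis Linnaeus, 1767. Station 12, 35 m depth.'

# Crude heuristic: a capitalised genus followed by a lowercase epithet
binomial = re.compile(r'\b([A-Z][a-z]+ [a-z]+)\b')

for match in binomial.finditer(ocr_text):
    print('candidate taxon:', match.group(1))  # a curator verifies each hit
```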

“Legacy biodiversity literature contains a tremendous amount of data that are of high value for many contemporary research directions. This has been recognized by projects and institutions such as the Biodiversity Heritage Library (BHL), which have initiated mass digitization of century-old books, journals and other publications and are making them available in a digital format over the internet,” note the authors.

“However, the information remains locked up even in these scanned files, as they are available only as free text, not in a structured, machine-readable format”.

In conclusion, the participants of the European Marine Observation and Data network Workshop listed practical tips for in-house document scanning; suggested a reward scheme for data curators, pointing out that credit needs to be given to the people “who made these valuable data accessible again”; encouraged the publication of data papers, in line with the “emerging success of open data”; and proposed the establishment of a data encoding schema. They also highlighted the need for academic institutions to increase the number of permanent positions for professional data managers, while also providing quality training to long-term data experts.

###

Original source:

Faulwetter S, Pafilis E, Fanini L, Bailly N, Agosti D, Arvanitidis C, Boicenco L, Catapano T, Claus S, Dekeyzer S, Georgiev T, Legaki A, Mavraki D, Oulas A, Papastefanou G, Penev L, Sautter G, Schigel D, Senderov V, Teaca A, Tsompanou M (2016) EMODnet Workshop on mechanisms and guidelines to mobilise historical data into biogeographic databases. Research Ideas and Outcomes 2: e9774. https://doi.org/10.3897/rio.2.e9774

How RIO Collections help showcase research project outputs

This text was originally featured on the OpenAIRE Blog. We thank OpenAIRE for allowing us to republish it; the original post can be found here.

A recent SPARC Innovator award winner, Research Ideas and Outcomes (RIO) was built around the principles of open research in scholarly communications. Traditionally, a research project ends up with just a few articles published in scholarly journals after many years of work. But why communicate just research articles at the end of a cycle?

Research articles are just a small component of the research cycle. What about all the other project outputs – research ideas, grant proposals, methodologies, data, software, policy briefs and others? At RIO we want to publish the full research cycle, all in one place, with ‘Collections’, to encourage re-usability and effective open knowledge transfer.

Collections for Project Coordinators

RIO offers a wide range of publication types to cover the needs of each research project. Flexible article templates are tailored to provide an easy fit for project outcomes, including H2020, FP7, NSF, NIH, DFG, FWF and other Grant Proposals, Case Studies, Project Reports, Data Management Plans, Data Papers, Software Descriptions, Workshop Reports, Policy Briefs, Conference Presentations and Posters, and many more.

Collections can be opened for each project, where project results, including interim ones, are officially published in both human- (HTML, PDF) and machine-readable (JATS XML) formats, assigned a DOI, collected, archived, made discoverable, easily citable and openly available to everyone. To ensure longevity and compatibility with Horizon 2020 recommendations for open access, all RIO publications are also automatically archived in ZENODO (in both PDF and XML) on the very same day.

RIO is also integrated with OpenAIRE and Crossref’s Open Funder Registry, meaning that authors can choose to tag both funders and projects in their articles. The workflow is highly beneficial: it automatically links funders to publications via API, while at the same time directly adding RIO articles to project and funder output lists, browsable on OpenAIRE.
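
As a small illustration of the public infrastructure such an integration builds on, the sketch below queries the Crossref REST API for funder records; the query string is arbitrary, and this is an outside illustration, not RIO's internal workflow.

```python
import requests

# Look up funders in Crossref's Open Funder Registry via the public API
resp = requests.get('https://api.crossref.org/funders',
                    params={'query': 'European Commission'})
resp.raise_for_status()

# Print the identifier and name of the first few matching funder records
for funder in resp.json()['message']['items'][:5]:
    print(funder['id'], funder['name'])
```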

This comes packed with additional innovation, thanks to the ARPHA Journal Publishing Platform.

ARPHA (Authoring, Reviewing, Publishing, Hosting and Archiving) is the first ever online collaborative journal publishing platform to support the full life cycle of a manuscript, from authoring through submission, peer review, publication, dissemination and updates, within a single online collaborative environment.

ARPHA includes a What-You-See-Is-What-You-Get (WYSIWYG) authoring tool. It allows authors to work collaboratively on a manuscript with their co-authors, and also to invite external contributors, such as mentors, pre-submission reviewers, and linguistic and copy editors.

The authoring tool is fully integrated in the publishing workflow so there is no need for a ‘typesetting’ stage or ‘page proofs’ – we have eliminated this inefficiency. What the authors see, is what the reviewers and editors see, and is what will be published. At other publishers, errors and delays are often introduced by the need for a typesetting process – we therefore offer a more efficient and timely system.

The platform lays the infrastructure for RIO’s transparent three-stage peer review: (1) author-organised pre-submission review, during the manuscript authoring process in ARPHA; (2) community-sourced post-publication review; and (3) optional journal-organised post-publication review, to ensure quality-controlled, efficient publication and dissemination.

Emphasis is put on the societal relevance of research by mapping each published output to the United Nations’ Sustainable Development Goals (SDGs).

All this comes with an affordable pricing model, which offers tailored packages for projects and individual researcher needs. For individual publications an à la carte pricing model gives authors the opportunity to select only the publishing services they need, thus providing flexibility in the final price they pay.

An example Collection at RIO: the FP7-funded EU BON project

The large-scale FP7-funded project EU BON (Building the European Biodiversity Observation Network) is one of the first to try out our Collections feature. The EU BON Collection at RIO is a great way for the project to showcase and link up all its important published outputs in the same place, openly available in a variety of formats, at the click of a button.


Be our next open research pilot!

If you are feeling inspired and see the potential of publishing more of your project results, RIO invites you to enquire about publishing a pilot Collection with us. We welcome suggestions from all fields of research, not just the sciences.

The journal will support a limited number of FREE project output Collections. Apply now to be among the first to publish a fully open access research cycle!

Interested project co-ordinators can contact us at rio@riojournal.com outlining initial output proposals or a Collection of project outcomes they would like to publish in RIO.