New collection in RIO Journal devoted to neuroscience projects from 2016 Brainhack events

A new collection devoted to neuroscience projects from 2016 Brainhack events has been launched in the open access journal Research Ideas and Outcomes (RIO). At current count, the “Brainhack 2016 Project Reports” collection features eight Project Reports, whose authors are applying open science and collaborative research to advance our understanding of the brain.

Seeking to provide a forum for open, collaborative projects in brain science, the Brainhack organization has found a like-minded partner in the innovative open science journal RIO. The editor of the series is Dr. R. Cameron Craddock, Computational Neuroimaging Lab, Child Mind Institute and Nathan S. Kline Institute for Psychiatric Research, USA. He is joined by co-editors Dr. Pierre Bellec, Unité de neuroimagerie fonctionnelle, Centre de recherche de l’institut de gériatrie de Montréal, Canada, Dr. Daniel S. Margulies, Max Planck Research Group “Neuroanatomy & Connectivity”, Max Planck Institute for Human Cognitive and Brain Sciences, Germany, Dr. Nolan Nichols, Genentech, USA, and Dr. Jörg Pfannmöller, University of Greifswald, Germany.

The first project description published in the collection is a Software Management Plan presenting a comprehensive set of neuroscientific software packages demonstrating the huge potential of Gentoo Linux in neuroscience. The team of Horea-Ioan Ioanas, Dr. Bechara John Saab and Prof. Dr. Markus Rudin, affiliated with ETH Zürich and the University of Zürich, Switzerland, makes use of the flexibility of Gentoo’s environment to address many of the challenges in neuroscience software management, including system replicability, system documentation, data analysis reproducibility, fine-grained dependency management, easy control over compilation options, and seamless access to cutting-edge software releases. The packages are available for the wide family of Gentoo distributions and derivatives. “Via Gentoo-prefix, these neuroscientific software packages are, in fact, also accessible to users of many other operating systems,” explain the researchers.

Quantifying lesions in a robust manner is fundamental for studying the effects of neuroanatomical changes in the recovering post-stroke brain, yet manual lesion segmentation has been found to be a challenging and often subjective process. This is where the Semi-automated Robust Quantification of Lesions (SRQL) Toolbox comes in. Developed at the University of Southern California, Los Angeles, it optimizes the quantification of lesions across research sites. “Specifically, this toolbox improves the performance of statistical analysis on lesions through standardizing lesion masks with white matter adjustment, reporting descriptive lesion statistics, and normalizing adjusted lesion masks to standard space,” explain scientists Kaori L. Ito, Julia M. Anglin, and Dr. Sook-Lei Liew.
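As a toy illustration of the kind of descriptive lesion statistics such a pipeline reports (this is not the SRQL Toolbox’s own code, and the input file name is hypothetical), lesion volume can be computed from a binary mask with nibabel and numpy:

    # Illustrative sketch only, not the SRQL Toolbox's own code: computing
    # simple descriptive statistics from a binary lesion mask in NIfTI format.
    # The file name is hypothetical.
    import nibabel as nib
    import numpy as np

    mask_img = nib.load("lesion_mask.nii.gz")
    mask = np.asarray(mask_img.dataobj) > 0

    # Voxel volume follows from the voxel dimensions stored in the header.
    voxel_volume_mm3 = float(np.prod(mask_img.header.get_zooms()[:3]))
    lesion_volume_mm3 = mask.sum() * voxel_volume_mm3

    print(f"lesion voxels: {mask.sum()}, volume: {lesion_volume_mm3:.1f} mm^3")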

Mindcontrol is an open-source, web-based dashboard application that lets users collaboratively quality control and curate neuroimaging data. Developed by the team of Anisha Keshavan and Esha Datta, both of the University of California, San Francisco, Dr. Christopher R. Madan, Boston College, and Dr. Ian M. McDonough, The University of Alabama, Mindcontrol provides an easy-to-use interface that allows users to annotate points and curves on the volume, edit voxels, and assign tasks to other users. “We hope to build an active open-source community around Mindcontrol to add new features to the platform and make brain quality control more efficient and collaborative,” note the researchers.

At the University of California, San Francisco, Anisha Keshavan, Dr. Arno Klein, and Dr. Ben Cipollini created the open-source Mindboggle package, which serves to improve the labeling and morphometry estimates of brain imaging data. Drawing on inspiration and feedback from a Brainhack hackathon, they built on Mindboggle to develop a web-based, interactive 3D visualization of the brain shapes it outputs. They are now looking to expand the visualization so that it covers other data besides shape information and enables the visual evaluation of thousands of brains.

Processing neuroimaging data on the cortical surface has traditionally required dedicated, heavy-weight software suites. However, a team from the Max Planck Institute for Human Cognitive and Brain Sciences, Free University Berlin, and the NeuroSpin Research Institute, France, has come up with an alternative. Operating within the neuroimaging data processing toolbox Nilearn, their Python package provides loading and plotting functions for different surface data formats with minimal dependencies, along with examples of their application. “The functions are easy to use and flexibly adapt to different use cases,” explain authors Julia M. Huntenburg, Alexandre Abraham, Joao Loula, Dr. Franziskus Liem, and Dr. Gaël Varoquaux. “While multiple features remain to be added and improved, this work presents a first step towards the support of cortical surface data in Nilearn.”
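A minimal sketch of the workflow this enables, using surface loading and plotting names from current Nilearn releases, which grew out of this work (the 2016 prototype’s exact function signatures may have differed):

    # Plot per-vertex data on an inflated cortical surface with Nilearn.
    # Names follow recent Nilearn releases; the 2016 prototype may differ.
    from nilearn import datasets, plotting, surface

    # Fetch the fsaverage5 template meshes that ship with Nilearn.
    fsaverage = datasets.fetch_surf_fsaverage(mesh="fsaverage5")

    # Load per-vertex data from a supported surface format (GIfTI, FreeSurfer, ...).
    curvature = surface.load_surf_data(fsaverage.curv_left)

    # Render the data on the inflated left hemisphere, shaded by sulcal depth.
    plotting.plot_surf_stat_map(
        fsaverage.infl_left, curvature,
        hemi="left", bg_map=fsaverage.sulc_left, colorbar=True,
    )
    plotting.show()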

To further address the increasing need for tools specialised in processing huge, high-resolution brain imaging data in full anatomical detail, Julia M. Huntenburg gathered a separate team to work on another Python-based package. A user-friendly standalone subset of CBSTools, it requires no additional installations and allows for interactive data exploration at each processing stage.

Developed at the University of California, San Francisco, Cluster-viz is a web application that provides a platform for cluster-based, interactive quality control of tractography algorithm outputs, explain the team of Kesshi M. Jordan, Anisha Keshavan, Dr. Maria Luisa Mandelli, and Dr. Roland G. Henry.

A project from the University of Warwick, United Kingdom, aims to extend the functionality of the FSL neuroimaging software package in order to generate and report peak and cluster tables for voxel-wise inference. Dr. Camille Maumet and Prof. Thomas E. Nichols believe that the resulting extension “will be useful in the development of standardized exports of task-based fMRI results.”
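To illustrate what a peak and cluster table contains, here is a toy sketch built with nibabel and scipy; it is not the FSL extension itself, and the input file name is hypothetical:

    # Toy sketch, not the FSL extension itself: a simple peak-and-cluster
    # table from a thresholded z-statistic map, using nibabel and scipy.
    import nibabel as nib
    import numpy as np
    from scipy import ndimage

    stat_img = nib.load("zstat1.nii.gz")          # hypothetical z map
    data = np.asarray(stat_img.dataobj, dtype=float)

    # Label connected clusters above a cluster-forming threshold of z > 3.1.
    labels, n_clusters = ndimage.label(data > 3.1)
    for cluster in range(1, n_clusters + 1):
        in_cluster = labels == cluster
        peak_idx = np.unravel_index(
            np.argmax(np.where(in_cluster, data, -np.inf)), data.shape)
        # Convert the voxel index of the peak to millimetre coordinates.
        peak_mm = nib.affines.apply_affine(stat_img.affine, peak_idx)
        print(f"cluster {cluster}: {in_cluster.sum()} voxels, "
              f"peak z = {data[peak_idx]:.2f} at {np.round(peak_mm, 1)} mm")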

More 2016 Brainhack projects are to be added to the collection.

In a nutshell: The four peer review stages in RIO explained

Having received a number of requests to further clarify our peer review process, we hereby provide a concise summary of the four author- and journal-organised peer review stages applicable to all research article publications submitted to RIO.


Stage 1: Author-organised pre-submission review

Optional. This review process can take place in the ARPHA Writing Tool (AWT) during the authoring process, BEFORE the manuscript is submitted to the journal. It works much like the discussion of a manuscript within an institutional department, akin to soliciting comments and changes on a collaborative Google Doc. The author can invite reviewers via the “+Reviewers” button located on the upper horizontal bar of the AWT. The author(s) and the reviewers are then able to work together in the ARPHA online environment through an inline comment/reply interface. Finally, the reviewers are expected to submit a concise evaluation form and a final statement.

The pre-submission review is not mandatory, but we strongly encourage it. Pre-submission reviews will be published along with the article and will bear a DOI and citation details. Articles reviewed before submission are labelled “Reviewed” when published. Manuscripts that have not been peer-reviewed before submission can be published on the basis of in-house editorial and technical checks, and will be labelled “Reviewable”.

If there is no pre-submission review, the authors have to provide a public statement explaining why they do not have, or do not need, a pre-submission review for this work (e.g. the manuscript has been previously reviewed, or a grant proposal has already been accepted for funding).


Stage 2: Pre-submission technical and editorial check with in-house editors or relevant members of RIO’s editorial board

Mandatory. Provided by the journal’s editorial office within the ARPHA Writing Tool when a manuscript is submitted to the journal. If necessary, it can take several rounds, until the manuscript is improved to the level appropriate for direct submission and publication in the journal. This stage ensures format compliance with RIO’s requirements, as well as relevant funding-body and discipline-specific requirements.


Stage 3: Community-sourced post-publication peer review

Continuously available. All articles published in RIO are open for post-publication review, regardless of whether they were subject to pre-submission review and whatever their review status (Reviewable, Reviewed, or RIO-validated). The author may decide to publish a revised version of an article at any time, based on feedback received from the community. Our system allows a review to be published alongside the paper even years after the original work appeared.


Stage 4: Journal-organised post-publication peer review

Optional. If the author(s) request it, the journal can additionally organise a formal peer review by discipline-specific researchers in a timely manner. Authors may suggest reviewers during the submission process, but RIO will not necessarily invite the suggested reviewers.

Once an editor and reviewers have been invited by the journal, the review process proceeds much like conventional peer review in other journals, but is entirely open and transparent. It is also subject to a small additional fee to cover the management of this process. When this review stage is successfully completed and the editors have decided to validate the article, the revised article version is labelled “RIO-validated”.

New DFG grant proposal for software quality control able to stand the test of time

For software to be maintained in optimal condition and to keep pace with necessary updates and innovations, it needs to be checked constantly. This appears to be the only way for potential quality problems to be detected and handled immediately, well before a user can encounter them.

A new grant proposal, addressed to the German Research Foundation (DFG), authored by Prof. Dr. Stefan Wagner, University of Stuttgart, and published in the open-access journal Research Ideas and Outcomes (RIO), suggests a persistent set of quality control approaches meant to analyse software, both manually and automatically, from the moment it is being created and well before it is released.

The proposed methods, which Prof. Dr. Stefan Wagner envisions as a solution to software quality decay, provide thorough, contextual and focused feedback to developers, who in turn need less time and effort to make sense of the new information. To achieve this, novel tools are to run regular analyses even before software changes are implemented, and to continue during the changes.

Previous knowledge and experience from similar problem-detection tools and practices are to be utilised as well. “Contemporary quality models, dynamic slicing and online discussions could even provide rationales for the feedback to support its acceptance and understandability,” explains the German researcher.

A particular issue addressed by the Professor of Software Engineering in his present publication is that of so-called ‘co-changes’: changes to source code files that need to occur together. For example, if developers introduce a new feature, it will cause changes in the functional part of the source code as well as in the user interface. Such co-changes can lead to a defect when the change to the user interface is omitted. Information on co-changes is especially useful when given to developers directly while they perform the change.

“Advances in static analysis, test generation and repository mining allow us to give further feedback to developers, potentially just-in-time while performing changes,” Prof. Dr. Stefan Wagner points out. “These analyses have not been incorporated into a joint feedback system that gives focused hints.”
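As a toy illustration of the repository mining mentioned above (this is not code from the proposal), pairs of files that tend to change in the same commit can be counted straight from a git history:

    # Toy co-change miner, not from the proposal: count pairs of files that
    # are modified together in the same commit of a git repository.
    import subprocess
    from collections import Counter
    from itertools import combinations

    log = subprocess.run(
        ["git", "log", "--name-only", "--pretty=format:__COMMIT__"],
        capture_output=True, text=True, check=True,
    ).stdout

    pair_counts = Counter()
    for block in log.split("__COMMIT__"):
        files = sorted({line.strip() for line in block.splitlines() if line.strip()})
        for pair in combinations(files, 2):
            pair_counts[pair] += 1

    # Pairs seen together most often are co-change candidates worth flagging
    # to a developer who edits only one file of the pair.
    for pair, count in pair_counts.most_common(5):
        print(count, *pair)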

###

Original source:

Wagner S (2015) Continuous and Focused Developer Feedback on Software Quality (CoFoDeF). Research Ideas and Outcomes 1: e7576. doi: 10.3897/rio.1.e7576

###

Additional information:

The DFG is the largest independent research funding organisation in Germany. It promotes the advancement of science and the humanities by funding research projects, research centres and networks, and facilitating cooperation among researchers.

The mission of RIO is to catalyse change in research communication by publishing ideas, proposals and outcomes in order to increase transparency, trust and efficiency of the whole research ecosystem. Its scope encompasses all areas of academic research, including science, technology, the humanities and the social sciences.

The journal harnesses the full value of investment in the academic system by registering, reviewing, publishing and permanently archiving a wider variety of research outputs than those traditionally made public: project proposals, data, methods, workflows, software, project reports and research articles together on a single collaborative platform offering one of the most transparent, open and public peer-review processes.