New development tool based on ‘software quality information needs’ and 3 case studies

As software constantly evolves, it takes over more and more aspects of our lives at both the individual and community level. Software failures and security vulnerabilities are therefore becoming major concerns that need to be addressed without delay.

One way to do so, according to computer scientist Dr Daniel Graziotin, University of Stuttgart, is to adopt a new concept, which he terms ‘software quality information needs’, and to draw on extensive, multi-angled empirical evidence to produce a development tool that improves software quality. His Grant Proposal is published in the open access journal Research Ideas and Outcomes (RIO).

The proposed research builds on an earlier DFG Grant Proposal, authored by Prof. Dr. Stefan Wagner, University of Stuttgart, and published in the same journal. That earlier project idea proposes novel tools that analyse software changes before and during their implementation. Like Dr Daniel Graziotin’s idea, it is based on a fast and focused feedback loop.

The presently proposed research, planned to take 24 months, is set to start with establishing the ‘software quality information needs’ construct. The author coins this construct to conceptualise, and provide a deeper understanding of, the information a developer needs when changing code or designing new parts of a system.

The project will then go on to produce metrics to detect and satisfy a developer’s needs. On this basis, optimally unobtrusive measurement techniques are to be developed and evaluated in three empirical studies.

About 120 software engineering students from the University of Stuttgart are to be recruited to provide empirical evidence. While they are working on either real-world or university software projects, they are to be observed, regularly interviewed and asked to think aloud. Their insights will be further enriched through a post-task interview. The findings are to answer the question “How can we conceptualise information needs when dealing with software quality?”.

The question “What information is needed when dealing with software quality?” is to be covered by the second empirical study, which will involve software engineers at Daimler, Porsche, and Bosch, since the automotive industry is particularly concerned with software quality issues. The engineers are to fill in mostly open-ended surveys and thus provide a broad view of software quality information needs and their priority from a practitioner’s perspective.

Building on the above studies, the third one is to approach the German multinational engineering and electronics company Robert Bosch GmbH with validated questionnaires and tests of behavioural patterns, such as keystroke frequency and typo detection. Ultimately, the findings are to answer how software quality information needs can be detected unobtrusively through behavioural patterns.
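As a rough illustration of what such an unobtrusive behavioural measurement could look like (a minimal sketch, not part of the proposal; the window length, key names and the backspace-based typo proxy are assumptions), an editor plugin might track keystroke timing and correction rate in the background:

```python
from time import monotonic


class TypingMonitor:
    """Minimal sketch of unobtrusive behavioural signals:
    keystroke rate and a backspace-based typo proxy.
    Window length and key names are illustrative assumptions."""

    def __init__(self, window_seconds: float = 60.0):
        self.window_seconds = window_seconds
        self.events = []  # (timestamp, key) pairs

    def record(self, key: str) -> None:
        """Called by the editor on every key event."""
        self.events.append((monotonic(), key))

    def _recent(self):
        cutoff = monotonic() - self.window_seconds
        return [(t, k) for t, k in self.events if t >= cutoff]

    def keystrokes_per_minute(self) -> float:
        return len(self._recent()) * 60.0 / self.window_seconds

    def correction_ratio(self) -> float:
        """Share of backspace/delete presses, used as a crude typo proxy."""
        recent = self._recent()
        if not recent:
            return 0.0
        corrections = sum(1 for _, key in recent if key in ("backspace", "delete"))
        return corrections / len(recent)
```

A tool built on the proposed research could periodically read such metrics and, when they deviate from a developer’s usual pattern, check whether a relevant information need has gone unmet.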

To exemplify the tool that would be based on the described research and its expected findings, the scientist uses a fictional software developer called Anne. While working on a system routine to be used in a banking application, she is notified by her integrated development environment (IDE) that there might be some quality issues.

It turns out that she is employing a design pattern that is not frequently used in similar cases, so the IDE suggests that she browse related questions and answers about the design pattern on StackOverflow.com. Because she has also created part of the procedure by cloning code from another part of the project, the tool offers to help her refactor the cloned code.
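A minimal sketch of the kind of check that could trigger such a hint, assuming a simple line-hashing heuristic for clone detection (the window size and normalisation are illustrative choices, not the technique described in the proposal):

```python
import hashlib


def fingerprints(source: str, window: int = 5) -> set:
    """Hash every run of `window` consecutive, whitespace-normalised lines.
    A fingerprint shared between two files is a crude clone signal."""
    lines = [" ".join(line.split()) for line in source.splitlines() if line.strip()]
    return {
        hashlib.sha1("\n".join(lines[i:i + window]).encode()).hexdigest()
        for i in range(len(lines) - window + 1)
    }


def clone_hint(new_code: str, existing_code: str):
    """Return an IDE-style suggestion if the two snippets share fragments."""
    shared = fingerprints(new_code) & fingerprints(existing_code)
    if shared:
        return (f"{len(shared)} duplicated fragment(s) detected: "
                "consider extracting a shared helper instead of cloning.")
    return None
```

In a real IDE integration such a check would of course run incrementally and be linked to refactoring support, as in Anne’s scenario.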

“Providing a software developer with the right kind of information about the current state of and the effect of changes on software quality can prevent catastrophic software failures and avoid opening up security holes,” Dr Daniel Graziotin argues.

###

Original source:

Graziotin D (2016) Software quality information needs. Research Ideas and Outcomes 2: e8865. doi: 10.3897/rio.2.e8865

Data sharing pilot to report and reflect on data policy challenges via 8 case studies

This week, FORCE2016 is taking place in Portland, USA. The annual FORCE11 conference is devoted to harnessing technological and open science advancements to build a new kind of scholarship founded on easily accessible, organised and reproducible research data.

As a practical contribution to the scholarly discourse on new modes of communicating knowledge, Prof. Cameron Neylon, Centre for Culture and Technology, Curtin University, Australia, and collaborators are to publish a series of outputs and outcomes resulting from their ongoing data sharing pilot project in the open access journal Research Ideas and Outcomes (RIO).

Over the course of sixteen months, ending in December 2016, they are to openly publish the project outputs, starting with their Grant Proposal, which was submitted to and accepted for funding by the Canadian International Development Research Centre (IDRC).

The project will collaborate with 8 volunteering IDRC grantees to develop Data Management Plans, and then support and track their implementation. The team expects to submit literature reviews, Data Management Plans, case studies and a final research article to RIO. These will report and reflect on the lessons learnt concerning open data policies in the specific context of development research. The project is thus to provide advice on refining open research data policy guidelines.

“The general objective of this project is to develop a model open research data policy and implementation guidelines for development research funders to enable greater access to development research data,” sum up the authors.

“Very little work has been done examining open data policies in the context of development research specifically,” they elaborate. “This project will serve to inform open access to research data policies of development research funders through pilot testing open data management plan guidelines with a set of IDRC grantees.”

The researchers agree that data constitutes a primary form of research output and that it is necessary for research funders to address the issue of open research data in their open access policies. They note that not only should data be publicly accessible and free for re-use, but they need to be “technically open”, which means “available for no more than the cost of reproduction, and in machine-readable and bulk form.” At the same time, research in a development context raises complex issues of what data can be shared, how, and by whom.

“The significance of primary data gathered in research projects across domains is its high potential for not only academic re-use, but its value beyond academic purposes, particularly for governments, SME, and civil society,” they add. “More importantly, the availability of these data provides an ideal opportunity to test the key premise underlying open research data — that when it is made publicly accessible in easily reusable formats, it can foster new knowledge and discovery, and encourage collaboration among researchers and organizations.”

However, such openness also calls for extra diligence and responsibility when sharing, handling and re-using research data. This is particularly the case in development research, where challenging ethical issues come to the fore. Among the issues the authors point out are realistic and cost-effective strategies for funded researchers to collect, manage and store the various types of data resulting from their research, as well as ethical questions such as privacy and rights over the collected data.

###

Original source:

Neylon C, Chan L (2016) Exploring the opportunities and challenges of implementing open research strategies within development institutions. Research Ideas and Outcomes 2: e8880. doi: 10.3897/rio.2.e8880

Open-source collaborative platform to collect content from over 350 institutions’ archives

Since no currently existing single institution has the technical and financial capacity to archive the web efficiently on its own, a team of American researchers has come up with an innovative solution, submitted to the U.S. Institute of Museum and Library Services (IMLS) and published in the open-access journal Research Ideas and Outcomes (RIO).

They propose a lightweight, open-source collaborative collection development platform, called Cobweb, to support the creation of comprehensive web archives by coordinating the independent activities of the web archiving community. By sharing the responsibility among many institutions, the aggregator service is to provide a large amount of continuously updated content more quickly and with less effort.

In their proposal, the authors from the California Digital Library, the UCLA Library, and Harvard Library give the example of a fast-developing news event such as the Arab Spring, which unfolded online simultaneously via news reports, videos, blogs, and social media.

“Recognizing the importance of recording this event, a curator immediately creates a new Cobweb project and issues an open call for nominations of relevant web sites,” explain the researchers. “Scholars, subject area specialists, interested members of the public, and event participants themselves quickly respond, contributing to a site list that is more comprehensive than could be created by any curator or institution.”

“Archiving institutions review the site list and publicly claim responsibility for capturing portions of it that are consistent with local collection development policies and technical capacities.”

Unlike existing tools that support some level of collaborative collecting, the proposed Cobweb service will form a single integrated system.

“As a centralized catalog of aggregated collection and seed-level descriptive metadata, Cobweb will enable a range of desirable collaborative, coordinated, and complementary collecting activities,” elaborate the authors. “Cobweb will leverage existing tools and sources of archival information, exploiting, for example, the APIs being developed for Archive-It to retrieve holdings information for over 3,500 collections from 350 institutions.”
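A minimal sketch of the aggregation step this describes, assuming a generic JSON-over-HTTP holdings endpoint (the URL, parameters and field names below are placeholders, not the actual Archive-It API):

```python
import requests

# Placeholder endpoint: illustrative only, not the actual Archive-It API.
HOLDINGS_ENDPOINT = "https://example.org/api/collections"


def harvest_collection_metadata(institution_ids):
    """Pull collection-level descriptive metadata for each partner institution
    and merge it into a single catalogue keyed by collection identifier."""
    catalogue = {}
    for institution in institution_ids:
        response = requests.get(
            HOLDINGS_ENDPOINT, params={"institution": institution}, timeout=30
        )
        response.raise_for_status()
        for record in response.json():
            catalogue[record["collection_id"]] = {
                "institution": institution,
                "title": record.get("title"),
                "seed_count": record.get("seed_count", 0),
            }
    return catalogue
```

In Cobweb, such harvested records would feed the centralized catalog against which curators issue calls for nominations and archiving institutions claim responsibility for capture.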

If funded, the platform will be hosted by the California Digital Library and initialized with collection metadata from the partners and other stakeholder groups. The project is planned to take a year; halfway through, the partners will share a release with the global web archiving community at the April 2017 IIPC General Assembly to gather feedback and discuss ongoing sustainability. They also plan to organize public webinars and workshops focused on creating an engaged user community.

###

Original source:

Abrams S, Goethals A, Klein M, Lack R (2016) Cobweb: A Collaborative Collection Development Platform for Web Archiving. Research Ideas and Outcomes 2: e8760. doi: 10.3897/rio.2.e8760