Publication date: April 16, 2014

We all recognize the value of open access to data and the need for good data curation practices. This talk concentrates on some equally important requirements regarding the software and computer code used to analyze data or perform simulations. The lack of reproducibility in computational and data science is increasingly recognized as a problem that limits our ability to confirm or build on past research. Many papers on the development or application of computational techniques do not contain sufficient detail for others to independently verify the conclusions, and often even the authors would have a difficult time reconstructing the code they used. A variety of tools are available to help researchers improve the reproducibility of their results, and adopting good practices often has the side benefit of greater research productivity, rather than being a further drain on our limited time. Professor LeVeque will introduce some of the techniques he has found most valuable in the context of Clawpack, an open-source software effort he has been leading for 20 years, and tsunami hazard assessment, one specific application of this software where accountability and reproducibility are particularly important.