Proceedings of SciPy 2011
SciPy 2011, the 10th annual Scientific Computing with Python conference, was held July 11-16, 2011 in Austin, Texas. Twenty peer-reviewed articles were published in the conference proceedings. The full proceedings and the organizing committee can be found at https://
With increasing population and water-use demands in Texas, accurate estimates of lake volumes are a critical part of planning for future water supply needs. Lakes are large, and surveying them is expensive in labor, time, and money.
We introduce the new time series analysis features of scikits.statsmodels. These include descriptive statistics, statistical tests, and several linear model classes: autoregressive (AR), autoregressive moving-average (ARMA), and vector autoregressive (VAR) models.
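A rough sketch of how these model classes are used, with simulated data chosen for illustration. It is written against the current statsmodels namespace; the 2011 release described here lived under scikits.statsmodels, so the import paths differ.

    import numpy as np
    from statsmodels.tsa.stattools import adfuller
    from statsmodels.tsa.ar_model import AutoReg
    from statsmodels.tsa.arima.model import ARIMA
    from statsmodels.tsa.api import VAR

    # Simulate a stationary bivariate series for illustration.
    rng = np.random.default_rng(0)
    e = rng.normal(size=(500, 2))
    y = np.empty_like(e)
    y[0] = e[0]
    for t in range(1, len(y)):
        y[t] = 0.5 * y[t - 1] + e[t]

    adf_stat, pvalue, *_ = adfuller(y[:, 0])          # unit-root test
    ar_res = AutoReg(y[:, 0], lags=2).fit()           # AR(2)
    arma_res = ARIMA(y[:, 0], order=(1, 0, 1)).fit()  # ARMA(1, 1)
    var_res = VAR(y).fit(maxlags=2)                   # VAR, up to 2 lags
    print(pvalue, ar_res.params, arma_res.aic, var_res.aic)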
In recent years, one of the fastest growing trends in information technology has been the move toward cloud computing. Provisioning computing resources on demand lets applications react dynamically to increased usage instead of keeping reserve capacity that often sits idle but must still be paid for.
The research described here yielded an open source interpolation library implemented in, and designed for use with, the Python programming language. This library, named smbinterp, provides interpolation to an arbitrary degree of accuracy.
Today's productivity programmers, such as scientists who need to write code to do science, are typically forced to choose between productive, maintainable code with modest performance (e.g. Python plus native libraries such as SciPy) or complex, brittle, hardware-specific code that entangles application logic with performance concerns but runs two to three orders of magnitude faster.
PyStream is a static compiler that can radically transform Python code and run it on a Graphics Processing Unit (GPU). Python compiled to run on the GPU is ~100,000x faster than when interpreted on the CPU.
Key questions that scientists and engineers typically want to address can be formulated in terms of predictive science. Questions such as: "How well does my computational model represent reality?", "What are the most important parameters in the problem?", and "What is the best next experiment to perform?" are fundamental in solving scientific problems.
Computational scientists seek to provide efficient, easy-to-use tools and frameworks that enable application scientists within a specific discipline to build and/or apply numerical models with up-to-date computing technologies that can be executed on all available computing systems.
IMUSim is a new simulation package developed in Python to model Inertial Measurement Units, i.e. devices which include accelerometers, gyroscopes and magnetometers. It was developed in the course of our research into algorithms for IMU-based motion capture, and has now been released under the GPL for the benefit of other researchers and users.
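To make concrete the kind of data such a simulator produces, here is a minimal generic sketch (deliberately not IMUSim's API) that synthesizes ideal gyroscope and accelerometer samples for a body rotating at a constant rate, then adds simple bias and noise:

    import numpy as np

    dt = 0.01                                   # 100 Hz sampling
    t = np.arange(0.0, 5.0, dt)
    omega = np.array([0.0, 0.0, np.pi / 4])     # rad/s about the body z axis

    gyro_true = np.tile(omega, (t.size, 1))     # ideal gyroscope output
    accel_true = np.tile([0.0, 0.0, 9.81], (t.size, 1))  # gravity reaction only

    rng = np.random.default_rng(0)
    gyro = gyro_true + 0.01 + rng.normal(scale=0.005, size=gyro_true.shape)
    accel = accel_true + rng.normal(scale=0.05, size=accel_true.shape)
    # A magnetometer model would add the local field vector in the same way.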
The National Centers for Environmental Prediction (NCEP) Global Forecast System (GFS) is a global spectral model used for aviation weather forecasting. It produces forecasts of wind speed and direction, temperature, humidity, and precipitation out to 192 hours, every 6 hours, over the entire globe.
Tuning an inertial confinement fusion pulse shape to a specific target design is a highly iterative process. When done manually, each iteration has large latency and is consequently time consuming.
In unit testing, the programmer codes the test cases, and also codes assertions that check whether each test case passed. In model-based testing, the programmer codes a "model" that generates as many test cases as desired and also acts as the oracle that checks the cases.
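A toy sketch of the contrast, in plain Python and not tied to any particular framework: the model generates as many cases as desired and doubles as the oracle that checks the system under test.

    import random

    def system_under_test(xs):          # implementation being tested
        return sorted(xs)

    def model(xs):                      # trusted, obviously-correct reference
        out = list(xs)
        for i in range(len(out)):
            for j in range(i + 1, len(out)):
                if out[j] < out[i]:
                    out[i], out[j] = out[j], out[i]
        return out

    random.seed(0)
    for _ in range(1000):               # as many generated cases as desired
        case = [random.randint(-10, 10) for _ in range(random.randint(0, 8))]
        assert system_under_test(case) == model(case), case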
Obtaining time-series monitoring data in a particular region often requires a significant effort involving visiting multiple websites, contacting multiple organizations and dealing with a variety of data formats.
We describe a method for constructing scientific programs where SymPy is used to model the mathematical steps in the derivation. With this workflow, each step in the process can be checked by machine, from the derivation of the equations to the generation of the source code.
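A minimal sketch of this workflow, using an expression chosen for illustration: SymPy performs a derivation step, the step is machine-checked, and source code is generated from the verified result.

    import sympy as sp

    x = sp.symbols('x')
    expr = sp.sin(x) * sp.exp(x)
    dexpr = sp.diff(expr, x)            # derivation step done symbolically

    # Machine-check the step against the hand-derived form.
    assert sp.simplify(dexpr - sp.exp(x) * (sp.sin(x) + sp.cos(x))) == 0

    print(sp.ccode(dexpr))              # generate C source from the result
    f = sp.lambdify(x, dexpr, 'numpy')  # or a fast NumPy-callable function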
Vision Spreadsheet is an environment for computer vision. It combines a spreadsheet with computer vision and scientific Python. The cells in the spreadsheet hold images, computations on images, measurements, and plots.
Global Arrays (GA) is a software system from Pacific Northwest National Laboratory that enables an efficient, portable, and parallel shared-memory programming interface to manipulate distributed dense arrays.
In this work we discuss gpustats, a new Python library for assisting in "big data" statistical computing applications, particularly Monte Carlo-based inference algorithms. The library provides a general code generation / metaprogramming framework for easily implementing discrete and continuous probability density functions and random variable samplers.
Crab is a flexible, fast recommender engine for Python that integrates classic information filtering recommendation algorithms into the world of scientific Python packages (NumPy, SciPy, Matplotlib). The engine aims to provide a rich set of algorithmic components from which you can construct a customized recommender system.
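To show the kind of computation such components assemble into, here is a generic user-based collaborative filtering sketch in NumPy; it is illustrative only and does not use Crab's own classes.

    import numpy as np

    # Rows are users, columns are items; 0 means unrated.
    ratings = np.array([[5, 4, 0, 1],
                        [4, 5, 1, 0],
                        [1, 0, 5, 4],
                        [0, 1, 4, 5]], dtype=float)

    def cosine_sim(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    def predict(user, item):
        # Weight other users' ratings of `item` by their similarity to `user`.
        others = [u for u in range(ratings.shape[0]) if u != user]
        sims = np.array([cosine_sim(ratings[user], ratings[u]) for u in others])
        vals = np.array([ratings[u, item] for u in others])
        mask = vals > 0
        return (sims[mask] @ vals[mask]) / (sims[mask].sum() + 1e-12)

    print(predict(0, 2))    # predicted rating of item 2 for user 0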
Sherpa is a generalized modeling and fitting package. Primarily developed for the Chandra Interactive Analysis of Observations (CIAO) package by the Chandra X-ray Center, Sherpa provides an Object-Oriented Programming (OOP) API for parametric data modeling.
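As a hedged sketch of what a parametric fit through this kind of OOP API looks like, the example below fits a 1D Gaussian using Sherpa's object-oriented classes as documented; the synthetic data and starting values are assumptions made for illustration.

    import numpy as np
    from sherpa.data import Data1D
    from sherpa.models.basic import Gauss1D
    from sherpa.stats import LeastSq
    from sherpa.optmethods import LevMar
    from sherpa.fit import Fit

    # Synthetic Gaussian-shaped data with noise.
    x = np.linspace(-5, 5, 101)
    rng = np.random.default_rng(0)
    y = 3.0 * np.exp(-0.5 * (x / 1.2) ** 2) + rng.normal(scale=0.1, size=x.size)

    data = Data1D('example', x, y)
    g = Gauss1D('g')
    g.fwhm, g.pos, g.ampl = 2.0, 0.0, 2.0      # starting guesses

    fit = Fit(data, g, stat=LeastSq(), method=LevMar())
    result = fit.fit()
    print(result.format())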
SPM.Python is a scalable, parallel, fault-tolerant version of the serial Python language, and can be deployed to create parallel capabilities to solve problems in domains spanning finance, life sciences, electronic design, IT, visualization, and research.