ABSTRACT
Rover is a mechatronic imaging device inserted into quotidian space, transforming the sights and sounds of the everyday through its peculiar modes of machine perception. Using computational light field photography and machine listening, it creates a kind of cinema following the logic of dreams: suspended but mobile, familiar yet infinitely variable in detail. Rover draws on diverse traditions of robotic exploration, landscape and still-life depiction, and audio field recording to create a hybrid form between photography and cinema. This paper describes the mechatronic, machine perception, and audio-visual synthesis techniques developed for the piece.
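The "suspended but mobile" imagery described above rests on the core operation of computational light field photography: synthetic-aperture refocusing, in which many slightly offset views of a scene are shifted in proportion to their camera offsets and averaged, so that one depth plane stays sharp while everything off-plane dissolves. A minimal shift-and-sum sketch (a generic illustration under assumed names, not Rover's actual pipeline):

```python
import numpy as np

def refocus(views, alpha):
    """Shift-and-sum synthetic-aperture refocusing over a camera grid.

    views: dict mapping (u, v) camera offsets to HxW images (NumPy arrays)
    alpha: pixel shift applied per unit of camera offset; varying it
           sweeps the synthetic focal plane through the scene
    """
    shifted = []
    for (u, v), img in views.items():
        # Translate each view in proportion to its offset on the camera
        # plane so that scene points at the chosen depth line up.
        shifted.append(np.roll(img, (int(round(alpha * v)),
                                     int(round(alpha * u))), axis=(0, 1)))
    # Averaging the aligned views keeps the chosen plane sharp and blurs
    # everything off-plane, mimicking one large synthetic aperture.
    return np.mean(shifted, axis=0)

# Toy demo: a plane whose apparent position shifts 1 px per unit of
# camera offset, captured by a 3x3 grid of viewpoints.
rng = np.random.default_rng(0)
base = rng.random((32, 32))
views = {(u, v): np.roll(base, (v, u), axis=(0, 1))
         for u in (-1, 0, 1) for v in (-1, 0, 1)}
result = refocus(views, alpha=-1.0)  # alpha chosen to realign this plane
```

Sweeping `alpha` after capture is what lets a light field camera refocus an already-taken photograph; here `alpha = -1.0` exactly undoes the toy scene's 1 px-per-offset parallax.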