Imaginando Macondo is a public artwork that commemorates Nobel Prize-winning author Gabriel García Márquez. It was first showcased at the Bogotá International Book Fair in April 2015 to an audience of more than 300,000 over the course of two weeks. The project, developed by George Legrady, Andres Burbano, and Angus Forbes, involved extensive collaboration between an international team of artists, designers, and programmers, including Paul Murray and Lorenzo Di Tucci of UIC. Viewers participate by submitting and classifying a photograph of their choice via a kiosk or their mobile phone. The classification is based on literary themes that occur in García Márquez’s work, and user-submitted photos appear alongside images produced by well-known Colombian photographers. An article describing the project was published in IEEE Computer Graphics & Applications in 2015.
The electromagnetic spectrum is a vast expanse of varied energies that have been with us since the origin of the universe. Despite the great range of energies in the electromagnetic spectrum, our human senses can only detect a very limited portion, a portion we call visible light. It has been only in recent history that we have created technologies that enable us to harness light and other energies. Even though we may not be able to see these other energies with the naked eye, we have employed them in our communications. Radio is one such technology, and the technological lens by which we “see” radio is a radio receiver. The RF project investigates creating pictures of the overall radio activity in a particular physical locale. Using an N6841A wideband RF sensor donated by Keysight Technologies, Brett Balogh, Anil Camci, Paul Murray, and Angus Forbes devised a series of projects that explore our “lived electromagnetism”, including an interactive sonification of electromagnetic activity, real-time information visualization applications, and a VR experience where common RF signals were identified in various social scenarios. These projects were shown at the opening night of VISAP’15, and presented the following day in an artist talk by Brett Balogh.
Video granular synthesis is an experimental method for the creative reshaping of one or more video signals based on granular synthesis techniques, normally applied only to audio signals. A wide range of creative effects are made possible through conceptualizing a video signal as being composed of a large number of “video grains.” These grains can be manipulated and maneuvered in a variety of ways, and a new video signal can then be created through a resynthesis of these altered grains. Video granular synthesis was first used in a composition by Christopher Jette, Kelland Thomas, Angus Forbes, and Javier Villegas, titled v→t→d. A description of this project was presented at the International Computer Music Conference in Athens, Greece (2014), and the piece was performed at Exploded View Microcinema (2014), as a University of Arizona Confluencenter event (2014), and again at ICMC in Denton, Texas (2015). A write-up of the approach was published in Computational Aesthetics in 2015.
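The underlying grain idea can be illustrated with a minimal NumPy sketch (the helper names here are illustrative, and the actual compositions apply far richer transformations): a video is sliced into short temporal grains, which can then be reordered or otherwise altered before resynthesis.

```python
import numpy as np

def extract_grains(video, grain_len, hop):
    """Slice a video (frames x H x W) into short temporal 'grains'."""
    return [video[i:i + grain_len]
            for i in range(0, len(video) - grain_len + 1, hop)]

def resynthesize(grains, order):
    """Rebuild a video by concatenating grains in a new order."""
    return np.concatenate([grains[i] for i in order], axis=0)

# Toy example: a 12-frame, 4x4 grayscale "video".
rng = np.random.default_rng(0)
video = rng.random((12, 4, 4))

grains = extract_grains(video, grain_len=3, hop=3)    # 4 grains of 3 frames
reordered = resynthesize(grains, order=[3, 1, 2, 0])  # swap outer grains
```

The same scheme generalizes to spatiotemporal grains (small tiles of pixels over a few frames), which is where most of the expressive range of the technique lies.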
Current authoring interfaces for processing audio in 3D environments are limited by a lack of specialized tools for 3D audio, separate editing and rendering modes, and platform-dependency. To address these limitations, we introduce a novel web-based user interface that makes it possible to control the binaural or Ambisonic projection of a dynamic 3D auditory scene. Specifically, our interface enables a highly detailed bottom-up construction of virtual sonic environments by offering tools to populate navigable sound fields at various scales (i.e. from sound cones to 3D sound objects to sound zones). Using modern web technologies, such as WebGL and Web Audio, and adopting responsive design principles, we developed a cross-platform UI that can operate on both personal computers and tablets. This enables our system to be used for a variety of mixed reality applications, including those where users can simultaneously manipulate and experience 3D sonic environments. An article describing this project is currently in submission to IEEE 3DUI’16.
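As one concrete piece of such a scene model, a sound cone conventionally attenuates a source according to the listener’s angle off its axis. The sketch below implements the standard inner/outer-cone ramp (as in Web Audio’s panner model); the function name and parameters are illustrative, not taken from the project’s actual API.

```python
def cone_gain(angle_deg, inner_deg, outer_deg, outer_gain):
    """Directional attenuation for a sound cone: full gain inside the
    inner cone, outer_gain beyond the outer cone, and a linear ramp
    in between."""
    half = abs(angle_deg)
    if half <= inner_deg / 2:
        return 1.0
    if half >= outer_deg / 2:
        return outer_gain
    t = (half - inner_deg / 2) / (outer_deg / 2 - inner_deg / 2)
    return 1.0 + t * (outer_gain - 1.0)

# A listener 30 degrees off-axis from a cone with a 40-degree inner
# angle and a 90-degree outer angle sits on the ramp between 1.0 and 0.2.
g = cone_gain(30, 40, 90, 0.2)
```

Stacking many such cones, each attached to a movable 3D sound object, is one way sound objects and sound zones can be composed bottom-up into a navigable field.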
Works by media artists tend to be evaluated in terms of either cultural or pragmatic utility. The function of the media arts is often described by highlighting the societal contribution of creating products of cultural enrichment, introducing tools for promoting innovation, or providing the means by which to think critically about the ethical ramifications of technology. The media arts are also seen as having the potential to aid in solving specific scientific and engineering problems, especially those having to do with creatively representing, interacting with, and reasoning about data. Many media artists also characterize their own work as critical reflection on technology, embracing technology while questioning the implications of its use. Articulating these multifaceted tensions between artistic outlooks and technical engagement in interdisciplinary art-science projects can be complex: What is the role of the artist in research collaborations? Many artists have wrestled with this question, but there is no clear methodological approach to conducting media arts research in these contexts. Articles presented at VISAP’14, ArtsIT’14, and SIGGRAPH’15, and an article recently accepted to Leonardo, investigate the role of the media artist in art-science contexts; articles written in collaboration with George Legrady explore a range of issues related to presenting data visualization in public arts venues.
Turbulent World is a time-based artwork displaying an animated atlas that changes in response to increasing deviations in world temperature over the next century. The changes are represented by visual eddies, vortices, and quakes that distort the original map. Additionally, the projected temperatures are themselves shown across the world, increasing or decreasing in size to indicate the severity of the change. The data used in the artwork was generated by a sophisticated climate model that predicts the monthly variation in surface air temperature across different regions of the world through the end of the century. A write-up of the project was presented at ISEA’15 in Vancouver, British Columbia.
Signal alteration is a well-established means of artistic expression in the visual arts. A series of collaborations between Javier Villegas and Angus Forbes explores a range of interactive projects featuring novel video processing techniques in mobile and desktop environments. We introduce a powerful strategy for the manipulation of video signals that combines the processes of analysis and synthesis. After analysis, a signal is represented by a series of elements or features. This representation can be more appropriate than the original for a wide range of applications, such as the compression and transmission of video signals, and it can also be used to generate new, modified instances of the starting signal. Our Analysis/Synthesis (A/S) approach is general enough to support the creation of non-photorealistic representations that are not intended to mimic art styles from the past, but instead seek out novel creative renditions of moving images. Our methods are particularly powerful in interactive arts projects, as they enable even drastic manipulations of the input image while still maintaining fundamental aspects of its original identity. Articles describing the theory, application, and evaluation of these A/S techniques have been presented at ACM MM’14, VPQM’14, and HVEI’15.
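A toy version of the A/S pipeline, assuming a drastically simplified feature model (the brightest pixels of a grayscale frame) and hypothetical function names, might look like this: analysis reduces a frame to a feature list, and synthesis rebuilds a frame from those features, optionally perturbing them while the feature identities persist.

```python
import numpy as np

def analyze(frame, n_features):
    """Analysis: reduce a grayscale frame to its n brightest pixel
    positions and intensities (a simple feature representation)."""
    flat = np.argsort(frame, axis=None)[-n_features:]
    ys, xs = np.unravel_index(flat, frame.shape)
    return list(zip(ys, xs, frame[ys, xs]))

def synthesize(features, shape, jitter=0, rng=None):
    """Synthesis: rebuild a frame from the features, optionally
    displacing each one to alter the image while keeping identity."""
    rng = rng or np.random.default_rng(0)
    out = np.zeros(shape)
    for y, x, v in features:
        if jitter:
            y = int(np.clip(y + rng.integers(-jitter, jitter + 1), 0, shape[0] - 1))
            x = int(np.clip(x + rng.integers(-jitter, jitter + 1), 0, shape[1] - 1))
        out[y, x] = v
    return out

frame = np.arange(16, dtype=float).reshape(4, 4)
feats = analyze(frame, n_features=4)      # keeps the four brightest pixels
rebuilt = synthesize(feats, frame.shape)  # exact at the feature sites
```

Richer feature models (contours, blobs, optical-flow tracks) follow the same pattern, which is what makes the approach general.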
Poetry Chains is a series of animated text visualizations of the poetry of Emily Dickinson, first showcased in the Hybridity and Synesthesia exhibition at Lydgalleriet, as part of the Electronic Literature Organization Festival in Bergen, Norway in 2015. The project is inspired by Lisa Samuels’ and Jerome McGann’s reading of a seemingly whimsical fragment found in a letter written by Dickinson: “Did you ever read one of her Poems backward, because the plunge from the front overturned you?” They investigate what it might mean to interpret this question literally, asking how a reader could “release or expose the poem’s possibilities of meaning” in order to explore the ways in which language is “an interactive medium.” Poetry Chains provides a continuous, dynamic remapping of Dickinson’s poems by treating her entire corpus as a single poem. A depth-first search is used to create collocation pathways between two words within the corpus, performing a non-linear “hopscotch” (with a poetic rather than narrative destabilization). A version of the animations (with no interaction) is available online, developed by Angus Forbes and Paul Murray.
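The depth-first search over collocations can be sketched as follows, using a tiny stand-in corpus of Dickinson lines; the installation’s actual pathway construction and animation are considerably richer, and the helper name is illustrative.

```python
def collocation_path(corpus_lines, start, goal):
    """Depth-first search for a chain of words linking start to goal,
    where two words are connected if they co-occur on a line."""
    # Build a word -> co-occurring-words graph from the corpus.
    graph = {}
    for line in corpus_lines:
        words = line.lower().split()
        for w in words:
            graph.setdefault(w, set()).update(words)
    stack, seen = [[start]], {start}
    while stack:
        path = stack.pop()
        if path[-1] == goal:
            return path
        for nxt in sorted(graph.get(path[-1], ())):
            if nxt not in seen:
                seen.add(nxt)
                stack.append(path + [nxt])
    return None

corpus = ["hope is the thing with feathers",
          "the soul selects her own society",
          "i dwell in possibility"]
path = collocation_path(corpus, "hope", "society")
```

Here “hope” and “society” never share a line, but the shared word “the” bridges the two poems, so a pathway exists; over the full corpus such chains become long, surprising traversals.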
The Fluid Automata system comprises an interactive fluid simulation and a vector visualization technique that can be incorporated into media arts projects. These techniques have been adapted for various configurations, including mobile applications, interactive 2D and 3D projections, and multi-touch tables, and have been presented in a number of different environments, including galleries, conferences, and a virtual reality research lab: Science City at Tucson Festival of Books (2013); Center for NanoScience Institute in Santa Barbara (2012); IEEE VisWeek Art Show in Providence, Rhode Island, curated by Bruce Campbell and Daniel Keefe (2011); and Questionable Utility at University of California, Santa Barbara, organized by Xárene Eskandar (2011). The technical details of the Fluid Automata system are described in a paper presented at Computational Aesthetics in 2013; an expanded version of the paper, including a discussion of the history of artworks making use of cellular automata concepts, was published as a chapter in the 2014 Springer volume, Cellular Automata in Image Processing and Geometry, edited by Paul Rosin, Adam Adamatzky, and Xianfang Sun.
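The papers cited above give the actual update rules; as a flavor of the cellular-automaton approach, a generic sketch (not the Fluid Automata rule itself) is a local relaxation of a 2D vector field, where each cell responds only to its immediate neighbors.

```python
import numpy as np

def ca_step(field, rate=0.5):
    """One cellular-automaton update: each cell's 2D vector relaxes
    toward the mean of its four neighbors (toroidal wrap)."""
    neighbors = (np.roll(field, 1, 0) + np.roll(field, -1, 0) +
                 np.roll(field, 1, 1) + np.roll(field, -1, 1)) / 4.0
    return (1 - rate) * field + rate * neighbors

# A single impulse (e.g. from a touch event) spreads across the grid.
field = np.zeros((8, 8, 2))
field[4, 4] = [1.0, 0.0]
for _ in range(10):
    field = ca_step(field)
```

Because every rule is local, the simulation parallelizes trivially, which is what makes CA-based fluids responsive enough for touch interaction on mobile hardware.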
The interactive multimedia composition Annular Genealogy, created in collaboration with Kiyomitsu Odai, explores the use of orchestrated feedback as an organizational theme. The composition is performed by two players, each of whom uses a separate digital interface to create and interact with the parallel iterative processing of compositional data in both the aural and visual domains. In the aural domain, music is generated using a stochastic process that sequences tones mapped to a psycho-acoustically linear Bark scale. The timbre of these tones and the parameters determining their sequencing are determined from various inputs, most notably the 16-channel output of the previous pass, fed back into the system via a set of microphones. In the visual domain, animated, real-time graphics are generated using custom software to create an iterative visual feedback loop. The composition brings various layers of feedback into a cohesive compositional experience. These feedback layers are interconnected, but can be broadly categorized as physical feedback, internal or digital feedback, interconnected or networked feedback, and performative feedback. An article describing our approach was presented at ICMC’12.
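The Bark mapping can be made concrete with Zwicker’s standard approximation of the critical-band scale (a common formula, not necessarily the exact one used in the piece): tones spaced evenly in Bark are perceptually even, whereas linear Hz spacing compresses the perceived intervals at high frequencies.

```python
import math

def hz_to_bark(f):
    """Zwicker's approximation of the Bark critical-band scale."""
    return 13 * math.atan(0.00076 * f) + 3.5 * math.atan((f / 7500) ** 2)

# The step from 100 to 200 Hz spans far more of the Bark scale than
# the same 100 Hz step from 5000 to 5100 Hz.
low_step = hz_to_bark(200) - hz_to_bark(100)
high_step = hz_to_bark(5100) - hz_to_bark(5000)
```

Sequencing tones at equal Bark increments therefore yields a pitch lattice that listeners hear as uniformly spaced.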
Infrequent Crimes is a data visualization piece that iterates through a list of unusual crimes that occurred in San Francisco within the last year. Each square accompanying a crime type represents a single incident and displays its location. The longitude and latitude of each incident, gathered from San Francisco police reports, is indicated either by a map tile or an image taken from Google Maps Street View. Infrequent Crimes was part of the Super Santa Barbara 2011 exhibition at Contemporary Arts Forum in Santa Barbara, California, curated by Warren Schultheis. It was also featured at Spread: California Conceptualism, Then and Now at SOMArts in San Francisco, CA, curated by OFF Space. Excerpts can be seen here.
Coming or going is a fixed video piece created with custom software in which drums are used to trigger the creation of procedurally generated geometric abstractions and the application of various visual effects. This project was most recently shown as part of Idea Chain, the Expressive Arts Exhibition, at Koç University Incubation Center in Istanbul, Turkey (2015); and was featured in AVANT-AZ at Exploded View Microcinema in Tucson, Arizona (2014), curated by David Sherman and Rebecca Barten. An early version of the software was used in a live performance with live coder Charlie Roberts at Something for Everyone, the Media Arts and Technology End-of-the-Year festival at University of California, Santa Barbara (2009).
The New Dunites is a site-specific media art project comprising research, an augmented reality application, and an interactive multimedia installation. The project investigates a culturally unique and biologically diverse geographic site, the Guadalupe-Nipomo Coastal Dunes. Buried under these dunes are the ruins of the set of DeMille’s 1923 epic film, “The Ten Commandments.” The project employed Ground Penetrating Radar (GPR) technology to gather data on this artifact of film history. In an attempt to articulate and mediate the interaction between humans and this special environment, the New Dunites project, led by Andres Burbano, Solen Kiratli DiCicco, and Danny Bazo in collaboration with Angus Forbes and Andrés Barragán, constructed an ecology of interfaces (from mobile device apps to gallery installations) that made use of this data as their primary input. The artistic outputs include interactive data visualizations, a physical data sculpture, novel temporal isosurface reconstructions of the original film, and video documentation describing the data collection process and introducing the project as a work of media archaeology. The project was selected for an “Incentivo Produccion” award by Vida 13.0 and has been presented at the Todaiji Culture Center in Nara, Japan. A write-up of the project was published at ACM MM’12.
Cell Tango is a dynamically evolving collection of cellphone photographs contributed by the general public. The images and accompanying descriptive categories are projected large scale in the gallery and dynamically change as the image database grows over the course of the installation. The project is a collaboration with artist George Legrady and (in later iterations) composer Christopher Jette. Cell Tango was featured at the Inauguration of the National Theatre Poitiers, organized by Hubertus von Amelunxen, Poitiers, France (2008); as a featured installation at Ford Gallery, Eastern Michigan University, Ypsilanti (2008-2009), curated by Sarah Smarch; as part of “Scalable Relations,” curated by Christiane Paul, Beall Center for Art & Technology, UC Irvine (2009); and as a featured installation at the Davis Museum and Cultural Center, Wellesley College, Wellesley (2009), curated by Jim Olson. Sonification was added and premiered at the Lawrence Hall of Science, UC Berkeley (2010); and featured at the Poznan Biennale, Poland (2010). More information about the project can be found here.
Data Flow consists of three dynamically generated data visualizations that map members’ interactions with the Corporate Executive Board’s web portal. The three visualizations are situated on the “Feature Wall” spanning the 22nd to 24th floors of the Corporate Executive Board Corporation, Arlington, Virginia. Each of the three visualizations consists of three horizontally linked screens featuring animations at 4080 x 768 pixel resolution. The flow of information is as follows: CEB IT produces appropriately formatted data, which is retrieved every ten minutes by the Data Flow project server and stored in a local database, where it is kept for 24 hours. The project server also retrieves longitude and latitude for location data and discards any data that does not meet the requirements of the visualizations. The stored data is then forwarded to three visualization computers that each process the received data according to their individual animation requirements. Data Flow was developed in collaboration with George Legrady in 2009, commissioned by Gensler Design. More information about the project can be found here and here.
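The ingest-filter-expire cycle described above could be sketched as follows; the class and field names are hypothetical, standing in for the project server’s actual database and data format.

```python
class DataFlowStore:
    """Sketch of the project server's local cache: records arrive on
    each poll, carry a timestamp, and are kept for 24 hours."""
    TTL = 24 * 60 * 60  # retention window, in seconds

    def __init__(self):
        self.records = []

    def ingest(self, rows, now):
        # Keep only rows with usable location data, mirroring how the
        # server discards records the visualizations cannot place.
        self.records.extend(
            {"ts": now, **r} for r in rows
            if r.get("lat") is not None and r.get("lon") is not None)
        self.prune(now)

    def prune(self, now):
        self.records = [r for r in self.records
                        if now - r["ts"] < self.TTL]

store = DataFlowStore()
store.ingest([{"lat": 38.9, "lon": -77.1}, {"lat": None, "lon": None}], now=0)
store.ingest([{"lat": 51.5, "lon": -0.1}], now=25 * 3600)  # first row expires
```

Downstream, each of the three visualization computers would read from a store like this and apply its own animation-specific processing.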