Archive for the ‘code’ Category

Upcoming workshop at FSU, “I Know Where Your Cat Lives”: The Process of Mapping Big Data for Inconspicuous Trends

Monday, March 16th, 2015

I’m doing a workshop / lecture as part of the ongoing Digital Scholars digital humanities discussion group here at Florida State University. The workshop is free and open to the public.

Wednesday, March 25, 2:00-3:30 pm
Fine Arts Building (FAB) 320A [530 W. Call St. map]


“I Know Where Your Cat Lives”: The Process of Mapping Big Data for Inconspicuous Trends

Big Data culture has its supporters and its skeptics, but it can have critical or aesthetic value even for those who are ambivalent. How is it possible, for example, to consider data as more than information — as the performance of particular behaviors, the practice of communal ideals, and the ethic motivating new media displays? Professor Owen Mundy from FSU’s College of Fine Arts invites us to take up these questions in a guided exploration of works of art that will highlight what he calls “inconspicuous trends.” Using the “I Know Where Your Cat Lives” project as a starting point, Professor Mundy will introduce us to the technical and design process for mapping big data in projects such as this one, showing us the various APIs (Application Programming Interfaces) that are constructed to support them and considering the various ways we might want to visualize their results.

This session offers a hands-on demonstration and is designed with a low barrier of entry in mind. For those completely unfamiliar with APIs, this session will serve as a useful introduction, as Professor Mundy will walk us through the process of connecting to and retrieving live social media data from the Instagram API and rendering it using the Google Maps API. Participants should not worry if they do not have expertise in big data projects or are still learning the associated vocabulary. We come together to learn together, and all levels of skill will be accommodated, as will all attitudes and leanings. Desktop computers are installed in FAB 320A, but participants are welcome to bring their own laptops and wireless devices.
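The workshop walks through pulling geotagged social media records and rendering them with the Google Maps API. A minimal sketch of the middle step, shaping API-style records into map markers; the record layout below is hypothetical (real Instagram API responses differed and required an access token), and `records_to_markers` is my own illustrative name:

```python
# Shape geotagged records (as a photo-sharing API might return them) into
# marker objects that a Google Maps JavaScript snippet could plot.

def records_to_markers(records):
    """Keep only records with usable coordinates; emit marker dicts."""
    markers = []
    for rec in records:
        loc = rec.get("location") or {}
        lat, lng = loc.get("latitude"), loc.get("longitude")
        if lat is None or lng is None:
            continue  # media without geodata cannot be mapped
        markers.append({"lat": float(lat), "lng": float(lng),
                        "title": rec.get("caption", "")})
    return markers

if __name__ == "__main__":
    sample = [
        {"location": {"latitude": 30.44, "longitude": -84.29}, "caption": "FSU"},
        {"location": None, "caption": "no geodata"},
    ]
    print(records_to_markers(sample))
```

Each resulting dict maps directly onto the `position` and `title` options of a Google Maps marker.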

Participants are encouraged to read the following in advance of the meeting:

and to browse the following resources for press on Mundy’s project:

For further (future) reading:

I Know Where Your Cat Lives launched

Wednesday, July 16th, 2014

I just launched a new ongoing project this week. Here’s the text, a video and some screenshots. I’ll post more about how I made it soon.

Welcome to today’s internet—you can buy anything, every website is tracking your every move, and anywhere you look you find videos and images of cats. Currently, there are 15 million images tagged with the word “cat” on public image hosting sites, and thousands more are uploaded daily from countless locations around the globe.

“I Know Where Your Cat Lives” iknowwhereyourcatlives.com is a data experiment that visualizes a sample of 1 million public pics of cats on a world map, locating them by the latitude and longitude coordinates embedded in their metadata. The cats were accessed via publicly available APIs provided by popular photo sharing websites. The photos were then run through various clustering algorithms using a supercomputer at Florida State University in order to represent the enormity of the data source.
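The post mentions running the photos through clustering algorithms so a million points stay legible on a map. As a stand-in for that step (not the actual algorithms run on FSU’s supercomputer), here is the simplest possible approach, grid-based binning of (lat, lng) pairs; `grid_cluster` is an illustrative name of my own:

```python
# Bin geographic points into grid cells so dense regions can be drawn as
# single weighted markers instead of a million overlapping dots.
from collections import defaultdict

def grid_cluster(points, cell_deg=1.0):
    """Group (lat, lng) pairs into cells of `cell_deg` degrees;
    return {cell_key: count}."""
    cells = defaultdict(int)
    for lat, lng in points:
        key = (int(lat // cell_deg), int(lng // cell_deg))
        cells[key] += 1
    return dict(cells)

if __name__ == "__main__":
    pts = [(48.85, 2.35), (48.86, 2.34), (40.71, -74.0)]
    print(grid_cluster(pts))  # the two nearby Paris points share one cell
```

Real map-clustering schemes (k-means, quadtrees) refine this idea, but the goal is the same: one marker per region, weighted by count.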

This project explores two uses of the internet: the sociable and humorous appreciation of domesticated felines, and the status quo of personal data usage by startups and international megacorps who are riding the wave of decreased privacy for all. This website doesn’t visualize all of the cats on the net, only the ones that allow you to track where their owners have been.

Folks can also contribute to a Kickstarter to help with hosting costs.

[Screenshots of iknowwhereyourcatlives.com]

Term vs. Term for Digital Public Library of America hackathon

Monday, April 14th, 2014

I made a small app to compare the number of search results for two phrases from the Digital Public Library of America for a hackathon / workshop here at Florida State next week.

http://owenmundy.com/work/term-vs-term


Digital Humanities Hackathon II – Digital Public Library of America

Monday, April 21, 2:00-3:30 p.m.
Strozier Library, Scholars Commons Instructional Classroom [MAP]

The Digital Scholars Reading and Discussion Group will simulate its second “hackathon” on April 21, allowing participants to learn more about the back-end structure of the Digital Public Library of America. With its April 2013 launch, the DPLA became the first all-digital library that aggregates metadata from collections across the country, making them available from a single point of access. The DPLA describes itself as a freely available, web-based platform for digitized cultural heritage projects as well as a portal that connects students, teachers, scholars, and the public to library resources occurring on other platforms.

From a critical point of view, the DPLA simultaneously relies on and disrupts the principles of location and containment, making its infrastructure somewhat interesting to observe.

In this session, we will visit the DPLA’s Application Programming Interface (API) codex to observe some of the standards that contributed to its construction. We will consider how APIs function, how and why to use them, and who might access their metadata and for what purposes. For those completely unfamiliar with APIs, this session will serve as a useful introduction, as well as a demonstration of why a digital library might also want to serve as an online portal. For those more familiar with APIs, this session will serve as an opportunity to try on different tasks using the metadata that the DPLA aggregates from collections across the country.

At this particular session, we are pleased to be joined by Owen Mundy from FSU Department of Art and Richard Urban from FSU College of Communication and Information, who have considered different aspects of working with APIs for projects such as the DPLA, including visualization and graphics scripting, and developing collections dashboards.

As before, the session is designed with a low barrier of entry in mind, so participants should not worry if they do not have programming expertise or are still learning the vocabulary associated with open-source projects. We come together to learn together, and all levels of skill are accommodated, as are all attitudes and leanings.

Participants are encouraged to explore the Digital Public Library of America site prior to our meeting and to familiarize themselves with the history of the project. Laptops will be available for checkout, but attendees are encouraged to bring their own.

After Douglas Davis – The World’s First Collaborative Sentence

Thursday, September 5th, 2013


README for After Douglas Davis
==============================

Statement
————–

The World’s First Collaborative Sentence was created by Douglas Davis in 1994 and donated to the Whitney Museum of American Art in 1995. Much like today’s blog environments and methods for crowdsourcing knowledge, it allowed users to contribute practically any text or markup to a never-ending sentence with no limits on speech or length.

At some point the sentence stopped functioning, and in early 2012 the Whitney Museum undertook a “preservation effort” to repair and relaunch the project. Measures were taken during the “restoration” to stay true to the original intent of the artist, leaving dead links and the original code in place.

During the preservation, the curators placed small sections of garbled ASCII text from the project on GitHub with the hope that others would “fork” the data and repair the original. However, the Whitney Museum failed to recognize that the collaborative culture of the net Davis predicted has actually arrived. This is evident not only in sites like Wikipedia, Facebook, and Tumblr, but in the open source movement, which brings us Linux, Apache, and PHP (the very technologies used to view this page), as well as Firefox, Arduino, Processing, and many more.

In the spirit of open source software and artists like Duchamp, Levine, runme.org and Mandiberg, on September 5, 2013, I “forked” Douglas Davis’ Collaborative Sentence by downloading all pages and constructing from scratch the functional code which drives the project. I have now placed this work on Github with the following changes:

1. All pages are updated to HTML5 and UTF-8 character encoding
2. The functional code was rewritten from scratch, including a script to remove malicious code
3. The addition of this statement

I was originally disappointed the Whitney Museum didn’t place the full source code in the public domain. What better way to make it possible for artists and programmers to extend the life of Davis’ project by learning from, reusing, and improving the original code than to open source this work? Though, possibly like Davis, my motivation is largely an interest in constructing a space for dialog, framing distinct questions and new possibilities, and waiting to see what comes of this gesture.
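Change 2 above mentions a script to remove malicious code; the project itself ships HTML Purifier (PHP) for that job. As a rough Python analogue of the idea, here is a tiny allowlist filter built on the standard library’s `html.parser`. It is an illustration only, far weaker than HTML Purifier and not suitable for production:

```python
# Strip user-submitted HTML down to a small allowlist of harmless tags,
# escaping everything else as plain text.
from html import escape
from html.parser import HTMLParser

ALLOWED = {"b", "i", "em", "strong", "a", "br"}

class Sanitizer(HTMLParser):
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []

    def handle_starttag(self, tag, attrs):
        if tag in ALLOWED:
            # drop all attributes, so no onclick=, href="javascript:...", etc.
            self.out.append(f"<{tag}>")

    def handle_endtag(self, tag):
        if tag in ALLOWED:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        self.out.append(escape(data))

def sanitize(html_text):
    s = Sanitizer()
    s.feed(html_text)
    s.close()
    return "".join(s.out)

if __name__ == "__main__":
    # the <script> tag is dropped; its contents survive only as escaped text
    print(sanitize('<b>hi</b><script>alert(1)</script>'))
```

The key design choice, which HTML Purifier shares, is allowlisting: rather than trying to enumerate every dangerous construct, only known-safe tags pass through.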

Included software
————–
HTML Purifier http://htmlpurifier.org/

Live version
————–
Enter After Douglas Davis

About the author
————–
Owen Mundy http://owenmundy.com/

Give Me My Data visualizations / tutorials

Monday, July 15th, 2013

I received two separate emails recently sharing images and sample code for visualization projects produced with the help of Give Me My Data. I’m happy to share them both here:


Alex Hornbake posted a visualization mapping Facebook friend locations and connections in D3. You can view it here, and there’s a Gist with all the code here. His image shows where his connections are located, or have moved to or from. As abstract and beautiful as the image is, one can still make out a rough outline of the United States and even guess at specific cities like Atlanta, New York, Miami, and Los Angeles.

Rik Van Bruggen, of Neo Technology, used the “mutual friends graph” to create an interactive neo4j graph database. Following is more information from the tutorial he produced.

Ever since Facebook promoted its “graph search” methodology, lots of people in our industry have been waking up to the fact that graphs are über-cool. Thanks to the powerful query possibilities, people like Facebook, Twitter, LinkedIn, and let us not forget, Google have been providing us with some of the most amazing technologies. Specifically, the power of the “social network” is tempting many people to get their feet wet, and to start using graph technology. And they should: graphs are fantastic at storing, querying and exploiting social structures, stored in a graph database.

The first step to take was to get access to my own Facebook data. Obviously there is the Facebook graph API, but I hope that by now you would realise that is just not my cup of tea. Sounded too exhausting :) … So: I found myself a little tool that would allow me to download the data from Facebook in a workable format. Give Me My Data provides a number of different export options, but I chose the “Mutual Friends Network Graph,” as it would give me the most info with regards to my actual social network.


Use Processing and FFmpeg to export HD video

Monday, January 21st, 2013


I’ve been trying to find a dependable method for exporting HD video from Processing sketches that contain dynamic data and movement as well as complex 3-dimensional shapes rendered using OpenGL. QuickTime’s screen recording option hogs memory and causes frames to skip, as does ScreenFlow, my usual go-to for screen recording. The best way I have found so far is to export actual-resolution PNG files using “save()” and create an HD video using FFmpeg.

FFmpeg is a very fast video and audio converter that can also grab from live audio/video sources or create and compress video from multiple still images. It is used from the command line and gives one control of bitrates, codecs, formats, resolution, quality, metadata, and many more options for working with video.

To install FFmpeg with MacPorts:
sudo port install ffmpeg

# view help
ffmpeg -h

# list encoders
ffmpeg -encoders

# list codecs
ffmpeg -codecs

# use format
ffmpeg [global_options] {[input_options] -i 'input_file'} ... {[output_options] 'output_file'} ...

# make a movie from matching .png files, write over original, codec = libx264 (H.264), framerate = 30, 1 pass, resolution = 1920×1080, video bitrate = 6Mbits, format = mp4
ffmpeg -y -pattern_type glob -i 'p_*.png' -vcodec libx264 -r 30 -q 100 -pass 1 -s 1920x1080 -vb 6M -threads 0 -f mp4 file.mp4

# convert the video file to Apple ProRes codec for use in Final Cut
ffmpeg -y -i file.mp4 -vcodec prores -vb 6M -r 30 -s 1920x1080 -f mov file.mov

# here’s another example conversion command. this one has a much higher bitrate
ffmpeg -y -i file_01.mp4 -b:v 40M -vcodec libx264 -pass 1 file_02.mp4

For example, using this Processing sketch, you generate the PNG files, and then run the code above on the command line to make this file:

Also see this video for a more detailed example
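For repeated renders it can help to script the invocation so every export uses the same settings. A minimal sketch (function name and defaults are my own) that assembles the core of the libx264 command shown above via Python’s subprocess module; FFmpeg must be installed for the final call to work:

```python
# Build (and optionally run) the ffmpeg command that turns globbed PNG
# frames into an H.264 mp4, mirroring the command-line example above.
import subprocess

def png_to_mp4_cmd(pattern="p_*.png", out="file.mp4", fps=30,
                   size="1920x1080", bitrate="6M"):
    """Return the ffmpeg argument list for globbed PNGs -> H.264 mp4."""
    return ["ffmpeg", "-y", "-pattern_type", "glob", "-i", pattern,
            "-vcodec", "libx264", "-r", str(fps), "-s", size,
            "-vb", bitrate, "-f", "mp4", out]

if __name__ == "__main__":
    cmd = png_to_mp4_cmd()
    print(" ".join(cmd))
    # subprocess.run(cmd, check=True)  # uncomment to actually render
```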


Processing / OpenGL glitches from recent projects

Saturday, January 19th, 2013

Enjoying all the screenshots of glitches that appeared as I was testing the visualization for the Flashpoint show in D.C. Here is a sample.

Grid, Sequence Me @ Flashpoint Gallery, Washington D.C.

Thursday, January 17th, 2013


Surrounded by images of cross-sectioned buildings and source code excerpts, gallery visitors encounter fragments of Washington, DC architecture—a vaguely familiar roofline or grid of office windows—remixed with data and source code representing the latest housing sales in the area. Constantly changing, the live data streams into the gallery from both local sources (DC short sale listings) and national (federal policy sites), emphasizing the effects of related micro-macro shifts.


Generated with custom software, these fragments echo financial systems and housing market fluctuations. They mirror mortgages repackaged and sold, titles lost in administrative tape, and dreams confused by legal jargon. Like the complex financial systems of the housing market heyday, the software generates an infinite number of arrangements. The complexity of unique and dynamically-created algorithmic outcomes contrasts with the comforting predictability referenced in the exhibition’s title, “Grid, Sequence Me.”

—Joelle Dietrick and Owen Mundy


Packet Switching, College of Journalism and Communications at the University of Florida

Wednesday, November 14th, 2012

Joelle and I have completed our Packet Switching (Weimer Hall) commission in the College of Journalism and Communications at the University of Florida. Read about the process for this project here.

Packet Switching (Weimer Hall), (detail) College of Journalism and Communications, University of Florida, inkjet on polyester on panel, 177.5 ft. x 20.21 ft., 2012; Photograph by Steve Johnson / UF College of Journalism and Communications


Packet Switching project: ColladaFragmenter software, Kassel, Germany and University of Florida Public Commission

Tuesday, August 21st, 2012

Joelle Dietrick and I embarked on a new body of work this summer called “Packet Switching.” Inspired by her Sherwin Series images and wall paintings, and my work deconstructing and re-visualizing source code and other data, we’ve created two new software projects, as well as a series of limited edition prints, large photo installations, wall-sized paintings, and animations.

The full statement explains our process and intent clearly:

Packet Switching is an ongoing body of work by Joelle Dietrick and Owen Mundy that visualizes architecture as fragments affected by economic and communications systems.

The title of the series references how contemporary communications systems break digital files into smaller manageable blocks of data called packets. The packets are then sent through a network, each taking the quickest route available, and reassembled once they reach their destination. One JPG image, for example, might be broken into several packets, each of which may travel a different path through the net, even through different cities, before being recompiled into a copy of the original file.

To reference this common process used in networked systems, we wrote custom software that deconstructs a 3D model’s source code and produces unique fragments. We further remixed these fragments using an original application created in Processing. The resulting images become limited edition prints, large photo installations, wall-sized paintings, and animations.

Our process underscores how incidental fragmentation and automation can streamline markets, but also make them vulnerable to systemic failure. The use of architecture specifically points to recent real estate market volatility and considers how the communication-technology-enabled pursuit of profit margins alters our most basic needs.
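The statement above describes packet switching: a file split into numbered blocks that travel independently and are reassembled at the destination. A toy simulation of that round trip (all names and the tiny packet size are illustrative):

```python
# Simulate packet switching: split bytes into sequence-numbered packets,
# let them arrive in any order, and reassemble the original.
import random

PACKET_SIZE = 4  # bytes per packet; tiny, for illustration only

def to_packets(data: bytes):
    """Split data into (sequence_number, chunk) packets."""
    return [(i, data[i:i + PACKET_SIZE])
            for i in range(0, len(data), PACKET_SIZE)]

def reassemble(packets):
    """Restore the original bytes regardless of arrival order."""
    return b"".join(chunk for _, chunk in sorted(packets))

if __name__ == "__main__":
    original = b"one JPG image, broken into packets"
    packets = to_packets(original)
    random.shuffle(packets)  # packets take different routes through the net
    assert reassemble(packets) == original
```

The sequence number is what makes out-of-order delivery harmless, the same property the artworks exploit by deliberately leaving the fragments unassembled.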

The first piece of software, which “deconstructs a 3D model’s source code and produces unique fragments,” is open source and available on GitHub. Essentially, the PHP software parses a 3D COLLADA file and exports a set number of geometries that can then be further broken down and used in an artwork or design.
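The ColladaFragmenter itself is PHP; a Python sketch of the same first step, parsing a COLLADA (.dae) file, which is plain XML, and pulling out its geometry nodes, might look like this. The namespace URI is the standard COLLADA 1.4 schema; `geometry_ids` is my own illustrative name:

```python
# Parse a COLLADA document and list the ids of its <geometry> elements,
# the units the fragmenter carves the model into.
import xml.etree.ElementTree as ET

NS = {"c": "http://www.collada.org/2005/11/COLLADASchema"}

def geometry_ids(dae_text):
    """Return the id of every <geometry> under <library_geometries>."""
    root = ET.fromstring(dae_text)
    return [g.get("id")
            for g in root.findall(".//c:library_geometries/c:geometry", NS)]

if __name__ == "__main__":
    sample = """<COLLADA xmlns="http://www.collada.org/2005/11/COLLADASchema">
      <library_geometries>
        <geometry id="roof"/><geometry id="wall"/>
      </library_geometries>
    </COLLADA>"""
    print(geometry_ids(sample))  # -> ['roof', 'wall']
```

Once the geometries are isolated, each one can be written back out as its own file, the “fragments” the statement describes.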

The second software, which we will release soon, remixes these fragments using Processing. The video below shows an example of the whole process.

Wall painting at “Temporary Home” in Kassel, Germany

While artists-in-residence at Temporary Home in Kassel, Germany, during Documenta13, Joelle Dietrick and I completed a wall-sized temporary painting based on the architecture of the Bauhaus school at Dessau and 2012 American color forecasts.

Commission at Weimer Hall at the University of Florida

Joelle and I have also received a commission to complete Packet Switching (Weimer Hall) at the University of Florida College of Journalism and Communications this fall. This will be inkjet on adhesive polyester on a large wall (approx. 177.5 ft. x 20.2 ft.). More details soon.