Keyword Intervention update

I launched Keyword Intervention in January 2007, and for almost four years now it has been scraping topical search terms and attracting random traffic. Today I moved the project to its own domain, keywordintervention.com, and also updated the documentation on the site. Below is a sample of the last 500 search terms entered by users all around the world. The full list is here.

Plutonian Striptease VIII: Owen Mundy

Originally published in Plutonian Striptease, a series of interviews with experts, owners, users, fans, and haters of social media, intended to map the different views on this topic outside the existing discussions surrounding privacy.

PS: Social networks are often in the news, why do you think this is?

OM: Assuming “social networks” refers to the online software, application programming interfaces (APIs), and the data that constitute sites like MySpace, Facebook, and Twitter, I feel it’s popular to discuss them in the news for many reasons.

Online applications that enable enhanced connectivity for individuals and other entities are relatively new, and there is an apparent potential for wealth through their creation and the connections they enable. News organizations are businesses, so they naturally follow the money, “reporting” on topics considered worthwhile to the advertisers who buy space in their pages, pop-ups, and commercial breaks.

Additionally, the public is still grappling with the ability of online software to collect and distribute data about them, both with their permission and through clandestine means. Most users of social networking software don’t understand the methods or potential for behavior manipulation in these user interfaces and are therefore wary of what they share. Other users seem to be more carefree, making many private details from their lives public.

Finally, online social networking software is still evolving, so it’s difficult for users to establish a consensus about best practices. I believe the accelerating functionality of web 2.0 software will continue to complicate how we feel about online social networks for a long time to come.

PS: In what way do they differ from older forms of communication on the Internet?

OM: If web 1.0 consisted of static pages, web 2.0 is made up of dynamic information generated by the millions of users accessing the web through personal computers and mobile devices. This rapid rise in user-generated content has been made possible by the development of online applications built with a myriad of open-source programming languages. Sites like YouTube.com (launched 2005 and written primarily in Python) and Facebook.com (2004, PHP), which consist almost entirely of content contributed by users, store information in databases that allow fast searching, sorting, and re-representation. Initially, the web consisted of information we had to sift through manually. Web 2.0 allows for the growth of a semantic web and the possibility for machines to help us describe, understand, and share exponentially growing amounts of data through tags, feeds, and social networks.

PS: Who is ultimately responsible for what happens to the data you upload to social networks?

OM: Obviously users are responsible for deciding what information they publish online. Still, Facebook’s “Recommended Privacy Settings” should emphasize more privacy, not less. While their privacy settings always seem to be a work in progress, one thing they do consistently is default to less privacy overall, and thus more sharing of your information on their site. For a website that depends on user-generated content the motivation to encourage sharing is clear enough. Still, why do they use the word “privacy” if they’re not actually embracing the idea?

I honestly feel that all services that accept user data, from software companies to credit card and phone companies, should be bound by strict written rules preventing them from sharing my information with advertising companies or the government. It seems like a basic human right to me. If there are laws preventing me from downloading and sharing copyrighted music, then there should be laws protecting my intellectual property as well.

PS: Do you read Terms of Use or EULAs and keep up to date about changes applied to them?

OM: Only when curious or suspicious. They’re usually intentionally full of so much legalese that I don’t bother torturing myself. But as an artist and programmer, I have an interest in sharing my information in public space because I benefit from its appreciation. Perhaps a more accurate answer to this question would come from someone who doesn’t have this interest.

PS: Do you think you’ve got a realistic idea about the quantity of information that is out there about you?

OM: Yes, I do. I am definitely conscious of the information I share. I also research methods of surveillance and incorporate that knowledge into my art practice. So while I haven’t seen the visualization that determines the likelihood that my grandmother is a terrorist threat, it’s guaranteed that one is possible with a few clicks and some multi-million-dollar defense contractor’s dataveillance tool. This is true for any human being through the aggregation of credit card records, travel information, political contributions, and what we publish online.

PS: How do you value your private information now? Do you think anything can happen that will make you value it differently in the future?

OM: It’s important to me to situate my art practice in public space where it can provoke discussion for all audiences. But yes, I do intentionally avoid distributing dorky pictures of my mountain bike adventures. Seriously though, I’ve been watching the news, and I can say that I’m definitely alarmed by the post-9/11 surveillance of U.S. citizens.

PS: How do you feel about trading your personal information for online services?

OM: It depends on the service. We all have to give up something in order to use these tools. For example, without telling Google Maps that I’m interested in Mexican restaurants in Williamsburg, I might never find Taco Chulo. This continual paradox of making private information public would be somewhat moot if the sites we use actually protected our information, but it is more likely that everything we say and do online is used to some degree to enhance and target advertisements. Here’s another example: 97% of Google’s revenue comes from advertising, which suggests that while they produce software, their ultimate goal is to appeal to advertisers.[1]

PS: What do you think the information gathered is used for?

OM: I have a background in interface design and development, so I know how great it is to use web stats to see where users are clicking. If traffic is not moving in the direction you want, you can make specific buttons more prominent.

I can only imagine what a company like Google does with the data they gather through their analytics tools. The fact that a government could access this information is scary when you think of the actions of past fascist states. The amount of control a government could exert through a combination of deep packet inspection and outright ignoring human rights is staggering.

PS: Have you ever been in a situation where sharing information online made you uncomfortable? If so, can you describe the situation?

OM: Definitely. Sharing financial information online always causes a little anxiety. One of my credit cards has been re-issued three times now due to “merchant databases being hacked.”

PS: What is the worst case scenario, and what impact would that have on an individual?

OM: I just moved to Berlin, so I’m looking at the history of this place quite a bit. This is relevant because, during the Cold War, before Germany was reunited, the German Democratic Republic (GDR) Ministry for State Security (MfS), or ‘Stasi,’ is believed to have employed, counting spies and full- and part-time informants, one in every 6.5 East German citizens to report suspicious activities.[2] That’s millions of people. At this moment, one in every fourteen people in the entire world is entering data into Facebook.[3] We have probably the most effective surveillance machine in the history of mankind.

PS: Nowadays, most of the “reading” of what is written online is done by machines. Does this impact your idea of what anonymity and privacy are?

OM: Well, it’s not surprising the interview has come to this point, since I keep referencing the multitude of methods of computer-controlled digital surveillance. It’s true that machines have replaced humans for rote work, for example, searching text strings for suspicious statements. But the ultimate danger to my privacy is only amplified by machines. The real problem is when companies that I trust with my data decide to share it with corporations or governments that engage in behavior control.

PS: Can a game raise issues such as online privacy? And if so, what would you like to see in such a game?

OM: I find this question to be intentionally leading. Perhaps it’s because I’m generally optimistic and come from farmers, so I assume anything is possible? Not being a gamer, though, I can tell you honestly that yes, it is possible, but you will have some challenges if you intend to reach an audience that doesn’t already agree with you, and reaching non-gamers who don’t already feel the same will be even tougher.

Games are generally immersive; you are either playing or you’re not. The biggest challenge you may have is reaching non-gamers, because they don’t generally invest large amounts of time in games for enjoyment. Try to find ways to highlight complexity and prompt discussion regardless of how long users play, and make this clear from the outset.

Finally, in politically-motivated cultural production it’s important to appeal to an audience first, and let them come to the issues on their own. Who would sit through a film knowing the twist at the end, especially a conclusion intended to spur critical thinking and action, which is of course the goal?

[1] “Google Financial Tables for Quarter Ending June 30, 2009.” Retrieved October 13, 2010.
[2] Koehler, John O. (2000). Stasi: The Untold Story of the East German Secret Police. Westview Press. ISBN 0813337445.
[3] “Facebook Statistics.” Retrieved October 14, 2010.

Facebook’s recommended privacy settings should emphasize more, not less

Facebook’s “Privacy Settings” always seem to be a work in progress. One thing they do consistently is default to less privacy overall, and thus more sharing of your information on their site. For a website that depends on user-generated content the motivation to encourage sharing is clear enough. Still, why do they use the word “privacy” if they’re not actually embracing the idea?

For example, a recent update introduces a table with degrees of privacy running from less to more (left to right). Types of data are listed in the rows, while access levels are shown in the columns, from Everyone to Friends Only, again left to right.

[Screenshot: Facebook’s privacy settings table]

Curious about what Facebook’s “Recommended” settings were, I clicked, and am sharing the screenshot below. I am not surprised to see that they wish me to open up all the content I generate (status messages, posts, images, etc.) while discouraging me from allowing anyone I don’t know to comment on posts (probably as spam prevention).

[Screenshot: Facebook’s “Recommended” privacy settings]

I have been thinking about privacy quite a bit this week, developing ideas for what to do next with Give Me My Data, and giving an interview about social media for Naked on Pluto (along with the likes of Marc Garrett and Geert Lovink). Plus I went to see the “geek hero story” The Social Network at the Babylon Cinema last night.

Anyway, after all this thinking about Facebook’s past, I’m curious about its future, and how it will continue to try to hold on to the #1 social networking website position that Friendster and MySpace lost so quickly. The API, games, etc. could be expected, but the Facebook Connect tools that are so prevalent now, even on Yelp, a site I figured could make it without schlepping, were a surprise.

Facebook Connect, a JavaScript “widget” that allows you to log in to other websites using your Facebook ID, is clever and eerie at once. It allows Facebook to track you even when you are not on their site, and to make sure you stay loyal. If that sounds sinister, well, it is. What other reason could there be for offering a service whose single purpose is to mediate every interaction or bit of content you add to the web? It seems at first like OpenID, and it is, except that it’s run by a multi-billion-dollar social media corporation.

How to install Scrapy with MacPorts (full version)

Here is a step-by-step guide explaining how I got Scrapy running on my MacBook Pro under Mac OS X 10.5, using MacPorts to install Python and all required libraries (libxml2, libxslt, etc.). The following has been tested on two separate machines with Scrapy 0.10.

Many thanks to the users here who shared some helpful amendments to the default installation guide. My original intention was to post this at Stack Overflow, but their instructions discourage posting issues that have already been answered, so here it is…

1. Install Xcode with options for command line development (a.k.a. “Unix Development”). This requires a free registration.

2. Install MacPorts

3. Confirm and update MacPorts

$ sudo port -v selfupdate

4. “Add the following to /opt/local/etc/macports/variants.conf to prevent downloading the entire unix library with the next commands”

+bash_completion +quartz +ssl +no_x11 +no_neon +no_tkinter +universal +libyaml -scientific

5. Install Python

$ sudo port install python26

If for any reason you forgot to add the above variants, then cancel the install and do a “clean” to delete all the intermediary files MacPorts created. Then edit the variants.conf file (above) and install Python again.

$ sudo port clean python26

6. Change the reference to the new Python installation

If you type the following, you will see a reference to the default installation of Python on Mac OS X 10.5 (Python 2.5).

$ which python

You should see this

/usr/bin/python

To change this reference to the MacPorts installation, first install python_select

$ sudo port install python_select

Then use python_select to change the $ python reference to the Python version installed above.

$ sudo python_select python26

UPDATE 2011-12-07: python_select has been replaced by port select, so…

To see the possible pythons run

$ port select --list python

From that list choose the one you want and change to it e.g.

$ sudo port select --set python python26

Now if you type

$ which python

You should see

/opt/local/bin/python

which is a symlink to

/opt/local/bin/python2.6

Typing the below will now launch the Python 2.6 interactive shell (ctrl + d to exit)

$ python
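
To double-check which interpreter the plain python command now resolves to, you can also ask Python itself (a quick sanity check):

$ python -c 'import sys; print sys.executable, sys.version'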

7. Install required libraries for Scrapy

$ sudo port install py26-libxml2 py26-twisted py26-openssl py26-simplejson

Other posts recommended installing py26-setuptools, but it kept returning errors, so I skipped it.

8. Test that the correct architectures are present:

$ file `which python`

Note that the quotes around which python are backticks, not single quotes. This should spit out (for Intel Macs running 10.5):

/opt/local/bin/python: Mach-O universal binary with 2 architectures
/opt/local/bin/python (for architecture i386): Mach-O executable i386
/opt/local/bin/python (for architecture ppc7400): Mach-O executable ppc

9. Confirm the libxml2 library is installed (those really are single quotes this time). If there are no errors, it imported successfully.

$ python -c 'import libxml2'
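
The same one-liner can confirm the rest of the libraries from step 7 (a quick check; these are the module names I believe the ports above provide):

$ python -c 'import libxml2, twisted, OpenSSL, simplejson'

If nothing prints, all four imported successfully.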

10. Install Scrapy

$ sudo /opt/local/bin/easy_install-2.6 scrapy

11. Make the scrapy command available in the shell

$ sudo ln -s /opt/local/Library/Frameworks/Python.framework/Versions/2.6/bin/scrapy /usr/local/bin/scrapy

One caveat for the above: on a fresh computer you might not have a /usr/local/bin directory, so you will need to create it before you can run the command above to create the symlink.

$ sudo mkdir /usr/local/bin

12. Finally, type either of the following to confirm that Scrapy is indeed installed on your system.

$ python -c 'import scrapy'

$ scrapy
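
If you’d rather see a version number, this should print one (assuming the Scrapy package exposes __version__, which releases of this era do):

$ python -c 'import scrapy; print scrapy.__version__'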

A final bit… I also installed IPython from MacPorts for use with Scrapy

$ sudo port install py26-ipython

Make a symbolic link

$ sudo ln -s /opt/local/bin/ipython-2.6 /usr/local/bin/ipython

An article on IPython
http://onlamp.com/pub/a/python/2005/01/27/ipython.html

IPython tutorial
http://ipython.scipy.org/doc/manual/html/interactive/tutorial.html

Set up MacPorts Python and Scrapy successfully

“Scrapy is a fast high-level screen scraping and web crawling framework, used to crawl websites and extract structured data from their pages. It can be used for a wide range of purposes, from data mining to monitoring and automated testing.”

But, it can be a little tricky to get running…

Attempting to install Scrapy on my MBP with the help of this post, I kept running into errors with the libxml2 and libxslt libraries while following the Scrapy documentation.

I wanted to let MacPorts manage all the libraries, but I had trouble with it referencing the wrong installation of Python. I began with three installs:

  1. The default Apple Python 2.5.1, located at: /usr/bin/python
  2. A previous version I had installed, located at: /Library/Frameworks/Python.framework/Versions/2.7
  3. And a MacPorts version, located at: /opt/local/bin/python2.6

My trouble was that:

$ python

would always default to 2.7 when I needed it to use the MacPorts version. The following did not help:

$ sudo python_select python26

I even removed the 2.7 version, which only caused an error.

I figured out I needed to change the default path to the Macports version using the following:

$ PATH=$PATH\:/opt/local/bin ; export PATH
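
Note that this only lasts for the current shell session. To make the change permanent, add an export line to ~/.profile (a sketch; putting /opt/local/bin at the front also ensures it takes precedence over the system Python):

$ echo 'export PATH=/opt/local/bin:$PATH' >> ~/.profile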

And then reinitiate the ports, etc.

Finally, I was not able to invoke the scrapy-ctl.py file by name as these instructions intend, so I had to reference the file directly:

/opt/local/Library/Frameworks/Python.framework/Versions/2.6/bin/scrapy-ctl.py

UPDATE

A quick addendum to this post with instructions to create the link, found on the Scrapy site (#2 and #3).

Starting with #2, “Add Scrapy to your Python Path”

sudo ln -s /opt/local/Library/Frameworks/Python.framework/Versions/2.6/bin/scrapy-ctl.py /opt/local/Library/Frameworks/Python.framework/Versions/2.6/lib/python2.6/site-packages/scrapy

And #3, “Make the scrapy command available”

sudo ln -s /opt/local/Library/Frameworks/Python.framework/Versions/2.6/bin/scrapy-ctl.py /usr/local/bin/scrapy

How to easily set up a campaign finance database (well, kind of) or Make Python work with MAMP via MySQLdb

I’ve been trying for a few hours to run a Python script from The Sunlight Foundation Labs which downloads (and updates) a campaign finance database from the Center for Responsive Politics. See their original post for more information.

In the process of getting this working I accidentally broke a working copy of MySQL and overwrote a database installed on my MBP (which I had stupidly not backed up since last year). FYI, you can rebuild a MySQL database (MyISAM tables, at least) from the original .frm, .MYD, and .MYI files if you 1) recreate the database in the new install of MySQL, and 2) drag the files into the mysql data folder.

I struggled quite a bit getting Python to work with MySQL via MySQLdb. I’m documenting some of the headaches and resolutions here in case they are useful. I’ve tried to include error messages for searches as well.

The Sunlight Foundation instructions require Python and MySQL, but don’t mention that you have to have already wrestled with the madness involved in installing Django on your machine. Here is what I did to get it working on my MacBook Pro Intel Core 2 Duo. I’ve included their original instructions along with my own (and those of a host of others).

Instructions

  1. Install MAMP.

    While I had working installations of MySQL and Python (via installers on respective sites), I couldn’t get Python to connect to MySQL via MySQLdb. I decided to download and try MAMP for a clean start.

  2. Install XCode

    Past installers are available on the Apple Developer website.

  3. Install setuptools

    Required for the MySQLdb driver. Remove the .sh extension from the filename (setuptools-0.6c11-py2.7.egg.sh) and in a shell:

    ~$ chmod +x setuptools-0.6c11-py2.7.egg
    ~$ ./setuptools-0.6c11-py2.7.egg

  4. Install the MySQLdb driver

    After downloading and unzipping, from the directory:

    ~$ python setup.py build
    ~$ sudo python setup.py install

    Continue following the advice of this post, How to install Django with MySQL on Mac OS X, to the end.

    I also followed another piece of advice in Python MySQL on a Mac with MAMP to change the mysql_config path (in the driver’s site.cfg) from:

    /usr/local/mysql/bin/mysql_config

    to

    /Applications/MAMP/Library/bin/mysql_config

    Especially useful is his test script for making sure that Python is indeed accessing MySQL.

  5. Create a symbolic link between Python and MySQL in MAMP

    This is required in order to use a socket to connect to MySQL. See How to install MySQLdb on Leopard with MAMP for more information.

    ~$ sudo ln -s /Applications/MAMP/tmp/mysql/mysql.sock /tmp/mysql.sock
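
    With the link in place, you can verify the whole chain end-to-end. Below is a minimal test script of my own (a sketch: root/root are MAMP’s default credentials, and test stands in for any database that already exists in your MySQL):

    import MySQLdb

    # connect through the MAMP socket symlinked above
    conn = MySQLdb.connect(
        unix_socket='/tmp/mysql.sock',
        user='root',    # MAMP default
        passwd='root',  # MAMP default
        db='test')      # any existing database

    cursor = conn.cursor()
    cursor.execute('SELECT VERSION()')
    print cursor.fetchone()  # e.g. ('5.1.44',)
    conn.close()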

  6. Create a directory and put the two Python files in it.
  7. Modify the top of the sun_crp.py file to set certain parameters: your login credentials for the CRP download site and your MySQL database information.
  8. Install pyExcelerator

    Error: ImportError: No module named pyExcelerator

    I had to install this module next.
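
    I installed it with easy_install (a sketch, assuming the setuptools from step 3 put easy_install on your path; pyExcelerator is on PyPI):

    ~$ sudo easy_install pyExcelerator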

  9. Comment out multiple lines

    Error: NameError: name 'BaseCommand' is not defined

    In download.py comment out the following:

    The line: from django.core.management.base import BaseCommand, CommandError

    Everything from class CRPDownloadCommand(BaseCommand): to the end of the document.

  10. From the command line, in the proper directory, run the script by typing: python sun_crp.py.
  11. It will take several hours to download and extract the data, especially the first time it’s run. But after that, you’re good to go.

Recent and ongoing projects

Howdy, it’s been a while since I last shared news about recent and ongoing projects. Here goes.

1. You Never Close Your Eyes Anymore

You Never Close Your Eyes Anymore is an installation that projects moving US Geological Survey (USGS) satellite images using handmade kinetic projection devices.

Each device hangs from the ceiling and uses electronic components to rotate strips of satellite images on transparency in front of an LED light source. They are constructed with found materials like camera lenses and consumer by-products and mimic remote sensing devices, bomb sights, and cameras in Unmanned Aerial Vehicles.

The installation includes altered images from various forms of lens-based analysis at micro and macro scales: land masses, ice sheets, and images of retinas, printed on reflective silver film.

On display now through July 31 at AC Institute, 547 W. 27th St., 5th Floor
Hours: Wed., Fri. & Sat.: 1-6pm; Thurs.: 1-8pm

New video by Asa Gauen and images
http://owenmundy.com/site/close_your_eyes

2. Images and video documentation of You Never Close Your Eyes Anymore will also be included in an upcoming Routledge publication and website:

Reframing Photography: Theory and Practice
by Rebekah Modrak, Bill Anthes
ISBN: 978-0-415-77920-3
Publish Date: November 16th 2010
http://www.routledge.com/books/details/9780415779203/

3. Give Me My Data launch

Give Me My Data is a Facebook application designed to give users the ability to export their data out of Facebook for any purpose they see fit. This could include making artwork, archiving and deleting your account, or circumventing the interface Facebook provides. Data can be exported in CSV, XML, and other common formats. Give Me My Data is currently in public-beta.

Website
http://givememydata.com/

Facebook application
http://apps.facebook.com/give_me_my_data/

4. Give Me My Data was also covered recently by the New York Times, BBC, TechCrunch, and others:

Facebook App Brings Back Data by Riva Richmond, New York Times, May 1, 2010
http://gadgetwise.blogs.nytimes.com/2010/05/01/facebook-app-brings-back-data/

5. yourarthere.net launch

A major server and website upgrade to the yourarthere.net web-hosting co-op for artists and creatives. The new site allows members of the community to create profiles and post images, tags, biography, and events. In addition to the community aspect, yourarthere.net is still the best deal going for hosting your artist website.

Website
http://yourarthere.net

More images
http://owenmundy.com/site/design_yourarthere_net

6. The Americans

The Americans is currently on view at Northwest Florida State College in Niceville, FL. It features a new work with the same title.

More images
http://owenmundy.com/site/the-americans

7. Your Art Here billboard hanger

I recently designed a new billboard hanging device and installed it in downtown Bloomington, IN, with the help of my brother Reed and my wife Joelle Dietrick.

Stay tuned here for news about Your Art Here and the new billboard by Joelle Dietrick.
http://www.facebook.com/pages/Your-Art-Here/112561318756736

8. Finally, I’m moving to Berlin for a year on a DAAD fellowship to work on some ongoing projects, including Automata.

More images
https://owenmundy.com/blog/2010/07/new-automata-sitemaps/

I’ll be giving a paper about Automata at the upcoming ISEA2010 conference in Ruhr, Germany.
http://www.isea2010ruhr.org/conference/tuesday-24-august-2010-dortmund

Many thanks to Chris Csikszentmihályi, Director of the Center for Future Civic Media (http://civic.mit.edu/), for inviting me to the MIT Media Lab last August to discuss the project with his Computing Culture group: http://compcult.wordpress.com/
