Posts Tagged ‘networks’

Freedom for Our Files: Code and Slides

Monday, May 16th, 2011

A two-day workshop with both hands-on technical and idea-driven components. Learn to scrape data and reuse public and private information by writing custom code and using the Facebook API. We’ll also discuss ways to literally reclaim our data and imagine what becomes possible once it is ours!

Here are the slides and some of the code samples from the Freedom for Our Files (FFOF) workshop I just did in Linz at Art Meets Radical Openness (LiWoLi 2011).

The first one is a basic scraping demo that uses “find-replace” parsing to change specific words (I’m including examples below the code).

<?php

/*	Basic scraping demo with "find-replace" parsing
 *	Owen Mundy Copyright 2011 GNU/GPL */

$url = "http://www.bbc.co.uk/news/";	// 0. url to start with

$contents = file_get_contents($url);	// 1. get contents of page in a string

					// 2. search and replace contents
$contents = str_replace(		// str_replace(search, replace, string)
			"News",					
			"<b style='background:yellow; color:#000; padding:2px'>LIES</b>",
			$contents);

print $contents;			// 3. print result

?>

Basic scraping demo with “foreach” parsing

<?php

/*	Basic scraping demo with "foreach" parsing
 *	Owen Mundy Copyright 2011 GNU/GPL */
 
$url = "http://www.bbc.co.uk/news/";	// 0. url to start with

$lines = file($url);			// 1. get contents of url in an array

$get_content = false;			// initialize flag and buffer
$data = "";				//    to avoid undefined-variable notices

foreach ($lines as $line_num => $line) 	// 2. loop through each line in page
{		
					// 3. if opening string is found
	if (strpos($line, '<h2 class="top-story-header ">') !== false) 	
	{
		$get_content = true;	// 4. we can start getting content
	}
	
	if($get_content == true)
	{
		$data .= $line . "\n";	// 5. then store content until closing string appears
	}

	if (strpos($line, "</h2>") !== false) 	// 6. if closing HTML element found
	{
		$get_content = false;	// 7. stop getting content
	}
}

print $data;				// 8. print result

?>

Basic scraping demo with “regex” parsing

<?php

/*	Basic scraping demo with "regex" parsing
 *	Owen Mundy Copyright 2011 GNU/GPL */
 
$url = "http://www.bbc.co.uk/news/";		// 0. url to start with

$contents = file_get_contents($url);		// 1. get contents of url in a string
											
						// 2. match title
preg_match('/<title>(.*)<\/title>/i', $contents, $title);

print $title[1];				// 3. print result

?>

Basic scraping demo with “foreach” and “regex” parsing

<?php

/*	Basic scraping demo with "foreach" and "regex" parsing
 *	Owen Mundy Copyright 2011 GNU/GPL */

// url to start
$url = "http://www.bbc.co.uk/news/";

// get contents of url in an array
$lines = file($url);

// initialize flag and buffer to avoid undefined-variable notices
$get_content = false;
$data = "";

// look for the string
foreach ($lines as $line_num => $line) 
{	
	// find opening string
	if (strpos($line, '<h2 class="top-story-header ">') !== false) 
	{
		$get_content = true;
	}
	
	// if opening string is found 
	// then print content until closing string appears
	if($get_content == true) 
	{
		$data .= $line . "\n";
	}

	// closing string
	if (strpos($line, "</h2>") !== false) 
	{
		$get_content = false;
	}
}

// use regular expressions to extract only what we need...

// png, jpg, or gif inside a src="..." or src='...' 
$pattern = "/src=[\"']?([^\"']?.*(png|jpg|gif))[\"']?/i";
preg_match_all($pattern, $data, $images);

// text from link
$pattern = "/(<a.*>)(\w.*)(<.*>)/ismU";
preg_match_all($pattern, $data, $text);

// link
$pattern = "/(href=[\"'])(.*?)([\"'])/i";
preg_match_all($pattern, $data, $link);

/* 
// test if you like
print "<pre>";
print_r($images);
print_r($text);
print_r($link);
print "</pre>";
*/

?>

<html>
<head>
<style> 
body { margin:0; } 
.textblock { position:absolute; top:600px; left:0px; }
span { font:5.0em/1.0em Arial, Helvetica, sans-serif; line-height:normal; 
background:url(trans.png); color:#fff; font-weight:bold; padding:5px } 
a { text-decoration:none; color:#900 }
</style>
</head>
<body>
<img src="<?php print $images[1][0] ?>" height="100%">
<div class="textblock"><span><a href="<?php print "http://www.bbc.co.uk".$link[2][0] ?>"><?php print $text[2][0] ?></a></span><br>
</div>
</body>
</html>

And the example, which presents the same information in a new way…

Advanced scraping demo with “regex” parsing. Retrieves current weather in any city and colors the background accordingly. The math below for normalization could use some work.

<?php

/*	Advanced scraping demo with "regex" parsing. Retrieves current 
 * 	weather in any city and colors the background accordingly. 
 *	The math below for normalization could use some work.
 *	Owen Mundy Copyright 2011 GNU/GPL */

?>

<html>
<head>
<style> 
body { margin:20px; font:1.0em/1.4em Arial, Helvetica, sans-serif; } 
.text { font:10.0em/1.0em Arial, Helvetica, sans-serif; color:#000; font-weight:bold; } 
.navlist { list-style:none; margin:0; position:absolute; top:20px; left:200px }
.navlist li { float:left; margin-right:10px; }
</style>
</head>

<body onLoad="document.f.q.focus();">

<form method="GET" action="<?php print $_SERVER['PHP_SELF']; ?>" name="f">

	<input type="text" name="q" value="<?php print isset($_GET['q']) ? htmlspecialchars($_GET['q']) : '' ?>" />
	<input type="submit" />

</form>

<ul class="navlist">
	<li><a href="?q=anchorage+alaska">anchorage</a></li>
	<li><a href="?q=toronto+canada">toronto</a></li>
	<li><a href="?q=new+york+ny">nyc</a></li>
	<li><a href="?q=london+uk">london</a></li>
	<li><a href="?q=houston+texas">houston</a></li>
	<li><a href="?q=linz+austria">linz</a></li>
	<li><a href="?q=rome+italy">rome</a></li>
	<li><a href="?q=cairo+egypt">cairo</a></li>
	<li><a href="?q=new+delhi+india">new delhi</a></li>
	<li><a href="?q=mars">mars</a></li>
</ul>

<?php

// make sure the form has been sent
if (isset($_GET['q']))
{
	// get contents of url in an array
	if ($str = file_get_contents('http://www.google.com/search?q=weather+in+'
						. str_replace(" ","+",$_GET['q'])))
	{
		
		// use regular expressions to extract only what we need...
		
		// 1, 2, or 3 digits followed by any version of the degree symbol 
		$pattern = "/[0-9]{1,3}[º°]C/";
		// match the pattern with a C or with an F
		if (preg_match_all($pattern, $str, $data) > 0)
		{
			$scale = "C";
		}
		else
		{
			$pattern = "/[0-9]{1,3}[º°]F/";
			if (preg_match_all($pattern, $str, $data) > 0)
			{
				$scale = "F";
			}
		}
		
		// remove html
		$temp_str = strip_tags($data[0][0]);
		// remove everything except digits and decimal points
		// (preg_replace replaces the deprecated ereg_replace)
		$temp = preg_replace("/[^0-9.]/", "", $temp_str);
		
		if ($temp)
		{
			
			// what is the scale?
			if ($scale == "C"){
				// convert ºC to ºF
				$tempc = $temp;
				$tempf = ($temp*1.8)+32;
			}
			else if ($scale == "F")
			{
				// convert ºF to ºC
				$tempc = ($temp-32)/1.8;
				$tempf = $temp;
			}
			// normalize the number
			$color = round($tempf/140,1)*10;
			// cool -> warm
			// scale -20 to: 120
			$color_scale = array(
					'0,  0,255',
					'0,128,255',
					'0,255,255',
					'0,255,128',
					'0,255,0',
					'128,255,0',
					'255,255,0',
					'255,128,0',
					'255,  0,0'
					);	
		
?>

<style> body { background:rgb(<?php print $color_scale[$color] ?>) }</style>
<div class="text"><?php print round($tempc,1) ."&deg;C " ?></div>
<?php print round($tempf,1) ?>&deg;F

<?php
		
		}
		else 
		{
			print "city not found";	
		}
	}
}
?>

</body>
</html>
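As noted above, the normalization math could use some work: for temperatures at or above about 115&ordm;F the index `round($tempf/140,1)*10` runs past the end of the nine-colour array, and sub-zero readings produce a negative index. A minimal sketch of a clamped linear mapping over the stated -20 to 120 scale (a possible drop-in for the `$color` line above, not the original code):

<?php

// Map a Fahrenheit temperature onto the nine-colour scale.
// -20 ºF maps to index 0, 120 ºF to index 8; anything outside
// that range is clamped so the lookup always stays in bounds.
function temp_to_color_index($tempf)
{
	$index = (int) round(($tempf + 20) / 140 * 8);	// -20..120 -> 0..8
	return max(0, min(8, $index));			// clamp out-of-range temps
}

// e.g. temp_to_color_index(-30) == 0, temp_to_color_index(70) == 5,
//      temp_to_color_index(130) == 8

?>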




For an XPath tutorial, check this page.
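For reference, PHP’s built-in DOM extension ships an XPath engine that can stand in for the string and regex matching in the demos above. A minimal sketch using the standard DOMDocument/DOMXPath classes (the h2 class string is the one scraped in the demos above):

<?php

// Scraping with XPath instead of string/regex matching,
// using PHP's built-in DOM extension.

$url = "http://www.bbc.co.uk/news/";

$doc = new DOMDocument();
@$doc->loadHTMLFile($url);	// @ silences warnings from messy real-world HTML

$xpath = new DOMXPath($doc);

// select every <h2> with the class used in the demos above
$headers = $xpath->query('//h2[@class="top-story-header "]');

foreach ($headers as $h2)
{
	print trim($h2->textContent) . "\n";
}

?>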

For the next part of the workshop we used Give Me My Data to export our information from Facebook in order to revisualize it with Nodebox 1.0, a Python IDE similar to Processing.org. Here’s an example:

Update: Some user images from the workshop. Thanks all who joined!

Mutual friends (using Give Me My Data and Graphviz) by Rob Canning

identi.ca network output (starting from my username (claude) with depth 5, rendered to svg with ‘sfdp’ from graphviz) by Claude Heiland-Allen

Network graph grouping: A small art world

Thursday, December 16th, 2010

This “Mutual friends network graph” created with Nodebox using data I exported with Give Me My Data contains 540 “Facebook friends” and their connections to each other. When the graph renders it attempts to position people who have lots of connections closer together. With this you can see groups unfold based on your own social networks. Since I have spent more time in academia than I have at specific jobs my “clusters” are based mostly on my academic history.

You can also see many connections between my high school and the university where I did my undergraduate study; the two are located close to each other, so friends from high school often chose the same university or town to live in. There are also many interconnections between Indiana University, where I did my undergrad, the University of California, San Diego, where I did graduate study, and Florida State University, where I teach now. This is probably because my connections are mostly within a single field, in my case visual arts, and it points to the often-expressed notion that “the art world is actually very small.”

Reading list for August 2010

Monday, August 16th, 2010

About to embark on some new projects here in Berlin. Here’s my reading list at the moment…

free

Free: The Future of a Radical Price
by Chris Anderson
July 7th 2009 by Hyperion

Traditional economics operates under fundamental assumptions of scarcity–there’s only so much oil, iron, and gold in the world. But the online economy is built upon three cornerstones: processing power, hard drive storage, and bandwidth–and the costs of all these elements are trending toward zero at an incredible rate.

The Exploit

The Exploit: A Theory of Networks
by Alexander R. Galloway, Eugene Thacker
October 1st 2007 by Univ Of Minnesota Press

“The Exploit is that rare thing: a book with a clear grasp of how networks operate that also understands the political implications of this emerging form of power. It cuts through the nonsense about how ‘free’ and ‘democratic’ networks supposedly are, and it offers a rich analysis of how network protocols create a new kind of control. Essential reading for all theorists, artists, activists, techheads, and hackers of the Net.” —McKenzie Wark, author of A Hacker Manifesto

gw_cover_front

Group Work
by Temporary Services
New York, NY: Printed Matter. 2007

Based on a pamphlet published by Temporary Services in 2002 titled Group Work: A Compilation of Quotes About Collaboration from a Variety of Sources and Practices, this publication provides a multitude of perspectives on the theme of group work by practitioners of artistic group practice from the 1960s to the present.

New Automata sitemaps

Sunday, July 4th, 2010

A deconstruction of defense contractor website data structures.

ga-asi.com_sitemap_20091208_red_800w

ga-asi.com_sitemap_20091208_red_detail

lockheedmartin.com_sitemap_20091214_red_800w

lockheedmartin.com_sitemap_20091214_red_detail

Give Me My Data public alpha launch

Sunday, January 31st, 2010
GMMD_Icon_for_facebook_150w

Today I launched the public alpha version of my Facebook application Give Me My Data for testing and feedback. This app helps you reclaim and reuse your Facebook data.

Feel free to test it and let me know what you think!

Facebook Death Star

Friday, November 13th, 2009

Here are two images I created with NodeBox (Python) and PHP (data via facebook API).

facebook_network_graph_bigcircle
All friends, one common link.

facebook_network_graph_mutualfriends
All the mutual connections I could gather before the script timed out. I need to make the gathering more efficient.

Visualizations of sitemap and content of owenmundy.com via manyeyes

Sunday, August 9th, 2009

Experimenting with various methods of visualization available at manyeyes.

20090801_manyeyes_wordle

Wordle cloud

ISEA2009 Belfast, Ireland

Wednesday, July 22nd, 2009

20070607_salonoblique_floor

Saturday, August 29, 2009, I will be talking about Keyword Intervention in a paper titled “Intervention and the Internet: New Forms of Public Practice” at ISEA2009: the 15th International Symposium on Electronic Art in Belfast, Ireland.

Full program
http://www.isea2009.org/images/programme15july.pdf

Sitemap of owenmundy.com with PHP-based spider and Graphviz

Wednesday, July 8th, 2009

Screenshots of a sitemap of my website generated using Graphviz and a .dot file created with a PHP spider.
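The spider itself isn’t posted, but the core of the approach is to crawl pages with file_get_contents, pull out links, and emit each parent-to-child link as a Graphviz edge. A rough sketch (the regex, the depth limit, and the on-site check are assumptions, not the original code):

<?php

// Rough sketch of a PHP spider that writes a Graphviz .dot file.
// Follows absolute on-site links only, recording each
// parent -> child link as a directed edge.

$base = "http://owenmundy.com";
$visited = array();
$edges = array();

function spider($url, $depth, &$visited, &$edges, $base)
{
	if ($depth <= 0 || isset($visited[$url])) return;
	$visited[$url] = true;

	if (!($html = @file_get_contents($url))) return;

	// collect href values (same regex style as the demos above)
	preg_match_all("/href=[\"'](.*?)[\"']/i", $html, $m);

	foreach ($m[1] as $link)
	{
		if (strpos($link, $base) !== 0) continue;	// stay on-site
		$edges[] = "\t\"$url\" -> \"$link\";";
		spider($link, $depth - 1, $visited, $edges, $base);
	}
}

spider($base, 2, $visited, $edges, $base);

// write the .dot file; render with e.g.: dot -Tpng sitemap.dot -o sitemap.png
file_put_contents("sitemap.dot", "digraph sitemap {\n"
	. implode("\n", array_unique($edges)) . "\n}\n");

?>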

sitemap_20090708_2

sitemap_20090708_4

sitemap_20090708_3

sitemap_20090708_1