Archive for the ‘research’ Category

Beautiful Data II @ Metalab at Harvard University

Tuesday, August 4th, 2015

This month found me at the excellent Beautiful Data II workshop at the MetaLab at Harvard University sponsored by the Getty Foundation. Participants worked together in the Carpenter Center and Harvard Art Museum under the theme “Telling Stories About Art with Open Collections.”


There were presentations by well-known visualization and museum experts, breakout sessions exploring how to represent problem data and collections, and talks by participants and Metalab staff and fellows, with a wonderful group of artists, curators, designers, and scholars in attendance.

Here are a few of the many highlights starting with this nerdy shot of me…

Data Therapy workshop with Rahul Bhargava (slides1, slides2).

Learning about provenance at the Harvard Art Museum (note the stamp declaring Nazi property).

This spanking cat statuette from the Cooper Hewitt collection.
Colour Lens produced at Beautiful Data I.
Presentation by Seb Chan Director of Digital at Cooper Hewitt.
Memory Slam by Nick Montfort.
Meow Met Chrome extension shows cats from the Met Museum in new tabs.

Behind the scenes of Ivan Sigal‘s Karachi Circular Railway, Harvard Art Museum Lightbox.

The Life and Death of Data by Yanni Loukissas.
Ben Rubin discussing his own work and works by Mario Klingemann, Ryoji Ikeda, Jer Thorp, and others.
William James Twitter Bot by Rachel Boyce.


Cold Storage documentary by Jeffrey Schnapp, Cristoforo Magliozzi, Matthew Battles, et al.

Cooper Hewitt Font Specimen
Cooper Hewitt typeface by Chester Jenkins


“Unicode” by Jörg Piringer shows all 49,571 displayable characters in the Unicode range.

*Most photos by Metalab staff

Term vs. Term for Digital Public Library of America hackathon

Monday, April 14th, 2014

I made a small app to compare the number of search results for two phrases from the Digital Public Library of America for a hackathon / workshop here at Florida State next week.

http://owenmundy.com/work/term-vs-term
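The core of the idea can be sketched against the DPLA v2 search API, which returns a total `count` for any query. This is a minimal illustration, not the app's actual source, and it assumes you have a valid DPLA `api_key`:

```python
import json
import urllib.parse
import urllib.request

DPLA_SEARCH = "https://api.dp.la/v2/items"  # DPLA item-search endpoint

def result_count(term, api_key):
    """Fetch the total number of DPLA items matching `term`."""
    query = urllib.parse.urlencode({"q": term, "api_key": api_key})
    with urllib.request.urlopen(f"{DPLA_SEARCH}?{query}") as resp:
        return json.load(resp)["count"]

def winner(counts):
    """Given a {term: count} mapping, return the term with more results."""
    return max(counts, key=counts.get)
```

Comparing two phrases is then just `winner({"cat": result_count("cat", KEY), "dog": result_count("dog", KEY)})`.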


Digital Humanities Hackathon II – Digital Public Library of America

Monday, April 21, 2:00-3:30 p.m.
Strozier Library, Scholars Commons Instructional Classroom [MAP]

The Digital Scholars Reading and Discussion Group will simulate its second “hackathon” on April 21, allowing participants to learn more about the back-end structure of the Digital Public Library of America. With its April 2013 launch, the DPLA became the first all-digital library that aggregates metadata from collections across the country, making them available from a single point of access. The DPLA describes itself as a freely available, web-based platform for digitized cultural heritage projects as well as a portal that connects students, teachers, scholars, and the public to library resources occurring on other platforms.

From a critical point of view, the DPLA simultaneously relies on and disrupts the principles of location and containment, making its infrastructure somewhat interesting to observe.

In this session, we will visit the DPLA’s Application Programming Interface (API) codex to observe some of the standards that contributed to its construction. We will consider how APIs function, how and why to use them, and who might access their metadata and for what purposes. For those completely unfamiliar with APIs, this session will serve as a useful introduction, as well as a demonstration of why a digital library might also want to serve as an online portal. For those more familiar with APIs, this session will serve as an opportunity to try on different tasks using the metadata that the DPLA aggregates from collections across the country.

At this particular session, we are pleased to be joined by Owen Mundy from FSU Department of Art and Richard Urban from FSU College of Communication and Information, who have considered different aspects of working with APIs for projects such as the DPLA, including visualization and graphics scripting, and developing collections dashboards.

As before, the session is designed with a low barrier of entry in mind, so participants should not worry if they do not have programming expertise or are still learning the vocabulary associated with open-source projects. We come together to learn together, and all levels of skill are accommodated, as are all attitudes and leanings.

Participants are encouraged to explore the Digital Public Library of America site prior to our meeting and to familiarize themselves with the history of the project. Laptops will be available for checkout, but attendees are encouraged to bring their own.

“The Earth Observation Guide” – Post Media Lab notes

Wednesday, January 22nd, 2014

I’m back from Lüneburg, Germany and already missing the simplicity of traveling by train as well as the pleasure of fine wurst. However, I had an engaging few days at the Post Media Lab’s Taking Care of Things! The event began with a keynote by Kelly Dobson from RISD, followed by a tour of the Stadtarchiv the next morning. Then we broke off into groups to address various topics around archives, art, media, and politics.

I worked in the Measure Drones group with colleagues Kristian Lukic, Moritz Queisner, Boaz Levin, Daniel Herleth, Adam Kaplan, Frédéric Eyl, and Oliver Lerone Schultz (one of the coordinators of Taking Care of Things, along with Christina Kral). Over the course of two days, we worked together to conceive, research, write, illustrate, and design a booklet called “The Earth Observation Guide.” This is not a history of art about drones, nor does it try to tackle the whole subject. Rather, it is more akin to a time capsule that preserves a moment in time before drones are widespread. It acts as a guidebook, recording what is known about their past and present, illustrating shifts and concerns, and addressing how humans might understand their future. Here are some images of the work presented on the third day:


I also somehow managed to get my mug in the newspaper in Lüneburg. I think it says “American professor launching spy drones in Germany” or some such thing.


The group examining 16th century drawings of salt mines from the archive.


After Douglas Davis – The World’s First Collaborative Sentence

Thursday, September 5th, 2013


README for After Douglas Davis
==============================

Statement
————–

The World’s First Collaborative Sentence was created by Douglas Davis in 1994 and donated to the Whitney Museum of American Art in 1995. Much like today’s blog environments and methods for crowdsourcing knowledge, it allowed users to contribute practically any text or markup to a never-ending sentence with no limits on speech or length.

At some point the sentence stopped functioning, and in early 2012 the Whitney Museum undertook a “preservation effort” to repair and relaunch the project. Measures were taken during the “restoration” to stay true to the original intent of the artist, leaving dead links and the original code in place.

During the preservation the curators placed small sections of garbled ASCII text from the project on Github with the hope that others would “fork” the data and repair the original. However, the Whitney Museum did not seem to realize that the collaborative culture of the net Davis predicted has actually arrived. This is evident not only through sites like Wikipedia, Facebook, and Tumblr, but also the open source movement, which brings us Linux, Apache, and PHP, the very technologies used to view this page, as well as others like Firefox, Arduino, Processing, and many more.

In the spirit of open source software and of artists and projects like Duchamp, Levine, runme.org, and Mandiberg, on September 5, 2013, I “forked” Douglas Davis’ Collaborative Sentence by downloading all pages and constructing from scratch the functional code that drives the project. I have now placed this work on Github with the following changes:

1. All pages are updated to HTML5 and UTF-8 character encoding
2. The functional code was rewritten from scratch including a script to remove malicious code
3. The addition of this statement
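The bundled HTML Purifier (PHP) does the actual sanitizing in the project. Purely as an illustration of the idea behind change 2, here is a minimal whitelist-based sanitizer sketch using Python's standard library; the tag whitelist is hypothetical and this is not the project's code:

```python
from html import escape
from html.parser import HTMLParser

ALLOWED = {"b", "i", "em", "strong", "a", "br"}   # hypothetical whitelist
DROP_CONTENT = {"script", "style"}                # drop these tags AND their contents

class Sanitizer(HTMLParser):
    """Whitelist-based tag filter: keeps allowed tags (minus their
    attributes), escapes text, and drops script/style blocks entirely."""
    def __init__(self):
        super().__init__(convert_charrefs=True)
        self.out = []
        self.skip = 0  # depth inside a dropped container

    def handle_starttag(self, tag, attrs):
        if tag in DROP_CONTENT:
            self.skip += 1
        elif tag in ALLOWED and not self.skip:
            self.out.append(f"<{tag}>")  # attributes (onclick=, href=) discarded

    def handle_endtag(self, tag):
        if tag in DROP_CONTENT:
            self.skip = max(0, self.skip - 1)
        elif tag in ALLOWED and not self.skip:
            self.out.append(f"</{tag}>")

    def handle_data(self, data):
        if not self.skip:
            self.out.append(escape(data))

def sanitize(html):
    parser = Sanitizer()
    parser.feed(html)
    parser.close()
    return "".join(parser.out)
```

A real purifier also validates nesting and rewrites unsafe URLs, which is why the project leans on HTML Purifier rather than something this small.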

I was originally disappointed that the Whitney Museum didn’t place the full source code in the public domain. What better way to make it possible for artists and programmers to extend the life of Davis’ project, by learning from, reusing, and improving the original code, than to open source this work? Though, possibly like Davis, my motivation is largely an interest in constructing a space for dialog, framing distinct questions and new possibilities, and waiting to see what happens from this gesture.

Included software
————–
HTML Purifier http://htmlpurifier.org/

Live version
————–
Enter After Douglas Davis

About the author
————–
Owen Mundy http://owenmundy.com/

Give Me My Data visualizations / tutorials

Monday, July 15th, 2013

I received two separate emails recently sharing images and sample code for visualization projects produced with the help of Give Me My Data. I’m happy to share them both here:


Alex Hornbake posted a visualization mapping Facebook friend locations and connections in D3. You can view it here and there’s a Gist with all the code here. His image shows where his connections are located, or have moved to or from. As abstract and beautiful as the image is, one can still make out a rough outline of the United States and even guess at specific cities like Atlanta, New York, Miami, and Los Angeles.
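A script in this spirit might reshape a Give Me My Data friend-connections CSV into the {nodes, links} structure that D3's force layouts expect. This is a sketch; the `source,target` column names are my assumption, not necessarily the export's actual headers:

```python
import csv
import io

def edges_to_d3(csv_text):
    """Convert a two-column edge list (assumed `source,target` header)
    into D3's {nodes, links} format, with links indexed into nodes."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    # collect every name that appears on either end of an edge
    names = sorted({r["source"] for r in rows} | {r["target"] for r in rows})
    index = {name: i for i, name in enumerate(names)}
    return {
        "nodes": [{"name": n} for n in names],
        "links": [{"source": index[r["source"]], "target": index[r["target"]]}
                  for r in rows],
    }
```

The resulting dict can be dumped with `json.dumps` and fed straight to a D3 force-directed layout.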

Rik Van Bruggen, of Neo Technology, used the “mutual friends graph” to create an interactive neo4j graph database. Following is more information from the tutorial he produced.

Ever since Facebook promoted its “graph search” methodology, lots of people in our industry have been waking up to the fact that graphs are über-cool. Thanks to the powerful query possibilities, people like Facebook, Twitter, LinkedIn, and let us not forget, Google have been providing us with some of the most amazing technologies. Specifically, the power of the “social network” is tempting many people to get their feet wet, and to start using graph technology. And they should: graphs are fantastic at storing, querying and exploiting social structures, stored in a graph database.

The first step to take was to get access to my own facebook data. Obviously there is the facebook graph api, but I hope that by now you would realise that is just not my cup of tea. Sounded too exhausting :) … So: I found myself a little tool that would allow me to download the data from facebook, in a workable format. Give me my data provides a number of different export options, but I chose the “Mutual Friends Network Graph” – as it would give me most info with regards to my actual social network.
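One way to picture the import step he describes is generating Cypher `MERGE` statements from the exported friend pairs. The `Person` label and `FRIEND` relationship below are illustrative choices of mine, not taken from his tutorial, and the naive quoting assumes names without apostrophes:

```python
def to_cypher(pairs):
    """Emit one Cypher MERGE statement per (a, b) friend pair.
    MERGE creates each node/relationship only if it doesn't exist,
    so re-running the import is idempotent."""
    stmts = []
    for a, b in pairs:
        stmts.append(
            f"MERGE (x:Person {{name: '{a}'}}) "
            f"MERGE (y:Person {{name: '{b}'}}) "
            f"MERGE (x)-[:FRIEND]->(y)"
        )
    return stmts
```

Each statement can then be run against a Neo4j instance through its shell or driver of choice.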


Sweet map glitch

Saturday, June 2nd, 2012

A strange and beautiful glitch in the Google Maps application.

Grid living

Monday, May 28th, 2012

Improve Your Experience

Friday, May 25th, 2012

Taking a trip down memory lane with:

$ sudo port install lynx

has proved to be an unsupported adventure.

Somehow I feel my experience was improved regardless.

Facebook’s God complex: No data for the “wicked”

Friday, July 8th, 2011

I was quoted recently in a TechCrunch article about the ongoing battle between Google and Facebook to draw users to their respective social networking services. In the article, The Only Backdoor Left To Sneak Your Facebook Friends Into Google+ Is Yahoo, Erick Schonfeld writes:

‘Over the weekend, Facebook blocked a Google Chrome extension called the Facebook Friend Exporter. And in fact, Facebook changed its OAuth 2.0 API in such a way that it “suddenly removed email addresses from the queries without warning,” says Owen Mundy, creator of Give Me My Data. Other data can still be exported, just not your friends’ email addresses.’ [1]

I wanted to clarify something about the above because my contribution is slightly vague. I also wanted to elaborate for other developers and people interested in how to get their data, specifically email addresses, *out* of Facebook.

Getting your data out of Facebook is a hot issue right now with the emergence of Google+. It’s important to be able to preserve your data, especially in the event you want to exit Facebook, or prepare for its possible demise. I think most people don’t care which social networking software they use, as long as they can stay in touch with their friends. Many depend on Facebook to keep their friends’ contact information up to date. Their friends’ phone numbers, emails, and physical addresses may have changed, but they can still be found on Facebook.

This is one great benefit of Facebook, but I think they’ve come to take a higher-than-thou approach to user data. For example, if I use a single piece of software on my computer to manage contact data for my friends, send them messages with pictures, etc., I am not locked into a contract with the machine or the software. I physically have the device that stores this data in my possession, and therefore can do whatever I like with it, assuming I can wrangle my data out of it.

One of the great benefits of the cloud is access, right? Wrong. Facebook has a god’s eye view regarding the matter of our data. The view from on high is that they can do whatever benefits Facebook, not necessarily the user. So they continue profiting from our activity while protecting its evidence from any company, person, or software that could compromise their lead in social networking. Unfortunately this includes us, so we are ultimately at their mercy.

But Facebook is not a merciless God. They don’t charge to use their service or store my data. This is the tradeoff. We sacrifice privacy—our right to not be tracked, advertised to, or sniffed by governments—in order to play games, have stupid political debates, and post pictures of our kids. So, they have the right to allow or prevent access to this information, regardless of whether it is right.

Regarding getting email addresses of your contacts, as far as I know, there are three ways to mass export your friends’ contact data, only one of which is currently allowed by Facebook.

Hacking

The Facebook Friend Exporter, created by Mohamed Mansour, is a browser extension that works independently of the Facebook API. It scrapes your friends’ contact data from the Facebook pages you visit and exports it as CSV. First, Facebook’s rules here for reusing this type of data are not exactly clear. While they don’t explicitly ban saving the contact info of your friends, they have a catch-all statement that tries to set guidelines for doing so. One could argue that by accepting your friend request a user has already consented to sharing with you.

“If you collect information from users, you will: obtain their consent, make it clear you (and not Facebook) are the one collecting their information, and post a privacy policy explaining what information you collect and how you will use it.” [2]

One thing that is clear: the method the Facebook Friend Exporter extension uses is banned according to Facebook’s Statement of Rights and Responsibilities:

“You will not collect users’ content or information, or otherwise access Facebook, using automated means (such as harvesting bots, robots, spiders, or scrapers) without our permission.” [2]

And as expected, Facebook has taken measures to prevent Mansour’s app from working.

“Mansour says that Facebook removed emails from their mobile site, which were critical to the original design of his extension. He told me that the company had implemented a throttling mechanism: if you visit any friend page five times in a short period of time, the email field is removed.” [3]

This is not the first time Facebook has referenced their Statement of Rights and Responsibilities to take measures to keep user data from hackers, artists, or competitors. In 2010 they served moddr and other makers of the Web 2.0 Suicide Machine with a cease and desist letter to prevent them from providing a service for Facebook users to delete their accounts. The letter, which claimed the project was guilty of “Soliciting users’ Facebook login information; Accessing a Facebook account belonging to someone else; Collecting Facebook users’ content or information using automated means such as scripts or scrapers without Facebook’s permission” was possibly effective. Currently only users of MySpace, LinkedIn and Twitter can commit “virtual suicide.”

The Face to Facebook (see image on left) project by Paolo Cirio and Alessandro Ludovico stole one-million Facebook profiles, filtered them with face-recognition software, and then posted them on a fake dating website sorted by facial characteristics. From a statement by the makers:

“Everybody can steal personal data and re-contextualize it in a completely unexpected context. And that shows, once more, how fragile and potentially manipulable the online environment actually is.”

I agree, and I like the project. But with a public work such as this it’s hard to argue, in my opinion, that the project will help the average web user. While many may reconsider the type of information they post, more than likely they will think this project is another example of the “bad guys” (a.k.a. hackers) doing bad things. While it critiques, in a very amusing and relevant way, the issue of online privacy, it leaves the user feeling violated, not necessarily thoughtful. As a work of protest it runs into the same problem as holding a giant sign in someone’s face that says “go to hell”: it makes it hard for the other person to see your perspective.

While I appreciated the fun jab and reference to FaceMash, Facebook did not see the humor in the project. The creators received cease and desist letters and were threatened with multiple lawsuits from Facebook’s lawyers. And in a final spiteful measure, Facebook deleted their profiles [4]. Apparently the rule is, what happens on Facebook, stays on Facebook.

Using the API

While I was sure that in the past my app, Give Me My Data, was able to retrieve users’ friends’ email addresses, when I tested it after the Facebook Friend Exporter news broke I found no emails. Looking around the web I can’t find any evidence that Facebook ever allowed this data to be accessed. While you can still export all other data from your Facebook profile using Give Me My Data, this tiny but important string of characters with an ‘@’ is one essential component Facebook won’t allow. Likely the policy was spurred by spam prevention, but given the above, it has the added bonus of blocking an exodus of users from Facebook.

In any case when you run the following FQL (Facebook Query Language) against their API it doesn’t error. This means the field exists, but they have written a custom script to remove it from the results.

SELECT first_name, middle_name, last_name, email
FROM user
WHERE uid IN (SELECT uid2 FROM friend WHERE uid1 = me())
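A silently stripped field like this can be detected by comparing the fields you asked for against the keys that actually come back. A small sketch, assuming the response has already been decoded from JSON into a list of dicts (the field names here mirror the query above):

```python
def missing_fields(requested, rows):
    """Return the requested fields absent (or null) in every result row,
    e.g. an `email` field silently stripped from an API response."""
    present = set()
    for row in rows:
        present.update(k for k, v in row.items() if v is not None)
    return sorted(set(requested) - present)
```

Running it over the FQL results above would report `["email"]`: the query succeeds, but the field never appears.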

Becoming a Preferred Developer

In the article above, Schonfeld also explains how users can access their Facebook friends’ contact info by first importing it into a Yahoo! account and then exporting a CSV which can be imported into Google+ (or anything, for that matter). I believe that Yahoo! belongs to the Facebook Preferred Developer Consultant Program, which gives them access above and beyond regular developers:

“Facebook provides PDCs with increased access to its employees and training. PDCs are expected to abide by program expectations around policy compliance, integration quality, and high-level cooperation with Facebook.” [6]

Whatever kind of cooperation it is that Facebook is giving these preferred developers, one can be sure it includes access to data Facebook considers sensitive, like email addresses. While Yahoo! is not listed as a preferred developer on the Facebook page above, they have access to the emails so they clearly have some kind of arrangement.

Google, on the other hand, most definitely does not. This is not the first time Google and Facebook have gotten into a scuffle over sharing (or withholding) data. Late in 2010 Google stated they would no longer allow Facebook and other services access to their users’ data unless Facebook or the other service allowed data to be accessed by Google. [7]

In closing, we already know everyone wants our data. All the clicks, likes, comments, photos, and video we generate or upload are tracked, analyzed, and ultimately compiled and sold to advertisers or others in the business of molding consumer (or political) behavior. We’ve come a long way since Gutenberg, but just like when he was alive, it seems there will always be powerful groups in control of the media of the day. And, even with the utopian promise of a democratic internet, information continues to be manipulated or hidden in order to keep them powerful.

Notes

  1. Schonfeld, Erick, “The Only Backdoor Left To Sneak Your Facebook Friends Into Google+ Is Yahoo,” Jul 5, 2011, http://techcrunch.com/2011/07/05/google-facebook-friends-yahoo/
  2. “Statement of Rights and Responsibilities,” Facebook, last revised April 26, 2011, http://www.facebook.com/terms.php
  3. Protalinski, Emil, “Facebook blocks Google Chrome extension for exporting friends,” July 5, 2011 http://www.zdnet.com/blog/facebook/facebook-blocks-google-chrome-extension-for-exporting-friends/1935
  4. “Angry Victims and Eager Business Partners React to the “Face to Facebook” Art Stunt,” ARTINFO, February 11, 2011 http://www.artinfo.com/news/story/36963/status-update-angry-victims-and-eager-business-partners-react-to-the-face-to-facebook-art-stunt/
  5. Gayathri, Amrutha, “Why Facebook’s Acts of Desperation Are Not Enough to Stop Google+,” International Business Times, July 6, 2011, http://www.ibtimes.com/articles/174946/20110706/google-plus-facebook-chrome-extension-block-facebook-friends-exporter-disable-social-network-yahoo-b.htm
  6. “Facebook Preferred Developer Consultant Program FAQ,” http://developers.facebook.com/preferreddevelopers/#FAQ
  7. Oreskovic, Alexei, “Google bars data from Facebook as rivalry heats up,” Reuters, Nov 5, 2010, http://www.reuters.com/article/2010/11/05/us-google-facebook-idUSTRE6A455420101105

12 apps to track, share, and visualize personal data

Monday, July 4th, 2011

When it comes to personal data everyone’s first concern is usually privacy. But a lot of us want to share our data too, with friends, colleagues, and even complete strangers. While numbers have been used for centuries to improve the way we manufacture and do business, using them to quantify our personal lives is a recent phenomenon.

I’ve been thinking about this because one of my goals in creating Give Me My Data was to inspire others to reuse their data, and respond with images and objects they created. But I’m learning that if you don’t know a programming language, your choices are somewhat scattered and intimidating.

In a recent email exchange with Nicholas Felton, creator of daytum.com and other quality data products, I asked him what other user data sharing and/or visualization web applications he might have encountered while working on daytum.

Included in this article are the three apps he mentioned, plus nine additions from my own research. All of the apps help users access their own data to track, share, and/or visualize it, either by recording it themselves or exporting it from another software. There’s a table at the end of the article to summarize and compare each.

Give Me My Data givememydata.com free

First, to give some context, Give Me My Data is a Facebook application that helps users export their data out of Facebook for reuse in visualizations, archives, or any possible method of digital storytelling. Data can be exported in common formats like CSV, XML, and JSON as well as customized network graph formats.

Status: operational, in-development
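The export idea itself is simple. As a toy sketch (not Give Me My Data's actual code), serializing the same records to either CSV or JSON might look like:

```python
import csv
import io
import json

def export(records, fmt):
    """Serialize a list of dicts as 'csv' or 'json'.
    A sketch of the export idea, not the app's implementation."""
    if fmt == "json":
        return json.dumps(records, indent=2)
    if fmt == "csv":
        buf = io.StringIO()
        # use the first record's keys as the header row
        writer = csv.DictWriter(buf, fieldnames=list(records[0]))
        writer.writeheader()
        writer.writerows(records)
        return buf.getvalue()
    raise ValueError(f"unsupported format: {fmt}")
```

The real app adds XML and network graph formats on top of the same pattern: one data structure, several serializers.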

Daytum daytum.com free/$$

And to further contextualize, I’ll also address Daytum, an online app that allows users to collect, categorize, and share personal or other data. You can add any data that can be quantified or written down, and organize and display it in many forms including bar and pie charts, plain text, and lists. There’s also a mobile site for quick submissions from your device, or you can use their iPhone app.

Status: operational, but not currently being developed

Geckoboard geckoboard.com $$

Geckoboard is a hosted real-time status board for all sorts of business (or personal) data. You can view web analytics, CRM, support, infrastructure, project management, etc., in one interface, on your computer or smart phone. To see data from other web services in your “dashboard” you add “widgets”—choose from a large list of APIs, give permissions, configure a variety of options, and see your data in a customized graph. Note though, this service is only for presenting data that is hosted elsewhere, and only in this interface. If you like looking at numbers all day, this is for you.

Status: operational

Track-n-Graph trackngraph.com free/$$

Track, graph, and share any information you can think of: your weight, gas mileage, coffee consumption, anything. The design is a little awkward, the graphs don’t display in Chrome or Safari (Mac), and as far as I can tell there’s no API, but the site seems very useful for storing and making simple graphs of your personal data. There are also various “templates” you can reuse to keep track of data like the Workout Tracker, which has fields for gender and age in addition to minutes you worked out, all of which are important in figuring other data (e.g. calories).

Status: operational

your.flowingdata.com your.flowingdata.com/ free

your.flowingdata lets you record your personal data with Twitter. With it you can collect, interact, customize views, and determine privacy by sending private tweets to your account. This project is created by Nathan Yau who writes Flowing Data and studies statistics at UCSD.

Status: operational, in-development

mycrocosm mycro.media.mit.edu free

Mycrocosm is a web service that allows you to track and share data and statistical graphs from the minutiae of daily life. Mycrocosm was developed by Yannick Assogba of the Sociable Media Group of the MIT Media Lab.

Status: operational, but not currently being developed

ManyEyes www-958.ibm.com free

ManyEyes is a project by IBM Research and the IBM Cognos software group. On Many Eyes you can upload your own data and create visualizations, and view, discuss, and rate others’ visualizations and data sets. It is a great concept but it hasn’t evolved much since its original launch. In fact I’m finding the visualization technology has slowly devolved, leaving only about 20% of visualizations actually displaying (Chrome 12.0 on OS X 10.5.8, if folks are reading).

Status: operational

Fitbit fitbit.com $99.95

The Fitbit is a hardware device that tracks your motion and sleep throughout each day. This data can be uploaded and visualized on their website to reveal information about your daily activities like calories burned, steps taken, distance traveled, and sleep quality. The Fitbit contains a 3D motion sensor like the one found in the Nintendo Wii and plugs into a base station to upload the data.

Status: operational

Personal Google Search History google.com/history free

When I first saw this application on the Google site I was immediately alarmed. The amount of data they have collected is staggering; for example, “Total Google searches: 36,323” (since Jan 2006). This is a fantastic window into the life of a user and what they are reading, watching, and responding to. It’s like another, admittedly less manicured, version of Facebook. Instead of creating a profile, I am being profiled.

The privacy implications are serious here, which is probably why you have to log in again to view it. It is also why a user’s search history draws the interest of interface artist Johannes P. Osterhof, who is exploring the line between private and public data, as well as the even further-evaporated division between surveillance and social networks, in his year-long search-history-made-public project, simply titled Google.

But, as everyone probably already knows, these big companies are making money and providing services. Google has the resources to take your privacy seriously. Well, kind of, because it mostly doesn’t fit into their business model to not track people.

Status: operational

Google Takeout google.com/takeout free

Speaking of funding, I’m quite impressed by this project. Google Takeout is developed by an engineering team at Google called the Data Liberation Front who take their jobs very seriously. In addition to their Google Takeout project, which allows you to export some of your data from Google, they have a really great website with current information about getting access to the data you store with Google.

Status: operational, in-development

gottaFeeling gottafeeling.com free/$$

gottaFeeling is an iPhone application that allows you to track and share your feelings. It’s a simple concept, and while loaded down with a lot of rhetoric, it reminds me of the amazing “We Feel Fine.”

Status: in-development

BuzzData buzzdata.com unsure

Finally, I’ll end with BuzzData, a data-publishing platform that encourages the growth of communities around data. It’s not yet public, but I’ve received a private taste of what this app will do, and it looks like it will be pretty cool. Think a mashup between Github and ManyEyes.

Status: still in-development, not public

So I’ll end with the table I created in my research. There are obviously many more ways to keep and manage data that I haven’t addressed here, but this is a good start. For further reading check out the Quantified Self blog/user community/conference created by Gary Wolf, who also authored The Data-Driven Life, the New York Times article linked above.

Give Me My Data: track/upload yes; custom data types yes; visualize n/a; publish no; privacy yes; export yes; mobile upload n/a; API no; price free; limits none.

Daytum: track/upload yes; custom data types yes; visualize yes; publish yes; privacy $$; export yes; mobile upload mobile site and iPhone app; API no; price free / $4 per month; limits free account limited by amount.

Geckoboard: track/upload no; custom data types yes; visualize yes; publish no; privacy yes; export no; mobile upload n/a; API only for viewing; price $9-$200 per month; limits number of users.

Track-n-Graph: track/upload yes; custom data types yes; visualize yes; publish yes; privacy yes; export no; mobile upload web-based; API no; price free / $25 per year; limits free account limited by amount.

your.flowingdata: track/upload yes; custom data types yes; visualize yes; publish yes; privacy yes; export yes; mobile upload via Twitter; API via Twitter; price free; limits none.

mycrocosm: track/upload yes; custom data types yes; visualize yes; publish yes; privacy yes; export no; mobile upload web-based; API email-based; price free; limits none.

ManyEyes: track/upload yes; custom data types yes; visualize yes; publish yes; privacy no; export yes; mobile upload no; API no; price free; limits none.

BuzzData: track/upload yes; custom data types yes; visualize yes; publish yes; privacy yes; export yes; mobile upload email; API ???; price ???; limits ???.

Google Web History: track/upload yes; custom data types n/a; visualize yes; publish yes; privacy yes; export yes; mobile upload yes; API no; price free; limits none.

Google Takeout: track/upload yes; custom data types yes; visualize n/a; publish n/a; privacy yes; export yes; mobile upload n/a; API no; price free; limits none.

Fitbit: track/upload yes; custom data types yes; visualize yes; publish yes; privacy yes; export yes; mobile upload yes; API yes; price $100 (website is free); limits free web account limited by amount.

gottaFeeling: track/upload yes; custom data types no; visualize no; publish yes; privacy yes; export no; mobile upload iPhone; API no; price free; limits none.

track/upload: Can you track or upload your own data?
custom data types: Does the app support custom data types?
visualize: Can you create visualizations with the app?
publish: Can you publish your data with the software?
privacy: Are there options for keeping your data private while using the app?
export: Can you export the data back out?
mobile upload: Are there options to track or upload data from a device?
API: Is there an Application Programming Interface that allows you to write code to manage data?
price: Is there a free version?
limits: What limits are imposed on the free version?

Update: Check out Google Gauges and other Google Charts.