I received my MFA this spring from Art Center College of Design’s Media Design Practices program. Before graduate school I worked on green infrastructure (green roofs, street-level rain capture), studied human geography at Dartmouth, and spent 18 years in rural Vermont.
AKA Genderalizations, if you're a fan of glib portmanteaus (which I am), this project looks at the way people talk about men and women as groups through sentiment analysis. I captured several thousand tweets containing the phrase "men are" or "women are" and ran them through a sentiment analysis algorithm, which determined whether each tweet was 'positive', 'neutral', or 'negative' in tone and assigned it a corresponding score. These methods are by no means precise, and in exploring you may find a few tweets that seem miscategorized or even inapplicable.
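The post doesn't name the specific sentiment algorithm, but the score-then-bucket idea can be sketched with a toy lexicon approach (the word lists and function names here are illustrative stand-ins, not the project's actual code):

```python
# Illustrative lexicon-based sentiment scoring. Real sentiment analysis
# libraries are far more sophisticated; this just shows the shape of
# "score each tweet, then bucket it into positive/neutral/negative".
POSITIVE = {"great", "strong", "amazing", "wonderful", "kind"}
NEGATIVE = {"terrible", "awful", "weak", "stupid", "worst"}

def score_tweet(text):
    """Return a crude score: positive word count minus negative word count."""
    words = text.lower().split()
    return sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)

def label(score):
    """Map a numeric score to the three buckets used in the project."""
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

A scheme like this also makes the miscategorizations mentioned above easy to understand: sarcasm, negation, and slang all slip past simple word-level scoring.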
That said, some interesting patterns do reveal themselves. On the whole, we seem to have a lot more to say about women than we do about men. We also skew towards neutral-negative statements. And while not measured by any algorithm, I'm going to go ahead and say that many of the generalizations captured could be safely described as offensive. Generalizing statements on the internet isn't our best look, as a species.
While I captured and analyzed 3,908 tweets in total, the visualization displays a smaller set of 600 that has been lightly cleaned up (excessive retweets expunged, etc.) for the sake of clarity. The color and position of each tweet rectangle represent its sentiment score. The graphic elements are rendered dynamically using two.js.
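The cleanup and ordering step might look roughly like this (a sketch under assumptions: the field names and the exact retweet-dedup rule are mine, not the project's):

```python
# Sketch of the cleanup described above: collapse duplicate retweets of
# the same text, then order the survivors by sentiment score so that
# position (and color) can track sentiment in the visualization.
def clean_and_order(tweets):
    """Deduplicate tweets by normalized text, then sort by score."""
    seen = set()
    unique = []
    for t in tweets:
        key = t["text"].lower().strip()
        if key not in seen:
            seen.add(key)
            unique.append(t)
    # Most negative first, most positive last; in the visualization this
    # ordering drives each rectangle's place in the layout.
    return sorted(unique, key=lambda t: t["score"])
```

Sorting by score means the color gradient and the spatial order reinforce each other, which is one way to make a few hundred rectangles legible at a glance.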
The tweets were streamed via the Twitter API using Tweepy in Python. The site is best experienced on a desktop browser, as the mouseover interaction doesn't translate perfectly to touch screens. I think there is a lot of room to expand this project and I hope to revisit it at some point in the future.