Why we do stuff

This blog has become a home to digital projects of various sorts, as my interest has grown in the productively critical potential of the digital humanities. One of the ways that I think about the importance of that quasi-field is as a tool for strategic cultural participation.[1]

I think we happen to have better ideas, and access to cooler and more sharply critical stuff than what the culture industry is willing to pay for; and as long as we have some healthy mix of open-source and freemium proprietary platforms and we’re willing to work more or less for free, there’s no reason academia can’t help redefine what culture is and what it’s for.

Deak Nabers gave a characteristically contrarian talk a few years back at Brown, about how it’s not such a bad thing when administrators talk about the corporate university, and students as consumers — if we turn to our advantage the fact that they don’t really grasp the full history and political potential of those terms. (YouTube of the talk.) The short version of his talk is that corporations are mission-driven social entities that seek the cultural reproduction of that mission; and that they go about doing this not by making what people want, so much as trying to get people to want what they make. I find that liberatory consumerist perspective (which I believe Frances Ferguson’s work on utilitarianism also gestures towards) compelling. I want students who leave my Romanticism classes to read more of the Shelleys, more of the Wordsworths, and to get lost in the Blake Archive when they’re bored.

But I also think it’s important to leave students feeling empowered to participate critically in the cultural sphere, even if the upshot is just that they live-tweet a film or a book they’re reading. In other words, I agree with Michael Bérubé’s diagnosis of our present fears of cultural irrelevance, but his prescription that we become “curators” is clearly inadequate to the problems he’s identified.

That, then, is why I find meaning in the odd mini-toys that I’ve built and shared on this site. They’ve only really been precision kaleidoscopes up until this point, which is to say, my own way of trying to prove to myself that recondite scholarly sources can be made aesthetically compelling — for instance, getting a slice of the congressional record to argue with you in real time, or allowing one to spatially navigate George Biddell Airy’s double-decker summary of procedures for the 1874 Transit of Venus.

There will be more of this sort of work coming in the future, and I expect it will be weightier. I also think undergraduate students are perfectly capable of the same sort of participation. I will therefore continue to share project work by my students, when they want me to. This post in fact started as a short introduction to a project by a student from my Romanticism course at Rice last semester, but it ballooned into this here exposition on my thoughts at the beginning of what I think is going to be a productive year in terms of cultural participation.

If you read this far, please do check out (listen to!) Molly’s objectively more interesting project, in which she arranges and performs old song versions of two of Percy Shelley’s poems, and talks about how this process affects her readings of them.

————————

[1] I should note some great recent critiques, or institutional contextualizations, of public intellectualism, which I think is what I’m talking about here.
A) Tressie McMillan Cottom, discussing that work’s necessarily controversial nature and the need for institutions to support people who engage in it, lays out pretty starkly the terms by which public intellectualism is incentivized:

The point is, institutions have been calling for public scholarship for the obvious reasons. Attention can be equated with a type of prestige. And prestige is a way to shore up institutions when political and cultural attitudes are attacking colleges and universities at every turn. And, faculty are vulnerable to calls for them to engage. We’re all sensitive to claims that we’re out of touch and behind on neoliberal careerism.

B) Let’s not forget Adeline Koh’s recent cold-water dousing of the field’s oftentimes feverish, self-congratulatory futurism, “A Letter to the Humanities: DH Will Not Save You.”

Singing Shelley

Paula Feldman has recently pushed discussions about the musicality of Romantic poetry into a consideration of the movement’s actual music. As I understand it, her project, Romantic-Era Lyrics, attempts to bring contemporary sheet music renderings of the period’s poetry to life, by making these accessible as a database and providing some actual recorded performances.

One of my students this semester, Molly Mohr, was very interested in the musicality of the poetry we had been reading. She was also a member of a talented group in Rice’s vibrant a cappella scene. Molly used her final project to bring her artistic skills to bear on the material we had been working with. Starting from two poems by Percy Shelley and Colin McAlpin’s 1903 arrangements of them, she reworked these as a cappella numbers and performed each twice: once solo, and once as a duet.

The performances are beautiful, but in my opinion the most important part of Molly’s project was her critical reflection on how the arrangement and singing of these poems productively interfered with her readings of them. Molly has been kind enough to share these recordings; I’ll present them below with her readings of each.

In another post, I said that experimental practical work is complementary to critical work when students feel empowered in relation to the cultural artifacts they’re working with. I think Molly’s work is a great example of this.

Everything below is from Molly’s artist statement! There is a link below each song to the pdf of the scored music.

1. “Widow Bird”

McAlpin’s composition really highlights the stressed and unstressed syllables of Shelley’s poem. In 4/4 music, the emphasis usually falls on the first and third beats of each measure, and the stressed syllables in Shelley’s poem mostly match up with these stressed musical beats. For example, in measures 1-2, the lyrics say “A widow bird sat” (the bold syllables are the ones stressed in Shelley’s poem). Accordingly, “wi” and “bird” fall on the first and third beats of the measure, which emphasizes those syllables just as Shelley does in his poem. McAlpin repeats this method throughout a lot of the song – for another example, look at measure 18: “flow’r upon the.” Similarly, McAlpin uses dotted quarter notes and eighth notes to further emphasize Shelley’s stressed syllables. For instance, in measure twelve, the notes in “stream below” emphasize “stream” because the word falls on a dotted quarter note, which is held longer than the quickly following eighth note. The eighth note with the first syllable of “below” has the effect of an unstressed syllable because of how quick it passes. McAlpin also utilizes pickup notes to imitate unstressed syllables from the poem, such as in measures 5-6. The fourth beat of measure 5 is the syllable “up,” which leads into – or gives weight to – the first beat of measure 6, which is both a stressed note and a stressed syllable (the “on” of “upon”). McAlpin’s various uses of musical rhythms and beats to emphasize Shelley’s stressed and unstressed syllables allowed me to understand exactly what Shelley wanted the readers to hear through the stresses of his poem.

I perceived Shelley’s poem to be dreary and serious. He writes of a widowed bird that is alone and mourning for her love, and he uses various cold adjectives, like wintry, freezing, and frozen, to metaphorically express the depressed mood of the poem. So, I composed the duet (alto) line to emphasize the sad, “mourning” tone of the song. First, while the first soloist (soprano) sings a lot of quarter notes, like in measure two, I wrote the duet line to hold long notes ranging from two to four beats underneath the soprano’s movement. For example, in measures 1-5, the alto line holds low notes to give the impression of a death march of sorts to accompany the mood of mourning. Then, in measures 8-10, the alto line sings half notes held to give the feeling of “creeping,” just as the lyrics state. I also wanted to emphasize the adjectives Shelley uses to add to the mood of the song. So, I turned “wintry” in measure six into two eighth notes so that the syllables are sung quickly. Through the quickness, the consonants of the word are brought out; for example, the “tr” of the word is more violent. The consonants in turn really make the word stand out to help establish the cold mood of the song. I also wanted to exacerbate Shelley’s depiction of frozen wind “above” and freezing stream “below,” so I had the soprano hold “above” for one beat longer than the alto because she sings the high note (measures 10-11), while I had the alto hold “below” for an extra beat since she sings the low note (measures 12-13). These small additions make Shelley’s scenery stand out to add to the wintery atmosphere.

I added depth to the songs through the two places I chose to remain solo-voiced: measures 17-19 and 25-29. In measures 17-19, the narrator seems to be reflecting on a thought because of the sudden quietness of the dynamic marking (p means piano, or quiet) and the performance marking of “ad lib,” meaning to sing at your own pace and rhythm. To accentuate the narrator’s reflection, I excluded a duet voice because it seems as if the narrator is in her own head, alone. Then, in measures 25-29, I chose to not write a duet line because I wanted the ending of the song to have a “haunting” mood. While McAlpin repeats “A widow bird of mourning” at the end of his song, Shelley doesn’t repeat this line at the end of his poem. This addition makes me think that McAlpin wants to emphasize the dreary nature of the poem by repeating “widow” and “mourning.” I left the measures solo to create a haunting, eerie effect, which is accentuated even more so because of the pp, or double-soft, dynamic marking McAlpin used. I also wanted to highlight the fact that the bird is alone, hence the solo voice.

PDF: Widow-Bird (Solo)

PDF: Widow-Bird (Duet)

2. “Music, When Soft Voices Die”

McAlpin’s version of Shelley’s poem definitely altered my perception of the song. When I first read Shelley’s “Music,” I simply thought the narrator of the poem was saying that love lives on even after death – a romantic thought. However, the very beginning of McAlpin’s song threw me for a loop because of the suggested performance marking of “Grave.” This means that the composer wants the mood of the song to be just that – grave. Confused, I re-examined the poem and found it to have underlying moments of dreariness, such as when Shelley writes “die,” “sicken,” and “dead,” which are not exactly cheery and supportive words. I then reviewed the song with a mindset of a mix of happiness and sadness.

Just as he did in “Widow Bird,” McAlpin uses beat stresses in his music to emphasize stressed and unstressed syllables in the poem. For example, in measure 1, the first and third beats of the measure emphasize the stressed syllables of the line: “Music, when soft.” Another example is in measure 11 with the stressed word “rose” falling on a dotted quarter note and its next word, “is,” quickly going by in an eighth note. My favorite example, however, is in measure 12 when McAlpin really emphasizes the three unstressed syllables in a row by using a triplet set of eighth notes: “Are heap’d for the beloved’s.”

When writing the duet, I wanted to highlight the feeling of “yes, this is supposed to be romantic, but something just doesn’t feel right.” I therefore wrote the alto line to mostly sound great with the soprano, but to have a few small clashes that throw things slightly off. This dissonance between notes occurs on “die” in measure 1, the last syllable of “sicken” in measure 7, the first syllable of “dead” in measure 11, and the first syllable of “slumber” in measure 18. In what is not a coincidence, these small note clashes occur on the words that stick out and give an ugly picture among an otherwise romantic poem. McAlpin adds to this feeling of uneasiness with his random sharps and naturals in measures 4, 6, and 13, so I took his lead and created even more uneasiness by adding in oddly sounding notes to the alto line in measures 4 and 13.

I chose to leave out the alto line in measure 16 because, like in “Widow Bird,” the extra “art gone” is not part of Shelley’s poem. This fact, coupled with the sudden growth and decline in volume and the performance marking of rit, or to slow down, creates a feeling of reflection. To highlight the narrator’s pondering within her own head, I removed the second voice.

Overall, McAlpin treats the two songs similarly by aligning his stressed beats with Shelley’s stressed syllables and by adding in random but meaningful moments of extra lyrics. I simply expanded on McAlpin’s work by trying to make “Widow Bird” even more mournful and “Music” more uneasy by adding in a duet line.

PDF: Soft Voices (Solo)

PDF: Soft Voices (Duet)

A Public Discourse Bot Should Be Mistaken for the Real Thing

1. Afterwards, there was still the command line

This past month, having just defended my dissertation, I found myself needing to do some work, but not up to the challenge of literature. In one of my favorite passages from De Quincey’s Confessions, he claims that in the depths of his addiction, his thinking clouded by pain and intoxication, political economy helped him to convalesce:

In this state of imbecility I had, for amusement, turned my attention to political economy; my understanding, which formerly had been as active and restless as a hyæna, could not, I suppose (so long as I lived at all) sink into utter lethargy; and political economy offers this advantage to a person in my state, that though it is eminently an organic science (no part, that is to say, but what acts on the whole as the whole again reacts on each part), yet the several parts may be detached and contemplated singly.

In a similar situation, I decided to code something. Coding projects in my experience have a similar cognitive reward structure to what De Quincey describes of reading economics. Because computers do exactly what you tell them to, it’s hard, unless you’re working in a complex design environment, to go too far afield of your intentions. Contrary to the questions-based approach of traditional humanistic inquiry, I’ve found that programming, at my level of sophistication, either results in something that works, or doesn’t. Every once in a while, an error will be genuinely interesting, but that’s the rare exception. And so the modular thinking facilitated by programming allows me to test and re-test pieces and wholes in ways that keep me productively balanced in that flow state between feeling successful and feeling frustrated. In short, I find the work of coding to be qualitatively different than that of critical thinking. But that’s a far cry from saying that the end result of such work necessarily lacks a critical edge.

2. Critical bots

I’ve wanted to build a Twitter bot for a while, both because the Twitter API is where I first cut my teeth on processing large data sets, and because I’ve found some of the work in this area intellectually stimulating and politically inspiring. Mark Sample has a list of “protest bots,” which, in a post that inspired my title here, he defines as “topical, cumulative, data-based, and oppositional,” and, more importantly, “can’t be mistaken for bullshit.” In my opinion, the most medium-transforming protest bot is @every3minutes by the historian Caleb McDaniel at Rice. Caleb famously built a bot that tweets a variation of “A slave was just sold” every three minutes, the average interval between such sales from 1820 to 1860.

This bot changes the entire Twitter experience: scrolling through status updates stops being a thoughtless pastime when, every few flicks of the thumb, you’re suddenly socked in the gut by a historical reality made immediate.

3. A political discourse bot

I attempted to build something slightly different — a public discourse bot, we might call it. It’s May 2015, and the 2016 presidential campaign is well underway. On the one hand, this is depressing because blah blah permanent campaign blah blah. On the other hand, it’s exciting because it is one of the few domains in which most (not all) Americans still have some say over policy: voting districts have been gerrymandered–and then some–out of existence, and the Supreme Court is openly flirting with disregarding the clear intent of what little Congress has passed. In other words, I think Presidential campaigns are a great opportunity for public discourse.

What I built, then, was an interactive Twitter bot based on a rudimentary keyword search engine. I scraped the congressional record for all of Bernie Sanders’ statements, and used NLTK to build a database of his keywords and phrases.[1] I removed some of the passages that would be confusing when taken out of context (points of order, some letters read into the record, the details of some amendments, etc.), and after messing around for a while with the basic search algorithm and the output formatting, the senator’s congressional record had essentially been made interactive.
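For the curious, the core of such a keyword search engine is small. Here’s a minimal pure-Python sketch of the idea (the real version used NLTK; the mini-corpus, document names, and tf-idf scoring below are illustrative stand-ins, not the bot’s actual code):

```python
import math
import re
from collections import Counter

# A tiny stop list; the real project used NLTK's much larger one.
STOPWORDS = {"the", "a", "an", "of", "to", "and", "in", "that", "is", "we", "for"}

def tokenize(text):
    return [w for w in re.findall(r"[a-z']+", text.lower()) if w not in STOPWORDS]

def build_index(documents):
    """Map each document id to its term counts, plus overall document frequencies."""
    index = {doc_id: Counter(tokenize(text)) for doc_id, text in documents.items()}
    df = Counter()
    for counts in index.values():
        df.update(counts.keys())
    return index, df

def score(query, index, df):
    """Score each document against the query with a simple tf-idf sum."""
    n_docs = len(index)
    scores = {}
    for doc_id, counts in index.items():
        s = sum(
            counts[t] * math.log(n_docs / df[t])
            for t in tokenize(query)
            if df[t]
        )
        if s > 0:
            scores[doc_id] = s
    return scores

# Hypothetical mini-corpus standing in for the scraped congressional record.
corpus = {
    "speech-1": "We must raise the minimum wage for working families.",
    "speech-2": "Climate change is the great environmental crisis of our time.",
    "speech-3": "The minimum wage has not kept pace with productivity.",
}
index, df = build_index(corpus)
print(score("what about the minimum wage?", index, df))
```

Everything past this point — picking a passage, trimming to 140 characters, appending the source link — is formatting on top of a lookup like this one.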

4. The Specifics

@SandersBot has two functions built in:

  1. If someone mentions him, he makes a weighted random guess as to which document in his corpus is the best fit, and then the best passage in that document.
  2. If nobody talks to him for an hour, he reads what his friends are tweeting about, and tries to respond with the two sentences he thinks are most relevant to those topics.

And, because he was written for an academic, he always includes a link to the source material in the congressional record.
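The “weighted random guess” in function 1 can be sketched in a few lines (the scores dict below is a hypothetical stand-in for the output of the keyword matcher, not the bot’s real numbers):

```python
import random

def weighted_pick(scores, rng=random):
    """Pick a document id with probability proportional to its relevance score,
    so strong matches usually win but weaker ones occasionally surface."""
    doc_ids = list(scores)
    weights = [scores[d] for d in doc_ids]
    return rng.choices(doc_ids, weights=weights, k=1)[0]

# Hypothetical relevance scores from a keyword match against the corpus.
scores = {"speech-1": 4.0, "speech-2": 0.5, "speech-3": 2.5}
print(weighted_pick(scores))
```

The randomness matters for a bot: always returning the single top hit would make it repeat itself verbatim whenever two people ask similar questions.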

I’ve been surprised at how well he deals with basic policy questions:

[Image: bernie_jesse]

And this past weekend, on Memorial Day, he really seemed to pick up on the day’s theme in his timeline:

[Image: bernie vets copy]

Sometimes, of course, he misses his mark. But the other day, the bot did alright on Twitter’s version of the Turing test, when a couple of users didn’t realize it was a bot account, and engaged it in an extended discussion. The bot responds to almost any mention, so replies to him can quickly prompt runaway threads. One person was a bit frustrated at how quickly the bot posted long, sometimes off-topic replies in her timeline; after I told her the replies weren’t coming from a human, she was very kind but said she wouldn’t use it again. Another person who engaged it at length didn’t seem to mind when I told him it was a bot, and came back to ask it a few more questions the next day.

5. Conclusion

I don’t know if this would work with every congressperson (though I did post an early version of the project’s code to github). What’s ironic about my choice of Bernie Sanders is that his vocal stance on specific issues is what makes this bot both functional and in many ways superfluous. Functional because his specific and expressive stances on policy questions make the corpus so searchable and quote-worthy; superfluous because his clear stances on most issues make them easy to find in any news search engine.[2]

But at the same time, the bot’s regular tweeting schedule and always-on availability for questions stage less of an intervention than a contribution to political discourse. At the end of the day, the bot is supposed to be like C-SPAN having a dream about social media: it tries to blend the dusty text of the congressional record with the day-and-night pulse of Twitter. This bot tries to make the politics of Congress accessible to one of the most politically rich communications platforms we have today.

 

Footnotes
———–

[1] I was inspired here by one of my students, who this semester submitted a truly compelling final project that took William Wordsworth’s poetic corpus, and scrambled the lines into believable simulacra using Markov chains and Natural Language Processing.
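For anyone curious what that kind of scrambling looks like mechanically, here’s a toy word-level Markov chain sketch. The sample lines are hypothetical Wordsworthian stand-ins, and this is my illustration of the technique, not the student’s actual code:

```python
import random
from collections import defaultdict

def build_chain(lines):
    """Map each word to the list of words observed to follow it in the corpus."""
    chain = defaultdict(list)
    for line in lines:
        words = line.split()
        for i in range(len(words) - 1):
            chain[words[i]].append(words[i + 1])
    return chain

def generate(chain, start, length=8, rng=random):
    """Walk the chain from a start word, sampling each next word from the
    successors actually observed in the corpus."""
    words = [start]
    while len(words) < length and chain[words[-1]]:
        words.append(rng.choice(chain[words[-1]]))
    return " ".join(words)

# Hypothetical lines standing in for Wordsworth's poetic corpus.
lines = [
    "I wandered lonely as a cloud",
    "as a cloud that floats on high",
    "lonely as the moon among the stars",
]
chain = build_chain(lines)
print(generate(chain, "lonely"))
```

Because every transition comes from the source text, the output stays locally plausible — each pair of adjacent words really does occur somewhere in the corpus — which is what makes the simulacra believable.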

[2] If anybody is interested, I started processing Hillary Clinton’s congressional corpus but decided I’d given enough time to the project as a whole. I would be happy to send you the files.

2014 Wordsworthian Student Projects

2014 was busy: teaching at URI, moving to Houston, writing like mad, and teaching at Rice — Romanticism and Shakespeare, so thanks both to the Romanticists back at Brown and to the Shakespeareans James Kuzner and Jean Feerick, without whom I wouldn’t be able to teach a spider to weave a web.


The point being that I only blog about my students these days, and my Romanticism students this past semester did some very interesting work that lends itself to the blog format. Conveniently, they both worked with the same subject material, namely the working relationship of Dorothy and William Wordsworth. That relationship has become something of a pedagogical touchstone for me (displacing even Blake!), after two years of on-and-off dissertation engagement with the subject.

And so in my Fall 2014 Rice seminar, I gave students a crash course in Wordsworthianism, walking them through Homans, Levin, Mellor, Woof (and Woof), Fay, and Newlyn – though I should say I only teach the journals. I’ve adopted, as a means of fixing the canonicity problem, the resolution never to teach William without Dorothy. I’m more in Fay’s camp than anywhere else, and try to do justice to the complexity of their working relationship, but as one student (I can’t remember who) assessed my terrible poker face rather fairly during office hours, “You weren’t going to let William get away with it.”

And I love what they did. Dorothy, in their renderings, isn’t an appendix, or a pretext, or an index. Their representations make us think of the writers’ relationship in terms of intertextuality, with interesting and productive differences between their readings.

———–

1. Jessica, Chas, and Daniel made a video reflecting on William and Dorothy’s biographical and literary relationship. In a reading (Daniel) of an excerpted “Tintern Abbey”, they allow William only to go so far in his approach to the poem’s closing address to his sister (see Fay’s “address-to-maiden”). At a certain point, Dorothy insinuates herself into this poem as though she were refusing to merely be imagined by her brother, and retroactively changes the video we’ve just watched.

———–

2. Alitha (Computer Science major) and Sharon (English major) combined their skills to create a hypertext version of the “Daffodils”. In one column, we have William’s poem (the 1815 version); in the second, Dorothy’s journal entry; and in the third, a changing block of commentary. When you mouseover portions of either column that have been categorized in a particular way, corresponding blocks of text in both columns are highlighted. When you click on such highlighted text, commentary pops up in the third column. It’s a lot like RapGenius, but I like this interface much better: there is no original text here with an index, but two interrelated texts and a changing third that attempts to mediate. As you’ll see, the page takes a determined interpretive stance on the nature of the relation between the texts; and while such authoritativeness is often a subject of criticism against hypertextual presentations, I think the site’s interactivity and critical approach present a real challenge by Digital Humanities to the traditional anthology form.

[Image: syau-site-concept] Concept sketch. Click to view interactive site.
(best viewed in full-screen mode)

Kudos to both groups for doing great critical work in nontraditional formats, and for giving me some incredible teaching aids for this semester’s Romanticism class!

2013 Summer Science Fiction Course Feedback

Regular visitors to my blog know that I taught a science fiction course last year at Brown, through the Continuing Education department. In that course, we explored a number of different authors’ and filmmakers’ attempts to understand the limits of personal and social human existence, from Mary Shelley’s Frankenstein to Star Trek to Neal Stephenson’s imagination of a Turing test with paranoid computers.

Some of my students, a year out, have written testimonials for me to share about how the course has helped them as writers and critical readers of literature, and I’d like to share them! It was an immensely gratifying experience to teach an often-overlooked genre to such gifted students, and I’m so pleased to hear the same from them.

Student 1:

“Future Perfect was a fantastic course, not only because of the nature of the material, which was in and of itself fascinating, but because of the new way that the class forced me to look at literature, at society, and at the world- expanding my horizons, opening my eyes to a new layer of textual analysis. Exploring the crossroads of science and literature in a way I never would have imagined, gave us all a chance to think about things in new ways and to try to understand the innate human desire to wonder about what the future will bring, what our technological advancements mean, and ultimately what makes us human and what makes humanity superior or different to nature or to technology. I find myself thinking about literature in new ways still, of course as much as this is an accolade for the class, it is even more so an accolade for John, who expanded my world view, while simultaneously helping me focus on the close analysis of science fiction in every medium and most surprisingly helped me to unearth aspects of my own writing that I otherwise never would have realized fully.”

Student 2:

As someone who adores science fiction, Future Perfect was basically an irresistible opportunity to talk about the books and films I love. Sci-fi is a field that gets tragically overlooked by just about every curriculum, so to be able to learn more about it and discuss its themes in a classroom environment was a unique (and awesome) experience. The class definitely helped me as a writer; the assignments gave us a lot of room for creativity, and the feedback we received was conducive to improvement.

Student 3:

I signed up for Future Perfect planning to indulge a guilty pleasure of mine (science fiction of course!), but after the two weeks were over I had such a deep respect for the genre and its ideas that I now consider sci-fi a true art form and a (entirely guiltless) passion. The class touched on everything from philosophy to history to real scientific discoveries, all of which are the inspiration for science fiction along with the powerful question, what if? Science fiction is not only a film and literary genre, it is a lens through which many people have looked at the world and imagined it differently, which of course, is the first step to changing it. As a student of the Future Perfect class, I was expected to complete college-level reading and writing assignments, and the fact that I was reading Frankenstein and writing short stories about reanimated corpse “service beings” and meteorites that bent the laws of probability had no impact on the fact that now, as I face college in a mere few months, I feel ready for it. Future Perfect was an unforgettable experience, and as close to perfect as things come outside of fiction.

I’m teaching a new version of this course in July. If you know a pre-college student who loves science fiction and wants to learn how to read it and write it (either in short story form or as an academic essay), please send them my way!
john_mulligan@brown.edu

#ACLA2014 tweets (Now with #ACLA14) through Saturday, March 22, 10:00pm

The graph has been updated to include all tweets before 10:00pm (ish) on Saturday, March 22. It includes Friday’s tweets as well.

In addition, it now includes #ACLA14, since there has been some activity on that hashtag as well.

I’m sharing what I think is a useful tool for navigating the Twitter activity on #ACLA2014 (& #ACLA14). When I first posted this, there wasn’t yet enough activity to require this kind of map; there now is (734 tweets by 225 users, and 196 connections), enough to produce a navigable map.

The nodes in this graph are people tweeting on #ACLA2014 & #ACLA14, or mentioned by people using that hashtag. If you click on them, you’ll see a user icon, a list of their tweets with links to that content, and below this a list of their connections to other people. Connections represented here: retweets, replies, and in-line mentions (“Loved the panel with @SoAndSo”).
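Extracting those connections from raw tweets is straightforward. Here’s a rough sketch of the idea, treating retweets, replies, and in-line mentions uniformly as @-mentions; the tweet dicts and usernames are illustrative, and this is not the script I actually used:

```python
import re
from collections import Counter

MENTION = re.compile(r"@(\w+)")

def interaction_edges(tweets):
    """Count directed author -> mentioned-user edges. Retweets, replies, and
    in-line mentions all surface in the tweet text as @-mentions."""
    edges = Counter()
    for tweet in tweets:
        author = tweet["user"]
        for mentioned in MENTION.findall(tweet["text"]):
            if mentioned.lower() != author.lower():  # ignore self-mentions
                edges[(author, mentioned)] += 1
    return edges

# Hypothetical tweets on the conference hashtag.
tweets = [
    {"user": "alice", "text": "Loved the panel with @bob! #ACLA2014"},
    {"user": "carol", "text": "RT @alice: Loved the panel with @bob! #ACLA2014"},
]
print(interaction_edges(tweets))
```

The resulting weighted edge list is exactly what a tool like Gephi wants as input for drawing the network.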

View the network graph here.

[Image: acla_network_screenshot_1]

I will update it over the course of the day, and make another for Saturday’s activity (unless people would prefer a multi-day map). I welcome feedback!

My own panel can be found on pages 274-5 of your program. It’s Friday & Saturday, 4:40-6:30, at 25 West 4th C-16. I present on Saturday, and will be talking about Thomas De Quincey and the Netflix & Amazon recommendation algorithms.

Studies in Romanticism Dynamic, Co-Citational Network Graph (Video)

Last week, because of a tweet by Alan Liu, I found Scott Weingart’s wonderful digital humanities blog. It turns out he had already gone through, last year, some of the same work processes that I codified last month by adding a .gexf export function to Neal Caren’s refcliq.

One of the things I learned from Scott’s post was that I had been drawing co-citational, rather than citational, graphs. Which made a lot of sense of the structures I’d been seeing. Basically, a line on the graph between A and B doesn’t represent work A citing work B, but instead that A and B are both cited by some third work, not necessarily represented on the graph. All the nodes you have seen in the graphs I have posted are works that have been cited two or more times, and each edge represents that its two works have been cited together by two or more separate articles.
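In code, building a co-citation edge list from bibliographies is a short exercise. Here’s an illustrative sketch (not refcliq’s actual implementation) applying the same two-or-more thresholds; the mini-bibliographies are hypothetical:

```python
from collections import Counter
from itertools import combinations

def cocitation_edges(bibliographies, min_cocitations=2):
    """For each citing article, every pair of works in its bibliography gets
    one co-citation; keep only pairs co-cited by enough separate articles."""
    pair_counts = Counter()
    for cited_works in bibliographies:
        for a, b in combinations(sorted(set(cited_works)), 2):
            pair_counts[(a, b)] += 1
    return {pair: n for pair, n in pair_counts.items() if n >= min_cocitations}

# Hypothetical bibliographies of three citing articles.
bibs = [
    ["Prelude", "Biographia", "Paradise Lost"],
    ["Prelude", "Biographia"],
    ["Prelude", "Paradise Lost"],
]
print(cocitation_edges(bibs))
```

Note that the citing articles themselves never appear in the output — which is exactly why a co-citation graph shows a field’s shared canon rather than its chains of influence.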

What was missing from these exports was the temporal dimension: co-citational network graphs allow us to think visually about how fields organize knowledge, and their own production of it. However, the interactive graphs I published before were static, and so did not allow us to think about how these internal structures developed over time.

I therefore reworked the code to export dynamic graphs (.gexf format only). These graphs register changes in influence and connectedness, over time, of the works cited by a journal.

I believe I wrote this code properly, but it is producing small variances in graph sizes compared to Caren’s original, so if anyone is interested in helping to unpack that, definitely email me. I also considered the usefulness of making modularity classes dynamic.

Back to the graph. My test case is again Studies in Romanticism. Over time, you will see individual nodes and edges changing size based on (respectively) the number of times a given work has been cited, and the number of times two works have been cited together. You will also see clusters develop, and separate from one another. I have not added any decay function, so once works are linked, or once a work has a specific node size, it either keeps that size or grows; no works or links diminish in absolute terms simply because they haven’t been cited in a while.

In relative terms, however, they may fail to keep up with the growing influence of Wordsworth’s Prelude, Coleridge’s Biographia Literaria, or even Milton’s Paradise Lost. I have identified clusters around the big six poets, plus three around Mary Shelley, William Godwin, and Walter Scott. I have also identified the development of two of these sub-areas of Romantic interest with the publication of major critical works (all shown on the graph).

The .gexf file can be downloaded here.

Here is my annotated video screen capture of the dynamic graph’s development over time.

Studies in Romanticism Dynamic Co-Citational Graph from John Mulligan on Vimeo.

I really think this kind of visualization could be an incredible research aid, if the raw data were cleaned up. But other commitments are likely going to keep me from working on this project for a while. In the meantime, please consider developing the code on GitHub and/or use the tool to create a map of your own field’s evolution. If you do, please email me a link to give me a few minutes’ break :)

PMLA Citation Network, 1975-2010

I’ll be making one more of these graphs (Victorian Studies) before I give it a rest for a while, but first I thought I would present a nice coda to the MLA interaction graphs; I have two network graphs (using slightly different scripting and visualization) of Twitter interactions on the hashtag #mla14 in previous blog posts.

To round off this thinking about academic networks in all senses — though I have to say I haven’t been doing much thinking at all on the blog about this as I just try to make the data legible — I thought I would publish a citational network graph for PMLA. For the details on how to navigate these graphs, go to my earlier post on Studies in Romanticism. On this graph, though, metadata doesn’t seem to be doing the job of identifying communities, and the database had a good number of orphan nodes that were causing problems with the graph and had to be removed.

The below displays the citational network for PMLA from vol. 90, no. 1 to vol. 125, no. 4 (1975-2010).

View Graph:
[Image: pmla]

#MLA14 network, 6pm Friday to 10am Sunday

This is my second graph of tweets on this blog. I’m using an old script of mine to capture #mla14 tweets (using Twitter’s REST API). The below graph was built from 9785 tweets, posted between 6pm Friday and 10am Sunday. It shows 1736 users and 3666 interactions.

There is at least one other #mla14 visualization out there, Ernesto Priego’s, which uses d3.js. His visualization, which is searchable, uses Martin Hawksey’s TAGS. Priego is also posting regular updates on #mla14 statistics on his twitter feed.

I built my version, which will not update with new data, with Gephi and my own script; it loads and runs a bit faster as a result. It would be interesting to hold up the two networks and see how differences in interpreting mentions create different groupings. This visualization is made using a sigma.js plugin (see below).

Click to view:
[Image: mla14]

Credits:

I’m using Alexis Jacomy’s sigma.js to render Graph #1. Graph #2 was produced using a plugin written by Scott Hale. The original graph was drawn in Gephi, using data gathered by an old script of mine that I’ve updated to use on this hashtag.

Three Americanist Journals

I’ve had a request to map citations in three Americanist journals: Early American Literature, American Literary History, and American Literature.

For simplicity’s sake, and to see how well the community-detection algorithms work across journals, I’ve mapped all three in a single graph. I’m actually quite surprised at how well this seems to have worked (and how coherent the detected communities seem to be). I have a little bit of training in (19th-c.) Americanism, so I’ve gone ahead and identified some of the communities:

[Image: americanists]

The full, interactive graph is available below:

View full screen:
[Image: americanists]

I welcome expert commentary below, or on Twitter.

You can download the original gephi file here.
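For readers curious how community detection works in principle, here’s a toy label-propagation sketch. Gephi’s modularity algorithm (what I used for the graphs above) is more sophisticated, and the adjacency list below is hypothetical, so treat this only as an illustration of the idea:

```python
import random

def label_propagation(adj, rounds=20, seed=0):
    """Each node starts in its own community; nodes repeatedly adopt the most
    common label among their neighbors until labels stabilize."""
    rng = random.Random(seed)
    labels = {node: node for node in adj}
    for _ in range(rounds):
        changed = False
        nodes = list(adj)
        rng.shuffle(nodes)  # visit nodes in random order each round
        for node in nodes:
            if not adj[node]:
                continue
            counts = {}
            for nb in adj[node]:
                counts[labels[nb]] = counts.get(labels[nb], 0) + 1
            best = max(counts, key=counts.get)
            if labels[node] != best:
                labels[node] = best
                changed = True
        if not changed:
            break
    return labels

# Two hypothetical citation clusters joined by a single bridge edge (c-d).
adj = {
    "a": ["b", "c"], "b": ["a", "c"], "c": ["a", "b", "d"],
    "d": ["c", "e", "f"], "e": ["d", "f"], "f": ["d", "e"],
}
print(label_propagation(adj))
```

Densely interlinked nodes tend to converge on a shared label, which is the intuition behind the colored communities in the graph above — though real algorithms optimize modularity rather than propagating labels.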

Credits:

As usual, some credits: the javascript visualization, which allows this complex graph to be presented in your browser, was written by Alexis Jacomy. The raw data comes from Thomson Reuters’ Web of Science. The parser/analyzer that turns the raw data into a network was written by Neal Caren. And I wrote a patch that allows Caren’s code to talk to Gephi. It occurs to me that these credits might lend themselves to a network graph…