Curating and Re-Using Amplified Conference Discussions

Participants and lurkers in conference hashtag discussions get instant benefits, including new ideas, links to useful resources, new contacts and a range of opinions and reactions to the conference content. But these conversations can be huge, complex beasts stretching across several days, intermingled with repetition and social banter. If held entirely on Twitter, the conversation may not be searchable for more than a few days after the event, and will then be delivered in reverse chronological order. This makes it difficult for anyone who did not follow the conversation in real time to get value from the discussions after the event.

So how can we curate the discussions to make them useful to the community after the event, when the conversations could still be useful for citation, resource discovery, contacts, and topical analyses?

There are some great tools out there which are starting to address these needs…

Twapper Keeper – this service enables you to archive hashtagged tweets, export the archive as a single file, or make use of the API to track and analyse the tweets in different ways.

Summarizr – created by Andy Powell at Eduserv. This uses the Twapper Keeper API to create a series of statistical graphs summarising your event tweets and top tweeters.

Revisit – this produces a very pretty visualisation of the tweets from an event, including the conversational relationships between tweets (re-tweets and @replies).

Wordle – one of many tools that produce a word cloud emphasising terms based on their rate of occurrence, thus helping to identify popular topics within the conversation archive as a whole.
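To make the idea concrete, a word cloud is driven by nothing more than term frequencies. The sketch below is a minimal illustration in Python, not any particular tool's implementation: the three-tweet archive, the hashtag, and the stopword list are all invented for the example. It shows the counting step that a tool like Wordle performs before drawing anything:

```python
import re
from collections import Counter

# Hypothetical input: a plain-text export of archived event tweets.
ARCHIVE = [
    "Great keynote on open data at #conf10",
    "RT @speaker: slides for the open data talk are up #conf10",
    "Anyone else at #conf10 interested in open data and archiving?",
]

# A deliberately tiny stopword list; real tools use much larger ones.
STOPWORDS = {"the", "at", "on", "for", "are", "and", "in", "a", "rt", "else", "up"}

def term_frequencies(tweets):
    """Count word occurrences across tweets, ignoring stopwords,
    @mentions, #hashtags and URLs, as a word-cloud tool would."""
    counts = Counter()
    for tweet in tweets:
        # Strip URLs, @mentions and #hashtags before tokenising.
        text = re.sub(r"(https?://\S+|[@#]\w+)", " ", tweet.lower())
        for word in re.findall(r"[a-z']+", text):
            if word not in STOPWORDS and len(word) > 2:
                counts[word] += 1
    return counts

# 'open' and 'data' dominate this toy archive, pointing to a popular topic.
top_terms = term_frequencies(ARCHIVE).most_common(3)
```

The filtering of mentions, hashtags and URLs is what keeps the cloud focused on the topics under discussion rather than the mechanics of the conversation.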

Packrati.us – extracts links from tweets by a specific Twitter account (usually the account providing the official live commentary) and automatically adds them to a designated Delicious account. This will not capture all of the links shared unless the official account re-tweets every tweet which includes a link, which may not be appropriate behaviour.
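The underlying filtering is simple to illustrate. This Python sketch is not how Packrati.us itself is implemented (the tweet texts and function name are invented for the example); it just shows the core step of pulling URLs out of a batch of archived tweets, in order and without duplicates, before they would be posted on to a bookmarking service:

```python
import re

# Hypothetical sample of archived tweets, e.g. exported from Twapper Keeper.
tweets = [
    "Slides from the keynote: http://example.org/slides.pdf #conf10",
    "Loving the coffee here #conf10",
    "RT @officialaccount: project report at http://example.org/report #conf10",
]

URL_PATTERN = re.compile(r"https?://\S+")

def extract_links(tweets):
    """Pull every URL out of a list of tweets, preserving first-seen
    order and dropping duplicates - the kind of automatic filtering a
    link-archiving service performs."""
    seen, links = set(), []
    for tweet in tweets:
        for url in URL_PATTERN.findall(tweet):
            if url not in seen:
                seen.add(url)
                links.append(url)
    return links
```

Run over the sample above, this yields just the two shared resources and silently skips the social banter, which is exactly the separation of "content" from "chat" discussed later in this post.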

iTitle – as I have described before, this service uses event tweets to add subtitling to videos of presentations after the event. This can be used in conjunction with Google Translate to offer subtitling in a variety of languages, making the video more accessible to a wider audience. The benefit of this technique is that the backchannel conversation is displayed in context with the formal presentation.

Part of the role of the event amplifier should involve making best use of these kinds of tools, not only to preserve the event conversations but also to make the records useful to a future audience. This means cleaning up the noise post-event, and choosing tools carefully to help leave a considered, well-curated record at the end of the event.

Brian Kelly recently wrote about event amplification as a mechanism to enable a presentation to escape the constraints of time and space. Whilst the records of a presentation, including video/audio recordings, slides, etc., can escape these restrictions, I think we are still very much at the beginning of releasing conversations from their temporal anchoring. Twapper Keeper has made archiving the conversations much easier and given this data greater flexibility. However, we still tend to use analyses of the conversations to demonstrate how participants are using the platform (usually Twitter), rather than looking at the longer tail of that conversation and how its actual content can be absorbed most effectively into the community’s body of knowledge.

I think contextualising the conversation using a tool like iTitle is one very useful mechanism for presenting the record, but what about discussions around sessions that are not being recorded or live streamed? All those conversations that happen outside of the recording window will not benefit from the context iTitle provides and may simply remain as statistics or data points within a broader visualisation of the conversation itself. The content of that conversation will not be easily accessible and re-usable by the majority of the community. We need to find ways of presenting these sections of the conversations so that they have a practical use across time. This may involve improving on existing tools, or devising new tools altogether.

To do this, we need to consider what uses the community might have for the record. Citation, resource discovery, contacts, and topical analyses are the most obvious, but there may be further motives in the future, such as nostalgia or historical/cultural analysis. We also need to be aware that the needs of different communities will vary, so there will not be a one-size-fits-all solution to archiving and curating event conversations.

The challenge for the event amplifier at the present is to select the best combination of tools to help as wide an audience as possible engage with the record of the event conversation. This audience can then inform the evolution of new conventions for presenting the data in more practical and accessible ways, depending on the needs they demonstrate. This is obviously a bit of a chicken and egg situation, but user input is vital if these conversations are to form a useful record for the community post-event. As far as possible, the conversation data needs to be available for the community to manipulate in its own way, if this is to be effective.

In terms of best practice, I think it is vital for the event amplifier to work with the event organisers before the event to understand how they wish to archive all of the online resources, including the backchannel conversations, so that these records can be made available and promoted to the audience in a timely way, before interest in the hashtag diminishes. It is also worth publishing a lightweight policy in advance so that participants know how the conversation will be recorded and used. This should be simple, and preferably tweetable, e.g.:

#youreventhashtag is being archived by Twapper Keeper. Download the conversation so far at [link]

We are capturing this conversation to provide a record for use & citation by others. To remove your contributions from the record, please DM

Links tweeted by @officialaccount will be collected at [Delicious link]. Please direct any valuable links to @officialaccount for inclusion

Selected tweets from #youreventhashtag will be used to provide Twitter subtitling for this presentation using iTitle

… and so on.

Letting the audience know, and giving them the chance to opt out if they wish, not only makes the process more transparent, but also draws the audience’s attention to the post-event resources. They can then make use of these records or recommend them to others. The more practical use these records get, the more we can refine the ways the conversations are recorded and presented so that they too can escape their temporal walled garden.
 
 


2 Comments

  1. Great rundown of possible post event uses for the conversation, which I shall borrow…

    Am still struggling with the relationship between Twitter data and a community’s knowledge (skipping information entirely!) – only a variable proportion of the dataset covers substantive topics IMO. I’m hoping the dataset I am currently looking at will help me with this!

    On a broader note, what activities do you see as ‘curation’ in this context? There are varying usages about, with some people getting sniffy about automated curation, and preferring to define it as aggregation.

    • Hi Ann

      I think it is true of any conversation, both online and offline, that only a proportion will be of value in terms of long term community knowledge – there will always be a lot of phatic talk, dead ends, repetition etc. Part of the curation process may involve separating out the “high quality” comments to make the record more useful? There is not really an automatic way of doing this, which means there will be a human editorial judgement needed.

      Separating out links using a tool like Packrati.us is probably the closest to an automated way of filtering the actual content of a tweet and representing it in a more useful format, without a human editor. I also see things like the official live commentary and the use of iTitle as curation activities, as they are providing context to the conversation, which improves its practical use and therefore increases the value of the information.

      Personally, I don’t mind if curation is automatic to a degree, particularly when you’re looking at a large data set. However, curation needs to involve placing a relative value on content and then presenting the most valuable content in a way that is accessible and useful for promoting further thought or debate. Aggregation is part of that process, so a mixture of automatic tools may help filter or demonstrate relationships between comments, but actually boiling conversation data down into something practical will inevitably require a human curator.

      That said, I don’t as yet have a clear picture of what a beautifully curated Twitter conversation might look like, but part of the point of this post was to help get closer to that picture – so any suggestions would be great :-)


