Qwiki – Assessing Automated Storytelling

For the competitive analysis phase of our final project, we have been collecting ideas from platforms that operate in the same realm as our project. One of them, Qwiki, at first seemed to have a very similar goal: “Qwiki is working to deliver information in a format that’s quintessentially human – via storytelling instead of search”.

I’ve been looking at one specific narrative, because neuroscience is the topic we chose for our paper prototype. Qwiki takes different types of media (text, images, links) from the web and stitches them together into a multimedia narrative. The claimed strength of the system – delivering “quintessentially human” information – turns out to be quite the opposite.

Freedberg writes that creators of images have to consider the “effectiveness, efficacy and vitality of images”. When Qwiki takes pictures from the web, not only is the original purpose for taking each picture stripped away, there is also no creator who ensures that the image is effective and vital. Qwiki is essentially an algorithm, and because image effectiveness rests on so many cognitive and cultural assumptions, it is arguably one of the most difficult qualities to automate.

This is probably one of the reasons the Qwiki narrative feels random, and hardly more compelling than the audio would be on its own. The images don’t add to understanding, yet they occupy most of the screen real estate.

We can learn two things for our project: one, we should let the author of a narrative freely choose the source and type of media she uses; and two, the design of the authoring tool could nudge authors towards using images and other media effectively.