Archive for September, 2009
Jeffrey Heer is a familiar name in the information-visualization community. The Prefuse visualization toolkit he created is used and referenced by almost all infographics enthusiasts. But somehow I missed his wonderful (almost hour-long) lecture at the University of Washington, Voyagers and Voyeurs: Supporting Collaborative Information Visualization.
In this lecture, apart from the awesome demos of his work on interactive visualizations, Jeffrey also talks about how collaborative annotation and discussion can enhance the effectiveness of an info-visual. According to him, if a sufficiently large, interesting, and interactive info-visual is made available to a general audience, equipped with integrated annotation and commenting tools, individuals (not only experts) can uncover amazing patterns and perspectives.
From the lecture's abstract:
Interactive visualizations leverage human visual processing and cognition to increase the scale of information with which we can effectively work. However, most visualization research to date focuses on a single-user model, overlooking the social nature of visual media. Visualizations are used not only to explore and analyze, but to communicate findings. People may disagree on how to interpret data and contribute contextual knowledge that deepens understanding. Furthermore, some data sets are so large that thorough exploration by a single person is unlikely. Jeffrey Heer from the University of California, Berkeley, presents a number of novel visualization techniques in this University of Washington Computer Science and Engineering program.
Some ‘good’ discussion is going on there, in the blog post and the comments. For example, Sally Madsen wrote ‘How Might We Celebrate Learning through Evaluation?’ To quote:
Why do we evaluate? Sometimes it’s for reflective validation: qualifying the success of a program after it is complete. Other times it’s for active learning: seeing what is working well and what could be improved, and using this insight to change things for the better.
Evaluation for validation has an important role in comparing different approaches: Which approach has the most impact? Which gives the best value for money? How can this affect strategy moving forward? The downside of this type of evaluation is that it often doesn’t produce conclusions until months or years after the actual project has ended—when the opportunity to change course or affect the project outcome is gone. Evaluation for active learning, on the other hand, allows you to take action as soon as a problem is identified. In design and innovation, evaluation for learning is a natural and essential part of the process.