Sunday, April 12, 2009

My Crack at Evals

I don't mean to steal the thunder from Nick and Nate's series on evals, but I actually came up with an idea earlier that I thought I'd share. To get any new reader up to speed: we do team evaluations for our Design Studio projects in the Raikes School. Many of us have been analyzing the way we do things in Design Studio to see how we can make them better, and one specific area is our evaluation system. If none of that made sense to you, you may not find anything useful in the rest of this post.


I was taking a walk earlier today, thinking about how technology has changed the way people communicate. We had the pleasure of listening to Evan Williams from Twitter speak to our program last week (recorded for your viewing pleasure). Twitter is an increasingly popular way to communicate. In my experience, it is marked by brief, succinct messages, yet users still manage to say everything they need to say. I've also noticed that most Twitter users update pretty frequently.
Readers of my blog may note that brevity isn't one of my strengths ;)


Another thing I see today is that visual media has become a popular way to communicate. Web designers often turn to visual representations of data to mimic conventions we are used to in real life.


So my idea focused on combining short communication, frequent updates, and visual cues to create a new evaluation system.

Using the GIMP, I tried to sketch out what I thought the interface might look like.

My idea for an evaluation system


In this design, the individual being evaluated is shown in the middle of the graph. I made a quick assumption that the evaluation system has identified six key points of success that every member of Design Studio should have. Each category is a corner on the graph.
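To make the chart idea a little more concrete, here is a rough Python sketch of how scores might map onto the corners of a graph like this. The six category names are placeholders I made up, not the program's real list, and the scale is arbitrary.

```python
import math

# Placeholder category names -- the real "key points of success" would
# come from the Design Studio program itself.
CATEGORIES = ["Communication", "Technical Skill", "Teamwork",
              "Initiative", "Reliability", "Creativity"]

def radar_vertices(scores, max_score=10, radius=100):
    """Map one score per category to (x, y) vertices of a radar-style chart.

    Each category gets its own axis (a corner of the polygon); a higher
    score pushes that vertex farther from the center.
    """
    n = len(scores)
    vertices = []
    for i, score in enumerate(scores):
        angle = 2 * math.pi * i / n  # axes spaced evenly around the circle
        r = radius * max(score, 0) / max_score
        vertices.append((r * math.cos(angle), r * math.sin(angle)))
    return vertices

# Example: someone strong in Communication, weaker in Creativity.
print(radar_vertices([8, 6, 7, 5, 6, 3]))
```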


Each member of the team would be able to evaluate the individual using this tool. For each category, a user would be able to add points, thereby pushing the graph out on that particular axis. This graph would quickly show where the individual's strengths are. However, an evaluation system that only focused on where an individual was doing well wouldn't be too effective. To show where an individual should improve, a user would also be able to subtract points from a given category.


The system would aggregate the submissions from all users evaluating a given individual, and this data would contribute to the overall graph for that individual. If designed well, the system could present different views of the data, but that's another story.
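Here's a minimal sketch of what a submission and the aggregation might look like, assuming each submission is just a set of plus/minus adjustments per category and the aggregate is a simple sum. Both of those choices are assumptions on my part.

```python
from collections import defaultdict

# One evaluation: the evaluator's per-category adjustments (+/- points).
# Category names are placeholders.
submissions = [
    {"Communication": +1, "Teamwork": +1, "Reliability": -1},
    {"Communication": +1, "Creativity": -1},
    {"Teamwork": +1, "Reliability": -1},
]

def aggregate(submissions):
    """Sum every evaluator's adjustments into one total per category."""
    totals = defaultdict(int)
    for submission in submissions:
        for category, delta in submission.items():
            totals[category] += delta
    return dict(totals)

print(aggregate(submissions))
# {'Communication': 2, 'Teamwork': 2, 'Reliability': -2, 'Creativity': -1}
```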


I gave some thought to whether a user should have a set number of points to distribute each evaluation period. Players of RPGs should find this familiar if they've ever distributed points across the dimensions of a character's skills. The alternative would simply allow the user to add a point, subtract a point, or do nothing for a given category. Going even further, the system could allow a user to add or subtract one or two points to allow for greater variation and more expressive decisions.
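Both schemes could be handled by one validation step. This is only a sketch of the two options discussed above; the cap and budget numbers are examples, not decisions.

```python
def validate(submission, per_category_cap=2, point_budget=None):
    """Check a submission against the two schemes discussed above.

    per_category_cap: each category may move at most this many points in
        either direction (the +/-1 or +/-2 idea).
    point_budget: if set, the total points spent (in absolute value) may
        not exceed this number (the RPG-style budget idea).
    """
    if any(abs(delta) > per_category_cap for delta in submission.values()):
        return False
    if point_budget is not None:
        spent = sum(abs(delta) for delta in submission.values())
        if spent > point_budget:
            return False
    return True

print(validate({"Communication": +2, "Teamwork": -1}, point_budget=5))  # True
print(validate({"Communication": +3}))                                  # False, over the cap
```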


One thing I don't like about the current evaluation system is that it doesn't tie the results of an evaluation to specific incidents. For example, if I did well in my 'Communications' category, I can't point to any specific behaviors or incidents that reinforce the feedback. Conversely, if I did poorly in an area, I don't know which behaviors to change. Therefore, each time a user added or subtracted a point, the system would prompt for a brief comment explaining why. The system would save these comments, keyed by category and user, in a database or other storage system, and program managers could pull the data later for reporting purposes.
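A rough sketch of what recording those comments could look like, using an in-memory list as a stand-in for the database. The field names and the rule that a comment is required are assumptions for illustration.

```python
import datetime

# In-memory stand-in for the database; a real system would persist this.
feedback_log = []

def record_adjustment(evaluator, evaluatee, category, delta, comment):
    """Store a point change along with the 'why' behind it."""
    if not comment.strip():
        raise ValueError("A brief comment is required for every adjustment.")
    feedback_log.append({
        "when": datetime.datetime.now(),
        "evaluator": evaluator,
        "evaluatee": evaluatee,
        "category": category,
        "delta": delta,
        "comment": comment,
    })

record_adjustment("alice", "bob", "Communication", +1,
                  "Kept the client in the loop all sprint.")
```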


The other thing I would want to see in this system is shorter evaluation periods. Perhaps every week or two, team members would go through this process. Hopefully, the interface would be easy enough that this could be done with minimal time and effort. Another advantage of more frequent updates is that feedback is more timely, and thus more useful and relevant. Individuals would be able to see their charts as they change and react to the evaluations more effectively.


A cool, though perhaps less useful, feature would be the ability to take periodic snapshots of the graphs and play them back. A user could watch his or her graph change, which might be a fun way to see an individual's growth over time. An engaging interface might also make users less likely to get frustrated and tired of using the system, making evaluations more honest, useful, and effective.
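The snapshot idea is simple enough to sketch too; here the "chart" is just the aggregated totals, and the weekly labels are made up for the example.

```python
import copy

snapshots = []  # one aggregated chart per evaluation period

def take_snapshot(period_label, totals):
    """Save a copy of the aggregated chart for later playback."""
    snapshots.append((period_label, copy.deepcopy(totals)))

def playback():
    """Walk through the saved charts in order, oldest first."""
    for period_label, totals in snapshots:
        print(period_label, totals)

take_snapshot("Week 1", {"Communication": 2, "Teamwork": 1})
take_snapshot("Week 2", {"Communication": 3, "Teamwork": 2})
playback()
```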

Let me know what you think! Is this a good idea? Would you use a system like this? What glaring mistakes or disasters did I overlook with this idea?
Bear in mind this came to me as I was running through the rain from the garage to Kauffman.

1 comment:

Unknown said...

I like the idea of simple good/bad points and really like the "why?" tracking.

A couple of things to keep in mind are that students are students, so they won't do anything that isn't required. Have you thought of a good way to ensure participation with an objective measure? Also, are grades in areas relative to peers or to a set guideline?

Anyways, we will definitely add this to a wrap-up eval discussion post if we have one.