Sometimes you don't notice that the clouds were there until the sun breaks through.
Harry Gottlieb's talk was phenomenal. In the past year here I've filled a toolbox with inspection methods and design techniques that put user comprehension center stage, but they were all blown away last Tuesday night.
I think we got to where we are in computer-based interaction design through a very long evolutionary process. With the exception of the advent of the desktop metaphor, I can't point to a single step along that path as great as Jellyvision's work.
Maybe it's because I didn't know a tenth as much about interaction design when I first played You Don't Know Jack as I do now, or maybe it's a testament to the seamlessness of the interface, but I never recognized the ability of a finely tuned conversational interface to help me suspend my disbelief without my even realizing I was doing it.
Before coming to CMU, I had put some time into the idea of humans treating their computer-based conversational counterparts as cognitive equals. In August of 2000 I built AOLiza (www.aoliza.com), a chat bot that was essentially a bridge between Joseph Weizenbaum's 1966 program ELIZA and AOL's Instant Messenger. I set unsuspecting users up to talk with the bot and recorded over a hundred conversations; in only one did the person suspect they were talking to something other than an actual human.
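For readers who haven't seen how ELIZA works under the hood, the core idea is tiny: match the user's utterance against a ranked list of patterns, reflect first-person pronouns in the captured text, and slot the result into a canned response template. The sketch below is a minimal illustration of that technique, not AOLiza's actual code (which also had to speak the AIM protocol); the rules and names are my own for illustration.

```python
import re

# Reflect pronouns so captured text reads naturally when echoed back
# ("I need my space" -> "you need your space").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

# Ranked pattern/template pairs; the catch-all pattern comes last.
RULES = [
    (re.compile(r"i need (.*)", re.I), "Why do you need {0}?"),
    (re.compile(r"i am (.*)", re.I), "How long have you been {0}?"),
    (re.compile(r"because (.*)", re.I), "Is that the real reason?"),
    (re.compile(r"(.*)", re.I), "Please tell me more."),
]

def reflect(text: str) -> str:
    """Swap first- and second-person words in a captured fragment."""
    return " ".join(REFLECTIONS.get(w.lower(), w) for w in text.split())

def respond(utterance: str) -> str:
    """Return the first matching rule's template, filled with reflected captures."""
    for pattern, template in RULES:
        m = pattern.match(utterance)
        if m:
            return template.format(*(reflect(g) for g in m.groups()))

print(respond("I need a vacation"))  # Why do you need a vacation?
```

Weizenbaum's original went a bit further (keyword ranking, memory of earlier topics), but the pattern-reflect-template loop above is the whole trick, which is part of why it's so striking that people mistook it for a person.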
So set was I on Turing's idea of eliminating the ability to discern between man and machine that I dismissed the idea of expanding AOLiza beyond text chat. Surely, I thought, a person would be able to tell the difference when confronted with computer-generated speech, or even actual sound bites snipped together.
In that moment I forgot a fundamental tenet of HCI: human-computer interaction isn't a parity relationship. At any moment either the human is in control, using the computer as a tool, or the computer is in control (usually in games or expert systems), using indirect control to guide the person toward certain actions.
That was the missing piece: the realization that given the right framework, the right tasks and goals, the idea of prerecording actors to create an amalgamated digital agent suddenly becomes a tractable (though hardly trivial) problem.
Now, whatever it is I think I see becomes an iCi to me.
Just a week before the talk, five other HCI master's students and I stood before 400 professionals at CHI 2003, competing in the Interactionary: given a design problem on the spot, we had ten minutes to design a solution. Our task? An admissions kiosk for a Florida amusement park. I know now that I would have designed something completely different, and infinitely more engaging, enjoyable, and memorable.
But it's not too late. Right now, in Programming Usable Interfaces, my final project is a video-mail kiosk for use within defined spaces such as an amusement park or mall. Tomorrow I'm going on a photo shoot, script of snippets in hand, ready to make my first iCi.
I'm pretty excited.