Just because you can doesn’t mean you should

Today’s Boston Globe ran a front page story on a blogging doctor unmasked during a malpractice trial. It appears that the doctor in question – Robert P. Lindeman – had been writing, under an assumed name, a running commentary on a case very much like his own. Once the plaintiff’s lawyer determined that Lindeman was Flea (the name he used for his blog), the case was quickly settled.

In his blog, Flea had ridiculed the plaintiff’s case and the plaintiff’s lawyer. He had revealed the defense strategy. He had accused members of the jury of dozing.

Blogger unmasked, court case upended – The Boston Globe

I’ve written before that social media and transparency don’t have to mean stupid or sloppy, and this case demonstrates what can happen when they do. There is a time and a place for everything, and when you are being sued for your part in the death of a child (as Lindeman was) it is not the time to pen a humorous and irreverent blog about the situation . . .

[tags]Robert P. Lindeman, Flea, Boston Globe, blogging, malpractice, hubris[/tags]

Back in Maction

Thanks to people’s understanding here at work, I will continue to be able to use my personal system at the office. It’s a big deal for me because I have things set up the way I like them, have all of the applications I need, as well as the content.

When word came down that my system was machina-non-grata earlier today I did try doing everything from my office PC and I can sum up the experience in one word: stinky. Sure it worked, but it involved moving files from one system to the other, dealing with kludgey interface issues, etc.

I really appreciate that I have the freedom and flexibility to work in ways that work best for me and need to make sure that I’m not overstepping any boundaries or otherwise misbehaving . . . Reasonable enough.

[tags]GregPC, two computers, work, home, personal, professional[/tags]

Is this the end of two computers for GPC?

Today an email was sent out from our IT group saying that any personal computers logged in through the company network need to be removed “ASAP.” I’m pretty much the only person logging in with a personal computer so I guess that means me . . .

Back in March I wrote about my reasoning for using two computers at work and I think the reasoning still stands up. I don’t think it is reasonable for me to necessarily do my blogging, Flickr, Second Life, etc. from my work computer. While some of those things are work-related, they are also personal and portable.

I wonder if this prohibition applies to any outside system? Does this mean that clients, vendors and partners are prohibited from accessing the Internet through our network? Should I assume that the inverse is also true – that work computers are prohibited from accessing the Internet through non-work networks?

This is troubling and disappointing news . . .

[tags]work computers personal professional bummer[/tags]

Swaptree, let me sing your praises

Almost a year ago I went to my first WebInno event. One of the main dish companies was Swaptree. The CEO, Greg Boesel, described a service that would allow people to trade books, CDs, DVDs and video games. As a pretty heavy reader, I liked the idea; but at the same time, I wondered how it would operate and even more importantly – how it would make money.

I bumped into Greg at a subsequent event and I had to revise my thinking based on that conversation. Greg also invited me to be a part of the Swaptree beta and it has totally changed the way I think about books and reading.

My first trade was on January 31st. I swapped “The Seventy Great Inventions Of The Ancient World” for “The World is Flat.” As it happened, Greg was the person I was trading with and so rather than mailing the books we met and made the exchange in person. It turns out that we have similar taste in reading so this was the first of many trades between us.

After a few trades, I asked if I could write something about Swaptree, but Greg asked that I hold off as they were still in the midst of a private beta. That beta is over and when I asked him again last night he said by all means. (In fact, he apparently had emailed me a while ago to say it was OK but I missed the note . . .)

So now, without further ado, let me sing the praises of Swaptree.

As mentioned above, Swaptree allows people to trade books, CDs, DVDs and games. It does this by asking for a list of items you’d be willing to trade, as well as for a list of items you’d like in return. I have a fairly large collection of books (700-1000 or so) and came up with about 80 that I was willing to trade.

Adding books is easy. You simply enter the ISBN and Swaptree gathers all of the relevant information (they use the Amazon database). You can also enter the number using a bar code scanner if you have access to one. Once your book is in the system, you need to rate its condition and confirm that you are willing to put it up for trade. You can also add comments if you like. And that’s it. Your book now awaits an opportunity to be transformed into something new.
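(An aside for the geeks: Swaptree’s internals aren’t public, but one part of ISBN handling is standard. ISBN-13s carry a check digit – the 13 digits, weighted alternately by 1 and 3, must sum to a multiple of 10 – so a service can reject a mistyped number before it ever hits the catalog. A minimal sketch:

```python
def isbn13_valid(isbn: str) -> bool:
    """Validate an ISBN-13 check digit. Digits are weighted
    alternately 1 and 3; a valid ISBN sums to a multiple of 10."""
    digits = [int(c) for c in isbn if c.isdigit()]
    if len(digits) != 13:
        return False
    return sum(d * (1 if i % 2 == 0 else 3)
               for i, d in enumerate(digits)) % 10 == 0

print(isbn13_valid("978-0-306-40615-7"))  # → True
print(isbn13_valid("978-0-306-40615-6"))  # → False (bad check digit)
```

Hyphens and spaces are ignored, so the number can be typed however it appears on the back cover.)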

Building the list of items you want feels like shopping. The database of materials is extensive and relatively well organized (though there were more than a few cases of mis-classification). Right now I have about 20 items on my want list.

With these two sets of information, Swaptree starts trying to find ways to get you what you want. If the system were limited to simple one-to-one trades things probably wouldn’t work. Fortunately, Swaptree is able to do trades that involve multiple people. This greatly increases the likelihood of success.
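(How might multi-person matching work? Swaptree hasn’t said, but one plausible sketch is to treat trades as cycles: person A wants something B has, B wants something C has, and C wants something A has, so everyone ships one item and receives one. The names and structure below are my own illustration, not Swaptree’s actual engine:

```python
from itertools import permutations

def find_trade_cycles(haves, wants, max_size=3):
    """Find trade cycles: an ordered group of users in which each
    user wants at least one item the previous user has, so every
    member can send one item and receive one in return."""
    users = sorted(haves)
    cycles = []
    for size in range(2, max_size + 1):
        for group in permutations(users, size):
            if group[0] != min(group):  # skip rotations of the same cycle
                continue
            if all(wants[group[i]] & haves[group[i - 1]] for i in range(size)):
                cycles.append(group)
    return cycles

# A three-way trade that no simple one-to-one swap could satisfy:
haves = {"alice": {"The World is Flat"},
         "bob":   {"Phantom of the Opera"},
         "carol": {"Seventy Great Inventions"}}
wants = {"alice": {"Phantom of the Opera"},
         "bob":   {"Seventy Great Inventions"},
         "carol": {"The World is Flat"}}
print(find_trade_cycles(haves, wants))
# → [('alice', 'carol', 'bob')]: bob ships to alice, alice to carol, carol to bob
```

The brute-force search above only works for toy examples, of course; the point is just that allowing cycles of three or more people creates matches that simple pairwise swapping never could.)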

Once a potential trade is available, the system notifies everyone and gives them the opportunity to accept or reject the trade. If it’s rejected the process starts all over again. If it’s accepted, you’re given the shipping address of the person receiving your item (remember, this person may or may not be the one sending your item to you).

Swaptree provides the ability to print postage right on the site – which is great because if I had to go to the Post Office to do this I wouldn’t. Media rates are low and mailing a book is generally only a few dollars. The site has some tracking tools but they are spotty (the data is provided by the USPS so Swaptree doesn’t have much control over this).

I’ve done 20 trades so far and with only one exception they have gone off just fine. The one exception was so bizarre that it doesn’t reflect on the service. (Here’s what happened. A member who I’d traded with before sent me a book. First days, and then weeks, went by. I used the system to contact the sender. The tracker showed that it had been sent but he offered to follow up with the local Post Office. All signs pointed to a simple delay. A few weeks later the package arrived. It was totally ripped apart and had been put in plastic by the Post Office. When I opened it, it wasn’t the book I expected. I contacted the sender again. He had never owned the book I received – a 2007 Reader’s Digest hardback edition of “The Phantom of the Opera” – and offered to send my original book back to me (an offer I declined).)

The biggest problem I’ve had is how to queue all of the books I’ve received. At this point I have more than 20 and am not going to be able to read even half of them this year. Now I’ve started using Swaptree for movies and games and have been equally happy with the results.

In the five months I’ve been using Swaptree, the number of books I’ve bought is way down (way, way down); but the number of books I’ve acquired is way up. It’s also given me a way to pass along books that I’ve read and enjoyed to good homes that want them.

There’s more that can be said about Swaptree but I think this gives the basic idea. The company offers a mediated way to exchange tangible content and it’s great. The official launch of Swaptree is happening soon and once that happens the number of items for trade will go through the roof. Check it out.

[tags]Swaptree, books, CDs, DVDs, games[/tags]

WebInno 12 – Scorecard

So WebInno was last night and it was – as always – a lot of fun. I got there early and chatted with a few people, checked out some of the side dishes, had a drink, etc.

I had a good conversation with Greg Boesel from Swaptree. It sounds like things are going very well over there and that they are fast approaching the formal launch of the company. There are already plenty of trades to be made (last time I looked I think there were 1200 items I could get) so it’s going to be crazy when the floodgates really open.

Ernst Oddsund from Digibug and I chatted for some time – mostly about communities, social media and whatnot. We shared the opinion that the communities created by technology are more important than the technology itself – a point missed by people constantly seeking out the newest and coolest bells and whistles.

After the event, I joined Dave Evans (Digicraft) and John Lester (Second Life) down in the bar for a few extra drinks. That was a good time; but for some reason I felt compelled to disagree with everything that was being said – even though in many cases I actually agreed. I’m not sure why this is but it isn’t new . . .

Afterward, as we walked back toward Kendall Square, a bit of exploring was done. Here Dave and John point the way:

over there

But enough about who I met. Let’s get to the event itself. Yesterday I posted a preview and so I need to check in to see whether I was right or wrong.


The clock is counting down to tonight’s Web Innovators Group forum. Last month I went in with a set of expectations and assumptions and left with totally different opinions of the various companies that were there.

This time, I’ve decided to check out each of the companies and lay out what I think of each so I can compare notes after the fact. Here goes.

Geezeo – billed as “personal finance for the rest of us” Geezeo currently offers mobile and student versions of a planned online personal finance system. I’ll be honest, I don’t manage my money AT ALL. We’ve used Quicken for years and I’ve tried to be good with it but it’s never taken. My wife, on the other hand, is an absolute Quicken maven. I watched the screencast on the service, which was cool (if a little choppy). Basically, it allows you to receive your account balance information as an SMS message. Of course if you happen to have your phone, you could also call your bank and hear your balance . . . The direct to bank approach might involve more key presses but it also means that your personal financial data passes through fewer hands.

Expectation that I’ll be wowed – pretty low. I’m not a big personal finance kind of guy so it’s unlikely that this is something I’d use myself. I also worry about security I guess; but who am I fooling really? I’ll sign up for pretty much anything . . . The fact that Geezeo uses Gmail for authentication makes me wonder who’s talking to who on this one. I like that they are using Amazon’s Simple Storage Service if for no other reason than that it allows good ideas to exist more easily.

Chances that I’m way off base – I’m feeling pretty confident on this one.

Survey says – I was not wowed. Perhaps when the full service comes out I will be but access to account balances didn’t do it for me. I’m also pretty skeptical that they will have much success taking on Quicken. Beating Quicken? No. Being acquired . . .?

DNS Stuff – “Welcome to the center of your DNS universe,” the site proclaims. I didn’t realize that I had a DNS universe – much less that it had a center. But enough bad attitude . . . This is a cool site. I like all of the various tools it lays out on the home page and the amount of good educational content on the site. The fact is though that this isn’t something for everyone (not that there’s a problem with that).

Expectations that I’ll be wowed – pretty high. Sure, I may not need or use DNSstuff but I am a geek and like slick networking tools as much as the next guy.

Chances that I’m way off base – I think I’ll be impressed.

Survey says – I was wrong about this one. I was not wowed. Part of it might have been that their presentation was not very good. I felt like it was too technical a story to try to tell in the few minutes they had. Better luck next time.

Enjoymymedia – when I went and checked this out I was pretty excited and impressed. The basic idea is that EnjoyMyMedia allows you to stream content to select users right from your computer. I have a ton of content that is of interest to only a small number of people and I don’t necessarily want to post it all publicly. I also don’t want to have to ask friends and family to sign up for accounts on a bunch of different services so this seemed like a good solution. Until I registered and tried to set it up. Windows only. I wish that had been made clear up front somewhere . . .

Expectations that I’ll be wowed – very high. I like the idea and promise of enjoymymedia and hope that there will be a Mac version soon.

Chances that I’m way off base – pretty low.

Survey says – I was really impressed by these guys. The idea is good and the execution is clean and simple. The lack of Mac support is a bummer but it is part of their longer-term plans. I’ll be installing this on my Windows partition today.

imoondo – video classifieds? There are some times when simple = better and this might be one of them. I enjoyed watching Steve make a vegetable omelet but there is a limited amount of time I can spend watching video whose content I could absorb as text (or text and images) in a fraction of the time.

Expectations that I’ll be wowed – pretty low.

Chances that I’m way off base – pretty low.

Survey says – I’ll be honest, as much as I wanted to go over and check out what they were doing I didn’t make it.

TownConnect – it seems that there are a lot of efforts under way to create online communities around towns and cities. I think it’s a great idea but no one seems to have cracked the code. There are outside.in, local wikis and online communities started by town governments, boosters and newspapers. Another hat in the ring isn’t a bad thing but I don’t get what makes this one different or better.

Expectations that I’ll be wowed – pretty low.

Chances that I’m way off base – pretty high. I could be totally wrong about this one. It’s possible that once I hear more about it I’ll be totally blown away.

Survey says – Again, I didn’t make it over to their table; but I did have a chance to talk to several people about what TownConnect is doing. The consensus seemed to be that it isn’t terribly interesting.

Video Ad Factory – maybe these guys should hook up with the guys over at imoondo? If one video ad system is good, does that make two better? Hard to say. I will say that Video Ad Factory has a more polished look than imoondo; and I also liked the fact that I could take the videos and put them on my site if I wanted. (If that’s possible with imoondo, my apologies.) I don’t know – there’s something to be said for that homemade look that doesn’t come across with many of the videos on this site.

Expectations that I’ll be wowed – lowish

Chances that I’m way off base – pretty good. I can imagine that these guys will have some pretty good stuff to say for themselves.

Survey says – I was not wowed. I spoke with a couple of guys from the company; checked out their stuff in more detail but didn’t get a good sense of the business model or unique value they offer. It was nice looking and all but that probably isn’t enough. I was talking with Rod Begbie about them (and imoondo as well) and the problem with both is that people want to scan classified ads quickly, not sit through multiple videos.

You Have Not Changed One Bit – besides having the most awkward domain name I think I’ve ever seen, this is one strange idea. It allows you to try to match then and now photos. The idea is that you’d use it when organizing a reunion. It was fun trying to match the photographs but it’s hard to imagine that this is enough to sustain a business.

Expectations that I’ll be wowed – low

Chances that I’m way off base – pretty low

Survey says – I was not wowed. I spoke to Erik Sebesta for a while and it’s a cute application and a nice idea but it’s hard to see it being a real business. Somehow they have been granted a patent on this idea – something about score-based contests that involve matching then and now photographs. As I said to Erik, someone in the Patent office must have been drinking that day . . .

So there are my predictions for tonight’s event. Check back tomorrow to see if I was on target or way off base.

I was more or less on the money with my expectations. Kudos to me. Despite not being blown away by any of the companies (with the possible exception of EnjoyMyMedia) it was a good event. Having a venue like this for early stage companies – and the people that play around them – is important and WebInno is fostering a real sense of community.

[tags] WebInno, Geezeo, DNSStuff, EnjoyMyMedia, iMoondo, TownConnect, VideoAdFactory, You have not changed one bit, Greg Boesel, SwapTree, Dave Evans, John Lester, Ernst Oddsund[/tags]

Social Media Club/Boston – 5/17: (Getting a) Second Life

Last night the Social Media Club/Boston had a real humdinger of a meeting. Sponsored by Text 100, the topic was Second Life. The meeting was at the Harvard Club on Comm Ave in Boston and there were well over 100 people – making it one of the biggest events to date. It started out with snacks and pop – how nice. People from Text were helping others create avatars and explore Second Life. At the appointed hour, we all made our way to the Massachusetts Room, where – under the watchful gaze of great American heroes and statesmen – we were treated to an excellent panel.

The panel was John Rodzilla, Emerson College; Drew Stein, Infinite Vision; John Lester, Linden Labs; Mike Askew, Fidelity; and Aaron Uhrmacher, Text 100.

Ken Peters from Text acted as the moderator and kept things flowing with good questions for each of the panelists. I’ve not included his questions in the following summary.

John Lester started by saying that looking at Second Life today reminds him of the early years of the Web. Every communication medium, he pointed out, has been hard to adopt at first. He cited the early days of film, when movies were essentially recorded plays, and the telephone, which was initially treated like the telegraph. Over time people come to understand how to put technology to work and Lester is starting to see that process happening in the ways people are using Second Life.

Mike Askew explained that Fidelity started using Second Life about six months ago. His group functions as a think tank within the company and he wanted to explore the possibilities for collaboration that Second Life offered. He believes that business-to-business is the best place for Fidelity to start and they have established a briefing center similar to the physical one they have here in Boston. Lester pointed out that Linden Labs uses Second Life as their meeting and collaboration venue.

Drew Stein talked about businesses’ changing expectations around their participation in Second Life. Many of them seem to want their 15 minutes of fame – to grab some headlines just for being there – and that’s pretty boring. At this point, Stein explained, people have figured out the what and the when and the how of Second Life – now we need to address the why. He no longer looks at what he does as Web development; now, he says, they need to think more deeply and help clients understand how Second Life fits into an overall interaction strategy. When working on a project, Stein asks two questions – how can this be made fun? and what would Walt Disney do? He views Infinite Vision (and Linden) as an entertainment company.

He made some good points – especially on the importance of considering a company’s broad goals – but he did lean heavily toward the tools and functions side of the equation.

Aaron Uhrmacher suggested the need for balance. Second Life can’t just be about entertainment. It’s also an opportunity for people to develop new and different relationships with brands. Over the last 10 months he has seen three phases: being there, becoming involved in the community and then integrating Second Life into real world business activities.

I talk a lot about brand myself sometimes, but listening to someone else talk about it made me wonder: what does a relationship with a brand mean? And as much as I like Second Life (and I do), how helpful is it as a brand relationship tool at this point? The realism is still not there, the performance can be spotty and frankly these things could point to a rocky relationship. The fact of the matter is that these are details that will be worked out as the technology improves.

Lester described the power of Second Life as its ability to create a sense of community. Once a community exists it needs to be maintained through interactivity. This is an important point and one that many people and companies don’t get. It gets back to the points that Stein and Uhrmacher made – people want to start just by being there and getting their 15 minutes of fame without thinking through the meaning or implications. Lester sees this starting to turn around as more people understand the interactive nature of community in Second Life.

As Fidelity considered using Second Life, it became a big debate within the company. Askew said that it was the enthusiasm of senior management that overcame the early concerns. One of the important things for Fidelity is the social aspect of meetings in Second Life. Conversations take place and trust is built in meetings – whether in person or in Second Life – in a way that just isn’t possible with conference calls. In Second Life meetings people start to talk in small groups and socialize much more. Askew thinks that this provides a higher quality interaction.

Lester believes that this is because of the sense of place in Second Life. On the phone everyone is just a voice, and multiple voices quickly become confusing. Linden is working on spatialized audio, which will allow voice interaction adjusted for people’s location and proximity. This will, he feels, add to the realism without the problems of muddled conference call audio.

Uhrmacher was asked to provide some communication lessons he’s taken away from his work with Second Life. The first phase, he said, was for people to go and watch, collect cards, etc., but not interact much. Now he is starting to see more companies staffing Second Life and engaging with people in the space. There is also an organic evolution of groups and communities, with some of the interactions moving beyond Second Life.

Stein felt that customers may not be fully on board yet but that they will be. He feels that older people don’t get social media but that 15 year-olds do, and so businesses need to start thinking about how things like Second Life will fit into their communication mix for the future. Second Life, he believes, is the next generation of the Web – it is why brands like the Weather Channel are there now.

I continue to wonder if the claims of social media as a youth movement are valid. It seems like a real oversimplification to me. I’m sure that there are some social media elements that are more appealing to different demographics and age cohorts and I wish someone (maybe the Pew Center?) would do a social media census to clear this up for everyone.

Lester spoke of the potential merging of various virtual worlds. The fact that much of Second Life is open source will allow for this integration and interoperability and the more people that get in there and start hacking away with the tools the better.

Askew brought up some of the issues that stand in the way of Fidelity using Second Life as a B2C tool. On the top of the list were identity and security – issues, frankly, with any social media platform. The argument was made that people invest time and energy in their avatars and so maintaining a persistent identity in Second Life is possible. I didn’t get the impression that Askew or Fidelity would be satisfied with this. The reason it’s less of an issue for B2B is that Fidelity can invite specific people to specific locations and control who joins or participates in a meeting.

John Rodzilla was asked to discuss how Second Life might function from a literary perspective. He explained that it depends on the author or publisher. There are already a number of authors who are active in Second Life now and Random House recently held a book group for The Time Traveler’s Wife which went well. He also pointed to Info Island – where real people are staffing a service to help people find real world information.

I had a chance to talk with John after the session and wish that he’d had more opportunities to participate in the panel. Given the flow and themes of the discussion, though, this wasn’t the case.

Stein was asked about the barriers to entry. He said that they are lessening every day but that even with executive support and buy-in you still need to create something that makes sense.

At this point, members of the audience began asking questions. The first was around audience type, size and where they congregate. Stein talked about the four islands they built for the Weather Channel. One of them was designed to show surf. Very quickly, the surfer community within Second Life made their home on this island because it had the best waves. An interesting answer, but not what the questioner was looking for. Prompted, Stein began to describe the Linden traffic system. Lester jumped in to talk about how they are creating sensor-based measurement systems to see where people are spending time and are coupling this with survey data to get a better view of audience behavior.

Uhrmacher said that there are basically three main audience groups – those looking to be entertained, to be educated and to conduct business. Their levels of participation depend on the nature of the event or space they are visiting. He pointed out that each sim can accommodate about 50 people. Stein said this number was too low and that he’s conducted events with close to 100 people; and that some events, like the Suzanne Vega concert, have been viewed more than 10,000 times.

This discussion prompted Lester to mention that they are working to improve concurrency; but the fact remains that server resources are limited and that even traditional Web sites can run into trouble with heavy volume. He started to make the argument that Second Life’s limit on the number of people in a space was actually a nice benefit – you know, because it keeps events on a human scale and allows interaction. I pointed out that at a concert I don’t necessarily want to interact with everyone else in the audience but with my friends and the artist.

It brought to mind for me the fact that not all of our time in the real world involves engaging with the people around us. There are times when we just want to be able to go about our business without having interaction thrust upon us. Stein had made a good point earlier in the discussion that they always try to work with clients to understand their goals and reason for wanting to get involved with Second Life – and that there are times when it doesn’t make sense. I think it can often make sense but that we all need to take a breath and not assume that time spent in Second Life needs to be all engagement all the time.

Askew built on the theme of interactivity by explaining that they are faced with different levels of ability to deal with interfaces. They are trying to create a level playing field that will work for all audiences.

One mistake that people make, explained Uhrmacher, is that they are still focused on trying to replicate the real world in Second Life. Until you’ve tried it, it is hard to conceptualize. Once people do try it and become engaged they begin to realize that duplication doesn’t make sense. His counsel is to try something different in Second Life.

Peters asked everyone to project the development of Second Life a few years into the future.

Rodzilla thinks there will be more meetings occurring and that people will be more active in assisting one another. He referred back to the live reference assistance available on Info Island and thinks that this type of thing will become more common.

Stein expects to see a deeper level of immersion and avatars able to travel between different virtual worlds. He also expects we’ll be seeing more fun too. He thought it was interesting that no one had discussed mashups in virtual worlds and thinks that this is also something that will become more and more common as people begin mixing different media types in Second Life. Finally, he suggested that people should begin asking themselves how they can use Second Life to have a positive impact in their real lives.

While Stein was speaking, Lester’s avatar kept changing on a screen to the right of the panel. One questioner, perhaps prompted by this, asked whether all of this was actually really engaging for people.

Uhrmacher thought that Second Life generates the same degree of interaction and pressure to interact as exists in the real world; and that companies – recognizing this – will attempt to engage and entertain people to bring them back. They still have to fulfill their brand promise, though, in a way that is more compelling than a traditional Web site. I don’t think I buy this idea that one experiences the same kind of interaction or pressure to interact that one does in the real world. While there is certainly some very cool stuff in Second Life there are also vast stretches of nothing that are not especially compelling or interactive. On top of that, I often don’t necessarily want to interact with the people I see in Second Life. Not because they’re bad people or anything but because I generally don’t strike up conversations with strangers in the real world either.

I was talking with Hiawatha Bray from the Boston Globe after the event about this idea of ad hoc interaction with strangers. There are plenty of times when I go into a store simply to make a purchase. The fact that there are others in the store – potentially shopping for the same item as me – doesn’t make them fair game. We joked that if you started talking to everyone about what they were doing, buying, thinking, etc. you’d probably be escorted out of the store by security.

Anyhow, back to the question of Second Life’s ability to really engage. Lester explained that his background is neuroscience and that one of the things that our brains do really well is filling in cognitive holes. He went on to explain that when you are in Second Life, because you are interacting with real people in three dimensional space, your brain begins to function as though everything in the space is real. This is one of the reasons people get so immersed in Second Life.

Another questioner wanted to hear the panel’s thoughts on the experience of construction and creation in Second Life – an important aspect that is often overlooked.

Uhrmacher agreed that co-creation is really important to Second Life and that more and more, members of the community are being invited to participate and build. (I took this to mean that the community was being invited to build by a company or other entity within Second Life rather than to build for themselves.)

The issue of identity and authenticity came up again. Lester explained that they are working on ways for people to prove who they are – the first step will be age verification – but that this is a challenge in all online environments. Askew said that it is really hard to create secure and authenticated identities for financial services but that they have had to deal with it on the traditional Web as well. Developing standards will be critical – especially as people want to move their identities from one world to another.

Someone else wanted to know how the business aspect of Second Life works and how much it costs. Stein explained that it starts with fixed costs (which are set by Linden Labs). After that, you need to look at what you are trying to accomplish – the effort, scope and creativity will determine the ultimate cost. He went on to say that the costs are comparable to developing a good Flash site.

I called him on that, pointing out that a good Flash Web site would probably be seen by more people. Not necessarily, he said; at any given time there are 30,000-40,000 people in Second Life and no Web sites have that kind of concurrent traffic. That may be true, but it still doesn’t make sense. A more correct analogy would be to look at all of the concurrent users of the Web itself (I’m willing to bet it’s a lot more than 40,000). I personally think that the whole numbers discussion about Second Life is immaterial. The fact remains that at any given time there are a ton of people on there; but they are all over the place. This means that investing to develop a presence may not pay off in the short term; but the same was true of the Web and that changed very, very quickly.

That was essentially the end of the formal panel portion of the evening. I spent some time talking with John Lester and Hiawatha and enjoyed myself thoroughly. I was also able to catch up with Stein and Rodzilla before the night was out. All of the panelists did a great job. I especially enjoyed my conversation with Stein at the very end of the evening.

Second Life – and other worlds like it – are here to stay in one form or another and it was a good topic for the evening’s meeting. The next meeting will be on June 7th at the Watertown Public Library and will be focused on the business case for social media. Cymfony will be the sponsor.

[tags]SMCBoston, Social Media Club, Social Media, Second Life, John Rodzilla, Emerson College, Mike Askew, Fidelity Investments Center for Applied Technology, Drew Stein, Infinite Vision Media, John Lester, Linden Labs, Aaron Uhrmacher, Ken Peters, Text 100[/tags]

The Same All Over The World

I can’t stand how much the differences between people are emphasized; and how those differences are used to fuel hate and jealousy and hurt and hardship.

If you think about it, the world we share right now was once just an empty void in the vacuum of space. Somehow (and frankly, the specific how isn’t that important) all of us now here, and all who came before and all who will come after, share a common origin. And all of us will share the common fate of someday not being here.

Thinking about this made me wonder if there isn’t a way that people can start to share some of the things that make us all the same rather than the things that make us different. I’m a huge fan of Flickr and am often struck, as I look through people’s photographs, by how wonderful we are and how much we have in common. This led me to start a group in Flickr called The Same All Over The World.

Bert Kommerij, whom I met through Flickr, posted on the idea yesterday and it prompted me to try to get more people more involved.

The idea is to collect photographs of people which share some common elements:

– Wearing an outfit that is special/meaningful to them
– Seated outdoors in daylight
– In their “natural environment”
– Looking directly at the camera and smiling
– Holding a stone or a pebble (?)
– Tagged with TSAW

What do people think? Can we use social media to do more than create and participate in narrow communities around specific ideas and interests? Feel free to visit the group (there are no photos yet), sign up and share your thoughts on how this idea might be executed. If you’re not on Flickr share ideas here.

[tags]Flickr, photographs, people, same, similar, common, sharing, world, TSAW[/tags]

MIT Media Lab H2.0 Conference – New Minds

Last week I was able to attend h2.o, a conference organized by the MIT Media Lab that was focused on Human 2.0. The theme of the conference was new minds, new bodies and new identities, and to help support the goals of the theme the Lab has recently created a Center for Human Augmentation.

I was interested in attending after hearing Frank Moss, the director of the Lab, speak at an MIT Communications Forum event earlier this year. While I often attend events because of a personal curiosity or professional aims, this one was different. My son, who is 10, has a number of neurological and psychological issues. His condition has profound effects on our family.

When I arrived at the conference last week I was running late. I missed the keynote by Oliver Sacks and some of the first session. That was a bummer but not the biggest deal in the world. The first session wasn’t what I wanted to hear (although it was interesting nonetheless).

I was interested in the morning’s second session – which was on the theme of “new minds.” There were three stand-alone presentations – by Ed Boyden (assistant professor of media arts and sciences, MIT Media Lab), Douglas Smith, MD (professor, department of neurosurgery and director of the center for brain injury and repair, University of Pennsylvania) and John Donoghue (Henry Merritt Wilson professor of neuroscience, Brown University).

Ed Boyden – Engineering the Brain: Toward Systematic Cures for Neural Disorders
Boyden is working on a project focused on re-engineering the brain’s circuits. The lab is new, as are the projects. The hope is to develop new tools to treat the brain directly.

The goals of the projects are:

– To treat neurological and psychiatric issues
– To augment cognition
– To better understand the human experience

Doing these things requires new, systematic tools. The 20th century was the era of pharmacology with specific drugs available to solve single specific problems. Boyden hopes to create solutions that can be used to address multiple problems.

The challenging thing, of course, is that the brain is really complicated – and as you zoom in the complexity grows and grows. So how does one apply engineering concepts to these complex systems? By looking at behavior.

Boyden went on to discuss three potential approaches they are working on:

The first is devices for non-invasive brain stimulation. These are safe, can turn specific regions of the brain on or off and are being tested to treat conditions like depression. A wearable version is currently being developed in the Lab. In addition, there is also work being done on more focused – but still non-invasive – stimulation technologies.

The second is engineering software for automated, customized adaptive therapy. Boyden was looking at hypnotherapy and noticed that the scripts looked a lot like computer programs. They are using this similarity to develop customized hypnotherapy scripts. He demonstrated this with the conference moderator, John Hockenberry. The system asked a series of questions which modified the phrases and flow of the script. Boyden pointed out that while this customized hypnotherapy can be used to relax, it is also designed to help people develop or strengthen cognitive skills.
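The script-as-program idea is easy to picture with a toy sketch (entirely hypothetical – this is not Boyden’s actual system, and every name here is made up): answers to intake questions select and repeat phrases the same way branches and loops shape a program’s output.

```python
# Hypothetical illustration of a "script as program": intake answers
# branch and parameterize the phrases of a relaxation script.

def build_script(answers):
    """Assemble a customized script from a dict of intake answers."""
    lines = ["Close your eyes and breathe slowly."]
    # Branch on the setting the person reports finding most calming.
    if answers.get("calming_place") == "beach":
        lines.append("Imagine the sound of waves rolling onto the shore.")
    else:
        lines.append("Imagine a quiet forest with light through the trees.")
    # Repeat the deepening phrase in proportion to reported tension.
    lines.extend(["Feel the tension leaving your body."] * answers.get("tension_level", 1))
    lines.append("When you are ready, slowly open your eyes.")
    return lines

script = build_script({"calming_place": "beach", "tension_level": 2})
print(len(script))  # 5 lines: opening, imagery, two deepening repeats, closing
```

Different answers produce a different script, which is the sense in which a hypnotherapy session can be "customized" the way a program’s run is customized by its inputs.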

The final approach he presented was the ultra precise engineering of neural computation by optical neural control. This was the most gee whiz of the three. Basically, the idea is to use light to trigger and control neurons with the goal being the creation of optical neural control prosthetics. They understand how this works and will next focus on prosthetic design. Boyden can see applications for this approach in dealing with conditions like blindness, deafness and Parkinson’s. Lest this all be thought of as pie in the sky, he pointed out that this is being tested now.

It was exciting to see and hear about the different approaches being considered, studied and developed at the Lab.

Douglas Smith, The Brain is the Client: Designing a Back Door into the Nervous System
Smith is at Penn and is focused on traumatic brain and spinal cord injury. He described progress in developing a brain-machine interface as being at a crossroads as people try to figure out how to move electrical signals to and from the brain. For example, where and how does one connect to the brain?

As far as Smith is concerned a hard/sharp interface is not a good idea. The nervous system, he explained, is promiscuous and so a wet and juicy interface (which is what it is used to) is more appropriate. But, he wondered, do we need to connect directly to the brain at all? He does not believe that we should and suggested establishing a brain/machine interface as far from the brain as possible to take advantage of the processing power of the central nervous system.

This power needs true, two-way communication if it is to allow for complex tasks. Simple on-off functions are not good enough. Getting signals in and out – and allowing for the performance of complex tasks – means that at some point there does need to be a hard interface. Smith and his team are addressing this with nerve development – the creation of neurofilament. Doing this means that there must be growth of neurons and axons.

Smith is developing a means to stretch axons to allow them to span multiple neurons. Nerve fibers obviously grow, but no one is quite sure where the growth occurs. This stretching technique appears to be working and seems to be mimicking natural processes.

The axons that have been grown in culture can then be removed and used to bypass damaged neurons and connect healthy isolated ones. Axons can grow fast and they are harnessing the capacity of axons to grow to address traumatic injury. Can they be used to repair spinal cord injuries as great as three centimeters? Smith showed images of this being accomplished.

Another application he discussed was the creation of a nervous tissue construct. In this case they created a tube of nutrient/culture for the axon to grow into and around and were then able to transplant the resultant nerves into an animal. After four months, the new nerve was integrated into the animal’s nerve network.

It is not a huge leap to apply this approach to lost limbs by connecting a multi-electrical array on a device to a host nerve using the grown axon package. This allows a wet-to-wet connection to a hard/tensioned device.

They have recovered neurons from patients and organ donors; and these neurons can be preserved and used to seed and stretch axons, which is leading to real and practical clinical applications. They’ve figured out how to integrate with the nervous system, how to move an electrical signal back and forth across the axon for two-way communication. The only missing piece is the device able to receive and respond to the signals and that, Smith believes, will come.

John Donoghue, New Successes in Direct Brain/Neural Interface Design
Donoghue is known for developing brain/machine interfaces and devices. He is the executive director of Brown University’s Brain Science Program and is the founder and CTO of Cyberkinetics.

He discussed the use of neural interface devices that could be coupled directly to the nervous system to diagnose problems, treat conditions and repair function. Such devices already exist, he pointed out, using the pacemaker as an example.

Today, the ability exists to get signals into the brain (through electrical stimulation) and back out (with sensors). Neurotech is here – devices like cochlear implants exist, as does early work in retinal implants. Another example is the deep brain stimulator for movement disorders.

At its simplest level, Donoghue described the human nervous system as being the brain sending a signal to a muscle resulting in an action. There are a number of conditions that can break the connection between the brain and the muscles. To deal with this, systems are being designed and developed to reconnect stranded brains to the outside world. They all consist of a sensor (that receives the brain’s signal) and a decoder that receives the signal and converts it into a signal that a device can react to.
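The sensor-plus-decoder architecture described above can be sketched in miniature. This is a hypothetical illustration only – not the actual BrainGate software, and the thresholds and command names are invented: raw per-channel firing rates come from a sensor, and a decoder converts them into a command a device can act on.

```python
# Hypothetical sketch of a neural-interface pipeline: sensor -> decoder -> device.
# Not real BrainGate code; values and commands are invented for illustration.

from typing import List

def sensor_read() -> List[float]:
    # Stand-in for a microelectrode array read: one firing rate per channel.
    return [0.2, 0.9, 0.1, 0.7]

def decode(rates: List[float]) -> str:
    # Trivial decoder: map the mean firing rate to a cursor command.
    mean = sum(rates) / len(rates)
    if mean > 0.6:
        return "move_up"
    if mean > 0.3:
        return "move_right"
    return "hold"

command = decode(sensor_read())
print(command)  # mean of the sample rates is 0.475, so "move_right"
```

Real decoders are statistical models trained on recorded activity rather than fixed thresholds, but the shape of the pipeline – signal in, intention out, device command forwarded – is the same.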

CyberKinetics is working on the BrainGate interface system. It consists of a hard interface (a 100-microelectrode array) that connects directly to the cerebral cortex. The signal processing is done externally and then sent to a system or device. There are currently four people using BrainGate – two with spinal cord damage, one with a deep stroke and one with ALS.

To make the system work, Donoghue and his team first needed to determine whether there were still signals occurring in the motor cortex (after the insult or injury causing the patient’s condition). They next had to modulate the signal by asking the patient to “perform” the task to see the signal of the intention to make the movement. Their ability to do this demonstrated that the signal was there and that it could be harnessed to control devices. Donoghue went on to show a number of videos of patients controlling movements and devices with the BrainGate interface.

The next steps for this technology are the development of a wearable system (for which a prototype already exists) and a system for connecting BrainGate to muscle for movement control. Over the next five years, Donoghue expects a whole array of sensors and stimulators that will be able to address a range of conditions.

While none of the three presentations was a silver bullet, it was heartening – both as a parent and as someone who believes in the positive possibilities of technology – to see smart people applying novel thinking to solving pressing problems.

[tags]MIT, Media Lab, H2.0, h2.o, Frank Moss, Ed Boyden, Douglas Smith, John Donoghue, Cyberkinetics, BrainGate, neural disorder, nerve growth, brain/machine interface, brain, spine[/tags]


For some reason, for a long time, I kept my various social media channels separate. I used Flickr for photos but didn’t really check out the blogs of people I know through Flickr. I read plenty of blogs but didn’t check to see if the bloggers are also on Flickr or del.icio.us, etc. I’m trying to get better about this and here’s why.

On Saturday night (or Sunday morning) I was on Flickr and came across DoddieboBottie’s photos. I really really liked them and decided that the thing to do would be to write a post about them on my other blog. So I did. And the next morning I had an email from Dottie saying she’d read the post and thanking me.

Last night I saw that Alice Robison, one of the panelists on the fourth plenary session at MiT5 had posted something in response to my summary of the event. She had links to Flickr and del.icio.us and I went off to check them out.

For most people, this probably isn’t a big deal; but for me, for whatever reason, I’d been doing siloed social media and now I’ve decided to stop.

[tags]social media, silos, Flickr, del.icio.us, DoddieboBottie, Dottie Guy, Alice Robison, cha-cha-cha-change[/tags]

MiT5 – Third Plenary, Identity and Fourth Plenary

After the great session on brand, I headed back over to the Bartos to attend the third plenary session. I was acting as the designated rapporteur for the day’s plenary sessions and so the notes for them are more in-depth. At some point the complete summaries will appear on the conference Web site but until then here are my notes.

Third Plenary – Copyright, Fair Use and the Cultural Commons

Wendy Gordon – is a professor of law and Paul J. Liacos Scholar in Law at Boston University. In many well-known articles, she has argued for an expansion of fair use utilizing economic, Lockean and ethical perspectives.

Gordon Quinn – is the president and founding member of Kartemquin Films, where for over 40 years he has been making cinema verite films that investigate and critique society by documenting the unfolding lives of real people (i.e., Hoop Dreams, 1994). Quinn is working on Milking the Rhino, a film examining community-based conservation in Africa and At The Death House Door, a film on a wrongful execution in Texas.

Hal Abelson – is a professor of electrical engineering and computer science at MIT. He is engaged in the interaction of law, policy and technology as they relate to the growth of the Internet and is active in projects at MIT and elsewhere to help bolster the intellectual commons. Abelson is a founding director of the Free Software Foundation, Creative Commons and Public Knowledge and serves as a consultant to HP Laboratories.

Patricia Aufderheide – is a professor in the School of Communication at American University where she also directs the Center for Social Media. She is the author of several books including Documentary: A Very Short Introduction (2007), The Daily Planet (2000), and Communications Policy in the Public Interest (1999). She has been a Fulbright and John Simon Guggenheim fellow and has served as a juror at the Sundance Film Festival. She received a career achievement award in 2006 from the International Documentary Association.

William Uricchio – is co-director of Comparative Media Studies at MIT and professor of comparative media history at the University of Utrecht in the Netherlands. His most recent book is Media Cultures, on the responses to media in post-9/11 Germany and the U.S.

Uricchio began by providing an overview of the roots of the debate around IP protection. In early 18th century England, the Statute of Anne (which formed the basis for US copyright law) transferred copyright protection from the publishers – who had enjoyed a royal monopoly in perpetuity – to the creators. This protection was good for 21 years with a 14 year extension.

This change provoked a robust response from the publishing industry and a whole series of court battles followed. One English case in particular, on the eve of the American Revolution, was Donaldson v. Beckett (1774). It had to do with the reach of the protection afforded creators and the publishers’ attempt to regain control of works for themselves. The courts decided that the publishers’ desire to regain control in perpetuity was not in the public’s best interest.

This outcome – as was the case of the Statute of Anne – was reflected in the U.S. Constitution in its ideal of promoting science and the useful arts by providing to their authors and inventors the exclusive rights to their writings and discoveries.

So what did this protection look like in 18th century America? A time of horse and carriage? Copyrights lasted for 14 years (with a 14 year extension) in a time when it took days – or even weeks – to go from Boston to New York.

Now we live in an age of endless rights and extensions. Something is amiss. Bizarrely, the faster information circulates, the longer copyright protection lasts. This seems at odds with the intentions of the framers and the case law upon which they based their thinking. We’re back to the 18th century debate; back to the battle between creators’ rights and the industry, back to the battle of limited protections versus what seems like protection in perpetuity once again.

In light of what is being discussed at MiT5, it is important to ask what the new era of IP will look like.

With that, Uricchio introduced the panel and handed things over to Wendy Gordon.

Gordon set up a film on best practices for fair use that was created by a coalition of documentary filmmakers. Copyright is designed and intended to provide ground rules for using copyrighted materials. One can always use facts and ideas and one may use expression provided that its use is deemed to be fair.

It is difficult for people to use all of the liberties that the law provides due to resource constraints – in terms of determining and defending fair usage. One doesn’t need a lawyer, though, in order to use some of the rights provided by the law. In fact, if you create coalitions you may get unexpected support.

To get support, and to have full rights under copyright law, individuals and organizations need to think about three things: coalitions to consider and address the issues, courage in terms of standing up for one’s legal rights and new customs that can be pointed to when challenges are made.

Our free speech rights aren’t always exercised because we often choose the second best option rather than insisting on being allowed our rights. This is a chilling effect driven by fear of the repercussions; but more than that, it creates a custom that allows rights holders to continue to act as they do.

So how does one take a stand for fair use? One approach is isolated courage – simply proceeding without securing the necessary rights. Another is to reach reciprocal agreements not to sue. Yet another is to consider the prisoner’s dilemma and try to come up with a cooperative first move – for example, putting content into the public domain.

What Pat and her group have created is a standard for what documentarians can use under fair use practice. The coalition they created wasn’t limited to filmmakers but even gained the support of insurance companies that are willing to insure projects that abide by the agreed-upon fair use standards. This adherence can then lead to customs that can ultimately change the way the law views content and usage.

Today, fear is driving the purchase of lots of licenses – which can lead to a vicious cycle for those courageous individuals who try to act in fair use. Through projects like this one, it becomes possible to push back on the misinformation of the content community to bring fair use back into common use.

The film – Fair Use and Free Speech – explains the creation, content and purpose of the Documentary Filmmakers’ Statement of Best Practices in Fair Use document.

Following the film, Gordon Quinn spoke. As a filmmaker coming out of the 1960s, Quinn said that many of the early films included fair use content everywhere. Now though, he’s found himself self-censoring. For example, in Hoop Dreams he paid $5,000 to license “Happy Birthday.” In a more recent film, The New Americans, it was removed altogether.

Quinn supports the Best Practices document mentioned in the film. He is just finishing a film on stem cell research that includes lots of fair use content. He has been able to proceed because he knows that it will be insured and that it will be aired. What he found particularly empowering was the knowledge that he didn’t have to go to anyone for direction. By relying on a set of agreed-upon standards, filmmakers can determine for themselves the appropriateness of fair use in their works.

While Quinn is seeking to understand and use fair use, he is also a copyright holder and has concerns with how fair use is applied. He offered, as an example, footage from one of his earlier films of a young girl at a demonstration that was requested by filmmakers working on a project on abortion. The filmmakers wanted to use the footage to convey a sense of the time. When Quinn saw the footage in context he was concerned – it implied that the youngster in the film had herself received an abortion – and would not let them use the content.

Hal Abelson spoke next and presented himself as a simple nerd intimidated by the rest of the panel. Abelson addressed the issue of fair use in academe and the fact that if it isn’t used it will be lost. He described the academic community as being “too chicken” to act on fair use and offered two recent examples that he’d come across.

The first was a request for a sentence of his to be included in another author’s work. The second was the inclusion – and ensuing comedy of errors – of a reference to recent research on the effects of alcohol on the anti-oxidant benefits of strawberries on a blog. (There was first a request that the copyrighted material be removed, which was posted to the blog, followed by an apology for the misunderstanding, followed by a subsequent request by another organization that the content be removed . . .)

At MIT, this problem has several manifestations. On Stellar [the school’s online course resource system] access to materials is often limited to students of a specific course and only for the duration of the course. Some of the works he cited were classical ones, clearly by their nature no longer under copyright; but the selected translations were still protected.

Abelson has been very active in developing the MIT Open Courseware program. For this they have avoided relying on fair use content in virtually all cases, electing to either secure permission for third-party content, remove it or recreate it. Of the 81 hours that it takes to produce a course for the system, approximately 40 percent of that time is spent dealing with protected content.

Universities, he believes, need to rely more on open content and also become more aggressive about their use of fair use content. The restrictions being placed on usage – particularly the limits placed on students’ access to information – spell the destruction of the university as an intellectual community.

Abelson was followed by Aufderheide, who wondered what the future will look like. Practice, she argued, makes practice and this makes it critical that people use their fair use rights. This was the case in the development and adoption of the Fair Use Best Practices that was adopted by the documentary filmmakers and of the agreement by the insurance industry to provide fair use coverage.

The model used by the documentary community can be applied elsewhere – the university is one example, as are other situations where the production of content has become a community process that lends itself to the creation of coalitions. The MacArthur Foundation is also funding a project to create a fair use code for media literacy practitioners. This is especially important now that media literacy means helping people create the most compelling and creative content possible.

While all of the plenary sessions I attended were interesting, this one was probably the most important. For social media to work, there needs to be some understanding among those involved on how content will be used. Content appropriation and reinterpretation have become – thanks to technology – new tools for communication and expression. How people work with that content will have an impact on how that communication is received and, in turn, interpreted again. This panel presented a model for what can work and a warning for what might happen if steps aren’t taken to make it work.

Reimagining Identity
As was the case with the imaging panel this morning, I came to this with a set of expectations that didn’t nearly match up with the content. I’m pretty interested in the issues of identity in social media and was hoping that this would be discussed. Nope. This panel was more focused on how identity is created online (primarily through a discussion of celebrity culture that included the quote “Tom Cruise is the most iconic actor in 20 years”). I’ve never thought of him that way but maybe that’s just me.

There was also an interesting presentation on the “Trickster Identity” but it was too nebulous and transitioned from one theme to another too quickly for me to follow a clear chain of logic. The third presentation of the session was on Deleuzian perspectives on ownership and identity on the Web. Of all the papers that were presented, this one was probably the least accessible to me and so I didn’t get much out of it.

Fourth Plenary – Learning Through Remixing
If the panel on copyright was the most important of the conference, this one was the most inspiring. Many of the panels and discussions that had taken place were focused on ideas and theory. This one was focused on real applications and projects that illustrated the ideas of creativity, ownership and collaboration that were at the center of the conference.

Erik Blankinship – is a co-founder of Media Modifications, a new start-up whose mission is to expose and enhance the structure of media to make its full learning and creative potential accessible to all. He has many years of experience working with children as an inventor of educational technologies and activities and as a researcher studying the potential of digital media for teaching and learning literature, history, mathematics and game design. While an undergraduate at the University of Maryland, College Park, he was a recipient of the Jim Henson award for Projects Related to Puppetry.

Juan Devis – is a new media producer at KCET/PBS Los Angeles in charge of all original Web content including Web Stories, KCET’s multimedia Webzine. He is currently working with the USC School of Cinematic Arts and the Institute of Multimedia Literacy to develop a serious game based on Mark Twain’s Huckleberry Finn. Devis was recently awarded a writer’s fellowship at ABC/Disney for his original screenplay Welcome to Tijuana which is scheduled for production in early 2008. Devis is president of the board at Freewaves, a non-profit media arts organization, and the project manager for OpenPlay.

Renee Hobbs – is associate professor of communication and education at Temple University where she directs the Media Education Lab. She has worked extensively with state departments of education in Maryland and Texas, and her new book Reading the Media: Media Literacy in High School English (2007) provides empirical evidence to document how media literacy improves adolescents’ reading comprehension skills.

Ricardo Pitts-Wiley – has been the artistic director of Mixed Magic Theatre for over 20 years. In that role, he has written/produced/directed a number of productions including From the Bard to the Bounce: A Hip-Hop Shakespeare Experience, Kwanzaa Song, The Great Battle for the Air, and four Annual Black History Month Celebrations at Portsmouth Abbey. Pitts-Wiley was resident artist at Brown University Summer High School in 2001.

Alice Robison – is a postdoctoral fellow in the Comparative Media Studies program at MIT, where she writes about literacy and video games. She is also a consultant for the New Media Literacies Project and advises several student-run organizations devoted to the study of video games and interactive media.

Henry Jenkins

Jenkins began by pointing out that there had been discussions throughout the conference of the historical antecedents of the topics at hand. In terms of using remixing as a tool for learning, he cited Lev Kuleshov – who started what may have been the first film studies program in the early days of the Soviet Union – asking his students to re-edit Birth of a Nation and Intolerance and also pointed to the use of commonplace books in the 19th century as an example of collected/appropriated content.

The purpose of this session is to share information on a number of current projects dedicated to promoting learning through remixing content. Jenkins pointed out that engineers learn how machines work by taking things apart and putting them back together. Can the same be done with culture? The people and projects represented on this panel demonstrate that it might.

Erik Blankinship started things off by discussing his current company, Media Modifications. They invent tools for exposing and enhancing the structure of media to make its full creative and learning potential accessible to all. This is a theme he promised to return to throughout the course of his comments and demonstration.

If one starts with a black screen, you have the space to create a screenplay and ultimately a film or video. In the case of his demonstration, the video was a clip from Star Trek: The Next Generation. On the left-hand side of the screen the video of the scene appeared; on the right side, the text of the script. Blankinship was able to drag and drop sections of the script, which in turn reordered the words and action in the video. He described it as being similar to magnetic poetry, exposing the structure of the media and allowing it to be rearranged and reloaded.

He next demonstrated how this type of remixing and restructuring could be used to create new content. In this case, he created a countdown by selecting and connecting numbers used by Star Trek characters in many many episodes. Giving fans access to the structure of media – as in this case – can be a lot of fun.

This project led them to begin further work around the idea of adaptations. In the case of the countdown, he had adapted the Star Trek content to tell the simple story conveyed through the numbers in an interesting and original way. At this point he announced adapt.tv, a Web site (not yet launched) that will provide access to tools for media adaptation.

He used the adapt.tv tools for two demonstrations of how people can expose the structure of media to create new adaptations.

The first example was of The Fellowship of the Ring, and it started with two representations of the same content – text and video – side by side. This allows for the comparison of the two forms to understand what is happening in each. Across the top of the screen, two time lines – one for the movie and the other for the book – appeared and were connected where the two formats shared content. He described this capability as a new type of closed captioning that allows additional detail from either medium to be used to enhance the other. As a scene played in the video, the related text from the book was highlighted, illustrating those parts of the book used in developing the film.

The second example used Romeo and Juliet. Two different films were used – Zeffirelli’s from 1968 and the 1996 DiCaprio version. In each case, the connections to the source text were shown at the top of the screen. This allowed one to see how the different film versions had adapted the text differently, choosing to emphasize or ignore sections of the story. This exposure of the underlying structure creates opportunities for students to study and consider the thinking and context behind the final content.

A final fun element of the process that Blankinship demonstrated was the ability to cast a remixed version of the film by combining performers from each of the versions at hand.

All of this provides for the deep analysis of content in multiple formats. With this, Blankinship’s time came to a close.

He was followed by Juan Devis.

In 2002/2003 Devis worked to develop a video game with students at Belmont HS in Los Angeles. Ninety-five percent of the students were from Central America and Mexico, and the goal was to create a game based on life in their home countries to help illustrate their history. It was a good idea, but there were two problems: first, the students were involved in the conceptualization of the game but not in its development or production, and second, they were living here in the US but making a game about Latin America.

These problems led to the decision to do another project: a game about the neighborhoods the students live in, one they’d be able to create and code themselves. Pac-Man was chosen as the basis of the game because it was familiar and essentially non-violent. It could serve as a simple template for the students to remix their neighborhoods.

Devis demonstrated one version of the game called El Imigrante. In this remix of Pac-Man, a Mexican character moves through LA, picking up trash and trying to get a Green Card while avoiding the Minutemen. Each of these games (and there were several) became portraits of the students’ neighborhoods.

These games addressed the first of the problems – limited student involvement. Now Devis is working on a project to deal with the second – presenting American civics and history in an interesting and meaningful way. The project is built around Huckleberry Finn, which initially seemed like a great idea but turned out to have problems he hadn’t anticipated: issues of bondage, slavery and language that, as a foreigner himself, Devis hadn’t considered.

They went back to the original novel and broke it apart – a process that is currently ongoing. As he and the students are reading the novel, they are creating a “side script” to reimagine it in 21st century LA. For example, instead of the Mississippi River they are using the LA River, etc.

While he is still planning on creating the game, he’s come to realize that there are a lot of issues around race and class that young people here in the US just don’t understand. Before making a game out of this content, the tools for understanding the issues needed to be applied – which is what led to the creation of the side script and the discussions that followed.

Renee Hobbs was next and she discussed how young people can be helped to read the media.

Hobbs started by discussing the importance of media literacy as a way for young people to understand the underlying nature of the media. Remixing, she believes, is a tool that can deepen our appreciation of the constructedness of media messages. As a media literacy educator, she believes this understanding needs to be a core element of the community.

Remixing also helps illustrate the plasticity of meaning and how it can so easily be altered. This works because remixing allows us to see and appreciate the functions and structure as they are expressed in the content. In the past Hobbs had worked on developing curricula and materials for teachers but not for reaching kids directly.

To do this, Hobbs and her group have created My Pop Studio to help girls between 10 and 12 understand media literacy. It was launched in July 2006 with funding from the Office on Women’s Health (part of DHHS). The site includes 15 games and a number of discussion forums and is used by between 10,000 and 20,000 people per month.

There is a TV Studio that provides drag-and-drop editing tools. In the Music Studio kids can create their own pop star to get a sense of all of the choices involved in constructing popular music. In the Magazine Studio they can turn themselves into celebrities, constructing a celebrity identity to help understand image, celebrity culture and body ideals. In the Online Studio girls can experiment to understand how their social relationships are impacted by their online life.

The goal was to combine the key elements of media literacy (building skills around creative production and authorship, as well as analysis skills) by exploring themes like celebrity culture and music and how these are being used to form and understand identity.

To illustrate her points, Hobbs demonstrated Pop Star Producer. It begins by asking visitors to select a value message in order to consider how values play into decisions about music and image. Next they choose a musical genre, lyrics and an image/style for their character. When done, the avatar performs the music, and other visitors rate the performance and try to determine the intended value message. It was an interesting demo and exposed – to a degree – how music functions. This section also has a feature that shows how music is used to sell products by using it to convey ideals and associations.

As girls use My Pop Studio, they can begin to understand how meaning changes as a result of context. It also helps them to understand the essential “constructedness” of all representational forms. These aren’t things that kids just understand, so it’s important for them to have an opportunity to learn.

Ricardo Pitts-Wiley spoke next on his work with the Mixed Magic Theatre.

Pitts-Wiley is currently working on Moby Dick and wasn’t sure how this project fit in with the others, because what he is doing is less about remixing than about getting people into the mix.

One of the challenges in working with material like Moby Dick was to do it in a way that would be interesting to young people while preserving the integrity of the novel. His goal was not to deconstruct the novel but to keep it whole. Times change, people change, but Moby Dick remains constant.

The white whale is Ahab’s nemesis, but it isn’t something young people identify with; the pursuit – the idea of tracking and vengeance – is something they very much understand. In this interpretation, Moby Dick is transformed from the white whale into the white thing (cocaine), the seas into a city and the Pequod into a subway.

With this new context, Pitts-Wiley took his group back into the novel to find the words and themes they would need to address. Although the setting had been shifted into their time, they still needed to tell Melville’s story.

The first time he did this project was at the Rhode Island Training School, a reform school. The participants were all bright people, and he explained to them that they were going to be doing Moby Dick as cocaine – but that they would have to read the novel, then choose a character that they identified with and redefine it for the new context. One example of this recontextualization was Queequeg as a pimp. Why a pimp? Because Queequeg is colorful, exciting, dangerous, he deals in human flesh and he’s loyal. However the kids chose to redefine their characters, Pitts-Wiley forced them to defend their choices using the novel.

People often ask him why he uses Moby Dick as the basis for this project. It is, he said, because it is all there. All of the characters are there, the history is there, the culture is there so there is no need to invent any of them. It is also great and challenging literature.

Pitts-Wiley chose to complicate his task in producing Moby Dick by doing two versions simultaneously – one with young people and the other with older members of the community. Part of this decision was based on his belief that young people are taught things that are important but that are not demonstrated as being important in the community.

Part of his goal is to create a community around a shared language; and for him, having many members of the community read Moby Dick helps to create that common language and deeper community. It offers opportunities for engagement between different people; but only if everyone shares the experience of reading the novel.

The idea of community building aside, Pitts-Wiley still needed to tell the story. As the two companies – the young one and the older one – worked on their productions, they began to teach and learn from one another. Not just about the novel, but about community and the impact of culture on community. Throughout the production, familiar cultural elements – music, fashion, authority figures – are used to convey the meaning of Melville’s work.

Pitts-Wiley digressed for a time to describe the size, scope and impact of the drug culture until Jenkins let him know his time was coming to a close.

He then discussed the importance of keeping people moving into the future – but not at the expense of older literature. Moby Dick is the first of three projects. The next one will be Frankenstein followed by Uncle Tom’s Cabin. The big goal of this program is to change the literary landscape of the community over the next 10 years and to bring young people not only into the technical age, but also into the literary age.

Alice Robison was the panel’s final speaker.

Robison is working on a project with Jenkins at CMS around remixing. Her comments focused on the idea that new media literacy borrows from and extends the concepts of new literacy studies. New media literacy expands on – but does not replace – new literacy studies by creating a place for the study of things like participatory culture.

The new media literacy framework borrows and builds upon some of new media studies’ cutting-edge theories of cognition. All of this has been slowly developing over the last 10-15 years as new theories of literacy, ones that go beyond functional models, have come about. The new theories focus more on the process by which people create meaning and include ideas like:

Multimodal literacy
Multiliteracy framework
Collective intelligence
Problem-based learning
Situated and distributed cognition
Peripheral participation

At the heart of all of this is the question: where does meaning come from? Much of the way new literacy has been taught has been based on a consumerist model – viewing an image and understanding what it is trying to communicate – similar to what Hobbs’s work [described above] attempts to do.

This approach is now expanding to include the participant when thinking about the creation of meaning by considering what happens in the space between the individual as the consumer of a message and the writer or producer of a message. Robison isn’t interested in the making of meaning but more in what happens in the space between the production and consumption of meaning.

The role of context is something that she finds to be very important when discussing the issues of media literacy. As part of the New Media Literacy project they have identified a number of what she refers to as “exemplar videos,” and at this point Robison showed a number of them.

These videos, of which there are eight, are designed to provide a framework for understanding media literacy. The intention is that educators will access these videos to use with their students in a variety of environments. Robison sees value in the way that these videos expose the process of media making to people unfamiliar with the way in which new media works.

There is also a skills and competencies white paper available on the site that addresses topics like play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking and negotiation as they relate to media creation and new media literacy.

The New Media Literacy project will be working with Pitts-Wiley and the Mixed Magic Theatre next year. Robison encouraged everyone to read the white paper as it develops many of the theories behind new media literacy and why they are so critical.

The issue of new media literacy is really important. I’m often worried that the capabilities presented by social media will simply be co-opted as tools to reach markets in new ways. To make these tools and ideas really valuable, people need to understand how to use them and how to dissect the content created with them. This final session of the day presented examples of social media being applied to enhance our understanding of content, context and meaning. All four of the projects that were presented will help accomplish this goal.

As I think I mentioned in an earlier post, attending this conference made me realize just how little we really understand about social media and its implications. Everyone is talking about the latest and greatest tool or technology, but this event gave me pause to consider what is happening and why it matters in a larger sense. I’d suggest that PR and marketing people take the time to visit the event Web site and prowl around for a while. There are recordings of many of the sessions and a growing collection of the papers that were presented.

[tags]MIT, MiT5, Media, Copyright, Fair Use, Cultural Commons, Wendy Gordon, Gordon Quinn, Hal Abelson, Patricia Aufderheide, William Uricchio, Remixing, New Media Literacy, Erik Blankinship, Juan Devis, Ricardo Pitts-Wiley, Renee Hobbs, Alice Robison, Henry Jenkins[/tags]