MIT Media Lab H2.0 Conference – New Minds

Last week I was able to attend h2.o, a conference organized by the MIT Media Lab that was focused on Human 2.0. The theme of the conference was new minds, new bodies and new identities, and to help support the goals of the theme the Lab recently created a Center for Human Augmentation.

I was interested in attending after hearing Frank Moss, the director of the Lab, speak at an MIT Communications Forum event earlier this year. While I often attend events because of a personal curiosity or professional aims, this one was different. My son, who is 10, has a number of neurological and psychological issues. His condition has profound effects on our family.

When I arrived at the conference last week I was running late. I missed the keynote by Oliver Sacks and some of the first session. That was a bummer, but not the biggest deal in the world. The first session wasn't what I wanted to hear (although it was interesting nonetheless).

I was interested in the morning's second session – which was on the theme of "new minds." There were three stand-alone presentations – by Ed Boyden (assistant professor of media arts and sciences, MIT Media Lab), Douglas Smith, MD (professor, department of neurosurgery and director of the center for brain injury and repair, University of Pennsylvania) and John Donoghue (Henry Merritt Wilson professor of neuroscience, Brown University).

Ed Boyden – Engineering the Brain: Toward Systematic Cures for Neural Disorders
Boyden is working on a project focused on re-engineering the brain’s circuits. The lab is new, as are the projects. The hope is to develop new tools to treat the brain directly.

The goals of the projects are:

To treat neurological and psychiatric issues
To augment cognition
To better understand the human experience

Doing these things requires new, systematic tools. The 20th century was the era of pharmacology with specific drugs available to solve single specific problems. Boyden hopes to create solutions that can be used to address multiple problems.

The challenging thing, of course, is that the brain is really complicated – and as you zoom in the complexity grows and grows. So how does one apply engineering concepts to these complex systems? By looking at behavior.

Boyden went on to discuss three potential approaches they are working on:

The first is a set of devices for non-invasive brain stimulation. These are safe, can turn specific regions of the brain on or off and are being tested to treat conditions like depression. A wearable version is currently being developed in the Lab. In addition, work is also being done on more focused – but still non-invasive – stimulation technologies.

The second is engineering software for automated, customized adaptive therapy. Boyden was looking at hypnotherapy and noticed that the scripts looked a lot like computer programs. They are using this similarity to develop customized hypnotherapy scripts. He demonstrated this with the conference moderator, John Hockenberry. The system asked a series of questions which modified the phrases and flow of the script. Boyden pointed out that while this customized hypnotherapy can be used to relax, it is also designed to help people develop or strengthen cognitive skills.
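Boyden didn't show any code, but the "scripts look like programs" observation is easy to make concrete. Here's a minimal sketch – mine, not his, with all node names and prompts invented – of how a branching script can adapt its phrasing and flow based on a user's answers:

```python
# A minimal sketch of an adaptive script engine, assuming the script is a
# tree of nodes whose phrasing and flow depend on earlier answers.
# All node names and prompts here are hypothetical, not Boyden's system.

SCRIPT = {
    "start": {
        "question": "Do you relax more easily with imagery of water or of light?",
        "branches": {"water": "water_intro", "light": "light_intro"},
    },
    "water_intro": {"phrase": "Imagine a calm, slow tide...", "next": "deepen"},
    "light_intro": {"phrase": "Imagine a warm, soft glow...", "next": "deepen"},
    "deepen": {
        "phrase": "With each breath, let that image grow more vivid.",
        "next": None,
    },
}

def run(script, node="start"):
    """Walk the script, asking questions and emitting customized phrases."""
    while node is not None:
        step = script[node]
        if "question" in step:
            answer = input(step["question"] + " ").strip().lower()
            # Fall back to the first branch if the answer isn't recognized.
            node = step["branches"].get(answer, next(iter(step["branches"].values())))
        else:
            print(step["phrase"])
            node = step.get("next")

if __name__ == "__main__":
    run(SCRIPT)
```

The point isn't this particular structure; it's that once a script is data, tailoring it to each listener becomes a programming problem.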

The final approach he presented was the ultra-precise engineering of neural computation by optical neural control. This was the most gee-whiz of the three. Basically, the idea is to use light to trigger and control neurons, with the goal being the creation of optical neural control prosthetics. They understand how this works and will next focus on prosthetic design. Boyden can see applications for this approach in dealing with conditions like blindness, deafness and Parkinson's. Lest this all be thought of as pie in the sky, he pointed out that it is being tested now.

It was exciting to see and hear about the different approaches being considered, studied and developed at the Lab.

Douglas Smith, The Brain is the Client: Designing a Back Door into the Nervous System
Smith is at Penn and is focused on traumatic brain and spinal cord injury. He described progress in developing a brain-machine interface as being at a crossroads as people try to figure out how to move electrical signals to and from the brain. For example, where and how does one connect to the brain?

As far as Smith is concerned, a hard/sharp interface is not a good idea. The nervous system, he explained, is promiscuous, and so a wet and juicy interface (which is what it is used to) is more appropriate. But, he wondered, do we need to connect directly to the brain at all? He does not believe so and suggested establishing a brain/machine interface as far from the brain as possible to take advantage of the processing power of the central nervous system.

This connection needs true, two-way communication if it is to allow for complex tasks. Simple on-off functions are not good enough. Getting signals in and out – and allowing for the performance of complex tasks – means that at some point there does need to be a hard interface. Smith and his team are addressing this with nerve development – the creation of neurofilament. Doing this means that there must be growth of neurons and axons.

Smith is developing a means to stretch axons to allow them to span multiple neurons. Nerve fibers obviously grow, but no one is quite sure where the growth occurs. This stretching technique appears to be working and seems to be mimicking natural processes.

The axons that have been grown in culture can then be removed and used to bypass damaged neurons and connect healthy, isolated ones. Axons can grow fast, and Smith's team is harnessing that capacity to address traumatic injury. Can they be used to repair spinal cord injuries as large as three centimeters? Smith showed images of this being accomplished.

Another application he discussed was the creation of a nervous tissue construct. In this case they created a tube of nutrient culture for the axon to grow into and around and were then able to transplant the resultant nerves into an animal. After four months, the new nerve was integrated into the animal's nerve network.

It is not a huge leap to apply this approach to lost limbs by connecting a multi-electrode array on a device to a host nerve using the grown axon package. This allows a wet-to-wet connection to a hard/tensioned device.

They have recovered neurons from patients and organ donors; these neurons can be preserved and used to seed and stretch axons, which is leading to real and practical clinical applications. They've figured out how to integrate with the nervous system and how to move an electrical signal back and forth across the axon for two-way communication. The only missing piece is the device able to receive and respond to the signals, and that, Smith believes, will come.

John Donoghue, New Successes in Direct Brain/Neural Interface Design
Donoghue is known for developing brain/machine interfaces and devices. He is the executive director of Brown University’s Brain Science Program and is the founder and CTO of Cyberkinetics.

He discussed the use of neural interface devices that could be coupled directly to the nervous system to diagnose problems, treat conditions and repair function. Such devices already exist, he pointed out, using the pacemaker as an example.

Today, the ability exists to get signals into the brain (through electrical stimulation) and back out (with sensors). Neurotech is here – devices like cochlear implants are already in use, and early work is being done on retinal implants as well. Another example is the deep brain stimulator for movement disorders.

At its simplest level, Donoghue described the human nervous system as the brain sending a signal to a muscle, resulting in an action. There are a number of conditions that can break the connection between the brain and the muscles. To deal with this, systems are being designed and developed to reconnect stranded brains to the outside world. They all consist of a sensor (that receives the brain's signal) and a decoder that converts that signal into one a device can act on.
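To make the sensor/decoder idea concrete, here's a toy sketch of that pipeline. Real systems decode many channels of cortical activity with trained models; the stub "sensor," the linear decoder and every number below are mine, purely for illustration:

```python
# Toy sketch of the sensor -> decoder -> device pipeline described above.
# A stub "sensor" emits per-channel firing rates; a linear "decoder" maps
# them to a 2-D cursor velocity; the "device" integrates that velocity.
# All values are invented, not BrainGate's actual signal processing.

import random

def read_sensor(n_channels=2):
    """Stand-in for a neural sensor: one firing-rate sample per channel."""
    return [random.uniform(0.0, 1.0) for _ in range(n_channels)]

def decode(rates, weights):
    """Linear decoder: weighted sum of channel rates -> (vx, vy) command."""
    vx = sum(w[0] * r for w, r in zip(weights, rates))
    vy = sum(w[1] * r for w, r in zip(weights, rates))
    return vx, vy

def drive_device(position, velocity, dt=0.1):
    """The 'device': integrate the decoded velocity to move a cursor."""
    return (position[0] + velocity[0] * dt, position[1] + velocity[1] * dt)

weights = [(1.0, -0.5), (-0.3, 0.8)]   # per-channel contribution to vx, vy
pos = (0.0, 0.0)
for _ in range(5):
    pos = drive_device(pos, decode(read_sensor(), weights))
    print(f"cursor at ({pos[0]:.2f}, {pos[1]:.2f})")
```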

Cyberkinetics is working on the BrainGate interface system. It consists of a hard interface (a 100-microelectrode array) that connects directly to the cerebral cortex. The signal processing is done externally and then sent to a system or device. There are currently four people using BrainGate – two with spinal cord damage, one with a deep stroke and one with ALS.

To make the system work, Donoghue and his team first needed to determine whether there were still signals occurring in the motor cortex (after the insult or injury causing the patient's condition). They next had to modulate the signal by asking the patient to "perform" the task to see the signal of the intention to make the movement. Their ability to do this demonstrated that the signal was there and that it could be harnessed to control devices. Donoghue went on to show a number of videos of patients controlling movements and devices with the BrainGate interface.

The next steps for this technology are the development of a wearable system (for which a prototype already exists) and a system for connecting BrainGate to muscle for movement control. Over the next five years, Donoghue expects a whole array of sensors and stimulators that will be able to address a range of conditions.

While none of the three presentations was a silver bullet, it was heartening – both as a parent and as someone who believes in the positive possibilities of technology – to see smart people applying novel thinking to solving pressing problems.

[tags]MIT, Media Lab, H2.0, h2.o, Frank Moss, Ed Boyden, Douglas Smith, John Donoghue, Cyberkinetics, BrainGate, neural disorder, nerve growth, brain/machine interface, brain, spine[/tags]

Mixing

For some reason, for a long time, I kept my various social media channels separate. I used Flickr for photos but didn't really check out the blogs of people I know through Flickr. I read plenty of blogs but didn't check to see if the bloggers were also on Flickr or del.icio.us, etc. I'm trying to get better about this and here's why.

On Saturday night (or Sunday morning) I was on Flickr and came across DoddieboBottie's photos. I really, really liked them and decided that the thing to do would be to write a post about them on my other blog. So I did. And the next morning I had an email from Dottie saying she'd read the post and thanking me.

Last night I saw that Alice Robison, one of the panelists on the fourth plenary session at MiT5, had posted something in response to my summary of the event. She had links to Flickr and del.icio.us and I went off to check them out.

For most people, this probably isn’t a big deal; but for me, for whatever reason, I’d been doing siloed social media and now I’ve decided to stop.

[tags]social media, silos, Flickr, del.icio.us, DoddieboBottie, Dottie Guy, Alice Robison, cha-cha-cha-change[/tags]

MiT5 – Third Plenary, Identity and Fourth Plenary

After the great session on brand, I headed back over to the Bartos to attend the third plenary session. I was acting as the designated rapporteur for the day’s plenary sessions and so the notes for them are more in-depth. At some point the complete summaries will appear on the conference Web site but until then here are my notes.

Third Plenary – Copyright, Fair Use and the Cultural Commons

Panel:
Wendy Gordon – is a professor of law and Paul J. Liacos Scholar in Law at Boston University. In many well-known articles, she has argued for an expansion of fair use utilizing economic, Lockean and ethical perspectives.

Gordon Quinn – is the president and founding member of Kartemquin Films, where for over 40 years he has been making cinema verite films that investigate and critique society by documenting the unfolding lives of real people (e.g., Hoop Dreams, 1994). Quinn is working on Milking the Rhino, a film examining community-based conservation in Africa, and At The Death House Door, a film on a wrongful execution in Texas.

Hal Abelson – is a professor of electrical engineering and computer science at MIT. He is engaged in the interaction of law, policy and technology as they relate to the growth of the Internet and is active in projects at MIT and elsewhere to help bolster the intellectual commons. Abelson is a founding director of the Free Software Foundation, Creative Commons and Public Knowledge and serves as a consultant to HP Laboratories.

Patricia Aufderheide – is a professor in the School of Communication at American University where she also directs the Center for Social Media. She is the author of several books including Documentary: A Very Short Introduction (2007), The Daily Planet (2000), and Communications Policy in the Public Interest (1999). She has been a Fulbright and John Simon Guggenheim fellow and has served as a juror at the Sundance Film Festival. She received a career achievement award in 2006 from the International Documentary Association.

Moderator
William Uricchio – is co-director of Comparative Media Studies at MIT and professor of comparative media history at the University of Utrecht in the Netherlands. His most recent book is Media Cultures, on the responses to media in post-9/11 Germany and the U.S.

Uricchio began by providing an overview of the roots of the debate around IP protection. In early 18th century England, the Statute of Anne (which formed the basis for US copyright law) transferred copyright protection from the publishers – who had enjoyed a royal monopoly in perpetuity – to the creators. This protection was good for 14 years, with a 14-year extension; works already in print received a single 21-year term.

This change provoked a robust response from the publishing industry and a whole series of court battles followed. One English case in particular, decided on the eve of the American Revolution, was Donaldson v. Beckett (1774). It had to do with the reach of the protection afforded creators and the publishers' attempt to regain control of works for themselves. The courts decided that the publishers' desire to regain control in perpetuity was not in the public's best interest.

This outcome – as was the case with the Statute of Anne – was reflected in the U.S. Constitution in its ideal of promoting science and the useful arts by providing to authors and inventors the exclusive rights to their writings and discoveries.

So what did this protection look like in 18th century America? A time of horse and carriage? Copyrights lasted for 14 years (with a 14-year extension) in a time when it took days – or even weeks – to go from Boston to New York.

Now we live in an age of endless rights and extensions. Something is amiss. Bizarrely, the faster information circulates, the longer copyright protection lasts. This seems at odds with the intentions of the framers and the case law upon which they based their thinking. We're back to the 18th century debate: the battle between creators' rights and the industry, between limited protections and what seems like protection in perpetuity.

In light of what is being discussed at MiT5, it is important to ask what the new era of IP will look like.

With that, Uricchio introduced the panel and handed things over to Wendy Gordon.

Gordon set up a film on best practices for fair use that was created by a coalition of documentary filmmakers. Copyright is designed and intended to provide ground rules for using copyrighted materials. One can always use facts and ideas and one may use expression provided that its use is deemed to be fair.

It is difficult for people to use all of the liberties that the law provides due to resource constraints – in terms of determining and defending fair usage. One doesn't need a lawyer, though, in order to use some of the rights provided by the law. In fact, if you create coalitions you may get unexpected support.

To get support, and to have full rights under copyright law, individuals and organizations need to think about three things: coalitions to consider and address the issues, courage in terms of standing up for one's legal rights and new customs that can be pointed to when challenges are made.

Our free speech rights aren't always exercised because we often choose the second-best option rather than insisting on our rights. This is a chilling effect driven by fear of the repercussions; but more than that, it creates a custom that allows rights holders to continue to act as they do.

So how does one take a stand for fair use? One approach is isolated courage – simply proceeding without securing the necessary rights. Another is to reach reciprocal agreements not to sue. Yet another is to consider the prisoner's dilemma and try to come up with a cooperative first move – for example, putting content into the public domain.

What Pat and her group have created is a standard for what documentarians can use under fair use practice. The coalition they created wasn’t limited to filmmakers but even gained the support of insurance companies that are willing to insure projects that abide by the agreed-upon fair use standards. This adherence can then lead to customs that can ultimately change the way the law views content and usage.

Today, fear is driving the purchase of lots of licenses – which can lead to a vicious cycle for those courageous individuals who try to act in fair use. Through projects like this one, it becomes possible to push back on the misinformation of the content community to bring fair use back into common use.

The film – Fair Use and Free Speech – explains the creation, content and purpose of the Documentary Filmmakers' Statement of Best Practices in Fair Use.

Following the film, Gordon Quinn spoke. As a filmmaker coming out of the 1960s, Quinn said that many of the early films included fair use content everywhere. Now, though, he finds himself self-censoring. For example, in Hoop Dreams he paid $5,000 to license "Happy Birthday." In a more recent film, The New Americans, the song was removed altogether.

Quinn supports the Best Practices document mentioned in the film. He is just finishing a film on stem cell research that includes lots of fair use content. He has been able to proceed because he knows that it will be insured and that it will be aired. What he found particularly empowering was the knowledge that he didn't have to go to anyone for direction. By relying on a set of agreed-upon standards, filmmakers can determine for themselves the appropriateness of fair use in their works.

While Quinn is seeking to understand and use fair use, he is also a copyright holder and has concerns with how fair use is applied. He offered, as an example, footage from one of his earlier films of a young girl at a demonstration that was requested by filmmakers working on a project on abortion. The filmmakers wanted to use the footage to convey a sense of the time. When Quinn saw the footage in context he was concerned – it implied that the youngster in the film had herself had an abortion – and would not let them use the content.

Hal Abelson spoke next and presented himself as a simple nerd, one intimidated by the rest of the panel. Abelson addressed the issue of fair use in academe and the fact that if it isn't used it will be lost. He described the academic community as being "too chicken" to act on fair use and offered two recent examples that he'd come across.

The first was a request for a sentence of his to be included in another author’s work. The second was the inclusion – and ensuing comedy of errors – of a reference to recent research on the effects of alcohol on the anti-oxidant benefits of strawberries on a blog. (There was first a request that the copyrighted material be removed, which was posted to the blog, followed by an apology for the misunderstanding, followed by a subsequent request by another organization that the content be removed . . .)

At MIT, this problem has several manifestations. On Stellar [the school's online course resource system], access to materials is often limited to students of a specific course and only for the duration of the course. Some of the works he cited were classical ones, clearly by their nature no longer under copyright; but the selected translations were still protected.

Abelson has been very active in developing the MIT OpenCourseWare program. For this they have avoided relying on fair use content in virtually all cases, electing either to secure permission for third-party content, remove it or recreate it. Of the 81 hours that it takes to produce a course for the system, approximately 40 percent is spent dealing with protected content.

Universities, he believes, need to rely more on open content and also become more aggressive about their use of fair use content. The restrictions being placed on usage – particularly the limits placed on students' access to information – spell the destruction of the university as an intellectual community.

Abelson was followed by Aufderheide, who wondered what the future will look like. Practice, she argued, makes practice, and this makes it critical that people use their fair use rights. This was the case in the development and adoption of the Fair Use Best Practices by the documentary filmmakers and in the insurance industry's agreement to provide fair use coverage.

The model used by the documentary community can be applied elsewhere – the university is one example, as are other situations where the production of content has become a community process that lends itself to the creation of coalitions. The MacArthur Foundation is also funding a project to create a fair use code for media literacy practitioners. This is especially important now that media literacy means helping people create the most compelling and creative content possible.

While all of the plenary sessions I attended were interesting, this one was probably the most important. For social media to work, there needs to be some understanding among those involved on how content will be used. Content appropriation and reinterpretation have become – thanks to technology – new tools for communication and expression. How people work with that content will have an impact on how that communication is received and, in turn, interpreted again. This panel presented a model for what can work and a warning for what might happen if steps aren’t taken to make it work.

Reimagining Identity
As was the case with the imaging panel this morning, I came to this with a set of expectations that didn't match up with the content. I'm pretty interested in the issues of identity in social media and was hoping that this would be discussed. Nope. This panel was more focused on how identity is created online (primarily through a discussion of celebrity culture that included the quote, "Tom Cruise is the most iconic actor in 20 years"). I've never thought of him that way, but maybe that's just me.

There was also an interesting presentation on the “Trickster Identity” but it was too nebulous and transitioned from one theme to another too quickly for me to follow a clear chain of logic. The third presentation of the session was on Deleuzian perspectives on ownership and identity on the Web. Of all the papers that were presented, this one was probably the least accessible to me and so I didn’t get much out of it.

Fourth Plenary – Learning Through Remixing
If the panel on copyright was the most important of the conference, this one was the most inspiring. Many of the panels and discussions that had taken place were focused on ideas and theory. This one was focused on real applications and projects that illustrated the ideas of creativity, ownership and collaboration that were at the center of the conference.

Panel:
Erik Blankinship – is a co-founder of Media Modifications, a new start-up whose mission is to expose and enhance the structure of media to make its full learning and creative potential accessible to all. He has many years of experience working with children as an inventor of educational technologies and activities and as a researcher studying the potential of digital media for teaching and learning literature, history, mathematics and game design. While an undergraduate at the University of Maryland, College Park, he was a recipient of the Jim Henson award for Projects Related to Puppetry.

Juan Devis – is a new media producer at KCET/PBS Los Angeles in charge of all original Web content including Web Stories, KCET's multimedia Webzine. He is currently working with the USC School of Cinematic Arts and the Institute of Multimedia Literacy to develop a serious game based on Mark Twain's Huckleberry Finn. Devis was recently awarded a writer's fellowship at ABC/Disney for his original screenplay Welcome to Tijuana which is scheduled for production in early 2008. Devis is president of the board at Freewaves, a non-profit media arts organization, and the project manager for OpenPlay.

Renee Hobbs – is associate professor of communication and education at Temple University where she directs the Media Education Lab. She has worked extensively with state departments of education in Maryland and Texas, and her new book Reading the Media: Media Literacy in High School English (2007) provides empirical evidence to document how media literacy improves adolescents’ reading comprehension skills.

Ricardo Pitts-Wiley – has been the artistic director of Mixed Magic Theatre for over 20 years. In that role, he has written/produced/directed a number of productions including From the Bard to the Bounce: A Hip-Hop Shakespeare Experience, Kwanzaa Song, The Great Battle for the Air, and four Annual Black History Month Celebrations at Portsmouth Abbey. Pitts-Wiley was resident artist at Brown University Summer High School in 2001.

Alice Robison – is a postdoctoral fellow in the Comparative Media Studies program at MIT, where she writes about literacy and video games. She is also a consultant for the New Media Literacies Project and advises several student-run organizations devoted to the study of video games and interactive media.

Moderator:
Henry Jenkins

Jenkins began by pointing out that there had been discussions throughout the conference of the historical antecedents of the topics at hand. In terms of using remixing as a tool for learning, he cited Lev Kuleshov – who started what may have been the first film studies program in the early days of the Soviet Union – asking his students to re-edit Birth of a Nation and Intolerance and also pointed to the use of commonplace books in the 19th century as an example of collected/appropriated content.

The purpose of this session was to share information on a number of current projects dedicated to promoting learning through remixing content. Jenkins pointed out that engineers learn how machines work by taking things apart and putting them back together. Can the same be done with culture? The people and projects represented on this panel suggest that it might.

Erik Blankinship started things off by discussing his current company, Media Modifications. They invent tools for exposing and enhancing the structure of media to make its full creative and learning potential accessible to all. This is a theme he promised to return to throughout the course of his comments and demonstration.

If you start with a black screen, you have the space to create a screenplay and ultimately a film or video. In the case of his demonstration, the video was a clip from Star Trek: The Next Generation. On the left-hand side of the screen the video of the scene appeared; on the right side, the text of the script. Blankinship was able to drag and drop sections of the script, which in turn reordered the words and action in the video. He described it as being similar to magnetic poetry, exposing the structure of the media and allowing it to be rearranged and reloaded.
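A minimal sketch of what seems to be the underlying idea – each script line linked to its clip's in and out points, so that reordering the text reorders the video – might look like this (the timecodes, filenames and lines are invented, not taken from the actual tool):

```python
# Each script entry carries its source clip and in/out timecodes, so
# "dragging" a line of script is just a list reorder; the video follows.
# All data below is invented for illustration.

script = [
    {"line": "Engage.",     "clip": ("ep1.mov", 12.0, 13.5)},
    {"line": "Make it so.", "clip": ("ep1.mov", 44.2, 45.8)},
    {"line": "Shields up.", "clip": ("ep2.mov", 30.1, 31.0)},
]

def render(sequence):
    """Print an edit decision list (EDL) from the ordered script."""
    for entry in sequence:
        src, t_in, t_out = entry["clip"]
        print(f"{src} {t_in:>6.1f}-{t_out:<6.1f}  \"{entry['line']}\"")

# Reorder the script; rendering plays the clips in the new sequence.
remixed = [script[2], script[0], script[1]]
render(remixed)
```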

He next demonstrated how this type of remixing and restructuring could be used to create new content. In this case, he created a countdown by selecting and connecting numbers used by Star Trek characters in many, many episodes. Giving fans access to the structure of media – as in this case – can be a lot of fun.

This project led them to begin further work around the idea of adaptations. In the case of the Star Trek countdown, he was able to adapt the Star Trek content to tell the simple story conveyed through the numbers in an interesting and original way. At this point he announced adapt.tv, a Web site (not yet launched) to provide access to tools for media adaptation.

He used the adapt.tv tools to do two demonstrations of how people can expose the structure of media to create new adaptations.

The first example was of The Fellowship of the Ring and it started with two representations of the same content in text and video side-by-side. This allows for the comparison of the two forms to understand what is happening in each. Across the top of the screen, two timelines – one for the movie and the other for the book – appeared and were connected where the two formats shared content. He described this capability as a new type of closed captioning that allows additional detail from either medium to be used to enhance the other. As a scene played on the video, the text related to the scene from the book was highlighted, illustrating those parts of the book used in developing the film.
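The alignment itself can be imagined as a set of links pairing spans of the text with spans of the film's running time; highlighting the passage for the current scene is then just a lookup. This is my guess at the data structure, with invented numbers:

```python
# A sketch of the book/film alignment idea: each link pairs a span of the
# text (character offsets) with a span of the film (seconds), so playing
# a scene can highlight the passage it adapts. Data is invented.

links = [
    # (book_char_start, book_char_end, film_sec_start, film_sec_end)
    (0,    820,   0.0,  95.0),    # prologue
    (821, 4100,  95.0, 410.0),    # the council scene
]

def passage_for_time(t):
    """Return the book span adapted by the film at time t (seconds)."""
    for b0, b1, f0, f1 in links:
        if f0 <= t < f1:
            return (b0, b1)
    return None   # the scene has no counterpart in the book

print(passage_for_time(120.0))   # -> (821, 4100)
```

Running the lookup in the other direction (book span to film time) is what lets the comparison show which sections a given adaptation emphasized or ignored.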

The second example used Romeo and Juliet. Two different films were used – Zeffirelli's from 1968 and the 1996 DiCaprio version. In each case, the connections to the source text were shown at the top of the screen. This allowed one to see how the different film versions had adapted the text differently, choosing to emphasize or ignore sections of the story. This exposure of the underlying structure creates opportunities for students to study and consider the thinking and context behind the final content.

A final fun element of the process that Blankinship demonstrated was the ability to cast a remixed version of the film by combining performers from each of the versions at hand.

All of this provides for the deep analysis of content in multiple formats. With this, Blankinship's time came to a close.

He was followed by Juan Devis.

In 2002/2003 Devis worked to develop a video game with students at Belmont HS in Los Angeles. Ninety-five percent of the students were from Central America and Mexico, and the goal was to create a game based on life in their home countries to help illustrate their history. It was a good idea, but there were two problems: first, the students were involved in the conceptualization of the game but not in its development or production; and second, they were living here in the US and were making a game about Latin America.

These problems led to the decision to do another project: a game about the neighborhoods they live in, one they'd be able to create and code themselves. Pac-Man was chosen as the basis of the game because it was familiar and essentially non-violent. It could serve as a simple template for the students to remix their neighborhoods.

Devis demonstrated one version of the game called El Imigrante. In this remix of Pac-Man, a Mexican character moves through LA, picking up trash and trying to get a Green Card while avoiding the Minutemen. Each of these games (and there were several) became a portrait of the students' neighborhoods.
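The "simple template" idea is worth spelling out: the maze logic stays fixed while a theme dictionary reskins the pieces. Here's a sketch of that separation – the El Imigrante mapping is my reading of Devis's description, not the project's actual data:

```python
# Pac-Man as a remix template: the game roles are fixed; each theme
# dictionary just reskins them. The mappings below are illustrative.

BASE_ROLES = ["player", "pellet", "power_up", "ghost"]

themes = {
    "classic": {"player": "Pac-Man", "pellet": "dot",
                "power_up": "power pellet", "ghost": "ghost"},
    "el_imigrante": {"player": "immigrant", "pellet": "trash to collect",
                     "power_up": "Green Card", "ghost": "Minuteman"},
}

def describe(theme_name):
    """Show how one theme fills in the template's fixed roles."""
    theme = themes[theme_name]
    for role in BASE_ROLES:
        print(f"{role:>9}: {theme[role]}")

describe("el_imigrante")
```

Each student game is then just another theme dictionary – which is exactly what made the format workable for students who were coding for the first time.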

These games addressed the first of the problems – limited student involvement. Now Devis is working on a project to deal with the second – presenting American civics and history in an interesting and meaningful way. The project is built around Huckleberry Finn, which initially seemed like a great idea but had a lot of problems that he hadn't anticipated: issues of bondage and slavery, and language that, as a foreigner himself, Devis hadn't considered.

They went back to the original novel and broke it apart – a process that is currently ongoing. As he and the students are reading the novel, they are creating a “side script” to reimagine it in 21st century LA. For example, instead of the Mississippi River they are using the LA River, etc.

While he is still planning on creating the game, he’s come to realize that there are a lot of issues around race and class that young people here in the US just don’t understand. Before making a game out of this content, the tools for understanding the issues needed to be applied – which is what led to the creation of the side script and the discussions that followed.

Renee Hobbs was next and she discussed how young people can be helped to read the media.

Hobbs started by discussing the importance of media literacy as a way for young people to understand the underlying nature of the media. Remixing, she believes, is a tool that can deepen our appreciation of the constructedness of media messages. For her, as a media literacy educator, this understanding needs to be a core element of the community.

Remixing also helps illustrate the plasticity of meaning and how it can so easily be altered. This works because remixing allows us to see and appreciate the functions and structure as they are expressed in the content. In the past Hobbs had worked on developing curricula and materials for teachers but not for reaching kids directly.

To do this, Hobbs and her group have created My Pop Studio to help girls between 10 and 12 understand media literacy. It was launched in July 2006 with funding from the Office on Women's Health (part of HHS). The site includes 15 games and a number of discussion forums and is used by 10,000 to 20,000 people per month.

There is a TV Studio that provides drag-and-drop editing tools. In the Music Studio kids can create their own pop star to get a sense of all of the choices involved in constructing popular music. In the Magazine Studio they can turn themselves into celebrities, constructing a celebrity identity to help understand image, celebrity culture and body ideals. In the Online Studio girls can experiment to understand how their social relationships are impacted by their online life.

The goal was to combine the key elements of media literacy (building skills around creative production and authorship, as well as analysis skills) by exploring themes like celebrity culture and music and how these are being used to form and understand identity.

To illustrate her points, Hobbs demonstrated Pop Star Producer. It begins by asking visitors to select a value message in order to consider how values play into decisions about music and image. Next they choose a musical genre, lyrics and an image/style for their character. When done, the avatar performs the music and other visitors rate the performance and try to determine the intended value message. It was an interesting demo and exposed – to a degree – how music functions. This section also has a feature that shows how music is used to sell products by using it to convey ideals and associations.

As girls use My Pop Studio, they can begin to understand how meaning changes as a result of context. It also helps them to understand the essential "constructedness" of all representational forms. These aren't things that kids just understand, so it's important for them to have an opportunity to learn.

Ricardo Pitts-Wiley spoke next on his work with the Mixed Magic Theatre.

Pitts-Wiley is currently working on Moby Dick and wasn’t sure how this project fits in with the others. This is because what he is doing is less about remixing than getting people into the mix.

One of the challenges in working with material like Moby Dick was to do it in a way that would be interesting to young people while preserving the integrity of the novel. His goal was not to deconstruct the novel but to keep it whole. Times change, people change, but Moby Dick remains constant.

The white whale is Ahab's nemesis, but it isn't something young people identify with; the pursuit, though – the idea of tracking and vengeance – is something they very much understand. In this interpretation, Moby Dick is transformed from the white whale into the white thing – cocaine – the seas into a city and the Pequod into a subway.

With this new context, Pitts-Wiley took his group back into the novel to find the words and themes they would need to address. Although the setting had been shifted into their time, they still needed to tell Melville’s story.

The first time he did this project was at the Rhode Island Training School, a reform school. The participants were all bright people and he explained to them that they were going to be doing Moby Dick as cocaine – but that they would have to read the novel and then choose a character that they identified with and redefine it for the new context. One example of this recontextualization was Queequeg as a pimp. Why a pimp? Because Queequeg is colorful, exciting, dangerous, he deals in human flesh and he's loyal. However the kids chose to redefine their characters, Pitts-Wiley forced them to defend their choices using the novel.

People often ask him why he uses Moby Dick as the basis for this project. It is, he said, because it is all there. All of the characters are there, the history is there, the culture is there so there is no need to invent any of them. It is also great and challenging literature.

Pitts-Wiley chose to complicate his task in producing Moby Dick by doing two versions simultaneously – one with young people and the other with older members of the community. Part of this decision was based on his belief that young people are taught things that are important but that are not demonstrated as being important in the community.

Part of his goal is to create a community around a shared language; and for him, having many members of the community read Moby Dick helps to create that common language and deeper community. It offers opportunities for engagement between different people; but only if everyone shares the experience of reading the novel.

The idea of community building aside, Pitts-Wiley still needed to tell the story. As the two companies – the young one and the older one – worked on their productions, they began to teach and learn from one another. Not just about the novel, but about community and the impact of culture on community. Throughout the production, familiar cultural elements – music, fashion, authority figures – are used to convey the meaning of Melville’s work.

Pitts-Wiley digressed for a time to describe the size, scope and impact of the drug culture until Jenkins let him know his time was coming to a close.

He then discussed the importance of keeping people moving into the future – but not at the expense of older literature. Moby Dick is the first of three projects. The next one will be Frankenstein followed by Uncle Tom’s Cabin. The big goal of this program is to change the literary landscape of the community over the next 10 years and to bring young people not only into the technical age, but also into the literary age.

Alice Robison was the panel’s final speaker.

Robison is working on a project with Jenkins at CMS around remixing. Her comments focused on the idea that new media literacy borrows from and extends the concepts of new literacy studies. New media literacy expands on – but does not replace – new media studies by creating a place for the study of things like participatory culture.

The new media literacy framework borrows and builds upon some of new media studies’ cutting-edge theories of cognition. All of this has been slowly developing over the last 10-15 years as new theories of literacy, ones that go beyond functional models, have come about. The new theories focus more on the process by which people create meaning and include ideas like:

Multimodal literacy
Multiliteracy framework
Collective intelligence
Problem-based learning
Situated and distributed cognition
Peripheral participation

At the heart of all of this is the question: where does meaning come from? Much of the way new literacy has been taught has been based on a consumerist model – to view an image and to understand what it is trying to communicate – similar to what Hobbs's work [described above] attempts to do.

This approach is now expanding to include the participant when thinking about the creation of meaning by considering what happens in the space between the individual as the consumer of a message and the writer or producer of a message. Robison isn’t interested in the making of meaning but more in what happens in the space between the production and consumption of meaning.

The role of context is something that she finds to be very important when discussing the issues of media literacy. As part of the New Media Literacy project they have identified a number of what she refers to as “exemplar videos,” and at this point Robison showed a number of them.

These videos, of which there are eight, are designed to provide a framework for understanding media literacy. The intention is that educators will access these videos to use with their students in a variety of environments. Robison sees value in the way that these videos expose the process of media making to people unfamiliar with the way in which new media works.

There is also a skills and competencies white paper available on the site that addresses topics like play, performance, simulation, appropriation, multitasking, distributed cognition, collective intelligence, judgment, transmedia navigation, networking and negotiation as they relate to media creation and new media literacy.

The New Media Literacy project will be working with Pitts-Wiley and the Mixed Magic Theatre next year. Robison encouraged everyone to read the white paper as it develops many of the theories behind new media literacy and why they are so critical.

The issue of new media literacy is really important. I’m often worried that the capabilities presented by social media will simply be co-opted as tools to reach markets in new ways. To make these tools and ideas really valuable, people need to understand how to use them and how to dissect the content created with them. This final session of the day presented examples of social media being applied to enhance our understanding of content, context and meaning. All four of the projects that were presented will help accomplish this goal.

As I think I mentioned in an earlier post, attending this conference made me realize just how little we really understand about social media and its implications. Everyone is talking about the latest and greatest tool or technology, but this event gave me pause to consider what is happening and why it matters in a larger sense. I'd suggest that PR and marketing people take the time to visit the event Web site and prowl around for a while. There are recordings of many of the sessions and a growing collection of the papers that were presented.

[tags]MIT, MiT5, Media, Copyright, Fair Use, Cultural Commons, Wendy Gordon, Gordon Quinn, Hal Abelson, Patricia Aufderheide, William Uricchio, Remixing, New Media Literacy, Erik Blankinship, Juan Devis, Ricardo Pitts-Wiley, Renee Hobbs, Alice Robison, Henry Jenkins[/tags]

MiT5 – Disruptive Practices, Reproducing Images and Brand Strategy/Consumption Practices

The next morning came early. I was moderating a panel at 9:00 so I got into Kendall just after 8:00. I wandered around a bit taking pictures and talking with people and before I knew it I was running to make it to the panel before it started. (I made it with time to spare.)

Disruptive Practices
The panel was on Disruptive Practices. The first presenter was my old college chum Jim Cypher. Jim is doing media art through Somerville Community Access Television and is also the operations manager at the Larz Anderson Auto Museum.

Cypher showed a handful of videos that he's created. Most of them were around vaguely (and in some cases explicitly) political themes. They included no narrative or interpretation and left it to the viewer to draw whatever conclusions they saw fit. The content itself was conceptual and generally repetitive but effective (if blunt) in conveying meaning within the interpretive limits mentioned above.

Following the videos, Cypher did a brief presentation on the idea behind disruptive mixing and mashups. One interesting point that he raised (and which all of the panelists did in one way or another) was around identity and the use of anonymity for creating this type of content. My own feelings on identity are ambivalent. Sometimes I feel that social media content and communication should always be done under one's own name and identity; but I also understand that there are times and cases where that can't happen. In those cases, maintaining a persistent alter ego seems like the appropriate thing to do.

The next panelist was Jay Critchley. Jay is a visual/conceptual artist and his videos and presentation were mind-boggling. Critchley incorporates his ideas on a regular basis – literally. Establishing corporations allows him access and freedom that he might not receive as an individual.

He showed a handful of projects that he’s worked on that certainly deserved to be called disruptive. One was his submission to the Army Corps of Engineers for the development of Nantucket Sound. It included remaking the island as Martucket Eyeland and featured some outlandish ideas and suggestions for improvement. Now lots of people might come up with interesting ideas like this, but Jay takes it a step further. Not only does he have the ideas, he also develops them and has incorporated several companies to promote his ideas and bring them to whatever degree of fruition they might achieve.

Another one of his projects was the Old Glory Condom Corporation. Started as the realization of an idea first presented at an exhibition at the List Visual Arts Center at MIT in 1989, Old Glory Condoms went on to market condoms with a trademark incorporating the US flag and a condom. This led to the trademark's initial rejection – a decision that did not stand.

During his presentation Critchley also showed video to illustrate how people are engaging with content in new ways. One example he used was a group of singers covering the Oreo cookie jingle. As I watched and thought about the idea of disruptive media and its application, I wondered how easily it might be co-opted. Unintentionally, the next speakers provided a hint.

Next up were Ben Mako Hill and Elizabeth Stark. He is at the Media Lab and she is at Harvard Law (which I mistakenly referred to as HBS and was quickly corrected). They discussed different – and often controversial – approaches to copyright: reformist (which aims to make the current system work), utopian (which seeks a new approach that builds on what is already in place) and transgressive (which rejects current thinking on copyright and encourages actions that challenge the current system).

They discussed the fact that copyright has been essentially focused on the rights of the creator and has paid little attention to the rights of content users. The transgressive model seeks to challenge this thinking and is represented by the growing "pirate politics" that has emerged in various forms around the world. The most recognized example has been Sweden, where the torrent site The Pirate Bay has spawned a full-blown political movement.

They also cited a case in France where Aziz Ridouan, a high school student, has become a visible and outspoken advocate for piracy. Ridouan gained prominence by voicing his opinions of copyright laws during a press conference of the French equivalent of the RIAA.

One interesting point that came up during discussion of his situation touches on the issue of co-opting ideas and individuals for commercial or political gain. In the case of Ridouan, there have been questions as to the role of the Socialist Party in his becoming the public voice for transgressive copyright thinking in France. While neither Hill nor Stark supported or refuted the claim, it did raise the issues (in my mind at least) of transparency and authenticity.

The only other issue that I wondered about during this discussion (and during several points during the conference) is the fact that many people (at least in the online communities that I spend time in) try to justify piracy by citing the poor quality of many copyrighted films, TV shows and music. I’m never clear on why – if the content is so bad – people want it in the first place whether it is free or not.


Reproducing Images

The next session I attended was Reproducing Images. Let me admit right here that I thought it was going to have something to do with Flickr and how images are shared. It wasn't. It had more to do with how images can be used to convey meaning beyond their content and how content consumers have understood and interpreted visual information. While much of what was being discussed was interesting, it wasn't what I was expecting. It was also highly academic and so was not, in the end, an especially interesting session for me.

Brand Strategy and Consumption Practices
Next I moderated the session Brand Strategy and Consumption Practices. This was absolutely fascinating to me – and not for the reasons I had assumed. I was imagining a discussion of how brand is conveyed and how it is changing. What was discussed instead was how brand is understood and can be studied in the current media environment. Because I was moderating the session I was unable to take notes as thoroughly as I now wish I had.

Zvezdan Vukanovic, the senior advisor for media analytics for the Government of Montenegro, started off with a discussion of interactive television and its role as a brand-building tool. How can one really define interactivity with so formal a media channel as television – and how interactive can television as a channel really be?

Interactivity in the cases he described seemed to be limited to the ability to access increasingly discrete content pools (enabled by the fragmentation of the channel), the ability to get additional and deeper content on topics (or brands) of interest and the ability to interact with brands through games, etc. His content was interesting but it described only a very limited form of brand interactivity.

One area that he touched on briefly though was the idea of peer-to-peer interactivity through enhanced television service and how this could give rise to user-defined branding. I think that this is something that happens naturally among people through unmediated channels and interactions but wondered how this type of thing would work in a medium like television where advertising and sponsors are so central to the content. Would they feel comfortable paying for a communications channel that could be harmful to their brands?

Andrew Feldstein, a doctoral candidate at Pace University, went next. He discussed the ways in which consumers co-opt brands and build communities around shared experience. These communities, he pointed out, were originally started around tangible brands. What has been the impact, he wondered, of the divorce of the physical from the brand? And how do you validate, view and interpret what is essentially a nameless/faceless brand community?

Answering this question is at the root of the research he is doing. How can one measure and understand the attitudes of a brand community without a tangible good or a physical community? To do this, Feldstein has developed some deep analytic tools and has begun applying them to discrete communities that share some attributes (in this case, the negative attitudes toward Microsoft's Windows Vista within both the Macintosh and Ubuntu communities) but are, as it turns out, dramatically different in the underlying reasons for their shared opinions.
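Feldstein didn't share his actual methodology, but a toy version of the kind of analysis described – scoring posts from two communities against shared "reason" keywords to see whether the same negative attitude rests on different complaints – might look something like this (keywords and posts invented):

```python
# A toy sketch, not Feldstein's method: profile two communities'
# complaints against the same set of "reason" keywords to show that a
# shared negative attitude can have different underlying drivers.

from collections import Counter

REASONS = {
    "usability": {"slow", "confusing", "bloated"},
    "freedom":   {"drm", "proprietary", "lock-in"},
}

def profile(posts):
    """Count keyword hits per reason category across a set of posts."""
    counts = Counter()
    for post in posts:
        words = set(post.lower().split())
        for reason, keywords in REASONS.items():
            counts[reason] += len(words & keywords)
    return counts

mac_posts    = ["Vista is slow and confusing", "so bloated and slow"]
ubuntu_posts = ["Vista means drm and lock-in", "proprietary drm everywhere"]

print("Mac   :", profile(mac_posts))     # skews toward usability
print("Ubuntu:", profile(ubuntu_posts))  # skews toward freedom
```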

The complexity of his methodology – and the clarity and implications of his findings – were fascinating. As someone involved in marketing, I’d never seen so much information about the attitudes of brand communities distilled and presented. By Feldstein’s own admission, this work is still at a very early stage. I’m looking forward to hearing more about it though as I try to understand (and help clients understand) how brand communities operate in the online environment.

The session's final presenters were Masahiko Kambe and Yuichi Washida, both of the Japanese advertising agency Hakuhodo. (Washida is also a research affiliate at CMS.) Their topic was word of mouth (WOM): how a more developed understanding of the concept could be achieved and whether messages in word of mouth communication could be effectively controlled.

As was the case with Feldstein, Kambe and Washida based their comments on extremely deep and comprehensive research. They looked at two classes of WOM – common WOM (which is essentially people sharing commonly-known information about a brand) and gap WOM (which involves people with greater amounts of information sharing it in a way that increases their audience's understanding).

The research they presented was around the impact of WOM (in Japan) on the Toyota Yaris brand. They measured the impact of various media types (print, TV and online advertising; newspaper and magazine articles; etc.) on common and gap WOM and found (not surprisingly) that different media types had different effects on WOM activity.

They also looked at how increasing the frequency of exposure to these media types would impact WOM and found that in some cases increases made the communication less effective. Given the general interest in WOM, I am hoping to get my hands on their detailed findings when I am able. As is the case with understanding the behavior of brand communities online, this research on the channels that influence WOM and their effectiveness was fascinating.

Of the sessions I participated in during the first part of the day, the one on brand consumption was the most interesting and the one whose lessons I most want to understand and apply to my own work. I was disappointed by the image session, but that had more to do with my expectations than with the content that was presented. All in all, a good and thought-provoking set of sessions and information.

[tags]MIT, MiT5, Media in Transition, Disruption, Jim Cypher, Jay Critchley, Ben Mako Hill, Elizabeth Stark, Brand Strategy, Brand Communities, Zvezdan Vukanovic, Andrew Feldstein, Masahiko Kambe, Yuichi Washida[/tags]

MiT5 – Second Life, the Nature of the News and the Second Plenary

One thing I noticed during the first session was that the woman sitting next to me was participating in the conference in Second Life. Second Life was a pretty frequent topic/theme during the conference so I decided to head to the session on it.

Mary Hopper from Northeastern described her efforts at creating a knowledge system for the community. I had the opportunity to speak with Mary before the session and then again on and off throughout the day. She talked about the fact that she had had a vision of how knowledge could be organized since she was a child but had never had a way of expressing that vision before Second Life.

For Mary, what is cool is that Second Life provides a platform that allows people to ask questions like: how do you build a world? how do you get people into the world? and how can you engage people in creating the world? Her work on developing a theory of knowledge within Second Life is an exciting example of the potential answers.

Burcu Bakioglu looked at how some of the antisocial behaviors (hacking and griefing) within Second Life can be viewed as creating “performance narratives” that help to create shared stories for the community. She talked about the fact that poaching is frequent but that the poachers themselves are often poached and have their words and actions turned against them. These were interesting ideas but they were not presented in an especially compelling way.

Brent Britton talked about the idea of virtual ownership and intellectual property within Second Life. He wondered if Second Life needed a separate set of laws to govern behavior within the community. While he didn’t explicitly answer this question (or at least I can’t recall it if he did), he did point out that there are two mechanisms for control in Second Life: the terms and conditions and the code.

He pointed out that the T&C are essentially a contract that all too few people take the time to read, but that it is one that lays out the rules for the community – including some that ought to give people pause, particularly in the area of IP. At the end of the day, though, everything is controlled by the underlying software code that powers the environment – and, almost like the physical laws of the real world, that code has the final say over what can and cannot happen in Second Life.

The final presenter in this panel was Jeffery Bardzell. He spoke on the role of fashion as a tool for self-expression within Second Life. The only thing that struck me about Jeffery’s part of the discussion was his reckoning that he has more than 3,000 items of clothing for his avatar. That just seemed bananas to me.

As much as Second Life might be kicked around or derided in some circles, it’s cool that it has provided a platform for someone like Mary to realize a longtime idea. As the ideas and technology behind social media are introduced, we all need to be patient to see how they are applied and by whom.

The next session I attended was The Nature of News. I missed the start of the panel and came in while Claudia Schwarz from the Department of American Studies at the University of Innsbruck, Austria, was reading her paper, “Creativity, Ownership and Collaboration in the News Business.” Her points – which focused on the way the news media frames information as a means of asserting ownership – were interesting. What was more interesting, though, was the conversation that followed.

This conversation focused on the role of the individual content consumer as a potential contributor to the news process. The idea of citizen journalism is not new, but it was interesting to hear perspectives from outside the US on it. The impression I got was that people in Europe assume that the US media is more narrowly focused than theirs (probably true) and that citizen journalism isn’t as active (probably false).

One attendee seemed to be arguing for a radically decentralized form of news gathering (built around citizen journalism and without much in the way of editorial oversight). The problem with this is that some stories need dedicated beat reporters in order to be uncovered and written – and this means focusing people not necessarily on a news story but rather on a news source that could produce stories. That is a hard commitment for a citizen journalist to make.

The other issue that came up during the conversation was the potential death spiral happening in many newsrooms: lower circulation = less revenue = scaling back on reporters/correspondents = weak or non-differentiated news (based increasingly on the wire services) = lower circulation . . . It was not a new theme, but it is always interesting to hear different people’s takes on it.

Second plenary session – Collaboration and Collective Intelligence.

This panel started with Tom Malone, of the MIT Center for Collective Intelligence, discussing the idea of collective intelligence. He defined it as groups acting together in ways that seem intelligent. He pointed out that there have been examples of collective intelligence throughout history but that technology is enabling new forms that need to be considered and understood on a much deeper level.

The core question is how people and computers can be connected so that they can act more intelligently than any people, groups or systems have in the past. Doing this requires collecting and connecting the right people with the right computers. He cited NASA’s Clickworkers and Garry Kasparov vs. the World as two examples of effective collective collaboration.

The first panelist to speak was Trebor Scholz, from the Institute for Distributed Creativity at SUNY Buffalo. He started by suggesting that the MySpace generation has a lot to learn about working for free. His point was that user generated content on sites like MySpace is what drives traffic (and ad-based revenue) without providing any sort of compensation to the creators.

He suggested that the current climate of fear has led to more and more online interaction, and that when we think about online interaction we don’t generally think of the labor it involves. This is in part because people aren’t producing objects online; they are becoming “virtuoso speakers,” and they do so by posting, tagging and commenting.

Where does the value of online spaces come from? Scholz asked the audience. It comes from the collective intelligence of everyone who participates. Where does that value go? Into the hands of relatively few. Basically, the very few are benefiting on the backs of the many. He described this simply as capitalism moved online and wondered how people ought to be compensated for their participation and contribution.

It was a pretty interesting point. Especially if you consider that 40 percent of all traffic goes to a very small number of sites, that most of the content on the Web is still user-generated and that many of the companies crying loudly about their content rights aren’t talking about how they plan to compensate people for driving traffic to a given site . . .

Cory Ondrejka from Linden Labs spoke next. He mostly discussed their approach to IP, the size and growth of the internal economy and the volume of content being generated by members. (On a typical day, 34 user YEARS are spent on content creation!)

He then went on to talk about some of the collective intelligence applications of SL, mentioning the analysis of NOAA weather trend data as one example. He described the collaborative/collective intelligence aspects of SL as the main things that set it apart from the Web. As an example, he discussed what had happened with Aloft, a Starwood Hotels brand in SL. When Starwood opened the space and began surveying users, they found that the design just didn’t work. They were able to redesign the space based on user feedback and input. He also discussed the rise and effectiveness of protest movements within SL as a means for members to reach Linden Labs with complaints and concerns.

I thought his points were interesting, but I didn’t buy them all. For example, user feedback is often collected and used to improve the design of standard Web sites, so I didn’t find the Starwood example that compelling. The idea of protests cropping up – and being effective – was very cool, though. Given the amount of time and effort people are putting into building SL (a privilege for which they are not only uncompensated but for which they have to pay), I wish he’d responded to Scholz’s point regarding user compensation.

The final panelist was Mimi Ito, a cultural anthropologist focused on technology and kids who is currently at the Annenberg Center for Communication. She is doing ethnographic work on anime and games in Japan and how they shape the collective imagination.

Japan, she argued, is in an age of media referencing: media has become the mechanism through which people connect with more content than they otherwise could. Referencing collective sources of culture provided by media has always happened, but there has been a substantive change in the content and channels people interact with. Rich media content has become how we share and tell others who we are. Ito calls this hypersociality.

An example of this is Pokemon. What’s important isn’t just that it is abundant, but also the relationship between its various media forms – the mediamix – which allows content and channels to be combined to support each other. This illustrates the practice of sharing media and demonstrates how it is migrating away from static/stationary screens and into the places and contexts of everyday life.

Pokemon brought the idea of content mobility to the fore and also demonstrated the ability of kids to comprehend and consume high volumes of complex characters and dynamics. When kids get together with Pokemon, they find themselves participating in a collective imagination built around media and content.

As someone who’s played Pokemon with my kids and experienced it in all of its manifold expressions, I understand what Ito is talking about. Books, games, movies and clothes are all knit together to create a media-driven culture that its members immediately recognize and are able to participate in. I’ve heard complaints that the energy kids invest in understanding the world of Pokemon could be better applied elsewhere – but few other areas offer this kind of complexity with the richness of evolving and interrelated media types.

This was the last session of the day and I left with my head spinning. The idea of the Gutenberg Parenthesis, the possibilities of SL as a tool for organizing information and the issues of compensation for content contributors were all really interesting.

[tags]brent-britton, burcu-bakioglu, claudia-schwarz, collaboration, collective-intelligence, cory-ondrejka, jeffery-bardzell, mary-hopper, media-in-transition, mimi-ito, mit, mit5, news, second-life, tom-malone, trebor-scholz[/tags]

Media in Transition 5 – Overview and First Plenary

The theme of the MiT5 conference was creativity, ownership and collaboration in the digital age. All I can say is “wow.” This was a terrific experience from start to finish. I was there in my role as rapporteur for the MIT Communications Forum and so wrote summaries of two of the plenary sessions (one on copyright and fair use and the other on learning through remixing). I also moderated two panels – one on disruptive practices and the other on brand strategy.

I had intended to share everything in one post, but 16+ pages seemed like it ought to be broken up into smaller sections and so that’s what I’ve decided to do.

Let me begin by saying – as I have to anyone who’s come near me since – that we have only the most superficial understanding of the impact of new communication technologies and behaviors. By “we” I mean communicators and PR people. I tend to look at communication as a tool and try to choose the one best suited for the task at hand. My determination of which tool to use is usually based on past experience or on the experience and advice of colleagues. This is an effective way to work, but it is not an effective way to develop an understanding of the media, how it is changing and what those changes mean. For that, this conference was an effective and eye-opening experience.

When I arrived on Friday morning, the first person I saw was Jim Cypher. Jim and I went to college together and, as it happened, he was also on one of the panels I was moderating. It was nice to see a familiar face from the past so soon. The overall event attracted some 400 people from all over the world. There were academics, students, business people, filmmakers, writers, advocates and activists, artists and people who were just plain curious.

Henry Jenkins, the head of MIT’s Comparative Media Studies Program, got things started by discussing just what was transitioning in the world of media. He talked about how the face and nature of media and content had been changed by ongoing advances in technology and went on to show some examples of media being created today – discussions of which would be central to the conference.

When he finished, the first plenary – Folk Cultures and Digital Cultures – got underway.

What was the most interesting thing to me was Thomas Pettitt’s discussion of the “Gutenberg parenthesis.” This is the idea that it has only been since the rise of printing that the written word has assumed a canonical place in Western thinking. Before Gutenberg, Pettitt suggested, content was regularly borrowed from multiple sources, was not necessarily the same twice, varied based on its context and was generally unstable and open to borrowing, being borrowed from and reinterpretation.

According to Pettitt, we are entering a post-parenthetical period in which content is again being thought about and used in fresh and exciting ways. He cited a number of examples – including sampling, remixing and mashing up various content types and presenting them in new and unexpected contexts. It was a well-developed idea and one that I found myself applying to other sessions I attended.

Craig Watkins, from the University of Texas, reinforced many of Pettitt’s points during his discussion of black orality and cultural practices. He described rap and hip hop culture as being only the latest in a long history of creativity through appropriation.

The third speaker on the panel, Lewis Hyde (who actually went first by the way), talked about Benjamin Franklin as a pirate for his willingness to encourage (and even institutionalize) the violation of then existing copyright laws for the public good.

What was most interesting to me about this panel was the idea that the written word has gained its status only relatively recently and that this status is now being challenged by the rise of new technologies. It was an exciting and refreshing conversation.

[tags]MIT, MIT5, Media, Henry Jenkins, Jim Cypher, Thomas Pettitt, S. Craig Watkins, Lewis Hyde, folk culture, digital culture[/tags]

Wherever does the time go?

It has been weeks since I last posted, but it’s not for lack of things to write about. In fact, these past few weeks have been almost overwhelmingly interesting. What’s happened is that I’ve ended up with so much content that sorting through it to make it intelligible has taken time – too much time. I’ve also had some other writing demanding my attention – and a brief vacation that I’d foolishly hoped to use to catch up on things.

Excuses aside, let me say that I had the good fortune to attend the Media in Transition Conference at MIT at the very end of April, the H2.0 Conference (on human enhancement and augmentation) at MIT this month and the Greater Boston Chamber of Commerce awards dinner last week. These were three very different events, but all interesting in their own right. Rather than trying to summarize each in this short post, I’ll post on each separately (and, in the case of MiT5, in several posts).

My apologies for the hiatus and my promise to share some of the interesting content I’ve come across over the past few weeks.

[tags]MIT, Media in Transition, MiT5, H2.0, Media Lab, GBCC, Boston, Chamber, Commerce[/tags]