VW METHODS RESEARCH PANEL - MARCH 30, 2009



ROLAND LEGRAND: Okay. Shall we get started? Okay, we officially start now?



ROBERT BLOOMFIELD: Yeah, if everyone is here and everyone’s voice is good. I guess

we should just get an okay from Joel, if he’s doing the recording.



ROLAND LEGRAND: Okay. So welcome, everybody, to this special session of

Metanomics, where we will have a rather long discussion. And, first of all, thank you for

having me as your moderator today. Maybe I will just present myself briefly. My name is

Roland Legrand, and I am a business reporter working for Mediafin. That’s a Belgian

publisher of business newspapers and electronic services. And I have a personal blog about

Virtual Worlds, Mixed Realities. And so today we are here, in fact, to discuss the topic: What

can qualitative and experimental methods tell us about Virtual Worlds and culture?



And, in fact, what happened was, at the previous Metanomics gathering, there was the

famous Connecting The Dots, a part of the discussion, where Beyers Sellers, who is

here--and we’ll just present everyone briefly after this--but Beyers Sellers, or

Professor Robert Bloomfield was making some suggestions to send anthropology into a

more experimental direction, and that provoked a lot of discussion. And so now we are here

all together, in order to have a very tough discussion of what it is all about. So first of all, let

me present.



Celia is sitting here at my right hand--Artemesia Sandgrain, or, I would not say in real life,
but let us say in the other world, Celia Pearce, assistant professor of Digital Media in the School

of Literature, Communication and Culture, and director of the Experimental Game Lab and director

of the Emergent Game Group at the Georgia Institute of Technology. Of course, all our

participants in this panel have really lots and lots of activities and publications, and I’m sure

they will correct me if I’m really too incomplete or if I’m even wrong in describing all the

wonderful things you folks are doing there.



And sitting next to Celia is Beyers Sellers, Professor Robert Bloomfield, professor of

management and professor of accounting at the Cornell University Johnson Graduate

School of Management. As we all know, he has a special interest in experimental

economics, and he is, of course, the host of our Metanomics show.



Then on my left hand, we have Thomas Guildenstern, and Thomas Guildenstern happens to

be Thomas M. Malaby, associate professor at the University of Wisconsin-Milwaukee, social

cultural anthropologist, and his principal research interest seems to be the relationships

between institutions, unpredictability and technology, which seems to be, in these times,

extremely interesting--especially the unpredictability part of his interests.



And then at my far left, we have Tom Bukowski, Tom Boellstorff, associate professor,

Department of Anthropology at the University of California Irvine, editor-in-chief of the

American Anthropologist, and the writer of many things, but among those things is Coming

of Age in Second Life. Welcome here on this panel.



What I would like to propose for this session is that each of you will talk for about five
minutes, responding to what Professor Bloomfield, Beyers Sellers, said in his famous

Connecting The Dots segment of the previous show, so that everyone comments on that. And then

we will have a second round, again of about five minutes, where everyone can respond

to the other panel members’ statements. And, in fact, without further ado, I would like

Beyers Sellers to repeat or reformulate or explain a bit what he exactly meant during that

Connecting The Dots segment of the previous show. So, Professor Bloomfield, the floor is

all yours.



ROBERT BLOOMFIELD: Thanks a lot, Roland. Thanks to all the panelists for joining us.

And, finally, thanks to all of you for coming. This type of philosophy of science topic is

something that I love. We’re all academics on the panel, and part of it is just doing our work,

and part of it is thinking about why we do what we do and how we could do it differently. And

so I’m really looking forward to hearing what everyone has to say.



First, I’d like to say that, as far as I know, this particular Connecting The Dots segment is the

first one that people actually responded to because it really got much more of a reaction

than I was anticipating. I didn’t realize I was actually stepping my foot into, I think, a number

of hot-button issues and contentious issues and a larger discussion that has been going on

in anthropology and related fields, particularly in Virtual Worlds.



So just real quickly I want to emphasize what I see as the key point that I was trying to make

and give just a little bit of background on that, and Roland will cut me off, I’m sure, if I

yammer on too long.
ROLAND LEGRAND: I will do it. I will.



ROBERT BLOOMFIELD: There’s one part in particular, I’ll just read this. Oh, I should

mention that the entire Connecting The Dots comment piece is available somewhere.

Maybe someone can post this in text chat. There’s a box around here somewhere that says

“Touch for note card,” and you can touch that, and you can actually get a note card with the

entire content on it. But let me just read this one part from what I said before, “I think our

guests today Tom Boellstorff and Celia Pearce would probably agree with a theory or a

prediction like cultures arising in Virtual Worlds are shaped by the Real World cultures of the

players, by platform features governing their behavior and their in-world goals and

interactions and also affected by their early experiences.”



And then I go on to say, “But that’s not really specific enough because, if we’re going to test

assertions like this, we’re going to need to know how to measure cultural behaviors and

cultural differences, features and experiences. And we’re going to have to posit much more

specifically which features and experiences will result in which cultural differences. But

specifying a theory precisely is only the first problem. The larger problem is figuring out a

systematic way to test it.”



Now, I am, by trade, an experimental researcher, and that’s a particular method that I’ve

used as much by historical accident as because I think it's a good way to do research.

There are lots of opportunities, and once you get involved in a certain method, it’s what you

know, it’s what you use and you continue to use it. But I think that experiments can provide

one particular thing that’s especially valuable for cultural anthropologists, and I’d like to
explain this in light of--oh, I see someone is saying my voice is cutting out so I guess, if

other people are having problems, let me know through backchat, and I’ll know that it’s me.

Okay, looks like a couple other people can hear me fine so, good. Okay.



I’d like to point out that I am an accounting professor, and that will actually color a lot of my

remarks throughout this session, but, in particular, one of the things that I’d like to point out

is that accounting is the language of business, but it’s a very unusual language, in that it’s a

language that’s used when there is a very strong skepticism that you can believe what

people are saying. So the very short version of how I see experiments fitting in with

qualitative methods is that qualitative methods are really essential to gaining new

understandings about culture or about anything else. But it’s very hard to win over skeptics

with qualitative methods.



If people are inclined to disagree with you, either because they hold to a different ideology,

because they think you have a vested interest in misrepresenting, or they have a vested

interest in not believing you, then it’s helpful to have the additional controls that experiments

provide. And it’s helpful to have the sort of objective measures so that it’s just very difficult

for them to disagree with you. And so just to anticipate what I think the anthropologists will

say in defending qualitative methods, it’s very easy to talk about the limits of experimental

research. It’s easy to talk about the fact that--there are no slides. It just sounds like there

should be, given the way I’m talking. I apologize.



There are certainly problems and limitations of experiments, and I think that what I’m hoping

we will come away with at the end of the day is not just me saying, “Here are the problems
with qualitative research,” and the qualitative researchers saying, “Here are the problems

with experimentation.” But seeing how they complement one another. I’ll just end it there

because I think, obviously I have a lot more to say, but it’s going to be important to let the

qualitative research get set up so we can have the insiders talk about its strengths.



ROLAND LEGRAND: Okay. Thank you. Celia Pearce, Artemesia, what is your take on

this?



CELIA PEARCE: Well, I wasn’t sure if you wanted us to respond to this with general

comments from last time or to specifically what Rob is saying now.



ROLAND LEGRAND: Well, yes, you can comment on both, of course. I don't think it's really

contradictory what he is telling now, but maybe you see differently. So please go ahead.



CELIA PEARCE: Well, there are a couple of key ideas that I think were expressed a little bit

here and more so in the previous comment. There was a suggestion made that qualitative

research is not empirical, which is a kind of bias, what Tom has referred to as a

disciplinary partisanship that, somehow, if it’s qualitative research, it’s not empirical. And I

think that most people who work in the kind of research that the three of us do would refute

that claim.



But I think there’s sort of a larger sort of theological issue here. I would say that Rob is also

saying that quote/unquote “Skeptics do not take qualitative research seriously,” and I think

that’s a relative kind of perception because, in some fields, we’re taken actually quite
seriously. So people in economics and accounting may not take qualitative research

seriously; however, in other fields, even in some business fields, qualitative research is

taken quite seriously and considered to be empirical and legitimate as a research form. So I

want to just put that on the table because I don’t want people to have a misconception that

everyone thinks qualitative research is not empirical.



The second is this issue of quote/unquote “objective,” the idea of quote/unquote “objective

research,” and I think Tom will probably elaborate on this, and he’s much better informed

than I am. But this is basically what is referred to as a positivist perspective, and that means

that, among some sciences and among some practitioners of science, there is a belief that

one can make a positive determination about something that is absolute. In other words, the

sky is blue. This is a fact. We can’t refute it. It’s not subjective in any way. So there are

some people that would argue that science is based on making those sorts of objective

assertions.



However--and also the question of a theory being proven, I think one of the issues that

maybe we need to explore here is what do we mean by theory. In physics, a theory is

essentially a hypothesis. In anthropology and sociology and humanities, a theory is not a

hypothesis that is meant to be proven. The word “theory” actually means something quite a

bit different from that so I think that’s something we should try to tackle here.



And the other issue is the question of understanding that, you know, I think some of us

would argue that all science is subjective, and if you look historically, you will find that at

various points in time objectivism was used to support extremely subjective points of view.
And probably the best example of this would be eugenics, which was a quote “scientific”

discipline that was essentially used to reinforce racism and assert forms of sort of

[biological?] colonialism.



So I think we need to sort of unpack some of these issues, and I guess I’m just setting the

stage for where I think some of the debates lie. One: What do we mean by “objective”? And

is that, in fact, a real thing that exists? I would argue that there is no such thing as

objectivity. If a human is doing anything or thinking about anything, it’s inherently subjective.

And then this issue of qualitative research somehow being less legitimate than other types

of research, which I think is, in some of the blogging that went on, post our last

conversation, I think part of what we were feeling was that there was a statement made that

quote/unquote “we should be doing something more useful, that we should be doing

empirical research,” etcetera, and all of these sort of unleashed a whole set of assumptions

and disciplinary biases that I think we wanted to kind of unpack here.



At the same time, I don’t think this is about a Jerry Springer approach to a research

discussion. It’s not going to be us versus them. I think we all need each other, and I’d like to

close with sort of a metaphorical way to think about this. A good friend of mine, who goes by

Catherine Barth here in Second Life, made a great comment to me a few years ago, which I

had been pondering, especially since I do a lot of traveling. She said that you can travel

across the country by plane or by car, and, in each vehicle, you’re going to see something

different. It doesn’t mean that what you see is more true or more accurate. It’s just a

different way of seeing it.
And what I was sort of thinking about as I was flying back yesterday from doing some

qualitative research up in Seattle, looking out of the plane window, was that there's

some types of research that allow you to see the big picture from above, and there’s some

types of research that allow you to have a more intimate view at close range. And I think the

challenge we face is, you really can’t do both at the same time. I’ve done both quantitative

and qualitative research. And there’s some information you just cannot get from interviews

and from participant observation. And there’s some information you just cannot get from

surveys or other kinds of data-capturing methods.



So the best outcome of this, I’m hoping, is that we will strengthen our mutual appreciation

for the disciplines that we work in and perhaps find ways to collaborate so that we can use

each other’s perspective, i.e., the airplane view versus the on-the-ground view, to help

illuminate the subjects of our queries even further. So that's basically my opening

commentary.



ROLAND LEGRAND: Thanks a lot. So we cannot travel at the same time by plane and by

car. So, Thomas, shoot.



THOMAS GUILDENSTERN: Okay. Here we are. I’m back on. Yes, I have a lot of reactions

to this whole debate, and I very much appreciate Robert and Celia and Tom inviting me to

attend here. Issues of epistemology here, scientific epistemology, philosophy of science,

whatever you want to call it, basically the question of what kinds of claims can we make and

on what grounds do we make them have long been an interest of mine. I’m really struck by

the way certain kinds of dichotomies are going unexamined here, and I think Celia was
already gesturing toward some of them.



In particular, from Robert’s comments, both the ones from the previous event, but also the

ones starting out today, this contrast between qualitative methods and experimentation,

from a scientific epistemology point of view makes no sense, and I think it’s important to

mention that. If you consider Gregor Mendel, the first and perhaps one of the most famous

examples of experimental science, on peas and the genetics of passed-on characteristics, this

was not quantitative; this was entirely qualitative. There’s nothing that makes

experimentation non-qualitative or quantitative, and there’s nothing that makes qualitative

work non-experimental tout court.



We have to be very, very careful and educated about those different terms and what they

mean. I think that this kind of blurring, I’m sorry to say, is most common in the hotly

contested territory of the social sciences, where it seems that some fields have felt a need

to try to gain legitimacy by drawing hard boundaries between certain kinds of approaches

and other kinds of approaches.



What Celia said is right, in my opinion. The issue here is one of positivism, and I see

positivism as the enduring faith in a law-driven world, that those laws are out there waiting to

be discovered, if we can only find them. And the funny thing is, for this conversation, from

my point of view, that you don’t even have to get to the social scientists or the humanists

that you’re worried about, if you want to find scientists who have directly questioned that

point of view. In fact, there are a couple of the most famous scientists we’ve had over the

last couple of hundred years, both Charles Darwin and James Clerk Maxwell, founder of the
thermodynamics laws, were people who did not believe in the positivist project. They did not

believe that experiment-driven science, based upon a presumption of laws of cause and

effect that would ultimately reveal more and more true laws about the world was the way

science needed to be. In fact, they felt that that was precisely what science needed to not

be. Rather, science needed to be built on claims that it always knew were provisional about

a changing world. And I’m just going to close with one more comment about that.



We might confine our discussion just to the natural sciences, even though I

broadly don’t feel that there is much of a distinction between all the parts of the academy.

But even just thinking about the natural sciences, you’ve got paleontologists, biologists,

astronomers, all different fields of the natural sciences that use qualitative methods, that do

not use experimentation, that use quantitative methods that are not experimental and

qualitative methods that are, that are doing exploratory work, that are doing work on specific

contingent events in history that are not about predictability and generalizability, but are

rather about the unique circumstances that lead to certain processes and from which other

outcomes followed, things like the extinction of the dinosaurs. Contingent events.

Generalizability and predictability are not the sine qua non of scientific inquiry or

empirical inquiry writ large.



ROLAND LEGRAND: Okay. Thank you. That was a very clear statement. Tom Bukowski,

Tom Boellstorff, your turn.



TOM BOELLSTORFF: Great. Well, once again, thanks for doing this, Roland, for

moderating this, and thanks for having this happen. I was the one actually who had the
original idea to invite Thomas because I really wanted to bring his perspective into the

discussion because I know he’s been thinking about these things a lot and also does

research on Virtual Worlds.



Just to start off, I just want to make a couple quick points. One: Actually, I don't think it

necessarily was sort of off topic to mention the stock market or whatever because while

there’s still sometimes a tendency to denigrate or talk down about Virtual Worlds, they are

an important part of what’s happening in human societies around the world. And we have a

new research community coming into being. It has a history, but it’s really come into being

about studying these Virtual Worlds, online games and such, and I think it’s really important

that we get that research community on the right foot, get it set off to be as broad and

engaged as possible because out there in popular culture the most common view of Virtual

Worlds is still from the Matrix movies where they use a Virtual World to enslave humanity.

But, that’s not all that Virtual Worlds can be for, and I really think developing strong research

methodologies around this is very important.



And also to talk about keeping away from the Jerry Springer thing, these things can often

get re-interpreted after the fact and misunderstood, so someone in the audience mentioned

the Stanford anthropology split. It didn’t actually split over these issues. I was a Ph.D.

student there during the split; that’s where I got my Ph.D. But, after the fact, that’s how it’s

been portrayed now in the media, in a way that I think actually isn’t helpful, and I think that

kind of thing can happen with these things around methodology.



So in Robert’s original comments, I think it’s very important that we try and create the
broadest possible research community that we can. And so, for instance, when in his

comments he said things like, “The larger problem is figuring out a systematic way to test it,”

that’s assuming that a certain notion of testing that Thomas was talking about

earlier--Malaby--is the only way that something can work to be called a science. Or when he

said, “The problem is that anthropologists are walking into existing cultures and working

backwards, trying to figure out whether their theories might explain the culture that’s already

arisen,” that, to us, that’s not a problem, and we don’t see it as working backwards. We think

we’re the ones moving forwards actually, if anything, because that’s not the notion of

explaining that we’re trying to do.



Just like we’re not trying to predict when will the dinosaurs become extinct again because

that kind of question doesn’t even make sense, if that’s our analytical framework. To us,

that’s not a problem. And so also when Robert said in his comments originally in Connecting

The Dots, “Despite a reputation for being free thinkers who challenge old ideas, academics

tend to be a pretty cautious lot,” and so on. People who do the kind of research that we’re

talking about, in terms of what I might call interpretive research--Thomas is absolutely right

that the qualitative-quantitative is not the key issue--don’t see it as more traditional in any

way, shape or form. So I think it’s very important to keep that very, very clear.



Because the really important issue here is that I think that all of these methods can be very

valid. I think that experimental methods can answer certain kinds of questions very

effectively and that interpretive kinds of approaches can also work very effectively. In one of

my other research lives, I do research on HIV/AIDS in Indonesia. If I want to understand

transmission rates of HIV, I might want to use experimental models or look at certain kinds
of statistical work that’s been done on different kinds of transmission rates. If I want to

understand why is it that maybe in a certain community certain kinds of things that I might

think of as sex, they don’t even call sex, and thus sex education wouldn’t make any sense to

them, then I’m going to need to use an interpretative kind of approach.



The example that I give in my book Coming of Age in Second Life about this--it's

not a perfect example, but it’s one way to think about it. It’s sort of along the lines of Celia’s

comment about a plane versus a car--is that I could go to a different country and, let’s say,

go to Japan and do an experimental project or a survey on the Japanese language, and I

might learn a lot of idioms of the Japanese language. But, on the other hand, I could spend

a year with just fifteen or twenty or even, for argument’s sake, five people, and, after a year

with them, become fluent in a language that I could then use to speak to millions of people.

Would I learn every word in Japanese? No. Would I learn every accent? No. But I would

learn more than what I had started out with. I would have learned something about that

particular language.



And so the key issue here, I think, is the attempt to put one methodology or way of looking

at the world above any other, and that’s the concern that I have with the way that

experimental methods are sometimes portrayed as more scientific or providing a sort of

deeper or more valid truth. They can do a lot, and then there’s certain things that they can’t

do. And, in the same way, interpretative approaches can’t do everything either. That isn’t

the only thing I would want to use in my HIV/AIDS research in Indonesia, but it can be very

powerful for certain kinds of things.

And, in terms of the idea of trying to create laws that can predict the future, this is a slightly
separate discussion we could have, but the question is: Does culture work in that way? Can

you create experiments that will predict culture? And I personally think that’s very difficult. I

don’t see many examples of that. Predicting behavior can be done in certain ways. But

trying to predict culture, to me, is like trying to predict where a language will go. Why did

Middle English become Modern English? But that’s not all that experimental methods could

do. They could do lots of other really great things, and you see already in Virtual Worlds

research many wonderful examples of things that Virtual Worlds have done.



But what I don’t want to have happen is that we have as something gets set up where we

assume that that is what’s scientific and that what anthropologists or other folks do is sort of

a little precursor step to sort of get to know the lay of the land before you do the real science

stuff. Because, as Thomas and Celia have mentioned, that does a disservice as to what

counts as science because not all science is experimental. And it does a disservice to the

incredibly powerful things you can learn from ethnographic or interpretive research that’s

done well. It can be done well or less well, and, even if it’s done really well, it never gives us

perfect knowledge, but since when does any method give us perfect knowledge? Right? It

moves the conversation forward, and, to me, that needs to be the goal of what we’re trying

to do as researchers.



And, as this is such a new research community in many ways, I really want to try and build

the broadest possible research community and really hold our hands up whenever we see

this kind of disciplinary or methodological partisanship that wants to rank these different

methods. So I think the metaphor of the plane versus the car--I don’t know if that’s the right

one--but that’s certainly one nice way to think about how different methods cast different
light on a shared problem. And, in some cases, the exact problems that we’re looking at

might be different as well. But there are many ways in which we can work together, but not

by ranking.



So I’ll stop there for now because there have been so many interesting comments in the

backchat already, and I’m sure maybe the other speakers as well want to say things. I think

we should just sort of let things circulate. But I think this is a very important discussion to

have early, at this stage, in this research community, before things get set in stone so that

we have the most tolerant, broadminded, robust research community that we can have.

ROLAND LEGRAND: Thank you, Tom. So we're proceeding to the second round and, once

again, I will ask Beyers Sellers to comment.



CELIA PEARCE: Can I ask a question quickly?



ROLAND LEGRAND: Yes, Celia.



CELIA PEARCE: I’m sorry to interrupt. I was hoping that what we would have here would

be more of a discussion, and it feels like it’s heading in the direction of a debate. So I just

wanted to throw that in. I’m hoping that part of this conversation will be a dialogue rather

than each of us being allocated five minutes to pontificate.



ROLAND LEGRAND: Yes, I know, but we will have--



CELIA PEARCE: But I’m happy to pontificate.
ROLAND LEGRAND: --about 90 minutes so I think that, after this second round, my

intention was to stop this kind of pontification and just open up the discussion. But I just

wanted to give every one of the four people here the opportunity to spell out what exactly

they wanted to say and what their exact positions are. Because, of course, what we

learned right now, we had a number of incredibly rich comments. For instance, what you

said, Celia, about the fact that the empirical and experimental--well, the problem is that we

are using those concepts, and we got the impression, I believe, that anyway Beyers Sellers

was seeing some oppositions there. Maybe this was not the right interpretation of what he

said, but anyway, there is this positivist perspective we were talking about and the

issue of objectivity, whether there is such a thing or not, and whether we should

work with a view on objectivity.



The issue of positivism came back a few times, and also, for instance, what Tom was saying

in his very eloquent closing statement, saying that, well, okay, we should have a broad

community here, research community and against partisanship, against the fact that some

methods would be considered better or higher or more scientific than the others. And, of

course, this beautiful metaphor of the car and the plane, that neither car nor plane is superior to the

other and each can be suited to certain ends, or maybe both fail. But anyway, there should be

no partisanship in going for either the car or the plane.



So I really would like, in order indeed to avoid the kind of monologue, that

every one of you very briefly respond to what the others were saying, and then we'll

just open it up, and we’ll have, I believe, about an hour then to go on, on those different
topics. Beyers, could you elaborate a bit on what you’ve said previously and in regard to

what your colleagues here are saying?



ROBERT BLOOMFIELD: Yeah. I’ll say first, great insights and lots and lots of them--so now

that I’ve seen Doubledown’s definition that he pulled offline: To pontificate means to speak

in a pompous or dogmatic manner. I would have to pontificate a long time to respond to

everything that came up. I’m just going to pick and choose a few of them.



ROLAND LEGRAND: Try to be very brief, yes.



ROBERT BLOOMFIELD: Yeah. No, I’m just going to pick and choose a few. I’d actually like

to start with Tom Boellstorff’s remarks at the end, where he responded to my comment

about academics being traditionalists and also talk about that we should be working to build

a new research community. And so I just want to give this a little bit more context. The

reason that academics tend to be traditionalists is because of our professional pressures.

That basically, to do research, it’s got to be peer-reviewed by people who usually have been

around for a while and have been doing things, you know, they have in their minds the right

way and the wrong way to do them. And so we all take big risks by going against our own

fields, in trying to do something new in our own field.



I mean actually it turns out, while experimentalism is being viewed as something--it is very

unusual to be an experimentalist in my field. There were almost none of them in economics,

finance and accounting 20 years ago when I started doing this. I was a pathbreaker. I was

someone flirting with, you know, it was a high-risk, high-return strategy. Fortunately for me it
worked out. Now there are a lot of people running experiments. But that’s a real risk.



So I guess one thing I want to say is, I really do believe that Second Life is a place where

you see researchers, the academics, coming from all sorts of different disciplines, looking at

something where there isn’t really--maybe we can create a new sensibility of what type of

research is appropriate, using new technology, looking at some slightly different questions.

And so this may be an opportunity to create a field where we have a much more eclectic

research program. So that’s the first point.



But I do want to make one more, I guess, a substantive point because the word “positivism”

was used a fair bit and actually in different ways. And I think that there are some key

distinctions here. Celia talked first about positivism and objectivity and talked about things

like eugenics. And I think it’s very important to distinguish positivism from normativism, and

this is a big thing in business where positive research is where you just try to say what is;

you just try to understand the world, describe the world, support theories, whatever, but

you’re not saying what should happen. Whereas, normative research is where you start

making value judgments about the right thing to do. So I want to make it very clear that I

don’t think personally, you know, I’m not out there pushing for normativism. I am saying let’s

just try to figure out what is and not be making activist arguments for or against something

like is eugenics good.



Now Thomas’s comments were really getting at positivism in more of a sense--well, there’s

a distinction between positivism, realism and instrumentalism. My impression, and Thomas

can correct me if I’m wrong, is that really he has a bone to pick with realism, which says that
there is a real static unchanging true world out there, and we can understand it somehow.

So now in this context, positivism means that you are constructing theories about what that

Real World is, and I see Thomas mentions systematicity. But you’re actually going to find

ways--you’re going to be able to see observable variables, measure and test and refute and

so on and learn something about the Real World.



Positivism can be a useful approach to science, even if you don’t believe in realism, and

that’s what brings you to the instrumentalists who say, “Look, we can’t know anything about

the Real World, but we’ve got a bunch of variables we can measure, and we can tie them

together. And, you know what? If we find that consistently this variable is associated with

that one in a certain way, then we feel we know more than we did without it.”



So anyway, I’ve talked long enough. I just wanted to make those two quick points. One

strongly in favor of synthesizing some of these methods and the other making sure we

agree on which type of positivism we’re talking about.



ROLAND LEGRAND: Okay. Celia, do you have anything to add to your previous metaphors?



CELIA PEARCE: I would maybe like to go after one of the other gentlemen. Is that all right?



ROBERT LEGRAND: Okay. Oh, that’s all right. Thomas.



THOMAS GUILDENSTERN: Yes. Sure, I can jump in. I was just about to type something.

Well, we could spend probably all afternoon talking about what we variously mean by
positivism. Auguste Comte is largely pointed to as someone who brought positivism into

clear focus for western philosophical thought. And, on that view, positivism is the view that

serious scientific inquiry should not search for ultimate causes deriving from some outside

source, but must confine itself to the study of relations existing between facts, which are

directly accessible to observation. So from that follows an enormous project, a scaffolding if

you will, and a faith and a narrowness of vision that the predictable, the law-like, are what

stand as real accumulated knowledge.



My view is that not even natural science, as a whole, pursues that unthinkingly. And that

such a view tends to always push aside the uniqueness, the context for change,

circumstances where processes are unfolding especially rapidly, especially where people

are involved, where any notion of generalizability or predictability should fly very swiftly

out the window, in favor of an understanding of the processes that are in place on the

ground. That was Darwin’s lesson to what had been the Agassiz-based biology: Focus on

process. Don’t get bound up in some project of believing species are somehow real. They

aren’t. They’re just a shorthand for things that we’re grouping together for our reasons. That

Darwinian approach is, to me, something that should underwrite all kinds of empirical

inquiry.



I would add to that that I agree that we should [visitate?] when we think about what kind of

fields we want as we move forward, in thinking about Virtual Worlds and digital life and all

the rest. I completely agree that much of the reason why we are in the situation we are

today, not only here, but more broadly in terms of how these misunderstandings get

reproduced, comes down, in the end, to the incentive that various disciplines have to push
themselves forward at the expense of other disciplines, to claim exclusivity, to claim

primacy, to claim a corner on the market of “True,” with a capital “T,” and that wrangling over

resources in the academy leaves a lot of damage in its wake, not the least of which is

the forgetting of what are, in fact, very old lessons.



This isn’t a new way of making science, in fact or empirical inquiry. It is, in fact, an old and

kind of well thought out and well-considered way of conceiving empirical inquiry, but it is

constantly in danger of being pushed aside as these kinds of rankings, these kinds of

scrambling to the top unfold in the course of fighting for resources.



ROLAND LEGRAND: Okay. Tom, maybe you want to add something about the struggle for

resources and from your perspective, again, partisanship or maybe some other comment.



TOM BOELLSTORFF: Sure. Sure, I can make a couple quick comments, and then I think

Celia’s going to jump in, and then we can just have it turn into more of a free-for-all.



ROLAND LEGRAND: Okay. Yes.



TOM BOELLSTORFF: Because part of my comments will be actually on a comment from

the floor, DuSanne’s comment. But first, I think that, Rob, we hit on a great example of a

misunderstanding that, if left unchecked, can become a damaging misunderstanding, and

that’s when you were talking about Thomas Malaby, Guildenstern, and about the issue of

realism, that you thought that was maybe a separate part of the issue. And I’m pretty sure

that he and I and Celia would all actually see ourselves as realists, and it’s often used as a
way to sort of dis-empower this kind of interpretive research, to claim that we don’t believe

there’s any reality.



I absolutely believe that there was an asteroid that maybe killed the dinosaurs, but I can’t do

an experiment to see that. I believe there are quasars that are real; they’re there, but we

didn’t find out about them by creating quasars in the lab. And so you can be a realist. I

believe in the English language, but the English language came into being in history. You

can’t create it in the lab. And so I think that everyone here in the room actually we’re all

realists. The question is, how do you get to or look at that reality, and I think it’s very risky to

ever insinuate that people who do interpretive work just believe that all interpretations go,

and it’s just anything goes. Any kind of research can be done well or badly. That

experimental methods don’t have the corner on realism. In fact, you could even argue in a

way they’re less realistic because they’re about creating experiments that don’t actually

happen in the physical world normally; that’s why they’re experiments. I don’t agree with

that. I believe that we’re all realists, and so I just don’t want to try and be interpreted as ever

dis-empowering another approach by claiming that they aren’t as interested in the real. And

that, I think, hits onto what Thomas was saying as well, that there are economic and political

issues that aren’t just about universities and the academy, but more broadly where there

could be sort of points scored for claiming that your method is the real “Science,” with a

capital “S,” and so you give me that grant. Don’t give it to those other people. And that’s

really unfortunate that we sometimes get set up that way. But I think that actually does have

more impact, perhaps, than we sometimes realize; that could be part of it. So anyway, I think

we’re all realists. I just want to be sure that that’s not the key issue.
The other quick point: DuSanne had a great question, “What does it take to leverage the

cross-disciplinary potential of Virtual Worlds? What’s missing? Is it a language? Is it a

shared vocabulary?” DuSanne, while I’m talking, if you even want to throw it back in the chat

so people can see, I think that’s a great question. I actually have a great way to respond to

that because I’m editor-in-chief of American Anthropologist, which sort of has this issue in a

microcosm because my journal publishes cultural anthropology, but also archeology,

linguistic anthropology and biological anthropology that includes people working with

primates, people who are digging up Lucy, bones in Africa from millions of years ago.



So my journal, indeed not my journal, but the journal for which I’m the editor, has a huge

range, and there have been times in the past when people have said, “How can we get a

shared conversation? We have to make everything shared.” And it’s had horrible results

because it’s caused authors and researchers to feel that they have to sort of dumb down

their research. So that like a linguistic person has to explain everything so that someone

who works on chimpanzees can understand it.



And, in terms of creating a research community, I don’t think it means that we always have

to read every word that the other people write or that we even have to understand it at the

beginning because it doesn’t mean a lowest common denominator. It means that there

might be different sort of communities of research that might have their own approaches or

languages, but they are open to reading each other and engaging with each other and that

there will be some examples of research that will move in the intersections or interstices

between those different approaches. Not all research will do that, and it’s not true that only

the best research does that. Sometimes the best research is solidly in one approach. It is
purely experimental or purely interpretive. It is not the case that the best research has to be

some kind of mush that combines everything. That’s been very disastrous for anthropology,

to think that the only good research has to have all of the four field approaches in it in every

single paragraph. That’s not helpful.



ROLAND LEGRAND: Okay, Tom.



TOM BOELLSTORFF: But, there are ways to create conversation, and I think that’s the

way to think about it, that even if we don’t understand each other at the beginning, we can

build these communities of research.



ROLAND LEGRAND: Thanks a lot. Celia.



CELIA PEARCE: Yes--



ROLAND LEGRAND: Do you wish to comment on--



CELIA PEARCE: I do. Thank you.



ROLAND LEGRAND: --are you a realist? Yes?



CELIA PEARCE: I would echo what Tom said about that. And I want to just talk a

little bit about how these methods can complement each other in different ways. I’ve actually

used some quantitative as well as qualitative methods together to very good effect, in a way
that I think illustrates why mutual respect and synergy is beneficial. I think

collectively we’ll gain more knowledge by breaking down some of those partitions that Tom

is talking about.



I did a study about two years ago on baby boomer gamers, and I decided to combine a

survey with more of the qualitative research and interviews and participant observation that I

had been doing on some prior research. I’m going to give you one example of a very simple

situation where the two things informed each other, just to give a taste of what we might be

able to accomplish by bringing these ideas together. In the survey, I had this very odd

finding that 98 percent of the baby boomers that answered the survey, which was about 270

participants, which is considered statistically significant, although I will make a caveat that

all surveys are self-selecting. So, again, we can’t have a kind of an absolute positivist

conclusion about data that is collected voluntarily, and yet at the same time we’re not really

allowed to collect it involuntarily. I’ll get into that a little bit more maybe later.



But anyway, I had this finding: a certain percentage--and I don't remember the number, but it

was probably something like 25 to 30 percent of the participants had consoles in their

homes, but 98 percent of them said that they played video games almost exclusively on a

PC. So that’s an interesting data point. But it was odd to me, and I was trying to figure out,

okay, well, what’s going on with that? They have the consoles, but they don’t play them.



It wasn’t until I sat down and did the interviews that a more nuanced portrait started to

emerge of the dynamics of the PC versus the console gamer in a typical household. And

what I discovered was that most of the people that had consoles at home felt that those
consoles were for their kids. Now that’s a piece of qualitative data that would never have

come out in a survey. But by boring down and having those 30 additional interviews I did as

a follow-up to the survey, I was able to find out what was behind this statistic, in a sense. So

I think this is a great example of a place where having the--and part of what we do as

ethnographers is, we tell stories. We hear stories. We observe stories in progress. And we

try to understand how cultures are constructed and what motivates people and what their

values are. And I think Tom’s earlier comment about the definition of sex in Indonesia is an

excellent example where, again, a statistical study of sexual practices would not get at what

people’s definition of sex is. So these are the kinds of things that I think we can see a better

and more complete picture by recognizing the synergies.



I think the big concern that I’ve had, and I think that came up in the prior conversation was,

we hear often this comment made by people that do typically more quantitative research

and not just people in economics, but in other fields as well, who refer to what we do as

anecdotal. And that’s, I think, a very pejorative way to characterize anthropology and

sociology research. So I think, at the very bottom, I think we need to get past some of

these--yeah, "diss" is what someone said--some of this dismissive rhetoric and really try

to approach these conversations from a position of mutual respect and also to understand

that each of our methods has strengths and weaknesses, and none of them is the ultimate,

complete “everything can be answered by my method” method. And so there are a number

of ways that I think we could actually, by dropping some of that rhetoric, get much more

interesting information by perhaps collaborating, sharing data and complementing each other's

work in a more synergistic way. All right. That’s it.
ROLAND LEGRAND: Okay. Great.



ROBERT BLOOMFIELD: I’d love to respond to that, if I could.



ROLAND LEGRAND: Yes, but we’re still having about half an hour now, and I also would

like to ask some questions which I see popping up in the backchat, and briefly go back to

you again, Robert. Because there are some questions, and I think many people out there

have those questions about the terminology used. For instance, especially, of course, the

positivism word or concept. For instance, there is Valiant Westland, who is saying

that--various responses, and one of them, Valiant, is just saying, “Positivism contends that

everything can be reduced to physiological, physical or chemical events. I think many of us

will agree that much of what happens in Virtual Worlds falls outside of these measurable

boundaries. So is there anything in Virtual Worlds which makes it more difficult for positivists

to do their thing or for instrumentalists, for that matter? Is there something which is on the

logical level different about working here in a virtual environment?” Anyone is free, of

course, to respond to the questions. I will not, once again, ask each of you to respond, but

just feel free to respond.



ROBERT BLOOMFIELD: I’d be glad to respond to that.



ROLAND LEGRAND: Okay.



ROBERT BLOOMFIELD: That sounds to me more like some type of reductivism than

positivism. It sounds a little bit more like saying, “We can predict everything from physicality.
Let’s derive every human behavior by what atoms do.” I don’t think that’s been proven to be

a very useful way of going, which is why psychologists don’t do a whole lot of physics to

derive their predictions and their theories. I don’t really see it. I think that the types of

debates and perspectives that you’re seeing on the panel really, I think, are independent of

whether you were talking about virtual online behavior or Real World behavior or frankly

whether you’re talking about physics or anthropology or psychology or accounting.



THOMAS GUILDENSTERN: Could I just immediately jump in too?



ROLAND LEGRAND: Yes, please.



THOMAS GUILDENSTERN: Yeah, I think that’s right. It all comes back, in kind of pure

pragmatist fashion I think, to what questions we want to ask. If we want to ask questions

about circumstances where there is an enormous amount of regularity where there is not

rapid change in place, where the agents on the ground are not interactive, then our work

starts to look more experimental. It starts to look more about controlled conditions because

those things become possible. But when our work is about contingent events and about

complexities and social complexities, such as an example, studying AIDS in Indonesia,

those things quickly become very, very limited.



So for example, the work I did that relates directly to Second Life is a book that’s coming out

in June, and someone asked me to say the title. It's Making Virtual Worlds: Linden Lab and

Second Life. It’s on Amazon now. And that was a work based upon ethnographic research

at Linden Lab in 2005--2004 through early 2006--and that was a unique period of time.
There is no stretch of time that is like that anywhere. That was Linden Lab there. And to

write about it is not to write about Linden Lab at all times in all places. It is to write about

Linden Lab then and what processes were in place then.



That kind of awareness of what our realist work, as Tom well put it,

should be--should be part of how we see all valuable science. It shouldn’t be that that kind

of work, because it is concerned with the contingent, is somehow less scientific because it’s

less about predictability. It’s about these complex processes and change that we’re all

deeply immersed in and about which we’re all desperate to understand more.



ROLAND LEGRAND: Okay. Thank you.



CELIA PEARCE: Yeah. I think there are some interesting things here to think about too,

and one thing I want to mention is that one of the characteristics of participant observation in

anthropology is that we study people in-situ. We go to the culture and study it as it exists,

and we take it in its own right. And I think, at this particular moment in time, considering the

large misconceptions in the popular media of Virtual Worlds, I’ve made a very important

choice through my work--and I’ll do my little book plug too, Communities of Play, which is

coming out in September--that I made a very deliberate decision to study these cultures in

their own right, in a respectful fashion, because I think that the mass media has this

propensity to want to make everything dysfunctional. And, like Tom said, most people think

of Virtual Worlds as the Matrix.



I think that one of the things we’re trying to do, in the interests of realism, is to deconstruct
some of those myths and prejudices and really try to get to the bottom of what people are

actually doing and experiencing in Virtual Worlds. So I think that’s a really valuable

contribution that we make.



And two other things I wanted to mention, that are distinct to these environments that I’m

particularly concerned with, one is, and I think this is a really wonderful kind of bridge

between Tom and my work in-world and Thomas’s work with Linden Lab and that is that

Virtual Worlds are cultures that are constructed in a very particular way. There is a piece of

software--where we are now existing inside of a piece of software that was made by

somebody, and that is the starting point for every culture that emerges in a Virtual World or

a massively multiplayer game: there are these ingredients, these affordances, these

characteristics, that the designers put into the World. And it is through what those enable

that cultures are created. And that, to me, because my background is in game design, is a

very important and nontrivial aspect.



What that also introduces is the opportunity, since we are right now inhabiting software, and

software is essentially data, it introduces the opportunity to collect data in a different way

than we typically have in Real World ethnography. But the challenge we face is that,

according to the research ethics that we have to adhere to when we do research, when we

get grants, when we do our work, we’re not allowed to collect data on people without their

consent or knowledge. So one of the things I’d like to try to explore is how can we use the

affordances that software gives us to study cultures in a different kind of way, as data, but

do that in a way that is consistent with the types of research ethics in studying human

subjects that we not only want to adhere to but are legally required to adhere to if we work
within the academy.



ROLAND LEGRAND: Okay. I think that this issue of ethics, or research ethics makes me

go back to a question by Dusan Writer, who was saying at a certain moment, “I was

confused enough about augmentationism versus immersionism. Now I need to decide if I'm

an instrument." And, having read his blog, I think that there was also a lot of

critique in this blog about this whole idea of considering a virtual society as a place where

one actually can carry out experiments. It is seen as rather shocking. At least this is an

interpretation of Dusan’s blog. It can be compared somehow with conducting experiments in

a tribal society. People would say, “Well, no, you cannot do it.” There would be a general

outcry. So maybe we can go a bit deeper into this ethical aspect of conducting experiments or

other research methods in virtual societies.



TOM BOELLSTORFF: I can do a quick response to some of that. And actually this will be a

sort of stream of consciousness because I’ll respond to that, and then I’m going to very

quickly respond to five of the great comments that were made, and then it looks like Rob is

going to chime in after that. So I’m going to do this as quickly as I can.



So first of all, in terms of the question of getting permission to be able to do research and

all that kind of thing, like Celia was talking about, I have a section in my book about ethics,

and that’s something that’s very interesting to me. If you do ethnographic research in the

physical world, like when I go to Indonesia, I don’t have to get a permission form every time

I see two people doing something, let’s say, in the shopping mall, where they could

conceivably imagine that anyone could walk by and see what it is that they’re doing and
write about it in a newspaper or blog or whatever, as long as I protect their confidentiality.



And so the same kind of thing can happen for a researcher in Second Life. Where you need

to get a consent form is when you’re creating any kind of new situation, by an interview, a

focus group, a survey, something where you are creating a situation that did not exist

before. Then it’s very important to get that informed consent. These issues show up for

experimental or non-interpretive work too. I think these ethics things are for everyone. It's very

important when you do that kind of work and people are asking about release forms. If I

interview you someday, Valiant, I’ll give you a release form. But, actually, if you email me

later, I can email it to you just so you can see what it looks like if you’re curious. I’m happy to

share that. I have it on a note card so, if anyone emails me, I can give it to them.



So now I’m going to through four comments incredibly quickly, just to build on some of these

things. DuSanne says, “The idea that you can use Virtual Worlds to extrapolate lessons for

the actual world sort of implies that Virtual Worlds aren’t permeable.” And you’re right, and

this is one of these yes and no issues. There are all kinds of ways that Virtual Worlds and

the actual world inter-penetrate. We’re sitting on chairs, looking at grass around us. We’re

all somewhere in the physical world. We’re having a conversation. All these things are

happening. But it doesn’t all just become the same thing.



If I crash, my computer crashes, and I’m no longer in Second Life anymore, that’s a

difference that makes a meaningful difference. I won’t be in this virtual space anymore. So

Virtual Worlds and the actual world aren’t just all the same thing. The Virtual Worlds do

exist, and they are distinct, but distinct isn’t the same thing as isolated and sort of out there
on Mars all by themselves. They’re getting affected by the physical world in all kinds of

ways, and it’s that play back and forth that I think is so interesting.



And Dizzy Banjo very quickly asked, “Do you think Virtual Worlds force a reassessment of

these working methods?” They absolutely do, but what was surprising to me in my own

research was that I didn’t have to change nearly as much as I thought I would, although I did

have to change some things. And you can see that in my book; I talk about that.



Then DuSanne asks--there’s two more things I’ll very quickly respond to, and I’m not getting

everything, I’m very sorry. DuSanne asks, “Do Virtual Worlds tend to make us think we can

manipulate the world, and are there issues and challenges about that?” I think Rob could

maybe comment more about this, but, for people who do experimental research, Virtual

Worlds are incredibly tempting. It's like a hundred-dollar bill lying on the ground because it

looks like you could create the ultimate kind of two societies that differ only by one rule, let’s

say about like how you can trade things. And then you can see what people do and how it

works differently, and that is something that you can’t do in the physical world without a

whole lot of money and ethical problems and so on.



But I do think the ethical problems remain here as well because, if you create two different

Virtual Worlds, those aren’t just experiments in a college dormitory or in someone’s room,

they are maybe more like a college dormitory--real life. I mean those are a part of people’s

social lives that are created there. So I think that’s a really interesting question that

experimenters, I think, are thinking about a lot right now.
And then very quickly, Chimera asked about what research traditions we all come

out of. I think it’s interesting that all three of us--and Rob as well--all four of us didn’t

obviously start doing research in Virtual Worlds. We have research experience in other

areas. And I think, for all of us, we found that really helpful, to sort of get ideas of how to do

research in these new places. My original degrees are in linguistics and music. Very, very

different things--one very experimental, and the other, well, music. But I think it's all been very useful for

me, particularly anthropology, in trying to think about how to study Virtual Worlds. So I’ll stop

there. And I’m sorry there are more questions, but I just want to keep things moving.



ROLAND LEGRAND: Okay. Beyers, you wanted to respond to this as well.



ROBERT BLOOMFIELD: Yeah. I guess I’ll pick a few points to respond to. The first one

that I’d really like to pick up on is, Celia talked about qualitative or interpretive research

being viewed as anecdotal. It’s not just a “diss,” as someone put it, but it really is a total

misconception of that type of research. It isn’t that interpretative and qualitative research is

bad data collection, which I think is, to some extent, how my comments are being

interpreted. A lot of what I see, and I think the example that Celia gave is a great one, which

is that she ran a survey. She got some data. Some stuff didn’t make sense. She did some

more qualitative follow-up, and it gave a much more full narrative, a much more full story

about what’s going on.



So my take, and I’d be interested in seeing whether--and I see Tom is saying, “I implied not

that they were bad, but they were second order”--my take, and maybe Thomas can tell me

what he thinks of this is that I think interpretative and qualitative research is really crucial to
just understanding what type of world you’re trying to study, getting that big picture down. I

don’t view it as second order, but actually step one. I think it’s essential, and it can clean up

the pieces when you get data that are hard to understand, as in Celia’s case.



Where I came from, and I guess this goes to Tom’s comments, is that he said

experimentalists view Virtual Worlds as like a hundred-dollar bill lying on the ground; that is

exactly my situation. I’ve been doing experiments. I was looking for a better technology to

do more interesting experiments, and that is how I found Virtual Worlds--really, as a

technology to help me study something that has nothing to do with Virtual Worlds. Since

then my interests have gotten a lot broader.



But I want to just take another example because my Connecting The Dots comments

actually were driven by something Celia said when she talked about how small changes in

the technology of a Virtual World or an online environment can actually affect the culture,

can affect the society, can affect behavior. Or maybe Tom, in the pre-interview, used the

example of changing the way you make friends. Friending is very binary in Second Life.

You’re either a friend or you’re not. And, the difference is whether you clicked a button, as

opposed to saying measuring how long people are talking together, and you might have a

friend meter that would go up or down, depending on the extent of your interaction. That

certainly seems like it would create a different society.
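
To make the contrast being described here concrete, the following is a minimal sketch, purely illustrative and not drawn from Second Life's or any platform's actual code, of the two designs: a binary friend flag versus a meter that rises with conversation time and decays with inactivity. The class and parameter names are invented for this example.

import time

class BinaryFriendship:
    """Second Life-style friending: a single click flips the state."""
    def __init__(self):
        self.friends = set()

    def add_friend(self, avatar_id):
        self.friends.add(avatar_id)   # you're either a friend or you're not

    def is_friend(self, avatar_id):
        return avatar_id in self.friends


class FriendMeter:
    """Hypothetical alternative: a score that grows with time spent talking
    and slowly decays when two avatars stop interacting."""
    def __init__(self, growth=1.0, decay_per_day=0.5):
        self.scores = {}              # avatar_id -> (score, last_update_timestamp)
        self.growth = growth
        self.decay_per_day = decay_per_day

    def record_interaction(self, avatar_id, minutes_talked, now=None):
        now = now if now is not None else time.time()
        score, last = self.scores.get(avatar_id, (0.0, now))
        days_idle = (now - last) / 86400.0
        score = max(0.0, score - days_idle * self.decay_per_day)  # decay first
        score += minutes_talked * self.growth                     # then reward talk
        self.scores[avatar_id] = (score, now)

    def closeness(self, avatar_id, now=None):
        now = now if now is not None else time.time()
        score, last = self.scores.get(avatar_id, (0.0, now))
        days_idle = (now - last) / 86400.0
        return max(0.0, score - days_idle * self.decay_per_day)

The only point of the sketch is that the second design yields a continuous, history-dependent measure of a relationship rather than an on/off flag, which is exactly the kind of small platform change the panel suggests could produce a different society.
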



So I think here’s the question that I would like to pose to the other panelists is, it strikes me

that, if you’re going to go to a Virtual World developer and say, “I think you’ll get a very

different type of in-world culture if you do this instead of that,” maybe they’d be satisfied with
just an interpretive analysis. But I think, ultimately, they’re going to want something, frankly,

more persuasive. They’re going to want to see a lot of data. They’re going to want to see

something that they don’t think reflect the interpretations of one person. Maybe I’m just

sticking my foot in it again and saying something that’s going to be viewed as insulting, but

when I deal with businesses, when I deal with the financial market regulators, they’re not

satisfied unless I can show them a thousand data points that we took, “We looked at 500

firms; some did this, and some did that, and here’s what happened, and this is replicable.”

This is how I see the two coming together.



I want to say one more thing before I let other people respond, which is on the issue of the

ethics of experimentation. I guess one key distinction I’d like to make is between field

experiments and lab experiments. There are people who will do field experiments, and

actually, we’re subject to these all the time. Companies all the time are doing controlled

experiments on their marketing campaigns. They’re trying five different marketing

campaigns on different websites. They’re using different pricing. They have coupons.

Randomly different people will see different color schemes on the website, to see what

happens. And they’re getting huge amounts of data from that. And they’re not asking

anyone permission. So there’s a lot of experimentation that can go on, that can give people

a lot more evidence of how to make money in online sales, for example.
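
The kind of marketing experiment described here is usually run as a randomized A/B test. The sketch below is illustrative only, assuming invented variant names, visitor counts, and conversion numbers; it shows random assignment to arms and a standard two-proportion z-test on the results.

import math
import random

def assign_variant(user_id, variants=("control", "treatment")):
    """Deterministically but pseudo-randomly assign each visitor to a variant."""
    rng = random.Random(user_id)   # seeding on the user id keeps assignment stable
    return rng.choice(variants)

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for a difference in
    conversion rates between two variants."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical results: 10,000 visitors per arm, 520 vs. 585 purchases.
z, p = two_proportion_z_test(conv_a=520, n_a=10_000, conv_b=585, n_b=10_000)
print(f"z = {z:.2f}, p = {p:.4f}")   # a small p-value suggests the design change mattered
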



But, in the Real World, this goes on as well, and just to give you a really quick example.

There’s a guy, John List, who does field experiments. He’ll go into the Real World to a

trading card convention--a sports or baseball card convention--and he'll do an experiment where, for

example, he’ll have an attractive woman versus an unattractive man selling things and then
testing who got better prices. And then, finally, you have the lab experiments, which I think are still a great use of Virtual Worlds, for when we think that a certain technological feature of a World is going to make people behave differently.



You know what? I can set up a World with OpenSim that won’t have more than 50 people in

it for a month. They’ll all know it’s an experiment. They’ll all sign the consent form. They’ll

probably get paid for their efforts, and we can compare Worlds with one another and get the

kind of data that people, certainly in business, whose financial interests are at stake, are

going to want to see, in addition to a compelling interpretative narrative.
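
For what a comparison like that might look like in practice, here is a minimal sketch under invented assumptions: two small OpenSim worlds that differ in a single rule, one behavioral outcome logged per consenting participant over the month, and a standard two-sample test on the difference. The variable names and numbers are hypothetical, not results from any actual study.

from scipy import stats

# Hypothetical per-participant outcomes after one month (say, number of trades
# completed), one list per experimental world. In a real study these would come
# from logs gathered with participants' informed consent.
world_a_trades = [12, 7, 9, 15, 11, 8, 10, 13, 9, 14]       # baseline trading rule
world_b_trades = [18, 14, 16, 21, 13, 17, 19, 15, 20, 16]   # modified trading rule

# Welch's t-test: does the rule change shift average behavior between worlds?
t_stat, p_value = stats.ttest_ind(world_a_trades, world_b_trades, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

Nothing in the sketch settles the panel's disagreement; it only shows the kind of replicable, quantitative artifact being described here as what skeptical audiences ask for, alongside an interpretive account.
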



ROLAND LEGRAND: Okay. I think there will be a lot of responses on this. Thomas, go

ahead please.



THOMAS GUILDENSTERN: Yes. First, I’d like to dispute that characterization of what

people out in the Real World, whether they’re in business or they’re forming policy for

government, want--what kinds of knowledge they want and what kinds of knowledge count as useful for them. What is incredibly important, whether you're in business or running a

government or running an NGO, is not how metrics in place somewhere measure up against

a given model or a schematic understanding. What you need to know are the trends and the

processes on the ground. You need to know the five percent of people who are doing

something differently and all the reasons why that five percent is probably going to be forty

percent tomorrow. And that’s not a predictive model; that is a model based upon an entirely

valid form of generating knowledge from which to act, to create grounds for action.
One of the things that amazes me about this kind of approach is, it suggests that, as human

beings, we are so unable to think that we need models to go chapter and verse and

tell us exactly what we are supposed to do in any uncertain situation going forward. Let me

give you the best example of this that there is, and it’s about as far afield from anthropology

as you could imagine, and that is intelligence. Throughout the twentieth century, much of the United States intelligence community was built upon a very elaborate field-agent

architecture, if you will, that was utterly grounded in expert understanding, critical

observation over long periods of time, expert evaluation and critical reading of their reports

filtered upward, cross-referencing of those reports by human beings making informed

judgments and on and on up the line.



And, in the ’70s and ’80s, the intelligence community shifted away from that toward a much

more political science model-driven set of guesses about metrics and about what kinds of

economic and political factors lead to instability or this and that. And that kind of modeling,

that kind of simulation--and I've spent a fair amount of time speaking to the defense community that's been interested in Virtual Worlds--has by no account generated any more reliable grounds for action, in fact less, than the "yes, labor-intensive," "yes, person-capital-intensive," "yes, qualitative," but deeply rich and process-based, rather than predictive or schematic pattern-based, model for how to

generate knowledge for action. That is true for businesses, and it’s true for governments.



When we go out there and we talk to these people in these fields, they are not people who

only--some of them are, but most of them or many of them are not people who only want

things that are quantitative, only want things that are predictive, because they know, and
we’ve all been very, very powerfully reminded in the most recent 12 months, of just how

limited such schemes for understanding what’s going to happen tomorrow are.



TOM BOELLSTORFF: Let me chime in too, because, Rob, I don’t think that’s what Thomas

is saying. But the thing is, when you say things like you just said where we want to reach out

to the business community, whatever, and they’re not going to be convinced unless we have

that kind of experimental data to convince them, when you say those kinds of things, that’s

the kind of thing that makes me want to bang my head into the wall because it’s just not

true. Maybe the people that you’ve been talking, but the reason why Intel hires me as a

consultant, and they have all of these anthropologists on their staff and very few

experimental people on their staff, is they actually find interpretive stuff more useful.



And there’s this huge military interest in understanding things like Virtual Worlds from an

interpretive kind of perspective. Now I’m not saying that the Military is necessarily a great

thing. I’m not saying, “Oh, great! We all get to work with a human train system kind of stuff,”

as [I’ve?] just mentioned. I’m not saying that that doesn’t have ethical problems at all. I’m

just saying there is massive business interest. Nokia. The Gates Foundation. I mean so

many different corporations out there in the business world hire our Ph.D. students from my

university precisely to do this kind of work, and it’s not an either/or situation that’s saying the

experimental work is not useful. But you, Rob, are the one who just made it sound as if,

“You know all this interpretive stuff is great, but we better get down to business and do that

experimental stuff, or the business world or the Real World out there just isn’t going to take

us seriously.” And I would submit that that’s just empirically not true.
There are businesses out there that really find the experimental work more valid. That’s

great. There are military, whoever, people out there that find that stuff more valid for certain

kinds of research questions. That’s great. But there are also definitely business,

government, other kinds of folks, nonprofits, who find the interpretive kinds of research

completely scientific, extremely valid and useful and important as well. So we don’t want to

make any sweeping statements. This is that partisanship thing that I’m trying to keep us

away from. It all can have use. And I'm telling you, out there in the business and government

world, there’s plenty of interest in all of these different kinds of research methods, and we

have to think about that ethically, regardless of the method in question.



ROBERT BLOOMFIELD: I’d like to respond.



ROLAND LEGRAND: Maybe Celia.



ROBERT BLOOMFIELD: Yeah, Celia, go ahead.



CELIA PEARCE: Sorry.



ROLAND LEGRAND: And then Robert.



CELIA PEARCE: Let’s let the little woman say something for a moment. I just wanted to

kind of echo what Tom was saying because I think that that tone and that kind of

commentary is exactly what got us into this in the first place. And I would say, in answer to

your question, Rob, yes, you did step in it. I don’t think any one of the three of us would ever
try to make an argument that the kind of work that you’re doing is somehow less legitimate

than what we’re doing and what we’re doing needs to be proven by what you are doing,

which is essentially what you just said.



As Tom has mentioned, I’ve also worked with corporations. From a business perspective

there are--in fact there’s a lot of data about business that quantitative research has

completely failed to gather. For instance, my work with diasporas of closed games. When

games close and people leave them, nobody knows where they go or what they do. This is

something that is impossible to capture through any other method than qualitative research.

And yet it is of tremendous value to businesses to understand what happens to their

customers when they go away. So these are the kinds of things. I would encourage you in

the future to rethink your position here because I think we have much more to benefit from

taking a more egalitarian, synergistic perspective on what we're doing.



As I said in the beginning, if you’re making a map from a plane and I’m making a map from

the ground, together we can create a much better defined image of the terrain than by

saying, “Well, your image of the ground is just fine, but it’s not valid until I take a picture from

an airplane.” That’s just, I think, a highly unproductive way to frame the discourse.

ROBERT BLOOMFIELD: Okay. Let me try to put this a different way because the examples

that I hear you guys using are examples in which I don’t think experimentation or sort of

large-sample archival econometric study would be possible. And so clearly, I think that’s a

very different world. And so the question I guess I would ask you is, if you are addressing a

type of question that could be addressed through controlled experimentation and through

systematic collection of archival, you know, large-sample data and doing statistical tests,
then one would be, “Why not?” and the other would be, “Don’t you think that people--” I

mean I guess I’m in a world, the topics that I look at are ones where experiments are

possible, and data’s out there.



And people do what I think you would call qualitative research in business all the time,

where they’re writing case studies, they’re writing clinical analyses, field studies, interviews,

all of that. But, if there’s something where you actually could do the experiment or could go

out and collect large-sample data, what you get back is, "Do it." And maybe my question

is: Why is that not the case in anthropology? And, if not, why not?



ROLAND LEGRAND: There’s this question also from Joel [Shockington?], “It seems that, if

you have both a qualitative and a quantitative analysis of an environment and they agree, that's a pretty strong indicator of validity, and if they disagree, maybe it indicates the need for refinement of thinking." Maybe say some more about this refinement.



CELIA PEARCE: Again, I think it’s sort of fallacious to think that it’s about validating each

other’s experiment or each other’s research because, what I guess I’m trying to say is, when

I’m in the town, on the ground making a map of understanding why those boundaries were

put in place, that’s a different kind of information. It’s not that his airplane view is going to

validate that or my ground view is going to validate his. They give us two completely

different kinds of information, which together create a well-rounded picture of what’s going

on. So I just think that, rather than saying my data is better than yours, my methods are

better than yours, I can prove what you’ve claimed or whatever, what we’re talking about is,

we’re looking at things at a different scale, at a different resolution, and on one part, I don’t
want you to say to me, “Oh, you can’t prove any of your URU research until I do a

quantitative study of it.” I don’t really want to do a quantitative study of it.



ROBERT BLOOMFIELD: Let me say that is not at all what I’m trying to say.

CELIA PEARCE: Well, that actually is what it sounds like you’re saying.



ROBERT BLOOMFIELD: If you’re going to write an article, where, in the end you’re going

to say that you believe you’ve uncovered a more general truth than just describing exactly

what happened in that particular circumstance, then that’s not enough for me. And I do

believe, I mean the qualitative research that I’ve read does not simply say, “Here’s a list of

facts about what happened in this particular circumstance.” It draws a bunch of conclusions.

And so, to the extent that you’re--



THOMAS GUILDENSTERN: It does, but you can draw conclusions without heading

anywhere down the road that ultimately ends in some valorized form of predictability or

generalizability. You can--



ROBERT BLOOMFIELD: But aren’t they general statements about their general

conclusions?



THOMAS GUILDENSTERN: I will answer that question exactly. They are statements about

what we understand to be the processes that were important and why they were important

in a certain circumstance and situation. And it is, of course, possible, as it is with a field

agent’s knowledge, that critical readers, people who take that knowledge and bring it to bear
on their own experience, can take something from it and understand something from it for

other circumstances. This is why, if we had really wanted to avoid disaster after the breakup of

the Soviet Union and have their economy in some kind of reasonable shape now, we

wouldn’t have turned it over to the Harvard Institute for International Development and their

schemes and models for what would work. People would have read all the ethnography and

all the history and all the social geography and all of that qualitative work, and we would

have left it in the hands of people who knew how to make critical judgments amidst a lot of

very, very messy information.



I really highly recommend the preface, I think, or the introduction to Anthony Giddens’ The

Constitution of Society. The famous social theorist, Anthony Giddens, takes on this issue

that generalizability is the aim of all science. And it is an utterly convincing argument. We

are not about generalizability. We are about making claims about knowledge. Yes, about

what happened, but that is far more than simple description.



TOM BOELLSTORFF: Just to quickly throw in--



ROLAND LEGRAND: Okay. Go for it, Tom.



TOM BOELLSTORFF: Once again, Rob, I think this is a case where there’s just a really

different perspective, and the way you're framing it is coming out as being partisan. To me,

it’s almost like saying, “Well, Tom, if you were interested in doing a study of 50,000

Indonesians and their sexual behaviors and what statistically puts them at greater risk for

HIV, then wouldn’t you use an experimental method or a statistical method?” to which my
answer would be, “Well, obviously, yes, but that’s not the way that I’ve set up my questions.”

I mean that’s not the way that my research questions work.



The second issue that you’re raising here about generalizability, because, once again, my

first degree is linguistics. When I get stuck on these things, I always go back to language as

a helpful way for me to think these things through. And when you do qualitative field

research, you are not limited to making generalizations only about the people that you talk

to, that you also can’t do the whole universe either, and you have to hedge it. It becomes at

the level of hypotheses.



But, once again, if I spend time with 20 Japanese speakers and I become fluent in the

Japanese language, I have data that is not only valid for those 20 people; I could then speak

the Japanese language with many, many millions of people. I wouldn’t learn every dialect or

every word in Japanese, but I would have learned something that is more generalizable

than just those 20 people that I learned the Japanese language from. That’s how culture

works. But that’s a different kind of generalizability.



From the positivists--and here’s where positivism is important--idea of generalizability, which

is like the law of gravity, where I drop a rock anywhere on the earth, at any point in history,

and its rate of fall towards the ground I can predict how fast that rock will fall because of the

law of gravity. That’s a different kind of generalizability, and that’s not the kind that we, as

interpretive folks, are looking for, but that doesn’t mean that we can’t do any generalization.

And that’s an extremely important distinction to keep in mind.
ROLAND LEGRAND: Okay. I have a kind of closing question, which is more based on the reactions to Dusan's blog post, and, unfortunately, he's not here amongst us. Reference was made there to Edward Castronova and his idea that what we learn and what we research in Virtual Worlds can, in some way, be used to make the broader society into a better place, which is a kind of very wild hypothesis, as I feel it. And I really, really would like to know what you guys are thinking

about this kind of approach, which is really trying to--it seems to try to make the world into a

better place, but other people find it a really creepy--and that was the word which was

used--a creepy way of thinking and working with science. What’s your opinion about this?

Anyone want to give it a take?



CELIA PEARCE: Yeah. I’ll start. Yeah, I think that there are some issues around this. I

mean, in the blog discussion, one other thing that came up was the issue of research ethics, and that historically there have been periods of time when behavioral

scientists have tried to experiment with behavior by putting people in a quote/unquote

“controlled” or laboratory sort of setting or a situation that had some kind of variables

adjusted to it. This kind of research was deemed to be unethical because it was

manipulating people’s emotions and behavior in a way that turned out to be harmful to the

research subjects.



An example that I recently heard that was really disturbing to me was an individual who

came into Second Life in the opposite of their natural gender and got involved in a romantic relationship with someone, under the auspices of experimental research, and the other person

didn’t realize that they were having a relationship with somebody of the opposite gender
from the avatar. And the outcome of this experiment was, it was extremely hurtful to the

research subject. So I do think that we need to really, really seriously think about this now.



I think there are some interesting possibilities for different types of experimentation, but I

think we also need to look at the history of experiments with humans in controlled

environments, just to make sure that we’re not revisiting scenarios that we’ve already been

through in the Real World to a very deleterious effect.



ROLAND LEGRAND: Okay. Thank you. Anyone else wanting to comment on this?



ROBERT BLOOMFIELD: Well, I'll just comment briefly. You said it creeped people out, or something, what Ted Castronova was proposing.



ROLAND LEGRAND: Yes, indeed. Exactly. Yes.



ROBERT BLOOMFIELD: I think it goes back to the distinction between positive and

normative. I think Ted wasn’t just talking about running some experiments to learn

something; he was really talking about running experiments to enable his vision of what

would be a better society. I think that that, more than the experimentation itself--yeah,

I see [Discord?] is saying "social engineering," and I think that's much more it. I think that's the

part, not the fact that he was talking about running controlled experiments to learn

something new, but the fact that he was talking about engineering society, in his vision,

didn’t go over particularly well.
ROLAND LEGRAND: Okay. Well, thank you for all of your contributions.



ROBERT BLOOMFIELD: Actually, if I could make one closing comment since it’s three

against one.



ROLAND LEGRAND: Yes, of course.



ROBERT BLOOMFIELD: And I’m clearly the one being schooled here. I’d just like to say

real briefly, this has given me a lot to think about, and I think the number one thing that I’m

going to be thinking about is how the differences in subject matter affect this debate that

we’re having. I think we’d all agree just generically that different methods are appropriate to

different questions. It isn’t clear to me that that is enough to explain all of the differences in

views that we have here, and so I guess I’ll just leave it at that. But, to me, if there were a

question that were amenable to all research methods, then what would this debate be? Because I think a lot of it has been people picking the types of questions that suit their

method because, of course, that’s what we all do.



So anyway, I’ll leave it at that. I just want to say thank you all. It sounds like you all have

books coming out again in the near future, so I’ll have an excuse to get you on Metanomics,

and we can talk about other stuff and maybe touch on this as well. Hopefully, I’ll be more

educated.



ROLAND LEGRAND: Okay. Thanks a lot to all of the people out there for staying with us--well, it was more than 90 minutes. Thank you for all your comments, and thank you to the panel members for their very insightful remarks. Bye.



THOMAS GUILDENSTERN: Thanks, everyone. Thanks for having me.



TOM BOELLSTORFF: Thanks for doing this.



ROBERT BLOOMFIELD: Yeah. This was great. I don’t know if any of you were here early

enough to see this, but they actually had the four chairs inside a big boxing ring. So it would

have given the wrong signal.



ROLAND LEGRAND: Well, Robert, I prefer it this way. The boxing metaphor, of course, would have been more interesting from a media point of view.



ROBERT BLOOMFIELD: You’re right.



ROLAND LEGRAND: But I think this discussion was much more interesting and honest

also, just trying to find the kind of nonpartisan approach to research in Virtual Worlds. It

seems to me a lot more useful than just shouting at each other.



ROBERT BLOOMFIELD: And, Thomas, I don’t know if you’re still around?



THOMAS GUILDENSTERN: Yeah, I am. I’m just about to [CROSSTALK]



ROBERT BLOOMFIELD: Oh, great. What was that book that you mentioned? Gibbens or
Giddens or something.



THOMAS GUILDENSTERN: Anthony Giddens. Let me type it in.

TOM BOELLSTORFF: I just typed in his name.



THOMAS GUILDENSTERN: Yeah. And the book I’m thinking of is his kind of masterpiece,

The Constitution of Society. You can also look at the philosopher Alasdair MacIntyre's book

After Virtue, on the limits of generalizability or the limits of predictability in the social

sciences. I think it’s chapter eight of After Virtue, maybe chapter nine.



ROBERT BLOOMFIELD: Hold on, I’m just checking.



ROLAND LEGRAND: So, Thomas, all right. And this is--



ROBERT BLOOMFIELD: Alasdair, with a “d”?



ROLAND LEGRAND: Yes.



THOMAS GUILDENSTERN: Yeah, I just typed it in there.



ROBERT BLOOMFIELD: Geez! No one taught his parents how to spell.



CELIA PEARCE: Thank you all very much for coming and giving your time to this

discussion. I have to be going to another engagement in a parallel universe. So I’ll see
everybody later.



ROLAND LEGRAND: Thank you, Celia.



TOM BOELLSTORFF: See you later. Take care, Celia.



THOMAS GUILDENSTERN: Thank you. Cheers.



CELIA PEARCE: Bye bye.



ROBERT BLOOMFIELD: I have to talk with one of my co-authors on a paper that has no

data whatsoever. It’s just a mathematical model.



THOMAS GUILDENSTERN: All right. Take care, all. Bye bye.



TOM BOELLSTORFF: Bye bye, everyone. Thank you.



ROLAND LEGRAND: Okay. Take care.



ROBERT BLOOMFIELD: Bye bye.


Document: cor1056.doc
Transcribed by: http://www.hiredhand.com
Second Life Avatar: Transcriptionist Writer

Weitere ähnliche Inhalte

Andere mochten auch

Andere mochten auch (7)

HUMANISM – RELIGION OR LIFE STANCE? A CRITICAL AND PROVOCATIVE ANALYSIS OF TH...
HUMANISM – RELIGION OR LIFE STANCE? A CRITICAL AND PROVOCATIVE ANALYSIS OF TH...HUMANISM – RELIGION OR LIFE STANCE? A CRITICAL AND PROVOCATIVE ANALYSIS OF TH...
HUMANISM – RELIGION OR LIFE STANCE? A CRITICAL AND PROVOCATIVE ANALYSIS OF TH...
 
“Methodology for the Infinite Archive”: Exploring the Implications of Digital...
“Methodology for the Infinite Archive”: Exploring the Implications of Digital...“Methodology for the Infinite Archive”: Exploring the Implications of Digital...
“Methodology for the Infinite Archive”: Exploring the Implications of Digital...
 
Humanism's many faces
Humanism's many facesHumanism's many faces
Humanism's many faces
 
Apa d 32 1st mtg
Apa d 32 1st mtgApa d 32 1st mtg
Apa d 32 1st mtg
 
Apa d 32 1st mtg changed
Apa d 32 1st mtg changedApa d 32 1st mtg changed
Apa d 32 1st mtg changed
 
Scholarly workflow and personal digital archiving interviews
Scholarly workflow and personal digital archiving interviewsScholarly workflow and personal digital archiving interviews
Scholarly workflow and personal digital archiving interviews
 
The challenge of the humanistic management
The challenge of the humanistic managementThe challenge of the humanistic management
The challenge of the humanistic management
 

Ähnlich wie 033009 Vw Methods Research Panel Metanomics Transcript

February 7th daryl j. bem, social psychologist emeritus joins robert bloomfield
February 7th daryl j. bem, social psychologist emeritus joins robert bloomfieldFebruary 7th daryl j. bem, social psychologist emeritus joins robert bloomfield
February 7th daryl j. bem, social psychologist emeritus joins robert bloomfieldDoug Thompson
 
Capital Punishment Argument Essays. Online assignment writing service.
Capital Punishment Argument Essays. Online assignment writing service.Capital Punishment Argument Essays. Online assignment writing service.
Capital Punishment Argument Essays. Online assignment writing service.Vanessa Perkins
 
Changing the Story - Using Social Media in Library Customer Services
Changing the Story - Using Social Media in Library Customer ServicesChanging the Story - Using Social Media in Library Customer Services
Changing the Story - Using Social Media in Library Customer ServicesRob Wannerton
 
Hospital Volunteer Application Essay. Online assignment writing service.
Hospital Volunteer Application Essay. Online assignment writing service.Hospital Volunteer Application Essay. Online assignment writing service.
Hospital Volunteer Application Essay. Online assignment writing service.Anita Gomez
 
Hamburger Essay Outline Template. Online assignment writing service.
Hamburger Essay Outline Template. Online assignment writing service.Hamburger Essay Outline Template. Online assignment writing service.
Hamburger Essay Outline Template. Online assignment writing service.Amy Colantuoni
 
ASSESSMENT CHECKLIST FOR DISCUSSION POSTS Content .docx
ASSESSMENT CHECKLIST FOR  DISCUSSION POSTS  Content .docxASSESSMENT CHECKLIST FOR  DISCUSSION POSTS  Content .docx
ASSESSMENT CHECKLIST FOR DISCUSSION POSTS Content .docxfredharris32
 
Visual Rhetoric, January 28, 2013
Visual Rhetoric, January 28, 2013Visual Rhetoric, January 28, 2013
Visual Rhetoric, January 28, 2013Miami University
 
Persuasive Essay On Drugs
Persuasive Essay On DrugsPersuasive Essay On Drugs
Persuasive Essay On DrugsErica Robertson
 

Ähnlich wie 033009 Vw Methods Research Panel Metanomics Transcript (16)

Hamming
HammingHamming
Hamming
 
February 7th daryl j. bem, social psychologist emeritus joins robert bloomfield
February 7th daryl j. bem, social psychologist emeritus joins robert bloomfieldFebruary 7th daryl j. bem, social psychologist emeritus joins robert bloomfield
February 7th daryl j. bem, social psychologist emeritus joins robert bloomfield
 
Metanomics Transcript Nov 18 2009
Metanomics Transcript Nov 18 2009Metanomics Transcript Nov 18 2009
Metanomics Transcript Nov 18 2009
 
Metanomics Transcript Nov 18 2009
Metanomics Transcript Nov 18 2009Metanomics Transcript Nov 18 2009
Metanomics Transcript Nov 18 2009
 
Capital Punishment Argument Essays. Online assignment writing service.
Capital Punishment Argument Essays. Online assignment writing service.Capital Punishment Argument Essays. Online assignment writing service.
Capital Punishment Argument Essays. Online assignment writing service.
 
Connecting Sentences
Connecting SentencesConnecting Sentences
Connecting Sentences
 
Metanomics Transcript Oct 14 2009
Metanomics Transcript Oct 14 2009Metanomics Transcript Oct 14 2009
Metanomics Transcript Oct 14 2009
 
What does-it-all-mean
What does-it-all-meanWhat does-it-all-mean
What does-it-all-mean
 
Metanomics Transcript Oct 14 2009
Metanomics Transcript Oct 14 2009Metanomics Transcript Oct 14 2009
Metanomics Transcript Oct 14 2009
 
Changing the Story - Using Social Media in Library Customer Services
Changing the Story - Using Social Media in Library Customer ServicesChanging the Story - Using Social Media in Library Customer Services
Changing the Story - Using Social Media in Library Customer Services
 
Hospital Volunteer Application Essay. Online assignment writing service.
Hospital Volunteer Application Essay. Online assignment writing service.Hospital Volunteer Application Essay. Online assignment writing service.
Hospital Volunteer Application Essay. Online assignment writing service.
 
Hamburger Essay Outline Template. Online assignment writing service.
Hamburger Essay Outline Template. Online assignment writing service.Hamburger Essay Outline Template. Online assignment writing service.
Hamburger Essay Outline Template. Online assignment writing service.
 
ASSESSMENT CHECKLIST FOR DISCUSSION POSTS Content .docx
ASSESSMENT CHECKLIST FOR  DISCUSSION POSTS  Content .docxASSESSMENT CHECKLIST FOR  DISCUSSION POSTS  Content .docx
ASSESSMENT CHECKLIST FOR DISCUSSION POSTS Content .docx
 
Process Essay Thesis
Process Essay ThesisProcess Essay Thesis
Process Essay Thesis
 
Visual Rhetoric, January 28, 2013
Visual Rhetoric, January 28, 2013Visual Rhetoric, January 28, 2013
Visual Rhetoric, January 28, 2013
 
Persuasive Essay On Drugs
Persuasive Essay On DrugsPersuasive Essay On Drugs
Persuasive Essay On Drugs
 

Mehr von Remedy Communications

Mehr von Remedy Communications (20)

Metanomics transcript june 23 2010
Metanomics transcript june 23 2010Metanomics transcript june 23 2010
Metanomics transcript june 23 2010
 
Metanomics transcript june 23 2010
Metanomics transcript june 23 2010Metanomics transcript june 23 2010
Metanomics transcript june 23 2010
 
Metanomics transcript june 9 2010
Metanomics transcript june 9 2010Metanomics transcript june 9 2010
Metanomics transcript june 9 2010
 
Metanomics transcript june 9 2010
Metanomics transcript june 9 2010Metanomics transcript june 9 2010
Metanomics transcript june 9 2010
 
Metanomics transcript april 21 2010
Metanomics transcript april 21 2010Metanomics transcript april 21 2010
Metanomics transcript april 21 2010
 
Metanomics transcript april 14 2010
Metanomics transcript april 14 2010Metanomics transcript april 14 2010
Metanomics transcript april 14 2010
 
Metanomics transcript april 14 2010
Metanomics transcript april 14 2010Metanomics transcript april 14 2010
Metanomics transcript april 14 2010
 
Metanomics Transcript April 7 2010
Metanomics Transcript April 7 2010Metanomics Transcript April 7 2010
Metanomics Transcript April 7 2010
 
Metanomics Transcript April 7 2010
Metanomics Transcript April 7 2010Metanomics Transcript April 7 2010
Metanomics Transcript April 7 2010
 
Metanomics Transcript Mar 31 2010
Metanomics Transcript Mar  31 2010Metanomics Transcript Mar  31 2010
Metanomics Transcript Mar 31 2010
 
Metanomics Transcript Mar 31 2010
Metanomics Transcript Mar  31 2010Metanomics Transcript Mar  31 2010
Metanomics Transcript Mar 31 2010
 
Metanomics Transcript March 17 2010
Metanomics Transcript March 17 2010Metanomics Transcript March 17 2010
Metanomics Transcript March 17 2010
 
Metanomics Transcript March 17 2010
Metanomics Transcript March 17 2010Metanomics Transcript March 17 2010
Metanomics Transcript March 17 2010
 
March10 Metanomics Transcript
March10 Metanomics TranscriptMarch10 Metanomics Transcript
March10 Metanomics Transcript
 
Metanomics Transcript Mar 3 2010
Metanomics Transcript Mar  3 2010Metanomics Transcript Mar  3 2010
Metanomics Transcript Mar 3 2010
 
Metanomics Transcript Mar 3 2010
Metanomics Transcript Mar  3 2010Metanomics Transcript Mar  3 2010
Metanomics Transcript Mar 3 2010
 
Metanomics Transcript Feb 10 2010
Metanomics Transcript Feb  10 2010Metanomics Transcript Feb  10 2010
Metanomics Transcript Feb 10 2010
 
Metanomics Transcript Feb 10 2010
Metanomics Transcript Feb  10 2010Metanomics Transcript Feb  10 2010
Metanomics Transcript Feb 10 2010
 
Metanomics Transcript Feb 3 2010
Metanomics Transcript Feb 3 2010Metanomics Transcript Feb 3 2010
Metanomics Transcript Feb 3 2010
 
Metanomics Transcript Feb 3 2010
Metanomics Transcript Feb 3 2010Metanomics Transcript Feb 3 2010
Metanomics Transcript Feb 3 2010
 

Kürzlich hochgeladen

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slidevu2urc
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsMaria Levchenko
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationMichael W. Hawkins
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slidespraypatel2
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountPuma Security, LLC
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Allon Mureinik
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024Results
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking MenDelhi Call girls
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure servicePooja Nehwal
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityPrincipled Technologies
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...shyamraj55
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersThousandEyes
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Drew Madelung
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerThousandEyes
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 3652toLead Limited
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Miguel Araújo
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationSafe Software
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking MenDelhi Call girls
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘RTylerCroy
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Paola De la Torre
 

Kürzlich hochgeladen (20)

Histor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slideHistor y of HAM Radio presentation slide
Histor y of HAM Radio presentation slide
 
Handwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed textsHandwritten Text Recognition for manuscripts and early printed texts
Handwritten Text Recognition for manuscripts and early printed texts
 
GenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day PresentationGenCyber Cyber Security Day Presentation
GenCyber Cyber Security Day Presentation
 
Slack Application Development 101 Slides
Slack Application Development 101 SlidesSlack Application Development 101 Slides
Slack Application Development 101 Slides
 
Breaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path MountBreaking the Kubernetes Kill Chain: Host Path Mount
Breaking the Kubernetes Kill Chain: Host Path Mount
 
Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)Injustice - Developers Among Us (SciFiDevCon 2024)
Injustice - Developers Among Us (SciFiDevCon 2024)
 
A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024A Call to Action for Generative AI in 2024
A Call to Action for Generative AI in 2024
 
08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men08448380779 Call Girls In Civil Lines Women Seeking Men
08448380779 Call Girls In Civil Lines Women Seeking Men
 
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure serviceWhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
WhatsApp 9892124323 ✓Call Girls In Kalyan ( Mumbai ) secure service
 
Boost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivityBoost PC performance: How more available memory can improve productivity
Boost PC performance: How more available memory can improve productivity
 
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
Automating Business Process via MuleSoft Composer | Bangalore MuleSoft Meetup...
 
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for PartnersEnhancing Worker Digital Experience: A Hands-on Workshop for Partners
Enhancing Worker Digital Experience: A Hands-on Workshop for Partners
 
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
Strategies for Unlocking Knowledge Management in Microsoft 365 in the Copilot...
 
How to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected WorkerHow to Troubleshoot Apps for the Modern Connected Worker
How to Troubleshoot Apps for the Modern Connected Worker
 
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
Tech-Forward - Achieving Business Readiness For Copilot in Microsoft 365
 
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
Mastering MySQL Database Architecture: Deep Dive into MySQL Shell and MySQL R...
 
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time AutomationFrom Event to Action: Accelerate Your Decision Making with Real-Time Automation
From Event to Action: Accelerate Your Decision Making with Real-Time Automation
 
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men08448380779 Call Girls In Greater Kailash - I Women Seeking Men
08448380779 Call Girls In Greater Kailash - I Women Seeking Men
 
🐬 The future of MySQL is Postgres 🐘
🐬  The future of MySQL is Postgres   🐘🐬  The future of MySQL is Postgres   🐘
🐬 The future of MySQL is Postgres 🐘
 
Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101Salesforce Community Group Quito, Salesforce 101
Salesforce Community Group Quito, Salesforce 101
 

033009 Vw Methods Research Panel Metanomics Transcript

  • 1. VW METHODS RESEARCH PANEL - MARCH 30, 2009 ROBERT LEGRAND: Okay. Shall we get started? Okay, we officially start now? ROBERT BLOOMFIELD: Yeah, if everyone is here and everyone’s voice is good. I guess we should just get an okay from Joel, if he’s doing the recording. ROBERT LEGRAND: Okay. So welcome, everybody, on this special session of Metanomics, where we will have a rather long discussion. And, first of all, thank you for having me as your moderator today. Maybe I will just present myself just briefly. My name is Roland Legrand, and I am a business reporter working for Mediafin. That’s a Belgian publisher of business newspapers and electronic services. And I have a personal blog about Virtual Worlds, Mixed Realities. And so today we are here, in fact, to discuss the topic: What can qualitative and experimental methods tell us about Virtual Worlds and culture? And, in fact, what happened was, at the previous Metanomics gathering, there was the famous Connecting The Dots, a part of the discussion, where Beyers Sellers, who is here--and we’ll just present everyone briefly after this--but Beyers Sellers, or Professor Robert Bloomfield was making some suggestions to send anthropology into a more experimental direction, and that provoked a lot of discussion. And so now we are here all together, in order to have a very tough discussion of what it is all about. So first of all, let me present. Celia is sitting here at my right hand-- Artemesia Sandgrain, and in I would not say real life,
  • 2. but let us say, in the other world, Celia Pearce, assistant professor of Digital Media School of Literature, Communication and Culture, and director of the experimental gamelet, director of the Emergent Game Group at the Georgia Institute of Technology. Of course, all our participants in this panel have really lots and lots of activities and publications, and I’m sure they will correct me if I’m really too incomplete or if I’m even wrong in describing all the wonderful things what you folks are doing there. And sitting next to Celia is Beyers Sellers, Professor Robert Bloomfield, professor of management and professor of accounting at the Cornell University Johnson Graduate School of Management. As we all know, he has a special interest in experimental economics, and he is, of course, the host of our Metanomics show. Then on my left hand, we have Thomas Guildenstern, and Thomas Guildenstern happens to be Thomas M. Malaby, associate professor at the University of Wisconsin Milwaukee, social cultural anthropologist, and his principal research interest seems to be the relationships between institutions, unpredictability and technology, which seems to be, in these times, extremely interesting to--especially the unpredictability part of his interests. And then at my far left, we have Tom Bukowski, Tom Boellstorff, associate professor, Department of Anthropology at the University of California Irvine, editor-in-chief of the American Anthropologist, and the writer of many things, but among those things is Coming of Age in Second Life. Welcome here on this panel. What I would like to propose for this session is that each of you will talk for about five
  • 3. minutes, responding to what Professor Bloomfield, Beyers Sellers, told at his famous Connecting The Dots part of the previous show, that everyone comments on that. And then we will have a second round again, of about five minutes, where that everyone can respond to the other panel members’ statements. And, in fact, without further ado, I would like Beyers Sellers to repeat or reformulate or explain a bit what he exactly meant during that Connecting The Dots segment of the previous show. So, Professor Bloomfield, the floor is all yours. ROBERT BLOOMFIELD: Thanks a lot, Roland. Thanks to all the panelists for joining us. And, finally, thanks to all of you for coming. This type of philosophy of science topic is something that I love. We’re all academics on the panel, and part of it is just doing our work, and part of it is thinking about why we do what we do and how we could do it differently. And so I’m really looking forward to hearing what everyone has to say. First, I’d like to say that, as far as I know, this particular Connecting The Dots segment is the first one that people actually responded to because it really got much more of a reaction than I was anticipating. I didn’t realize I was actually stepping my foot into, I think, a number of hot-button issues and contentious issues and a larger discussion that has been going on in anthropology and related fields, particularly in Virtual Worlds. So just real quickly I want to emphasize what I see as the key point that I was trying to make and give just a little bit of background on that, and Roland will cut me off, I’m sure, if I yammer on too long.
  • 4. ROBERT LEGRAND: I will do it. I will. ROBERT BLOOMFIELD: There’s one part in particular, I’ll just read this. Oh, I should mention that the entire Connecting The Dots comment piece is available somewhere. Maybe someone can post this in text chat. There’s a box around here somewhere that says “Touch for note card,” and you can touch that, and you can actually get a note card with the entire content on it. But let me just read this one part from what I said before, “I think our guests today Tom Boellstorff and Celia Pearce would probably agree with a theory or a prediction like cultures arising in Virtual Worlds are shaped by the Real World cultures of the players, by platform features governing their behavior and their in-world goals and interactions and also affected by their early experiences.” And then I go on to say, “But that’s not really specific enough because, if we’re going to test assertions like this, we’re going to need to know how to measure cultural behaviors and cultural differences, features and experiences. And we’re going to have to posit much more specifically how which features and experiences will result in which cultural differences. But specifying a theory precisely is only the first problem. The larger problem is figuring out a systematic way to test it.” Now, I am, by trade, an experimental researcher, and that’s a particular method that I’ve used for as much by historical accident because I think it’s a good way to do research. There are lots of opportunities, and once you get involved in a certain method, it’s what you know, it’s what you use and you continue to use it. But I think that experiments can provide one particular thing that’s especially valuable for cultural anthropologists, and I’d like to
  • 5. explain this in light of--oh, I see someone is saying my voice is cutting out so I guess, if other people are having problems, let me know through backchat, and I’ll know that it’s me. Okay, looks like a couple other people can hear me fine so, good. Okay. I’d like to point out that I am an accounting professor, and that will actually color a lot of my remarks throughout this session, but, in particular, one of the things that I’d like to point out is that accounting is the language of business, but it’s a very unusual language, in that it’s a language that’s used when there is a very strong skepticism that you can believe what people are saying. So the very short version of how I see experiments fitting in with qualitative methods is that qualitative methods are really essential to gaining new understandings about culture or about anything else. But it’s very hard to win over skeptics with qualitative methods. If people are inclined to disagree with you, either because they hold to a different ideology, because they think you have a vested interest in misrepresenting, or they have a vested interest in not believing you, then it’s helpful to have the additional controls that experiments provide. And it’s helpful to have the sort of objective measures so that it’s just very difficult for them to disagree with you. And so just to anticipate what I think the anthropologists will say in defending qualitative methods, it’s very easy to talk about the limits of experimental research. It’s easy to talk about the fact that--there are no slides. It just sounds like there should be, given the way I’m talking. I apologize. There are certainly problems and limitations of experiments, and I think that what I’m hoping we will come away with at the end of the day is not just me saying, “Here are the problems
with qualitative research," and the qualitative researchers saying, "Here are the problems with experimentation," but seeing how they complement one another. I'll just end it there because, obviously, I have a lot more to say, but it's going to be important to let the qualitative research get set up so we can have the insiders talk about its strengths.

ROLAND LEGRAND: Okay. Thank you. Celia Pearce, Artemesia, what is your take on this?

CELIA PEARCE: Well, I wasn't sure if you wanted us to respond to this with general comments from last time or to specifically what Rob is saying now.

ROLAND LEGRAND: Well, yes, you can comment on both, of course. I don't think what he is telling us now is really contradictory, but maybe you see it differently. So please go ahead.

CELIA PEARCE: Well, there are a couple of key ideas that I think were expressed a little bit here and more so in the previous comment. There was a suggestion made that qualitative research is not empirical, which is a kind of bias, what Tom has referred to as a disciplinary partisanship: that, somehow, if it's qualitative research, it's not empirical. And I think that most people who work in the kind of research that the three of us do would refute that claim. But I think there's a larger sort of theological issue here. I would say that Rob is also saying that quote/unquote "skeptics do not take qualitative research seriously," and I think that's a relative kind of perception because, in some fields, we're actually taken quite
seriously. So people in economics and accounting may not take qualitative research seriously; however, in other fields, even in some business fields, qualitative research is taken quite seriously and considered to be empirical and legitimate as a research form. So I want to just put that on the table because I don't want people to have a misconception that everyone thinks qualitative research is not empirical.

The second is this issue of quote/unquote "objective," the idea of quote/unquote "objective research," and I think Tom will probably elaborate on this, and he's much better informed than I am. But this is basically what is referred to as a positivist perspective, and that means that, among some sciences and among some practitioners of science, there is a belief that one can make a positive determination about something that is absolute. In other words, the sky is blue. This is a fact. We can't refute it. It's not subjective in any way. So there are some people that would argue that science is based on making those sorts of objective assertions.

However--and also the question of a theory being proven--I think one of the issues that maybe we need to explore here is what do we mean by theory. In physics, a theory is essentially a hypothesis. In anthropology and sociology and humanities, a theory is not a hypothesis that is meant to be proven. The word "theory" actually means something quite a bit different from that, so I think that's something we should try to tackle here.

And the other issue is the question of understanding that, you know, I think some of us would argue that all science is subjective, and if you look historically, you will find that at various points in time objectivism was used to support extremely subjective points of view.
And probably the best example of this would be eugenics, which was a quote "scientific" discipline that was essentially used to reinforce racism and assert forms of sort of biological colonialism. So I think we need to sort of unpack some of these issues, and I guess I'm just setting the stage for where I think some of the debates lie. One: What do we mean by "objective"? And is that, in fact, a real thing that exists? I would argue that there is no such thing as objectivity. If a human is doing anything or thinking about anything, it's inherently subjective.

And then there's this issue of qualitative research somehow being less legitimate than other types of research, which came up, I think, in some of the blogging that went on post our last conversation. I think part of what we were feeling was that there was a statement made that quote/unquote "we should be doing something more useful, that we should be doing empirical research," etcetera, and all of this sort of unleashed a whole set of assumptions and disciplinary biases that I think we wanted to kind of unpack here.

At the same time, I don't think this is about a Jerry Springer approach to a research discussion. It's not going to be us versus them. I think we all need each other, and I'd like to close with sort of a metaphorical way to think about this. A good friend of mine, who goes by Catherine Barth here in Second Life, made a great comment to me a few years ago, which I had been pondering, especially since I do a lot of traveling. She said that you can travel across the country by plane or by car, and, in each vehicle, you're going to see something different. It doesn't mean that what you see is more true or more accurate. It's just a different way of seeing it.
And what I was sort of thinking about as I was flying back yesterday from doing some qualitative research up in Seattle, looking out of the plane window, was that there are some types of research that allow you to see the big picture from above, and there are some types of research that allow you to have a more intimate view at close range. And I think the challenge we face is, you really can't do both at the same time. I've done both quantitative and qualitative research. And there's some information you just cannot get from interviews and from participant observation. And there's some information you just cannot get from surveys or other kinds of data-capturing methods.

So the best outcome of this, I'm hoping, is that we will strengthen our mutual appreciation for the disciplines that we work in and perhaps find ways to collaborate so that we can use each other's perspective, i.e., the airplane view versus the on-the-ground view, to help illuminate the subjects of our queries even further. So that's basically my opening commentary.

ROLAND LEGRAND: Thanks a lot. So we cannot travel at the same time by plane and by car. So, Thomas, shoot.

THOMAS GUILDENSTERN: Okay. Here we are. I'm back on. Yes, I have a lot of reactions to this whole debate, and I very much appreciate Robert and Celia and Tom inviting me to attend here. Issues of epistemology here--scientific epistemology, philosophy of science, whatever you want to call it, basically the question of what kinds of claims we can make and on what grounds we make them--have long been an interest of mine. I'm really struck by the way certain kinds of dichotomies are going unexamined here, and I think Celia was
already gesturing toward some of them. In particular, from Robert's comments, both the ones from the previous event but also the ones starting out today, this contrast between qualitative methods and experimentation, from a scientific epistemology point of view, makes no sense, and I think it's important to mention that. If you consider Gregor Mendel, the first and perhaps one of the most famous examples of experimental science, on beans and the genetics of passed characteristics, this was not quantitative; this was entirely qualitative. There's nothing that makes experimentation non-qualitative or quantitative, and there's nothing that makes qualitative work non-experimental, tout court. We have to be very, very careful and educated about those different terms and what they mean.

I think that this kind of blurring, I'm sorry to say, is most common in the hotly contested territory of the social sciences, where it seems that some fields have felt a need to try to gain legitimacy by drawing hard boundaries between certain kinds of approaches and other kinds of approaches. What Celia said is right, in my opinion. The issue here is one of positivism, and I see positivism as the enduring faith in a law-driven world, that those laws are out there waiting to be discovered, if we can only find them.

And the funny thing, for this conversation, from my point of view, is that you don't even have to get to the social scientists or the humanists that you're worried about, if you want to find scientists who have directly questioned that point of view. In fact, a couple of the most famous scientists we've had over the last couple of hundred years, both Charles Darwin and James Clerk Maxwell, founder of the
thermodynamics laws, were people who did not believe in the positivist project. They did not believe that experiment-driven science, based upon a presumption of laws of cause and effect that would ultimately reveal more and more true laws about the world, was the way science needed to be. In fact, they felt that that was precisely what science needed to not be. Rather, science needed to be built on claims that it always knew were provisional, about a changing world.

And I'm just going to close with one more comment about that. We might, to keep close to our discussion, just speak about the natural sciences, even though I broadly don't feel that there is much of a distinction between all the parts of the academy. But even just thinking about the natural sciences, you've got paleontologists, biologists, astronomers, all different fields of the natural sciences that use qualitative methods, that do not use experimentation, that use quantitative methods that are not experimental and qualitative methods that are, that are doing exploratory work, that are doing work on specific contingent events in history that are not about predictability and generalizability, but are rather about the unique circumstances that lead to certain processes and from which other outcomes followed--things like the extinction of the dinosaurs. Contingent events. Generalizability and predictability are not the sine qua non of scientific inquiry or empirical inquiry writ large.

ROLAND LEGRAND: Okay. Thank you. That was a very clear statement. Tom Bukowski, Tom Boellstorff, your turn.

TOM BOELLSTORFF: Great. Well, once again, thanks for doing this, Roland, for moderating this, and thanks for having this happen. I was the one actually who had the
original idea to invite Thomas, because I really wanted to bring his perspective into the discussion, because I know he's been thinking about these things a lot and also does research on Virtual Worlds. Just to start off, I want to make a couple quick points.

One: Actually, I don't think it necessarily was sort of off topic to mention the stock market or whatever, because, while there's still sometimes a tendency to denigrate or talk down about Virtual Worlds, they are an important part of what's happening in human societies around the world. And we have a new research community coming into being. It has a history, but it's really come into being around studying these Virtual Worlds, online games and such, and I think it's really important that we get that research community off on the right foot, get it set off to be as broad and engaged as possible, because out there in popular culture the most common view of Virtual Worlds is still from the Matrix movies, where they use a Virtual World to enslave humanity. But that's not all that Virtual Worlds can be for, and I really think developing strong research methodologies around this is very important.

And also, to talk about keeping away from the Jerry Springer thing, these things can often get re-interpreted after the fact and misunderstood. So someone in the audience mentioned the Stanford anthropology split. It didn't actually split over these issues. I was a Ph.D. student there during the split; that's where I got my Ph.D. But, after the fact, that's how it's been portrayed in the media, in a way that I think actually isn't helpful, and I think that kind of thing can happen with these things around methodology.

So, on Robert's original comments, I think it's very important that we try and create the
broadest possible research community that we can. And so, for instance, when in his comments he said things like, "The larger problem is figuring out a systematic way to test it," that's assuming that a certain notion of testing, the one Thomas--Malaby--was talking about earlier, is the only way that something can work to be called a science. Or when he said, "The problem is that anthropologists are walking into existing cultures and working backwards, trying to figure out whether their theories might explain the culture that's already arisen," to us, that's not a problem, and we don't see it as working backwards. We think we're the ones moving forwards actually, if anything, because that's not the notion of explaining that we're trying to do. Just like we're not trying to predict when the dinosaurs will become extinct again, because that kind of question doesn't even make sense, if that's our analytical framework. To us, that's not a problem.

And so also, when Robert said in his comments originally in Connecting The Dots, "Despite a reputation for being free thinkers who challenge old ideas, academics tend to be a pretty cautious lot," and so on: people who do the kind of research that we're talking about, in terms of what I might call interpretive research--Thomas is absolutely right that qualitative-quantitative is not the key issue--don't see it as more traditional in any way, shape or form. So I think it's very important to keep that very, very clear.

Because the really important issue here is that I think all of these methods can be very valid. I think that experimental methods can answer certain kinds of questions very effectively and that interpretive kinds of approaches can also work very effectively. In one of my other research lives, I do research on HIV/AIDS in Indonesia. If I want to understand transmission rates of HIV, I might want to use experimental models or look at certain kinds
of statistical work that's been done on different kinds of transmission rates. If I want to understand why it is that maybe, in a certain community, certain kinds of things that I might think of as sex they don't even call sex, and thus sex education wouldn't make any sense to them, then I'm going to need to use an interpretative kind of approach.

The example that I give in my book Coming of Age in Second Life about this--it's not a perfect example, but it's one way to think about it; it's sort of along the lines of Celia's comment about a plane versus a car--is that I could go to a different country, let's say go to Japan, and do an experimental project or a survey on the Japanese language, and I might learn a lot of idioms of the Japanese language. But, on the other hand, I could spend a year with just fifteen or twenty or even, for argument's sake, five people, and, after a year with them, become fluent in a language that I could then use to speak to millions of people. Would I learn every word in Japanese? No. Would I learn every accent? No. But I would learn more than what I had started out with. I would have learned something about that particular language.

And so the key issue here, I think, is the attempt to put one methodology or way of looking at the world above any other, and that's the concern that I have with the way that experimental methods are sometimes portrayed as more scientific or as providing a sort of deeper or more valid truth. They can do a lot, and then there are certain things that they can't do. And, in the same way, interpretative approaches can't do everything either. That isn't the only thing I would want to use in my HIV/AIDS research in Indonesia, but it can be very powerful for certain kinds of things.

And, in terms of the idea of trying to create laws that can predict the future, this is a slightly
separate discussion we could have, but the question is: Does culture work in that way? Can you create experiments that will predict culture? And I personally think that's very difficult. I don't see many examples of that. Predicting behavior can be done in certain ways. But trying to predict culture, to me, is like trying to predict where a language will go. Why did Middle English become Modern English?

But that's not all that experimental methods could do. They could do lots of other really great things, and you see already in Virtual Worlds research many wonderful examples of things that Virtual Worlds have done. But what I don't want to have happen is that something gets set up where we assume that that is what's scientific and that what anthropologists or other folks do is sort of a little precursor step, to sort of get to know the lay of the land before you do the real science stuff. Because, as Thomas and Celia have mentioned, that does a disservice to what counts as science, because not all science is experimental. And it does a disservice to the incredibly powerful things you can learn from ethnographic or interpretive research that's done well. It can be done well or less well, and, even if it's done really well, it never gives us perfect knowledge, but since when does any method give us perfect knowledge? Right? It moves the conversation forward, and, to me, that needs to be the goal of what we're trying to do as researchers.

And, as this is such a new research community in many ways, I really want to try and build the broadest possible research community and really hold our hands up whenever we see this kind of disciplinary or methodological partisanship that wants to rank these different methods. So I think the metaphor of the plane versus the car--I don't know if that's the right one--but that's certainly one nice way to think about how different methods cast different
light on a shared problem. And, in some cases, the exact problems that we're looking at might be different as well. But there are many ways in which we can work together, but not by ranking.

So I'll stop there for now because there have been so many interesting comments in the backchat already, and I'm sure maybe the other speakers as well want to say things. I think we should just sort of let things circulate. But I think this is a very important discussion to have early, at this stage, in this research community, before things get set in stone, so that we have the most tolerant, broadminded, robust research community that we can have.

ROLAND LEGRAND: Thank you, Tom. So we're proceeding to the second round and, once again, I will ask Beyers Sellers to comment.

CELIA PEARCE: Can I ask a question quickly?

ROLAND LEGRAND: Yes, Celia.

CELIA PEARCE: I'm sorry to interrupt. I was hoping that what we would have here would be more of a discussion, and it feels like it's heading in the direction of a debate. So I just wanted to throw that in. I'm hoping that part of this conversation will be a dialogue rather than each of us being allocated five minutes to pontificate.

ROLAND LEGRAND: Yes, I know, but we will have--

CELIA PEARCE: But I'm happy to pontificate.
ROLAND LEGRAND: --about 90 minutes, so I think that, after this second round, my intention is to stop this kind of pontification and just open up the discussion. But I just wanted to give every one of the four people here the opportunity to spell out exactly what they wanted to say and what their exact positions are. Because, of course, from what we heard just now, we had a number of incredibly rich comments. For instance, what you said, Celia, about the empirical and the experimental--well, the problem is that we are using those concepts, and we got the impression, I believe, that Beyers Sellers was seeing some opposition there. Maybe this was not the right interpretation of what he said, but anyway, there is this positivist perspective that we were talking about, and the issue of objectivity, whether there is such a thing or not and whether we should work with a view on objectivity.

The issue of positivism came back a few times, and also, for instance, what Tom was saying in his very eloquent closing statement: that we should have a broad research community here, against partisanship, against the idea that some methods would be considered better or higher or more scientific than the others. And, of course, this beautiful metaphor of the car and the plane: that neither car nor plane is superior to the other, that each can be suited for certain ends, or maybe both fail. But anyway, there should be no partisanship in going for either the car or the plane.

So I really would like, very briefly, in order indeed to avoid the kind of monologue, that every one of you respond briefly to what the others were saying, and then we'll just open it up, and we'll have, I believe, about an hour then to go on, on those different
topics. Beyers, could you elaborate a bit on what you said previously, in regard to what your colleagues here are saying?

ROBERT BLOOMFIELD: Yeah. I'll say first, great insights and lots and lots of them--and now that I've seen Doubledown's definition that he pulled offline: to pontificate means to speak in a pompous or dogmatic manner. I would have to pontificate a long time to respond to everything that came up. I'm just going to pick and choose a few of them.

ROLAND LEGRAND: Try to be very brief, yes.

ROBERT BLOOMFIELD: Yeah. No, I'm just going to pick and choose a few. I'd actually like to start with Tom Boellstorff's remarks at the end, where he responded to my comment about academics being traditionalists and also talked about how we should be working to build a new research community. And so I just want to give this a little bit more context. The reason that academics tend to be traditionalists is because of our professional pressures. Basically, to do research, it's got to be peer-reviewed by people who usually have been around for a while and have been doing things--you know, they have in their minds the right way and the wrong way to do them. And so we all take big risks by going against our own fields, in trying to do something new in our own field. I mean, actually, it turns out, while experimentalism is being viewed as something--it is very unusual to be an experimentalist in my field. There were almost none of them in economics, finance and accounting 20 years ago when I started doing this. I was a pathbreaker. I was someone flirting with, you know--it was a high-risk, high-return strategy. Fortunately for me it
worked out. Now there are a lot of people running experiments. But that's a real risk. So I guess one thing I want to say is, I really do believe that Second Life is a place where you see researchers, the academics, coming from all sorts of different disciplines, looking at something where there isn't really--maybe we can create a new sensibility of what type of research is appropriate, using new technology, looking at some slightly different questions. And so this may be an opportunity to create a field where we have a much more eclectic research program. So that's the first point.

But I do want to make one more, I guess, substantive point, because the word "positivism" was used a fair bit, and actually in different ways. And I think that there are some key distinctions here. Celia talked first about positivism and objectivity and talked about things like eugenics. And I think it's very important to distinguish positivism from normativism, and this is a big thing in business, where positive research is where you just try to say what is; you just try to understand the world, describe the world, support theories, whatever, but you're not saying what should happen. Whereas normative research is where you start making value judgments about the right thing to do. So I want to make it very clear that, personally, you know, I'm not out there pushing for normativism. I am saying let's just try to figure out what is and not be making activist arguments for or against something, like whether eugenics is good.

Now Thomas's comments were really getting at positivism in more of a sense--well, there's a distinction between positivism, realism and instrumentalism. My impression, and Thomas can correct me if I'm wrong, is that really he has a bone to pick with realism, which says that
there is a real, static, unchanging, true world out there, and we can understand it somehow. So now, in this context, positivism means that you are constructing theories about what that Real World is, and I see Thomas mentions systematicity. But you're actually going to find ways--you're going to be able to see observable variables, measure and test and refute and so on, and learn something about the Real World. Positivism can be a useful approach to science, even if you don't believe in realism, and that's what brings you to the instrumentalists, who say, "Look, we can't know anything about the Real World, but we've got a bunch of variables we can measure, and we can tie them together. And, you know what? If we find that consistently this variable is associated with that one in a certain way, then we feel we know more than we did without it."

So anyway, I've talked long enough. I just wanted to make those two quick points: one strongly in favor of synthesizing some of these methods, and the other making sure we agree on which type of positivism we're talking about.

ROLAND LEGRAND: Okay. Celia, do you have anything to add to your previous metaphors?

CELIA PEARCE: I would maybe like to go after one of the other gentlemen. Is that all right?

ROLAND LEGRAND: Okay. Oh, that's all right. Thomas.

THOMAS GUILDENSTERN: Yes. Sure, I can jump in. I was just about to type something. Well, we could spend probably all afternoon talking about what we variously mean by
positivism. Auguste Comte is largely pointed to as someone who brought positivism into clear focus for western philosophical thought. And, on that view, positivism is the view that serious scientific inquiry should not search for ultimate causes deriving from some outside source, but must confine itself to the study of relations existing between facts, which are directly accessible to observation. So from that follows an enormous project, a scaffolding if you will, and a faith and a narrowness of vision that the predictable, the law-like, are what stand as real accumulated knowledge.

My view is that not even natural science, as a whole, pursues that unthinkingly. And that such a view tends always to push aside the uniqueness, the context for change, the circumstances where processes are unfolding especially rapidly, especially where people are involved, where any notion of generalizability or predictability should fly very swiftly out the window, in favor of an understanding of the processes that are in place on the ground. That was Darwin's lesson to what had been the Agassiz-based biology: Focus on process. Don't get bound up in some project of believing species are somehow real. They aren't. They're just a shorthand for things that we're grouping together for our reasons. That Darwinian approach is, to me, something that should underwrite all kinds of empirical inquiry.

I would add to that that I agree that we should [visitate?] when we think about what kind of fields we want as we move forward, in thinking about Virtual Worlds and digital life and all the rest. I completely agree that much of the reason why we are in the situation we are in today, not only here, but more broadly in terms of how these misunderstandings get reproduced, comes down, in the end, to the incentive that various disciplines have to push
themselves forward at the expense of other disciplines, to claim exclusivity, to claim primacy, to claim a corner on the market of "True," with a capital "T." And that wrangling over resources in the academy leaves a lot of damage in its wake, not the least of which is the forgetting of what are, in fact, very old lessons. This isn't a new way of making science, in fact, or of empirical inquiry. It is, in fact, an old and kind of well thought out and well-considered way of conceiving empirical inquiry, but it is constantly in danger of being pushed aside as these kinds of rankings, these kinds of scrambling to the top, unfold in the course of fighting for resources.

ROLAND LEGRAND: Okay. Tom, maybe you want to add something about the struggle for resources and, from your perspective, again, partisanship, or maybe some other comment.

TOM BOELLSTORFF: Sure. Sure, I can make a couple quick comments, and then I think Celia's going to jump in, and then we can just have it turn into more of a free-for-all.

ROLAND LEGRAND: Okay. Yes.

TOM BOELLSTORFF: Because part of my comments will actually be on a comment from the floor, DuSanne's comment. But first, I think that, Rob, we hit on a great example of a misunderstanding that, if left unchecked, can become a damaging misunderstanding, and that's when you were talking about Thomas Malaby, Guildenstern, and about the issue of realism, that you thought that was maybe a separate part of the issue. And I'm pretty sure that he and I and Celia would all actually see ourselves as realists, and it's often used as a
way to sort of dis-empower this kind of interpretive research, to claim that we don't believe there's any reality. I absolutely believe that there was an asteroid that maybe killed the dinosaurs, but I can't do an experiment to see that. I believe there are quasars that are real; they're there, but we didn't find out about them by creating quasars in the lab. And so you can be a realist. I believe in the English language, but the English language came into being in history. You can't create it in the lab. And so I think that everyone here in the room--actually, we're all realists. The question is, how do you get to or look at that reality, and I think it's very risky to ever insinuate that people who do interpretive work just believe that all interpretations go, and it's just anything goes. Any kind of research can be done well or badly. Experimental methods don't have the corner on realism. In fact, you could even argue in a way they're less realistic because they're about creating experiments that don't actually happen in the physical world normally; that's why they're experiments. I don't agree with that. I believe that we're all realists, and so I just don't want to be interpreted as ever dis-empowering another approach by claiming that they aren't as interested in the real.

And that, I think, hits onto what Thomas was saying as well, that there are economic and political issues that aren't just about universities and the academy, but more broadly, where there could be sort of points scored for claiming that your method is the real "Science," with a capital "S," and so you give me that grant; don't give it to those other people. And it's really unfortunate that we sometimes get set up that way. But I think that actually has more impact, perhaps, than we sometimes realize; that could be part of it. So anyway, I think we're all realists. I just want to be sure that that's not the key issue.
The other quick point: DuSanne had a great question, "What does it take to leverage the cross-disciplinary potential of Virtual Worlds? What's missing? Is it a language? Is it a shared vocabulary?" DuSanne, while I'm talking, if you want to throw it back in the chat so people can see it--I think that's a great question. I actually have a great way to respond to that, because I'm editor-in-chief of American Anthropologist, which sort of has this issue in microcosm, because my journal publishes cultural anthropology, but also archeology, linguistic anthropology and biological anthropology, which includes people working with primates, people who are digging up Lucy, bones in Africa from 200,000 years ago. So my journal--indeed, not my journal, but the journal for which I'm the editor--has a huge range, and there have been times in the past when people have said, "How can we get a shared conversation? We have to make everything shared." And it's had horrible results, because it's caused authors and researchers to feel that they have to sort of dumb down their research, so that, like, a linguistics person has to explain everything so that someone who works on chimpanzees can understand it.

And, in terms of creating a research community, I don't think it means that we always have to read every word that the other people write or that we even have to understand it at the beginning, because it doesn't mean a lowest common denominator. It means that there might be different sorts of communities of research that might have their own approaches or languages, but they are open to reading each other and engaging with each other, and that there will be some examples of research that will move in the intersections or interstices between those different approaches. Not all research will do that, and it's not true that only the best research does that. Sometimes the best research is solidly in one approach. It is
purely experimental or purely interpretive. It is not the case that the best research has to be some kind of mush that combines everything. That's been very disastrous for anthropology, to think that the only good research has to have all of the four field approaches in it in every single paragraph. That's not helpful.

ROLAND LEGRAND: Okay, Tom.

TOM BOELLSTORFF: But there are ways to create conversation, and I think that's the way to think about it: that, even if we don't understand each other at the beginning, we can build these communities of research.

ROLAND LEGRAND: Thanks a lot. Celia.

CELIA PEARCE: Yes--

ROLAND LEGRAND: Do you wish to comment on--

CELIA PEARCE: I do. Thank you.

ROLAND LEGRAND: --are you a realist? Yes?

CELIA PEARCE: I would echo what Tom said about that. And I want to just talk a little bit about how these methods can complement each other in different ways. I've actually used some quantitative as well as qualitative methods together to very good effect, in a way
that I think illustrates why mutual respect and synergy are beneficial. I think collectively we'll gain more knowledge by breaking down some of those partitions that Tom is talking about. I did a study about two years ago on baby boomer gamers, and I decided to combine a survey with more of the qualitative research, interviews and participant observation that I had been doing in some prior research. I'm going to give you one example of a very simple situation where the two things informed each other, just to give a taste of what we might be able to accomplish by bringing these ideas together.

In the survey, I had this very odd finding about the baby boomers that answered the survey--about 270 participants, which is considered statistically significant, although I will make the caveat that all surveys are self-selecting. So, again, we can't have a kind of absolute positivist conclusion about data that is collected voluntarily, and yet at the same time we're not really allowed to collect it involuntarily. I'll get into that a little bit more maybe later. But anyway, I had this finding that a certain percentage--and I don't remember the number, but it was probably something like 25 to 30 percent--of the participants had consoles in their homes, but 98 percent of them said that they played video games almost exclusively on a PC. So that's an interesting data point. But it was odd to me, and I was trying to figure out, okay, well, what's going on with that? They have the consoles, but they don't play them.

It wasn't until I sat down and did the interviews that a more nuanced portrait started to emerge of the dynamics of the PC versus the console gamer in a typical household. And what I discovered was that most of the people that had consoles at home felt that those
consoles were for their kids. Now that's a piece of qualitative data that would never have come out in a survey. But by boring down and doing those 30 additional interviews as a follow-up to the survey, I was able to find out what was behind this statistic, in a sense. So I think this is a great example of a place where having the--and part of what we do as ethnographers is, we tell stories. We hear stories. We observe stories in progress. And we try to understand how cultures are constructed and what motivates people and what their values are. And I think Tom's earlier comment about the definition of sex in Indonesia is an excellent example where, again, a statistical study of sexual practices would not get at what people's definition of sex is. So these are the kinds of things where I think we can see a better and more complete picture by recognizing the synergies.

I think the big concern that I've had, and I think it came up in the prior conversation, is that we often hear this comment made by people that typically do more quantitative research--and not just people in economics, but in other fields as well--who refer to what we do as anecdotal. And that's, I think, a very pejorative way to characterize anthropology and sociology research. So, at the very bottom, I think we need to get past some of these--yeah, "diss" is what someone said--some of this dismissive rhetoric and really try to approach these conversations from a position of mutual respect, and also to understand that each of our methods has strengths and weaknesses, and none of them is the ultimate, complete "everything can be answered by my method" method. And so there are a number of ways that I think we could actually, by dropping some of that rhetoric, get much more interesting information, by perhaps collaborating, sharing data and complementing each other's work in a more synergistic way. All right. That's it.
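To make the mixed-methods workflow Celia describes concrete--purely as an illustrative sketch, with invented records, field names and counts, not her actual data--here is roughly how the quantitative step might flag the ownership-versus-play discrepancy and hand it off for qualitative follow-up:

```python
# Hypothetical survey records loosely modeled on the discrepancy described above.
# All field names and values are invented for illustration.
respondents = [
    {"id": 1, "owns_console": True,  "plays_mostly_on": "PC"},
    {"id": 2, "owns_console": False, "plays_mostly_on": "PC"},
    {"id": 3, "owns_console": True,  "plays_mostly_on": "PC"},
    {"id": 4, "owns_console": True,  "plays_mostly_on": "console"},
]

owners = [r for r in respondents if r["owns_console"]]
owners_not_playing_console = [r for r in owners if r["plays_mostly_on"] != "console"]

print(f"{len(owners) / len(respondents):.0%} of respondents own a console")
print(f"{len(owners_not_playing_console) / len(owners):.0%} of owners mostly play elsewhere")

# The survey can surface the anomaly; it cannot say *why* it exists.
# The follow-up interviews (e.g., "the console is for the kids") supply that part.
follow_up_ids = [r["id"] for r in owners_not_playing_console]
print("Flag for qualitative follow-up:", follow_up_ids)
```

The point of the sketch is only the division of labor: the counting step identifies which respondents are worth sitting down with, and the interviews explain the number.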
ROLAND LEGRAND: Okay. Great.

ROBERT BLOOMFIELD: I'd love to respond to that, if I could.

ROLAND LEGRAND: Yes, but we still have about half an hour now, and I also would like to ask some questions which I see popping up in the backchat, and briefly go back to you again, Robert. Because there are some questions, and I think many people out there have those questions, about the terminology used--especially, of course, the positivism word or concept. For instance, there is Valiant Westland. There are various responses, and one of them, from Valiant, is just saying, "Positivism contends that everything can be reduced to physiological, physical or chemical events. I think many of us will agree that much of what happens in Virtual Worlds falls outside of these measurable boundaries." So is there anything in Virtual Worlds which makes it more difficult for positivists to do their thing--or for instrumentalists, for my part? Is there something which is, on the logical level, different about working here in a virtual environment? Anyone is free, of course, to respond to the questions. I will not, once again, ask each of you to respond, but just feel free to respond.

ROBERT BLOOMFIELD: I'd be glad to respond to that.

ROLAND LEGRAND: Okay.

ROBERT BLOOMFIELD: That sounds to me more like some type of reductivism than positivism. It sounds a little bit more like saying, "We can predict everything from physicality.
  • 29. Let’s derive every human behavior by what atoms do.” I don’t think that’s been proven to be a very useful way of going, which is why psychologists don’t do a whole lot of physics to derive their predictions and their theories. I don’t really see it. I think that the types of debates and perspectives that you’re seeing on the panel really, I think, are independent of whether you were talking about virtual online behavior or Real World behavior or frankly whether you’re talking about physics or anthropology or psychology or accounting. THOMAS GUILDENSTERN: Could I just immediately jump in too? ROLAND LEGRAND: Yes, please. THOMAS GUILDENSTERN: Yeah, I think that’s right. It all comes back, in kind of pure pragmatist fashion I think, to what questions we want to ask. If we want to ask questions about circumstances where there is an enormous amount of regularity where there is not rapid change in place, where the agents on the ground are not interactive, then our work starts to look more experimental. It starts to look more about controlled conditions because those things become possible. But when our work is about contingent events and about complexities and social complexities, such as an example, studying AIDS in Indonesia, those things quickly become very, very limited. So for example, the work I did that relates directly to Second Life is a book that’s coming out in June, and someone asked me to say the title. It’s Making Virtual Worlds: Linden Lab in Second Life. It’s on Amazon now. And that was a work based upon ethnographic research at Linden Lab in 2005, and 2004 through early 2006, and that was a unique period of time.
There is no stretch of time that is like that anywhere. That was Linden Lab there. And to write about it is not to write about Linden Lab at all times in all places. It is to write about Linden Lab then and what processes were in place then. That kind of awareness of what our realist work--as Tom well put it--should be, should be part of how we see all valuable science. It shouldn't be that that kind of work, because it is concerned with the contingent, is somehow less scientific because it's less about predictability. It's about these complex processes and change that we're all deeply immersed in and about which we're all desperate to understand more.

ROLAND LEGRAND: Okay. Thank you.

CELIA PEARCE: Yeah. I think there are some interesting things here to think about too, and one thing I want to mention is that one of the characteristics of participant observation in anthropology is that we study people in situ. We go to the culture and study it as it exists, and we take it in its own right. And I think, at this particular moment in time, considering the large misconceptions about Virtual Worlds in the popular media, I've made a very important choice through my work--and I'll do my little book plug too, Communities of Play, which is coming out in September--a very deliberate decision to study these cultures in their own right, in a respectful fashion, because I think that the mass media has this propensity to want to make everything dysfunctional. And, like Tom said, most people think of Virtual Worlds as the Matrix. I think that one of the things we're trying to do, in the interests of realism, is to deconstruct
some of those myths and prejudices and really try to get to the bottom of what people are actually doing and experiencing in Virtual Worlds. So I think that's a really valuable contribution that we make.

And there are two other things I wanted to mention that are distinct to these environments and that I'm particularly concerned with. One is--and I think this is a really wonderful kind of bridge between Tom's and my work in-world and Thomas's work with Linden Lab--that Virtual Worlds are cultures that are constructed in a very particular way. We are now existing inside of a piece of software that was made by somebody, and that is the starting point: for every culture that emerges in a Virtual World or a massively multiplayer game, there are these ingredients, these affordances, these characteristics, that the designers put into the World. And it is through what those enable that cultures are created. And that, to me, because my background is in game design, is a very important and nontrivial aspect.

What that also introduces is the opportunity--since we are right now inhabiting software, and software is essentially data--to collect data in a different way than we typically have in Real World ethnography. But the challenge we face is that, according to the research ethics that we have to adhere to when we do research, when we get grants, when we do our work, we're not allowed to collect data on people without their consent or knowledge. So one of the things I'd like to try to explore is how we can use the affordances that software gives us to study cultures in a different kind of way, as data, but do that in a way that is consistent with the types of research ethics in studying human subjects that we not only want to adhere to but are legally required to adhere to if we work
within the academy.

ROLAND LEGRAND: Okay. I think that this issue of ethics, or research ethics, makes me go back to a question by Dusan Writer, who was saying at a certain moment, "I was confused enough about augmentationism versus immersionism. Now I need to decide if I'm an instrument." And, having read his blog, I think that there was also a lot of critique there about this whole idea of considering a virtual society as a place where one actually can carry out experiments. That idea is seen as shocking--at least this is an interpretation of Dusan's blog. It can be compared somehow with conducting experiments in a tribal society. People would say, "Well, no, you cannot do it." There would be a general outcry. So maybe go a bit deeper into this ethical aspect of conducting experiments or other research methods in virtual societies.

TOM BOELLSTORFF: I can do a quick response to some of that. And actually this will be a sort of stream of consciousness, because I'll respond to that, and then I'm going to very quickly respond to five of the great comments that were made, and then it looks like Rob is going to chime in after that. So I'm going to do this as quickly as I can.

So first of all, in terms of the question of getting permission to be able to research and all that kind of thing, like Celia was talking about, I have a section in my book about ethics, and that's something that's very interesting to me. If you do ethnographic research in the physical world, like when I go to Indonesia, I don't have to get a permission form every time I see two people doing something, let's say, in the shopping mall, where they could conceivably imagine that anyone could walk by and see what it is that they're doing and
write about it in a newspaper or blog or whatever, as long as I protect their confidentiality. And so the same kind of thing can happen for a researcher in Second Life. Where you need to get a consent form is when you're creating any kind of new situation--an interview, a focus group, a survey, something where you are creating a situation that did not exist before. Then it's very important to get that informed consent. These issues show up for experimental or non-interpretive work too. I think these ethics things are for everyone. It's very important when you do that kind of work, and people are asking about release forms. If I interview you someday, Valiant, I'll give you a release form. But, actually, if you email me later, I can email it to you, just so you can see what it looks like, if you're curious. I'm happy to share that. I have it on a note card, so, if anyone emails me, I can give it to them.

So now I'm going to go through four comments incredibly quickly, just to build on some of these things. DuSanne says, "The idea that you can use Virtual Worlds to extrapolate lessons for the actual world sort of implies that Virtual Worlds aren't permeable." And you're right, and this is one of these yes-and-no issues. There are all kinds of ways that Virtual Worlds and the actual world inter-penetrate. We're sitting on chairs, looking at grass around us. We're all somewhere in the physical world. We're having a conversation. All these things are happening. But it doesn't all just become the same thing. If I crash, my computer crashes, and I'm no longer in Second Life anymore, that's a difference that makes a meaningful difference. I won't be in this virtual space anymore. So Virtual Worlds and the actual world aren't just all the same thing. The Virtual Worlds do exist, and they are distinct, but distinct isn't the same thing as isolated and sort of out there
on Mars all by themselves. They're getting affected by the physical world in all kinds of ways, and it's that play back and forth that I think is so interesting.

And Dizzy Banjo very quickly asked, "Do you think Virtual Worlds force a reassessment of these working methods?" They absolutely do, but what was surprising to me in my own research was that I didn't have to change nearly as much as I thought I would, although I did have to change some things. And you can see that in my book; I talk about that.

Then DuSanne asks--there are two more things I'll very quickly respond to, and I'm not getting everything, I'm very sorry. DuSanne asks, "Do Virtual Worlds tend to make us think we can manipulate the world, and are there issues and challenges about that?" I think Rob could maybe comment more about this, but, for people who do experimental research, Virtual Worlds are incredibly tempting. It's like a hundred-dollar bill lying on the ground, because it looks like you could create the ultimate kind of two societies that differ only by one rule, let's say about how you can trade things. And then you can see what people do and how it works differently, and that is something that you can't do in the physical world without a whole lot of money and ethical problems and so on. But I do think the ethical problems remain here as well, because, if you create two different Virtual Worlds, those aren't just experiments in a college dormitory or in someone's room; they are maybe more like a college dormitory in real life. I mean, those are a part of people's social lives that are created there. So I think that's a really interesting question that experimenters, I think, are thinking about a lot right now.
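As a rough illustration of the kind of two-world comparison described above--two small, consenting test societies that differ only by one rule--here is a minimal sketch of how the resulting measurements might be compared once such a study has been run. The outcome metric and all of the numbers are invented; the only point is that this design yields directly comparable data, for instance via a simple permutation test:

```python
# Illustrative sketch: comparing one invented outcome metric between two small,
# consenting experimental worlds that differ in a single rule.
import random
import statistics

world_a = [4, 7, 5, 6, 8, 5, 7, 6]   # e.g., trades per participant under rule A (made up)
world_b = [3, 4, 4, 5, 3, 6, 4, 5]   # the same metric under rule B (made up)

observed = statistics.mean(world_a) - statistics.mean(world_b)

# Permutation test: how often would a difference this large arise by chance
# if the rule change made no difference at all?
pooled = world_a + world_b
extreme = 0
trials = 10_000
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:len(world_a)]) - statistics.mean(pooled[len(world_a):])
    if abs(diff) >= abs(observed):
        extreme += 1

print(f"observed difference: {observed:.2f}")
print(f"approximate p-value: {extreme / trials:.3f}")
```

None of this replaces the ethical questions just raised, or the interpretive work of saying what the metric means to the people being measured; it only shows what the comparative data from such a design would look like.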
And then, very quickly, Chimera asked about what research traditions we all came out of. I think it's interesting that all three of us--and Rob as well, all four of us--obviously didn't start out doing research in Virtual Worlds. We have research experience in other areas. And I think, for all of us, we found that really helpful, to sort of get ideas of how to do research in these new places. My original degrees are in linguistics and music. Very, very different things. One very experimental and one music. But I think it's all been very useful for me, particularly anthropology, in trying to think about how to study Virtual Worlds. So I'll stop there. And I'm sorry there are more questions, but I just want to keep things moving.

ROLAND LEGRAND: Okay. Beyers, you wanted to respond to this as well.

ROBERT BLOOMFIELD: Yeah. I guess I'll pick a few points to respond to. The first one that I'd really like to pick up on is, Celia talked about qualitative or interpretive research being viewed as anecdotal. It's not just a "diss," as someone put it; it really is a total misconception of that type of research. It isn't that interpretative and qualitative research is bad data collection, which I think is, to some extent, how my comments are being interpreted. A lot of what I see--and I think the example that Celia gave is a great one--is that she ran a survey. She got some data. Some stuff didn't make sense. She did some more qualitative follow-up, and it gave a much fuller narrative, a much fuller story about what's going on. So my take--and I see Tom is saying, "I implied not that they were bad, but that they were second order"--my take, and maybe Thomas can tell me what he thinks of this, is that interpretative and qualitative research is really crucial to
just understanding what type of world you're trying to study, getting that big picture down. I don't view it as second order, but actually as step one. I think it's essential, and it can clean up the pieces when you get data that are hard to understand, as in Celia's case.

Where I came from--and I guess this goes to Tom's comments, when he said experimentalists view Virtual Worlds as like a hundred-dollar bill lying on the ground--that is exactly my situation. I've been doing experiments. I was looking for a better technology to do more interesting experiments, and that is how I found Virtual Worlds, really as a technology to help me study something that has nothing to do with Virtual Worlds. Since then my interests have gotten a lot broader.

But I want to just take another example, because my Connecting The Dots comments actually were driven by something Celia said when she talked about how small changes in the technology of a Virtual World or an online environment can actually affect the culture, can affect the society, can affect behavior. Or maybe it was Tom who, in the pre-interview, used the example of changing the way you make friends. Friending is very binary in Second Life. You're either a friend or you're not. And the difference is whether you clicked a button, as opposed to, say, measuring how long people are talking together, so you might have a friend meter that would go up or down, depending on the extent of your interaction. That certainly seems like it would create a different society.

So here's the question that I would like to pose to the other panelists. It strikes me that, if you're going to go to a Virtual World developer and say, "I think you'll get a very different type of in-world culture if you do this instead of that," maybe they'd be satisfied with
just an interpretive analysis. But I think, ultimately, they're going to want something, frankly, more persuasive. They're going to want to see a lot of data. They're going to want to see something that they don't think just reflects the interpretations of one person. Maybe I'm just sticking my foot in it again and saying something that's going to be viewed as insulting, but, when I deal with businesses, when I deal with the financial market regulators, they're not satisfied unless I can show them a thousand data points that we took: "We looked at 500 firms; some did this, and some did that, and here's what happened, and this is replicable." This is how I see the two coming together.

I want to say one more thing before I let other people respond, which is on the issue of the ethics of experimentation. I guess one key distinction I'd like to make is between field experiments and lab experiments. There are people who will do field experiments, and actually, we're subject to these all the time. Companies are doing controlled experiments on their marketing campaigns all the time. They're trying five different marketing campaigns on different websites. They're using different pricing. They have coupons. Randomly, different people will see different color schemes on the website, to see what happens. And they're getting huge amounts of data from that. And they're not asking anyone for permission. So there's a lot of experimentation that can go on, that can give people a lot more evidence of how to make money in online sales, for example.

But, in the Real World, this goes on as well, and just to give you a really quick example: there's a guy, John List, who does field experiments. He'll go into the Real World to a trading card convention, like a sports or baseball card convention, and he'll do an experiment where, for example, he'll have an attractive woman versus an unattractive man selling things and then
testing who got better prices. And then, finally, you have the lab experiments, which I think are still a great use of Virtual Worlds, where, if we think that a certain technological feature of a World is going to make people behave differently--you know what? I can set up a World with OpenSim that won't have more than 50 people in it for a month. They'll all know it's an experiment. They'll all sign the consent form. They'll probably get paid for their efforts, and we can compare Worlds with one another and get the kind of data that people, certainly in business, whose financial interests are at stake, are going to want to see, in addition to a compelling interpretative narrative.

ROLAND LEGRAND: Okay. I think there will be a lot of responses to this. Thomas, go ahead please.

THOMAS GUILDENSTERN: Yes. First, I'd like to dispute that characterization of what people out in the Real World, whether they're in business or they're forming policy for government, want--what kinds of knowledge they want and what kinds of knowledge count as useful for them. What is incredibly important, whether you're in business or running a government or running an NGO, is not how metrics in place somewhere measure up against a given model or a schematic understanding. What you need to know are the trends and the processes on the ground. You need to know the five percent of people who are doing something differently and all the reasons why that five percent is probably going to be forty percent tomorrow. And that's not a predictive model; that is a model based upon an entirely valid form of generating knowledge from which to act, to create grounds for action.
One of the things that amazes me about this kind of approach is that it suggests that, as human beings, we are so unable to think that we need models to tell us, chapter and verse, exactly what we are supposed to do in any uncertain situation going forward.

Let me give you the best example of this that there is, and it's about as far afield from anthropology as you could imagine. And that is intelligence. Throughout much of the twentieth century, the United States intelligence community was built upon a very elaborate field-agent architecture, if you will, that was utterly grounded in expert understanding, critical observation over long periods of time, expert evaluation and critical reading of the reports filtered upward, cross-referencing of those reports by human beings making informed judgments, and on and on up the line. And, in the '70s and '80s, the intelligence community shifted away from that toward a much more political-science, model-driven set of guesses about metrics and about what kinds of economic and political factors lead to instability or this and that. And that kind of modeling, that kind of simulation has, by no accounts--and I've spent a fair amount of time speaking to the defense community that's been interested in Virtual Worlds--by no accounts have those kinds of methods generated any more reliable grounds for action, in fact less, than the "yes, labor intensive," "yes, human-capital intensive," "yes, qualitative," but deeply rich and process-based, rather than predictability- or schematic-pattern-based, model for how to generate knowledge for action.

That is true for businesses, and it's true for governments. When we go out there and we talk to these people in these fields, they are not people who only--some of them are, but most of them, or many of them, are not people who only want things that are quantitative, only want things that are predictive, because they know, and
we've all been very, very powerfully reminded in the most recent 12 months of just how limited such schemes for understanding what's going to happen tomorrow are.

TOM BOELLSTORFF: Let me chime in too, because, Rob, I don't think that's what Thomas is saying. But the thing is, when you say things like you just said, that we want to reach out to the business community, whatever, and they're not going to be convinced unless we have that kind of experimental data to convince them--when you say those kinds of things, that's the kind of thing that makes me want to bang my head into the wall, because it's just not true. Maybe that's true of the people that you've been talking to, but the reason why Intel hires me as a consultant, and they have all of these anthropologists on their staff and very few experimental people on their staff, is they actually find interpretive stuff more useful. And there's this huge military interest in understanding things like Virtual Worlds from an interpretive kind of perspective.

Now, I'm not saying that the Military is necessarily a great thing. I'm not saying, "Oh, great! We all get to work with Human Terrain System kind of stuff," as [I've?] just mentioned. I'm not saying that that doesn't have ethical problems at all. I'm just saying there is massive business interest. Nokia. The Gates Foundation. I mean, so many different corporations out there in the business world hire our Ph.D. students from my university precisely to do this kind of work, and it's not an either/or situation that's saying the experimental work is not useful. But you, Rob, are the one who just made it sound as if, "You know, all this interpretive stuff is great, but we better get down to business and do that experimental stuff, or the business world or the Real World out there just isn't going to take us seriously." And I would submit that that's just empirically not true.
There are businesses out there that really find the experimental work more valid. That's great. There are military, whoever, people out there that find that stuff more valid for certain kinds of research questions. That's great. But there are also definitely business, government, other kinds of folks, nonprofits, who find the interpretive kinds of research completely scientific, extremely valid and useful and important as well. So we don't want to make any sweeping statements. This is that partisanship thing that I'm trying to keep us away from. It all can have use. And I'm telling you, out there in the business and government world, there's plenty of interest in all of these different kinds of research methods, and we have to think about that ethically, regardless of the method in question.

ROBERT BLOOMFIELD: I'd like to respond.

ROLAND LEGRAND: Maybe Celia.

ROBERT BLOOMFIELD: Yeah, Celia, go ahead.

CELIA PEARCE: Sorry.

ROLAND LEGRAND: And then Robert.

CELIA PEARCE: Let's let the little woman say something for a moment. I just wanted to kind of echo what Tom was saying, because I think that that tone and that kind of commentary is exactly what got us into this in the first place. And I would say, in answer to your question, Rob: yes, you did step in it. I don't think any one of the three of us would ever
try to make an argument that the kind of work that you're doing is somehow less legitimate than what we're doing, or that what we're doing needs to be proven by what you are doing, which is essentially what you just said. As Tom has mentioned, I've also worked with corporations. From a business perspective, there is in fact a lot of data about business that quantitative research has completely failed to gather. For instance, my work with the diasporas of closed games: when games close and people leave them, nobody knows where they go or what they do. This is something that is impossible to capture through any other method than qualitative research. And yet it is of tremendous value to businesses to understand what happens to their customers when they go away. So these are the kinds of things.

I would encourage you in the future to rethink your position here, because I think we have much more to benefit from taking a more egalitarian, synergetic perspective on what we're doing. As I said in the beginning, if you're making a map from a plane and I'm making a map from the ground, together we can create a much better defined image of the terrain than by saying, "Well, your image of the ground is just fine, but it's not valid until I take a picture from an airplane." That's just, I think, a highly unproductive way to frame the discourse.

ROBERT BLOOMFIELD: Okay. Let me try to put this a different way, because the examples that I hear you guys using are examples in which I don't think experimentation or a sort of large-sample archival econometric study would be possible. And so, clearly, I think that's a very different world. And so the question I guess I would ask you is, if you are addressing a type of question that could be addressed through controlled experimentation and through systematic collection of archival, you know, large-sample data and doing statistical tests,
then one would be, "Why not?" and the other would be, "Don't you think that people--" I mean, I guess I'm in a world where the topics that I look at are ones where experiments are possible, and data's out there. And people do what I think you would call qualitative research in business all the time, where they're writing case studies, they're writing clinical analyses, field studies, interviews, all of that. But, if there's something where you actually could do the experiment or could go out and collect large-sample data, what you get back is, "Do it." And maybe my question is: Is that not the case in anthropology? And, if not, why not?

ROLAND LEGRAND: There's this question also from Joel [Shockington?]: "It seems that, if you have both a qualitative and a quantitative analysis of an environment and they agree, that's a pretty strong indicator of validity, and if they disagree, maybe it indicates the need for refinement of thinking." Maybe you could say some more about this.

CELIA PEARCE: Again, I think it's sort of fallacious to think that it's about validating each other's experiment or each other's research, because, what I guess I'm trying to say is, when I'm in the town, on the ground, making a map and understanding why those boundaries were put in place, that's a different kind of information. It's not that his airplane view is going to validate that, or my ground view is going to validate his. They give us two completely different kinds of information, which together create a well-rounded picture of what's going on. So I just think that, rather than saying my data is better than yours, my methods are better than yours, I can prove what you've claimed or whatever, what we're talking about is that we're looking at things at a different scale, at a different resolution. And, for my part, I don't
want you to say to me, "Oh, you can't prove any of your URU research until I do a quantitative study of it." I don't really want to do a quantitative study of it.

ROBERT BLOOMFIELD: Let me say that is not at all what I'm trying to say.

CELIA PEARCE: Well, that actually is what it sounds like you're saying.

ROBERT BLOOMFIELD: If you're going to write an article where, in the end, you're going to say that you believe you've uncovered a more general truth than just describing exactly what happened in that particular circumstance, then that's not enough for me. And I do believe, I mean, the qualitative research that I've read does not simply say, "Here's a list of facts about what happened in this particular circumstance." It draws a bunch of conclusions. And so, to the extent that you're--

THOMAS GUILDENSTERN: It does, but you can draw conclusions without heading anywhere down the road that ultimately ends in some valorized form of predictability or generalizability. You can--

ROBERT BLOOMFIELD: But aren't they general statements, general conclusions?

THOMAS GUILDENSTERN: I will answer that question exactly. They are statements about what we understand to be the processes that were important and why they were important in a certain circumstance and situation. And it is, of course, possible, as it is with a field agent's knowledge, that critical readers, people who take that knowledge and bring it to bear
on their own experience, can take something from it and understand something from it for other circumstances. This is why, if we really wanted to avoid disaster after the breakup of the Soviet Union and have their economy in some kind of reasonable shape now, we wouldn't have turned it over to the Harvard Institute for International Development and their schemes and models for what would work. People would have read all the ethnography and all the history and all the social geography and all of that qualitative work, and we would have left it in the hands of people who knew how to make critical judgments amidst a lot of very, very messy information.

I really highly recommend the preface, I think, or the introduction to Anthony Giddens' The Constitution of Society. The famous social theorist Anthony Giddens takes on this idea that generalizability is the aim of all science, and it is an utterly convincing argument. We are not about generalizability. We are about making claims about knowledge. Yes, about what happened, but that is far more than simple description.

TOM BOELLSTORFF: Just to quickly throw in--

ROLAND LEGRAND: Okay. Go for it, Tom.

TOM BOELLSTORFF: Once again, Rob, I think this is a case where there's just a really different perspective, and the way you're framing it is coming out as being partisan. To me, it's almost like saying, "Well, Tom, if you were interested in doing a study of 50,000 Indonesians and their sexual behaviors and what statistically puts them at greater risk for HIV, then wouldn't you use an experimental method or a statistical method?" to which my
answer would be, "Well, obviously, yes, but that's not the way that I've set up my questions." I mean, that's not the way that my research questions work.

The second issue that you're raising here is about generalizability. Once again, my first degree is in linguistics, and when I get stuck on these things, I always go back to language as a helpful way for me to think them through. When you do qualitative field research, you are not limited to making generalizations only about the people that you talk to, but you also can't generalize to the whole universe either, and you have to hedge it. It stays at the level of hypotheses. But, once again, if I spend time with 20 Japanese speakers and I become fluent in the Japanese language, I have data that is not only valid for those 20 people; I could then speak the Japanese language with many, many millions of people. I wouldn't learn every dialect or every word in Japanese, but I would have learned something that is more generalizable than just those 20 people that I learned the Japanese language from. That's how culture works.

But that's a different kind of generalizability. It's different from the positivist--and here's where positivism is important--idea of generalizability, which is like the law of gravity, where I can drop a rock anywhere on the earth, at any point in history, and predict how fast it will fall toward the ground because of the law of gravity. That's a different kind of generalizability, and that's not the kind that we, as interpretive folks, are looking for, but that doesn't mean that we can't do any generalization. And that's an extremely important distinction to keep in mind.
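[Editor's note: the law-like generalizability contrasted here with interpretive generalization can be stated as a worked equation. This is the standard free-fall formula from introductory physics, added purely as an illustration; it is not part of the panel's remarks.]

```latex
% Free fall from rest, ignoring air resistance: the same relation holds
% for any rock, anywhere on Earth, at any time.
\[
  d = \tfrac{1}{2}\, g\, t^{2}
  \quad\Rightarrow\quad
  t = \sqrt{\frac{2d}{g}},
  \qquad g \approx 9.8\ \mathrm{m/s^{2}}.
\]
% Worked example: a 4.9 m drop takes t = sqrt(2 * 4.9 / 9.8) = 1 s.
```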
ROLAND LEGRAND: Okay. I have a kind of closing question, which is, well, this is more based on the reactions to Dusan's blog post--and, unfortunately, he's not here amongst us. Reference was made to Edward Castronova and his idea that what we learn and what we research in Virtual Worlds can, in some way, be used to make the broader society into a better place, which is a kind of a very wild hypothesis, as I feel it. And I really, really would like to know what you guys are thinking about this kind of approach, which is really trying to--it seems to try to make the world into a better place, but other people find it a really creepy--and that was the word which was used--a creepy way of thinking and working with science. What's your opinion about this? Anyone want to give it a take?

CELIA PEARCE: Yeah. I'll start. Yeah, I think that there are some issues around this. I mean, in the blog discussion, one other thing that came up was the issue of research ethics, and that historically there have been periods of time when behavioral scientists have tried to experiment with behavior by putting people in a quote/unquote "controlled" or laboratory sort of setting, or a situation that had some kind of variables adjusted to it. This kind of research was deemed to be unethical because it was manipulating people's emotions and behavior in a way that turned out to be harmful to the research subjects.

An example that I recently heard that was really disturbing to me was an individual who came into Second Life in the opposite of their natural gender and got involved in a romantic relationship with someone, under the auspices of experimental research, and this person didn't realize that they were having a relationship with somebody of the opposite gender
from the avatar. And the outcome of this experiment was that it was extremely hurtful to the research subject. So I do think that we need to really, really seriously think about this now. I think there are some interesting possibilities for different types of experimentation, but I think we also need to look at the history of experiments with humans in controlled environments, just to make sure that we're not revisiting scenarios that we've already been through in the Real World to a very deleterious effect.

ROLAND LEGRAND: Okay. Thank you. Anyone else wanting to comment on this?

ROBERT BLOOMFIELD: Well, I'll just comment briefly on what you said about people being creeped out, or something, by what Ted Castronova was proposing.

ROLAND LEGRAND: Yes, indeed. Exactly. Yes.

ROBERT BLOOMFIELD: I think it goes back to the distinction between positive and normative. I think Ted wasn't just talking about running some experiments to learn something; he was really talking about running experiments to enable his vision of what would be a better society. I think that that, more than the experimentation itself--yeah, [Discord?] I see is saying "social engineering," and I think that's much more it. I think that's the part: not the fact that he was talking about running controlled experiments to learn something new, but the fact that he was talking about engineering society, in his vision, is what didn't go over particularly well.
ROLAND LEGRAND: Okay. Well, thank you for all of your contributions.

ROBERT BLOOMFIELD: Actually, if I could make one closing comment, since it's three against one.

ROLAND LEGRAND: Yes, of course.

ROBERT BLOOMFIELD: And I'm clearly the one being schooled here. I'd just like to say, real briefly, this has given me a lot to think about, and I think the number one thing that I'm going to be thinking about is how the differences in subject matter affect this debate that we're having. I think we'd all agree, just generically, that different methods are appropriate to different questions. It isn't clear to me that that is enough to explain all of the differences in views that we have here, and so I guess I'll just leave it at that. But, to me, if there were a question that were amenable to all research methods, then what would this debate be? Because I think a lot of it has been people picking the types of questions that suit their method, because, of course, that's what we all do. So anyway, I'll leave it at that. I just want to say thank you all. It sounds like you all have books coming out again in the near future, so I'll have an excuse to get you on Metanomics, and we can talk about other stuff and maybe touch on this as well. Hopefully, I'll be more educated.

ROLAND LEGRAND: Okay. Thanks a lot to all of the people out there--well, it was more than 90 minutes. Thank you for all your comments, and thank you to the panel
members for their very insightful remarks. Bye.

THOMAS GUILDENSTERN: Thanks, everyone. Thanks for having me.

TOM BOELLSTORFF: Thanks for doing this.

ROBERT BLOOMFIELD: Yeah. This was great. I don't know if any of you were here early enough to see this, but they actually had the four chairs inside a big boxing ring. That would have given the wrong signal.

ROLAND LEGRAND: Well, Robert, I prefer this way, because the boxing metaphor, of course, would have been more interesting from a media point of view.

ROBERT BLOOMFIELD: You're right.

ROLAND LEGRAND: But I think this discussion was much more interesting, and honest also, just trying to find a kind of nonpartisan approach to research in Virtual Worlds. It seems to me a lot more useful than just shouting at each other.

ROBERT BLOOMFIELD: And, Thomas, I don't know if you're still around?

THOMAS GUILDENSTERN: Yeah, I am. I'm just about to [CROSSTALK]

ROBERT BLOOMFIELD: Oh, great. What was that book that you mentioned? Gibbens or
Giddens or something.

THOMAS GUILDENSTERN: Anthony Giddens. Let me type it in.

TOM BOELLSTORFF: I just typed in his name.

THOMAS GUILDENSTERN: Yeah. And the book I'm thinking of is his kind of masterpiece, The Constitution of Society. You can also look at Alasdair MacIntyre, the philosopher, and his book After Virtue, on the limits of generalizability or the limits of predictability in the social sciences. I think it's chapter eight of After Virtue, maybe chapter nine.

ROBERT BLOOMFIELD: Hold on, I'm just checking.

ROLAND LEGRAND: So, Thomas, all right. And this is--

ROBERT BLOOMFIELD: Alasdair, with a "d"?

ROLAND LEGRAND: Yes.

THOMAS GUILDENSTERN: Yeah, I just typed it in there.

ROBERT BLOOMFIELD: Geez! No one taught his parents how to spell.

CELIA PEARCE: Thank you all very much for coming and giving your time to this discussion. I have to be going to another engagement in a parallel universe. So I'll see
everybody later.

ROLAND LEGRAND: Thank you, Celia.

TOM BOELLSTORFF: See you later. Take care, Celia.

THOMAS GUILDENSTERN: Thank you. Cheers.

CELIA PEARCE: Bye bye.

ROBERT BLOOMFIELD: I have to talk with one of my co-authors on a paper that has no data whatsoever. It's just a mathematical model.

THOMAS GUILDENSTERN: All right. Take care, all. Bye bye.

TOM BOELLSTORFF: Bye bye, everyone. Thank you.

ROLAND LEGRAND: Okay. Take care.

ROBERT BLOOMFIELD: Bye bye.

Document: cor1056.doc
Transcribed by: http://www.hiredhand.com
Second Life Avatar: Transcriptionist Writer