Review of “Who Owns the Future” by Jaron Lanier

Jaron Lanier, Who Owns the Future?, New York, Simon & Schuster. 2013.  http://www.jaronlanier.com/futurewebresources.html

Jaron Lanier is a strange and interesting guy. I’ve never met him in person, but he’s been around forever, and always seems to be doing something interesting. His new book, “Who Owns the Future?”, is a blockbuster, hitting on a bunch of things I’ve been worrying about, with authoritative insight. Like me, he is implicated in the development of today’s Internet, and like me, he feels a responsibility to try to make it better (a “humanistic information economics”, in his case). Unlike me, he has some fairly deep and broad ideas that deserve to be implemented.

To summarize the problem as Lanier states it, today’s networked systems are built wrong, designed to create winner-take-all super servers (which he terms “Siren Servers”).  The problem with this is that the servers make money by taking data from everyone for free, and passing risk to everyone else.  Too much is “forgotten”, taken off the books, no longer counted as “value”, and otherwise fraudulently not accounted for. This is not just wrong, it is unsustainable, because it is demonetizing the value created by most people, shrinking the overall economy.  Even the super servers can’t last long in the business of destroying value.

Lanier wants to create a “humanistic” architecture, with humans at the center.  “Information is people in disguise, and people ought to be paid for the value they contribute that can be steered or stored on a digital network.” (p. 245) From this principle, the whole argument flows.

Of course, you must read his book to get the details.

As technologists, we cannot be content with just complaining, nor can we pretend that the horse can be returned to the barn. Lanier gives a thoughtful set of “tweaks” which change the way networks do business. The crux of the matter is to pay for everything of value that happens. Concretely, this means that any data you produce will earn you a micropayment. On the other side of the coin, everyone must pay reasonable amounts for what they do on the net. The argument is that this will vastly enlarge the digital economy, and will provide a way for humans to create a dignified life.
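To make the bookkeeping concrete, here is a minimal sketch of what such a micropayment ledger might look like. This is my own illustration, not Lanier’s specification; the class, the names, and the per-use rate are all hypothetical:

    # Toy provenance-plus-micropayment ledger, in the spirit of Lanier's
    # proposal: every use of a person's data credits its originator.
    # Illustrative sketch only, not anything Lanier actually specifies.
    from collections import defaultdict

    class MicropaymentLedger:
        def __init__(self, rate_per_use=0.001):
            self.rate = rate_per_use            # dollars credited per use
            self.provenance = {}                # datum id -> originating person
            self.balances = defaultdict(float)  # person -> running balance

        def register(self, datum_id, person):
            """Remember who originated a piece of data."""
            self.provenance[datum_id] = person

        def use(self, datum_id, user):
            """Any use of the datum pays its originator, at the user's expense."""
            originator = self.provenance[datum_id]
            self.balances[originator] += self.rate
            self.balances[user] -= self.rate

    ledger = MicropaymentLedger()
    ledger.register("photo-42", "alice")
    ledger.use("photo-42", "siren-server-inc")  # Alice earns a micropayment
    print(dict(ledger.balances))  # {'alice': 0.001, 'siren-server-inc': -0.001}

The sketch only works because provenance is retained: the system never “forgets” where the data came from, which is exactly what today’s siren servers are built to do.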

I liked this book for many reasons.  Some of the technical details hit on topics I’ve encountered in my own career.  The basic technical feature missing from the WWW is two-way linking, which was posited in Ted Nelson’s Xanadu from the 1960s, if you can believe it.  (I think Lanier might want to be a “nelsonite” if such things were possible. “Our huge collective task in finding the best future for digital networking will probably turn out to be like finding our way back to approximately where Ted was at the start.” (p. 221))

Two-way links never get stale, and you automatically know who is linking to you, so you can trace back. This feature is so important that, as Lanier points out, Google and others scrape the whole web every day to compute these relations, which should have been engineered in from the start. Sigh.
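For the non-specialist, “two-way” just means the link store records both directions at write time. A minimal sketch (my illustration, with hypothetical page names):

    # Minimal bidirectional link index: recording the back-link at write
    # time makes "who links to me?" a cheap lookup instead of a
    # web-scale crawl. Illustrative sketch only.
    from collections import defaultdict

    forward = defaultdict(set)   # page -> pages it links to
    backward = defaultdict(set)  # page -> pages that link to it

    def add_link(src, dst):
        forward[src].add(dst)
        backward[dst].add(src)   # the step the one-way web leaves out

    add_link("myblog.example/review", "jaronlanier.com")
    print(backward["jaronlanier.com"])  # {'myblog.example/review'}

The cost, of course, is that creating a link must touch both endpoints, which is precisely the coordination the early web declined to pay for.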

I clearly recall discussions at NCSA in the early days of Mosaic about one-way links (the WWW) versus two-way links (favored by information scientists, librarians, and anyone who understood actual information systems). We could have built in two-way links, but we didn’t, because it would have been difficult and would have slowed the viral dissemination of Mosaic and related technology. Even then, zillions of free downloads were the metric of success, regardless of sustainability.

I’m not the only one who knows this. The noise about the “semantic web” is mainly due to the fact that it enables arbitrary, multi-way linking—even better than two-way links. Even Tim Berners-Lee quickly realized the flaw in his one-way link architecture (Berners-Lee, T., J. Hendler, and O. Lassila, The Semantic Web. Scientific American, 284, 5 (2001) 35-43).

Lanier also nods at the importance of provenance, which I learned much about from another sensei, Jim Myers, in 2005-11 (e.g., Myers, J.D. The Coming Metadata Deluge. In: New Collaborative Relationships: The Role of Academic Libraries in the Digital Data Universe Workshop. (2006)). Also, Lanier’s micropayments concept has been proposed as a solution to the related problem of citations and attribution in scholarly work (e.g., see the work out of Ben Shneiderman’s lab (Jennifer Preece and Ben Shneiderman, The Reader-to-Leader Framework: Motivating Technology-Mediated Social Participation. AIS Transactions on Human-Computer Interaction, 1, 1 (2009) 13-32)).

By the way, Lanier has much to say about 3D printing (I hadn’t thought about the coolness of “unprinting”–using the 3D print programs in reverse to recycle objects.  Wow!)  But even he is falling behind:  at one point he wonders if you will go to your local library where there will be public access 3D printers.  The answer is “yes”, and in fact you already can. For example, our local public library has fabrication equipment, though they are still working out what kinds of services to offer.

Basically, I’m saying Lanier’s technical analysis is sound, whether he cites all the academic sources or not.

Of course, as a grumpy old guy, I was greatly entertained by JL’s dope-slapping the business practices of today’s “siren servers”. Lanier is not amused by almost anything on the Internet, and he knows exactly how these businesses work, so you would be well advised to read his critique. He gives us a blistering faux EULA (pp. 79-82), gets grouchy about the future of the book (pp. 354-8), and starts everything off with a very dark science fiction vision of the virtual world (pp. 1-3). Ouch.

Lanier also provides an interesting perspective on “Big Data”, differentiating between “Big Science Data” (which is accurate and very hard work) and “Big Business Data” (which is sloppy, possibly not correct, but very valuable) (see Chapter 9). The latter is also “stolen”, in that the sources are not paid. This is actually a very useful distinction, because the terminology is so confused, mixing very legitimate advances (e.g., scientific modeling) with bogosity (e.g., fiddling with pricing on a Web store).

I also enjoyed his insider’s version of life in Silicon Valley.  I never moved to California (my version of “humanistic” living involved having a home town), so I never did understand a lot of the crazy stuff coming out of there.  Lanier gives some history, showing the ties between the Bay Area New Age culture, and the Internet, quite visible now in the form of the Singularity University and related religious manias. If you read only one part of the book, read  pp. 211-231.  Seriously, it’s worth getting the book, just for this section.

If there were any doubt that this is a current topic, see George Packer’s article in the New Yorker about Silicon Valley’s political culture and its surprisingly incompetent entry into US politics (George Packer, Change the World: Silicon Valley transfers its slogans—and its money—to the realm of politics, in The New Yorker. May 27, 2013, pp. 44-55). Packer notes the isolation of the techies from the communities they live in, starkly apparent in their sealed, inwardly facing campuses. How could we expect anything sensible from such a broken environment? Yet they believe they are the future for everyone.

For that matter, judging from reviews, Google’s Eric Schmidt and Jared Cohen’s new book, The New Digital Age (Schmidt, E. and J. Cohen, The New Digital Age: Reshaping the Future of People, Nations and Business, New York, Alfred A. Knopf. 2013), has a lot to say on the same topics. (Sorry, I haven’t read it yet.) I seriously doubt that they will agree with Lanier on most points. (As much as I distrust Mr. Assange, I suspect his review of TNDA is probably much more interesting than the book itself.)

And while we are on the topic of “humanistic” computing, let’s look at Kevin Kelly’s massive “What Technology Wants” (Kelly, K., What Technology Wants, New York, Penguin Group. 2010). (You can tell it’s going to be interesting, because Jaron Lanier provided a cover blurb: “It isn’t often that a book is so important and well crafted that I feel compelled to urge everyone to buy it and read it even though I profoundly disagree with aspects of it. … You can’t understand the most important conversation of our times without reading this book.”)

Kelly’s general thesis is to take the viewpoint of technology as if it were an autonomous being, to try to understand what it “wants”—where it comes from, where it is going, how it really works, why it sometimes fails spectacularly.

I’m not sure I agree with Kelly’s approach, but I liked the book a lot for his social psychological perspective. In particular, you should read Chapter 11, “Lessons of Amish Hackers”. Kelly has spent considerable time with Amish friends, and presents a revealing and helpful explanation of how they approach technology. “In contemporary society our default is set to say yes to new things; in Old Order Amish communities the default is set to ‘not yet’.” (p. 218) At the bottom, “…is the Amish motivation to strengthen their communities.” (p. 218) Buy the book for this chapter alone.

Part of Kelly’s point is that everyone should be as conscious as the Amish are about technology, in particular about technology uptake. This is a brilliant insight, and has made me feel much better about my idiosyncratic adoption of tech. I’ve been acting Amish, and not even knowing it.

So, by our own different paths, Brother Kevin, Brother Jaron, and Brother Bob all end up in just about the same place.  There must be something there, huh.

So what have we learned?

Let’s look at something that crossed my eyeballs earlier this week. Apparently Google is “teaching” people how to organize their content in ways that will help Google. Google isn’t clear about why I would want to do this, but I get the idea from gurus such as Terri Griffith, “How we can help Google better track our websites”. I guess this is supposed to be a reasonable motivation.

Let’s look at this offer with Lanier’s “humanistic” principle. The Siren Server (in this case Google) wants you to donate your labor to help them make money. Your benefit, if any, is that they will be able to use your data better, maybe get a few more people to look at your content–via Google, which gets their eyeballs first. The value you have added to Google is demonetized, and you do not get any of the wealth Google might generate.

Let’s look at how Bob’s grumpy bad attitude would apply. What would I charge if Google wanted to hire me to provide structured content for them? Well, who knows, but my general rates for corporate consulting are, like, $250 per hour. So why would I give this to Google for free? I don’t get it.

To borrow from Lanier’s blurb for Kelly,

You can’t understand the most important conversation of our times without reading ‘Who Owns the Future?’.


[Note: This post was updated on 24 March 2014 to fix a couple of broken links.]

NSA Mining Phone Records? Well, Duh.

A quick post about this week’s flap about electronic spying.  (Stay tuned for a review of Jaron Lanier’s new book, which is quite interesting in this context, as well.)

First—most of the information is deeply secret, so we really don’t know anything.

Second—the guys holding the secrets are very good at keeping secrets and at manipulating public opinion. Some “leaks” are deliberate misinformation; the best misinformation is plausible and partly true.

That said, there appears to be an amazing amount of hand-wringing about reports that the US security agencies are routinely obtaining all the phone records from telecom companies.  Some of the reports seem “shocked” that this is happening, despite the fact that it has been happening for many years.  Note that this is legal (under current laws), reported to Congress (per usual mechanisms), and routine.  I.e., the US national security agencies should be assumed to have all phone records in their databases, as a matter of course.

A slightly more interesting report is that the NSA has access to, or taps into, all major internet services, giving access to all the data there. This is all the more interesting because, unlike the phone records, this story was denied by the companies, and it is not clear what the legal context would be.

UPDATE 6/8/13: The next day. Now the companies have refined their denials, and have spun the most transparently preposterous stories for the media. This has become a mishmash of pious nonsense about transparency in government (from the totally non-transparent mega companies), denials, non-denials, plausible stories, and implausible stories. One thing that should not be obscured: the US security agencies can and do get whatever they need from these large server systems.

So, we must be careful about believing the media reports.  Here’s why I am cautious.

First, these programs are quite similar to what was done with telegraphs and the central phone exchanges in the 20th century (e.g., see James Bamford’s The Puzzle Palace (1983)). So it would make sense that they would be brought up to date with contemporary technology. Anyone who imagined that moving from land lines to the Internet would somehow “defeat” the NSA’s or FBI’s ability to track your communications is living in a dreamland.

So, in my view the reports are plausible:  these are things the US government could want to do.

But what are they really doing? Amassing all the records seems like a lot of work just to find a few social networks. Does this make sense? I don’t know. It depends on what you are trying to do, and where you want to expend your resources.

(And by the way, you would almost certainly want all the financial transactions and other records to go with that, which I assume they would obtain through the same mechanisms. Back-end taps into banks, credit agencies, Walmart, etc.—assuming they don’t already run their own credit tracking/business intelligence companies, which would be a really smart play, since it would have just as much access and could actually make a profit.)
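(For concreteness, here is the sort of social-network extraction that bulk call records make trivial. This is a toy sketch with made-up data, using the networkx graph library; I obviously know nothing about the agencies’ actual tools.)

    # Toy illustration: given bulk call records, a suspect's contact
    # network is a few lines of standard graph analysis. Hypothetical
    # data; requires the networkx library (pip install networkx).
    import networkx as nx

    call_records = [
        ("alice", "bob"), ("bob", "carol"), ("dave", "erin"),
    ]

    g = nx.Graph()
    g.add_edges_from(call_records)

    # Everyone within two hops of a person of interest:
    print(nx.single_source_shortest_path_length(g, "alice", cutoff=2))
    # {'alice': 0, 'bob': 1, 'carol': 2}

The analysis itself is cheap; the hard (and controversial) part is amassing the records in the first place.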

The way I look at it is this: sitting in the NSA and similar seats, I would have a mission to protect the country on the “electronic battlefield”. This means we must somehow make it safe and useful for “our side”, and deprive “adversaries” of using it. This is a tricky and difficult challenge. We can’t just blow it up, because the world economy depends on it. We can’t “own” it all, since it is too distributed, and we no longer make all the key parts as we once did.

So we fight on many fronts. Offensively, we tap into the networks and develop robust analytics, exploiting our (at least slight) computational advantage to know more, faster, than others. We also flood the world with disinformation, false services, fake hackers, and dirty tricks. (It is amazing how many enemies of the US intelligence agencies are “caught” with child pornography on their computers.)

And equally importantly, defensively, we deprive adversaries of the use of the networks. How? Well, the key is to make them believe we are watching at all times, with lethal consequences. It is vital not only that we can find and kill anyone in the world, but also that everyone knows it.

In my view, this is why there are so many “leaks” about drones, about locating specific terrorists via cellphones, and about massive monitoring of social media.  (How difficult is it to feed just enough of just the right information, with just the right amount of theatrical disguise, to all the reporters, bloggers, “activists”, and plain old nut jobs out there?)

As it is, real, serious terrorists (and criminals and rogue nations and whatnot) are well aware of this and must expend tremendous effort to avoid the general internet and other services, and do without many of the common conveniences.

Note: from this point of view, there is nothing wrong with this picture. This is exactly what the NSA et al. should be doing. As long as they exist, it’s their job to do it, and the security of the nation depends on them being good at their job. We might want to replace them with other mechanisms, but that is a topic for another day.

Anyway, for these reasons, I take all public releases on these topics, from whatever source, with a large grain of salt.

We don’t actually know what the capabilities and intentions of the US security agencies are.  We can make some plausible assumptions, but we really don’t know.

And, by the way, if you want to know what real secrecy would be like, I refer you to Charles Stross’s The Atrocity Archives.

Book Review: Mark Mazzetti, “Way of the Knife”

Mark Mazzetti, The Way of the Knife: The CIA, a Secret Army, and a War at the Ends of the Earth, New York, Penguin. 2013.

In recent weeks we have seen much discussion of the US military drone programs, the expanding fleet of remotely piloted aircraft operated throughout the world.

As a techie, I’ve always loved RC aircraft, robots, and remote operated devices. These toys have now been brought to use in the most serious ways, which moves them from “it’s so cool” to  “what the hell are we doing”.

The headline story is the use of drones to execute targeted killings (including “signature” strikes, under rules personally approved by President Obama), in places not officially war zones, of people not in direct combat, and, most controversial of all, of US citizens.

This topic is tangled in the overall “war on terror”, and the US secret wars everywhere (Sanger, D.E., Confront and Conceal: Obama’s Secret Wars and Surprising Use of American Power, New York, Crown Publishers. 2012). The secret war has danced on the edges of the law, developing parallel intelligence and paramilitary forces under the DOD and the CIA, as well as private forces available to both (Scahill, J., Blackwater: The Rise of the World’s Most Powerful Mercenary Army, New York, Avalon Publishing Group, Inc. 2007).

On May 23 we heard President Obama speak on these issues, in a self-described effort to create a legal framework for future presidents (Baker, P., In Terror Shift, Obama Took a Long Path, in The New York Times. May 27, 2013, http://www.nytimes.com/2013/05/28/us/politics/in-terror-shift-obama-took-a-long-path.html?ref=world).

This speech, and its supposed policy implications, are much easier to understand if you read Mark Mazzetti’s “Way of the Knife”, written during the secret discussions leading up to the speech. Mazzetti’s book could have been designed by the Obama administration to underpin the May speech. (Given his earlier reporting, it appears that Mazzetti has been fed a lot of information by the administration, so it is not out of the question that they have used him to get their story out.)

To understand the drone wars, it is important to look back at where they come from. Military use of pilotless aircraft has increased dramatically in the last 20 years, concurrent with technical developments. Singer reports these developments, as the military and intelligence community came to understand the capabilities and economics of remote operated aircraft (Singer, P.W., Wired for War: The Robotics Revolution and Conflict in the 21st Century, New York, Penguin. 2009).

They must be extremely useful, since they are displacing piloted aircraft (pilots and aviators are a powerful force within the services), and even obsoleting spy satellites.

The case is simple.  A drone can do the same job as a piloted aircraft, with almost no risk to US aircrews.  A drone is generally cheaper than a piloted aircraft, even neglecting the expense of aircrews.  Furthermore, drones can do the same job as satellites, usually a better job, and cost much, much less.

The advantages come with relatively few drawbacks. There are surely challenges for drone pilots, attempting to understand what is happening far away through electronic links. And remote operation faces latency issues, reducing reaction time and the ability to precisely track moving objects.

A bigger issue is that a drone operated at a distance blurs the conventional laws of war. Theoretically, the joystick jockey commuting to an air-conditioned base at home is a legitimate military target—possibly endangering civilians. (See Singer 2009 on this point.)

Finally, there is a psychological issue for decision makers.  Armed drones have proved to be effective at targeted strikes, making it possible to remotely execute lethal attacks on individuals no matter where they are. While remote killing is scarcely new, the effectiveness of drones has proved to be seductive.  If we can kill individuals with little risk, there is a temptation to turn national policy into a mafia-like hit list.

This last issue is what Mazzetti focuses on, and one of the points Obama wishes to legalize. (Mazzetti reports that when faced with the real prospect of leaving office in 2012, Obama was motivated to regularize the ad hoc decision making framework, rather than leave his successors a totally free hand.)

Actually, much of Mazzetti’s book is about the competition between the CIA and the DOD, on several fronts. Originally, there was a separation of roles, with the CIA charged with gathering and analyzing information, and the Pentagon applying military force. But effective war and international policy requires both, so there have always been pressures on the DOD to collect intelligence and on the CIA to operate paramilitary forces. Naturally, parallel, even duplicate activities developed, as the Pentagon deployed intelligence gathering units and the CIA used paramilitary forces. And various mercenary groups blur the distinction, by providing the same services to both DOD and CIA, possibly at the same time.

Mazzetti also points out that the two agencies operate under different legal authorities, giving each advantages and limitations in certain situations.  For example, US military forces cannot operate in friendly or neutral countries without serious repercussions. The CIA is under no such restriction. On the other hand, CIA activity is often deniable, which is a huge risk for their operatives.  While a captured soldier may expect to be imprisoned until exchanged or paroled, a captured spy expects to be tried for espionage or murder, best case.

Having duplicate programs is convenient, then, enabling specific operations to be labeled as needed. For example, the raid on Bin Laden was executed by Navy SEALs, which would constitute a military invasion of Pakistan (an act of war). So, voila, the SEALs were assigned to the CIA, making it an espionage operation–possibly still an act of war, but not a violation of US law.

In the case of drones, the Pentagon and the CIA have developed similar programs (i.e., duplicates), which obviously leads to the possibility of chaos and serious questions about who can legally operate drones where and for what purpose.

Drones have been called into play in areas outside declared war zones, for attacks that support “American interests”. Again, the agencies have different legal frameworks. The military has pushed the limits of its legal authority, asserting the right to conduct intelligence anywhere in the world that might become a battlefield—which is pretty much everywhere. The CIA has become focused on manhunts to roll up terrorist groups, which would appear to violate its rules against assassinations. (Neither the DOD nor the CIA is supposed to operate within the US. I’m sure there are legal loopholes when needed.)

From reading Mazzetti’s book, we can immediately understand Obama’s remarks, and the proposal to consolidate drones within the Pentagon. The goal is to reduce the redundancy, reduce the CIA’s focus on targeted killings, and probably to get the CIA back to intelligence work. Also, placing lethal drones in the Pentagon makes them subject to a specific legal framework, rather than an ad hoc patchwork of authorities.

I don’t know enough about the details to judge them. In any case, the President can propose whatever he wants; we’ll have to see how the Pentagon and CIA fare in the bureaucratic fights to come.

One last comment. Mazzetti has two chapters on Somalia that alone are worth reading the book for.

Notes from the Inferno (satire)

Recent upgrades have added several circles of Hell, to punish the sins of the software industry.

Any soul who has developed widely-used software will be required to participate in ALL of the Hells below, simultaneously, as well as any other Hells they have earned, for eternity.  You will be assigned to appropriate sections, according to your sins.

Your participation, while completely mandatory, is “voluntary”, in that you are required to agree to a 50 page EULA, which is updated approximately daily. It is important to note that each circle requires its own EULA (in some cases, a separate agreement is required for each torment), though the EULAs are mutually incompatible, so you will be litigated throughout all eternity.

An incomplete list of new Circles follows.

Circle of iHell

The elegantly designed torments of Hell will be provided to you in special, proprietary iHell versions, which are only available through the iSouls program, the license for which is only available through the HellStore.

Note that the integrated torture system (iTS) requires the use of iHell™ components. The tortures are completely programmable (provided you join the proprietary iTorture Developer Program), though they are written in Subjective C, which is iHell’s unique and undocumented programming environment.

Circle of WinHell

Expensive, yet poorly designed torments will be provided to you through the WinHell associates program. A complete array of tortures is bundled, so there is no possibility of avoiding even one excruciating pain. See the overview, in a MindSucking OwiePoint presentation (reading time 25 hours).

Note that there are 32-bit, 64-bit, and mobile versions, as well as backward compatibility for a half dozen mutually incompatible older versions of WinHell—amounting to at least 256 different versions of each WinTorture.  Each day, you will experience each version of each torture.

Circle of Hell++

Perhaps using the motto about “do no evil” was ill considered, as methinks thou didst protest too much.

In Hell++, the torture is “agile” and decentralized and networked. No torment lasts longer than a few hours, at which point it begins again, perhaps the same, perhaps changed.  Many torments build on other circles of Hell, aggregated for your benefit in Hell++.

All tortures are freely available (though mandatory), and you will also receive the benefit of advertisements while you suffer throughout eternity.

Your excruciation is subject to constant, data-based improvement. This means you will be subjected to the same torment repeatedly (for all eternity), with slight variations that allow Hell++ to refine the pain.

Using “Big Torment” techniques, Hell++ uses the torments of all the damned to fine tune and amplify your own experience, and to target advertisements to fit your own individual Hell.  Personalized torment, for eternity.

You will receive daily missives explaining how Hell++ represents the model for the future of Hell, solving all problems.

Circle of The OpenHell Consortium

Completely open source, OpenHell enables (and requires) you to suffer in a Hell of your own construction. Sadly, there are thousands of OpenHell components, most of which do not work, though you will spend eternity attempting to make them work together.

Not only will you spend each hour of eternity assembling your own torments, you will work as part of an infinitely large “community” of similarly damned souls, arguing about features, schedules, and arcana, without reaching any useful result.

You will be required to agree to EULAs that assure that the fruits of your torture are given away for free, to the benefit of all souls in Hell. Sadly, that means that your karmic debts can never be paid, no matter how long you contribute to OpenHell.

Cool Augmented Reality at University of Illinois Graduation

JUST IN: A documentary about this project, broadcast on the Big Ten Network.

The Champaign Urbana Community Fab Lab contributed to a special Augmented Reality installation for the University of Illinois graduation ceremonies.

See the official press release and ‘about’ page, and the students’ pix.

The Alma Mater statue is being restored, which created a crisis: this would be the only year in decades in which graduates could not get their picture snapped under Alma’s wide arms.

The campus powers that be wanted to do something, and asked Alan Craig if it would be possible to use Augmented Reality to put Alma–restored–on her pedestal, so people could get a snapshot. Alan and I conferred, and the conclusion was YES WE CAN, and moreover, WE MUST.

The image was based on high resolution scans of the statue in the restoration process (which the University should have done for documentation anyway), cleaned up and rendered by the Beckman Institute Visualization Lab led by our colleague Travis Ross.

Besides my own contribution as software integrator, Andrew Knight of the CUCFL created a custom rig to hold a tablet computer on a camera tripod. You can buy similar products, but why would you buy one when you have a Fab Lab? 🙂

I was particularly happy to contribute to this project because it is a great example of using contemporary IT to create experiences that strengthen our social and human bonds.  This installation was meaningful mainly as part of a social ritual (largely invented bottom-up by generations of students), and could only be experienced–together–at the specific time and place.

And I learned a bit about the particular challenges of life size (larger than life size in this case), outdoor, Augmented Reality.

I was surprised how well the AR worked. The large target far from the camera worked as well as or better than the small versions in the lab. The outdoor lighting didn’t seem to be a big problem, though obviously there are lots of conditions that will fail.

On the other hand, it was difficult to test, since the site was not under our control, and wrangling the rig was not trivial. So we had only one actual test of the final product, with no chance to iterate.  Fortunately, it worked well enough we didn’t need to iterate.

We also found a number of technical problems we need to understand, such as how to map the virtual geometry to a complex setting in the real world.  More on these issues later.
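To give a flavor of the geometry problem: the tracker must recover the camera’s pose relative to the target before the virtual statue can be drawn in place. Here is a generic sketch of that step using OpenCV; this is not a description of our actual toolchain, and all the numbers are hypothetical.

    # Generic sketch of the core AR geometry step: from known 3D points
    # on the target and their detected 2D image locations, recover the
    # camera pose, then project virtual geometry into the frame.
    # Hypothetical values throughout; not our production pipeline.
    import numpy as np
    import cv2

    # Corners of a 2 m square planar target, in target coordinates (meters).
    object_pts = np.array([[0, 0, 0], [2, 0, 0], [2, 2, 0], [0, 2, 0]],
                          dtype=np.float32)
    # Where those corners were detected in the image (pixels).
    image_pts = np.array([[310, 420], [900, 410], [910, 980], [320, 990]],
                         dtype=np.float32)
    # Camera intrinsics from a prior calibration.
    K = np.array([[1000, 0, 640], [0, 1000, 480], [0, 0, 1]], dtype=np.float32)
    dist = np.zeros(5)  # assume negligible lens distortion

    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist)

    # Any virtual point in the target's coordinate frame can now be
    # projected into the image and drawn there:
    statue_top = np.array([[1.0, 1.0, 4.0]], dtype=np.float32)
    pts2d, _ = cv2.projectPoints(statue_top, rvec, tvec, K, dist)
    print(pts2d)

Mapping the virtual geometry onto a complex real-world setting is then “just” a matter of choosing and registering that coordinate frame, which turned out to be harder than it sounds.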

I should note that Joel Steinfeldt of the University Public Affairs Office was the key organizer who made it all happen.  This project was a collaboration of many departments at the UI, and had a really tight schedule (with a very immovable deadline), so many things had to come together just right to get enough working to be useful. Joel was essential to this process:  well done!

Draft White Paper: Community Fab Lab Experience and Perspectives

Here is a draft of a White Paper I’ve been working on this year.  This is a review and analysis of the Champaign Urbana Community Fab Lab.  This is a much longer and deeper treatment of the points made in my presentation at HASTAC.

The Fab Lab is something you DO, not something you talk about, so this document is incomplete and will need to be revised in the coming months.

Here is the abstract and an excerpt.  See the draft at the link below.

Abstract

Digital technology is enabling new forms of community, new forms of expression, and changes in the living culture of contemporary life in many ways. One example is the emergence of local community-based fabrication spaces. This paper discusses one such space, the Champaign Urbana Community Fab Lab (CUCFL), which deploys a combination of technical, human, and social resources to develop local technological capabilities and opportunities. The CUCFL community is also connected to a network of like-minded Fab Labs and Maker spaces across the planet, as well as global markets and opportunities. These digital connections enable broad knowledge sharing, exchange of designs, and discovery of expertise.

The success of the CUCFL and similar labs depends on a combination of technology, a local community organization, and an open culture of learning, teaching, and sharing. All these elements are critical. These community labs also have significance beyond the local users and specific technologies: they are models of democratized technology, and they harken back to earlier humanist workshop traditions, reintegrating technology, art, business, and community.

The paper discusses the technical and social background of personal fabrication and the emergence of local community maker spaces, then considers one example of a local community-based Fab Lab in some detail, and concludes with implications of this phenomenon.

1. Introduction

Digital technology is enabling new forms of creative and scholarly communities, new forms of expression, and is augmenting the living culture of contemporary life. Contemporary technology enables enhancements to existing techniques, some new methods (such as pattern recognition and data mining), new forms of expression, and the adoption of new approaches to old problems of creation, dissemination, and communication. Many of these ubiquitous digital technologies have reached wide audiences beyond traditional engineering, scholarship, and art, opening the way for “humanistic” practices that are reintegrating with the living culture of contemporary life.

One such reintegration has emerged around digital fabrication technology, especially in the form of personal fabrication and local community fabrication spaces. This technology is widely viewed as revolutionary, potentially transforming the global industrial and consumer economy. The availability of personal fabrication technology, for the design and realization of products, opens the way to the same transformations as seen in the realm of “bits”, now in the realm of “atoms”, including global scale peer-to-peer sharing and the exploitation of “fat tail” phenomena.

As with any revolution, we will find evidence of it in local communities. This paper considers a local community-based Fab Lab which combines several technical and social approaches to fostering collaboration and creativity. The Champaign Urbana Community Fab Lab (CUCFL) combines technical, human, and social resources into a community-building process to develop local technological capabilities and opportunities. The CUCFL provides access to a suite of digital technologies, and to the knowledge necessary to use them, that was not previously available outside relatively privileged settings such as university labs.

The CUCFL is building a community of makers, dedicated to learning and teaching, with a consciously inclusive ethos. The lab has successfully welcomed many into our community of makers through active learning within a supportive and friendly environment. The volunteer ethos, in which everyone, not just a privileged elite, is a creator, a learner, and a teacher, has encouraged people to discover just how much they know, and how much they can contribute. The CUCFL community is also connected to a network of like-minded Fab Labs and Maker spaces across the planet, as well as global markets and opportunities. These digital connections enable broad knowledge sharing, exchange of designs, and discovery of expertise.

The success of the CUCFL depends on a combination of technology, a local community organization, and an open culture of learning, teaching, and sharing. All these elements are critical, and to date, have sufficed to sustain the lab. Community fab labs have broader significance, beyond their local users and specific technologies. They are models of democratized technology, which may have profound social, educational, and personal effects that change communities, economies, and individuals. Interestingly, it can be argued that a community fab lab harkens back to earlier humanist workshop traditions, reintegrating technology, art, business, and community.

The paper is laid out as follows. Section 2 discusses the technical and social background of personal fabrication, and the emergence of hundreds of local community maker spaces. Section 3 considers one example of a local community-based Fab Lab in some detail. Finally, Section 4 concludes with some implications of this phenomenon.

Download the full White Paper [PDF]
