National Geographic “Expeditions Granted”

This might be interesting to pass along.

National Geographic is calling for “Expeditions” that they will support with some funding.

The thing I like most about this is their expansive view of what “exploration” might mean.  Not every one of their selected projects seems interesting to me, but they have quite a few good things in here.

For example,  this one touches on several things I care about, and seeks to use digital technology to enhance cultural preservation and live dance/movement.  I don’t know much else about M. Scofield, but this project is right on target, and something I wouldn’t mind contributing to.

Building a toolkit to help preserve the unique styles of movement (dance, martial arts, etc.) from around the world

Comment on Kramer et al. Facebook Experiment

Let me join in the hue and cry about the Facebook study published last week, “Experimental evidence of massive-scale emotional contagion through social networks,” by Adam D. I. Kramer and colleagues.

The results weren’t especially earth-shattering: most of us believe that biasing news can bias what people talk about.  After all, isn’t this what advertising is all about?

The uproar was about the fact that the experimenters manipulated the news feed contents of thousands of ordinary Facebook users and then observed their postings.  This was done without explicit notice or permission.

As has been pointed out many times, this project could not be done at a legitimate university or research facility, which requires strict protection of the rights of human subjects.  In particular, any research on humans requires legally operative informed consent.  Absent that, it is against the law and will result in discipline, firing, fines, lawsuits, and/or prosecution.

As a historical lesson, the legal foundation for these laws is built on the legacy of the Nuremberg trials, which for the first time established that experimenting on humans without consent is not just a civil matter (e.g., assault or battery), but a crime against humanity.  Serious freaking stuff.

In the US this has been implemented by so-called “Institutional Review Boards” (IRBs) at every major institution, which review all research beforehand and check that proper process was followed.  The IRBs consider whether collecting personal information is justified by the research in question, whether the consent procedures are adequate, and seek to ensure that data will be protected and not used for any purpose other than the specified research.  Researchers must also consider the welfare of the participants, very broadly construed: provide them with information about the results and findings, and follow up if there is any distress or harm.

Note that all research must be reviewed, even “harmless” questionnaires.  The IRB will decide if you are exempt or not.  (There is an exemption for basic software testing–mercifully for everyone the IRBs don’t have to review every time you ask someone to try your new web page.)

Quaintly, videotaping people for research is considered particularly intrusive, and is rigorously policed.  Permission must be obtained, and the recordings must be saved in secure, encrypted vaults and never used except for the research project.  No “long tail” on this data; it’s not legal.

This particular restriction is increasingly ironic, as the participants were no doubt videoed multiple times by security cameras, snoops, and their friends on the way to the experiment, and may well be videoing the experiment themselves with their own phones.  The only thing you have to get permission for is the science.

Needless to say, the research in question would never have passed an IRB review, on several grounds.

So how did this happen?

Obviously, the rules don’t apply to big companies.  Actually, they do, but since the research wasn’t publicly funded, they aren’t routinely policed.

I note that in both advertising and software engineering, it is routine to collect behavioral data without explicit permission.  After all, advertising is all about manipulating people without their consent.  I have no doubt that it never occurred to the Facebookers that this kind of monkeying with people could be wrong; it’s their entire business model.  (When a news source slants the news, it is evil; when an aggregator selects which slants to present, it is what?)

In fact, the official “apology” actually says, “The goal of all of our research at Facebook is to learn how to provide a better service,” explicitly acknowledging that this is not legitimate scientific research (but also diving into the “software testing” loophole).

One more point. In the public sector, the research would also be reviewed for financial conflict of interest.  This is yet another way that this research might have failed to pass muster.  The academic researcher should not be providing “objective science” dressing for commercial research, at least not without scrutiny.  In this case, it appears harmless enough, but there is clearly a potential conflict of interest between the funder and the interests of the users who are exploited in the study.

Overall, this study might have been allowed by an IRB, once reviewed and carefully justified.  It might have been considered legitimately public information, and harmless.  Also, being a software service, it might pass muster as algorithm testing.  Maybe.

But you can’t just blow off ethical tests in the name of “improving our (for profit) service”.

The inexplicable thing for me is how Guillory and Hancock (of UCSF and Cornell) could justify participating without getting IRB approval from their institutions.  They should have known better, and they should be in hot water for embarrassing their schools with such visible misbehavior.

One last thing: regardless of this case, the IRB rules certainly deserve to be reviewed.  The supposed “intrusiveness” of video and other data needs to be reconsidered to bring it in line with everyday reality.  Routine questionnaires do not need to be treated as if they were experimental brain surgery.  Other factors should perhaps be added, particularly about the responsibility of privately funded organizations: privatized crimes against humanity aren’t OK just because the government wasn’t involved.

And engineers and business majors should be taught about research using human subjects, and that the principles apply to them.

IEEE Computer Issue on Mobile App Security

More catching up, looking at the June issue of IEEE Computer.

The issue has the theme “Mobile App Security,” which is kind of an oxymoron.  Several of the articles are pretty technical, yet pretty irrelevant.  Securing a device that isn’t secure and uses unsecured networks is just not possible (though we can probably make the device unusable if we add enough biometrics and other trickery).

The best article is “Imagineering an Internet of Anything” by Irena Bojanova and colleagues.  I’ll read anything about “imagineering”!  Forget the Internet and the Internet of Things; we’re talking the Internet of Everything, and the Internet of Anything.  The IofE includes anything that sensors can sense, etc.  The IofA includes actors of many kinds: algorithms, groups, who knows?

Example comment:

“[..W]e posit that ‘things’ can form communities similar to clouds. These things can be rogue.” (p. 76)

Basically, everything you ever thought about privacy and security not only breaks, it doesn’t even make sense.  Identity?  Autonomy?  Trust?  These things can’t really be defined.  When the system has to try to figure out “Are you a thing, a human, or another living thing?” (p. 74), we’ve got big, big problems.

Cool article.

The second best article may be “Securing the ‘Bring Your Own Device’ Paradigm” by Alessandro Armando and colleagues.  The title alone gives you the picture.  In the never-ending pursuit of “efficient markets,” capital provides less and less to workers: no contract, no benefits, no office, and these days, no IT.  “Bring your own job” seems to be the theme.

Anyway, the “cunning” idea that workers can just use their own computers for work has serious implications when the devices are as unprotected and unprotectable as mobile devices and public networks.

The article actually talks about a complicated system that mimics what IT departments used to do for in-house networks.  It’s not clear that the resulting device is actually secure (or usable), depending on the configuration.  In any case, it’s clear that the “solution” is for the company to manage “your” device, so it isn’t really BYOD so much as “you pay for the device and do the sysadmin for us”.

Finally, Hal Berghel has an odd piece, “Leadership Failures in the National Security Complex.”  He complains about military thinking, which he finds completely wrong for “cyber” tasks.  No points awarded for discovering that the military brass has plenty of, well, PowerPoint jockeys and politicians.  (The odd part is how certain he seems to be about how to do “effective cybersecurity leadership.”)

His thesis is that the broad surveillance, with its disregard for privacy and political sensitivities, reflects poor leadership drawn from the military ranks.  (A computer science professor, he clearly thinks the world should be led by computer scientists.)

For me, the piece is an interesting counterpoint to my own assumption that US cyberwar folks know what they are doing:  public ham-handedness is a powerful infowar message.  Even the dullest adversary has got the picture by now:  “we are watching everyone, everywhere”.  As I have pointed out, this message is extremely valuable, denying swaths of IT and Internet to adversaries, and also prompting friends to pay attention to cyberdefenses.

But Berghel argues that this is incompetence at the top, which has led to stupid, poorly thought out programs.  Incompetence is always my first choice to explain human behavior, so I can’t dismiss this out of hand.

A hybrid theory would be that much of the reported activity might be badly thought out, and the revelations embarrassing, but smart people have used the plodding incompetence to (a) mask the actually useful stuff and (b) create the infowar message I refer to.


Interesting Articles from June CACM

Catching up on the June Communications of the ACM, I found it a toss-up for the most interesting article.

“Neuromorphic Computing Gets Ready for the (Really) Big Time” by Don Monroe

“Neuromorphic” computing (a science fiction favorite for so many reasons: hacking brains! programming minds! the Borg!) has been languishing in the shadow of Moore’s Law: silicon-based hardware has had such a great run that carbon-based computation has not had much traction.

Don Monroe reports that this comparative disadvantage is fading as the screaming Moore’s curve levels off.

Crucially, researchers are not trying to recreate the brain precisely (though, of course, neuroscience is working hard to understand brains), but to use its physical and functional principles to construct workable “chips” that do something interesting.

It is now possible to create chips with neurons at the rough scale of a mouse’s brain, though much smaller (insect scale) chips can do very interesting things.

Two huge mountains have to be climbed (and the Sherpas are already in the foothills, of course.)

Obviously, one wants these systems to learn.  Ironically, many of the cleverest silicon algorithms are simulating neural learning behavior.  So we would expect a neuromorphic system that can learn to be really spiffy: small, cheap, etc.

Second, we need to be able to program the darn things, one way or another.  And it’s likely to be “another,” since von Neumann-style logic doesn’t make much sense.  Current systems are doing some things adapted from the past that look a bit like modular programming.  The fact that it works at all is amazing to me, but clearly there is lots of blue sea here to explore.
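For a concrete feel of what these systems compute, here is a toy leaky integrate-and-fire neuron, the standard textbook spiking-neuron model, in a dozen lines of Python.  This is my own minimal sketch; the parameters are illustrative and don’t come from any chip in the article.

```python
# Toy leaky integrate-and-fire neuron.  All parameters are
# illustrative, not taken from any particular neuromorphic chip.
import numpy as np

dt, tau = 1.0, 20.0                    # time step, membrane time constant (ms)
v_rest, v_thresh, v_reset = 0.0, 1.0, 0.0
v = v_rest
spikes = []
rng = np.random.default_rng(42)

for t in range(200):
    i_in = rng.uniform(0.0, 0.12)      # noisy input current
    v += dt / tau * (v_rest - v) + i_in  # leak toward rest, plus drive
    if v >= v_thresh:                  # threshold crossing = a spike
        spikes.append(t)
        v = v_reset                    # reset after firing
print(f"{len(spikes)} spikes at t = {spikes}")
```

Wire a few thousand of these together with adaptive synapses and you start to see the programming problem: there is no program counter to point at.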

Time for a Change

3D printing, hell. Give me 4D printing.

Since 3D printing is now in the hands of children and grandmothers (and me), it stands to reason that it no longer is the “next thing”.  Let’s add a dimension.

Neil Savage reports on “4D Printing”:

So, combining “smart materials” with 3D printing, we design objects that are printed in one piece (out of multiple materials) and then unfold or move or whatever.

Another science fiction favorite:  a “seed” that grows into a chair when you get it home, or whatever!

Again, lots of mountains to climb, including:

Multimaterial printing isn’t trivial: the additive processes are pretty specific to each material.  Just for instance, try printing copper on a plastic base: the hot copper vaporizes the plastic.  Etc.

Behavior modelling.  Not only does the CAD system have to know about the geometry and materials (we know how to do this), it has to have a really good model of the dynamics and interaction of the materials.  That last bit is, ahem, “not solved.”
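To give a flavor of the simplest possible behavior model, here is a toy calculation of how a printed two-material strip curls when heated.  I’m assuming equal layer thickness and stiffness, which reduces Timoshenko’s classic bimetal formula to curvature = 3(α₂ − α₁)ΔT / (2h); the material values are rough PLA-ish and nylon-ish guesses, not data from the article.

```python
# Toy behavior model: thermal curling of an equal-thickness,
# equal-modulus two-material strip (simplified Timoshenko bimetal
# formula).  Real multimaterial prints need far richer models.

def bend_radius(alpha1, alpha2, d_temp, thickness):
    """Radius of curvature (m) of a heated bilayer strip.

    alpha1, alpha2 -- thermal expansion coefficients (1/K)
    d_temp         -- temperature change (K)
    thickness      -- total strip thickness (m)
    """
    curvature = 3.0 * (alpha2 - alpha1) * d_temp / (2.0 * thickness)
    return 1.0 / curvature

# Rough PLA-ish vs. nylon-ish expansion values, 1 mm strip, +40 C:
print(bend_radius(68e-6, 95e-6, 40.0, 1e-3))  # ~0.6 m bend radius
```

Even this cartoon needs two material constants and the geometry; the real problem, where materials flow, swell, and push on each other, is the unsolved part.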

(I didn’t realize that Autodesk had a Bio/Nano/Programmable Matter Group.  Cool.)

Article in July “Very Much Wow”

This month’s Very Much Wow published my article titled “You Shall Not Crucify The Internet On This Cross of Bitcoin.”

The article discusses some of the nineteenth-century U.S. currency wars: “Free Silver” vs. “Greenbacks” vs. “Gold Standard” (the title refers to William Jennings Bryan’s famous speech condemning the gold standard, the “cross of gold”).  This episode in US politics illustrates how currency technologies are connected to competing cultural narratives.

I lay out some of the thinking that has developed in this blog, discussing the significance of the plethora of  (technically similar) cryptocurrencies.

Please check out the full article (p. 33).

I note that in the same issue, the widely regarded Sensei Andreas Antonopoulos is interviewed.  He actually expresses similar views, imagining that everybody could have their own cryptocurrency. It is gratifying to see others thinking the same way.

By the way, Antonopoulos compares cryptocurrencies to Pokémon cards, which is an extremely apt analogy!  The best thing about this metaphor is the picture I get of Marc Andreessen (for instance) as a dragon sitting on a huge pile of…very expensive Pokémon cards! 🙂

(Don’t forget to tip the magazine if you like the articles!)

“Internet of Things”: Probably Say “No” For Now

There is much excitement these days about the coming of the “Internet of Things” (a term coined by Kevin Ashton in 1999), which is now coming true.

I’ve been interested in this stuff since before Ashton coined the phrase.

There is so much cool stuff that can be done with smart environments; how could we not be excited?

However, along the way, things languished in unimaginative and short-sighted apps (honestly, I don’t need to automate the light switches; they work just fine) and a lack of contact with the real world.

My favorite rant in this area is about the plethora of “intelligent” rooms that sense the wishes of the inhabitant.  Note the singular.  One person per domicile.

So, it knows your schedule and anticipates when to start the coffee, etc.  It adjusts lighting and music to the tastes of the (one) user.  And so on.  (Also, these systems tend to be given the personality of a servant, a psychologically and sociologically troubling fantasy.)

In real life, for most real people, there is more than one person in the family.  So you can’t talk about “optimizing” for the one user; that makes no sense.  And optimizing for multiple people is hard, and in any case is what relationships are about.  Not really what your thermostat should be doing.

Example of why this is hard:  many services offer to take my collection of music recordings, crunch on them, and then make recommendations for me.

Here are some real world problems with that concept.

First, my collection extends decades back.  My tastes have changed over the years.  (Essentially, I’m not the “one person” the algorithm would have to assume.)

Second, the collection was merged with my spouse’s decades ago.  She has her own tastes, which also have evolved.
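To make the problem concrete, here is a toy sketch (mine, not any real service’s algorithm) of what a single-profile recommender implicitly does with such a collection: it averages several divergent taste vectors into one lukewarm profile that matches nobody.  The genre weights are invented.

```python
# Toy illustration: one merged collection, treated as one "user,"
# averages away everyone's actual tastes.
import numpy as np

#                    [classical, jazz, metal, pop]
me_then = np.array([0.1, 0.2, 0.9, 0.1])   # my tastes, decades ago
me_now  = np.array([0.7, 0.8, 0.1, 0.1])   # my tastes today
spouse  = np.array([0.1, 0.1, 0.0, 0.9])   # her tastes

# The algorithm sees one collection, so it fits one profile:
profile = (me_then + me_now + spouse) / 3
print(profile)   # [0.3 0.37 0.33 0.37] -- a bland blend matching no one
```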

So how would machine learning deal with this?  I have no idea, but I’m sure I don’t care.

Where was I?  Oh, right.  Internet of Things.

What’s coming out this summer is a bunch of things from Internet/mobile app people in alliance with appliance makers and marketers.  The technology is based on home-scale Internet gear: moderate bandwidth, light security, almost no local storage or cycles.

For some reason, people seem to think that hooking all this up to the Internet, kind of like a phone,  will be OK.  Are you nuts?

Aside from the sheer insanity of letting Google anywhere near my thermostat (to pick one example), there are so many reasons I don’t want my house full of low grade computing.

For example, we hear about cases such as a hack attack that dropped malware on home storage devices and secretly mined cryptocoins.  This went on for months because (wait for it) random homeowners in Taiwan do not spend time monitoring CERT bulletins and emergency patches for their home appliances.

When I read this story I realized that, even though I am experienced and well informed, I have no idea how many of the devices in my house may have Bluetooth or WiFi, nor whether they could be hacked.  How would I know?  Why would I want to know?
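(For the curious: a first cut at the “how would I know” question is just to see what answers on your home network.  Here is a minimal sketch; the 192.168.1.x subnet and the port list are assumptions about a typical home router, and of course this says nothing about Bluetooth radios or about whether anything it finds is patched.)

```python
# Minimal sketch: enumerate devices on a home subnet by attempting
# TCP connections to a few common ports.  Adjust SUBNET to match
# your own router's address range.
import socket
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1."          # assumption: a typical home router default
PORTS = [22, 80, 443, 8080]    # ssh, http, https, alt-http

def probe(host):
    """Return (host, list of ports that accepted a connection)."""
    open_ports = []
    for port in PORTS:
        try:
            with socket.create_connection((host, port), timeout=0.5):
                open_ports.append(port)
        except OSError:
            pass                # closed, filtered, or no such host
    return host, open_ports

with ThreadPoolExecutor(max_workers=64) as pool:
    hosts = (SUBNET + str(i) for i in range(1, 255))
    for host, ports in pool.map(probe, hosts):
        if ports:
            print(host, ports)
```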

Glancing at the web documentation for Google Nest, I found that I was supposed to be reassured by the fact that they use OpenSSL (!) and will ask before sending data to an app on your phone.  Problem solved!

If you want an even more thorough walk-through, I refer you to recent books on this topic.

Until there is an application I really need or want, and a system that is really isolated from the Internet, I’m saying “no” to the Internet of Things.
