Category Archives: University of Illinois at Urbana-Champaign

A Warning From The Dawn Of The Internet

These days it seems like every pundit in the world is “discovering” that the internet is a bad thing.  And many seem to think this is news.

It isn’t news.

Way back when, at the very beginning of the World Wide Web, we saw where it was going, and warned the world.

I recently dug out an old article about “Digital Commerce” [1] from the National Center for Supercomputing Applications, where Marc Andreessen’s “genius” technology came from.  (You didn’t think he invented the technology he commercialized, did you?)

I want to be clear here:  this article was written in 1995.  Netscape was six months old and hadn’t IPOed yet; MS Internet Explorer wouldn’t be out for another six months.  Amazon booted up later that year; PayPal was three years in the future. Zuck was what, in third grade?  Vitalik Buterin (Mr. Ethereum) was in diapers.  Travis Kalanick (Mr. Uber) was applying to UCLA (from which he eventually dropped out).

In the article, Sensei Adam Cain (a student at the time, who didn’t drop out) and I described the landscape of the near future of internet commerce [1].  The early tech was very crude and laughably out of date now, but we could clearly see what was coming.

The most interesting thing to me is the discussion at the end of the paper.  We noted that digital commerce was going to be destabilizing (“disruptive”) in many ways, and would challenge governments and laws.  We worried about digital markets, and about digital cash (pre-PayPal and pre-Bitcoin!), for exactly the reasons we now worry about them.

Well, that all surely happened.

“As more and more economic and social activity is conducted online, what will this mean for society and the economy? The prognosis is far from clear. Digital commerce occurs with blinding speed, unrestrained by boundaries or distance- often beyond human comprehension and regulation. Will the digital economy be wildly volatile, full of lightning surges and panics of worldwide proportions? Can nation-states, as we know them, exist without a monopoly on money? If not, then what sort of governments, laws, and public institutions will come to exist?” (p. 39)

Equally important, we called out the social effects, the rise of digital communities, and the corresponding erosion of physical communities.  In a memorable phrase, I argued that “A home page is no substitute for a home or a hometown.”   And we asked the vital question, “If digital commerce does not offer support for a decent way of life, what good is it?”  (p. 39)

Quite.

“Digital commerce may help make new virtual communities economically viable. Just as small towns and regions are held together by cultural ties and supported by local economic activity, online communities will form that will be supported by digital economics. In the end, though, commerce is not culture, and digital communications are cold and impersonal. A home page is no substitute for a home or a hometown. If digital commerce does not offer support for a decent way of life, what good is it?”

We knew, right at the beginning, the deep, dangerous changes we were initiating.

And we told you.

So don’t say we didn’t warn you.


  1. Adam Cain and Robert E. McGrath, “Digital Commerce on the World Wide Web,” in NCSA access magazine. 1995, National Center for Supercomputing Applications: Urbana. p. 36-39. http://hdl.handle.net/2142/46291

 

Winding Down Black Hole Research at NCSA

In another installment of “winding down science at the NCSA” (a series that I wish wasn’t happening), this month sees a report on a huge computational simulation of the formation of black holes [2].

This is a particularly apt “last paper” for the National Center for Supercomputing Applications (NCSA), because the center was founded by astronomers who wanted to enable large-scale computational cosmology. Indeed, one of the coauthors of the new paper is Mike Norman, who was one of the first users of “the Cray” when NCSA opened, and probably used as much compute time over the decades as any single user.  So it is very appropriate for him to be involved in this “final” effort.

The new study also shows that, if NCSA accomplished nothing else, it has contributed to major advances in the computational modelling of black holes!

The study itself is pretty far beyond my own understanding of the early universe and galaxy formation.  But I do understand the significance of the 72 terabyte dataset, the result of large-scale multiphysics simulations.

The project used these earlier computations to identify “areas of interest” (two 40 megaparsec cubes), which they simulated at even finer detail.  These new simulations yielded interesting new understandings of conditions in the early universe, and of how black holes could have formed at that time, which is the subject of the paper.

This paper also represents the transition to the next systems.  The original 72TB dataset was computed on Blue Waters at Illinois a few years ago.  The new computations used the newer and still-operating “Stampede2” in Texas.

I’ll note that the science team is spread across a number of institutions, and ran the simulations remotely over high speed internet connections.  It has always been this way: these computational astronomers are nomads, always hunting cycles wherever they can be found.  (Mamas, don’t let your babies grow up to be computational astrophysicists.)

On a historical note, it is no coincidence that supercomputing centers like NCSA and today’s descendants have always been leaders in high speed networking.  There are only a handful of high-end computers, and the users are all over the world.  So we had to invent network technology, just to do the job.  And out of that came a minor side project: the World Wide Web.

You’re welcome.


  1. NCSA, Black hole research featuring simulations from the Blue Waters supercomputer published in Nature, in NCSA News. 2019. http://www.ncsa.illinois.edu/news/story/black_hole_research_featuring_simulations_from_the_blue_waters_supercompute
  2. John H. Wise, John A. Regan, Brian W. O’Shea, Michael L. Norman, Turlough P. Downes, and Hao Xu, Formation of massive black holes in rapidly growing pre-galactic gas clouds. Nature, 2019. https://doi.org/10.1038/s41586-019-0873-4

Winding Down Science at NCSA

The National Center for Supercomputing Applications (NCSA) in Urbana is fading into history. Founded to enable computational science, NCSA boosted the Internet with the release of the Mosaic web browser.  In recent years, NCSA has operated the Blue Waters supercomputer (with tens of thousands of compute nodes and petabytes of memory, Blue Waters is something like four orders of magnitude bigger than “the” Cray that NCSA was founded to operate).

But all things come to an end, and funding for Blue Waters is ending soon, so current science must wrap up and move to other systems.  (By a strange coincidence, prime replacement systems are located in heavily Republican Texas.)

One of the more romantic projects at NCSA in recent years has been the Dark Energy Survey (DES).  As part of a worldwide collaboration, NCSA has hosted the data processing and archives for DES. Notably, the data is collected in South America and zapped up the spine of the Americas every night to Illinois.

This too is winding down.  Data collection for this initial archive officially ended January 9, 2019, though the archive will continue (and, we all hope, will be preserved for future use) [1].  The telescope will presumably continue to be used, but apparently not funded by the US. Perhaps Chinese funding will be forthcoming?

This is not just a cool IT project (I mean, it’s a transcontinental data pipeline!). The press release boasts about the massive amount of data collected and the groundbreaking results: observations of “a billion galaxies that are billions of light-years from Earth”, maps of Dark Energy, contributions to the study of gravitational waves, and other important findings.

So, the first Dark Energy Survey is a wrap. Well done, all.  We haven’t seen all the results yet; we can hope that the massive effort will yield even more important understanding of Dark Energy.

-30-


  1. Kristin Williamson, NCSA brings Dark Energy Survey data to science community into 2021, in NCSA News. 2019. http://www.ncsa.illinois.edu/news/story/ncsa_brings_dark_energy_survey_data_to_science_community_into_2021

 

New LIGO Data Analysis Technique

For many decades, one of the big questions for astrophysics has been how to confirm the existence and behavior of gravitational waves. Theoretically, they must exist, but they are awfully hard to detect amid all the “noise” of the rest of the universe going about its everyday business.

In the last few years the Laser Interferometer Gravitational-wave Observatory (LIGO) project has successfully detected gravitational waves for the first time.  This heroic effort involves careful extraction of the interesting signal from a mass of noise.  Details are available in published papers and the data and code are available at the LIGO Open Science Center.


The basic goal of this software is to find needles in a messy haystack, which is actually a pretty generic task with lots of possible techniques that might work. Not surprisingly, other approaches have been applied.

The project has employed trendy “citizen science” techniques, crowdsourcing via “Gravity Spy” (on Zooniverse, naturally), and a variant of SETI@home (Einstein@Home).  The latter searches through masses of data to identify useful objects.  The former uses human perception to examine rendered views of the data and classify them.  These human classifications are used as input to other software.

Over the last several years, a team at the National Center for Supercomputing Applications has applied machine learning techniques to this problem.  They are publishing their latest results this spring [2].

The project trains neural networks on the initial LIGO studies and the Gravity Spy classifications, and then uses transfer learning to create a classifier (plus unsupervised clustering of new glitch types) that exceeds the capability of the original system.  I suspect that it probably exceeds the capabilities of the Gravity Spy crowd, as well.
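To make the transfer learning idea concrete, here is a minimal sketch in PyTorch. This is my own illustration, not the team’s code: the choice of network, the number of glitch classes, and the training step are all assumptions. The point is simply that a network pre-trained on ordinary images is frozen, and only a new final layer is trained on labeled glitch spectrograms.

```python
# A toy sketch of deep transfer learning for glitch classification.
# NOT the team's code; class count, data layout, and training loop
# are assumptions made purely for illustration.
import torch
import torch.nn as nn
from torchvision import models

NUM_GLITCH_CLASSES = 22   # hypothetical number of Gravity Spy glitch categories

# Start from a network pre-trained on ordinary images...
net = models.resnet18(pretrained=True)

# ...freeze the early feature-extraction layers...
for param in net.parameters():
    param.requires_grad = False

# ...and replace the final layer with one sized for the glitch classes.
net.fc = nn.Linear(net.fc.in_features, NUM_GLITCH_CLASSES)

# Only the new layer is trained, using the human (Gravity Spy) labels.
optimizer = torch.optim.Adam(net.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def train_step(spectrograms, labels):
    """One training step on a batch of (N, 3, H, W) spectrogram tensors."""
    optimizer.zero_grad()
    loss = loss_fn(net(spectrograms), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```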

Well done, all.


That this work is being done at a supercomputing center is a hint that it sucks up a lot of CPUs, or, in this case, GPUs.  NCSA and other HPC centers have been developing ways to use GPUs for various numerical problems for a couple of decades (not even counting the ILLIAC IV, which was essentially a prehistoric GPU the size of a warehouse).

 

In one sense, this isn’t a surprising result.  Neural nets should be able to solve this problem, given enough training data.  But there is a huge difference between should and really does.  Which goes to show you why you need the kind of multi-discipline, full-service HPC center that NCSA has pioneered for 35 years.


  1. Daniel George and E. A. Huerta, Deep Learning for real-time gravitational wave detection and parameter estimation: Results with Advanced LIGO data. Physics Letters B, 778:64-70, 2018. http://www.sciencedirect.com/science/article/pii/S0370269317310390
  2. Daniel George, Hongyu Shen, and E. A. Huerta, Classification and clustering of LIGO data with deep transfer learning. Physical Review D, 2018. https://journals.aps.org/prd/accepted/14078Q33Z9aEa21d90d88b77cee7844e90f7d512d

 

WaggleNet: IoT Sensing for Beehives

Yet more bee research from the University of Illinois Urbana-Champaign: WaggleNet.

This spring, undergraduate students report a neat project, implementing an ad hoc sensor net to measure conditions in beehives [1].

The students pulled together contemporary technology (low-cost, low-power computers, radios, and sensors) to implement an inexpensive package suitable to drop into beehives in the field. The data streams hop from one node to another until they find a router and finally reach an internet-connected data repository.  The data can be analyzed to monitor the hive environment and other aspects of the bees’ conditions.
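As a rough illustration of the kind of hop-by-hop relaying described above, here is a toy sketch. It is my own illustration, not WaggleNet’s code; the message format, the neighbor table, and the radio interface are all invented for the example.

```python
# A toy illustration of store-and-forward relaying in a hive sensor network.
# Not WaggleNet code: the message format, neighbor records, and radio API
# are invented for this sketch.
import json
import time

class SensorNode:
    def __init__(self, node_id, radio, neighbors):
        self.node_id = node_id
        self.radio = radio          # assumed to provide send(dest_id, payload_bytes)
        self.neighbors = neighbors  # list of (neighbor_id, hops_to_router) pairs

    def read_sensors(self):
        # A real node would read temperature/humidity hardware here.
        return {"temp_c": 34.5, "humidity_pct": 61.0, "ts": time.time()}

    def forward(self, message):
        """Relay a message one hop toward the nearest router node."""
        message["hops"] = message.get("hops", 0) + 1
        next_id, _ = min(self.neighbors, key=lambda pair: pair[1])
        self.radio.send(next_id, json.dumps(message).encode("utf-8"))

    def report(self):
        """Originate a reading and start it hopping toward the repository."""
        self.forward({"src": self.node_id, "data": self.read_sensors()})
```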

The press release notes that this is an interesting project for several reasons.  The initial idea is driven by a “customer”, a beekeeper who wants to monitor the bees over winter.  The technology requires solving the whole end-to-end problem, which includes not only the electronics, packaging, and networking, but also dealing with the real world of beehives.

This is an undergraduate project, and nicely done.  It’s fine that it isn’t exactly ground-breaking.  But let me drop some links  to other work they may want to look at.


This is a great age of sensornets and “smart farming”.

There probably have been many, many beehive sensing projects (not to mention zillions of agricultural sensing designs). I know of at least one project in Utah that is extremely similar to WaggleNet.

The team expresses a desire to make this available to beekeepers everywhere.

I certainly encourage the effort to make an open source version.  I’ll note the “open source hardware” movement as one place to publish it (e.g., see this, this, this, this, this, this, as well as things like Instructables, which has dozens of DIY beehives and gazillions of DIY sensor projects).  Publishing the whole thing (hardware, software, instructions) will take considerable work.  (Contact me if you want some help organizing all this.)

On another front, I’ll point out that if this is to be really used in the world, it will probably need to be (re-)built with solid security.  If the data is ever to be trusted, the system has to be secure.  For that matter, it is important that the sensors and network cannot be hijacked to spy on people or invade other networks.

I know that this product seems harmless and not worth hacking, but unfortunately, that’s just not good enough.  (If the team has any dreams of commercial products, then they really, really need to make things secure from A to Z.)


Again, this is a very nice piece of work.  Making a real product and/or publishing an open source version will require even more work, and collaborations with additional experts.


  1. Heather Coit, Students Develop Beekeeping IoT for Renowned Research Lab, in Illinois Engineering – News. 2018. https://engineering.illinois.edu/engage/wagglenet.html

Toward an Internet of Agricultural Things

The “Internet of Things” is definitely the flavor of the month, though it isn’t clear what it is or why anyone wants it in their home. I’m frequently critical of half baked IoT, but I’m certainly not against the basic idea, where it makes sense.

Case in point: a local start-up made a splash at CES with a classic IoT for agriculture. Amber Agriculture is a bunch of low-cost sensors deployed in grain storage, which continuously sense conditions, optimize aeration, and alert to problems. The web site indicates that the system implements optimizing algorithms (“rooted in grain science principles”, AKA actual science) to automatically control fans.
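As a very rough illustration of what such an aeration control rule might look like, here is a small sketch. It is purely my own invention; Amber’s actual algorithms are not public, and the thresholds below are made up for the example.

```python
# A toy aeration-control rule, invented for illustration only.
# Amber Agriculture's actual algorithms and parameters are not public.
TARGET_MOISTURE_PCT = 14.0   # hypothetical safe storage moisture
MAX_GRAIN_TEMP_C = 15.0      # hypothetical target grain temperature

def fan_should_run(grain_temp_c, grain_moisture_pct, outside_temp_c, outside_rh_pct):
    """Decide whether running the aeration fan would improve conditions."""
    too_warm = grain_temp_c > MAX_GRAIN_TEMP_C
    too_wet = grain_moisture_pct > TARGET_MOISTURE_PCT
    # Only run the fan when the outside air can actually cool and dry the grain.
    air_helps = (outside_temp_c < grain_temp_c) and (outside_rh_pct < 70.0)
    return (too_warm or too_wet) and air_helps
```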

 

This is a nice example of IoT: the sensor net not only replaces human oversight, it also gives data that is difficult to obtain otherwise. The economic benefit of this fine grain optimization is apparently enough to pay for the sensors. (I would be interested to see actual peer-reviewed evidence of this cost analysis.)

I can’t find a lot of technical details, so I wonder how the sensors are deployed (do you just mix them into the grain?), how they are separated from the grain when it is removed, and what exactly they measure. Are the sensors reusable? Does it work for different kinds of grain?


It is interesting to think about extensions of this technology.

What other features could be added?

I wonder what could be done with microphones to listen to the stored grain. Are there sonic signatures for, say, unexpected movement (indicating a leak or malfunction?), or perhaps sounds indicating the presence of pests?

Similarly, the sensors might have optical, IR, or even radio beacons, which might detect color, texture, or other surface properties. Could this detect disease or contamination early?

Anyway, well done all.

(And I learned that there is an internet domain name, ‘.ag’.)


  1. Amber Agriculture. Amber Agriculture. 2017, http://www.amber.ag/.
  2. Nicole Lee, Presenting the Best of CES 2017 winners!, January 7, 2017, https://www.engadget.com/2017/01/07/presenting-the-best-of-ces-2017-winners/

 

HackerRank ranks schools that generate “coders”

This month HackerRank released a list of the universities with the “best coders” in the world.

My initial glance gives the list at least some face validity, because the University of Illinois at Urbana-Champaign is ranked 14th in the world and third among US schools. I would expect no less! (And where, indeed, are MIT, Stanford, and CMU in the list???  Probably busy starting companies and schmoozing with venture capitalists.)

Naturally the list is dominated by China and India, and their #1 is a school in Moscow, which is extremely plausible, too.

We can be sure that many of the entrants at UIUC and other US schools are from overseas, as well, definitely including India and China.


While these results tickle my school pride (and certainly bear out the proud history of UIUC), what the heck is this list based on, and what does it mean, if anything?

HackerRank appears to be sort of a talent search and placement company that entices young programmers to tackle programming problems as a competitive game. (Just like we did when I was in school….) These tests seem to be about the ability to write software and solve problems related to programming. As they say, this is an assessment of “coders”.

While I have written plenty of code in my life, my career actually depended on a lot of skills other than coding, including working in groups, understanding user needs, explaining ideas, and thinking about long term sustainability. And the fact is that only the tiniest fraction of code is written from scratch. Most coding is modifying (and fixing) existing code, which is kind of a different skill.

My point is that this kind of coding contest isn’t a measure of all the things a successful programmer needs to know.

That said, anyone who does well at these games has a lot of practical knowledge under their command, which is a good foundation.

It may be even more significant that these high scoring institutions have lots of strong coders, and a culture steeped in knowledge of software and problem solving. That is generally a key to the development of cool stuff. (Such as Mosaic.)

More on “Internet Mind Control”

My earlier post about the latest alarming news from Mad Scientist Simone Giertz naturally focused on the implications for public safety.

On a technical note, I note that the Giertz cyborg would be even deadlier if combined with something like this to generate arguments even more authentic than authentic.

 

My own proprietary solution for “arguing with the internet” is based on a judo-like principle: “let the opponent’s strength defeat him”. For many years I have “won” the Internet by letting it argue with itself. My studies clearly show that the amount of information gained by totally ignoring the Internet is greater than or equal to the total information generated by Internet-based arguments and comments. In short, Internet arguments, on average, reduce the amount of useful information available to the reader.

An Awesome Hackathon Project

I was a volunteer judge at a local Hackathon last weekend. With 200-some projects and more than 500 participants (not counting judges and staff), it was huge, probably too big to really work well.

I have to say that I’m not a big fan of the hackathon concept. I’m more of a deliberative designer. Aside from the blatant age-ism in the “24 hours straight” format, this just isn’t the way to really solve problems, in my opinion.

Hackathons are good for one thing for sure: they are “zones of permission” (a la Kennedy), not only inviting but insistently demanding freewheeling ideas. And no one is terribly worried by the prospect that most of the projects will be duds. Have fun, try something out, practice the craft of making stuff.

With so many projects, most of which I didn’t see in action, I won’t attempt to comment on the overall field. As a judge, I was struck by the fact that many of the projects suffered from a poor understanding of the problem they claimed to address. (I was particularly unimpressed by several projects supposedly aimed at helping “farmers”, which revealed a deep and almost insulting ignorance of what actual farmers do and know.)

Of course, the other main observation is that the “solutions” all followed the path of least resistance, employing popular software interfaces and available hardware. Google maps and Arduino and so on are the hammers we have, so everyone looked for problems that could be hit like a “nail”.   Sigh.  (And there were plenty of Inappropriate Touch Screen interfaces, needless to say.)

These are, of course, “features”, not bugs, of the Hackathon, which is based on the notion that “outsiders”, unburdened by previous knowledge of the problem, and employing ubiquitous, general-purpose tools, can radically reinvent, disrupt, and so on. In 24 hours.

And don’t get me started about the “our project will be so cool, we’ll be hired by Google on the spot” fantasy. This is totally the wrong thing to be thinking about when you are trying to do creative design. For goodness’ sake, design for the users, not to please your corporate overlords!

Anyway, here’s one project that (a) is awesomely cool and (b) actually works amazingly well, considering. Very nice work!

Sorting Hat by Neel Mouleeswaran, Chamila Amithirigala, and Vignesh Vishwanathan.

 

Illini Bat Bot, B2

In an earlier post I noted the really cool bat-inspired flying robot from our local Aerospace Robotics and Control Lab.

Continuing development, they released a video showing the latest version of the Bat Bot (B2), clad in a silicone membrane that resembles natural bat wings.

The videos show that, at least some of the time, this device sure looks like a bat in flight!

Of course, there is much more to do, including working out how to navigate!

But this is a really nice job, and a great result so far.

 

Robot Wednesday