Category Archives: Computer Programming

Very cool link between quantum experiments and graphs

A quick bookmark here for a cool report from the University of Vienna, whose researchers have shown “a surprising link between experimental setups to realize high-dimensional multipartite quantum states and graph theory.”

I’m neither a physicist nor a mathematician, so I can’t possibly argue about the technical details.

But I am a computer scientist, so I know basic graph theory, and I am quite aware of deep relationships between abstract graphs and concrete computations. So I know how significant this kind of result can be, both conceptually and practically.

This particular finding is interesting (and, I gather, unexpected), in part because it is such a complete match. These quantum events (if that is an acceptable term) map to graphs, and graphs map to quantum events. Graph theory can be used to describe the quantum states, and quantum states can, in principle, be used to compute graphs.
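
To give a flavor of the graph side: per the paper’s title, each experimental setup corresponds to a graph, and the terms of the quantum state correspond to the graph’s perfect matchings, i.e., sets of edges that touch every vertex exactly once. Here is a tiny brute-force enumerator (my own sketch, purely for flavor; it says nothing about the physics):

```python
# Enumerate the perfect matchings of a small graph by brute force.
# A perfect matching is a set of edges covering every vertex exactly once.
from itertools import combinations

def perfect_matchings(vertices, edges):
    k = len(vertices) // 2                      # a perfect matching has |V|/2 edges
    for subset in combinations(edges, k):
        covered = [v for e in subset for v in e]
        if len(set(covered)) == len(vertices):  # every vertex hit exactly once
            yield subset

# The 4-cycle a-b-c-d has exactly two perfect matchings.
V = ["a", "b", "c", "d"]
E = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]
for m in perfect_matchings(V, E):
    print(m)
# (('a', 'b'), ('c', 'd'))
# (('b', 'c'), ('d', 'a'))
```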

This is really cool.

Of course, this is scarcely the first or only time that graphs or other mathematical structures have been mapped to “experiments” or other physical events. In computation, graph theory (in many variations) maps to many algorithms, including variations of searching, matching, optimization, and test coverage. For that matter, many real-world processes and structures are modeled as graphs (e.g., mechanical systems, social systems, brains). And so on.

The point is, finding deep ties to math in general, and to graphs in particular, is an incredibly powerful tool, and a great intellectual achievement.

Well done!


  1. Mario Krenn, Xuemei Gu, and Anton Zeilinger, Quantum Experiments and Graphs: Multiparty States as Coherent Superpositions of Perfect Matchings. Physical Review Letters, 119 (24):240403, December 15, 2017. https://link.aps.org/doi/10.1103/PhysRevLett.119.240403
  2. Universität Wien, Hidden bridge between quantum experiments and graph theory uncovered using Melvin, in Universität Wien – Medienportal. 2017. https://medienportal.univie.ac.at/presse/aktuelle-pressemeldungen/detailansicht/artikel/hidden-bridge-between-quantum-experiments-and-graph-theory-uncovered-using-melvin/


Are Coders Obsolete?

Jay Jay Billings and colleagues from Oak Ridge National Laboratory made a little splash this month with a paper titled, “Will humans even write code in 2040?” [1]

I had to read the paper to figure out what kind of “writing code” they are talking about.

They are specifically concerned with scientific problem solving, which is reasonable, considering the mission of ORNL. For them, the important task is to answer a problem posed by a human with a solution computed by one or more computers.

So, for starters, let me point out that this job encompasses a lot of stuff besides “coding”, including natural language understanding, expert domain knowledge, and contextual understanding (e.g., of economic and social constraints).

Indeed, the main point of the article is that AI may well replace humans for these non-coding aspects of computational problem solving, not least because coding qua coding is already highly automated and virtualized.

No one programs computers directly, and no one has for several decades now. In most cases, it’s not even possible to do so. Most “code” is input to vast oceans of interpreters, translators, APIs, and code generators. And this is a good thing, because a lot of “coding” is boring, tedious, and error-prone detail, which should be and already is automated.

It looks to me like Billings and colleagues are basically predicting that problem-understanding systems will be plugged into the coding technology already used by humans, and there you go. No need for puny humans at all.

Their paper is also concerned with “extreme heterogeneity”, which seems to be about dealing with the ever-changing universe of computational devices. This seems to me to be a key part of the expert domain knowledge of programmers, at least system programmers like me. Billings et al. rightly wonder how well AI can learn to optimize for new targets and environments. That’s an interesting question, but I’m pretty sure AI will do very well, if only because it takes AI to create new devices in the first place.

My own view is that they are describing a natural evolution of software development. People will spend less time thinking about how to make the computer do this or that, and will focus on figuring out what questions to ask, what kind of answer is acceptable, and how to interpret and check the results.

This is actually what a software engineer already spends time doing. She figures out which problems need solving, and what is feasible within resources. She spends tons of time testing and documenting the quality of the system, and modifies the system as requirements and the environment inevitably change. Coding qua coding is a trivial piece of this job.

So, sure, in the future humans will write very little code, even less than they write now.  But they will spend even more time trying to grok what the heck the code really does, and whether or not it does what they meant to do. The more AI involved, the more grokking needed!

“Coders” have been obsolete for two decades now, but designers and engineers will be needed more than ever.


  1. Jay Jay Billings, Alexander J. McCaskey, Geoffroy Vallee, and Greg Watson, Will humans even write code in 2040 and what would that mean for extreme heterogeneity in computing?, arXiv preprint arXiv:1712.00676, 2017. https://arxiv.org/abs/1712.00676

Hours Of Code?

…while I’m being grumpy…

The ACM is promoting “Computer Science Education Week, December 4-10”, urging local “Hour of Code” events.

The Hour of Code started as a one-hour introduction to computer science, designed to demystify “code”, to show that anybody can learn the basics, and to broaden participation in the field of computer science [1].

The example projects are, as one would expect, flashy toys suitable for and designed to attract school kids.

To be fair, the purpose of this project is to entice kids to begin to study computer science, especially kids who otherwise wouldn’t do so. I.e., to raise interest among the broadest range of kids possible. This is a good thing for sure, though I don’t know if one hour of play will have much impact.

The thing that rubs me the wrong way is that it is such a misrepresentation of computer science, software development, and even of “coding”.

  1. Do you learn anything meaningful about software? No.
  2. What do you really learn? You learn that “coding” is a nearly trivial game, that it is something you do with no preparation and it only takes an hour. These are all grave misrepresentations of actual practice.
  3. And, by the way, “coding” is the least important part of creating good software. Most of the effort and skill is in design thinking, deep problem solving, and integrating complex systems, as well as the grotty work of testing, maintenance, documentation, etc.

In short, this may or may not recruit kids to computer science, but it certainly isn’t much of an example of what “coding” is about.

And “450,000,000” one-hour exposures mean little. It takes thousands of hours of study and practice to get good at “coding”, and thousands of hours to make real software.

(File this under “Bah! Humbug!”)


  1. Code.org. Hour of Code. 2017, https://hourofcode.com/us/how-to.

McGraw on Software vs Security

I enjoyed Gary McGraw’s comments in IEEE Computer about “Six Tech Trends Impacting Software Security” [1].

His main point is that software development (and, I would say, runtime environments too) has changed rapidly in the last couple of decades, obsoleting many software security assurance techniques (which I would say were iffy even in their heyday).

“The past few years have seen radical shifts in the way software is developed, in terms of both process and the technology stack. We must actively track these changes to ensure that software security solutions remain relevant.” ([1], p. 20)

His list includes:

  • Continuous integration and continuous development
  • “The Cloud”
  • The Internet of Things—software is in everything
  • Software containers, dynamic composition
  • AI
  • Software security leaders are newbs

These are some of the trendiest trends!

Interestingly, McGraw does not see “the cloud” as particularly troubling in itself, and he has a point. If anything, deploying software in standardized server farms is a good thing for security, compared to installing everything on a zillion platforms out in the wild world. (But see “Internet of Things”.)

As he says, continuous development is a hazard not only for security but for quality and everything else. To me, continuous development is hard to distinguish from just plain hacking, and that’s not good for quality or security or anything except speed to market.

McGraw doesn’t mention documentation, but please spare a moment to have a kind thought for the poor technical writer, who is tasked with explaining the software, even as it changes from hour to hour.

I myself have already beefed about the IoT many times; it is a hazard from almost every angle. But I have to say that I don’t think it is even theoretically possible to write good code for the IoT, secure or not. And it is deployed out in the world with no one actually in charge. How can this be anything but a catastrophe?

As McGraw suggests, AI cuts both ways. It creates vast possibilities for bugs and breaches beyond human understanding, but also enables tools and processes that can greatly improve software (again, beyond human capabilities). As he says, a lot of this isn’t so much new, but there are so many cycles and gazoogabytes available to anyone, even old tricks can yield amazing results, for better or worse.

The unifying theme in all this is that systems are bigger, faster, and way, way more complicated than ever. Including the Internet, “the system” extends to every corner of the globe, encompassing zillions of nodes and links, under the control of everyone and no one. No human can understand what’s going on, what the software does, or even how the software is configured. If you can’t understand it, you can’t make it secure.

McGraw’s last point is interesting. Security professionals are not stupid, but many of them are young. From my point of view, the question is, “are they paranoid enough?” Probably not.

There are plenty of other tech trends that create security hazards. I’ll just mention my own favorite bugaboo, virtualization. Over my umpty-ump decades of software development, everything has moved to be more and more virtualized. Information hiding, standardization, and emulation are powerful technologies and, honestly, without them we’d never be able to produce software fast enough to even keep up.

But virtualization has led to the situation where even the smallest program depends on an unknowable stack of software. “Unknowable” because even if you intend to freeze the configuration, you really can’t.
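
To make the point concrete, here is a minimal sketch (Python standard library only, 3.8+) that inventories every distribution installed under a single interpreter. Run it in a supposedly frozen environment today, run it again in six months, and diff the output:

```python
# Inventory the software stack sitting under even a trivial script.
from importlib import metadata

dists = sorted(
    ((d.metadata["Name"] or "?", d.version) for d in metadata.distributions()),
    key=lambda nv: nv[0].lower(),
)
print(f"{len(dists)} installed distributions under this one interpreter:")
for name, version in dists:
    print(f"  {name}=={version}")
```

And that only counts Python packages; it says nothing about the interpreter itself, the OS, the firmware, or the hypervisor underneath.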

Like everyone, I have seen cases where developers don’t (and can’t) fix a bug, so they just roll back a magic VM to the magical last safe point where it worked, and restart. Tell me that isn’t a security problem.

The fact that software works at all is a tribute to the skill of us, the programmers. But it is difficult to be optimistic that it won’t all come tumbling down.

“If builders built buildings the way programmers wrote programs, then the first woodpecker that came along would destroy civilization.” – Gerald Weinberg’s Second Law

And if the woodpeckers are out to get us, just how long will civilization last?


  1. Gary McGraw, Six Tech Trends Impacting Software Security. Computer, 50 (5):100-102, 2017. http://ieeexplore.ieee.org/document/7924264/


Four Colors Still Suffice

This fall marks the 40th anniversary of the publication of the first proof of the Four Color Theorem.

Two professors at the University of Illinois used the relative abundance of computer power at UIUC to produce a groundbreaking computer-assisted proof of this perennial question.

I remember very well getting my issue of Scientific American, and there it was.

I knew immediately what it must mean. (As any Illini can tell you, there is a postal substation in the Math building. They arranged a special postal cancellation to mark the announcement.)

The essence of their 1977 proof is to enumerate all possible layouts, and systematically crunch through them all [1, 2]. For their proof they dealt with some 1400 configurations, which took months to process on the IBM 360. Today, you can probably do it in minutes on your phone. Then it took a special allocation of time on “the computer”.
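
For flavor only (this is a toy, not the Appel-Haken discharging-and-reducibility method), here is a backtracking search that four-colors a small planar map. The real proof was an exhaustive case analysis over configurations, but the crunch-through-the-cases spirit is similar:

```python
# Toy backtracking search for a 4-coloring of a small planar "map",
# given as an adjacency relation between regions.
def four_color(adjacency, colors=("red", "green", "blue", "yellow")):
    regions = list(adjacency)
    assignment = {}

    def backtrack(i):
        if i == len(regions):
            return True                  # every region colored
        r = regions[i]
        for c in colors:
            if all(assignment.get(nbr) != c for nbr in adjacency[r]):
                assignment[r] = c        # tentatively color region r
                if backtrack(i + 1):
                    return True
                del assignment[r]        # undo and try the next color
        return False                     # no color fits: back up

    return assignment if backtrack(0) else None

# Five regions; C touches all the others (a wheel-shaped map).
regions = {
    "A": {"B", "C", "D"},
    "B": {"A", "C", "E"},
    "C": {"A", "B", "D", "E"},
    "D": {"A", "C", "E"},
    "E": {"B", "C", "D"},
}
print(four_color(regions))
```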

The result was not without controversy. Is it a valid mathematical proof if it has not been, and cannot be, confirmed by a human? (As an Illinois alum, I say it’s a valid proof!)

This theorem has been proved many times since 1977, so there isn’t much doubt about the result. But the first time was a heroic milestone in human knowledge.

Unfortunately, much about this major human accomplishment is effectively lost to historical and intellectual analysis.

The program was written in IBM assembler, and punched on Hollerith cards. (You youngsters can look those things up.) I know that Prof. Appel still had the deck twenty-five years ago, because he showed it to me, in a box propping open the door. Even back then there was no way to run the program (no card readers left, nor any IBM 360s).

So there are many questions that cannot be answered. Was the original program correct? What was their coding style like? Is it pretty or clever code? And so on.

We don’t know, and it would be difficult to find out.

Still. All hail Haken and Appel, computational heroes! We are not worthy!

Giants still walk among us!

Alma Mater


  1. K. Appel and W. Haken, Every planar map is four colorable. Part I: Discharging. Illinois J. Math., 21 (3):429-490, September 1977. https://projecteuclid.org:443/euclid.ijm/1256049011
  2. K. Appel, W. Haken, and J. Koch, Every planar map is four colorable. Part II: Reducibility. Illinois J. Math., 21 (3):491-567, September 1977. https://projecteuclid.org:443/euclid.ijm/1256049012
  3. Samantha Jones, Celebrating the Four Color Theorem, in College of Liberal Arts – News. 2017, University of Illinois Urbana. http://www.las.illinois.edu/news/article/?id=23831&/news//news/2017/fourcolortheorem17/


HFOSS – Humanitarian Free and Open Source Software

Open source software is a good thing, and humanitarian applications are a good thing, too.

So Humanitarian Free and Open Source Software should be a really good thing, no? It’s even got an acronym, HFOSS.

This fall, Gregory W. Hislop and Heidi J. C. Ellis discuss a related point, the potential value of Humanitarian Open Source Software in Computing Education [1].

For one thing, any open source project is a potential arena for students to learn about real-life software development. By definition, FOSS projects are open and accessible to anyone, including students. An active and successful FOSS project will have a community of people contributing in a variety of roles, and usually will have open tasks that students might well take up. In addition, the decision-making process is visible, and, as Hislop and Ellis note, the history of the project is available. A sufficiently motivated student could learn a lot.

(We may skip over the question of whether FOSS projects represent best or even common practices for all software projects. I.e., FOSS isn’t necessarily a “real world” example for many kinds of software.)

Humanitarian projects are interesting for other reasons. For one thing, by definition, a successful humanitarian project of any kind is focused on problem solving for people other than programmers or college students. Simply figuring out how, and even whether, technical solutions actually help the intended targets is a valuable exercise, in my opinion.

In addition, real-life humanitarian software generally addresses large-scale, long-term problems with non-trivial constraints. These are excellent challenge problems, all the more so because the price point is zero dollars and the IP must be robustly open to everyone.

Hislop and Ellis make some interesting observations about ways in which these projects can be used in computing education.

They encourage thinking about all the roles in a technology project, not just coding or testing. (Hear, hear!) Documentation, planning, and, above all, maintenance not only consume most of the work effort, but are usually the difference between success and failure of a software project. Get good at it, kids!

(I’ll also point out that designing a solution involves so much more than whacking out software–you need to understand the problem from the user’s point of view.)

They also point out the value of connecting digital problem solving with an understanding of the actual, on-the-ground problems and customers. Technological glitz generally does not survive contact with the customer, especially if the customer is an impoverished mission-oriented organization. Good intentions are only the starting point for actually solving real-world humanitarian problems.

This last point is actually the main distinction between FOSS and HFOSS. There is just as much practical value in participating in most FOSS projects. And, for that matter, there is a long tradition of service learning, much of it “humanitarian”. HFOSS is the intersection of these educational opportunities, and it is actually pretty tiny. Most FOSS isn’t “humanitarian”, and most human service or humanitarian problems don’t need software.

In fact, engagement with actual community organizations and initiatives is highly likely to teach students that humanitarian problems don’t have technological solutions, especially software solutions. Digital technology may be able to help, at least a little. But humanitarianism is really a human-to-human thing.

If I were supervising a HFOSS class, I would probably want to try to get the students to think about a number of philosophical points relevant to their potential careers.

First of all, students should observe the personal motivations of participants in an HFOSS project, and compare them to the motivations of people doing the same kind of work—the exact same kind of work—in other contexts (e.g., a large corporation, a personal start-up, a government agency). Working on something with the goal of making someone else’s life better is kinda not the same thing as angling for a big FU payout.

The second thing that students will need to learn is just how problematic it can be to try to help “them” to solve “their” problems. However great your Buck Rogers tech might be, swooping in from on high to “fix the world” isn’t likely to garner a lot of enthusiasm from the people you mean to help. In fact, “they” may not think they need whiz-bang new software at all.

Working with real people to understand and solve real problems is rewarding. And in some cases, a bit of HFOSS might be a home run. But HFOSS for the sake of HFOSS cannot possibly succeed. And that is a lesson worth learning.


  1. Gregory W. Hislop and Heidi J. C. Ellis, Humanitarian Open Source Software in Computing Education. Computer, 50 (10):98-101, 2017. http://ieeexplore.ieee.org/document/805731

Survey of Quantum Computing Software

If we are going to build Quantum Computers (and we definitely are going to build them as soon as possible), what kind of software will they need?

I admit that my math skills aren’t really up to grokking the details of QC, but software and software tools I do understand.

There are several important QC projects already underway at Google [4], IBM [1], Microsoft [3], and probably others. These companies even aspire to sell time on their QCs, which means they have to have some way for people to actually use them, no?

So how do you program these marvels?

This month Frederic T. Chong, Diana Franklin, and Margaret Martonosi survey the challenges of “Programming languages and compiler design for realistic quantum hardware” [2].

Kewwl!

They point out that QC is in a state similar to classical computing in the 1950s: hardware is rare and expensive, and every iota of performance counts. However, unlike ENIAC days, we have deep and broad “toolchains” with the concomitant abstractions and optimizing technology. The trick, then, is to work out the best ways to let humans specify programs, and compilers generate code for QC.

“To make quantum programming manageable and quantum systems practical, we must expose the appropriate set of low-level details to upper software layers. The trick is to pick the best ones, those that will allow programmers to express programs while producing software that gets the most computation out of physical machines.” ([2], p. 180)

This is what we spent several decades doing for classical computers, and then again for multicomputer systems, and along the way, for problem specific languages, such as logic design. With QC, it’s back to basics, except we have a whole lot of experience at this game now.

One interesting factor is that QC (at least today) is similar to both general purpose computing (i.e., the programmer describes an algorithm) and also resembles hardware definition languages (the programmer needs to specify strict resource constraints). This is, so far, done in a “quantum co-processor” model, and most systems support QASM assembler code (about which I know nothing at all).
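
For flavor, here is roughly what the co-processor model looks like. The QASM below is a standard Bell-pair circuit in OpenQASM 2.0; the submit function is hypothetical, a stand-in for whatever call a vendor’s cloud service actually exposes:

```python
# The classical host prepares QASM "assembler" and ships it to the
# quantum co-processor. This Bell-pair circuit is standard OpenQASM 2.0.
BELL_QASM = """
OPENQASM 2.0;
include "qelib1.inc";
qreg q[2];          // two qubits on the co-processor
creg c[2];          // two classical bits on the host side
h q[0];             // put qubit 0 into superposition
cx q[0],q[1];       // entangle qubit 1 with qubit 0
measure q -> c;     // results return to the classical host
"""

def submit_to_coprocessor(qasm: str, shots: int = 1024) -> dict:
    # Hypothetical stand-in for a vendor API. A real backend would return
    # measurement counts, e.g. {"00": 512, "11": 512} for this circuit.
    raise NotImplementedError("wire this up to a real quantum cloud service")

print(BELL_QASM)
```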

(Note that these early systems run on conventional computers, so they cannot efficiently simulate or debug realistic quantum programs; classical simulation blows up exponentially with the number of qubits. Eventually, the compiler and debugger will need to run on a quantum machine.)

One of the interesting requirements is that the quantum physics of the computer must be enforced by the compiler. (This kind of physical constraint is true of classical computing tools as well, but we just don’t notice any more. For example, the compiler “knows” that time is discrete and runs forward, and that registers have only one value at a time, which does not change without external input. Etc.) In the case of QC, there is quantum weirdness that must be respected.
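
As a concrete (and heavily simplified) example, take the no-cloning theorem: quantum states cannot be copied, so a quantum language can enforce at compile time that each qubit value is consumed at most once. Here is a toy version of such a check; this is my own illustration of the idea, not any real QC toolchain:

```python
# Toy "linearity" check: once a qubit is consumed (here, by measurement),
# any further use is rejected, since copying quantum state is forbidden.
def check_single_use(instructions):
    consumed = set()
    for op, *qubits in instructions:
        for q in qubits:
            if q in consumed:
                raise TypeError(f"{q} already consumed; no cloning allowed")
        if op == "measure":          # measurement destroys the quantum state
            consumed.update(qubits)

program = [
    ("h", "q0"),         # fine: q0 is live
    ("cx", "q0", "q1"),  # fine: entangles q0 and q1
    ("measure", "q0"),   # consumes q0
    ("h", "q0"),         # rejected: q0 no longer holds quantum state
]
check_single_use(program)  # raises TypeError on the last instruction
```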

These physical details of the hardware are not yet “virtualizable” in the way that classical computing provides a simplified, abstract virtual machine.  Perhaps that will come someday, which will be interesting to see, and also will be yet another significant contribution to human knowledge.

The Nature paper describes some mind-blowing challenges, such as qubit “garbage collection”, and runtime checking that operations on a qubit are kosher with respect to other qubits that are entangled. Again, it is infeasible to fully (or even close to fully) simulate the computation classically, so these optimizations are done through heuristics and run-time checks.

The paper describes quantum programming languages that have been developed, both functional and imperative. The authors note that current languages do not support verification (e.g., specification of correctness conditions). In QC there is also a need for knobs to specify error tolerance and precision, at the level of each individual output!

Compiling quantum programs is a bit different. For one thing, QC programs are not as general as classical programs, and it is often the practice to compile the program separately for each input value. This means that the compiler knows a lot about the program path, and can, for instance, aggressively unroll loops. (This stuff is certainly a juicy problem for the compiler jocks out there!)
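
A toy sketch of what that specialization looks like (again my own illustration, not a real quantum toolchain): because the input n is known at compile time, the loop runs inside the compiler and only straight-line code comes out:

```python
# "Compile" a program for one specific input value: the loop is fully
# unrolled at compile time, leaving straight-line instructions.
def compile_for_input(n):
    instructions = []
    for _ in range(n):   # this loop runs in the compiler, not on the QC
        instructions.append(("h", "q0"))
        instructions.append(("cx", "q0", "q1"))
    return instructions  # no loop survives into the emitted program

print(compile_for_input(2))
# [('h', 'q0'), ('cx', 'q0', 'q1'), ('h', 'q0'), ('cx', 'q0', 'q1')]
```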

Other interesting tasks are managing run time compilation for certain operations, and managing the interactions between a classical computer controller and the quantum coprocessor. (This sounds pretty hairy to me.)

Finally, bugs. Bugs, bugs, bugs. Whole new categories of bugs!

“when a new computer is built and tested, it will have errors. How will we distinguish between hardware and software bugs?” ([2], p. 186)

Bearing in mind that conventional computers cannot simulate quantum calculations of any meaningful size, how do we detect bugs? I note that some of my favorite debugging techniques, such as single-stepping and repeatedly running a program with slight variations, will not be very useful, or even possible, on a QC. Heck, you can’t even copy a value or otherwise snoop on the execution. What’s a poor programmer to do, ma?

We are going to need new and probably weirdly magical debugging techniques.

Awesome.

This is an amazing and inspiring survey. It also makes my brain hurt. :-)


  1. Davide Castelvecchi, IBM’s quantum cloud computer goes commercial. Nature, 543:159, March 6, 2017.
  2. Frederic T. Chong, Diana Franklin, and Margaret Martonosi, Programming languages and compiler design for realistic quantum hardware. Nature, 549 (7671):180-187, September 14, 2017. http://dx.doi.org/10.1038/nature23459
  3. Steve Dent, Microsoft’s new coding language is made for quantum computers. Engadget, September 28, 2017, https://www.engadget.com/2017/09/26/microsoft-new-coding-language-is-made-for-quantum-computers/
  4. Masoud Mohseni, Peter Read, Hartmut Neven, Sergio Boixo, Vasil Denchev, Ryan Babbush, Austin Fowler, Vadim Smelyanskiy, and John Martinis, Commercialize quantum technologies in five years. Nature, 543:171–174, March 9, 2017.