Annual report: Freelancing in America 2017

Every year the Freelancers Union*  produces a report on “Freelancing in America”.

This year’s report follows up the 2016 report, asserting that 57.3 million workers are freelancing, including 47% of “millennials” [2].   The total is up from 55 million in 2016 and 54 million in 2015. They project forward from these figures to imagine that freelancers will be more than 50% of workers by 2027.

As in the previous reports, this report defines “freelancer” as “Individuals who have engaged in supplemental, temporary, project- or contract-based work, within the past 12 months” [1]. However, examining the methodology shows that this single label lumps together several quite different groups (from [1]):

Diversified Workers (a mix of employment, including freelancing) (35% / 19.8 million)

Independent Contractors (full or part time) (31% of the independent workforce / 17.7 million professionals)

Moonlighters (23% / 13.0 million)

Freelance Business Owners (who define themselves as “freelance workers”) (6% / 3.4 million)

Clearly, the number of freelance workers doing the equivalent of a full-time job is much smaller than 57 million. Adding the independent contractors (17.7 million) to the freelance business owners (3.4 million) gives about 21 million; counting some share of the diversified workers pushes the total toward 30 million, depending on how you classify self-employed business owners. (Considering this, the future projection is even less believable.)

I quibble about this point because the report portrays freelancing as the future of work and paints a rosy picture of it. But if the future of work is mainly about underemployment and self-employment, the picture is not so rosy.

In this survey, the self-identified full-time freelancers report an average of 34 hours of work per week [1]. In addition, freelancers report unpredictable income, low savings, and high debt. Many freelancers rely on the ACA for health insurance, which is highly uncertain at this time.

In short, freelancers may report high satisfaction and a determination never to return to conventional employment, but the objective measures describe marginal employment, and possibly a race to the bottom.


The 2017 report focuses on several impacts of technology. Obviously, the gig economy is enabled by digital technology, and a majority of freelancers report finding work online.

The report spins freelancing as an adaptation to the “fourth industrial revolution”.

Freelancers report anxiety about AI and robotics displacing them. Nearly half of them say that they have already been affected. Freelancers expect technical change, and upgrade their skills frequently. (Online job services are a good guide to chasing the demand for specific skills.)

It is clear that freelancers are on the front lines of this revolution, though it isn’t clear that they are doing better than other workers, or that freelancing is either necessary or sufficient to survive.


Sara Horowitz demands that we “don’t call it the gig economy”. Nearly half of freelancers prefer to call it “the freelance economy” [3]. That’s fine, and obviously it’s the Freelancers Union, not the Gig Workers Union. (Though The Gig Workers of the World would be a great name for either a union or a rock band. Slogan: “Gig Strong! Gig power!”)

Look, I’m a member of the FU, and I strongly support the union and stand with my fellow workers (whatever they care to call themselves). One for all, and all for one.

But I can’t let this kind of misuse of data pass without objection.

Freelancing is important, and it is a significant part of the new way of work. But it isn’t reasonable to claim that it will be the predominant mode of employment any time soon (if ever). And if it does dominate the economy, it will be an economy characterized by massive underemployment, economic insecurity, and poverty.

The whole point of the FU is to prevent that last part from coming true. Let’s not lie to ourselves about it.


*Disclosure: I am a proud member of the FU.


  1. Edelman Intelligence, Freelancing in America: 2017. Freelancers Union, 2017. https://www.slideshare.net/upwork/freelancing-in-america-2017/1
  2. Freelancers Union and Upwork, Freelancing in America: 2017. Freelancers Union, 2017. https://s3.amazonaws.com/fuwt-prod-storage/content/FreelancingInAmericaReport-2017.pdf
  3. Sara Horowitz, Freelancing in America 2017, in Freelancers Union Blog. 2017. https://blog.freelancersunion.org/2017/10/17/freelancing-in-america-2017/

 

Quantum Machine Learning

The third of the Nature articles on quantum software and post-quantum cryptography is about quantum machine learning [1].

The crux of the issue is that quantum computers “can produce statistical patterns that are computationally difficult for a classical computer to produce” ([1], p. 195). The question is: can we find algorithms to discover patterns that classical computers can’t?

The math is intimidating as usual, but their explanations of the quantum algorithms actually give me a clearer understanding of the more familiar classical algorithms.

I had never heard of quantum annealers, which are really cool! (This, I learned, is what the much-hyped D-Wave systems do.) Simulated annealing is an interesting learning method. In classical computers the heating and cooling are simulated, but the process maps directly onto quantum states. Cool!
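
For readers who (like me) are a little rusty on the classical version, here is a minimal sketch of classical simulated annealing in Python. This is purely my own toy illustration (made-up cost function and cooling schedule), not code from the paper. The algorithm wanders over a cost landscape, accepting some uphill moves while the simulated “temperature” is high and settling into a minimum as it cools; a quantum annealer replaces these simulated thermal fluctuations with quantum fluctuations in real hardware.

import math
import random

def energy(x):
    # Toy cost function with several local minima (my invention).
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(steps=10000, t_start=5.0, t_end=1e-3):
    x = random.uniform(-5, 5)
    best_x, best_e = x, energy(x)
    for step in range(steps):
        # Geometric cooling schedule from t_start down to t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        candidate = x + random.gauss(0, 0.5)
        delta = energy(candidate) - energy(x)
        # Always accept improvements; accept worse moves with Boltzmann probability.
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = candidate
            if energy(x) < best_e:
                best_x, best_e = x, energy(x)
    return best_x, best_e

print(simulated_annealing())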

One interesting feature of quantum machine learning is that it can operate directly on quantum data, e.g., the quantum states of light and matter. QML to analyze patterns in physical systems!  Whoa!

An interesting idea is to use machine learning (e.g., genetic algorithms) to tune quantum computing systems for error correction and control logic. Wow!

The paper lists four main challenges for quantum machine learning:

  1. The input problem
  2. The output problem
  3. The costing problem (how many qubits are needed for QML to beat classical computing?)
  4. The benchmarking problem (is QML faster than classical ML?)

Transferring data between classical and quantum computers can be a killer. In bad cases, any advantage of the quantum computation is more than eaten up by the overhead of input and output.

The authors make the important point that the theoretical classical performance of many machine learning problems isn’t known, so it isn’t easy to compare the theoretical performance of a quantum algorithm.

The important question is how well quantum computation scales in practice; i.e., systematic benchmarking is needed. There are a number of quantum approaches that seem promising, though how well they work for problems of different sizes remains to be determined.

I learned a ton from this article, though it is still wondrously mysterious to me.


  1. Jacob Biamonte, Peter Wittek, Nicola Pancotti, Patrick Rebentrost, Nathan Wiebe, and Seth Lloyd, Quantum machine learning. Nature, 549(7671):195-202, 2017. http://dx.doi.org/10.1038/nature23474

Ethereum Forks Yet Again

It seems to have gone OK…but it wasn’t pretty.

This month Ethereum is executing a perfectly normal software upgrade, which would be absolutely routine for any sensible software. But cryptocurrency software is not normal software, and Nakamotoan blockchains are essentially immune to reasonable engineering.

Ethereum’s new version isn’t backward compatible, so the old software will not work with the new. This is a pretty common occurrence in software land, but for a cryptocurrency it is a “hard fork”, which means that users who keep the old software are effectively using a different currency than the new one. If everyone goes along, it’s fine. If not, it can be traumatic and potentially catastrophic, as Etherheads should be well aware.
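
To make the “hard fork” idea concrete, here is a toy sketch in Python (my own illustration, with made-up rules and numbers, not Ethereum’s actual consensus code). Nodes that keep the old validity rules reject blocks that are valid only under the new rules, so the two populations of nodes end up following different chains, and therefore different currencies.

# Hypothetical validity rules; real hard forks change the protocol in analogous ways.
OLD_MAX_GAS = 1_000_000   # limit enforced by the old software (made up)
NEW_MAX_GAS = 2_000_000   # relaxed limit introduced by the upgrade (made up)

def old_node_accepts(block):
    # Nodes running the old software still enforce the old rule.
    return block["gas_used"] <= OLD_MAX_GAS

def new_node_accepts(block):
    # Upgraded nodes enforce the new, incompatible rule.
    return block["gas_used"] <= NEW_MAX_GAS

# A block that is valid only under the new rules splits the network.
block = {"number": 12345, "gas_used": 1_500_000}
print("old nodes accept it:", old_node_accepts(block))   # False: old chain excludes it
print("new nodes accept it:", new_node_accepts(block))   # True: new chain includes it

Once one such block is produced, every subsequent block builds on one side or the other, which is why a hard fork without near-unanimous adoption leaves behind two incompatible ledgers.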

The tension is even higher because no one knows for sure what will happen. Evidently there hasn’t been an upswell of public endorsement for the new fork, leaving the result in question. Things are not helped by the fact that the switchover is triggered at a particular block number, which will arrive when it arrives.

The upgrade seems to have gone smoothly, although there were critical bug fixes right up to the switchover. There doesn’t seem to be a major split in the network (at least not yet, phew!), but there is a lot of software that hasn’t picked up the last-minute fixes yet, and a large fraction of users may still be running the old, incompatible software. (And how many big bugs will come to light now that they are live?)

In short, no one even knows if the upgrade happened smoothly or not.  Sigh.

This would all be funny if it weren’t for the tens of millions of dollars (at current exchange rates) that could be at stake.


And by the way, Bitcoin, the patriarch of the troubled House of Nakamoto, is scheduled to have its own hard fork in November. The main goal is to address the scaling issues that Ethereum just addressed. Unfortunately, Bitcoin’s upgrade is “adversarial” to quote Bailey Reutzel. Uh, oh!

In the case of Bitcoin, there are tens of billions of dollars at stake (at current exchange rates). This is not even remotely funny.

Why do people put up with this stuff?

 

Cryptocurrency Thursday

“Artificial Creatures” from Spoon

There are so many devices wanting to live with us, as well as a crop of “personal” robots. Everything wants to interact with us, but do we want to interact with them?

Too many products and not enough design to go around.

Then there is Spoon.

We design artificial creatures.

A partner to face the big challenges rising in front of us.

A new species, between the real and digital domains for humans, among humans.

OK, these look really cool!

I want one!

But what are they for?

This isn’t very clear at all. The only concrete application mentioned is “a totally new and enhanced experience while welcoming people in shops, hotels, institutions and events.” (I guess this is competing with RoboThespian.)

Anyway, it is slick and sexy design.

The list of company personnel has, like, one programmer and a whole bunch of designers and artisans. Heck, they have an art director, and a philosopher, for crying out loud.

Did I forget to say that they are French?

I have no idea exactly what they are going to build, but I will be looking forward to finding out.

 

Robot Wednesday

HFOSS – Humanitarian Free and Open Source Software

Open source software is a good thing, and humanitarian applications are a good thing, too.

So Humanitarian Free and Open Source Software should be a really good thing, no? It’s even got an acronym, HFOSS.

This fall, Gregory W. Hislop and Heidi J. C. Ellis discuss a related point, the potential value of humanitarian open source software in computing education [1].

For one thing, any open source project is a potential arena for students to learn about real-life software development. By definition, FOSS projects are open and accessible to anyone, including students. An active and successful FOSS project will have a community of people contributing in a variety of roles, and usually will have open tasks that students might well take up. In addition, the decision-making process is visible, and, as Hislop and Ellis note, the history of the project is available. A sufficiently motivated student could learn a lot.

(We may skip over the question of whether FOSS projects represent best or even common practices for all software projects. I.e., FOSS isn’t necessarily a “real world” example for many kinds of software.)

Humanitarian projects are interesting for other reasons. For one thing, by definition, a successful humanitarian project of any kind is focused on solving problems for people other than programmers or college students. Simply figuring out how, and even whether, technical solutions actually help the intended beneficiaries is a valuable exercise, in my opinion.

In addition, real life humanitarian software generally addresses large scale, long term problems, with non-trivial constraints. They are excellent challenge problems, all the more so because the price point is zero dollars and the IP must be robustly open to everyone.

Hislop and Ellis make some interesting observations about ways in which these projects can be used in computing education.

They encourage thinking about all the roles in a technology project, not just coding or testing. (Hear, hear!) Documentation, planning, and, above all, maintenance not only consume most of the work effort, but are usually the difference between success and failure of a software project. Get good at it, kids!

(I’ll also point out that designing a solution involves so much more than whacking out software–you need to understand the problem from the user’s point of view.)

They also point out the value of connecting digital problem solving with an understanding of the actual, on-the-ground problems and customers. Technological glitz generally does not survive contact with the customer, especially if the customer is an impoverished, mission-oriented organization. Good intentions are only the starting point for actually solving real-world humanitarian problems.

This last point is actually the main distinction between FOSS and HFOSS. There is just as much practical value in participating in most FOSS projects. And, for that matter, there is a long tradition of service learning, much of it “humanitarian”. HFOSS is the intersection of these educational opportunities, and it is actually pretty tiny. Most FOSS isn’t “humanitarian”, and most human service or humanitarian problems don’t need software.

In fact, engagement with actual community organizations and initiatives is highly likely to teach students that humanitarian problems don’t have technological solutions, especially software solutions. Digital technology may be able to help, at least a little. But humanitarianism is really a human-to-human thing.

If I were supervising an HFOSS class, I would probably want to try to get the students to think about a number of philosophical points relevant to their potential careers.

First of all, students should observe the personal motivations of participants in an HFOSS project, and compare them to the motivations of people doing the same kind of work—the exact same kind of work—in other contexts (e.g., large corporation, personal start-up, government agency). Working on something with the goal of making someone else’s life better is kinda not the same thing as angling for a big FU payout.

The second thing that students will need to learn is just how problematic it can be to try to help “them” to solve “their” problems. However great your Buck Rogers tech might be, swooping in from on high to “fix the world” isn’t likely to garner a lot of enthusiasm from the people you mean to help. In fact, “they” may not think they need whiz-bang new software at all.

Working with real people to understand and solve real problems is rewarding. And in some cases, a bit of HFOSS might be a home run. But HFOSS for the sake of HFOSS cannot possibly succeed. And that is a lesson worth learning.


  1. Gregory W. Hislop and Heidi J. C. Ellis, Humanitarian Open Source Software in Computing Education. Computer, 50 (10):98-101, 2017. http://ieeexplore.ieee.org/document/805731

New Study of Mass Extinctions

There have been five mass extinctions in the history of life on Earth, during which vast numbers of animals and plants died out. So far, after each big die-off, new species and families have evolved, filling the world with a new, but just as diverse, array of life forms as before the disaster.

The general intuition is that a mass extinction creates an impoverished, less diverse collection of species. The survivors who weather the disaster are the founders of a great radiation of new diversity. (This pattern is seen at a smaller scale in local disasters, such as a volcanic eruption that obliterates almost all life in an area.)

This intuition is often applied to our own age, which we recognize as the beginning of the sixth great extinction. We see many specialized species reduced or wiped out, while robust “generalists”, such as cockroaches or rats, thrive and spread. Presumably, 100,000 years from now, there may be a vast radiation of new species of rodents expanding into the empty niches of the post-human Earth.

But is this process really what has happened in the past extinctions?

This month, David J. Button and colleagues publish a report on their study of “faunal cosmopolitanism” among 1,046 early amniote species spanning 315 to 170 million years ago. This period includes the Permian–Triassic and Triassic–Jurassic mass extinctions [1].

They take into account the relationships among the species, so that individuals from related but distinct species can reflect the geographical range of the group, even if only a few samples are available.

The basic finding supports the common intuition: there is a sharp rise in their index of cosmopolitanism (phylogenetic biogeographic connectedness, pBC) after the Permian–Triassic extinction, followed by a decrease (i.e., more geographic specialization) through the Triassic, and another spike after the Triassic–Jurassic extinction.

Furthermore, they find evidence that “the increases in pBC following each extinction were primarily driven by the opportunistic radiation of novel taxa to generate cosmopolitan ‘disaster faunas’, rather than being due to preferential extinction of endemic taxa.” (p. 4) I.e., new “cosmopolitan” species emerge, rather than old species surviving to spread over the world. (This is bad news for cockroaches and rats, I’m afraid.)

These results certainly indicate the importance of unique events in the history of life, such as mass extinctions. They also suggest that mass extinctions have a predictable effect, at least at a global level.

Neat.


  1. David J. Button, Graeme T. Lloyd, Martin D. Ezcurra, and Richard J. Butler, Mass extinctions drove increased global faunal cosmopolitanism on the supercontinent Pangaea. Nature Communications, 8(1):733, 2017. https://doi.org/10.1038/s41467-017-00827-7

Book Review: “A Spool of Blue Thread” by Anne Tyler

A Spool of Blue Thread by Anne Tyler

I haven’t read very much by Tyler, though she has been writing longer than I’ve been able to read (which is a long time now). That says as much about me and my own reading tastes as about her writing.

A Spool of Blue Thread (2015) is about a family and a house in mid-twentieth-century Baltimore. The family is like every other family, filled with loyalty, affection, conflict, and history. This family also has secrets and mysteries. I wouldn’t say these are deep secrets or mysteries (this isn’t The Da Vinci Code or anything like that). Mostly they matter only to the family itself.

The plot centers on the end of the lives of the “middle” generation, circles back to the previous generation (in the 1940s and 50s), and glances at the beginnings of the next generation of children and grandchildren.

A spool of blue thread does appear in the story, though I didn’t really grok the exact metaphor. At least partly, the thread is an unexplained connection between generations. The family is bound together in ways that they don’t really understand.

(Maybe I got it after all.)

There isn’t a lot of action; most of the story is a slow uncovering of the past and how it has come out in the present. As the novel unfolds, we come to discover some unexpected hidden depths in some of the people and their relationships. It is fair to say that they both understand and misunderstand each other.

It is notable that at the end, there remain mysteries about the current generation, as well as uncertainty about the future. How will this generation turn out? Will they remain close, or scatter? What, after all, makes them tick?

We don’t know, and Tyler seems to suggest that we never will know.


  1. Anne Tyler, A Spool of Blue Thread, New York, Ballantine Books, 2015.

 

Sunday Book Reviews
