One of the maddening things about the contemporary Internet is the vast array of junk apps—hundreds of thousands, if not millions—that do nothing at all, but look great. Some of them are flat-out parodies, some are atrocities, many are just for show (no one will take us seriously if we don’t have our own app). But some are simply nonsense, in a pretty package. (I blame my own profession for creating such excellent software development environments.)
The only cure for this plague is careful and public analysis of apps, looking deeply into not only the shiny surface, but the underlying logic and metalogic of the enterprise. This is a sort of “close reading” of software, analogous to what they do over there in the humanities buildings. Where does the app come from? What does it really do, compared to what they say it does? Whose interests are served?
I commented a while ago about an early version of Crystal, “the greatest thing since spell checking”. This product is still going, now featuring a plug-in for your email program that helps you “Become a better communicator”. Wow!
As far as I can tell, this product has several pieces.
- Content analysis that mines email, social media, and messaging to classify individuals on the four-dimensional DISC personality scale.
- Visualizations to present this assessment for individuals in a network or organization.
- Advisor software that relates your communications to these dimensions, i.e., which communication connected with which “types”.
- An email plug-in to give real-time advice on email content for each recipient, based on their type.
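The content-analysis piece is the linchpin of all of this. Crystal’s actual method is proprietary and undisclosed, but the kind of keyword-driven text scoring the pitch implies can be sketched in a few lines. Everything below—the keyword lists especially—is invented for illustration, not drawn from any validated instrument:

```python
# Hypothetical sketch of keyword-based DISC scoring from text.
# Crystal's real method is unknown; this only illustrates the *kind*
# of content analysis the marketing implies. Keyword lists are invented.
from collections import Counter

DISC_KEYWORDS = {
    "D": {"win", "now", "results", "decide", "lead"},
    "I": {"fun", "exciting", "team", "love", "great"},
    "S": {"steady", "support", "help", "together", "calm"},
    "C": {"data", "detail", "accurate", "process", "analysis"},
}

def disc_profile(text: str) -> dict:
    """Return a crude normalized score for each DISC dimension."""
    words = Counter(text.lower().split())
    raw = {dim: sum(words[w] for w in kws) for dim, kws in DISC_KEYWORDS.items()}
    total = sum(raw.values()) or 1   # avoid division by zero on no matches
    return {dim: raw[dim] / total for dim in raw}

profile = disc_profile("Let's decide now and get results -- I love this team")
```

Even a toy like this makes the core problem visible: the output is only as meaningful as the mapping from words to “types”, and that mapping is exactly the unproven part.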
This is so cool! Total and utter nonsense, of course, but brilliantly executed.
First, let’s consider the crucial conceptual backbone: the personality dimensions derived from DISC (a 1920s theory put to a 1950s use case). Even if you grant that DISC might measure something real (which I sincerely doubt), it isn’t obviously true that these dimensions have anything to do with the goals of this tool.
Crystal seems to rely on a confusing array of unproven assumptions, including
- DISC dimensions are said to be related to the “reception” of electronic communications. In particular, they assert that differently worded messages will persuade people better or worse according to their type. “Each DISC type has its own preferred communication style, and you can have a more effective conversation with each of them by showing empathy and adjusting your own style.”
- They claim that their analysis of messages can build a DISC profile, i.e., equivalent to the person actually filling out the questions. “By analyzing public data and text, Crystal created personality profiles for anyone in your network.”
- They assert that they can help you construct email, in real time, that will improve your “communication” with individuals. It isn’t clear what sort of advice is on tap, but the example on the web page offers extremely generic advice, such as “Be specific in what you want, and offer a defined next step.” I don’t really need a personality test to offer that advice to you.
And so on.
It is possible that this software works, at least in some cases. But I would need to see some kind of evidence before I would take these assumptions as self-evident.
The problem is, of course, validating this dreck would be difficult. You would have to be able to measure the effectiveness of communication, with and without the advice. You would have to demonstrate that the supposed personality types are accurately measured by the tool. You would have to demonstrate that the types are related to communication in the ways expected.
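To make the first of those steps concrete, here is a minimal sketch, with invented numbers, of what measuring effectiveness with and without the advice would look like: an A/B comparison of response rates for advised versus unadvised emails. Crystal has published nothing like this, as far as I can find.

```python
# Minimal sketch of the validation Crystal would need: a two-proportion
# z test comparing email response rates with and without the advice.
# All numbers below are invented for illustration.
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-proportion z statistic for a difference in response rates."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)   # pooled rate under H0
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# e.g. 120/400 replies with advice vs. 100/400 without (made-up data):
z = two_proportion_z(120, 400, 100, 400)
```

The statistics are trivial; the hard part is everything around them—defining “effective communication”, randomizing who gets the advice, and collecting the outcomes. Which is precisely the work there is no sign of.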
There is no evidence that Crystal has actually done this validation. If they have, they should definitely publish their evidence.
For my money, it looks like a bunch of plausible ideas that might make you feel like you are doing something, but aren’t really doing anything.
Think about it. Just how much difference does the style of an email actually make? And when it makes a difference, is that due to some deep “personality” type of the receiver? Or is it due to other factors, such as what they already know, and the context in which the message arrives?
I want to be fair here. They have done a ton of work on this product. They have pulled together a bunch of technology, including content analysis, analytics (probably including machine learning), and some serious interface design. It’s got a lot of nice software, and it looks great.
The only problem is that it’s complete nonsense.
Speaking of brilliantly implemented nonsense…
I was pointed to another awesome project, Knack, “Knack for unlocking the world’s potential”. <<link>>
I love this product!
Knack is a mobile game that helps people “to discover their talent”. Alternatively, it also can be used to help guide education, hiring, and career choice.
“Knackalytics” is technically advanced! They tell us that it “combines the magic of games, the rigor of science, and the insight of machine-intelligent data analytics”. Yessir! Get your magic beans here!
As far as I can tell, the game is a bunch of problems to solve, and from your responses the system constructs a multidimensional profile. The profile identifies “traits”, which they call “knacks”, such as logical reasoning, social intelligence, motivation, resilience, creativity, and action orientation. These knacks are mapped to concepts from the psychological literature on “cognitive abilities, personality traits, emotional and social abilities, mindsets and aptitudes.”
The overall pitch is twofold.
First, they make the usual claims that these measures are useful for prediction (especially of job performance) and for a sort of personalization of education and career path. The latter is supposed to help you discover your own hidden talents and preferences, to help you select a career. For the former, the company goes so far as to claim that Knack is “a quantifiable predictor of workplace performance that can substitute for the use of traditional qualifications in screening and hiring.” (Qualifications, smalifications!)
They assert that these measures are “independently validated”, though I have not found any such validation. Who knows what that means. I can only hope that their corporate partners, who are very serious people, have demanded proof. If nothing else, their legal departments are going to need justification for hiring that ignores “traditional qualifications”.
The second big claim is that using a game rather than conventional testing is better. In particular, there is an implicit argument that kids will be “engaged” by a game, and therefore it will get better data. This argument gets a further political twist, in that they argue that this game is especially valuable for disadvantaged kids (which they tag with the codeword, “opportunity youth”).
This latter claim is rather complicated. They argue that kids, especially poor kids, have trouble getting a first job because they lack credentials and experience. These kids have great potential, but (a) don’t have any proof and (b) they don’t even know what their strengths are.
Knack solves this problem by offering a clever way for kids to “discover their own talent”, and for employers to find new workers who would otherwise be passed over as unqualified. Knack believes that this combination is a key to a better future.
Much of the research that Knack publishes on their website is about this point. They did a study with the Rockefeller Foundation trying to demonstrate that this game-based assessment helps kids find jobs, even kids without conventional credentials.
A close reading of their study and examples is troubling to me.
I certainly agree that youth employment is a critical challenge. I’m less certain that the problem is that kids are qualified but no one knows it. Most kids are like I was—eager and well meaning, not really prepared for work.
It’s called “being young”.
In the Rockefeller Foundation study, the approach is to develop profiles for particular jobs, and then score young people on their “potential to perform successfully” at those jobs. The profiles were developed with machine learning (so it must be right!).
The study indicates that they used four such job profiles:
“(1) entry-level customer service role at a financial institution, (2) claims processing role, (3) restaurant service role in a restaurant chain, and (4) highly skilled financial analyst role in a large insurance firm.”
OK, #4 is certainly interesting, and maybe #2 is less trivial than it sounds. But #1 and #3 are, well, pretty standard “first job” fare. I’m not sure that we need fancy analytics to hire “restaurant service” positions. And it’s not obvious what the measure of “success” would be for these positions.
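As best I can reconstruct it, the matching step amounts to comparing a candidate’s trait vector against a “success” profile for the job. A toy version of that comparison—using cosine similarity and entirely made-up trait names and numbers, since Knack’s actual model is proprietary—looks like this:

```python
# Toy sketch of profile matching: cosine similarity between a candidate's
# trait vector and a hypothetical job "success" profile. Trait names and
# numbers are invented; Knack's real model is proprietary and unknown.
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# traits: [logical reasoning, social intelligence, resilience, creativity]
job_profile = [0.9, 0.4, 0.6, 0.3]   # hypothetical "financial analyst" profile
candidate   = [0.8, 0.5, 0.7, 0.2]

score = cosine(job_profile, candidate)
```

Note what the sketch cannot answer: where the job profile comes from, and whether a high score predicts anything at all about performance. Those are exactly the claims that need evidence.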
The report is mostly about making the point that so-called “opportunity youth” (which sounds like a racial code word to me) “have a distribution of traits, abilities and aptitudes no different from that of the general population” and of current employees. In other words, racial and cultural stereotypes are not supported by the data. Furthermore, the tool is supposed to help identify the gems among the dross. That’s certainly good for employers, though I’m not sure that picking out the best is going to solve the overall challenge of underemployment.
I note that the Rockefeller study doesn’t actually show that the youth matched in this way actually were successful. Nor does it say how many were placed in actual jobs.
It’s one thing to say that hundreds of kids showed evidence that they were just as qualified as “successful” employees. It’s another thing to say that there are hundreds of jobs, and that they successfully filled them. The Rockefeller study found the former, not the latter.
There is so much to critique here it’s overwhelming.
The whole thing is shot through with good intentions and unproven assumptions.
One of the biggest assumptions is that the gamified assessment tool is somehow better than other approaches.
There may be a grain of truth here, that kids will put more work into a gamified interface. I’ll grant that 18-24 year olds might like it. And I’ll grant that there may be advantages to putting it on their mobile, since they are more likely to attend to the device than other media. So, maybe this is a good way to do assessment, at least for this population. (Obviously, it is highly problematic for people with visual or other limitations that interfere with gaming.) But even if that is true, that doesn’t make this data any more valid than any other assessment tool. And I can be forgiven if I hesitate to toss out “traditional qualifications”.
There is a deep assumption that the data is revealing information that is somehow not available otherwise. Maybe this assessment tool will help a young person discover that he or she has the talent to be a financial analyst. But maybe it will just reveal that their special “Knack” is to be a restaurant worker. Such a discovery would not be that surprising, though it might not be welcome, and it certainly doesn’t reflect long-term potential. And it is absolutely disastrous to be telling a young person that all he is cut out for is menial, dead-end work. I want kids to dream of becoming something they aren’t, not to decide that this is all they will ever be. In fact, this borders on criminal malpractice, as far as I’m concerned.
There is also an assumption that there are tons of jobs out there, but not enough workers to fill them, and that employers would hire more kids if they could just know which ones are “the good ones”. This is clearly the perspective of employers, especially employers of large numbers of low wage, short term workers. They have lots of slots, and need quick ways to find workers to fill them (and who won’t cause trouble).
Most important of all, the whole thing is built on claims that these profiles are valid predictions for job performance. Mobile game or not, this seems like a shaky proposition to me. It certainly has not been proved to do any such thing, at least not in the available information from Knack.
Finally, I have trouble swallowing the whole “opportunity youth” thing. If this isn’t a racial codeword, it is pretty damn close. The Rockefeller Report has pages of text devoted to debunking “myths” about “those people”. The “myths” are basically recitations of prejudiced stereotypes held by employers. These myths are “harmful” (no kidding!), and the solution, they say, is to replace human intuition with “the power of scientific, data-driven processes”.
This is sort of a “blame the victim” idea. Employers are prejudiced (and probably culturally blind), so they can’t find “qualified” kids. Furthermore, this is the fault of kids who have no way to “prove” that they could be good employees. Sigh.
OK, let me cut them some slack, now.
I realize that much of the rhetoric is deliberately designed to appeal to employers, especially employers who want to be enlightened and do the right thing. A pitch aimed only at workers or kids would be different.
Second, I have a lot of respect for the use of as much data as possible in hiring decisions. Intuition is the bane of fair hiring, and the demand for experience and credentials is pointless for first time workers. In other words, I can’t fault the general idea, however much I question their specific measures.
And, I have to say, the whole thing is brilliantly put together, and brilliantly pitched. “Knackalytics”! I love it!
Together these two apps are perfect examples of brilliantly executed, highly questionable designs.
Both of them are deployed as mobile apps because…if it’s not on my phone, it doesn’t exist.
Both promise to “disrupt” hiring and work (though they are based on very old technology and theory).
Both claim to be scientific, yet neither offers a single piece of peer-reviewed or otherwise scientific evidence that it does what is claimed.
Both are pitched to employers, students, and young workers, with shallow and blinkered perceptions of the interests of actual people.
So it goes.
- Niko Canner, Abigail Carlton, Guy Halfteck, and John Irons, Impact Hiring: How Data Will Transform Youth Employment. The Rockefeller Foundation, New York, 2015. https://assets.rockefellerfoundation.org/app/uploads/20151222091528/Impact-Hiring-How-Data-Will-Transform-Youth-Employment.pdf