Imagine my delight with a product that offers to write my email for me! Who wouldn’t want that? The product is called Crystal, and it lets you “Communicate with empathy.” “This is the end of email miscommunication.”
In short, it is the Greatest Thing Since Spell Checking.
How does this work?
I haven’t found any peer-reviewed or published technical explanations of this technology, so I can only go on the limited information on the web site and in press interviews. Perhaps the most informative article is Kyle VanHemert’s report at wired.com:
“If someone were looking you up, the site would start by examining things you’ve written publicly—social media profiles being a primary source—and analyzing factors like writing style and sentence structure. Then it processes what others have written about you. Using those data points, the site identifies you as one of 64 communicative types, which the company has adapted from well-known personality frameworks. Crystal doesn’t really know you, in other words, it just knows what you’re like.”
It’s not clear exactly what data and analyses they use, though it sounds like they force-fit everyone into one of 64 bins. (Are these “64” categories really a 2x2x2x2x2x2 matrix?)
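If the 64 types really are six binary dimensions (2^6 = 64), then classification amounts to six yes/no judgments. A minimal sketch of what that bookkeeping would look like; the trait names here are invented for illustration, since Crystal doesn’t publish its dimensions:

```python
# Hypothetical sketch: 64 types as six binary trait dimensions (2**6 = 64).
# These trait names are made up; Crystal does not disclose its actual model.
TRAITS = ["direct", "analytical", "patient", "expressive", "formal", "detail_oriented"]

def type_id(scores):
    """Map six boolean trait scores to one of 64 type IDs (0-63)."""
    tid = 0
    for bit, trait in enumerate(TRAITS):
        if scores[trait]:
            tid |= 1 << bit
    return tid

profile = {"direct": True, "analytical": False, "patient": True,
           "expressive": False, "formal": True, "detail_oriented": False}
print(type_id(profile))  # → 21, one of 64 possible bins
```

The point of the sketch is how coarse this is: every nuance of a person gets flattened into six bits.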
Trying It Out
I signed up and tried out a couple of profiles. The mystical email plug-in only works for Chrome (which I won’t touch), but the profiles give you a good idea of the advice they give.
My test was to request profiles for a couple of people I know well and have emailed many times. I grant you this is the worst case and most difficult challenge for Crystal, because I have way, way more information about these people than it could possibly find online. The deck was even more stacked because the individuals in question are relatively underrepresented in public social media.
Nevertheless, the algorithm gamely located them and coughed up a profile for me in a few seconds. At least it didn’t waste too much of my time!
The results were, as I expected, shallow and wrong. The “shallow” parts were generic tips like “be brief”. Good advice, but you don’t need fancy algorithms to tell you that.
The “wrong” parts were characterizations of the “personality” and preferences of the individuals which I know for a fact to be the opposite of reality. I showed one guy his own profile, and got a combination of revulsion (“that’s creepy”), incredulity (it was the opposite of his actual preferences), and derision.
If these people were strangers, and I trusted these profiles, there would be serious misunderstandings and miscommunications. Not so good.
Is this product even plausible?
I would say that much depends on the models and what this 64-way classification is. My own view is that 64 is a rather low number of classifications. (If there are, say, 1 billion people using email, each category has about 16 million people in it. I.e., the classification is accurate ‘to the nearest 8 million people’.) I would also wonder about cultural and linguistic diversity.
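The arithmetic behind that back-of-the-envelope figure (taking 1 billion email users as a round number):

```python
# Rough order-of-magnitude check: people per category if everyone
# who uses email is sorted into one of 64 bins.
email_users = 1_000_000_000  # round figure used in the text
categories = 64
print(email_users // categories)  # → 15625000, i.e. ~16 million people per bin
```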
I would be shocked if the same 64 categories work in English as in Chinese. Furthermore, I would be surprised if twenty-somethings fall into the same categories as 60-year-olds (not that Crystal cares about old farts). Children and teenagers will be different from parents and adults. Colleagues are different from sales calls, and teachers and students are different from bosses and employees.
There also appears to be no consideration of learning, change over time, or any kind of situational variables.
Basically, the whole theory seems too simple for reality. Just like most “personality” theory.
The second aspect of the technology is the data. They are said to be sucking in what they claim is “public” data, e.g., social media profiles, Twitter messages, and the like. (Note that profiles are self-generated, and Twitter is, well, not always an honest reflection of personal character.)
Unless this data includes actual examples of emails to and from the person in question, I don’t understand how it can work very well. Including email collections would improve the data, but would be a lot more intrusive than they advertise.
Given the weakness of this data, I have to guess that they built their models from huge samples, e.g., millions of Facebook profiles. Whatever this dataset is (and they aren’t saying), how in the world could they have validated the model against real people?
Speaking of validation, I won’t be the first person to wonder whether the results are any more accurate than randomly generated profiles, or even astrology. Did they even do validation studies against such basic control conditions? If so, there is no mention of it.
For that matter, a very simple experiment would be to collect profiles, remove the identifying information, and have someone try to match each profile to the person it describes (for people he or she knows). How well would they do? Hasn’t Crystal done this experiment? Inquiring minds want to know.
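The proposed experiment has a natural control condition: with n anonymized profiles and n known people, random matching is correct about 1/n of the time, so profiles with real signal should beat that handily. A small sketch (my own simulation, not anything Crystal provides) estimating that chance baseline:

```python
import random

def chance_matching_accuracy(n_people, trials=10_000, seed=0):
    """Estimate accuracy of randomly matching n anonymized profiles to n people.

    Each trial shuffles the profiles and counts how many land on the right
    person; the expected fraction correct is 1/n regardless of n.
    """
    rng = random.Random(seed)
    people = list(range(n_people))
    correct = 0
    for _ in range(trials):
        guess = people[:]
        rng.shuffle(guess)
        correct += sum(g == p for g, p in zip(guess, people))
    return correct / (trials * n_people)

# With 10 people, random matching gets roughly 10% right on average.
print(chance_matching_accuracy(10))
```

If a rater who knows the people can’t beat that 1/n baseline using Crystal’s profiles, the profiles carry essentially no information.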
Overall, I have to think that the underlying technology is not based on solid theory, and the analytics are way more powerful than the data supports.
Considering all this, it actually worked pretty well. 🙂
I wouldn’t recommend using this software, though.
For one thing, the concept of adjusting your message to what you think the recipient wants to hear is probably a mistake. You would be much better off developing your own personality, and learning to present yourself effectively (and briefly). (Some people call this your “personal brand”; I call it being human.) Your recipient would rather learn about you than learn about what your software thinks he or she wants to hear.