Business Models for AI

Large Language Models are the flavor of the year. 

Despite numerous preposterous demos and shaky legal status, the developers are rushing to commercialize ChatGPT and friends.  Perhaps they need to recoup some of the insane expense of operating these models.  Or perhaps these tech capitalists have no imagination beyond their own limited experiences exploiting the Internet.

Anyway, I remain curious about how these ML models can be commercialized.  What is there that you can sell, and who will buy it?  And just how much money is there to be made, anyway? 


This spring, for instance, Matthew S. Smith reports that OpenAI (which is not especially “open” any more since it went commercial in 2019) offers “access” to a version of the GPT-3.5 model, which is believed to be the technology that powers ChatGPT and Bing Chat [3]. 

As far as I can tell, “access” means that it is possible to lease a virtual machine and reach an instance of GPT-3.5 via an API.  You can upload your own data and fine-tune a model, which you can then incorporate into a product available over the network.  
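Concretely, “access via an API” amounts to assembling an authenticated HTTP request and sending it to the vendor’s endpoint.  Here is a minimal Python sketch of what such a client looks like; the request shape follows OpenAI’s published chat API, but treat the endpoint URL, model name, and key as illustrative placeholders rather than a working integration:

```python
import json

# Placeholder endpoint in the style of a hosted-LLM API; the real URL,
# and your API key, would come from the vendor's documentation.
API_URL = "https://api.example.com/v1/chat/completions"

def build_chat_request(model, user_message, api_key):
    """Assemble the headers and JSON body for one chat-completion call.

    Access is metered per API key, which is the whole business model:
    you pay per request (or per token), and the vendor runs the model.
    """
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": model,  # e.g. a base or fine-tuned GPT-3.5 variant
        "messages": [{"role": "user", "content": user_message}],
    }
    return headers, json.dumps(body)

# Build (but do not send) a sample request.
headers, payload = build_chat_request("gpt-3.5-turbo", "Hello!", "sk-...")
print(json.loads(payload)["model"])
```

The point is how thin this layer is: a product that “has AI” may be nothing more than a few dozen lines like these wrapped around someone else’s leased model.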

As Smith notes, this is far cheaper than previous products like this, and it “Swings the Doors Wide Open”.  This access is cheap enough that it can be included in a “free” product, or as a “cool new feature”, such as Snapchat’s recent reinvention of Clippy, the “My AI” ChatBot [1].  (Apparently, part of Snap’s business model is a form of extortion:  the ChatBot is pinned to your view whether you want it or not, and the only way to get rid of it is to pay.)

It’s far from clear to me how much money can be made with such technology, or whether there are any sustainable businesses here.  So I’m a bit surprised at the rush to make commercial products, all the more so because there are a ton of unknowns about just who owns what.


On that front, this spring Mari Sako discusses just how undefined the very notion of “Contracting for Artificial Intelligence” is [2].  When OpenAI or Snap sells or gives away an AI-based ChatBot, there is a legal agreement.  But it is far from clear that those agreements are complete, or that they are, in fact, legal.

First, these are data-intensive tools.  Who owns the data?  This is especially important because most interesting targets for this AI involve data about people, i.e., personal data.  Does the ChatBot have proper permission to use its training data?  (Almost certainly not.)

Worse, these AIs generate new data, including data about people.  Who owns this generated data?  And who is liable for its accuracy and proper use?

And liability is a second huge, huge issue.  The current generation of demos is renowned for its, *ahem*, “hallucinations”, i.e., confidently making up false facts.  This is funny, until it isn’t.  The inscrutable, inexplicable, opaque behavior of machine learning systems makes it nearly impossible to assign responsibility for such goofs.  There will be lawsuits, but who should be sued?  And how do you defend?

Sako notes that this opacity works the other way, too.  When there is an economic benefit from one of these models, how can the profits be assigned?  If your ChatBot creates something valuable, who owns the goodies? Again, there will be lawsuits.

Sako also points out that in many commercial situations, companies need to guard their proprietary data.  But most ML models work best if data is aggregated from multiple sources.  This presents a tricky situation, because client data is generally not supposed to be shared, even with other clients, let alone outside the firm.  But that data is, in a sense, one of the key products of the company, and all the clients would, presumably, benefit from a better AI tool.

While Silicon Valley feels free to steal from a billion people on the Internet, corporations are going to find it difficult to just plain appropriate their clients’ data.  There will need to be new kinds of contracts, enabling aggregation of data and apportioning profit and responsibility for the results.


Overall, this rush to commercialize AI ChatBots is surely premature. 

This current generation of apps is basically “Clippy V2”, which will be exciting for a brief moment, and then users will rise up with torches and pitchforks.

At the same time, there will be lawsuits.  Many, many lawsuits.  There may even be criminal cases.  It’s going to be ugly.


  1. Tom Gerken, Snapchat introduces AI chatbot to mixed reviews, in BBC News – Tech, April 26, 2023. https://www.bbc.com/news/technology-65388258
  2. Mari Sako, Contracting for Artificial Intelligence. Communications of the ACM, 66 (4):20–23,  2023. https://cacm.acm.org/magazines/2023/4/271231-contracting-for-artificial-intelligence/abstract
  3. Matthew S. Smith, OpenAI Swings the Doors Wide Open on ChatGPT, in IEEE Spectrum – Artificial Intelligence, March 9, 2023. https://spectrum.ieee.org/chatgpt-2659513223