And We Told You This Would Happen, Too

The deployment of machine learning models capable of generating usable code has sparked a lot of chit chat about what this means for ordinary hard-working programmers.

But, of course, the same technology that helps kids write their school assignments (and, perhaps, influence government policy [3]) can help criminals write plausible phishing emails. And if it can write programs good enough to keep up with the pack in a programming contest, it can write malware to steal or destroy information.

Check Point Research reports that this is already happening [1]. We shouldn’t be surprised. A lot of cybercrime is pretty much cookie-cutter technique, copying and pasting what others have done. Like a lot of successful crime, it’s not technically innovative, just gutsy.

And cybercrime has always been motivated to use the best software engineering available, because, well, profit! What’s better than a malware system that sucks up money for you? A malware system that you push a button to create, and that then sucks up money for you!

Actually, the AI isn’t really capable of generating functional malware (or, on its own, much of anything). But it’s a super useful programming assistant, capable of generating all the pieces of a working system [2].

The best thing, from the point of view of creating malware, is that the ML is really good at cut and paste, picking up and reproducing common practice. This is really useful, because there is a lot of stuff, like routine encryption of messages, that is very repetitive but fiddly. If and when you goof up the routine stuff, that’s where the white hats crack your system. Using an AI to remind you of all the t’s to cross and i’s to dot will make things all the harder to stop.
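To make the “repetitive but fiddly” point concrete, here is a minimal sketch, in Python with the widely used cryptography package, of the sort of routine message-encryption boilerplate I mean. It isn’t taken from any of the reports cited here; it just shows how mechanical this code is, and the comments flag the key-handling goofs that defenders love to find.

    from cryptography.fernet import Fernet

    def encrypt_message(plaintext: bytes) -> tuple[bytes, bytes]:
        # Fresh key every time; hard-coding or reusing a key is the classic
        # "goofed up the routine stuff" mistake that hands defenders a win.
        key = Fernet.generate_key()
        ciphertext = Fernet(key).encrypt(plaintext)
        return key, ciphertext

    def decrypt_message(key: bytes, ciphertext: bytes) -> bytes:
        return Fernet(key).decrypt(ciphertext)

    if __name__ == "__main__":
        key, ct = encrypt_message(b"repetitive, fiddly, easy to get wrong")
        assert decrypt_message(key, ct) == b"repetitive, fiddly, easy to get wrong"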

I gather that natural language generation is also contributing to other crimes. Phishing is all about creating plausible BS, plausible enough to make the recipient click on a link. Plausible BS is ChatGPT’s wheelhouse!


Thinking about this, I have to wonder how to defend against automated script kiddies.

Obviously, the primary response is to deploy ML in defense. In this case, I guess that means that routine defense and countermeasures should be automated as much as possible. Sadly, we know that ML-based assistants are not necessarily good at securing code. But there probably are other tasks they could do well, such as routine scanning, checklists, and updates. Have a bot that makes sure we do all those mind-numbingly boring things we should do every day, and that we do them correctly and with the latest updates.
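Purely as a sketch of that nagging-bot idea, here is what a bare-bones daily checklist runner might look like in Python. The two checks and the commands they shell out to are stand-ins I picked for illustration, not a recommendation of specific tools, and a real deployment would obviously cover far more.

    import subprocess
    from datetime import date

    # Hypothetical checklist: each item maps to a command whose output a human
    # (or a fancier bot) would review every day.
    CHECKLIST = {
        "outdated Python packages": ["pip", "list", "--outdated"],
        "pending OS updates (Debian/Ubuntu)": ["apt", "list", "--upgradable"],
    }

    def run_daily_checklist() -> None:
        print(f"Daily hygiene report, {date.today()}")
        for item, cmd in CHECKLIST.items():
            try:
                result = subprocess.run(cmd, capture_output=True, text=True, timeout=120)
                status = "ok" if result.returncode == 0 else f"exit {result.returncode}"
            except (FileNotFoundError, subprocess.TimeoutExpired) as exc:
                status = f"could not run ({exc.__class__.__name__})"
            print(f"  - {item}: {status}")

    if __name__ == "__main__":
        run_daily_checklist()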

A more exotic possibility might be offensive operations against the model itself. Can we poison the ML, disabling or distorting its ability to generate malware? Or even better, manipulate the ML so it generates malware with a kill switch, or malware that infects the source rather than the target. Even partially or occasionally successful poisoning would greatly reduce confidence in the assistant, and deter its use.


  1. Check Point Research, OPWNAI: Cybercriminals Starting to Use ChatGPT. Check Point Research, 2023. https://research.checkpoint.com/2023/opwnai-cybercriminals-starting-to-use-chatgpt/
  2. Dan Goodin, ChatGPT is enabling script kiddies to write functional malware. Ars Technica, January 6, 2023. https://arstechnica.com/information-technology/2023/01/chatgpt-is-enabling-script-kiddies-to-write-functional-malware/
  3. Nathan E. Sanders and Bruce Schneier, How ChatGPT Hijacks Democracy. The New York Times, January 15, 2023. https://www.nytimes.com/2023/01/15/opinion/ai-chatgpt-lobbying-democracy.html
