
Trigger warning – this post discusses Child Sexual Abuse Material (CSAM).
Whilst browsing Xitter the other day, I read a post by one of the people I follow – Troy Hunt.
For those who don’t know him: Troy is a Microsoft Regional Director and Microsoft Most Valuable Professional, blogger at troyhunt.com, international speaker on information security, and the creator of HaveIBeenPwned.com – the go-to site for checking whether your credentials have been leaked in a security breach.
On the 8th of October, Troy posted about some research he had been doing into a recent data breach at Muah.ai – a website that uses Artificial Intelligence to generate an “AI companion”: someone (or something) you can have conversations with, shaped by the prompts you provide.
His research was spurred on by an article on 404media.co which reported that Muah.ai had been hacked and the data of its users leaked – data containing not only PII (Personally Identifiable Information), but also the prompts those users had submitted to generate their AI companions and scenarios.

It’s pretty obvious where this is going just from the images you see when you visit the site’s home page.
In his post, Troy explains that the concept of the website is no different to that of many other similar sites currently running across the web, and that generating an AI companion is not, in itself, an illegal or illicit act.
Unfortunately, though, whilst examining the leaked data Troy became very aware that many, if not most, of the prompts were of a much darker nature.


It must be said that whilst the prompts shown above might not be to everyone’s tastes, what they describe is not in itself illegal. Unfortunately, though, it gets worse – much worse.
In his thread, Troy goes on to explain that many of the prompts he examined contained some very graphic depictions of what can only be described as CSAM.

As you would expect, these findings are too graphic to be described in any detail, and Troy has stated that this breach is one of only a few that have made him report his findings to law enforcement.
This is where things get interesting: in the data Troy examined, the prompts were stored alongside the real names and addresses of the people who had subscribed to muah.ai.
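To see why that combination is so damaging, consider what a record of that shape might look like. This is purely my own illustration – invented field names and placeholder values, not actual breach data – but it shows the core problem: the prompt history and the real identity travel together in the same row.

```python
# Hypothetical shape of a leaked record (illustration only; all field
# names and values are invented placeholders, not actual breach data).
leaked_record = {
    "email": "jane.doe@example.com",   # verified sign-up address (PII)
    "name": "Jane Doe",                # real name supplied at registration
    "prompts": [
        "<companion prompt exactly as typed by the user>",
    ],
}
```

Anyone holding the dump can therefore attribute every prompt to a named, contactable person – no extra correlation work required.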
To confirm that the PII is real, and that accounts on muah.ai cannot be created by anyone other than the owner of the email address, Troy did a bit more digging and showed that to sign up to the site you have to enter an email address:

and then confirm you received the email by clicking on the verification link:

So unless the email account itself has been compromised, the owner of the email address is almost certainly the owner of the muah.ai account, and the person responsible for generating the AI prompts.
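For anyone unfamiliar with why that proof is so strong, here is a minimal sketch of the standard double opt-in pattern the site appears to be using – my own illustration in Python, not muah.ai’s actual code, with the URL and storage invented for the example:

```python
import secrets

# token -> email awaiting confirmation; a real system would use a database
# and expire unused tokens after a while.
PENDING: dict[str, str] = {}
VERIFIED: set[str] = set()  # emails whose owners clicked the link


def register(email: str) -> str:
    """Create an unguessable token and return the link to send to the inbox."""
    token = secrets.token_urlsafe(32)
    PENDING[token] = email
    return f"https://example.com/verify?token={token}"


def verify(token: str) -> bool:
    """Activate the account only if the emailed token is presented back."""
    email = PENDING.pop(token, None)
    if email is None:
        return False  # unknown or already-used token
    VERIFIED.add(email)
    return True
```

Because the token only ever travels to the inbox, presenting it back proves control of that inbox – which is exactly why the leaked accounts can be tied to the owners of their email addresses with such confidence.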
As part of his research, Troy sent over 8,000 emails to addresses found in the data leak – all of which were real accounts.

Troy then shows that the prompts generated can be directly linked to real identities:


This data can then be connected to a live LinkedIn account – heavily redacted here for obvious reasons:

Troy goes on to comment that many of the email addresses used to sign up to the site are not just personal ones, but addresses linked to company and government accounts.
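Grouping the leaked addresses by domain is the obvious way to spot that pattern. As a rough sketch of how such a triage might be done (the addresses below are invented, and the list of personal-mail providers is just an assumption for illustration):

```python
from collections import Counter

# Invented sample addresses standing in for the leaked data.
leaked_emails = [
    "alice@gmail.com",
    "bob@bigcorp.example",
    "carol@agency.example.gov",
]

# Count how many leaked accounts each domain contributes.
domains = Counter(email.rsplit("@", 1)[1].lower() for email in leaked_emails)

# Flag anything that is not a common personal-mail provider.
PERSONAL = {"gmail.com", "outlook.com", "yahoo.com", "hotmail.com"}
work_or_gov = {domain: n for domain, n in domains.items() if domain not in PERSONAL}
print(work_or_gov)  # {'bigcorp.example': 1, 'agency.example.gov': 1}
```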
In an associated post, another security researcher on Xitter – @laughing_Mantis – has since stated that they are aware of at least two extortion attempts based on the leaked Muah.ai data. Both people targeted are IT developers who have been sent demands containing credible data proving that the threat actors have knowledge of their activities.
There are some very disturbed people out there, and I hope they are all charged under the current laws on producing CSAM.