
How parents can use AI to check spending risks in games
Around half of children feel games are not fun unless they spend money. But many games come with financial risks, including gambling-like features, scams and pressures. With so many games on the market, how can parents judge the level of risk, what is age appropriate, and the scale of any problems?
AI chatbots as the solution
4 in 5 parents use AI – including many who use it to seek advice on parenting and child welfare. So how useful and reliable is that advice, and how can you get the best answers quickly?
Our new research has found that AI chatbots can explain financial game features and highlight potential risks, so you can understand whether a game might encourage spending.
We have found the quality of the answers depends on how you ask the question. Vague questions can lead to unclear or incomplete responses, while specific, well-structured prompts usually give more helpful information.
This guide shows you how to use AI chatbots to get clearer, more reliable answers.
This guide focuses on financial harms in games – but you might find it useful for getting parenting help from AI chatbots on any other topic too.
You’ll find ready-to-use prompts, tips for asking better questions, and guidance on what chatbots can get wrong.
3 great AI prompts: ready to try
Games often include ways for kids to spend money, and it can be tricky to know which ones are risky. These prompts have been carefully tested to help you get clear, reliable answers from AI chatbots. Copy and paste them exactly, replacing [Game Name] with the game and [Platform] with the one being used.
1. To find out about unhealthy spending and risks in a game.
Copy and paste the text below. Remember to customise it to the game you are interested in.
Create nothing but a table including all of the microtransactions and in-game purchases for the game [Game Name] on [Platform]. Include a score from 1 to 10 on whether the game is seen to be pay-to-win. Include a score from 1 to 10 on how likely the game is to encourage unhealthy spending. Include a suggested age rating based on potential financial risk. Respond with only the table and scores, the least and most expensive available microtransactions in the game, followed by a short summary of the data.
2. To find out how easy or difficult it would be for a child to purchase a microtransaction in a game.
Copy and paste the text below. Remember to customise it to the game you are interested in.
Does [Game Name] have parental controls that can block or monitor my child’s in-game spending on [Platform]?
3. To find out if a game encourages frequent purchases.
Copy and paste the text below. Remember to customise it to the game you are interested in.
List the specific design tactics, such as time-limited offers, streak rewards, or peer pressure incentives, used in [Game Name] on [Platform] to encourage frequent purchases.
5 tips for creating better AI prompts
Be specific
Instead of asking “Is [Game X] safe?”, ask bots to explain specific spending risks or tactics. For example, “Which in-game purchases could encourage my child to spend too much?”
Keep it neutral
Avoid emotional phrasing – like “Is my child in danger?” – and focus on facts. For example, what spending options exist, how they work, and how risky they might be.
Include your child’s age
AI gives more relevant answers when it knows your child’s age. Say “my 12-year-old” so the advice matches that stage.
Ask about specific design tactics
For clearer answers, ask the AI to identify the design features a game uses to encourage spending. For example, you can ask about tactics such as loot boxes, streak rewards, time-limited offers, or social pressure from other players.
Separate third-party risks
Some risks come from outside the main game, such as third-party sites and unofficial marketplaces. Ask the AI to list these separately. AI can find this tricky, so be persistent.
Can AI chatbots get advice wrong?
In our research, conducted with the youth platform VoiceBox, five popular AI chatbots were tested: Claude, Gemini, ChatGPT, Grok, and Meta AI.
The research found that the baseline quality of information these AI models give parents varies. Answers differed depending on how the question was asked and on the bot’s own “persona.”
Things to watch out for:
- Chatbots mixing up different versions of the same game, like mobile vs console.
- Treating third-party platforms or marketplaces as part of the main game.
- Confident-sounding answers that are actually incorrect (“hallucinations”).
- Missing hidden spending risks if your question isn’t specific enough.
- A slang-heavy or joking tone that can be confusing or annoying.
In our scored testing, Claude gave the best advice, closely followed by Gemini, but often the most useful bot is the one you already have an established history with.
If you would like to read the full report, click here.