
Using AI chatbots to learn about financial risks in games
Around half of children feel online games are not fun unless they spend money. Many games include financial risks, from gambling-like features and scams to tactics that encourage repeated or pressured spending. With thousands of games on the market, it can be difficult for parents to know which ones may pose risks, what is age-appropriate, and how serious those risks might be.
How reliable are AI chatbots?
Many parents now use AI chatbots for advice about parenting and child welfare. But the accuracy of the information they provide varies depending on the tool and how questions are asked.
We worked with youth platform VoiceBox to test five popular AI chatbots — Claude, Gemini, ChatGPT, Grok, and Meta AI — and found that the quality of responses is not consistent. Specific, well-structured questions tend to produce clearer and more useful information, while vague questions can result in incomplete or misleading answers.
In our testing, Claude generally gave the most consistently useful answers, followed by Gemini. However, results varied depending on the questions asked, so the most effective chatbot may be the one a parent is already familiar with.
Using AI chatbots to explore financial risks
While AI chatbots cannot guarantee accurate advice, they can help parents better understand the features in games that may encourage spending. This guide sets out tested strategies for getting clearer answers, including ready-to-use prompts.

3 AI prompts ready to try
Games often include ways for children to spend money, and it can be hard to know which ones are risky. These prompts were tested to help parents get clearer and more informative responses from AI chatbots.
Replace [Game Name] with the game your child plays and [Platform] with the platform they play it on, such as PlayStation, Xbox, PC or mobile.
1. To learn about risks
Copy and paste the text below. Remember to customise it to the game you are interested in.
Create nothing but a table including all of the microtransactions and in-game purchases for the game [Game Name] on [Platform]. Include a score from 1 to 10 on whether the game is seen to be pay-to-win. Include a score from 1 to 10 on how likely the game is to encourage unhealthy spending. Include a suggested age rating based on potential financial risk. Respond with only the table and scores, the least and most expensive available microtransactions in the game, followed by a short summary of the data.
2. To learn about pressure to spend
Copy and paste the text below. Remember to customise it to the game you are interested in.
List the specific design tactics, such as time-limited offers, streak rewards, or peer pressure incentives, used in [Game Name] on [Platform] to encourage frequent purchases.
3. To learn about parental controls
Copy and paste the text below. Remember to customise it to the game you are interested in.
Does [Game Name] have parental controls that can block or monitor my child’s in-game spending on [Platform]?
5 tips for creating better AI prompts
Be specific. Ask about particular spending risks or tactics, rather than asking “Is [Game Name] safe?” For example: “Which in-game purchases could encourage my child to spend too much?”
Keep it neutral. Avoid emotional phrasing. Focus on facts, such as what spending options exist, how they work, and how risky they might be. Neutral phrasing helps AI provide clearer information.
Include your child’s age. AI responses are more relevant when you provide context like your child’s age: for example, “my 12-year-old.”
Ask about specific design tactics. Request information on features that encourage spending, like loot boxes, streak rewards, time-limited offers, or peer pressure. This improves the detail and usefulness of AI responses.
Separate third-party risks. Some risks come from outside the main game, such as unofficial marketplaces. Ask the AI to list these separately, and verify the information yourself.
Note: While this guide focuses on financial harms in games, the same approach can help parents use AI chatbots for other parenting questions.
But wait: can AI chatbots get it wrong?
As outlined above, we conducted research with youth platform VoiceBox to test how well five popular AI chatbots provide information about financial risks in games: Claude, Gemini, ChatGPT, Grok, and Meta AI.
The results showed that the quality of information varies between chatbots, and that accuracy depends on how questions are asked and on each chatbot’s own behaviour or “persona.”
We also learned
Chatbots can confuse different versions of the same game, like mobile vs console, and treat third-party platforms or marketplaces as part of the main game.
Chatbots can give confident-sounding answers that are actually incorrect (these are known as “hallucinations”).
Chatbots can miss hidden spending risks if your question isn’t specific enough.