Keeping Young People Safe Online
Artificial Intelligence and Emerging Technologies
We know new technology can be overwhelming for some – especially if you’re the type of tech user who is still getting used to your latest phone update! But whether you’re a total ‘techie’ or take your time adjusting, it’s important to be aware of the safeguarding risks presented by new and emerging technologies such as artificial intelligence (AI), the metaverse or even VR headsets.
Where you can find support
Other contacts:
- Internet Matters: Offers guidance on using AI with children and young people. For more information, visit the Internet Matters website.
- INEQE Safeguarding Group: For resources and advice on AI and emerging tech, visit INEQE’s website.
What you need to know
Parents, carers and safeguarding professionals can help protect young people by educating themselves about different AI applications, the range of AI young people are exposed to, and the associated risks. To do this, the key things you should do are…
- Age limits: Check the age limits for the AI technologies the young person in your care has access to. Decide whether they are mature enough to use these applications and have a conversation with them about safe use.
- Use content filters, parental controls and safety settings: Implementing functions like safe search filters and using your internet provider’s parental control features will help make your child’s online experiences with AI safer. Before applying these safety settings, we recommend talking to the young person in your care about why they are necessary.
- Factual inaccuracies and misinformation: Be aware that generative AI such as ChatGPT largely depends on publicly available data and can occasionally generate incorrect information, misinformation or disinformation.
- AI-generated harmful content: Some AI technologies can create images or audio based on just a few words. Although this is useful in business, it can unfortunately result in the production of ‘deepfakes*’, which have potentially harmful consequences.