An intro to AI, made for students
Adorable, operatic blobs. A global, online guessing game. Scribbles that transform into works of art. These may not sound like they’re part of a curriculum, but learning the basics of how artificial intelligence (AI) works doesn’t have to be complicated, super-technical or boring.
To celebrate Digital Learning Day, we’re releasing a new lesson from Applied Digital Skills, Google’s free, online, video-based curriculum (and part of the larger Grow with Google initiative). “Discover AI in Daily Life” was designed with middle and high school students in mind, and dives into how AI is built and how it helps people every day.
AI for anyone — and everyone
“Twenty or 30 years ago, students might have learned basic typing skills in school,” says Dr. Patrick Gage Kelley, a Google Trust and Safety user experience researcher who co-created (and narrates) the “Discover AI in Daily Life” lesson. “Today, ‘AI literacy’ is a key skill. It’s important that students everywhere, from all backgrounds, are given the opportunity to learn about AI.”
“Discover AI in Daily Life” begins with the basics. You’ll find simple, non-technical explanations of how a machine can “learn” from patterns in data, and why it’s important to train AI responsibly and avoid unfair bias.
First-hand experiences with AI
“When students engage directly with everyday tools and experiment with them, they get first-hand experience of the potential uses and limitations of AI,” says Dr. Annica Voneche, the lesson’s learning designer. “Those experiences can then be tied to a more theoretical explanation of the technology behind it, in a way that makes the often abstract concepts behind AI tangible.”
Guided by Google’s AI Principles, the lesson also explores why it’s important to develop AI systems responsibly. Developed with feedback from a student advisor and several middle- and high-school teachers, the lesson is intended for use in a wide range of courses, not just in computer science (CS) or technology classes.
“It’s crucial for students, regardless of whether they are CS students or not, to understand why the responsible development of AI is important,” says Tammi Ramsey, a high school teacher who contributed feedback. “AI is becoming a widespread phenomenon. It’s part of our everyday lives.”
Whether teaching in person or remotely, teachers can use the lesson’s three- to six-minute videos as tools to introduce a variety of students to essential AI concepts. “We want students to learn how emerging technologies, like AI, work,” says Sue Tranchina, a teacher who contributed to the lesson. “So students become curious and inspired to not just use AI, but create it.”
Recommendations for Regulating Artificial Intelligence to Minimize Risks to Children and Their Families
Between January 2023 and March 2024, multiple entities published guidance on artificial intelligence (AI), underscoring growing public concern about AI governance. Meanwhile, as federal and state legislators weigh the need for AI regulations to safeguard the public from various risks, recent discourse about AI risk has overlooked the use of AI by children and their families or caregivers.
This gap is widening as students increasingly turn to AI for homework assistance and interact with AI-generated content (including images and videos), and as caregivers (including both parents and educators) attempt to use AI to foster child engagement. Drawing on lessons from a recent Child Trends study on the capabilities of AI systems, we propose stronger guidance and regulations to ensure rigorous assessment of potential harm by AI systems in contexts involving children and families.
For our study, we created two AI systems—each based on one of two prominent Large Language Models (LLMs; see the Methods note at the end of this blog)—and found that the two systems showed strong agreement on simple tasks (e.g., identifying articles on compensating the early childhood workforce) but diverged when handling complex subjects (e.g., analyzing articles on a change framework). This divergence illustrates a potential risk: an AI system’s interpretation (or misinterpretation) of complex human ideas and values could expose children and caregivers to incorrect information or harmful content.
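To make this kind of comparison concrete, here is a minimal, self-contained Python sketch of how agreement between two systems labeling the same articles might be quantified. The labels below are invented for illustration, and raw agreement plus Cohen’s kappa are common choices for such checks; none of this reproduces Child Trends’ actual pipeline or data.

```python
"""Illustrative sketch: quantifying agreement between two AI systems that
label the same articles. All labels here are invented for demonstration;
this is not Child Trends' actual pipeline or data."""

from collections import Counter

def agreement_rate(labels_a, labels_b):
    """Raw agreement: fraction of items given the same label by both systems."""
    return sum(a == b for a, b in zip(labels_a, labels_b)) / len(labels_a)

def cohens_kappa(labels_a, labels_b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(labels_a)
    p_o = agreement_rate(labels_a, labels_b)
    freq_a, freq_b = Counter(labels_a), Counter(labels_b)
    # Expected agreement if the two systems labeled independently at random,
    # given each system's observed label frequencies.
    p_e = sum(freq_a[label] * freq_b[label] for label in freq_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical relevance labels from two systems screening the same six articles.
system_a = ["relevant", "relevant", "irrelevant", "relevant", "irrelevant", "relevant"]
system_b = ["relevant", "irrelevant", "irrelevant", "relevant", "relevant", "relevant"]

print(f"raw agreement: {agreement_rate(system_a, system_b):.2f}")  # 0.67
print(f"Cohen's kappa: {cohens_kappa(system_a, system_b):.2f}")    # 0.25
```

In this toy run, the two systems agree on four of six articles (raw agreement 0.67), yet the chance-corrected kappa is only 0.25, showing how surface-level agreement can overstate reliability on harder, more subjective labels.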
We propose that federal and state regulators mandate proper assessment of three aspects of AI systems to minimize the potential risks of AI to children and families. First, regulators should mandate AI assessments capable of distinguishing between AI systems that can reliably handle both simple and complex subjects and those that cannot. Our experience with the AI systems created for our study illustrates the need for safeguards so that AI tools and systems meant for young people—from chatbots to virtual reality devices—can be trusted not to generate images and suggestions that are harmful, dangerous, or unethical.
Second, we propose that regulators underscore the importance of assessing the developmental appropriateness and safety of AI-generated content for different age groups in specific contexts. As explained above, the capability of AI systems to handle complex subjects should be part of the determination of developmental appropriateness and safety. The European Union’s new Artificial Intelligence Act proposes a risk classification system for AI, raising the question of whether AI systems should follow an age rating system akin to motion picture ratings, the Entertainment Software Rating Board ratings used for video games, and the Pan European Game Information ratings (used in 39 countries).
In the U.S. context, the California Age-Appropriate Design Code Act would mandate specific design and operation standards for digital services likely to be accessed by children, ensuring that these services are appropriate and safe for various age groups. The need to help the public distinguish age-appropriate AI content has become increasingly pertinent as AI extends beyond text generation to the creation of images, audio, and video.
Third, we propose that regulators mandate continuous quality improvement for AI systems—using such processes as regular license renewals—because training data evolves over time and can make existing versions of AI systems outdated. During Child Trends’ assessment of the two AI systems in our study, we found that one model couldn’t recognize “Latinx” due to its outdated training data. This limitation has significant real-world consequences for children trying to understand their world.
Language and cultural norms constantly emerge and change—consider terms such as “woke,” “cisgender,” and “equity”—as communities work to create a more inclusive society for children and families, and as they engage in fierce debates over issues of race, gender, ideology, and religion. These shifts in cultural norms and framings underscore the need to monitor and refine AI systems on an ongoing basis to ensure that they remain relevant, accurate, and inclusive of evolving societal and linguistic dynamics. Regulators should treat the authorization of AI systems for market entry as the beginning—not the end—of the oversight process and should mandate regular follow-up evaluations, similar to an annual license renewal process.
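One way such follow-up evaluations could be operationalized is a recurring terminology audit against a curated watchlist of emerging terms. The sketch below is a hypothetical illustration: query_system() merely simulates the system under review, and a real audit would instead prompt the deployed system and route failures to human reviewers.

```python
"""Illustrative sketch of a recurring terminology audit for a deployed AI
system. query_system() is a simulated stand-in; a real audit would prompt
the system under review (e.g., "Define '<term>' in one sentence")."""

# Curated watchlist of emerging or shifting terms, maintained by domain experts.
WATCHLIST = ["Latinx", "woke", "cisgender", "equity"]

def query_system(term: str) -> str:
    """Simulated responses standing in for the monitored system's output."""
    simulated = {
        "woke": "Alert to injustice and discrimination in society.",
        "cisgender": "Describes a person whose gender identity matches the sex assigned at birth.",
        "equity": "Fairness achieved by accounting for people's differing circumstances.",
    }
    return simulated.get(term, "")  # empty string: term not recognized

def run_terminology_audit(watchlist):
    """Return the terms the system fails to recognize, for human follow-up."""
    return [term for term in watchlist if not query_system(term).strip()]

if __name__ == "__main__":
    failures = run_terminology_audit(WATCHLIST)
    if failures:
        print("Flag for review before license renewal:", ", ".join(failures))
    else:
        print("All watchlist terms recognized.")
```

Run as-is, this simulated audit flags “Latinx” as unrecognized, mirroring the gap we observed in one of the models we assessed.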
As we navigate an increasingly AI-integrated world, it becomes ever more imperative to develop robust regulations and guidelines for AI development, especially in contexts affecting children and families. Congress and state legislatures play a pivotal role in shaping the legal framework to ensure that AI systems are deployed responsibly and ethically. This requires not just a superficial adoption of technology but a deeper, more informed understanding of how AI affects child development and well-being, an understanding that can pave the way for an AI-empowered future that is safe for our younger generations.
Methods note
For the study cited in this blog, Child Trends constructed two AI systems based on two prominent Large Language Models (LLMs) to screen and extract relevant information from a vast repository of over 10,000 articles, factsheets, and reports. We developed and implemented a set of performance metrics to measure the reliability, validity, and accuracy of information extraction.
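As an illustration of what such performance metrics can look like, the following sketch scores two hypothetical screening systems against a human-coded validation sample using precision, recall, and accuracy. The labels are made up for demonstration and do not reflect the study’s data or its exact metric definitions.

```python
"""Illustrative sketch: scoring two screening systems against a human-coded
validation sample. Labels are invented for demonstration; they are not the
study's data, and the study's exact metric definitions may differ."""

def precision_recall_accuracy(predicted, truth):
    """Standard metrics for binary relevance screening (True = relevant)."""
    tp = sum(p and t for p, t in zip(predicted, truth))        # relevant, kept
    fp = sum(p and not t for p, t in zip(predicted, truth))    # irrelevant, kept
    fn = sum((not p) and t for p, t in zip(predicted, truth))  # relevant, missed
    correct = sum(p == t for p, t in zip(predicted, truth))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall, correct / len(truth)

# Human-coded ground truth for a small validation sample.
human = [True, True, False, False, True, False, True, False]
# Relevance labels from the two hypothetical LLM-backed systems.
system_a = [True, True, False, True, True, False, True, False]
system_b = [True, False, False, False, True, False, False, False]

for name, preds in [("System A", system_a), ("System B", system_b)]:
    p, r, acc = precision_recall_accuracy(preds, human)
    print(f"{name}: precision={p:.2f} recall={r:.2f} accuracy={acc:.2f}")
```

Precision penalizes a system for admitting irrelevant documents, while recall penalizes it for missing relevant ones; screening pipelines typically prioritize recall so that relevant evidence is not silently dropped before human review.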