To Protect Kids Online, Policymakers Must First Determine Who is a Kid

Scott Babwah Brennen is the head of online expression policy at the Center on Technology Policy at UNC-Chapel Hill, and Matt Perault is the Center's director.

When the U.S. House of Representatives Committee on Energy and Commerce recently held a hearing with TikTok CEO Shou Zi Chew, Congressman Buddy Carter (R-GA) asked how the app determines the ages of its users. Chew responded by describing the app's inferential system, which analyzes users' public posts to check whether their content matches the age they claim to be. Before he could finish, Rep. Carter interrupted, exclaiming, "That's creepy!"

The exchange highlighted a tension in emerging policy debates over online child safety: to protect children, you first must know who is a child. But determining who is a child online not only means ramping up surveillance on everyone; it also means introducing new security risks, equity concerns, and usability issues.

The safety of children online has become perhaps the most pressing concern in technology regulation. Federal and state legislators are considering dozens of new bills addressing children's online safety. This year, Utah, Arkansas, and Louisiana have all passed laws that require children under 18 to obtain parental consent to have a social media account, requiring platforms to verify the ages of all users. Proposed federal legislation, including the Social Media Child Protection Act, the Making Age Verification Technology Uniform, Robust, and Effective (MATURE) Act, and the Protecting Kids on Social Media Act, would all restrict children's use of social media and require platforms…