• 0 Posts
  • 30 Comments
Joined 10 months ago
Cake day: June 4th, 2025





  • That’s a thorny question. The main approaches we currently have either involve the sites in question collecting personally identifiable information, such as government-issued ID, and deciding what to serve based on the information it contains, or rely on sites voluntarily adhering to a code, such as RTA, to include an identifying header, with parents installing, configuring and maintaining the software or services to restrict access. The former method is obviously dangerous as it requires handing over your ID, while the latter is entirely voluntary, so there is little impetus to do it, and the complexity will also act as a deterrent for parents. Turning it around, and just having the computer send a flag for the age bracket, gets rid of the need to transmit personally identifiable information and makes the parents’ setup job much easier: a one-shot, tick-a-box-and-carry-on process.
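    As a rough sketch of the client side of that idea, in Python. To be clear, the header name “Sec-Age-Bracket” and the bracket labels are my own illustration, not from any published bill or spec:

```python
# Illustrative sketch only: an OS/browser attaching a parent-configured
# age-bracket flag to outgoing requests. "Sec-Age-Bracket" and the
# bracket labels are assumptions, not from any actual bill or spec.

AGE_BRACKETS = {"under 13", "13-16", "16-18", "18+"}

def build_headers(configured_bracket: str) -> dict:
    """Return request headers carrying the bracket the parent ticked once."""
    if configured_bracket not in AGE_BRACKETS:
        raise ValueError(f"unknown age bracket: {configured_bracket}")
    return {"Sec-Age-Bracket": configured_bracket}
```

    The parent sets the value once in the OS settings; every request from that user account then carries it, with nothing else about the user transmitted.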


  • Again, I’m only going by the Californian bill, but that one is pretty clear that the person setting up the account should supply either the user’s birth date or the age bracket they are in. There is nothing indicating this should be validated in any way. I’d agree that, if the machine was compromised and the user’s birth date was used, it would be possible to leak that data, but given those preconditions it would be one of the least interesting things leaked. I’d certainly prefer to just store the age bracket, and have a way for the computer admin to update it as the user grows towards their 18th birthday.




  • These age band laws basically work in the opposite way to the usual parental controls. Rather than having to install and maintain the control software and having the filtering at the client end of the connection, parents need only set a flag and filtering will occur at the source end of the connection.

    Will these laws provide perfect protection that eliminates the need for parental oversight of children’s internet access? No. Will they help stop kids accidentally stumbling into unsuitable content, reducing harm overall? Yes. As a parent, one of the things I worry about is my kids browsing sites such as YouTube. Even if they’re using it for research for school projects, I can never be certain it won’t prompt them to watch an unsuitable video. With a simple “this user is a child, don’t show them anything unsuitable” flag, I wouldn’t have to spend so much energy monitoring everything and could spend more energy talking to them about what they’re actually watching.

    One of the key parts of the Californian law is that if the client machine sends the flag, the service must treat it as authoritative, and should not use other means of checking. That is good news, as it means there is no incentive for sites to integrate more intrusive measures such as third parties scanning government-issued ID.
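    To make the “authoritative” part concrete, here’s a minimal server-side sketch. The header name “Sec-Age-Bracket” and the rating labels are my own assumptions; the point is only the shape of the logic: if the flag is present, filter on it and never fall back to ID checks, and if it’s absent, apply whatever policy the site already had:

```python
# Minimal sketch of a site treating the client's age-bracket flag as
# authoritative. "Sec-Age-Bracket" and the rating labels are illustrative.

def serve(headers: dict, content_rating: str) -> str:
    bracket = headers.get("Sec-Age-Bracket")
    if bracket is None:
        # No flag sent: the site falls back to its own existing policy.
        return "no-signal"
    # Flag present: filter on it, and do NOT request ID or re-verify.
    if content_rating == "adult" and bracket != "18+":
        return "blocked"
    return "allowed"
```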





  • No, and it wouldn’t be impossible to bypass either. I don’t think the aim is 100% perfection so much as harm reduction, and I don’t think you’ll get more than that no matter how onerous the law becomes. Most kids, most of the time, are not going to be trying to circumvent it, and it would still be up to the parents to look out for the cases where they were.

    The current proposal requires storing and transmitting a flag that can take one of four values (under 13, 13-16, 16-18, 18+), and prohibits sites using other means of age verification. It’ll work adequately to stop kids accidentally seeing pornography, and hopefully things like Andrew Tate, giving the parents some space to do their part in helping their kids learn how to understand what they might be exposed to.
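    If the admin stores a birth date rather than the bracket itself, deriving the four values is trivial, and the flag stays correct as the child ages without anyone having to update it. A sketch, with the bracket labels as my own shorthand:

```python
from datetime import date

# Sketch: derive one of the four proposed brackets from a stored birth
# date, so the flag updates automatically as the user ages.
# The bracket labels are my own shorthand for the proposal's four values.

def age_bracket(birth_date: date, today: date) -> str:
    # Subtract one if this year's birthday hasn't happened yet.
    age = today.year - birth_date.year - (
        (today.month, today.day) < (birth_date.month, birth_date.day)
    )
    if age < 13:
        return "under 13"
    if age < 16:
        return "13-16"
    if age < 18:
        return "16-18"
    return "18+"
```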



  • notabot@piefed.social to Privacy@lemmy.world · signal w
    1 month ago

    They provide a suite of services, most of which can be provided in a private manner. Blowing a hole in that by providing email seems counterproductive. As I said, they could point you at a separate email service. Even if they provided that service themselves, it could ensure an adequate break between the private services and the non-private ones.

    As a service, is it more privacy conscious than, say, Gmail? Yes, but you’re still ultimately just asking the postman not to read your postcards.




  • To bring charges under those sorts of laws there’s going to have to be some external evidence of harm. Either the kid is acting in a way that causes an agency sufficient concern that they investigate the family, or the government mandates much stricter monitoring of exactly who is doing what online. The former case is unlikely, but should probably be pursued vigorously when it does happen, and the latter case is something I imagine we all very much want to avoid.

    By providing a simple, privacy-conscious way of taking some of the burden of vigilance off of the parents (the child is less likely to stumble on inappropriate material), it makes it easier for them to provide actually beneficial guidance, and reduces the risk of law enforcement getting involved to investigate minor transgressions.



  • Whilst parents absolutely should be guiding and helping the kids determine where they go online, and what they look at, I’m trying to envision where, or how, parents would be liable for them looking at something inappropriately “adult”, barring actual child neglect.

    A system like this would actually help parents be more confident that little Johnny wasn’t going to stumble across something inappropriate, because, yes, in a way this is about control. It’s about controlling what kids are exposed to before they are intellectually ready for it. Yes, there are potentially serious issues around that, such as limiting access to LGBTQ+ or other helpful material for young adults, but that should be a discussion about what information is needed at each age, rather than how to indicate that age.

    Age gating on the open internet will happen; I don’t see any way that it won’t. What matters is how it is implemented. We know that submitting government-issued ID to every site with potentially contentious content is a terrible idea; this neatly sidesteps the need for that, and actually forbids it.