What I find puzzling about these proposals is that it SEEMS like they could be designed to achieve 90% of the stated goals with almost 0% of the loss of privacy.
The idea would be that devices could "opt in" to safety rather than opt out. Allow parents to purchase a locked-down device that always includes a "kids" flag whenever it requests online information, and simply require online services to not provide kid-unfriendly information if that flag is included.
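As a rough sketch of what that could look like on the wire, assume a hypothetical request header and a cooperating service; none of the names below are from a real or proposed standard:

    # Sketch of the "opt-in kids flag" idea under assumed names: a locked-down
    # device adds a header to every request, and a compliant service withholds
    # kid-unfriendly content whenever the flag is present. The service never
    # learns who the user is, only that the device asked for a child-safe response.
    from dataclasses import dataclass

    KIDS_FLAG_HEADER = "X-Child-Device"  # hypothetical header name

    @dataclass
    class Page:
        body: str
        kid_friendly: bool

    def serve(page: Page, request_headers: dict) -> str:
        if request_headers.get(KIDS_FLAG_HEADER) == "1" and not page.kid_friendly:
            return "Content withheld on child-flagged devices."
        return page.body

    # The same page, requested from a flagged and an unflagged device.
    article = Page(body="Full article text...", kid_friendly=False)
    print(serve(article, {KIDS_FLAG_HEADER: "1"}))  # withheld
    print(serve(article, {}))                       # served normally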
I know a lot of people believe that this is all just a secret ploy to destroy privacy. Personally, I don't think so. I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
jacquesm 1 days ago [-]
You are mistaking cause for effect. The loss of privacy is the goal, not a side effect, the rest is just a fig leaf.
mindslight 18 hours ago [-]
I generally try to think of things like this in terms of the natural incentives of systems, politicians, and well-meaning voters smoking hopeium. But now with the revelations of how insidious the Epstein class is, I have to wonder if the reason all these digital lockdowns are being shamelessly pushed with a simultaneous urgency is really just because of a giant fucking conspiracy. The common wisdom has always been that conspiracies naturally fall apart as they grow, succumbing to an increasing possibility of a defector. But I think that calculus might change when the members have all got mortal crimes hanging over their heads.
Never mind thinking about how legitimacy was laundered through scientific institutions, and extrapolating to wondering how much that same dynamic applies to "save the children" lobbying NGOs and whatnot.
c22 1 days ago [-]
Better yet, require online services to send a 'not for kids' flag along with any restricted content then let families configure their devices however they want.
Even better, make the flags granular: <recommended age>, <content flag>, <source>, <type>
13+, profane language, user, text
17+, violence, self, video
18+, unmoderated content, user, text
13+, drug themes, self, audio
and so on...
ASACP/RTA https://en.wikipedia.org/wiki/Association_of_Sites_Advocatin...
PICS https://en.wikipedia.org/wiki/Platform_for_Internet_Content_...
POWDER https://en.wikipedia.org/wiki/Protocol_for_Web_Description_R...
Tools available for decades.
But as has been said multiple times, the kids are the distraction; the real targets are privacy and freedom.
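One way to read the granular tuple suggested above as structured data. This is only an illustrative sketch; the field names and values are made up here and don't correspond to any existing rating scheme:

    # Illustrative encoding of the <recommended age>, <content flag>,
    # <source>, <type> tuple suggested above; all names are hypothetical.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class ContentLabel:
        min_age: int    # e.g. 0, 13, 17, 18
        flag: str       # e.g. "profane language", "violence", "kid friendly"
        source: str     # "self" (publisher-asserted) or "user" (user-generated)
        kind: str       # "text", "video", "audio", "interactive content"

    def allowed(label: ContentLabel, viewer_age: int, blocked_flags: set) -> bool:
        # A family device decides locally: an age gate plus a per-family flag list
        # (e.g. block "drug themes" but not profanity, or vice versa).
        return viewer_age >= label.min_age and label.flag not in blocked_flags

    label = ContentLabel(13, "profane language", "user", "text")
    print(allowed(label, viewer_age=14, blocked_flags={"drug themes"}))  # True
    print(allowed(label, viewer_age=12, blocked_flags=set()))            # False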
mjevans 1 days ago [-]
No - kid-friendly should be something sites attest to and claim they ARE. That becomes an FTC-enforceable market claim (or insert other enforcement mechanism here).
Foreign sites, places that aren't trying to publish things for children? The default state should be unrated content for consumers (adults) prepared to see the content they asked for.
c22 1 days ago [-]
Okay...
0+, kid friendly, self, interactive content
struant 1 days ago [-]
Just say the whole internet is not for kids without adult supervision and leave it at that.
It doesn't even matter if you can get something that technically works. Half the "age appropriate" content targeted at children is horrifying brainrot. Hardcore pornography would be less damaging to them.
Just supervise your damn children people.
glenpierce 1 days ago [-]
This gets complicated when you need to start giving your kids some degree of independence. I would also argue this could be implemented in a more accessibility-oriented way.
Also, not all 13-year-olds are at the same level of maturity or ready for the same content. I find it very annoying that I can't just set limits like: no drug references, but idgaf about my kid hearing swear words.
On other machines:
I do not want certain content to ever be displayed on my work machine. I’d like to have the ability to set that.
Someone with a specific background may not want to see things like children in danger. This could even be applied to their Netflix algorithm. The website Does the Dog Die does a good job of categorizing this kind of content.
nitwit005 22 hours ago [-]
But, in essence, they want to strip parents of the ability to give their kids the responsibility you describe. No letting your kids use social media, look at adult content, or whatever else. It's simply banned.
cowboylowrez 1 days ago [-]
yep, 18+, show ID at the time of purchasing access. Soooo easy, and zero technical issues.
duskwuff 1 days ago [-]
Other advantages include:
- It's much easier for web sites to implement, potentially even on a page-by-page basis (e.g. using <meta> tags).
- It doesn't disclose whether the user is underage to service providers.
- As mentioned, it allows user agents to filter content "on their own terms" without the server's involvement, e.g. by voluntarily displaying a content warning and allowing the user to click through it.
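A rough sketch of the user-agent side of that last point, assuming the page carries a rating in a <meta> tag; the tag name and values here are illustrative, loosely modeled on the old RTA-style labels, not any specific standard:

    # Client-side filtering sketch: the browser reads a page-level rating tag
    # and applies its own local policy (e.g. a click-through warning), without
    # the server learning anything about the user. Tag name/values are made up.
    from html.parser import HTMLParser

    class RatingSniffer(HTMLParser):
        def __init__(self) -> None:
            super().__init__()
            self.rating = None

        def handle_starttag(self, tag, attrs):
            a = dict(attrs)
            if tag == "meta" and (a.get("name") or "").lower() == "rating":
                self.rating = a.get("content")

    def should_warn(html: str) -> bool:
        sniffer = RatingSniffer()
        sniffer.feed(html)
        # Local policy: interpose a warning on anything self-labeled "adult".
        return sniffer.rating is not None and "adult" in sniffer.rating.lower()

    page = '<html><head><meta name="rating" content="adult"></head><body>...</body></html>'
    print(should_warn(page))  # True -> the browser shows its own interstitial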
wiml 24 hours ago [-]
This exact method was implemented back around the turn of the century by RSAC/ICRA. I think only MSIE ever looked at those tags. But it seems like they met the stated goal of today's age-verification proposals.
That's why I have a hard time crediting the theory that today's proposals are just harmlessly clueless and well intentioned (as dynm suggests). There are many possible ways to make a child-safe internet and it's been a concern for a long time. But, just in the last year there are simultaneous pushes in many regions to enact one specific technique which just happens to pipe a ton of money to a few shady companies, eliminate general purpose computing, be tailor made for social control and political oppression, and on top of that, it isn't even any better at keeping porn away from kids! I think Hanlon's razor has to give way to Occam's here; malice is the simpler explanation.
user3939382 1 days ago [-]
Internet Explorer had content ratings back in the day
AnthonyMouse 1 days ago [-]
The "problem" back then was that nothing required sites to provide a rating and most of them didn't. Then you didn't have much of a content rating system, instead you effectively had a choice for what to do with "unrated" sites where if you allow them you allow essentially the whole internet and if you block them you might as well save yourself some money by calling up your ISP to cancel.
This could pretty easily be solved by just giving sites some incentive to actually provide a rating.
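The dilemma reads roughly like this as a toy policy function (the policy names are mine, not from any proposal): with no incentive to self-rate, nearly everything lands in the unrated branch, and both policies degenerate.

    # The "unrated site" dilemma as a toy decision function. If most sites are
    # unrated, "allow_unrated" approximates no filter at all, and "block_unrated"
    # approximates no internet at all; hence the need for a rating incentive.
    def filter_decision(rating, policy: str) -> bool:
        if rating is not None:
            return rating == "kid friendly"
        # Unrated content: the whole scheme hinges on this branch.
        return policy == "allow_unrated"

    print(filter_decision(None, "allow_unrated"))  # True  -> effectively everything
    print(filter_decision(None, "block_unrated"))  # False -> effectively nothing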
asdff 1 days ago [-]
As others have said, the goal is the surveillance. But this notion goes further than that. So many ills people face in life can be solved by just not doing something. Addicted to something? Just stop. Fat? Stop eating. Getting depressed about social media? Stop browsing.
Some people have enough self control to do that and quit cold turkey. Other people don't even consciously realize what they are doing as they perform that maladaptive action without any thought at all, akin to scratching a mosquito bite.
If someone could figure out why some people are more self-aware than others, a whole host of the world's problems would be better understood.
r2_pilot 16 hours ago [-]
The Purpose Of a System Is What It Does. Whether it is stated (or even designed) to protect kids, if it does anything more or different from that goal, it will perform those actions regardless of what is said about what the System should be doing.
KoolKat23 1 days ago [-]
I have not once seen a proposal that actually contains a zero-knowledge proof.
This isn't something exotic or difficult.
It is clear to me there are ulterior motives, and perhaps a few well-meaning folks have been co-opted.
Apple and Google age verification are both zero knowledge based.
digiown 1 days ago [-]
A ZKP will work as a base, but the proof mechanism will have to be combined with anti-user measures like device attestation to prevent things like me offering an API to continually sign requests for strangers. You can rate-limit it, or you can add an identifier, both of which makes it not zero knowledge.
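To make the sharing problem concrete, here is a toy bearer-token version (not a real ZKP and not any vendor's actual scheme; it uses the third-party pyca/cryptography package): the issuer signs an "over 18" assertion, and because nothing binds the token to a person or device, it can be relayed to strangers exactly as described, while the obvious fixes (rate limits, identifiers) are what erode the zero-knowledge property.

    # Toy age-assertion token (NOT a ZKP, NOT any real vendor scheme): an issuer
    # signs "over_18" and a site verifies the signature. Because the token is a
    # bare bearer credential with no user or device binding, anyone holding it
    # can forward it; the obvious fixes are the anti-user measures discussed above.
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    issuer_key = Ed25519PrivateKey.generate()   # the age-verification provider
    issuer_pub = issuer_key.public_key()        # known to relying websites

    assertion = b"over_18"                      # no user id, no device binding
    token = issuer_key.sign(assertion)          # what the "verified" user holds

    def site_accepts(sig: bytes) -> bool:
        try:
            issuer_pub.verify(sig, assertion)
            return True
        except InvalidSignature:
            return False

    print(site_accepts(token))    # True for the original holder...
    relayed = token               # ...and equally true for anyone it is shared with
    print(site_accepts(relayed))  # True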
Parent's proposal is better in that it would only take away general purpose computing from children rather than from everyone. A sympathetic parent can also allow it anyway, just like how a parent can legally provide a teen with alcohol in most places. As a society we generally consider that parents have a right to decide which things are appropriate for their children.
KoolKat23 1 days ago [-]
Honestly, I think no measure is, or should have to be, perfect. This is completely disproportionate. If there's a will, there's a way.
mindslight 21 hours ago [-]
> A ZKP will work as a base, but the proof mechanism will have to be combined with anti-user measures like device attestation to prevent things like me offering an API to continually sign requests for strangers
Spot on! The "technical" proposal from Google of a ZKP system is best seen as technically-disingenuous marketing meant to encourage governments to mandate the use of Google's locked down devices and user-hostile ecosystem.
The only sane way to implement this is to confine the locked-down computing blast radius to the specific devices that need child protection, rather than forcing the restrictions onto everyone.
digiown 19 hours ago [-]
I'm not sure what I feel about depriving teens of general purpose computing devices, either, which is the logical consequence of both the pseudo-ZKP scheme and parent's "underage signal". I believe most of us here learned programming through being able to run arbitrary programs, and that would never have happened if we only had access to locked down devices. And that habit of viewing computers as appliances controlled by other people isn't going to go away on their 18th birthday either.
Overall I think that while there is a reasonable argument in favor of age verification for some types of sites, the harms of implementing it would so drastically outweigh any benefits that it should not be done.
mindslight 19 hours ago [-]
Sure, I'm sympathetic to that idea. The point is that it leaves such a decision up to parents, putting non-locked-down computers in the same position as any other potentially-harmful thing you might want to keep away from your kids.
Keeping parents in control also lets them make decisions contrary to what the corporate surveillance industry can legally get away with. For example we can easily imagine an equivalent of Facebook jumping through whatever hoops it needs to do to target minors, perhaps outright banned various places but not generally in the US. If age restrictions are going to be the responsibility of websites, then parents will still have been given no additional tools to prevent their kids from becoming addicted to crap like that.
Shooting from the hip about the situation you describe, I'd be tempted to give a kid a locked-down phone with heavy filtering (or perhaps without even a web browser so they can't use Facebook and its ilk), and then an unrestricted desktop computer which carries more "social accountability".
digiown 18 hours ago [-]
I think banning facebook/instagram/etc is one of the special cases where it makes more sense for it to be enforced by the site, because people use these mainly out of peer pressure and network effects. If a majority is kept off, the rest have little use for it regardless of their personal wishes. Heck, I'd reckon most kids don't actually want to use them all that much. Regardless of the technical details, giving parents this control will also cause a lot of resentment if most parents don't go along.
As opposed to censoring internet content in general, which does not work because there will always be sites not under your jurisdiction and things like VPNs. I don't support any such censorship measures as a result.
mindslight 18 hours ago [-]
But why not both? I'm coming from a USian perspective here where I don't see much possibility of actual widespread bans of these types of products, rather just a retrenching to what can be supported by regulatory capture.
Also, we're getting the locked down computing devices anyway - mobile phones as they are right now are a sufficient root of trust for parental purposes. So it seems pointless to avoid using that capability (which corpos are happy to continue embracing regardless) but instead put an additional system of control front and center.
digiown 9 hours ago [-]
> don't see much possibility of actual widespread bans
Why do you think there would be regulation to honor the "underage signal", but not explicitly ban social media sites for "unverified" users?
> seems pointless to avoid using that capability
It's not pointless, because relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18. Everyone will lose general purpose computing, along with adblocking and other mitigations that protect you from various harms. It also leads to widespread surveillance being possible as parents will want to be able to "audit" their teen's usage.
> put an additional system of control front and center
The problem should be controlled at the source, not the destination, if feasible.
mindslight 5 hours ago [-]
> Why do you think there would be regulation to honor the "underage signal"
Our ancestor comment still has the direction backwards. This is the specific dynamic that makes sense to me: https://news.ycombinator.com/item?id=47027738
This means any legislation should be aimed at directing device manufacturers to implement software that can respect content assertions sent by websites.
> relying on it will soon make these locked down devices mandatory for everyone under 18, and they will keep using it past 18
Okay, but in 2026 we're basically at this point. Show me a mobile phone that doesn't have a bootloader locked down with "secure boot." For this particular threat that we had worried about for a long time, we've already lost. Not in the total-sweeping way that analysis from first principles leads you to, but in the day to day practical way. It's everywhere.
The next control we're staring down is remote attestation, which is already being implemented for niches like banking. The scaffolding is there for it to be implemented on every website - "verifying your device's security" - I get that basically everywhere these days. As soon as 80% of browsers can be assumed to have remote attestation capabilities, we can be sure sites will start demanding these signals and slowly clamping down on libre browsers (as has been done with browser/IP fingerprinting over the past decade).
Any of these talks of getting the server involved intrinsically rely on shoring up "device security" through remote attestation. That is exactly what can end ad-blocking and every other client-represents-the-user freedom.
> The problem should be controlled at the source, not the destination, if feasible.
You've already acknowledged VPNs and foreign jurisdictions, which means "at the source" implies a national firewall, right?
Unless your goal is to undermine any solution on this topic? I'm sympathetic to this, I just don't see that being realistic in today's environment!
digiown 21 minutes ago [-]
I agree with controls on addictive/exploitative platforms like Facebook or Instagram. These can be feasibly controlled at the source.
In principle I agree with keeping some content away from children, but I don't think any of the implementations will work without causing worse problems, so I disagree with implementing those.
> in the day to day practical way
There's a world of difference between something being practically required and it being illegal to use anything else, even if initially only for a small part of the population. You still have a choice to avoid those now. Moreover, there is a fairly large subculture of gamers etc. opposed to these movements, and open computing platforms will take a long time to fizzle out without intervention.
If you mandate locked down devices for kids, it will very quickly become locked down devices for everyone except for "licensed developers", because no one gets a bunch of new computers upon becoming an adult, and a new campaign from big tech will try to associate open computers with criminals.
rolph 1 days ago [-]
It may be simple to sleuth out kid status over time, but I would be very uncomfortable with a tag that verifies kid status instantly, with no challenge, as it would provide a targeting vector and defeat the safety goal.
9x39 21 hours ago [-]
> I think they genuinely want to protect kids, and the privacy destruction is driven by a combination of not caring and not understanding.
Advancing a case for a precedent-creating decision is a well-known tactic for creating the environment of success you want for a separate goal.
It's possible you can find a genuine belief in the people who advance the cause. Charitably, they're perhaps naive or coincidentally aligned, and uncharitably sometimes useful idiots who are brought in-line directly or indirectly with various powerful donors' causes.
OutOfHere 1 days ago [-]
It has nothing much to do with kids and everything to do with monitoring and suppressing adults.
ImHereToVote 1 days ago [-]
You are assuming good faith. This is why you are puzzled.
sublinear 1 days ago [-]
I completely agree. The problem is the lack of compromise on both sides of the issue.
I wouldn't say it's a lack of understanding, but that any compromise is seen as weakness by other members of their party. That needs to end.
dwedge 1 days ago [-]
It's tiring how legislation like this is becoming predictable and feels inevitable. This article even mentions the verification needing to be embedded in the operating system itself, spelling the death of open computing
sixtyj 1 days ago [-]
Some people have been saying for so long that you should need a license to use the internet, and now that we have it, it's a little different than we intended :(
rockskon 1 days ago [-]
We don't have it yet and there is still time to head this off. Not much time - but still time! Talk to your federal lawmakers and state AG's office!
mothballed 1 days ago [-]
I'd argue it's more like KYC for the internet. Something HN users have brutally and ruthlessly defended for banking every time I argue it's a 4A violation (in fact, it's one of the most fiercely defended things anytime I bring it up).
Give it 20+ years and you'll be called a kook for thinking otherwise.
kbelder 1 days ago [-]
KYC is one of the greatest government overreaches in the last several decades. I'll back you up on that.
tired-turtle 1 days ago [-]
Can you explain the connection between KYC in banking and the Fourth Amendment? How does KYC constitute a government search/seizure?
mothballed 1 days ago [-]
The government requires the bank to search your identity documents to open an account, even when there is no individualized suspicion that you've broken the law to justify your papers being searched, as part of the KYC regulations passed post-9/11. Technically the statute doesn't say they must actually search your documents; rather, it's enforced through a byzantine series of federal regulatory frameworks that basically require them to do something approximating "industry standard" KYC compliance, which ends up meaning verifying the customer by inspecting their identity documents and perhaps others. This is why, e.g., when I was homeless even my passport couldn't open an account anywhere -- they wanted my passport plus some document showing an address to satisfy KYC requirements.
Maybe I will have more energy for it tomorrow, I've been through this probably a couple dozen times on HN and I don't have the energy to go through the whole rigmarole today because usually it results in 2-3 days of someone fiercely disagreeing down some long chain and in the end I provide all the evidence and by that point no one is paying attention and it just goes into this pyrrhic victory where I get drained dry just for no one to give a shit. I should probably consolidate it into a blog post or something.
DennisP 1 days ago [-]
Fwiw I lean to your side and would be interested in reading what you have to say about it.
sneak 1 days ago [-]
I’d happily host that blog. Contact info is in my profile.
brandensilva 1 days ago [-]
It isn't a coincidence we have two Palantir articles on the front page along with this. It's in the cards, and Americans seem to be ignoring it and are more than happy to accept the dystopian future this leads to.
It's incredibly sad as an optimistic person trying to find any silver lining here.
latency-guy2 1 days ago [-]
Bad actors like --
William Tong, Anne E. Lopez, Dave Yost, Jonathan Skrmetti, Gwen Tauiliili-Langkilde, Kris Mayes, Tim Griffin, Rob Bonta, Phil Weiser, Kathleen Jennings, Brian Schwalb, Christopher M. Carr, Kwame Raoul, Todd Rokita, Kris Kobach, Russell Coleman, Liz Murrill, Aaron M. Frey, Anthony G. Brown, Andrea Joy Campbell, Dana Nessel, Keith Ellison, Lynn Fitch, Catherine L. Hanaway, Aaron D. Ford, John M. Formella, Jennifer Davenport, Raúl Torrez, Letitia James, Drew H. Wrigley, Gentner Drummond, Dan Rayfield, Dave Sunday, Peter F. Neronha, Alan Wilson, Marty Jackley, Gordon C. Rhea, Derek Brown, Charity Clark, and Keith Kautz
--
Always operate under the assumption that the people serve the state, not the other way around. There are some names on that list that are outwardly infamous for this behavior, and none are surprising considering what type of person looks to become an AG. Maybe fighting fire with fire is appropriate: no such thing as a private life for any of these people, all their communications open to the public 100% of the time, with precisely zero instances where that is not the case. It's only fair, considering that is their goal for everyone not of the state.
plagiarist 1 days ago [-]
Poettering will help get us remote attestation on Linux so we won't have to switch to Windows when it dies.
halJordan 1 days ago [-]
It's already the law in California. I don't remember any outrage here when it was passed there.
rockskon 1 days ago [-]
No it isn't. There is no law in California that mandates showing an ID to see ambiguously-defined adult content.
csense 19 hours ago [-]
Anonymous or pseudonymous publishing is an essential element of Constitutionally protected freedom of speech [1]. If the government knows who's posting what, it's a lot easier for them to harass, intimidate, punish, or otherwise suppress people who post uncomfortable or politically inconvenient things.
Recent New York Times headline: "Homeland Security Wants Social Media Sites to Expose Anti-ICE Accounts". I'm sure the administration would be very happy if there was a database of government issued photo ID's for every account on Facebook. And if the government gets the ID's of those accounts, I'm quite sure nothing good will come of it for either the individuals involved, or the ability of the people to understand whether the government department in question is going about its duties in a way that respects the law and the Constitution.
[1] My username refers to an anonymously published pamphlet that played a key role in US history.
rockskon 1 days ago [-]
If this really bothers you, talk to your state AG's office and your federal lawmaker
The worst that can happen is you don't change things.
The best? Maybe you'll find a receptive ear. Your lawmaker stops co-sponsoring KOSA. Your state AG stops pushing for it.
rolph 1 days ago [-]
I think the worst that can happen is you could be put on a list of dissenters.
rockskon 1 days ago [-]
A list of people who dissent to KOSA is not a bad list to be on as a constituent.
You need to make it easier for your lawmakers to be on that list too. Show them there's people who won't rake them over the coals for bowing out.
Trasmatta 1 days ago [-]
Don't comply in advance
hammock 1 days ago [-]
Isn’t that the point? Our lawmakers keep track of how many constituents approve or disapprove of pending legislation so they can act accordingly
rolph 1 days ago [-]
"So they can act accordingly" is the variable. A simple headcount is one thing, but when it creeps toward a census, it becomes prone to misuse.
Putting the conspiracy hat on: the exploit is to direct as many installed AGs as possible to push for such bills, with no big letdown if they don't pass. Why? Because the demographics on dissension are valuable, and they get passed to a hostile federal government.
rockskon 1 days ago [-]
Being active about KOSA won't get you put on a "list of dissenters". This is an issue being pushed by the States and your federal lawmakers, not the executive branch.
mothballed 1 days ago [-]
Just as a note, federal officials are retaliating against those providing respectful comment on policy. []
So the worst that can happen could be worse than nothing.
[] https://www.aclu.org/press-releases/department-of-homeland-s...
Did I say to provide feedback to the Trump administration - an entity that has not made an official stance on this matter?
No.
I said your state Attorney General's office and your elected federal Senators and members of the House.
So I reiterate - the worst that can happen is you don't change where things have been going.
The best? Your elected officials bow out of this.
shevy-java 1 days ago [-]
They want to sniff after everyone. The "omg terrorists" or "omg children" is the lie.
fluidcruft 1 days ago [-]
Would this mean spammers and advertisers cannot send me email and ads if I refuse to allow my mailbox to authenticate my age to them?
potsandpans 20 hours ago [-]
No
fluidcruft 19 hours ago [-]
Drats
Pikamander2 1 days ago [-]
> 40 State Attorneys General Want To Tie Online Access to ID
Here's the actual title of the article, which is much more concerning than the HN title.
pdonis 1 days ago [-]
Instead of lobbying for taking away everyone's privacy, why isn't the government going after those they say are the actual culprits? From the article:
"The attorneys general argue that social media companies deliberately design products that draw in underage users and monetize their personal data through targeted advertising. They contend that companies have not adequately disclosed addictive features or mental health risks and point to evidence suggesting firms are aware of adverse consequences for minors."
Okay, so why aren't they going after the social media companies?
Ok, so then why do they need all this other Federal legislation?
cadamsdotcom 1 days ago [-]
They are in some other democracies. We will see how it all plays out.
_heimdall 1 days ago [-]
And why wouldn't a state attorney want this? I expect cops would also like blanket warrants usable whenever they deem necessary.
That doesn't mean they should get what they might want, or that it's Constitutional.
zi2zi-jit 1 days ago [-]
In the 90s they told kids: don't give strangers your real name online. In 2026: the government demands your ID to access the internet. Progress.
asdff 1 days ago [-]
To be fair, some things loosened while others tightened. In the 90s you were much more flippant with your social security number.
kjellsbells 19 hours ago [-]
I guess HN leans strongly towards the pro-anonymity viewpoint, and sure, the "think of the children!" claim has worn very thin.
But: what methods could reduce the harm that anonymous internet discourse so often produces? People send death threats, threats of sexual assault, harassment of all kinds, unsolicited pics of their genitals, swatting attacks, just absolute nonsense all day every day, hiding behind the veil of anonymity and the asymmetry between the cost of sending such trash and the cost of tracking it down and doing something about it.
There is, quite unusually in politics, a bit of a bipartisan consensus that this is a real, real problem, and that steps like this, or repealing Section 230, would help. Would it, or would it not, and if not, what alternatives are there?
ottah 1 days ago [-]
Next they'll try to make Tor and I2p illegal.
jamboca 1 days ago [-]
At a certain point the world just becomes less appealing to live in. Day by day death becomes more appealing; what do you have to lose when life just means living in a pigpen?
Nobody who would be any good at the job wants to waste their life campaigning for it.
shevy-java 1 days ago [-]
The strange thing is that all leaders end up doing the same. I wonder if direct democracy would be possible. Right now it's like we're living in The Truman Show.
moneycantbuy 1 days ago [-]
somehow need to remove money from politics.
also need a more informed citizenry able to see through propaganda.
SilverElfin 1 days ago [-]
If there are 40 state Attorneys General signing this letter, it must include a number of Democrat-led states as well, correct?
wmf 23 hours ago [-]
Lack of privacy is pretty bipartisan.
rootsudo 1 days ago [-]
RIP Internet. I don't agree with any of this, but I don't see the majority of people protesting it. If anything, they're promoting it because: think of the children.
"Many social media platforms deliberately
target minors, fueling a nationwide youth mental health crisis."
". These
platforms are intentionally designed to be addictive, particularly for
underaged users, and generate substantial profits by monetizing
minors’ personal data through targeted advertising. These companies
fail to adequately disclose the addictive nature of their products or
the well-documented harms associated with excessive social media
use. Increasing evidence demonstrates that these companies are
aware of the adverse mental health consequences imposed on underage users, yet they
have chosen to persist in these practices. Accordingly, many of our Offices have initiated
investigations and filed lawsuits against Meta and TikTok for their role in harming minors. "
Yet the companies aren't being regulated, nor the algorithms, the marketing, or even their existence. It's the users who are the problem, so everyone has to submit their identity to use the internet if this passes.
MilnerRoute 19 hours ago [-]
"We rate Reclaim The Net as Right-Biased based on story selection and editorial positions that align with a conservative perspective. We also rate them Mixed for factual reporting due to poor sourcing, lack of transparency, and one-sided biased reporting."
I love the way they pretend they are doing this of their own volition when Australia, Canada, the EU and UK are all doing the same thing.
kkfx 1 days ago [-]
The Sinicization of the West continues, yet people still aren't pushing back; there are no indefinite general strikes, nor is there anyone foaming at the mouth demanding the arrest of the coup plotters in power...
I suggest (re)reading:
- https://www.dni.gov/files/documents/Global%20Trends_Mapping%...
- https://www.dni.gov/files/documents/Newsroom/Reports%20and%2...
These predictions have largely come true.
If you can't beat them, join them. The rest of the world is getting the same thing as China, but less transparent and more hypocritical.
sneak 1 days ago [-]
All forms of citizen power, such as widespread publishing, will eventually be tied to strong ID requirements.
You can’t illegally retaliate against citizens if you don’t know where they sleep at night.
ls612 1 days ago [-]
I’m going to go against the pessimism here and say that this is the US, not Europe or the UK, and the First Amendment has teeth. There’s ample Supreme Court precedent that anonymous speech is a protected right (Talley v. California, McIntyre v. Ohio, etc.), so I’d expect efforts like this to flounder in the courts if push came to shove.
13415 1 days ago [-]
What the US Supreme Court decides is much less relevant than it used to be, because the US executive can and will simply ignore the decision. If anyone in the US administration breaks the law, they can be pardoned by the US president; if the president breaks the law, he's immune from prosecution based on a previous decision of the Supreme Court; and no court can enforce anything if the executive doesn't comply with the court order.
direwolf20 1 days ago [-]
I have mixed feelings about this website, reclaimthenet. In one breath it supports net neutrality and opposes ID laws, and in the next (not in this particular article) mentions the Twitter files and says the UK is a dictatorship for arresting Lucy Connolly.
2OEH8eoCRo0 1 days ago [-]
The internet as we know it will be the first casualty of a great power hot war. We are living on borrowed time.
gjsman-1000 1 days ago [-]
I blame HN and Silicon Valley in general for consistently treating keeping children online safe as a parental responsibility only, rather than a government-parent team effort like every other regulation.
This loophole, “think of the children,” would not exist if SV had gotten over itself instead of calling every solution unworkable while insisting that whatever tools parents do receive, no matter how sloppy or confusing, are workable.
dwedge 1 days ago [-]
Yeah exactly, had it not been for Facebook and the rest of social media not taking children online seriously, The Simpsons wouldn't have had to mock the cultural meme of blaming everything on saving children back in 1996 https://www.youtube.com/watch?v=RybNI0KB1bg
pavel_lishin 1 days ago [-]
Aren't most of these problems directly caused by the government claiming to do these things to protect children?
stopbulying 1 days ago [-]
What is a good resource for reviewing the advantages and disadvantages of mandatory Real ID for Internet access?
Aren't there sound reasons to support anonymous whistleblowing?
Would there be critical feedback without pseudo-anonymity on the internet?
But you folks just have to dom all the haters.
What is their favorite thing: stuffed animal brand, candy, musical artist?
But then wouldn't undercover ops be obvious?
Is this similar to the "ban all crypto" movements that periodically forget everything we've learned about infosec and protecting folks?
Do protectees deserve privacy for their safety?
In the 1990s, they told us kids not to use our real names or addresses on the internet.
stopbulying 9 hours ago [-]
Why is this downvoted?