I don't need to conduct 1000 transactions per day. I don't forsee a world in which it will be some sort of fatal inconvenience to need to approve all purchases. I certainly don't plan on ever just handing over my credit card to an LLM, due to its fundamental architectural issues with injection, and I still don't anticipate handing it over to any future AI architecture anytime soon because I struggle to imagine what benefits could possibly be worth the risk of taking down such a basic, cheap barrier.
All that stuff about support, though, inevitable.
ToucanLoucan 1 days ago [-]
Agreed. My only real complaint with this article is it frames needing to argue with a machine as though this is a new, freshly annoying thing. I already do this constantly.
Every time I call the Costco pharmacy, I just hit 0 immediately because: Phone. Trees. Suck. They have always sucked, it's just an awful, grindingly slow way to accomplish ANYTHING, and it's so, so much easier to, when I need help, get a person on the line who can figure out what's gone wrong and sort it.
The only people benefiting from cutting that down are the scum class (combo of shareholders and executives) and who's shocked, really. Everything is being ruined nearly at all times to benefit the scum class.
gdulli 1 days ago [-]
At least phone trees are deterministic and there's still (usually) an option to get to a person for matters that aren't covered by the multiple choice options. Talking to AI is a much worse experience and the hope of the industry is that there won't need to be a human as a fallback anymore because (they believe) the AI is intelligent enough to handle anything.
Barbing 16 hours ago [-]
The very very very lovely executives at Intuit (thank you for your contribution to society boize) have a good plan for calling their TurboTax help line: you don't spell your name to the robot, you don't talk to a human.
(unless saying "no" / "agent" etc. the fifteen time would've been the trick! Sure, my name can be "O K"...
(I would def love this system if I worked there though, just surprising it didn't have an offramp along the way... maybe they did but everyone used it)
SpicyLemonZest 1 days ago [-]
I'm surprised to find so many people who consider human-based customer support a good experience. I wasted an hour on the phone last month with a series of polite support agents who I'm sure were wonderful people in their personal lives. They kept saying they'd like to try one more thing, making me wait 5 minutes (just short enough that I can't get anything done in the interim!), and then asking for one more pointless permutation of the workflow that did not work because their website was not showing me a button the support scripts said should be there. Talking to an LLM would have let me realize a lot faster that we weren't getting anywhere.
wincy 19 hours ago [-]
This happened to me when I tried to buy Oakley’s, it was because I’d changed my router to an ad blocking DNS which made their support session lookups fail, so they couldn’t help me. Transactions failing, all because of their site being too tightly integrated into tracking and ad platforms. I ended up going with Zenni and got similar glasses for 1/5 the price.
fyredge 19 hours ago [-]
> because their website was not showing me a button the support scripts said should be there.
At that point, it's effectively a phone tree executed by a human. Colloquially, human-based support means getting a hold of someone who knows how to solve problems, and worst case, knowing who to contact to solve the problem. That means employees who know their worth which unfortunately, businesses do not want to pay.
JohnMakin 21 hours ago [-]
there are many human customer support systems where the goal is to frustrate you into saying something to make them hang up, or make you give up.
good, human customer service is a big margin my current company eats our competitors alive on
rtgfhyuj 1 days ago [-]
you're part of the scum class btw (we all hold shares)
Leomuck 1 days ago [-]
So basically more ways of trying to make people buy things, do things, think things than before? I feel like our whole world more and more circulates around manipulation and the absence of truth and discourse.
Then again, I do think LLMs are an incredible technological achievement. The issue is not so much what they do or that they exist, but how they are utilized. Right now, they are utilized to further the class divide between rich and poor.
Who are we to trust in the future? Not big companies, not the state, not LLMs. Time to organize around groups and collectives that we know we can trust and that we know have our wellbeing in mind.
groundzeros2015 1 days ago [-]
> The issue is not so much what they do or that they exist, but how they are utilized
This is exactly how we got here though. Technology is not passive. It changes incentives, procedures, ideas and shapes the world. If we don't structurally limit what and how it's used, then we are not in control, no matter what are choices personally are.
api 1 days ago [-]
A major problem is that if we structurally limit what technologies do, we are still not in control. Now whoever we empowered to control and limit the technology is in control. Who keeps them accountable?
You’ll probably get one of three outcomes: regulatory capture by monopolies, self dealing by bureaucrats to enrich themselves or gain power, or regulatory capture by self absorbed ideologues who halt all progress or force it down some ideologically approved path.
In none of those scenarios is anything aligned with the best interest of the people.
groundzeros2015 1 days ago [-]
I don’t disagree. A consumer oriented democracy is not well equipped for the challenge.
bigfudge 23 hours ago [-]
That’s what you will get in the US. It’s not clear a functioning democracy would produce the same outcome.
groundzeros2015 22 hours ago [-]
I think it’s pretty for hard for democracies not to cater to the most base desires.
api 4 hours ago [-]
As opposed to? What makes the ego and base desires of an aristocracy superior?
It’s hard for humans not to get bogged down in base desires, period, because of the dopamine system.
groundzeros2015 1 hours ago [-]
> As opposed to?
A government which can choose to protect values which are unpopular in the short term.
> What makes the ego and base desires of an aristocracy superior?
Their awareness of higher values and goals. For example how technology might impact the population.
I would recommend Aristotle’s politics for an overview of the strengths and weakness of various government types.
Nasrudith 1 days ago [-]
I hate to tell you this but nobody has ever been in control. To think you can is to think you can unring a bell.
1 days ago [-]
pixl97 1 days ago [-]
Right, and that's why we all died in a nuclear war.....
ElectronCharge 1 days ago [-]
The disincentives to nuclear war are glaringly obvious enough that even politicians (and their masters) get it.
AI isn't like that. One problem is that it's rather generally misunderstood at this point. "AI" is not "intelligence". It's intelligence-adjacent, and something like LLMs is part of our psyche...the subconscious facility that allows us to form sentences without really thinking about it.
At any rate, I have to agree with most of the points the blog author brings up.
pixl97 3 hours ago [-]
I mean, not really. The only reason we've not died in a nuclear war is building nuclear bombs is very very difficult and expensive. If suddenly it became quick and easy to get nukes, we'd flash fry pretty quick when any and every suicidal nut with convictions got their hands on one.
SoftTalker 1 days ago [-]
> I feel like our whole world more and more circulates around manipulation
Hate to break it to you but it's always been this way, and it was easier in the past when information was so much more expensive to distribute.
djeastm 22 hours ago [-]
Cheap distribution makes manipulation easier, not harder. The time it takes for a bad actor to capture attention is much smaller now. It used to be you'd read one news article about a story once a day in a newspaper and maybe once more in the evening news. In between you could think about it, talk about it with other people you know in real life, etc.
Now you're getting meme after meme of the same story multiple times throughout the day, twisted in so many ways. And since we all have our feeds/algorithms adjusted to suit our own tastes we're all getting our own silo'd view of things and can barely rely on a shared set of facts.
All that makes manipulation of people much easier.
olyjohn 20 hours ago [-]
And the memes are all just one sentence gotchas with no real substance. And that's how people ingest the news. Headlines and Memes. Who needs actual articles?
Barbing 16 hours ago [-]
>read one news article about a story once a day in a newspaper and maybe once more in the evening news. In between you could think about it, talk about it with other people you know in real life
The bit about having processing time and the social check and balance has been missing from my perspective. Thank you.
bigfudge 23 hours ago [-]
I really didn’t experience the early internet that way.
georgemcbay 17 hours ago [-]
> it was easier in the past when information was so much more expensive to distribute.
As a 52 year old my life experience disagrees.
It is much easier now because information flows both ways and "They" have a lot of information on you (and everyone else) and can use that information to manipulate you with algorithmic ragebait, and to extract maximum rents (in all aspects of commerce, not just literal rents) from you, etc.
Not that things were ever perfect in the past, they certainly weren't, but increasingly so much of everything is literally just an outright fucking scam these days and all of it is being turbocharged by various forms of "AI" adjacent technology and increasing deregulation.
SoftTalker 4 hours ago [-]
I agree that algorithms tune what you see, it's easier now to give people what they are predisposed to respond to. But back when everyone got their news from Walter Cronkite he had much more power to shape the narrative. Everyone was seeing the same thing, at the same time, and they didn't have easy access to a dozen other points of view from around the world.
You can still do that today, you can confine yourself to the CNN or the Breitbart view of the world, but that's not your only option. You have way more opportunity to be well informed now.
mentalgear 1 days ago [-]
> Time to organize around groups and collectives that we know we can trust
I’ve had the same thoughts, but if you look deeper, it all circles back to what we already had: (open, transparent) public institutions, society, and government by the people. The foundation wasn't the problem; the environment was.
Along the way, social media noise, engagement-optimisation and Kardashian-style "entertainment news" infecting real news made an attention economy where, no matter how scandalous you are, attention can be minted into dollars. That is what polluted our infosphere and lead to the lack of trust.
Now, nobody trusts these previously mentioned public entities any more - sometimes due to state-actor or ad-tech disinformation, and sometimes for good reason like when the poisoned public allowed these 80s-style telemarketer-style political weirdos and their cronies to take over public administration.
cindyllm 1 days ago [-]
[dead]
intended 1 days ago [-]
Our society, pre internet, built systems to manage trust. The conditions that allowed those systems to exist (the speed of transmission of data, the ratio of content generation to verification, the ability to shape consensus), have changed.
You are ringing the clarion call for community and cooperation, and it will not work. Not because people don’t want community or the better things, but because incentives make the world go round.
The choice between making some money at the cost of polluting the information commons is no choice at all. That degradation of the commons means no one can escape. No community you form, no group you build, dodges the fallout when someone decides to set fire to shared infrastructure.
We are moving into the dark forest era of the information economy. As models improve, inference costs drop, and capacity increases, the primary organism creating content online will be the bot.
Instead of building communities of people, build collections based on rules of engagement. Participants - be it bots or humans - must follow proscribed rules of conflict and debate.
That way it doesn’t matter if you are talking to a machine or a person. All that matters is that the rules were followed.
Barbing 16 hours ago [-]
Very interesting, I've thought in a completely different direction, towards human verification. "IRL KYC for friends" or something
I always hit problems with it though. Let's say I can find someone I trust. Maybe it's me. Say I only enter online spaces, at least with intent of discussion, with those I've met in real life. Well, at some point, someone I've met face to face would be incentivized to maybe share a link to their friend's concert. Perhaps there's a free guest list spot in it for them if the show sells out. Or maybe it's all gravy, but eventually:
I want to expand the network we've created together, and it means trusting someone else to bring in people to the online space I've never met in real life. This could again be fine for a long time, but won't someone eventually be incentivized (especially if this practice were common) to promote this supplement, promote that politician...?
(recognize astroturfing is different from the impending slop tsunami but both feel to be in the same stadium)
intended 13 hours ago [-]
Proof of human is the natural first stop.
Your solution shares its essence with a club, a WhatsApp group or interest group.
It works, but you will still be at the mercy of the large communities and economies of thought that the members are a part of.
That is the broader environment you are a part of.
Everyone from FAANG firms, governments to game companies struggle to identify real people from bots.
If your platform is global, then you have to contend with users from different legal regimes and jurisdictions.
The issue is that verification is logistically expensive, ends up infringing on rights, legally complex and on top of all that - error prone.
To top it off - If proof of life ends up gatekeeping any form of value, you will set up incentives to break verification.
LogicFailsMe 1 days ago [-]
Local models and powerful consumer HW and an informed populace that doesn't hate STEM, but that's not good for the shareholder value so you get expensive everything everywhere all at once instead. And if you dare question the mindset of hating on STEM whilst being addicted to its fruits, that just means you're another one of those maximally SV-aligned sociopaths so why bother? Evolve and let the chips fall where they may because I don't see any other options that play out in the idiocracy craving for strong confidently wrong leadership.
nalekberov 1 days ago [-]
> Right now, they are utilized to further the class divide between rich and poor.
Ironically this was the main reason LLMs were introduced in the first place, not to benefit the poor, but to widen the gap between the rich and the poor.
drzaiusx11 1 days ago [-]
The majority of human history has been written by the ruling class of the day. Transparency only seems to follow in the wake of their inevitable fall, usually at great cost in retrospective research via the oft thankless unraveling of threads of truth from their more copious fictions. Much like the machines we construct in our likeness, we too seem to get stuck in endless regressive cycles.
Folks in the "now" have always had a tendency to cling to their fictions as if they were truth for whatever reason; like nationalist exceptionalism, racial superiority, or religions rooted in "othering", etc. Humans seem to have an innate desire to fool themselves and trust in things they should not. Perhaps it's simply a sort of existential coping mechanism of living in a cold, unforgiving reality. We seek the comfort of lies.
Organizing around groups of trust, tends to lead to factionalism and conflicts. Knowing and trusting are sadly very different things in our species.
01HNNWZ0MV43FF 23 hours ago [-]
The Old Internet was a whalefall - Information online was fairly trustworthy while being more convenient and more plentiful than in-person information.
The whale's been eaten now. The broader Internet is mostly not trustworthy, or convenient, and the information is not even very plentiful.
People will and are retreating into high-trust zones. In-person networks, product recommendations from real friends, and closed group chats.
It's not the end of the world, but things have changed. We'll have to put more work into finding information than we're used to.
gaythread 21 hours ago [-]
[dead]
sassymuffinz 1 days ago [-]
Self inflating nipple shaped balloons that generate their own lift without any helium would be an incredible achievement but that doesn't mean it's useful beyond being novel. Chatbots are ultimately just predictive text on steroids, and only complete fools would base their business, or entire economy around it.
throwanem 1 days ago [-]
The invention you describe, by making possible cargo transport without dependency on either a supply of fuel or the vagaries of weather, would revolutionize the global economy to an extent little short of utopian.
You might want to pick a better example the next time you advance this argument, is what I mean to say.
sassymuffinz 10 hours ago [-]
Gemini says it’s a genius idea and I should patent it.
throwanem 9 hours ago [-]
I can't argue with that!
morgengold 1 days ago [-]
My father just changed his car key battery with the help of ai and he likes that. He also consulted it about about car insurance regulations and he got more out of it than searching the web himself.
For most simple mainstream questions I just ask ai instead of googling shitty results.
Most of the time ai is good enough and often better than the status ante.
People do not care if it is a stupid token prediction machine as long as the job gets done.
gtowey 18 hours ago [-]
But those are mostly things that were possible before basic web search became nearly unusable.
I don't disagree with you at all, I have found that I turn to LLMs to answer questions that I would have just searched with Google before.
It feels like a case of companies creating a problem to sell you the solution. The problem in their eyes is that they couldn't squeeze any more money out of search. So they bring us LLMs to replace it at what is sure to be a much higher cost. But they had to torpedo search to force users to use LLMs.
morgengold 14 hours ago [-]
Fair point gtowey. I am with you up to a point. But we have to admit, the new AI way is often faster and needs less brain power. Let's be honest, consumers love that.
dalmo3 23 hours ago [-]
Until you take a baby to the vaccine clinic, the nurse googles which vaccine to give at his age, and blindly trusts the highlighted AI snippet at the top.
Not a fictitious example.
forinti 23 hours ago [-]
I saw a doctor use AI to do some maths recently. I checked it and it was right, but trusting LLMs to do statistics is not a good idea.
20 hours ago [-]
xboxnolifes 21 hours ago [-]
Sure, but that's an incredible level of incompetence, that I can only see would be expose otherwise outside of AI use. The entire list of what vaccines to give to what age group could fit on a single piece of paper.
wpm 17 hours ago [-]
And much of what people use AI for now could be easily done without it. How many steps of a Claude Code /plan are just running basic ls commands, all for a few thousand tokens?
The entire thing reeks of laziness and incompetence. It's neat an all but its a giant sucking maw that is threatening to gobble up whats left of anything good.
vyr 1 days ago [-]
have worked closely with customer support teams, can confirm that the goal of any technical improvements that go in front of CS agents is to reduce ticket volume, and thus costs. of course they measure retention and satisfaction but ticket volume is always the big one. chatbots were big for this long before LLMs existed.
a fun side effect is that CS is also an early warning system for companies, so when you make it harder to get through to a human, you start throwing out info on your users' pain points. of course this only matters if people have a choice about whether to use your product, so that's gotta be an upside for insurance companies, etc.
gregates 24 hours ago [-]
I had a fun experience with my ISP where their chat bot couldn't help me (of course it couldn't, I don't call for "did you try turning it off and on again" problems), so it escalated me to a human agent. Said human agent was very obviously copy-pasting LLM output. I could tell because (1) the responses were nearly identical to what Claude already told me when I asked it before calling and (2) every once in a while I would get an uncharacteristically brief reply, without capitalization or punctuation, in Indian English.
I haven't a had a good experience since AT&T bought my previous ISP and forced me to switch to a different subsidiary.
slumberlust 6 hours ago [-]
Just look at the difference between Amazon and Chewy support. One actively obfuscates how to talk/chat with a human; the other has someone knowledgeable pick up after one telephony prompt.
Loughla 1 days ago [-]
Can I not shop for other insurance companies? I specifically chose my provider because I know there's an office I can call to talk to my agent or his secretary. The moment I have to interact with a chatbot, I take my business to someone else.
Lerc 1 days ago [-]
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
—IBM internal training,
1979
It took me a while to realise that the premise is saying the same thing as the reason why we have so many "Computer says no" experiences today.
The conclusion only follows if you want someone to be accountable.
If you want to avoid being accountable, computers should make all management decisions.
This has nothing to do with AI other than it provides another mechanism to do that.
People saying "I'd love to help you but the computer won't let me do that" has been happening for years now.
Websites develop abusive patterns because A/B testing lets a process decide based on the goal you want, It doesn't measure the repercussions so you have made no decision to allow them.
Management read it as
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE THERE CAN BE NO LIABILITY IF COMPUTERS MAKE ALL MANAGEMENT DECISIONS
SoftTalker 1 days ago [-]
You're misinterpreting the implication. A better phrasing might be:
A computer can never be held accountable. Therefore, since all management decisions must have accountability, a computer must never make them.
M95D 22 hours ago [-]
Since when are (human) managers accountable?
esafak 18 hours ago [-]
You've never seen a manager get fired or decide to "spend more time with their family" ?
M95D 2 hours ago [-]
No, not really. I only saw them promoted or quitting for an even better management job.
The erosion and further diffusion of responsibility is the trend that worries me the most, since it’s already how all mid-size organisations, businesses and institutions alike, operate by design, and LLMs are likely to make that much worse.
bitexploder 1 days ago [-]
Just need your own LLMs to exhaust them. The future seems like it will be owned by whoever can automate with LLMs in whatever problem domain there is.
My agent will be in touch with yours, I guess.
kevg123 1 days ago [-]
I sent the entire series by Aphyr [1] to some friends. Two of them, independently, responded with a variant of, "TLDR, can you give a summary?"
I chat with these friends a lot but I rarely send articles that I suggest they read and that I think are profound, so I expected them to read it. These are smart people that have a history of reading lots of books.
They are both huge AI proponents now and use AI for nearly everything now. Debates on various topics with them used to be rich; now, they're shallow and they just send me AI summaries of points they're clearly just predisposed to. Their attention spans are dwindling.
maybe it means they were never really as smart as you thought?
Not meant to be snarky. It's been two decades now since my first wide-eyed entry into the workforce, moving for new opportunities, meeting new people. it's been great. There's a lot of smart people out there. I also realize that many people I seen as smart had more access to more content then i did. i still appreciated their sharing , it was enlightening to me. But after 20 years, I think back and it's literally quoting things from smart youtube videos. and regurgitating the latest thought leaders.
We all do this, but like you, what's meaningful to me is the chewing, the dissection and synthesis. coming together to share different perspectives and so on. i've had those friends too! it's just not 1:1
kevg123 1 days ago [-]
You might be right but they used to read much more and our arguments used to be deeper. The changes I'm seeing in them are highly correlated to their increased use of AI.
Maybe it's something like that AI allows them to indulge in their shallowness/laziness by giving them the impression that they're not doing that.
yw3410 1 days ago [-]
That's interesting; have you talked to your friends about their changes in behaviour? Is it something they've noticed themselves?
kevg123 21 hours ago [-]
I've started down the path but I'm taking it delicately. The future may hold a whole new genre of interventions for AI atrophy.
solid_fuel 22 hours ago [-]
It's disheartening to see how shallow the engagement of some people I formerly respected has become. People I looked up to and learned from now just left ChatGPT do their thinking for them, asking for summaries of articles and topics, engaging for a minute or two at most before moving on to the next thing.
Recently, I have been taking intentional steps to avoid falling into the same tar pit. I've started corresponding over email with some of my friends, with us sending multi-page letters back and forth instead of just using chat apps. So far, it has been a wonderful breath of fresh air. Long form communication requires thought and time instead of superficial engagement, and we have had some incredibly interesting discussions that simply aren't possible over voice chat or instant messaging.
kevg123 21 hours ago [-]
Great idea, thanks!
tim333 7 hours ago [-]
Smart people usually have competing ways of spending time and may not prioritize an eight part dissertation on AI being a bit annoying?
kevg123 7 hours ago [-]
It's more of a meta point to me. I get that this series isn't landing for some people but the meta-observation is that given something of roughly equal substantiveness as before, these friends' motivations for long form content and discussion seem to have atrophied, perhaps largely due to the addition of the AI summary reality cipher to their lives.
Scholmo 1 days ago [-]
You might just overhype this blog.
I read one of his last week? and didn't like it that much. I read this one despite it because its quite high on hn for whatever reason.
I don't think everything is lies and i don't like how he thinks a LLM is just some bullshit machine.
Its also waaaay to early to even understand were this is going. We as humans have never had that much compute and used it this particular way. It could literlay be the road to a utopia or dystopia. But its very crazy to experience it.
His article series feels so negative and dismissive, that i'm not taking anything from it.
There is so much more research, money and compute behind this AI topic right now, every week or two weeks something relevant better/new comes out of this. From 2d, 3d models, new LLM versions, smaller LLms, faster inferencing (Nvidias Nemotron), we don't know how this will continue.
And the weird thing is that he clearly knows plenty about LLMs but it feels so negative dismissive, hard to put a finger to it.
apsurd 1 days ago [-]
The author uses a lot of words and references to make critical conclusions that they do disclaim aren't expert.
Rather than dismissive, I see it as effort intensive. The conclusions can be negative, but they've spawned so much discussion which i think is great.
abcde666777 19 hours ago [-]
The author's making considered speculations about where things might go. You're just putting your hands up in the air and saying 'we don't know'.
kevg123 1 days ago [-]
I wasn't even hyping it though. I shared it among friends to spark discussion. Sure, there's some hyperbole, but I found it thought provoking.
(FYI, I didn't downvote your comment)
layer8 1 days ago [-]
I wouldn’t necessarily read a lengthy blog post either just because some friend recommended it to me, and conversely I wouldn’t expect a friend to necessarily read it if I was recommending it without being prompted for recommendations. There needs to be some additional incentive and/or interest.
Also, I’m reading this comment thread instead of TFA because I didn’t find the previous part I read that great. And I’m not an AI proponent, more of an AI skeptic.
kevg123 1 days ago [-]
I didn't provide much context but, 1) I've had deep conversations with these friends for years based on long articles or videos, and 2) I recommend maybe one or two long form items per year and they used to always review them without, "TLDR?"
So my main concern here is that my experience may be a microcosm of the shallowing of discussions correlated with some people's increased use of AI. That worries me.
It's more of a meta point to me. I get that this series isn't landing for some people, yourself included, but the meta-observation is that given something of roughly equal substantiveness as before, these friends' motivations for long form content and discussion seem to have atrophied, perhaps largely due to the addition of the AI summary reality cipher to their lives.
Of course, correlation isn't causation. Maybe they both just got older and more lazy, but given their reliance on AI summaries in other debates happening recently, I'm worried.
skrebbel 1 days ago [-]
Friends don’t send friends AI summaries
grvdrm 1 days ago [-]
Coworkers, too.
potsandpans 1 days ago [-]
Or maybe they just don't want to read a long form analysis on something?
I also enjoy the series. But sometimes my friends send me things and I'm like, "not gonna read all of that."
Just because you're friends don't want to invest the same amount of time that you want to invest in your own personal enrichment doesn't mean they're getting stupid.
kevg123 7 hours ago [-]
It's more of a meta point to me. I get that this series isn't landing for some people but the meta-observation is that given something of roughly equal substantiveness as before, these friends' motivations for long form content and discussion seem to have atrophied, perhaps largely due to the addition of the AI summary reality cipher to their lives.
intended 1 days ago [-]
MIT actually has a paper on how ChatGPT use impacted cognitive skills for essay writing.
> Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
> Cognitive activity scaled down in relation to external tool use. …
> Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
... is there any reason why I shouldn't be visiting Aphyr's site directly?
DareTheDev 1 days ago [-]
Blocked in the UK. “Online safety act”
GavinAnderegg 1 days ago [-]
Is this something you're seeing personally? If so, how do you know it's because of the Online Safety Act? This is a personal blog and it doesn't seem to have any adult content that I can find. The homepage of the site isn't blocked when I check it here: https://www.blocked.org.uk/check
Thanks for this! Seems like a bold stance… but the Online Safety Act also seems like a poor piece of legislation.
rwmj 23 hours ago [-]
It is, but Kyle's "protest" is making absolutely no difference.
zdragnar 19 hours ago [-]
What protest? If the site is illegal in the UK, he has three options:
1) add age verification to comply
2) cease operating in the UK by blocking traffic
3) operate in violation of the law, making travel to anywhere the UK has jurisdiction dangerous
I don't think he's intending on changing UK society by blocking their traffic because, as you point out, that's not a good way to effect change.
multjoy 23 hours ago [-]
The author is blocking UK ip addresses, presumably out of principle rather than because they’d fall foul of it.
tom_ 1 days ago [-]
It tells you when you visit.
gs17 1 days ago [-]
> Perhaps we’ll see distributed boycotts where many people deploy personal models to force Burger King’s models to burn through tokens at a fantastic rate.
Given how many people hate AI in general, I'm surprised there hasn't been anything like this happening. They could even get around the irony of using "AI" themselves, I bet low-tech language models like Markov chains could provide sufficient time wasting potential (I'd love to see it done with an old fashioned AIML chatbot). Asymmetric chatbot warfare.
delusional 1 days ago [-]
If your complaints about AI are largely about the industrial energy use, the poor quality of service, and the displacement of human labor, wasting more CPU time doesn't seem like a viable or useful protest. The lesson Burger King would take away from your DDoS protest isn't that they should provide better customer service, but that they shouldn't provide any customer service. You'd be giving them free cover to blame consumers for making customer service too expensive.
gs17 20 hours ago [-]
I've interacted with some anti-AI people who genuinely would prefer the "no customer service" world to even a "good AI customer service" world. They're a small minority, sure, but this sort of attack wouldn't need a huge group.
wpm 17 hours ago [-]
Why would Burger King's competitors leave such an obvious competitive edge on the table?
elzbardico 1 days ago [-]
Most people who hate AI have been completely dis-enfranchised by the system. The media won't amplificate their voices, any viable political leader that is seem as threat will be completely and utterly destroyed by the parties and the PAC machine still on the primaries.
It is an incredibly vexing situation to see whatever you're an AI hater or enthusiast.
I, for one, welcome our trillion parameters multiple layers overlords.
tim333 7 hours ago [-]
Lots of people seem to hate AI but what is the system supposed to do about it?
carlosjobim 21 hours ago [-]
[flagged]
crooked-v 20 hours ago [-]
I wasn't aware that gluten intolerant people had been subject to mass chattel slavery.
carlosjobim 19 hours ago [-]
Now you know. Just like the AI haters are subject to it.
_doctor_love 1 days ago [-]
I've been enjoying these articles by 'aphyr and I think they raise important points. Primarily though, they read to me as polemics of a curiously American nature.
The pattern goes something like this:
- this development is bad
- companies will be unrestrained in their use of this development
- there will be no rules so they can do whatever they want
- we are all fucked as a result
But then...propose that we make some laws to put rules around this stuff, also known as regulations and everybody goes "whoa hold up hold up hold up...I dunno about that part."
Dear friends - America has always been this way. Study your 19th and 20th century history. Companies will exploit the shit out of us unless we put some rules in place to prevent it. Yes, that might mean making less money in the short term as regulations cause friction. But in the long term it means we can have a better and actually livable society.
(For what it's worth I'm an American and not an uppity European or Australian taking potshots from across the pond; no offense to Euros or Aussies intended, love you guys)
SilverElfin 1 days ago [-]
It’s not that people are against regulations totally, but that the structure of society is broken. People with wealth, including corporations, can influence and control everything with money. Legislators are easily bribed. Lawsuits are expensive and take years. It’s hard to make anything happen unless you’re already rich or connected enough to access the right people.
The real issue is new amendments are needed. But that’s hard. You need 75% of states ratifying. And that seems impossible today on any topic.
scoofy 1 days ago [-]
I mean, whenever I read these types of articles and responses, my reaction is almost always the same: what is this ideal world people are hoping for?
The future of everything is lies... sure, well so was the past, what's your point?
When we say "snake oil salesmen" we literally had people selling these mineral oils, fraudulently, for a century. People yearn for a time when there wasn't an antagonistic relationship between buyer and seller, but it has never existed. There is only one way have that relationship, and it's by having a personal -- repeatable -- relationship with your seller, and that's usually expensive, even if that expense is not monetary. It just the game theory of repeated games.
The easiest way to establish this behavior is to be a regular at a bar while being a good patron that creates a positive experience for everyone else. There are multiple places I frequent where I have no doubt in my mind that if there was an emergency, they might ask me to help out in some way because I'm trustworthy. And they could hand me an envelope of money, knowing I'd not steal it, because it's not worth it for me to take a bit of money and ruin my welcome at this place.
That's not going to happen with online sellers, and it's not going to happen with most corporations. The promise was always "save money through economies of scale, such that you still win even though there is no relationship." For the most part, that's proven true. But as everything has been outsourced to China, and anyone can make a basic version of everything, we're running up against the limits of those benefits. And I truly believe that brands are going to start mattering a lot more going forward for marketplaces.
Shop at Costco because they care about their customers. In other areas where companies don't, you're going to have to do the legwork to find out who is going to treat you right. That sucks, and it'll probably be more expensive, but that's the antagonistic relationship that buyers and sellers have naturally. Regulation helps, yes, but it's not some cure all. The expectation that you can just go with the person with the lowest price and get quality services is not a thing that's every existed.
intended 1 days ago [-]
I’d andd another layer - for American tech workers, regulation also reduces profits. This hurts salaries, stocks options and career growth.
Incentives make the world go round, so even if people recognize the issue, they would rather it become someone else’s problem, than willingly harm their own future.
slopinthebag 1 days ago [-]
What regulations would you suggest?
CamperBob2 24 hours ago [-]
But then...propose that we make some laws to put rules around this stuff, also known as regulations and everybody goes "whoa hold up hold up hold up...I dunno about that part."
Of course, what those who call for regulation of AI or other nascent technologies are really saying is, "Unqualified and/or biased and/or corrupt and/or dull-witted people should make decisions that affect us all, based on incomplete, misleading or rapidly-evolving information, with the power to enforce them at gunpoint."
And then they wonder why other countries beat their own.
_doctor_love 24 hours ago [-]
I'm sorry...wut?
solid_fuel 22 hours ago [-]
This interaction nicely illustrates why so many Americans sound hopeless when talking about these developments. We can't even get our fellow citizens to do anything about the school shootings, our incredibly expensive healthcare, the homelessness crisis, or really anything else. Instead we are surrounded by conservative reactionaries who view "let's stop dumping cyanide into the town pond" as a communist plot to steal their religion.
So yeah, when people around here look at "AI" and all the harm it's already doing, they don't any hope that regulations will be put in place before more harm is done.
CamperBob2 22 hours ago [-]
The person comparing a building full of computers to dumping cyanide in the town pond is the reactionary in this conversation.
CamperBob2 23 hours ago [-]
I mean, it's pretty clear how it plays out, if you look at the EU. Those who want the US to work that way need to be fought, and fought hard.
ramon156 10 hours ago [-]
- Use AI for Knowledge holes, fact check them, then accept them as solved.
- Use agents to write code that is defined in a spec, review manually, accept them as solved.
Nothing more, nothing less
elzbardico 1 days ago [-]
The worst thing is that non-technical people, and actually a lot of technical people without experience in ML, will tend to overstimate the capabilities of those systems, neither the nuances of probabilitic thinking to properly integrate their outputs in a decision.
Remember that the polygraph still exists, now we will be dealing with a massive portion of the decision makers will treat as artificial inteligence not in the technical sense we use, but as real inteligence, maybe even super-inteligence.
Peritract 23 hours ago [-]
I don't think that's an accurate description of what's happening here. With previous technology, sure, but the breathless overstatement of AI capabilities is coming primarily from 'technical' people who should know better.
The average person on the street is familiar with consumer-facing AI but doesn't think it's really alive/magic/the solution to everything. Our supposed best-and-brightest are the ones flogging the horse.
tgsovlerkhgsel 1 days ago [-]
Regarding companies trying to block any contact with customer service and adding endless AI hurdles: In some countries, having a reachable means of contact is legally required. Is there a NOYB-style organization that specializes in enforcing this right (suing companies on behalf of consumers)?
For the "bureaucracy has royally fucked up and doesn't want to fix it", if it is something that can be fixed with money and isn't time sensitive (e.g. you need a refund rather than get the airline to actually provide you the ticket you already paid for and want to fly this weekend): In countries that have effective small claims courts, these can be a surprisingly convenient (less hassle than the "talk to the bot" wall of the company!) to resolve this kind of issue.
I hope that these resolution methods become more common - I think the tools to fight enshittification often already exist, we just don't use them enough. A welcome side effect would, of course, be that this would impose a real cost on the enshittifiers, creating an incentive to provide proper support.
xp84 1 days ago [-]
> In countries that have effective small claims courts, these can be a surprisingly convenient (less hassle than the "talk to the bot" wall of the company!) to resolve this kind of issue.
Idk where we fall on the scale of “effectiveness” vs our peers, but I do read more people’s stories of Small Claims that are positive than negative. But I’ve never used this. I suspect it would be difficult to press a claim against a random large “company” just based on how slippery their identities even are. “Oh, Apple Inc. isn’t responsible for that, it’s a different subsidiary based in Ireland for tax reasons. Go serve them.” I think most people would have to be out more money (maybe more than the S.C. limit?) before being motivated to engage with the chronically overextended legal system, sadly.
Also, if the effective tools do exist, count on American companies using the American bribery-based political system to change the laws to dull those tools or to eliminate them.
Again, if you live in some consumer-friendly country good for you, I’m just saying how it plays out in this one society. I’ll stipulate that it’s all our fault blah blah blah.
Barbing 16 hours ago [-]
"That Dropped Call With Customer Service? It Was On Purpose."
I knew that one time I needed a free Sam's Club membership for one thing and they kept on dropping me...
petermcneeley 1 days ago [-]
> Since LLMs are unpredictable and vulnerable to injection attacks, customer service machines must also have limited power
Haha yes. I interacted with a bank one. It was like press 5 for mortgages but with a text to speech front end.
At the end of the day the LLM can be tricked into doing anything.
bluefirebrand 1 days ago [-]
I wonder if there's also less of a stigma and sense of wrongdoing about tricking an LLM versus tricking an employee
We intuitively know that an employee will be punished and may get fired if we trick them. Many of us won't try to trick human employees as a result, because we would feel bad if they had bad consequences as a result of our trickery
There is likely no such hesitation around tricking LLMs. I know I personally wouldn't feel bad about it at all. Mostly because any computerized customer service process is annoying so anything I can do to limit my time dealing with it is a win in my books
solid_fuel 22 hours ago [-]
I have no moral qualms about tricking an LLM, but I would not be surprised if it gets called a violation of the CFAA in the near future.
bluefirebrand 20 hours ago [-]
I hope not. It would be a real shame to offload corporate responsibility onto their customers
LogicFailsMe 1 days ago [-]
D^HLying is easy, it's comedy that's hard...
abcde666777 20 hours ago [-]
I've found these posts to be excellent - wonderful reads really. Props to the author.
AtlasBarfed 6 hours ago [-]
Sucky customer service is a direct economic function of reduced competition and increase monopolization / cartel dynamics.
smitty1e 1 days ago [-]
So, providing actual customer service becomes a market differentiator?
"Yes, we cost more, but your get what you pay for" can be a good play.
ixtli 1 days ago [-]
Excellent essay. I see some of this is already happening imo
fandorin 1 days ago [-]
„Agentic commerce means handing your credit card to a Large Language Model” - this is simply not true. LLMs/Agents will never get any credit/debit card details, they will be just an interface.
embedding-shape 1 days ago [-]
Full quote:
> People are very excited about “agentic commerce”. Agentic commerce means handing your credit card to a Large Language Model, giving it access to the Internet, telling it to buy something, and calling it in a loop until something exciting happens.
I think you're confusing this for the other side of things. The article talks about how some people already use OpenClaw and the variations, give them access to bunch of stuff including cards to purchase things (sometimes virtual and limited cards), I think that's what the article talks about when they say "agentic commerce".
Obviously a intentional simplification in the language the author uses, but I think it gets the point across at least.
fandorin 1 days ago [-]
Ok, in this case I misunderstood author's point here. "Agentic commerce" for me is a framework that networks (MC, Visa) and other big players in the ecosystem are working on. So it's far from "i'll give my credit cards details to openclaw and hope for the best". Obviously I know that a lot of people do that, unfortunately...
hn92726819 1 days ago [-]
Millions of users, and you think exactly zero of them will hand over their credit card info?
bdangubic 1 days ago [-]
my wife is one, so definitely not zero (she uses cards from privacy.com so there’s that…)
0xbadcafebee 1 days ago [-]
This is doomerism. Yes, everything will get worse. But everything will also get better. Such is progress. (for every one of these examples of annoyances, I can think of two ways to use AI to get around the annoyance. not clever programmer things, but things an average person who learns to use Codex or Claude Desktop to operate their desktop will know)
Most of these annoyances are also things that existed before AI, and will continue to exist after, because consumerist capitalism. The good little obedient consumers get abused because they don't stand up for themselves. Customer service is an enfuriating maze? Yeah, because you voted with your dollars (and political indifference) to allow companies to make customer service (the thing you pay for) worse. We bring these problems on ourselves. It's pointless to complain if you aren't willing to do anything to change it. (And if you think you can't change it, there's other nations to look at, as well as the fact that you live in a democracy - for now - unlike the rest of the world)
Hell, we already have companies whose sole purpose is to manage your subscriptions for you because you're too lazy to do it yourself. You could look at this and say, man, the world is terrible! Or you could look at this and say, man, how great is my life that I can not only subscribe to a lot of things without going bankrupt, but I have extra cash left over to pay a company to manage my subscriptions?
Don't let the hedonic treadmill and complacency trick you into A) accepting a worse life, or B) convincing yourself your life is bad when it's actually better than most people's.
duskdozer 1 days ago [-]
>(for every one of these examples of annoyances, I can think of two ways to use AI to get around the annoyance. not clever programmer things, but things an average person who learns to use Codex or Claude Desktop to operate their desktop will know)
As the author said:
>I suspect that like the job market, everyone will wind up paying massive “AI” companies to manage the drudgery they created.
Scholmo 1 days ago [-]
It could also lead to a massive crash of capitalism and reevaluation of how our society functions.
It could lead to significant progress in every single research area.
I'm at least very impressed about the amount of open models and that it doesn't hold up that the gap between public and private diverges massivly. Public is probably one year behind.
slopinthebag 1 days ago [-]
> Yes, everything will get worse. But everything will also get better.
That is not known. Things could easily just get worse, and IMO that is far more likely. Every civilisation has collapsed, ours is clearly in decline, and AI could likely accelerate that decline.
I'm not exactly seeing the progress here. AI helps us write some software a bit faster? Doesn't seem revolutionary to me. Is it having any significant impact on peoples lives other than the various economic forces? I'm not seeing it.
> Yeah, because you voted with your dollars
In our system? No. In crony capitalism the companies who win do it through manipulating the political system. And when the government inflates the currency and destroys people buying power they simply cannot afford to "vote with their dollars". This is hilariously naive.
0xbadcafebee 1 days ago [-]
> That is not known. Things could easily just get worse, and IMO that is far more likely. Every civilisation has collapsed, ours is clearly in decline, and AI could likely accelerate that decline.
This is the definition of doomerism. Of all the ways each society has adapted to industrializaion, of all the technologies we've adapted to, and our continued use of all of them despite any negative impacts, you think an LLM is going to end things? An overcomplicated autocomplete?
There is no evidence whatsoever that suggests our society would collapse under AI. An imagination is not evidence. I mean, it's just a tool! Like the car, the telephone, the genetically engineered seed. Humans always freak out when they see something they don't understand. But it turns out there's no tool we have created that we can't control. We're stupid, but we're not idiots.
siliconc0w 16 hours ago [-]
I'm excited for the AI lawsuits and litigation. It used to be squarely in the domain of the well financed but I can see the legal system absolutely inundated with AI generated legal slop.
calvinmorrison 21 hours ago [-]
We are inseparable from technology. Technology which we cannot opt out of to live in a society. We do not get to pick and chose what types of technology we engage in. Dr. K predicted this decades ago and he was right.
bdangubic 21 hours ago [-]
we are 100% separable from technology and we can “opt out” everything we want to opt out, every technology use is by choice and convenience and not necessity. my Dad does not have a mobile or internet and is doing just fine, the only “technology” that exists in his life in TV and a landline. so absolutely 100% of the technology you engage with is by your own choice and absolutely nothing else
calvinmorrison 3 hours ago [-]
absolutely not. Look at how roads and driving have shaped our entire lives in America. If by opting out you mean you can live alone in a hut, yes sure. If you want to participate meaningfully in society, no.
gib444 24 hours ago [-]
He talks in the future tense about things already happening for some time. I've had phone systems lie about talking to a bot.
> Companies are now trying to divert support requests into chats with LLMs
More than trying they are doing it very successfully and for a long time now
I do agree things can still get 10x worse than even the current state though
> When you talk to a person, there’s a “there” there—someone who, if you’re patient and polite, can actually understand what’s going on
I've found they have been trained to be machine-like for many years now and not actually help. They focus on empathy and understanding and caring about your needs...and diverting your attention away from actually resolving the issue. Here's an example recently I experienced:
I complained to my bank about how they show refunds on the app. I got a call from a lovely sounding lady who used a comforting tone to ask if I had any special needs she needs to be aware of so she can "provide extra support" .
At beginning I made it clear I'm not chasing a particular refund but rather raising a specific complaint about how refunds are shown in the app. 4 times she mentioned a specific refund assuring me that it's been refunded, ignoring everything I said at the beginning. She explained how refunds work. She explained how pending transactions work (all off topic). She explained 3 times about how they can't (won't) create a feedback loop and begged my permission to close off the complaint, saying she hopes I can understand.
That was all a very dressed up "I'll pass your feedback to the PM. Thanks" but it was whole ridiculous long phone call trying to make me feel "heard" and I came away feeling like a 15 year old
ufocia 1 days ago [-]
AI on AI warfare
Myrmornis 1 days ago [-]
I read the first couple of posts in the series. The essay is full of criticism of LLMs, and in a couple of places the author distances himself, as if he himself isn't using them ("some people I respect tell me that...").
It's certainly worth discussing the fact that the entire industry is starting to outsource large amounts of our thinking and writing work to non-sentient statistical algorithms, but this discussion needs to honestly confront the extent to which they are successfully completing useful tasks today.
KronisLV 1 days ago [-]
> ML models will hurt innocent people.
Lots of blaming LLMs but I think the root cause lies elsewhere, I’m not even sure whether dismissing it as “capitalism” or “profit motives” would do it justice, because in general it feels more like the world that we live in lacks humanity.
Even in a capitalist world, a company could take a stance and decide not to purposefully screw people over, but in the world that we live in instead they look for ways to better screw over people and extract more money from them. It doesn’t matter whether your customer support is handled by someone from India, a crappy telephone tree or some voice model, when the incentive is the same - to do the bare minimum for customer “support” (in practice, just getting you to fuck off). Same for handling insurance claims and “dynamic pricing” of things - it doesn’t matter whether it’s some proprietary algorithm or just an LLM making crap up when the goal is to screw you over.
Blaming “AI” for all of this would be barking up the wrong tree (without that tech they’d just find other ways), though one can definitely acknowledge that this technology provides another convenient scapegoat, same as how you can lay employees off and just say cause it’s because of AI when in actuality it’s just greed and wanting to make your books look better.
duskdozer 1 days ago [-]
A lot of this has been going on for a long time and I've been sensitive to it. LLMs may not be solely responsible but they're a massive escalation.
jfengel 1 days ago [-]
In a capitalist world, the company that does decide to screw people over gets rich and the one that doesn't goes out of business.
It would be great if people chose not to do business with the former, but many simply do not care. They may think only other people get screwed. They may not take the time to think about it, especially if the company spends a ton of money obfuscating their misbehavior. Quite a few actively defend the right of companies to screw them.
Technology multiplies that like a lever. We weren't prepared for capitalism before LLMs and we're massively under-prepared now.
intended 1 days ago [-]
The articles didn’t blame LLMs, they talked about how they would get used, precisely through the lens of systems, incentives and culture.
christkv 23 hours ago [-]
Meh I'm going to run my own agent to argue with their agents. Endless patience.
zer00eyz 1 days ago [-]
Everything that is old is new again.
Payment processing, is better than it was in 2000, but still not good.
Micropayments: this is obnoxiously expensive to do.
Discovery, and discoverability: again here we have better but not good solutions (and many of the ones that were once good are enshitified).
Pricing: this is a problem everywhere, and frankly we need the law to change in a way that is pro consumer. Publishing prices, disclosure of fees, in both services and for payment processing (that 3 percent back from visa looks a lot less attractive when it's part of a 5 percent mark up).
Customer service: well there are already companies promoting models where they cut you off and send you into a black hole (google is a prime example). Good customer service will become a differentiator, and maybe a "paid for" service as well.
pessimizer 1 days ago [-]
> Good customer service will become a differentiator
This does not matter without antitrust, which is why customer service became bad in the first place. 30 years ago, the low quality of customer service we complain about now simply didn't exist, at any size or professional level of business, and never had.
If a company back then had the customer service of the average company now, or even the average government agency now, people would have suspected that it was a covert front for criminals or spies.
If a company doesn't have to compete, it can cut everything until it only has the ghost of a product and a billing department. You don't boycott monopolies, monopolies boycott you. If three companies put you on a list to not have internet, phone service, a bank account or a credit card, etc., you just can't have them. You've become a European human rights judge.
jcgrillo 1 days ago [-]
At various previous companies I've worked at product managers, executives, and engineers love bandying about the idea of "building for nontechnical users" as a way to make their widgets more "friendly". But it's just another way to otherize and denigrate "those people" who are the out group. They might, through a metacognitive defect or simple sociopathy, actually believe they're "doing good" by considering the poor creature's plight and making compassionate decisions on their behalf. But it's all crap. All they're actually doing is confirming their biases. LLMs are the divine nectar to these people, an enshittification accelerant par excellence.
redsocksfan45 1 days ago [-]
[dead]
semiinfinitely 23 hours ago [-]
this guy will probably never stop yapping after having gotten just a little bit of attention on his original post
agentultra 1 days ago [-]
To lie requires recognition of the truth and an intention to deceive. LLM’s don’t have such abilities. They are systems that generate plausible sequences of symbols based on training inputs, alignments, reinforcement, and inference. These systems don’t know or care what truth is and therefore cannot lie.
It’s already bad. I’m not looking forward to the future. These systems are terrible. It’s a future without people that they want for some reason. I’d rather deal with people incompetent, tired, annoyed people than an LLM.
fn-mote 1 days ago [-]
Ill-thought out logic.
The company that deployed the LLM is lying to you. The people who made that decision are the ones who are culpable.
We both agree that it’s terrible.
I think it’s important to have an enforcement mechanism to force companies to do what they are responsible for doing. An Anti-Kafka Law, so to speak.
agentultra 1 days ago [-]
An important distinction to make, and I whole heartedly agree.
It’s not LLMs replacing workers, it’s people. People who have a lot of money and don’t sell their labour for a paycheque. And the systems that compel them to such actions.
Scholmo 1 days ago [-]
Don't agree with this.
LLM when it came out, was perfect as an interface between a system and a normal human.
So many people call customer support for issues they could in theory fix themselves. If that LLM system can understand me well enough, its an okay interface.
In worst case you have to escalate anyway. My mum actually told me that she talked to some AI.
And yes normal systems are also not correct often enough. With AI/LLM software will get cheaper which should incresase quality overall.
I dont think ai/llm in this case will change anything.
Relevant change will happen due to the fact that humans can be replaced by AI/LLMs. It was not even imaginable a few years back how a good ai system would even look like. Translaters lost their jobs, basic arists lost their jobs. Small contracts for basic things are gone. The restaurant poster no one cares? AI. The website translation for some small business? no one cares.
davidclark 1 days ago [-]
>LLM when it came out, was perfect as an interface between a system and a normal human.
Statements like this make me feel like I live in a different universe with a different implementation of LLMs than other internet commenters.
Scholmo 1 days ago [-]
Do you want to add any argument so we can discuss this?
I mean, did you not write with ChatGPT and were surprised how well it response?
I'm schocked how well i can talk to an AI through some app like Gemini or ChatGTP. A few years ago i couldn't imagine building such a generic system which such high quality of understanding.
I was playing around with dragon naturally speaking and similiar dictation tools 10 years ago and it was horrible. And that software is expensive.
If you look how normal people use a computer, they are slow just because they don't understand basic drag and drop. Or they are unable to just create some java or php script to convert some data or clean up some data. I would just write a php script reading some csv file and converting stuff around and was faster than everyone around me.
Tool calling is bonkers.
And i tried to break GPT-3, i can literaly write an english sentence and just dropin german words, it was already that good.
Its often enough shitty in doing exactly what i want, but the quality is massive to everything we had before. Massive.
layer8 1 days ago [-]
Not the OP, but you wrote “LLM when it came out, was perfect as an interface between a system and a normal human”. That’s a specific and very encompassing claim. I can only think of very simplistic systems (like a microwave oven maybe) where a current LLM could function perfectly as the sole command interface, much less when LLMs first became available. For systems of any significant complexity, it tends to turn into an exercise in frustration and failure modes when the LLM is your only interface (and frequently even when it isn’t).
An LLM can enhance the interface of a system and can be really useful in that despite its imperfections. But that’s a very different claim.
Scholmo 1 days ago [-]
It was a significant jump from whatever we had before to a quality unseen before.
As i mentioned, i threw english and german at it.
How many people can change the time on their microwave?
How many people can ask an LLM through voice or text to change the time of the microwave?
A LLM is an interface to a service if you add a MCP Server. Now i can ask Jira things like "hey whats my current task? And what do i need to do?"
Its also an interface to documentation. I asked it to help me build up a hugo templating based website because just reading the hugo docs did not help me as much as the LLM did (and that was 2 years ago).
In best case, as long as an LLM is not AGI or ASI, we have good tools with validation behind the LLMs before the LLM becomes the system itself.
skydhash 23 hours ago [-]
> A LLM is an interface to a service if you add a MCP Server. Now i can ask Jira things like "hey whats my current task? And what do i need to do?"
What about configuring your Jira views, and then bookmark the resulting URL with a nice name like "Jira: Tasks in Progress" or "Jira: Important Tickets". That would be way faster than any LLM prompting.
> Its also an interface to documentation. I asked it to help me build up a hugo templating based website because just reading the hugo docs did not help me as much as the LLM did (and that was 2 years ago).
Those kind of claims would be better if the person has written down the goals before the activity and then score the end result according to those goals. A lot of time, there's a lot of post-rationalization (like "I spent time on it so the result must be good"), especially from non-expert.
Only if you care about doing things fast.
Scholmo 11 hours ago [-]
My hugo example is real. I'm a software engineere for 15 years, have used other templating engines but i struggled with the hugo docs for setting up the initial templating structure.
Nonetheless, I always also see this in the 'with continues progress, this will become extreme good fast' and my estimate is 5-15 years for significant progress with meaningful impact.
add-sub-mul-div 1 days ago [-]
You're on a forum with a disproportionate number of people who are trying to profit from AI and have an interest in promoting that it's a worthwhile time and resource investment. It is a different universe than other places outside this bubble.
And it's a one day old account.
Scholmo 4 hours ago [-]
As mentioned in my other comment, I just spend too much time on hn so thats why its a new account.
I do not profit from AI but I think the cat is out of the bag. We have companies like google who has so much money, R&D in AI is just something they can afford.
We then have other companies like Microsoft who have to do AI because Google is doing it.
And then we have whole countries who fight the AI race. USA vs. China (and in theory EU but Mistral is not making waves eh?).
So for now, the progress is staggering fast and I do believe whatever critisism people have, you need to spend relevant time following and keeping up with AI to take the right action in time. Decisions regarding long term investment, using AI tools properly instead of getting fired or even funding your own small company and filling a niche.
From a pure nerd pov: Its crazy! Srsly i can generate images and videos and i can talk to a computer and i can generate songs and .... I mean I wished to be alive when Linus asked on a mailing list about people being interested in Linux but this is now something im alive for.
And it solves plenty of problems for me i haven't had any good solutions. Especially the quality of parsing random texsts into semantic json.
fn-mote 1 days ago [-]
> My mum actually told me that she talked to some AI.
You have no argument here. Make an argument then we can talk. Right now it’s going in circles.
Scholmo 24 hours ago [-]
I added talking points like the one were i state that people call support just to fix issues they could fix themselves.
My point with my mum should imply that it was successful but for sure at least you are pointing something out and now we can talk about it: My mum talked to an AI and it helped her.
Rendered at 20:13:08 GMT+0000 (Coordinated Universal Time) with Vercel.
All that stuff about support, though, inevitable.
Every time I call the Costco pharmacy, I just hit 0 immediately because: Phone. Trees. Suck. They have always sucked, it's just an awful, grindingly slow way to accomplish ANYTHING, and it's so, so much easier to, when I need help, get a person on the line who can figure out what's gone wrong and sort it.
The only people benefiting from cutting that down are the scum class (combo of shareholders and executives) and who's shocked, really. Everything is being ruined nearly at all times to benefit the scum class.
(unless saying "no" / "agent" etc. the fifteen time would've been the trick! Sure, my name can be "O K"...
(I would def love this system if I worked there though, just surprising it didn't have an offramp along the way... maybe they did but everyone used it)
At that point, it's effectively a phone tree executed by a human. Colloquially, human-based support means getting a hold of someone who knows how to solve problems, and worst case, knowing who to contact to solve the problem. That means employees who know their worth which unfortunately, businesses do not want to pay.
good, human customer service is a big margin my current company eats our competitors alive on
Then again, I do think LLMs are an incredible technological achievement. The issue is not so much what they do or that they exist, but how they are utilized. Right now, they are utilized to further the class divide between rich and poor.
Who are we to trust in the future? Not big companies, not the state, not LLMs. Time to organize around groups and collectives that we know we can trust and that we know have our wellbeing in mind.
This is exactly how we got here though. Technology is not passive. It changes incentives, procedures, ideas and shapes the world. If we don't structurally limit what and how it's used, then we are not in control, no matter what are choices personally are.
You’ll probably get one of three outcomes: regulatory capture by monopolies, self dealing by bureaucrats to enrich themselves or gain power, or regulatory capture by self absorbed ideologues who halt all progress or force it down some ideologically approved path.
In none of those scenarios is anything aligned with the best interest of the people.
It’s hard for humans not to get bogged down in base desires, period, because of the dopamine system.
A government which can choose to protect values which are unpopular in the short term.
> What makes the ego and base desires of an aristocracy superior?
Their awareness of higher values and goals. For example how technology might impact the population.
I would recommend Aristotle’s politics for an overview of the strengths and weakness of various government types.
AI isn't like that. One problem is that it's rather generally misunderstood at this point. "AI" is not "intelligence". It's intelligence-adjacent, and something like LLMs is part of our psyche...the subconscious facility that allows us to form sentences without really thinking about it.
At any rate, I have to agree with most of the points the blog author brings up.
Hate to break it to you but it's always been this way, and it was easier in the past when information was so much more expensive to distribute.
Now you're getting meme after meme of the same story multiple times throughout the day, twisted in so many ways. And since we all have our feeds/algorithms adjusted to suit our own tastes we're all getting our own silo'd view of things and can barely rely on a shared set of facts.
All that makes manipulation of people much easier.
The bit about having processing time and the social check and balance has been missing from my perspective. Thank you.
As a 52 year old my life experience disagrees.
It is much easier now because information flows both ways and "They" have a lot of information on you (and everyone else) and can use that information to manipulate you with algorithmic ragebait, and to extract maximum rents (in all aspects of commerce, not just literal rents) from you, etc.
Not that things were ever perfect in the past, they certainly weren't, but increasingly so much of everything is literally just an outright fucking scam these days and all of it is being turbocharged by various forms of "AI" adjacent technology and increasing deregulation.
You can still do that today, you can confine yourself to the CNN or the Breitbart view of the world, but that's not your only option. You have way more opportunity to be well informed now.
I’ve had the same thoughts, but if you look deeper, it all circles back to what we already had: (open, transparent) public institutions, society, and government by the people. The foundation wasn't the problem; the environment was.
Along the way, social media noise, engagement-optimisation and Kardashian-style "entertainment news" infecting real news made an attention economy where, no matter how scandalous you are, attention can be minted into dollars. That is what polluted our infosphere and lead to the lack of trust.
Now, nobody trusts these previously mentioned public entities any more - sometimes due to state-actor or ad-tech disinformation, and sometimes for good reason like when the poisoned public allowed these 80s-style telemarketer-style political weirdos and their cronies to take over public administration.
You are ringing the clarion call for community and cooperation, and it will not work. Not because people don’t want community or the better things, but because incentives make the world go round.
The choice between making some money at the cost of polluting the information commons is no choice at all. That degradation of the commons means no one can escape. No community you form, no group you build, dodges the fallout when someone decides to set fire to shared infrastructure.
We are moving into the dark forest era of the information economy. As models improve, inference costs drop, and capacity increases, the primary organism creating content online will be the bot.
Instead of building communities of people, build collections based on rules of engagement. Participants - be it bots or humans - must follow proscribed rules of conflict and debate.
That way it doesn’t matter if you are talking to a machine or a person. All that matters is that the rules were followed.
I always hit problems with it though. Let's say I can find someone I trust. Maybe it's me. Say I only enter online spaces, at least with intent of discussion, with those I've met in real life. Well, at some point, someone I've met face to face would be incentivized to maybe share a link to their friend's concert. Perhaps there's a free guest list spot in it for them if the show sells out. Or maybe it's all gravy, but eventually:
I want to expand the network we've created together, and it means trusting someone else to bring in people to the online space I've never met in real life. This could again be fine for a long time, but won't someone eventually be incentivized (especially if this practice were common) to promote this supplement, promote that politician...?
(recognize astroturfing is different from the impending slop tsunami but both feel to be in the same stadium)
Your solution shares its essence with a club, a WhatsApp group or interest group.
It works, but you will still be at the mercy of the large communities and economies of thought that the members are a part of.
That is the broader environment you are a part of.
Everyone from FAANG firms, governments to game companies struggle to identify real people from bots.
If your platform is global, then you have to contend with users from different legal regimes and jurisdictions.
The issue is that verification is logistically expensive, ends up infringing on rights, legally complex and on top of all that - error prone.
To top it off - If proof of life ends up gatekeeping any form of value, you will set up incentives to break verification.
Ironically this was the main reason LLMs were introduced in the first place, not to benefit the poor, but to widen the gap between the rich and the poor.
Folks in the "now" have always had a tendency to cling to their fictions as if they were truth for whatever reason; like nationalist exceptionalism, racial superiority, or religions rooted in "othering", etc. Humans seem to have an innate desire to fool themselves and trust in things they should not. Perhaps it's simply a sort of existential coping mechanism of living in a cold, unforgiving reality. We seek the comfort of lies.
Organizing around groups of trust, tends to lead to factionalism and conflicts. Knowing and trusting are sadly very different things in our species.
The whale's been eaten now. The broader Internet is mostly not trustworthy, or convenient, and the information is not even very plentiful.
People will and are retreating into high-trust zones. In-person networks, product recommendations from real friends, and closed group chats.
It's not the end of the world, but things have changed. We'll have to put more work into finding information than we're used to.
You might want to pick a better example the next time you advance this argument, is what I mean to say.
For most simple mainstream questions I just ask ai instead of googling shitty results.
Most of the time ai is good enough and often better than the status ante.
People do not care if it is a stupid token prediction machine as long as the job gets done.
I don't disagree with you at all, I have found that I turn to LLMs to answer questions that I would have just searched with Google before.
It feels like a case of companies creating a problem to sell you the solution. The problem in their eyes is that they couldn't squeeze any more money out of search. So they bring us LLMs to replace it at what is sure to be a much higher cost. But they had to torpedo search to force users to use LLMs.
Not a fictitious example.
The entire thing reeks of laziness and incompetence. It's neat an all but its a giant sucking maw that is threatening to gobble up whats left of anything good.
a fun side effect is that CS is also an early warning system for companies, so when you make it harder to get through to a human, you start throwing out info on your users' pain points. of course this only matters if people have a choice about whether to use your product, so that's gotta be an upside for insurance companies, etc.
I haven't a had a good experience since AT&T bought my previous ISP and forced me to switch to a different subsidiary.
THEREFORE A COMPUTER MUST NEVER MAKE A MANAGEMENT DECISION
—IBM internal training, 1979
It took me a while to realise that the premise is saying the same thing as the reason why we have so many "Computer says no" experiences today.
The conclusion only follows if you want someone to be accountable.
If you want to avoid being accountable, computers should make all management decisions. This has nothing to do with AI other than it provides another mechanism to do that.
People saying "I'd love to help you but the computer won't let me do that" has been happening for years now.
Websites develop abusive patterns because A/B testing lets a process decide based on the goal you want, It doesn't measure the repercussions so you have made no decision to allow them.
Management read it as
A COMPUTER CAN NEVER BE HELD ACCOUNTABLE
THEREFORE THERE CAN BE NO LIABILITY IF COMPUTERS MAKE ALL MANAGEMENT DECISIONS
A computer can never be held accountable. Therefore, since all management decisions must have accountability, a computer must never make them.
computah says noooooo
My agent will be in touch with yours, I guess.
I chat with these friends a lot but I rarely send articles that I suggest they read and that I think are profound, so I expected them to read it. These are smart people that have a history of reading lots of books.
They are both huge AI proponents now and use AI for nearly everything now. Debates on various topics with them used to be rich; now, they're shallow and they just send me AI summaries of points they're clearly just predisposed to. Their attention spans are dwindling.
[1] https://aphyr.com/data/posts/411/the-future-of-everything-is...
Not meant to be snarky. It's been two decades now since my first wide-eyed entry into the workforce, moving for new opportunities, meeting new people. it's been great. There's a lot of smart people out there. I also realize that many people I seen as smart had more access to more content then i did. i still appreciated their sharing , it was enlightening to me. But after 20 years, I think back and it's literally quoting things from smart youtube videos. and regurgitating the latest thought leaders.
We all do this, but like you, what's meaningful to me is the chewing, the dissection and synthesis. coming together to share different perspectives and so on. i've had those friends too! it's just not 1:1
Maybe it's something like that AI allows them to indulge in their shallowness/laziness by giving them the impression that they're not doing that.
Recently, I have been taking intentional steps to avoid falling into the same tar pit. I've started corresponding over email with some of my friends, with us sending multi-page letters back and forth instead of just using chat apps. So far, it has been a wonderful breath of fresh air. Long form communication requires thought and time instead of superficial engagement, and we have had some incredibly interesting discussions that simply aren't possible over voice chat or instant messaging.
I read one of his last week? and didn't like it that much. I read this one despite it because its quite high on hn for whatever reason.
I don't think everything is lies and i don't like how he thinks a LLM is just some bullshit machine.
Its also waaaay to early to even understand were this is going. We as humans have never had that much compute and used it this particular way. It could literlay be the road to a utopia or dystopia. But its very crazy to experience it.
His article series feels so negative and dismissive, that i'm not taking anything from it.
There is so much more research, money and compute behind this AI topic right now, every week or two weeks something relevant better/new comes out of this. From 2d, 3d models, new LLM versions, smaller LLms, faster inferencing (Nvidias Nemotron), we don't know how this will continue.
And the weird thing is that he clearly knows plenty about LLMs but it feels so negative dismissive, hard to put a finger to it.
Rather than dismissive, I see it as effort intensive. The conclusions can be negative, but they've spawned so much discussion which i think is great.
(FYI, I didn't downvote your comment)
Also, I’m reading this comment thread instead of TFA because I didn’t find the previous part I read that great. And I’m not an AI proponent, more of an AI skeptic.
So my main concern here is that my experience may be a microcosm of the shallowing of discussions correlated with some people's increased use of AI. That worries me.
It's more of a meta point to me. I get that this series isn't landing for some people, yourself included, but the meta-observation is that given something of roughly equal substantiveness as before, these friends' motivations for long form content and discussion seem to have atrophied, perhaps largely due to the addition of the AI summary reality cipher to their lives.
Of course, correlation isn't causation. Maybe they both just got older and more lazy, but given their reliance on AI summaries in other debates happening recently, I'm worried.
I also enjoy the series. But sometimes my friends send me things and I'm like, "not gonna read all of that."
Just because you're friends don't want to invest the same amount of time that you want to invest in your own personal enrichment doesn't mean they're getting stupid.
> Your Brain on ChatGPT: Accumulation of Cognitive Debt when Using an AI Assistant for Essay Writing Task
> https://arxiv.org/abs/2506.08872
> Cognitive activity scaled down in relation to external tool use. …
> Self-reported ownership of essays was the lowest in the LLM group and the highest in the Brain-only group. LLM users also struggled to accurately quote their own work. While LLMs offer immediate convenience, our findings highlight potential cognitive costs. Over four months, LLM users consistently underperformed at neural, linguistic, and behavioral levels.
1) add age verification to comply
2) cease operating in the UK by blocking traffic
3) operate in violation of the law, making travel to anywhere the UK has jurisdiction dangerous
I don't think he's intending on changing UK society by blocking their traffic because, as you point out, that's not a good way to effect change.
Given how many people hate AI in general, I'm surprised there hasn't been anything like this happening. They could even get around the irony of using "AI" themselves, I bet low-tech language models like Markov chains could provide sufficient time wasting potential (I'd love to see it done with an old fashioned AIML chatbot). Asymmetric chatbot warfare.
It is an incredibly vexing situation to see whatever you're an AI hater or enthusiast.
I, for one, welcome our trillion parameters multiple layers overlords.
The pattern goes something like this:
- this development is bad
- companies will be unrestrained in their use of this development
- there will be no rules so they can do whatever they want
- we are all fucked as a result
But then...propose that we make some laws to put rules around this stuff, also known as regulations and everybody goes "whoa hold up hold up hold up...I dunno about that part."
Dear friends - America has always been this way. Study your 19th and 20th century history. Companies will exploit the shit out of us unless we put some rules in place to prevent it. Yes, that might mean making less money in the short term as regulations cause friction. But in the long term it means we can have a better and actually livable society.
(For what it's worth I'm an American and not an uppity European or Australian taking potshots from across the pond; no offense to Euros or Aussies intended, love you guys)
The real issue is new amendments are needed. But that’s hard. You need 75% of states ratifying. And that seems impossible today on any topic.
The future of everything is lies... sure, well so was the past, what's your point?
When we say "snake oil salesmen" we literally had people selling these mineral oils, fraudulently, for a century. People yearn for a time when there wasn't an antagonistic relationship between buyer and seller, but it has never existed. There is only one way have that relationship, and it's by having a personal -- repeatable -- relationship with your seller, and that's usually expensive, even if that expense is not monetary. It just the game theory of repeated games.
The easiest way to establish this behavior is to be a regular at a bar while being a good patron that creates a positive experience for everyone else. There are multiple places I frequent where I have no doubt in my mind that if there was an emergency, they might ask me to help out in some way because I'm trustworthy. And they could hand me an envelope of money, knowing I'd not steal it, because it's not worth it for me to take a bit of money and ruin my welcome at this place.
That's not going to happen with online sellers, and it's not going to happen with most corporations. The promise was always "save money through economies of scale, such that you still win even though there is no relationship." For the most part, that's proven true. But as everything has been outsourced to China, and anyone can make a basic version of everything, we're running up against the limits of those benefits. And I truly believe that brands are going to start mattering a lot more going forward for marketplaces.
Shop at Costco because they care about their customers. In other areas where companies don't, you're going to have to do the legwork to find out who is going to treat you right. That sucks, and it'll probably be more expensive, but that's the antagonistic relationship that buyers and sellers have naturally. Regulation helps, yes, but it's not some cure all. The expectation that you can just go with the person with the lowest price and get quality services is not a thing that's every existed.
Incentives make the world go round, so even if people recognize the issue, they would rather it become someone else’s problem, than willingly harm their own future.
Of course, what those who call for regulation of AI or other nascent technologies are really saying is, "Unqualified and/or biased and/or corrupt and/or dull-witted people should make decisions that affect us all, based on incomplete, misleading or rapidly-evolving information, with the power to enforce them at gunpoint."
And then they wonder why other countries beat their own.
So yeah, when people around here look at "AI" and all the harm it's already doing, they don't any hope that regulations will be put in place before more harm is done.
Nothing more, nothing less
Remember that the polygraph still exists, now we will be dealing with a massive portion of the decision makers will treat as artificial inteligence not in the technical sense we use, but as real inteligence, maybe even super-inteligence.
The average person on the street is familiar with consumer-facing AI but doesn't think it's really alive/magic/the solution to everything. Our supposed best-and-brightest are the ones flogging the horse.
For the "bureaucracy has royally fucked up and doesn't want to fix it", if it is something that can be fixed with money and isn't time sensitive (e.g. you need a refund rather than get the airline to actually provide you the ticket you already paid for and want to fly this weekend): In countries that have effective small claims courts, these can be a surprisingly convenient (less hassle than the "talk to the bot" wall of the company!) to resolve this kind of issue.
I hope that these resolution methods become more common - I think the tools to fight enshittification often already exist, we just don't use them enough. A welcome side effect would, of course, be that this would impose a real cost on the enshittifiers, creating an incentive to provide proper support.
Also, if the effective tools do exist, count on American companies using the American bribery-based political system to change the laws to dull those tools or to eliminate them.
Again, if you live in some consumer-friendly country good for you, I’m just saying how it plays out in this one society. I’ll stipulate that it’s all our fault blah blah blah.
I knew that one time I needed a free Sam's Club membership for one thing and they kept on dropping me...
Haha yes. I interacted with a bank one. It was like press 5 for mortgages but with a text to speech front end.
At the end of the day the LLM can be tricked into doing anything.
We intuitively know that an employee will be punished and may get fired if we trick them. Many of us won't try to trick human employees as a result, because we would feel bad if they had bad consequences as a result of our trickery
There is likely no such hesitation around tricking LLMs. I know I personally wouldn't feel bad about it at all. Mostly because any computerized customer service process is annoying so anything I can do to limit my time dealing with it is a win in my books
"Yes, we cost more, but your get what you pay for" can be a good play.
> People are very excited about “agentic commerce”. Agentic commerce means handing your credit card to a Large Language Model, giving it access to the Internet, telling it to buy something, and calling it in a loop until something exciting happens.
I think you're confusing this for the other side of things. The article talks about how some people already use OpenClaw and the variations, give them access to bunch of stuff including cards to purchase things (sometimes virtual and limited cards), I think that's what the article talks about when they say "agentic commerce".
Obviously a intentional simplification in the language the author uses, but I think it gets the point across at least.
Most of these annoyances are also things that existed before AI, and will continue to exist after, because consumerist capitalism. The good little obedient consumers get abused because they don't stand up for themselves. Customer service is an enfuriating maze? Yeah, because you voted with your dollars (and political indifference) to allow companies to make customer service (the thing you pay for) worse. We bring these problems on ourselves. It's pointless to complain if you aren't willing to do anything to change it. (And if you think you can't change it, there's other nations to look at, as well as the fact that you live in a democracy - for now - unlike the rest of the world)
Hell, we already have companies whose sole purpose is to manage your subscriptions for you because you're too lazy to do it yourself. You could look at this and say, man, the world is terrible! Or you could look at this and say, man, how great is my life that I can not only subscribe to a lot of things without going bankrupt, but I have extra cash left over to pay a company to manage my subscriptions?
Don't let the hedonic treadmill and complacency trick you into A) accepting a worse life, or B) convincing yourself your life is bad when it's actually better than most people's.
As the author said:
>I suspect that like the job market, everyone will wind up paying massive “AI” companies to manage the drudgery they created.
It could lead to significant progress in every single research area.
I'm at least very impressed about the amount of open models and that it doesn't hold up that the gap between public and private diverges massivly. Public is probably one year behind.
That is not known. Things could easily just get worse, and IMO that is far more likely. Every civilisation has collapsed, ours is clearly in decline, and AI could likely accelerate that decline.
I'm not exactly seeing the progress here. AI helps us write some software a bit faster? Doesn't seem revolutionary to me. Is it having any significant impact on peoples lives other than the various economic forces? I'm not seeing it.
> Yeah, because you voted with your dollars
In our system? No. In crony capitalism the companies who win do it through manipulating the political system. And when the government inflates the currency and destroys people buying power they simply cannot afford to "vote with their dollars". This is hilariously naive.
This is the definition of doomerism. Of all the ways each society has adapted to industrializaion, of all the technologies we've adapted to, and our continued use of all of them despite any negative impacts, you think an LLM is going to end things? An overcomplicated autocomplete?
There is no evidence whatsoever that suggests our society would collapse under AI. An imagination is not evidence. I mean, it's just a tool! Like the car, the telephone, the genetically engineered seed. Humans always freak out when they see something they don't understand. But it turns out there's no tool we have created that we can't control. We're stupid, but we're not idiots.
> Companies are now trying to divert support requests into chats with LLMs
More than trying they are doing it very successfully and for a long time now
I do agree things can still get 10x worse than even the current state though
> When you talk to a person, there’s a “there” there—someone who, if you’re patient and polite, can actually understand what’s going on
I've found they have been trained to be machine-like for many years now and not actually help. They focus on empathy and understanding and caring about your needs...and diverting your attention away from actually resolving the issue. Here's an example recently I experienced:
I complained to my bank about how they show refunds on the app. I got a call from a lovely sounding lady who used a comforting tone to ask if I had any special needs she needs to be aware of so she can "provide extra support" .
At beginning I made it clear I'm not chasing a particular refund but rather raising a specific complaint about how refunds are shown in the app. 4 times she mentioned a specific refund assuring me that it's been refunded, ignoring everything I said at the beginning. She explained how refunds work. She explained how pending transactions work (all off topic). She explained 3 times about how they can't (won't) create a feedback loop and begged my permission to close off the complaint, saying she hopes I can understand.
That was all a very dressed up "I'll pass your feedback to the PM. Thanks" but it was whole ridiculous long phone call trying to make me feel "heard" and I came away feeling like a 15 year old
It's certainly worth discussing the fact that the entire industry is starting to outsource large amounts of our thinking and writing work to non-sentient statistical algorithms, but this discussion needs to honestly confront the extent to which they are successfully completing useful tasks today.
Lots of blaming LLMs but I think the root cause lies elsewhere, I’m not even sure whether dismissing it as “capitalism” or “profit motives” would do it justice, because in general it feels more like the world that we live in lacks humanity.
Even in a capitalist world, a company could take a stance and decide not to purposefully screw people over, but in the world that we live in instead they look for ways to better screw over people and extract more money from them. It doesn’t matter whether your customer support is handled by someone from India, a crappy telephone tree or some voice model, when the incentive is the same - to do the bare minimum for customer “support” (in practice, just getting you to fuck off). Same for handling insurance claims and “dynamic pricing” of things - it doesn’t matter whether it’s some proprietary algorithm or just an LLM making crap up when the goal is to screw you over.
Blaming “AI” for all of this would be barking up the wrong tree (without that tech they’d just find other ways), though one can definitely acknowledge that this technology provides another convenient scapegoat, same as how you can lay employees off and just say cause it’s because of AI when in actuality it’s just greed and wanting to make your books look better.
It would be great if people chose not to do business with the former, but many simply do not care. They may think only other people get screwed. They may not take the time to think about it, especially if the company spends a ton of money obfuscating their misbehavior. Quite a few actively defend the right of companies to screw them.
Technology multiplies that like a lever. We weren't prepared for capitalism before LLMs and we're massively under-prepared now.
Payment processing, is better than it was in 2000, but still not good.
Micropayments: this is obnoxiously expensive to do.
Discovery, and discoverability: again here we have better but not good solutions (and many of the ones that were once good are enshitified).
Pricing: this is a problem everywhere, and frankly we need the law to change in a way that is pro consumer. Publishing prices, disclosure of fees, in both services and for payment processing (that 3 percent back from visa looks a lot less attractive when it's part of a 5 percent mark up).
Customer service: well there are already companies promoting models where they cut you off and send you into a black hole (google is a prime example). Good customer service will become a differentiator, and maybe a "paid for" service as well.
This does not matter without antitrust, which is why customer service became bad in the first place. 30 years ago, the low quality of customer service we complain about now simply didn't exist, at any size or professional level of business, and never had.
If a company back then had the customer service of the average company now, or even the average government agency now, people would have suspected that it was a covert front for criminals or spies.
If a company doesn't have to compete, it can cut everything until it only has the ghost of a product and a billing department. You don't boycott monopolies, monopolies boycott you. If three companies put you on a list to not have internet, phone service, a bank account or a credit card, etc., you just can't have them. You've become a European human rights judge.
It’s already bad. I’m not looking forward to the future. These systems are terrible. It’s a future without people that they want for some reason. I’d rather deal with people incompetent, tired, annoyed people than an LLM.
The company that deployed the LLM is lying to you. The people who made that decision are the ones who are culpable.
We both agree that it’s terrible.
I think it’s important to have an enforcement mechanism to force companies to do what they are responsible for doing. An Anti-Kafka Law, so to speak.
It’s not LLMs replacing workers, it’s people. People who have a lot of money and don’t sell their labour for a paycheque. And the systems that compel them to such actions.
LLM when it came out, was perfect as an interface between a system and a normal human.
So many people call customer support for issues they could in theory fix themselves. If that LLM system can understand me well enough, its an okay interface.
In worst case you have to escalate anyway. My mum actually told me that she talked to some AI.
And yes normal systems are also not correct often enough. With AI/LLM software will get cheaper which should incresase quality overall.
I dont think ai/llm in this case will change anything.
Relevant change will happen due to the fact that humans can be replaced by AI/LLMs. It was not even imaginable a few years back how a good ai system would even look like. Translaters lost their jobs, basic arists lost their jobs. Small contracts for basic things are gone. The restaurant poster no one cares? AI. The website translation for some small business? no one cares.
Statements like this make me feel like I live in a different universe with a different implementation of LLMs than other internet commenters.
I mean, did you not write with ChatGPT and were surprised how well it response?
I'm schocked how well i can talk to an AI through some app like Gemini or ChatGTP. A few years ago i couldn't imagine building such a generic system which such high quality of understanding.
I was playing around with dragon naturally speaking and similiar dictation tools 10 years ago and it was horrible. And that software is expensive.
If you look how normal people use a computer, they are slow just because they don't understand basic drag and drop. Or they are unable to just create some java or php script to convert some data or clean up some data. I would just write a php script reading some csv file and converting stuff around and was faster than everyone around me.
Tool calling is bonkers.
And i tried to break GPT-3, i can literaly write an english sentence and just dropin german words, it was already that good.
Its often enough shitty in doing exactly what i want, but the quality is massive to everything we had before. Massive.
An LLM can enhance the interface of a system and can be really useful in that despite its imperfections. But that’s a very different claim.
How many people can change the time on their microwave?
How many people can ask an LLM through voice or text to change the time of the microwave?
A LLM is an interface to a service if you add a MCP Server. Now i can ask Jira things like "hey whats my current task? And what do i need to do?"
Its also an interface to documentation. I asked it to help me build up a hugo templating based website because just reading the hugo docs did not help me as much as the LLM did (and that was 2 years ago).
In best case, as long as an LLM is not AGI or ASI, we have good tools with validation behind the LLMs before the LLM becomes the system itself.
What about configuring your Jira views, and then bookmark the resulting URL with a nice name like "Jira: Tasks in Progress" or "Jira: Important Tickets". That would be way faster than any LLM prompting.
> Its also an interface to documentation. I asked it to help me build up a hugo templating based website because just reading the hugo docs did not help me as much as the LLM did (and that was 2 years ago).
Those kind of claims would be better if the person has written down the goals before the activity and then score the end result according to those goals. A lot of time, there's a lot of post-rationalization (like "I spent time on it so the result must be good"), especially from non-expert.
Only if you care about doing things fast.
Nonetheless, I always also see this in the 'with continues progress, this will become extreme good fast' and my estimate is 5-15 years for significant progress with meaningful impact.
And it's a one day old account.
I do not profit from AI but I think the cat is out of the bag. We have companies like google who has so much money, R&D in AI is just something they can afford.
We then have other companies like Microsoft who have to do AI because Google is doing it.
And then we have whole countries who fight the AI race. USA vs. China (and in theory EU but Mistral is not making waves eh?).
So for now, the progress is staggering fast and I do believe whatever critisism people have, you need to spend relevant time following and keeping up with AI to take the right action in time. Decisions regarding long term investment, using AI tools properly instead of getting fired or even funding your own small company and filling a niche.
From a pure nerd pov: Its crazy! Srsly i can generate images and videos and i can talk to a computer and i can generate songs and .... I mean I wished to be alive when Linus asked on a mailing list about people being interested in Linux but this is now something im alive for.
And it solves plenty of problems for me i haven't had any good solutions. Especially the quality of parsing random texsts into semantic json.
You have no argument here. Make an argument then we can talk. Right now it’s going in circles.
My point with my mum should imply that it was successful but for sure at least you are pointing something out and now we can talk about it: My mum talked to an AI and it helped her.