Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs (theregister.com)
neepi 243 days ago [-]
One of my former contract outfits is there right now. Two failed projects so far, one of which impacted customers so badly that they ended up in the trade press. The other wrote off 5% of revenue with nothing to show.

No, you can't solve everything with a chatbot, but your CEO needs an AI proposition or he's going to look silly down at the golf course with all the other CEOs, none of whom are talking about how theirs are failing...

tough 243 days ago [-]
This does make sense, but there are infinite ways to use AI in the workplace. I've got to wonder how much of the blame lies with bad consultants just trying to sell services, at least as much as with CEOs trying to shoehorn in products nobody asked for, I guess.
csomar 243 days ago [-]
They are roughly as bad as the "blockchain" consultants who want to install a blockchain in your company. The value is in the sale, which is why they have zero technical expertise.
nikanj 243 days ago [-]
It's a match made in heaven, with a buyer who just wants to report to the board that they have successfully invested in $fad, and a seller who knows the buyer is mostly motivated by the opportunity to put money towards $fad.
ben_w 243 days ago [-]
Ah, a monorail project.

(The Simpsons kind; I don't know enough about civil engineering to comment on the real one.)

delusional 243 days ago [-]
I have seen no consultants directly selling this yet. To me it looks like this is all coming at the CEO "organically", or at least through the same channels that it's coming to the rest of us.

At my job it's been coming through the regular channels, but is empowered by aligning with current trends. It's easier to sell an AI project, even internally, when the whole world is talking about it.

tough 243 days ago [-]
Right, it feels like it's more a pull than a push, but what I meant is that the big consultancies are happy to take customers with -absurd- requests and not finish the job, because they get paid regardless.

Maybe they're not directly pushing AI (because they don't need to), but they're happy to accept shitty jobs that make no sense, just because.

delusional 243 days ago [-]
> right it feels like its more a pull than push

I don't think that's the right distinction to draw here. It's definitely being pushed, just not by consultants.

> big consultancies are happy to take customers with -absurd- requests

This is of course always true. Consultants usually don't really care where they make their money; as long as you pay them, they'll find someone stupid enough to take on your task.

That's not what I'm seeing, though. We're not hiring outside consultants to do big AI projects; we have people within our organization who have been convinced by the public marketing and are pushing for these projects internally. I'm not seeing big consultancies accepting contracts, I'm seeing normal working people getting consultant brain and taking this as their chance to sell a "cutting edge" project that'll disrupt all those departments whose work they don't understand.

tough 243 days ago [-]
Greenfield projects have always been a way to be -seen- in big corps, I guess.

AI is now the vector du jour for getting an easy YES from the top.

Sad state of affairs, I guess. At least put in the effort to know wtf you want to build and, more importantly, WHY or HOW it's better than current solutions.

prmoustache 243 days ago [-]
How is that different from anything else consultancies are paid for?
steveBK123 243 days ago [-]
Bad consultants exist to facilitate bad CEOs/CTOs.

"I have to do some __ / have a __ strategy / hire a Head Of __ or I look bad"

blitzar 243 days ago [-]
We are selling to willing buyers at the current fair market price.
steveBK123 243 days ago [-]
To a degree, yes.

There are a lot of leaders who are looking for problems for their solutions.

edit: I say this as someone who has been stuck on top-down POCs which I later found out originated from "so my brother-in-law has this startup", where we got management questions that were mostly "so how could we use this here?" rather than "how is it performing / is it good value / does it solve the problem we want it to solve?".

Some tech cannot fail, it can only be failed.

EndsOfnversion 243 days ago [-]
You will never sell anything to any of those people ever again.
steveBK123 243 days ago [-]
This is it! I'm telling you! This is it!
arethuza 243 days ago [-]
But that is spilt milk under the bridge.
blitzar 243 days ago [-]
Please, speak as you might to a young child, or a golden retriever.
neepi 243 days ago [-]
Yes. They don't know what AI actually is or what its capabilities are, and the companies selling integrations are running on hope rather than technical competence and suitability. So it gets applied to unsuitable problem domains and fails.
matt3210 243 days ago [-]
They vibe-coded everything, so it's basically a second-year CS student's level of work and security.
tough 243 days ago [-]
I hate consultants; their incentives are all whack from the beginning.

Hopefully more companies will encourage their own employees to explore how AI can fit into their current workflows, or improve them, rather than hoping for some magical thinking to solve their problems.

SirBomalot 243 days ago [-]
I currently have to deal with such consultants. They want to sell their magical AI black box.

Speaking with the consultants, I get the impression that they too are under pressure from the top to do AI stuff, maybe because they fear they'll otherwise be replaced by AI. It really seems somewhat desperate.

matt3210 243 days ago [-]
Who in their right mind would intentionally deploy non-deterministic, unreviewable and unprovable software to critical systems?
smodo 243 days ago [-]
My colleagues at the head of a company. I'm one of four bosses. One of us is pushing for AI every single meeting. Another is ignoring her. The last one is starting to 'see her point.' I'm considering quitting if this goes too far, but I'm unwilling to make that threat yet, as it's a bridge I can only cross once.

Anyway. To me it just speaks to the disdain for semi-intellectual work. People seem to think producing text has some value of its own. They think they can short-circuit the basic assumption that behind every text is an intention that can be relied upon. They think that if they substitute this intention with a prompt, they can create the same value. I expect there to be some kind of bureaucratic collapse because of this, with parties unable to figure out responsibility around these zombie-texts. After that begins the cleanup: legislating and capturing in policy what the status of a given text is, etc. Altman & co will have cashed out by then.

mirekrusin 243 days ago [-]
It's interesting to still hear this kind of sentiment.

> People seem to think producing text has some value of its own.

Reading this sentence makes me think the author has never actually seen agentic workflows in action? Producing value out of text does work, and one good example is putting the model in a loop with some form of verification of its output. It's easy to do with programming (type checker, tests, linter, etc.), so it can iterate on its own results until the problem is solved.
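A minimal sketch of that verify-loop, assuming a hypothetical llm() that returns Python source; the verifier here just runs the generated script, but a type checker or test suite slots in the same way:

    import pathlib
    import subprocess
    import sys
    import tempfile

    def llm(prompt: str) -> str:
        raise NotImplementedError("stand-in for any real code-generating model")

    def solve(task: str, max_rounds: int = 5) -> str | None:
        prompt = task
        for _ in range(max_rounds):
            code = llm(prompt)
            script = pathlib.Path(tempfile.mkstemp(suffix=".py")[1])
            script.write_text(code)
            # verification step: does the generated script run cleanly?
            result = subprocess.run([sys.executable, str(script)],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # verifier passed
            # feed the model its own errors and go around again
            prompt = f"{task}\n\nFix this error:\n{result.stderr}"
        return None  # gave up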

I also find it strange that discussions so often need the reminder that the rate of change in capabilities is a big part of "the thing" (as opposed to pure capabilities today). It changes on a weekly/monthly basis, and it changes in one direction only.

dijksterhuis 243 days ago [-]
i think you might have misunderstood the meaning of “producing text” in the parent comment.

the kind of people the parent comment was talking about tend to believe they can send three emails and make millions of pounds suddenly appear in business value (i’m being hyperbolic and grossly unfair but the premise is there).

they think the idea is far more valuable than the implementation - the idea is their bit (or the bit they’ve decided is their bit) and everyone else is there to make their fantastic idea magically appear out of thin air.

they aren’t looking at tests and don’t have a clue what a linter is (they probably think it’s some fancy device to keep lint off their expensive suits).

dustingetz 243 days ago [-]
the essence of man is blind spots and hubris
christophilus 243 days ago [-]
Anyone who isn’t a software engineer. There is so much hype that non-technical people have bought into.

Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

mathgeek 243 days ago [-]
> Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

There's some truth to the idea that the difference between "short term profits" and "my salary depends on this" is whether you're the boss or the employee.

nikanj 243 days ago [-]
Someone who was ordered by their boss to deploy it, and made sure to get the instructions in writing - with their protests also in writing.
gamblor956 243 days ago [-]
DOGE would and did. Results were as expected... complete failure.
moron4hire 243 days ago [-]
Someone who is really pissed off at how much they have to rely on software developers to run their business. They should not have so much power and direction in the company. I mean, they don't even have memberships at the country club!
lo0dot0 243 days ago [-]
The answers can be recorded and reviewed. The other points are true, though; or is there a way to make outcomes deterministic relative to previous versions while still allowing new knowledge to be added in newer versions?
vintermann 243 days ago [-]
It's possible to make any model deterministic. It used to be as simple as saving the seed; I'm not sure that still holds now that everything is distributed. Maybe it takes a little more effort.
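A toy illustration of the save-the-seed idea, single process only (distributed inference adds other sources of nondeterminism, as noted above):

    import random

    def sample_tokens(seed: int, vocab=("yes", "no", "maybe"), n=5):
        rng = random.Random(seed)  # private RNG with a fixed, saved seed
        return [rng.choice(vocab) for _ in range(n)]

    # same seed, same samples, on every run
    assert sample_tokens(42) == sample_tokens(42)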
dustingetz 243 days ago [-]
Determinism isn't really enough; we want "predictable". Most of these AI wavefunctions are "chaotic": tiny changes in state can cause wildly divergent outcomes.
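The logistic map is a classic toy example of that distinction: fully deterministic, yet two starting points differing by one part in a billion diverge completely within a few dozen iterations:

    def logistic(x, steps=50, r=4.0):
        # x_{n+1} = r * x_n * (1 - x_n), chaotic at r = 4
        for _ in range(steps):
            x = r * x * (1 - x)
        return x

    print(logistic(0.3))         # deterministic: same value every run
    print(logistic(0.3 + 1e-9))  # ...yet nowhere near the line above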
lo0dot0 243 days ago [-]
A part of my question that you didn't go into: can new knowledge be added in a new version without making the answers based on knowledge learned in previous versions non-deterministic?
dijksterhuis 243 days ago [-]
that’s not really how training works.

changing the input (data) means you get a different output (model).

source data has nothing to do with model determinism.

as an end-user of AI products, your perspective might be that the models are non-deterministic, but really it’s just different models returning different results … because they are different models.

“end-user non-determinism” is only really solved by repeatedly using the same version of a trained model (like a normal software dependency), potentially needing a bunch of work to upgrade the (model) dependency version later on.
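A sketch of that pin-the-model idea; the model ID below is hypothetical, and the point is simply that requests name an exact dated snapshot rather than a floating alias like "latest", so behaviour only changes when the pin is deliberately bumped:

    PINNED_MODEL = "acme-chat-2024-06-13"  # exact snapshot, checked into config

    def call_model(prompt: str, model: str = PINNED_MODEL) -> str:
        # a real system would hit an inference endpoint here; stubbed for the sketch
        return f"[{model}] response to: {prompt!r}"

    print(call_model("summarise Q3 bookings"))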

kevingadd 243 days ago [-]
This requires an exact lock-down of things like the hardware and driver version, doesn't it? Is that sustainable?
vintermann 241 days ago [-]
It shouldn't. It didn't use to, at least.
Yoric 243 days ago [-]
But that won't survive an upgrade, will it?
red75prime 243 days ago [-]
Anyone who doesn't fully understand the current differences between the existing non-deterministic, unreviewable, and unprovable agents (humans) and the artificial ones.
rurban 243 days ago [-]
If you train it on the right data, there is no security risk: it cannot know what it doesn't see. However, if you train it on internal secrets, they will leak, simple as that. Filtering will not help.

But this interview is only fear-mongering to sell expensive models by ditching the industry leaders.

calebkaiser 243 days ago [-]
Before getting too invested in any conclusions drawn from this piece, it's important to recognize this is mostly PR from Chatterbox.

From the Chatterbox site:

> Our patented AIMI platform independently validates your AI models & data, generating quantitative AI risk metrics at scale.

The article's subtitle:

> Security, not model performance, is what's stalling adoption

bsenftner 243 days ago [-]
Chatterbox's PR money is being well spent: this article squarely places them in the center of that trillion-dollar revenue stream.
stopthe 243 days ago [-]
https://www.chatterbox.io/ "Corporate language training powered by marginalised talent" - is that satire? Did I find the wrong Chatterbox?
simonw 243 days ago [-]
That's the wrong one. https://chatterbox.co/
sbarre 243 days ago [-]
AI and vibe coding let you get a rough prototype up and running so much faster than before, and so create that illusion of momentum and completeness more than ever.

How many people here have been subjected to that "looks good, put it in production!" directive after showing off a quick POC for something? And then you have to explain how far away from being production-ready things are, etc...

There's a reason wireframing tools intentionally use messy lines, and why most UX people know better than to put brand colours in wireframes.

bowsamic 243 days ago [-]
Prototypes are very dangerous. Our team made the mistake of having our demo look very nice even though there is still a lot of unseen work to do. Now upper management of course think "this is ready, just send it out." Prototypes live forever; no upper manager will want you to spend time on the real thing. It is unsafe for the project to come across well.
pragmatic 243 days ago [-]
The head of an engineering program told us to always make sure a prototype has at least one glaring bug/flaw.

His background was electrical engineering but it applies doubly in software.

nyarlathotep_ 243 days ago [-]
Was on a few of these as a consultant, all major F500 companies. Most recent was a few months ago.

Every instance was some variation on a RAG chat/LangGraph thing. On multiple occasions I heard "I don't see what value this has over ChatGPT", except they now had 5-6 figure cloud bills to go with it.

Technical users really weren't thrilled with it: they wanted usable insights from their data (something best served by a db query) but ended up with LLM copypasta of internal docs, and they expected significant functionality and utility on top of "regular" LLM use.

Stakeholders constantly complained (rightfully so) about inaccuracies in responses, or "why is this presented in this fashion", resulting in hours of the data team coming up with new prompts and crossing their fingers.

pragmatic 243 days ago [-]
So right back to basic data engineering/analytics?

“Why is this dashboard showing this number?”

That’s my concern with any data “insight” magic. How do you debug what it’s telling the users?

asudhakar11 243 days ago [-]
It should be able to tell you what assumptions it made: "Sales is $X because I assumed ARR and calendar year." You're then able to say "great, that's what I wanted" or "no, I want bookings and fiscal year."
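One hypothetical shape for that (all names invented for illustration): each answer object carries the assumptions it was computed under, so the user can accept or correct them:

    from dataclasses import dataclass, field

    @dataclass
    class Insight:
        answer: str
        assumptions: list[str] = field(default_factory=list)

    sales = Insight(
        answer="Sales is $4.2M",
        assumptions=["revenue basis: ARR", "period: calendar year"],
    )
    print(sales.answer, "| assuming:", "; ".join(sales.assumptions))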
asudhakar11 243 days ago [-]
Why weren’t you able to show usable insights from data?
bowsamic 243 days ago [-]
Slavoj Zizek says that the truly terrifying situation is when leaders act knowing they no longer need to justify their actions. I am currently in this fight with our upper management. I ask why this push for AI, what it will do for our product, and why we are making huge cuts to the scope of the project in order to rebrand it as an AI project. All I receive is a muddled, confused response. Of course it's just none of my business; they are the leaders.
kevin_thibedeau 243 days ago [-]
They've got to finish their blockchain deployment first. Then it'll all go smoothly.
lofaszvanitt 243 days ago [-]
LLMs should be trained on CEOs and middle management and of course politicians. Society would be very grateful.
gsky 243 days ago [-]
I have been using AI models to build two projects atm. Yes, it's not perfect (wrong maybe 30% of the time), but it solves problems so quickly and cheaply that I'll continue using it going forward.

As a software engineer I want everything to be perfect, but not as an entrepreneur.

add-sub-mul-div 243 days ago [-]
Temu also solves a problem quickly and cheaply, but I wouldn't make it my wardrobe strategy unless I were too poor to solve the problem a better way.
Garlef 243 days ago [-]
Doesn't this mean: There's room for disruption/land grab?

If the big corporations can't move fast enough and 100 startups gamble on getting there, eventually one of them will be successful.

nikanj 243 days ago [-]
A hundred startups also gamble on perpetual motion, and their arguments always come from a place of "perpetual motion would revolutionize markets and there is strong demand", never from a place of "we have figured out how to alter laws of physics and make it possible"
Pmop 243 days ago [-]
And a lot of them cannot get up to speed even when they want to. Many big corporations struggle with evolution and innovation due to crippling bureaucracy, created and supported by risk-averse leadership. This is usually worse for publicly traded companies.

Unless it's something like Meta, which has a Zuck: someone smart, with enough oversight and power to drain the swamp and make the whole machine move.

owebmaster 243 days ago [-]
Zuckerberg made a genius move from web 2.0 to the current smartphone era we still live in. But I would not bet on his ability to do it again; he failed badly with the metaverse and so far is failing with AI.
cowboylowrez 242 days ago [-]
hehe "drain the swamp" this guy knows how to "trump" the naysayers!