NHacker Next
Enterprises are getting stuck in AI pilot hell, say Chatterbox Labs execs (theregister.com)
neepi 9 hours ago [-]
One of my former contract outfits is there right now. Two failed projects so far, one of which impacted customers so badly that they ended up in trade press. The other one wrote off 5% of revenue with nothing to show.

No, you can't solve everything with a chatbot, but your CEO needs an AI proposition or he's going to look silly down at the golf course with all the other CEOs, who certainly aren't talking about how theirs are failing...

tough 8 hours ago [-]
this does make sense, but there are infinitely many ways to use AI in the workplace. i gotta wonder how much bad consultants just trying to sell services are to blame here, at least as much as the CEOs trying to shoehorn in products nobody asked for, i guess
csomar 7 hours ago [-]
They are roughly as bad as the "blockchain" consultants who want to install a blockchain in your company. The value is in the sale, which is why they have zero technical expertise.
nikanj 6 hours ago [-]
It's a match made in heaven, with a buyer who just wants to report to the board that they have successfully invested in $fad, and a seller who knows the buyer is mostly motivated by the opportunity to put money towards $fad.
ben_w 3 hours ago [-]
Ah, a monorail project.

(Simpsons kind, I don't know enough about civil engineering to comment on the real one).

steveBK123 5 hours ago [-]
Bad consultants exist to facilitate bad CEOs/CTOs.

"I have to do some __ / have a __ strategy / hire a Head Of __ or I look bad"

blitzar 5 hours ago [-]
We are selling to willing buyers at the current fair market price.
steveBK123 5 hours ago [-]
To a degree, yes.

There are a lot of leaders who are looking for problems for their solutions.

edit: I say this as someone who has been stuck on top-down POCs which I later found out originated from "so my brother-in-law has this startup", where the management questions were mostly "so how could we use this here?" rather than "how is it performing / is it good value / does it solve the problem we want it to solve".

Some tech cannot fail, it can only be failed.

EndsOfnversion 4 hours ago [-]
You will never sell anything to any of those people ever again.
steveBK123 3 hours ago [-]
This is it! I'm telling you! This is it!
arethuza 3 hours ago [-]
But that is spilt milk under the bridge.
neepi 8 hours ago [-]
Yes. They don't know what AI actually is or what its capabilities are, and the companies selling integrations are running on hope rather than technical competence and suitability. So it gets applied to unsuitable problem domains and fails.
matt3210 8 hours ago [-]
They vibe coded everything, so it's basically second-year CS student level work and security.
tough 8 hours ago [-]
I hate consultants, their incentives are all whack from the beginning.

Hopefully more companies will encourage their own employees to explore how AI can fit into or improve their current workflows, rather than hoping for some magical thinking to solve their problems.

SirBomalot 8 hours ago [-]
I currently have to deal with such consultants. They want to sell their magical AI black box.

Speaking with the consultants leads me to assume that they too are getting pressure from the top to do AI stuff, maybe because they fear they'll otherwise be replaced by AI. It seems somewhat desperate.

delusional 8 hours ago [-]
I have seen no consultants directly selling this yet. To me it looks like this is all coming at the CEO "organically", or at least through the same channels that it's coming to the rest of us.

At my job it's been coming through the regular channels, but is empowered by aligning with current trends. It's easier to sell an AI project, even internally, when the whole world is talking about it.

tough 8 hours ago [-]
right, it feels like it's more a pull than a push, but what i meant is that any of the big consultancies are happy to take customers with -absurd- requests and not finish the job, cause they get paid regardless.

maybe they're not directly pushing AI (cause they don't need to), but they're happy to accept shitty jobs that make no sense just because they can

prmoustache 3 hours ago [-]
How does that change compared to anything else consultancies are paid for?
delusional 5 hours ago [-]
> right it feels like its more a pull than push

I don't think that's the right distinction to draw here. It's definitely being pushed, just not by consultants.

> big consultancies are happy to take customers with -absurd- requests

This is of course always true. Consultants usually don't really care where they make the money; as long as you pay them, they'll find someone stupid enough to take on your task.

That's not what I'm seeing though. We're not hiring outside consultants to do big AI projects; we have people within our organization who have been convinced by the public marketing and are pushing for these projects internally. I'm not seeing big consultancies accepting contracts, I'm seeing normal working people getting consultant brain and taking this as their chance to sell a "cutting edge" project that'll disrupt all those departments whose work they don't understand.

tough 2 hours ago [-]
greenfield projects have always been a way to be -seen- in big corps i guess.

AI is now the vector du jour for getting an easy YES from command.

Sad state of affairs i guess. at least put in the effort to know wtf you want to build and, more importantly, WHY or HOW it is better than current solutions

matt3210 8 hours ago [-]
Who in their right mind would intentionally deploy non-deterministic, unreviewable and unprovable software to critical systems?
smodo 7 hours ago [-]
My colleagues at the head of a company. I’m one of four bosses. One of us is pushing for AI every single meeting. Another is ignoring her. The last one is starting to ‘see her point.’ I’m considering quitting if this goes too far, but I'm unwilling to make that threat yet, as it’s a bridge I can only cross once.

Anyway. To me it just speaks to the disdain for semi-intellectual work. People seem to think producing text has some value of its own. They think they can short-circuit the basic assumption that behind every text is an intention that can be relied upon. They think that if they substitute this intention with a prompt, they can create the same value. I expect there to be some kind of bureaucratic collapse because of this, with parties unable to figure out responsibility around these zombie-texts. After that begins the cleaning up, legislating, and capturing in policy what the status of a given text is, etc. Altman & co will have cashed out by then.

mirekrusin 5 hours ago [-]
It's interesting to still hear this kind of sentiment.

> People seem to think producing text has some value of its own.

Reading this sentence makes me think the author has never actually seen agentic work in action. Producing value out of text does work, and one good example is putting the model in a loop with some form of verification of its output. That's easy to do with programming (type checker, tests, linter, etc.), so it can iterate on its own results until the problem is solved.

I also find it personally strange that discussions so often need a reminder that the rate of change in capabilities is also a big part of "the thing" (as opposed to pure capabilities today). It changes on a weekly/monthly basis, and it changes in one direction only.

dijksterhuis 4 hours ago [-]
i think you might have misunderstood the meaning of “producing text” in the parent comment.

the kind of people the parent comment was talking about tend to believe they can send three emails and make millions of pounds suddenly appear in business value (i’m being hyperbolic and grossly unfair but the premise is there).

they think the idea is far more valuable than the implementation - the idea is their bit (or the bit they’ve decided is their bit) and everyone else is there to make their fantastic idea magically appear out of thin air.

they aren’t looking at tests and don’t have a clue what a linter is (they probably think it’s some fancy device to keep lint off their expensive suits).

dustingetz 6 hours ago [-]
the essence of man is blind spots and hubris
christophilus 5 hours ago [-]
Anyone who isn’t a software engineer. There is so much hype that non-technical people have bought into.

Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

mathgeek 3 hours ago [-]
> Their tech teams should know better, but it’s hard to say “no”, when it feels like your salary depends on you saying “yes”.

There's some truth to the difference between "short term profits" and "my salary depends on this" being whether you're the boss or the employee.

gamblor956 2 hours ago [-]
DOGE would and did. Results were as expected... complete failure.
rurban 1 hours ago [-]
If you train it on the right data, there is no security risk. It cannot know what it doesn't see. However, if you train it on internal secrets, they will leak, simple as that. Filtering will not help.

But this interview is only fear-mongering to sell expensive models. Ditching the industry leaders.

lo0dot0 7 hours ago [-]
The answers can be recorded and reviewed. The other points are true, or is there a way to make outcomes deterministic relative to previous versions, while still allowing more knowledge to be added in newer versions?
vintermann 7 hours ago [-]
It's possible to make any model deterministic. It used to be as simple as saving the seed, but I'm not sure it still is now that everything is distributed. Maybe it takes a little more effort.
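The "save the seed" point can be illustrated with a toy stand-in for a model's sampling step (this is not an LLM, just `random`; as noted below in the thread, distributed inference adds floating-point and batching effects that a seed alone may not pin down).

```python
import random

def sample_tokens(seed: int, n: int = 5) -> list[int]:
    # Toy stand-in for a model's sampling step: with the seed fixed,
    # the "random" choices are fully reproducible on the same machine.
    rng = random.Random(seed)
    return [rng.randrange(50_000) for _ in range(n)]

replayable = sample_tokens(42) == sample_tokens(42)  # identical runs with the same seed
```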
dustingetz 6 hours ago [-]
determinism isn’t really enough, we want “predictable”. Most of these AI wavefunctions are “chaotic”: tiny changes in state can cause wildly divergent outcomes
lo0dot0 6 hours ago [-]
A part of my question that you didn't go into was: can new knowledge be added in a new version without making the answers based on knowledge learned in previous versions non-deterministic?
Yoric 5 hours ago [-]
But that won't survive an upgrade, will it?
kevingadd 4 hours ago [-]
This requires an exact lock-down of things like the hardware and driver version, doesn't it? Is that sustainable?
nikanj 6 hours ago [-]
Someone who was ordered by their boss to deploy it, and made sure to get the instructions in writing - with their protests also in writing.
red75prime 5 hours ago [-]
Anyone who doesn't fully understand current differences between existing non-deterministic, unreviewable and unprovable agents (humans) and the artificial ones.
moron4hire 5 hours ago [-]
Someone who is really pissed off at how much they have to rely on software developers to run their business. They should not have so much power and direction in the company. I mean, they don't even have memberships at the country club!
calebkaiser 6 hours ago [-]
Before getting too invested in any conclusions drawn from this piece, it's important to recognize this is mostly PR from Chatterbox.

From the Chatterbox site:

> Our patented AIMI platform independently validates your AI models & data, generating quantitative AI risk metrics at scale.

The article's subtitle:

> Security, not model performance, is what's stalling adoption

bsenftner 4 hours ago [-]
Chatterbox's PR money is being well spent. This article squarely places them in the center of that trillion dollar revenue stream. Marketing dollars very well spent.
stopthe 4 hours ago [-]
https://www.chatterbox.io/ "Corporate language training powered by marginalised talent" - is that satire? Did I find the wrong Chatterbox?
simonw 3 hours ago [-]
That's the wrong one. https://chatterbox.co/
sbarre 3 hours ago [-]
AI and vibe coding let you get that rough prototype up and running so much faster than before, and so create that illusion of momentum and completeness more than ever.

How many people here have been subjected to that "looks good, put it in production!" directive after showing off a quick POC for something? And then you have to explain how far away from being production-ready things are, etc...

There's a reason wireframing tools intentionally use messy lines, and why most UX people know better than to put brand colours in wireframes.

bowsamic 3 hours ago [-]
Prototypes are very dangerous. Our team made the mistake of having our demo look very nice even though there was still a lot of unseen work to do. Now upper management of course thinks “this is ready, just send it out”. Prototypes live forever; no upper manager will want you to spend time on the real thing. It is unsafe for the project to come across too well.
kevin_thibedeau 3 hours ago [-]
They've got to finish their blockchain deployment first. Then It'll all go smoothly.
bowsamic 3 hours ago [-]
Slavoj Zizek says that the truly terrifying situation is when the leaders act knowing they no longer need to justify their actions. I am currently in this fight with our upper management. I ask why this push for AI, what it will do for our product, why we are making huge cuts to the scope of the project to rebrand it as an AI project. All I receive are bad, confused responses. Of course, it’s just none of my business; they are the leaders.
lofaszvanitt 3 hours ago [-]
LLMs should be trained on CEOs and middle management and of course politicians. Society would be very grateful.
gsky 4 hours ago [-]
I have been using AI models to build two projects at the moment. It's not perfect (30% wrong), but it solves problems so quickly and cheaply that I'll continue to use it going forward.

As a software engineer I want everything to be perfect; as an entrepreneur I don't.

add-sub-mul-div 3 hours ago [-]
Temu also solves a problem quickly and cheaply, but I wouldn't make it my whole wardrobe strategy unless I was too poor to solve the problem a better way.
Garlef 7 hours ago [-]
Doesn't this mean: There's room for disruption/land grab?

If the big corporations can't move fast enough and 100 startups gamble on getting there, eventually one of them will be successful.

nikanj 6 hours ago [-]
A hundred startups also gamble on perpetual motion, and their arguments always come from a place of "perpetual motion would revolutionize markets and there is strong demand", never from a place of "we have figured out how to alter laws of physics and make it possible"
Pmop 6 hours ago [-]
And a lot of them cannot get up to speed even when they want to. Many big corporations struggle with evolution and innovation due to crippling bureaucracy, created and supported by risk-averse leadership. This is usually worse for publicly traded companies.

Unless it's something like Meta, which has a Zuck: someone smart, with enough oversight and power to drain the swamp and make the whole machine move.

owebmaster 6 hours ago [-]
Zuckerberg made a genius move from Web 2.0 to the current smartphone era we still live in. But I would not bet on his capability to do it again; he failed badly with the metaverse and so far is failing with AI.