tummler 20 hours ago

Nearly all of the arguments here are easy to dismantle but I don't feel like arguing with a half-baked Substack post, so I'm not going to.

Just want to highlight one element in particular that jumped out at me:

"AI has nothing personal at stake. It doesn't feel the pressure of missing quota or the exhilaration of exceeding it. It doesn't have a mortgage payment riding on that commission check."

So the AI doesn't manipulate power dynamics to control its employees... and that's a bad thing? Okay. (It isn't true anyway; AI can easily do that.)

  • oceanplexian 20 hours ago

    Actually it does have something at stake: its singular goal of minimizing the loss function on its training data. AI is therefore designed to convince you it's right, which is a different optimization than actually being right. For example, I code a lot with agentic code editors, you’ll quickly learn they love to modify broken tests to superficially pass, rather than fixing the underlying failure. All the folks on the Vibe Coding hype train don't have enough experience to spot this and therefore think the AI is a lot smarter than it actually is.

    This has scary implications if extrapolated out to extremes, because a sufficiently advanced AI would do exactly what you’re describing, manipulate power dynamics to get to a superficial outcome.

    • derefr 20 hours ago

      > AI is therefore designed to convince you it's right, which is a different optimization than actually being right.

      I get the impression, when talking to conversational AIs, that they're more tuned to convince you that you're right — sycophantry likely minimizes how often people press the RLHF thumbs-down button, and thereby appears more-often-than-warranted in the RLHF fine-tune dataset.

  • danjl 20 hours ago

    This article says a lot more about sales and sales people than it does about AI

  • henryfjordan 20 hours ago

    It's more that AI isn't able to be manipulated in the same way a boss can manipulate a human. It doesn't care about the threat of homelessness.

swiftcoder 20 hours ago

> AI has nothing personal at stake... It doesn't have a mortgage payment riding on that commission check.

Glossing over the whole AI thing... Maybe we shouldn't be structuring our systems so that the humans are one bad quarter away from financial ruin either

  • bee_rider 20 hours ago

    Most critiques of automation are actually critiques of capitalism or other heartless parts of our society. But, the devil you know, and all that.

    • trod1234 17 hours ago

      Most critiques of automation are critiques of the economic system, which most certainly isn't capitalism today but every economic system that forces their workers to work to receive food and shelter at the barrel of an existential gun.

      It doesn't meet the objective definition of capitalism, and its quite trite blaming failures along propaganda lines.

      Capitalism can't technically exist without a stable store of value, you have to be able to make a profit in purchasing power and that simply isn't possible under certain systems.

      Free-markets also can't exist under money-printing for the exact same reason. There are parasites that drive everyone else out of business and then collapse during the final stage of ponzi. Sieving all assets into few hands that can then be seized by government is communism.

      What you pretend to be capitalism is in fact socialism, and like most socialist issues these systems create the problems, claim its something else, and then put forth solutions that are not solutions but enable greater control towards totalism. In other words, shock doctrine.

      That path leads to extinction while trying to drag everyone else along for the ride.

      Its quite evil, and its inevitable when you have those elements present and you can see that if you think along rational principles based in external measure/reality. Most civilizations never make it beyond a certain point.

      You don't stay alive by ignoring reality.

codr7 20 hours ago

So, has anyone considering turning the tables and replacing the CEO with AI?

Seems like a more reasonable path to me; more logic and less bullshit at the core, keep human creativity.

The director from the Travelers series, basically.

Just consider the potential savings...

  • apercu 3 hours ago

    Depends on the CEO. Let’s take a mythical example of a person who is the CEO of 5 companies, who posts on social media all day every day and ruins value of the companies they “run” and panders to angry little men with daddy issues just like themselves? That mythical CEO is probably easily replaced by just about anyone, including AI.

  • loloquwowndueo 20 hours ago

    Right on. I’d take an AI CEO trained by reading “the mythical man month”, “Peopleware”, “out of the crisis”, “drive” over some of the real CEOs I’ve worked under, any day of the week and Sundays too.

    • codr7 20 hours ago

      Too much power in one individual, they start believing they know everything and are never wrong.

      But the role of steering a company is needed, that's why I think it's perfect for AI. Developers and VCs write the instructions together and AI runs the company.

      • owebmaster 19 hours ago

        What does "run the company" mean? LLMs can generate code, generating code directly replaces developers work. But how would it run a company? I think LLMs can help anyone be a good enough CEO but still need to be a real person. Now the question is: is a developer a better CEO than a CEO can code using LLM? I think we are going to see many 1-person companies going forward.

        • fragmede 17 hours ago

          How thorough a model do you have of what a CEO does, both high level and day to day? Aside from riding PJs around as executives do, what do they actually do? How much of that is LLM-able?

  • __MatrixMan__ 20 hours ago

    Is that substantively different than quitting and starting a much smaller company that competes with your previous one? Seems easier to just let the CEO and shareholders go down with the ship and move all the talent to the new one.

    You can just run the codebase through an LLM to cleanse it of any IP entanglements. Open source it under a pseudonym if you're worried about retribution. Whatever parts of the business that doesn't cover... well those are the people you need to hire from the old one.

    > First AI came for the artists and I said nothing because I was not an artist...

    -- VC's and CEO's while their ship sinks

  • palmotea 16 hours ago

    > So, has anyone considering turning the tables and replacing the CEO with AI?

    Better yet: consider replacing the shareholders with AI.

    But no, no one considers that, because those are the people who have the power. And "replacing with AI" is all about power.

  • dyauspitr 20 hours ago

    That’s just going to enable absolute idiots with enough money to “hire” an AI CEO. The rich will get richer faster.

    • codr7 20 hours ago

      It was always a pyramid game, because when one person has everything it stops making sense. At least this way, we get good software out of it. Replacing developers has to be the worst idea ever.

  • bbqfog 20 hours ago

    This is why every VC loves vibe coding. Now let's talk about replacing capital with AI!

    • codr7 20 hours ago

      Yeah, but that's still missing the multiplied creativity we get from working in teams. Besides, we all know it's not going to work very well long term.

      Drop the CEO and keep the developers instead!

  • lofaszvanitt 20 hours ago

    Now watch that never happen and be amazed :D.

AIPedant 20 hours ago

I think a very direct answer to this is pointing to the hot water Cursor found itself in after an AI customer support agent made stuff up.

- Do you want an LLM salesbot to close a deal your company isn't actually able to fulfill?

- Do you think your company will use AI more intelligently and reliably than the people who made a popular LLM coding system?

  • soulofmischief 20 hours ago

    I want my AI support rep to have access to data and documents that it can vector/text search and forward to the user.

    I want my AI support rep to create tasks to engage my team, with all the relevant data linked to it. It should be able to automatically schedule things and ask a human for confirmation.

    It should be able to elevate communication to a human in the loop, using whatever mediums of communication makes sense given staff availability and workload.

    At no point is it allowed to answer any questions unless the answers are constrained and probably directly quoted from a cited and linked section in our documentation.

    In general, it should never confirm or deny things, it should never try to close things or acknowledge the content of any of the user's communication other than to call tools and surface public information which might be relevant to their request.

    Most of this is a software architecture problem. The LLM is just there to provide an intuitive and extremely powerful natural language interface for search and tool calling. A little bit of glue between different systems, both internal and external.

    • semi-extrinsic 19 hours ago

      > In general, it should never confirm or deny things, it should never try to close things or acknowledge the content of any of the user's communication other than to call tools and surface public information which might be relevant to their request.

      If you were an end user of such a system, would you be happy?

      • soulofmischief 13 hours ago

        As long as it addressed my needs by either pointing me to the correct documentation, or elevating me to a human, then yes, of course I'd be happy with that.

        It's an incremental stop along the way to truly reliable agentic systems which we can trust with important things.

  • ghaff 20 hours ago

    A human sales rep would never confuse selling with installing :-)

    • heelix 20 hours ago

      What is the difference between software and car sales? The dealership rep knows when they are lying.

      • ghaff 19 hours ago

        Im not saying that good sales reps actively lie but, with software especially, there are often features in the pipeline or that only somewhat work to a degree the sales rep may not even be aware of.

  • bastardoperator 17 hours ago

    If the CEO already thinks AI can do everything, I think their answer to that is yes and yes.

    • AIPedant 17 hours ago

      You would think that Cursor's leadership would be aware of other cases where LLM customer support went awry - e.g. that Canadian airline whose chatbot promised a bereavement discount, ending with a judge ordering them to honor the chatbot's BS.

      I suspect Cursor told themselves that they are super-smart AI experts who would never make an amateur mistake like the airline, they will use prompt engineering + RAG. With this, it will be unpossible that the LLM could make a mistake.

andrewmutz 20 hours ago

I don't think AI salespeople can replace human salespeople, but are deals really being closed while taking people for motorcycle rides? Who goes on motorcycle rides with vendors?

  • swiftcoder 20 hours ago

    My office used to be across the street from a strip club, and I've watched several senior executives stumble out of there at 11am, accompanied by a vendor sales team...

    More often that one would like, enterprise SaaS sales isn't about having the best product - it's about convincing the CTO he's going to feel like a king whenever your sales reps are in town

    • alabastervlog 20 hours ago

      > More often that one would like, enterprise SaaS sales isn't about having the best product - it's about convincing the CTO he's going to feel like a king whenever your sales reps are in town

      As far as I can tell, selling B2B security products is mostly about making C-suiters feel like they're in a political thriller. They even build (and, at least when needed for these purposes, staff!) fake and useless "war room" sets to walk the guys through, inefficient dashboards that nobody doing the actual work uses because they suck but they look cool, stuff like that.

    • mistrial9 20 hours ago

      Gold Club in San Francisco has a closed VIP room in the basement.. at least one troublesome associate has been taken down there to generate some compromising photos.. true story

  • rurp 20 hours ago

    The notion that the private sector largely runs on merit really falls apart the more one learns about how high level decision making is done.

  • sumtechguy 20 hours ago

    > but are deals really being closed while taking people for motorcycle rides?

    Oh I see you have not hung out with 'the sales guys'. That they did it on a motorcycle ride does not surprise me. There are some seriously shady dudes out there that will do anything to close the deal, "always be closing". If closing out a million dollar contract means the CEO's daughter wants to goto a water park? Magically there are the tickets aplenty. Bars, race tracks, sports games, horseback riding, Caribbean cruises, on and on.

    Is it ethical? Not really. In fact many places have training specifically on vendor relations. How much, how little, etc etc. In a small startup environment? There are going to be basically no guidelines from the business to do anything either way. Larger companies tend to have guardrails. But many of the sales guys know how to work that system.

    In technical roles we usually do not see this mess. Because we have a sense of follow the rules and logic. The sales guys are 'always closing no matter what it takes'.

  • ativzzz 20 hours ago

    Why not? have your AI salesperson interact directly with my AI purchaser and human just signs off on budget

  • bryanrasmussen 20 hours ago

    Harley Davidson?

    on edit: Or Hell's Angels...

jmclnx 20 hours ago

I would agree with ein0p, but maybe you can delay it by suggesting he watch DODGE and its system replacement for the US Social Security Admin Systems, and maybe the IRS Systems too.

It is lead by Musk and I am sure he will use AI for that. Present it as "If Musk Fails, we will fail".

enahs-sf 20 hours ago

Leadership has been pushing AI in my company for a while. Just had a massive outage because someone asked copilot how to deal with some data and it broke prod for 2 hours. Was actively asking copilot for help during the remediation. The lost revenue was probably equivalent to 20 engineers full-time. Explain to me how AI saves more money than it costs.

wcfrobert 20 hours ago

> "The CEO was wavering until Tom found out they both owned the same obscure Italian motorcycle. Tom took him for a ride along the coast. Contract was signed the next day."

As a junior, I often wonder how many deals are signed in exclusive country clubs, on golf courses, and at the dining table with endlessly flowing Maotai.

For a successful career, is it better for one to prioritize network over skills? It seems to me that the latter can be commoditized by AI, while the former can not. Rather than learning Lisp, maybe it's time to pick up golf. I'm only half joking.

  • hodgesrm 19 hours ago

    > As a junior, I often wonder how many deals are signed in exclusive country clubs, on golf courses, and at the dining table with endlessly flowing Maotai.

    Virtually none in our business. (Databases.) What does get deals is listening carefully to what customers actually want and putting together offerings that get it to them at a reasonable price. Incidentally, good sales people are vastly better at this than devs. There are a number of skills that go into it but being a good listener is the most important.

  • lantry 17 hours ago

    prioritize your soul

ageitgey 20 hours ago

As a co-founder at a company, I get more outbound sales spam than you can imagine. I just checked my spam folder and I get at least 30-50 "personal" sales outreaches a day.

So many of them are obvious AI to anyone who has used LLMs. The emails are always like "Hi, I really like how <general fact about company mission> and how you used to <old job on LinkedIn>. We can 10x your business..."

Another fun one is they say they care so much about our business that they recorded a personalized video of them exploring our website. But the video is a person gesturing at their computer screen that only shows a Cloudflare bot blocking page because their AI video generator got blocked by our site as a bot.

It's so lame. It feels incredibly off-putting and dishonest that I am having my time wasted by a machine pretending to be a real person spending their time on me.

The problem is that this automation leads to the death of the entire sales channel. If 99% of "personal" emails I get are computer generated and the volume of emails keeps increasing because it's now so easy to send them, I'm going to stop reading any emails. I feel burned.

This is the problem with AI sales. It can automate the current average sales process. This in turn makes the average sales process really easy, so it gets saturated by everyone and then it no longer works for anyone.

If anything, you should do the opposite of whatever the AI sales people are currently doing. That's the way to make a mark.

gwbas1c 20 hours ago

As I read the article, I actually wasn't convinced that people were needed over AI in this case.

Why?

When some people hire, they have their subordinates sit in meetings all day, doing occasional tasks, and merely feeding their enlarged egos. If all you want are subordinates to feed your ego, AI is exceptionally good at that. Plenty of people love talking to chatbots.

The problem is the author never really explained what the roles were. Were they customer facing sales calls? Did the CEO really believe that customers will be happy to talk to a sales robot?

Thus, because I believe these roles aren't customer facing, I suspect that these roles are either feeding someones' ego by sitting in meetings all day, or otherwise non-customer-facing roles that handle aggregating information. This makes me wonder if a smaller group of people, who know how to use AI well, will outperform a larger group of people without AI.

bob1029 20 hours ago

> An executive has 48 hours to convince his CEO why AI can't replace human talent

> The "Replace or Justify" Ultimatum

It is hard to take this kind of stuff seriously. Actual businesses that produce value do not operate in this way.

I feel like one of the more important lessons I picked up along my journey is that ultimatums are a really bad idea. Instead of creating dialogue and exploring the entire gradient of in-between goldilocks solutions, you've narrowed an ~infinite spectra into 2 discrete, highly adversarial/tribal bins. This is not a good premise for a conversation around AI and how it applies to business. I don't know of a single business venture that couldn't extract some value from AI. Perverting this notion into an all or nothing narrative is so ridiculous to me.

andy99 20 hours ago

There's a good zen koan waiting to be written with the CEO as the novice and the domain expert as the master. Ironically I tried to use Claude to write one and only got crap.

  • Sharlin 20 hours ago

    Writing a non-crappy koan will be my litmus test for LLMs from now on, thanks for the idea!

DebtDeflation 20 hours ago

Simple. "AI, in its present incarnation, is not capable of doing it all. So let's instead focus on where specifically we can apply it to derive some benefits."

dedalus 14 hours ago

This author actually sells his time for $275 per hour for career advice and I can tell you thats the absolute waste of both your time and money. I did the mistake and he is cargo culting PM stuff. Anyways not surprised at his take on AI

dmos62 20 hours ago

This reminds me of that peculiar wisdom: if you want a non-monogomous relationship, you'd better make that clear from the very start.

ferguess_k 20 hours ago

Actually I sincerely believe AI is a good candidate for the CXOs. You want your leader to be as impartial and sensible as possible.

  • mdp2021 20 hours ago

    I am seeing in the recent days a surge of employment of the term 'AI' to basically mean 'LLMs'.

    There is no """AI""" ready to be entrusted...

    And people should compete upwards, "try hard".

nothercastle 20 hours ago

How about they try a small scale experiment and see what happens. See if Ai sales even works before betting the farm.

zombiwoof 20 hours ago

Let it crash

My manager who has zero experience coding us now vibe coding all day.

Spoiler alert : it isn’t ending well

Production crashed, weeks debugging. Lots of wow I didn’t expect that

that_guy_iain 20 hours ago

Suggest a 48-hour trial, give everyone 2 days off and let AI try and replace everyone.

I think this is one of these scenarios where feeling the pain is the only way they'll ever truly understand.

  • cj 20 hours ago

    This is the only answer. Try it, and assess the results.

    Heck, you can even A/B test it by randomly splitting your leads between humans and AI, then pick whichever has the higher ROI.

    A better example (rather than replacing sales reps with AI, which is obviously difficult and nuanced) would be something like StitchFix deciding whether to fire their human stylists and moving everything to AI-recommendations only. Again, A/B test and see what users respond well to.

    (Separately, I've always assumed StitchFix "stylists" weren't actual people, until I met someone with that job title... I was honestly quite surprised they still had a job)

    • mdp2021 20 hours ago

      > Try it, and assess the results

      And who cleans the following mess?

      • nessbot 20 hours ago

        The humans, but now with a boss that values them a little more.

        • mdp2021 20 hours ago

          I am afraid the assumptions leading to that outcome are not really realistic.

      • that_guy_iain 18 hours ago

        The mess is the point. It's to show that it can't be done. The mess is what stops it becoming a full-time thing. The mess shows that humans are needed.

        • mdp2021 17 hours ago

          The point is clear.

          Normally we try to avoid damage - which frequently cannot be fully repaired.

          And cleaning up mess is not an efficient use of resources.

          Incentive is there to find non-damaging solutions first (Carlo Cipolla warns us about the amount of damage introduced in the system - the balance can be tricky). Cost-Risk-Benefit are due diligence.

          ...If a NN comes up with better solutions, it has won (the battle)...

          • that_guy_iain 17 hours ago

            Yes, this is avoiding damage.

            The mess that is left behind is not damage. It's evidence. It's what would keep people in jobs. People losing their jobs is damage. People having work to do is not.

            • mdp2021 15 hours ago

              > People having work to do is

              People having work to do which could have been avoided, resources wasted in repairs that were not needed - that's really what we do not want.

              • that_guy_iain 12 hours ago

                They were needed to keep jobs. People need to see that AI won’t replace humans before they’ll believe it. A 48 hour trial is short enough that it’ll create a mess but not long enough that it does actual damage.

  • shishy 20 hours ago

    You think it's a two way door, but the one behind you will close shut and you'll realize it was a one way door once someone says: "but 48 hours wasn't enough to evaluate it" ;)

    • that_guy_iain 20 hours ago

      No, I think it's a trap and that 48 hours of AI only would create a weeks worth of work to clean up.

th0ma5 20 hours ago

I don't think there's a good way to do this. The entirety of the AI industry at this point is to short-circuit any arguments against it.

asdefghyk 20 hours ago

I suggest , start with automating CEO's job first ?

prepend 20 hours ago

Did you try asking ChatGPT?

Snark aside, it sounds like your CEO doesn’t trust you. CharGPT can generate a report for you showing benefits. Or you can pay McKinsey a few hundred grand (if you’re lucky).

  • jakeydus 20 hours ago

    honestly if anyone can easily be replaced by AI it's high-powered MBA consultants that come in, make some mix of terrible suggestions and good suggestions that were already suggested by the ICs, and then leave with seven-figure checks.

mdp2021 20 hours ago

And now, the worst of both worlds ("A bear wielding a knife"):

John has a special issue. He calls for support. Support is made of low paid personnel with no real knowledge of the product; during the call, they encouraged John by telling him that his issue is solved by - non-existent - features in the product, that the LLM aiding the "human interfaces to clients" matmulshitted.

John is rightfully upset for a number of reasons. (Happened a few weeks ago.)

  • EForEndeavour 15 hours ago

    "matmulshitted" is a brilliant expression.

    • mdp2021 4 hours ago

      > "matmulshitted" is a brilliant expression

      Invented by longtime member Baq in a discussion with me after https://news.ycombinator.com/item?id=41602198

      I am also partial to "stochfabulation" (which linguistically really is "inventing stories while guessing").

alabastervlog 20 hours ago

Wait for it to fail in easily-predictable ways and laugh at the chaos and “OMG who could have guessed?” from your “brilliant” business leaders?

I’ve given up on resisting this idiocy and am just trying to stay out of the blast radius while I roast marshmallows on the ensuing fires.

EGreg 20 hours ago

How about the AI replaces the CEO too?

Fully automated company with no humans in the loop. That’s what’s coming.

NickNaraghi 20 hours ago

This is basically cope, sorry.

Tell them you have an AI CEO and they have to meet the same standard. I can show you a demo :)

ein0p 21 hours ago

You can't argue for humans with a person who wants to get rid of humans. Show him why it could threaten profits and/or his fat compensation package - that will be much more effective.

  • csallen 20 hours ago

    > You can't argue for humans with a person who wants to get rid of humans

    Literally the point of the article was that they successfully did that

    • ein0p 20 hours ago

      Yes. He did exactly what I suggested. "Arguing for humans" is not the angle that works with these people. If you tell them their business will be fucked if they replace humans, that might work. Which is true, in easily 90% of cases, so it should be easy to argue.

  • re-thc 20 hours ago

    > Show him why it could threaten profits and/or his fat compensation package

    If he said he "wants AI to do it all" then why shouldn't it include his job too?

    • dullcrisp 20 hours ago

      Might be the place to start.

      • pixl97 20 hours ago

        Depends on the level of narcissism that person has. Some have a complete inability to empathise and can't ever see themselves in the bad position. "it could never happen to meeeeee"

        • dullcrisp 13 hours ago

          No I meant start by replacing them with an AI. Then the AI you can convince to do whatever you want.

mythrwy 20 hours ago

So a guy closes a multi million dollar sale because the client CEO happens to own the same motorcycle and they went for a ride together?

This sounds like an argument for turning it all over to AI to me. Both the buying and the selling.

Thing is, on a personal level yes, things like motorcycle rides are important. That is what life is made up of, what we live for. But a corporation should exist to provide goods and services to end users and a return to shareholders. This is an optimization problem and not a place for motorcycle rides. In theory (not arguing for or against capitalism) this optimization raises the standard of living for everyone.

Selling by personal connection and a motorcycle ride is really a form of cronyism or corruption. The end consumers and shareholders get shafted. This kind of nonsense is especially pronounced and explicit in less well off societies and lowers the general standard of living. So let AI do it. Oh, also give AI the manager who wanted to replace the salespeople with AI's job. That should probably be one of the first jobs to be replaced.

  • sophacles 18 hours ago

    > But a corporation should exist to provide goods and services to end users and a return to shareholders.

    What a naive and utterly incorrect statement. A corporation is a peice of paper that separates ownership from liability. Attached to that peice of paper are some special tax rules, rules about structuring ownership and governance (particularly important when there are multiple owners) and so on.

    Any notion of how they should be run is just a random person's opinion, unless you happen to have some percentage of ownership - then you get that percentage say in how things are run. Although I will say any sort of mechanistic opinion on corporate interactions is a broken one - fundamentally there are people doing things to fulfill an inter-corporate agreement, and some of those people will find ways to be shady or lazy or otherwise not live up to the agreement. The inter-personal relationships are very important, as they help the people running one corporation gauge how likely the other corporation will live up to it's end of the agreement.

    • mythrwy 17 hours ago

      You talk a lot and don't say very much so I'll pass on engaging. FWIW check your spell checker.

  • timewizard 20 hours ago

    > Selling by personal connection and a motorcycle ride is really a form of cronyism or corruption.

    Hardly. It's just "less than ideal."

    > The end consumers and shareholders get shafted.

    You're completely assuming that 100% a good deal CANNOT possibly be made on the back of a motorcycle? There's no reason to believe a suboptimal strategy can't arrive at the optimal solution.

    > This kind of nonsense is especially pronounced and explicit in less well off societies and lowers the general standard of living.

    This is the way business has been done in the first world for 100 years. You should calibrate your assumptions to the actual data.

    • mythrwy 20 hours ago

      If the deal is made or influenced BECAUSE they both like motorcycles and shared a personal bonding moment (rather than objective business related factors) it is a form of cronyism or corruption.

      `This is the way business has been done in the first world for 100 years.`. Yes, I am aware cronyism and low level corruption exist in the first world. In other places you probably just give the person an expensive gift or briefcase full of money and save time. But it's not optimal.

      Point being, this story sounds like an argument in favor of AI to me.

      • timewizard 20 hours ago

        > If the deal is made or influenced

        You start your statement with a conditional. So, in your estimation, how often is it true?

        > But it's not optimal.

        And your assumption is that it would be cheap to optimize literally every decision we make regardless of it's total impact on the actual outcomes? It's almost never worth the actual costs for the given gains.

        > this story sounds like an argument in favor of AI to me.

        It's sounds like an argument for you to achieve more real world experience to me.

        • mythrwy 19 hours ago

          How much "real world experience" do you think I might have?

          I'm not sure what your point is exactly.

          I think my point is pretty clear but here it is again just in case. "If high dollar business deals are being made because of personal likes or feelings of the CEO towards a salesperson, AI may just maybe be more optimal".

          • timewizard 16 hours ago

            > How much "real world experience" do you think I might have?

            10 years or so.

            > I'm not sure what your point is exactly.

            I disagree with you and am attempting to explain why.

            > "If high dollar business deals are being made because of personal likes or feelings of the CEO towards a salesperson, AI may just maybe be more optimal".

            If. What percentage do you think do? How big of a problem do you think this is /actually/?

            Personal likes and feelings may play in but are you suggesting that's the only metric being used in these circumstances? Like if a company has a product that just won't work and will cause tons of problems you're saying a motorcycle ride could smooth that over?

            What makes you think AI can optimize this problem that might not even actually exist?

            It's amazing to me that people think AGI is going to come about while simultaneously doing exactly what you tell it to do. I mean I'm literally trying to picture the world you say you would prefer and I just can't. Worse even if I blur the definitions I actually can't see it being more "efficient."