Just want to clarify, this is not my Substack, I’m just sharing this because I found it insightful.

The author describes himself as a “fractional CTO” (no clue what that means, don’t ask me) and advisor. His clients asked him how they could leverage AI. He decided to experience it for himself. From the author (emphasis mine):

I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

Now when clients ask me about AI adoption, I can tell them exactly what 100% looks like: it looks like failure. Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive. Then three months later, you realize nobody actually understands what you’ve built.

  • Agent641@lemmy.world · 14↑ 1↓ · 7 hours ago

    I cannot understand and debug code written by AI. But I also cannot understand and debug code written by me.

    Let’s just call it even.

  • pdxfed@lemmy.world · 23↑ · 14 hours ago

    Great article, brave and correct. Good luck getting through to the same leaders who blindly believe in a magical trend for this quarter’s or next quarter’s numbers; they don’t care about things a year away, let alone ten.

    I work in HR and was struck by the parallel with management jobs being gutted by major corps starting in the 80s and 90s during “downsizing”, when they either never replaced those roles or offshored them. They had the Big 4 telling them it was the future of business. Know who is now providing consultation to them on why they have poor ops, poor processes, high turnover, etc.? Take $ on the way in, and on the way out. AI is just the next in a long line of smart people pretending they know your business while you abdicate knowing your business or your employees.

    Hope leaders can be a bit braver and wiser this go ’round so we don’t get to a cliff’s edge in software.

  • Unlearned9545@lemmy.world · 26↑ · 16 hours ago

    Fractional CTO: some small companies benefit from the senior experience of that kind of executive but don’t have the money or the need to hire one full time, so the CTO spends a fraction of their time in the C-suite of each of several companies.

  • raspberriesareyummy@lemmy.world · 62↑ 22↓ · 17 hours ago

    So there’s actual developers who could tell you from the start that LLMs are useless for coding, and then there’s this moron & similar people who first have to fuck up an ecosystem before believing the obvious. Thanks fuckhead for driving RAM prices through the ceiling… And for wasting energy and water.

    • khepri@lemmy.world · 10↑ 1↓ · 15 hours ago

      They are useful for doing the kind of boilerplate boring stuff that any good dev should have largely optimized and automated already. If it’s 1) dead simple and 2) extremely common, then yeah an LLM can code for you, but ask yourself why you don’t have a time-saving solution for those common tasks already in place? As with anything LLM, it’s decent at replicating how humans in general have responded to a given problem, if the problem is not too complex and not too rare, and not much else.

      • raspberriesareyummy@lemmy.world · 4↑ · 11 hours ago

        As you said, “boilerplate” code can be script-generated, and there are IDEs that already do this, but in a deterministic way, so that you don’t have to proof-read every single line to avoid catastrophic security or crash flaws.
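
        Something as small as this toy generator (the class and fields are made up) covers a lot of it: same input, same output, every run, so there’s nothing to proof-read for hallucinations.

          # Toy deterministic boilerplate generator: a plain template, no LLM involved.
          FIELDS = [("name", "str"), ("email", "str"), ("age", "int")]  # hypothetical model fields

          def generate_dataclass(class_name, fields):
              # Same input always yields byte-identical output, so the result
              # never needs to be reviewed for hallucinated or missing lines.
              lines = ["from dataclasses import dataclass", "", "@dataclass", f"class {class_name}:"]
              lines += [f"    {name}: {typ}" for name, typ in fields]
              return "\n".join(lines) + "\n"

          print(generate_dataclass("User", FIELDS))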

    • InvalidName2@lemmy.zip · 15↑ 7↓ · 16 hours ago

      And then there are actual good developers who could or would tell you that LLMs can be useful for coding, in the right context and if used intelligently. No harm, for example, in having LLMs build out some of your more mundane code like unit/integration tests, have it help you update your deployment pipeline, generate boilerplate code that’s not already covered by your framework, etc. That it’s not able to completely write 100% of your codebase perfectly from the get-go does not mean it’s entirely useless.

      • Soggy@lemmy.world · 18↑ 1↓ · 15 hours ago

        Other than that it’s work that junior coders could be doing, to develop the next generation of actual good developers.

      • raspberriesareyummy@lemmy.world · 1↑ 5↓ · 11 hours ago

        And then there are actual good developers who could or would tell you that LLMs can be useful for coding

        The only people who believe that are managers and bad developers.

        • keegomatic@lemmy.world · 5↑ · 10 hours ago

          You’re wrong, whether you figure that out now or later. Using an LLM where you gatekeep every write is something that good developers have started doing. The most senior engineers I work with are the ones who have adopted the most AI into their workflow, and with the most care. There’s a difference between vibe coding and responsible use.

          • raspberriesareyummy@lemmy.world · 1↑ 1↓ · 7 hours ago

            There’s a difference between vibe coding and responsible use.

            There’s also a difference between the occasional evening getting drunk and alcoholism. That doesn’t make an occasional event healthy, nor does it mean you are qualified to drive a car in that state.

            People who use LLMs in production code are - by definition - not “good developers”. Because:

            • a good developer has a clear grasp on every single instruction in the code - and critically reviewing code generated by someone else is more effort than writing it yourself
            • pushing code to production without critical review is grossly negligent and compromises data & security

            This already means the net gain with use of LLMs is negative. Can you use it to quickly push out some production code & impress your manager? Possibly. Will it be efficient? It might be. Will it be bug-free and secure? You’ll never know until shit hits the fan.

            Also: using LLMs to generate code, a dev will likely be violating copyrights of open source left and right, effectively copy-pasting licensed code from other people without attributing authorship, i.e. they exhibit parasitic behavior & outright violate laws. Furthermore the stuff that applies to all users of LLMs applies:

            • they contribute to the hype, fucking up our planet, causing brain rot and skill loss on average, and pumping hardware prices to insane heights.

    • jali67@lemmy.zip · 3↑ · edited · 15 hours ago

      Don’t worry. The people on LinkedIn and tech executives tell us it will transform everything soon!

  • edgemaster72@lemmy.world · 75↑ 1↓ · 22 hours ago

    Not immediate failure—that’s the trap. Initial metrics look great. You ship faster. You feel productive.

    And all they’ll hear is “not failure, metrics great, ship faster, productive” and go against your advice because who cares about three months later, that’s next quarter, line must go up now. I also found this bit funny:

    I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me… I was proud of what I’d created.

    Well you didn’t create it, you said so yourself, not sure why you’d be proud, it’s almost like the conclusion should’ve been blindingly obvious right there.

    • AutistoMephisto@lemmy.world (OP) · 40↑ · 22 hours ago

      The top comment on the article points that out.

      It’s an example of a far older phenomenon: Once you automate something, the corresponding skill set and experience atrophy. It’s a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired. I’ll have to find it but there’s a story about a modern fighter jet pilot not being able to handle a WWII era Lancaster bomber. They don’t know how to do the stuff that modern warplanes do automatically.

      • LOGIC💣@lemmy.world · 26↑ 2↓ · 21 hours ago

        It’s more like the ancient phenomenon of spaghetti code. You can throw enough code at something until it works, but the moment you need to make a non-trivial change, you’re doomed. You might as well throw away the entire code base and start over.

        And if you want an exact parallel, I’ve said this from the beginning, but LLM coding at this point is the same as offshore coding was 20 years ago. You make a request, get a product that seems to work, but maintaining it, even by the same people who created it in the first place, is almost impossible.

      • Cocodapuf@lemmy.world · 2↑ · 14 hours ago

        Once you automate something, the corresponding skill set and experience atrophy. It’s a problem that predates LLMs by quite a bit. If the only experience gained is with the automated system, the skills are never acquired.

        Well, to be fair, different skills are acquired. You’ve learned how to create automated systems, and that’s definitely a skill. In one of my IT jobs there were a lot of people who did things manually: updating computers, installing software one machine at a time. But when someone figured out how to automate that and push the update to all machines in the room simultaneously, that was valuable, and not everyone in that department knew how to do it.

        So yeah, I guess my point is, you can forget how to do things the old way, but that’s not always bad. Like, so you don’t really know how to use a scythe, that’s fine if you have a tractor, and trust me, you aren’t missing much.

      • ctrl_alt_esc@lemmy.ml · 15↑ · 22 hours ago

        I agree with you, though proponents will tell you that’s by design. Supposedly, it’s like with high-level languages: you don’t need to know the actual assembly instructions anymore to write a program with them. I think the difference is that high-level language instructions are still (mostly) deterministic, while an LLM prompt certainly isn’t.

  • Rhoeri@lemmy.world · 13↑ 9↓ · 15 hours ago

    AI is hot garbage and anyone using it is a skillless hack. This will never not be true.

      • Rhoeri@lemmy.world · 9↑ 8↓ · 12 hours ago

        Do you not know the difference between an automated process and machine learning?

        • nullroot@lemmy.world · 5↑ 1↓ · 10 hours ago

          Yes? Machine learning has been huge for protein folding, and not because anyone is stupid; it’s because that is a task uniquely suited to machine learning, and there are many such tasks. But none of that is what this AI bubble is really about, and even though I find the underlying math and technology fascinating, I share the disdain for how the bulk of it is currently being used.

        • 5gruel@lemmy.world · 2↑ 1↓ · 8 hours ago

          The thing with being cocky is, if you’re wrong, it makes you look like an even bigger asshole.

          https://en.wikipedia.org/wiki/AlphaFold

          The program uses a form of attention network, a deep learning technique that focuses on having the AI identify parts of a larger problem, then piece it together to obtain the overall solution.

  • CarbonatedPastaSauce@lemmy.world · 26↑ 3↓ · 22 hours ago

    Something any (real, trained, educated) developer who has even touched AI in their career could have told you. Without a 3 month study.

    • AutistoMephisto@lemmy.world (OP) · 32↑ · 22 hours ago

      What’s funny is this guy has 25 years of experience as a software developer. But three months was all it took to make it worthless. He also said it was harder than if he’d just written the code himself. Claude would make a mistake, he would correct it. Claude would make the same mistake again, having learned nothing, and he’d fix it again. Constant firefighting, he called it.

      • felbane@lemmy.world · 8↑ · 20 hours ago

        As someone who has been shoved in the direction of using AI for coding by my superiors, that’s been my experience as well. It’s fine at cranking out stackoverflow-level code regurgitation and mostly connecting things in a sane way if the concept is simple enough. The real breakthrough would be if the corrections you make would persist longer than a turn or two. As soon as your “fix-it prompt” is out of the context window, you’re effectively back to square one. If you’re expecting it to “learn” you’re gonna have a bad time. If you’re not constantly double checking its output, you’re gonna have a bad time.

    • ctrl_alt_esc@lemmy.ml · 17↑ 1↓ · 22 hours ago

      It’s still useful to have an actual “study” (I’d rather call it a POC) with hard data you can point to, rather than just “trust me bro”.

    • some_designer_dude@lemmy.world · 4↑ 7↓ · 22 hours ago

      Untrained dev here, but the trend I’m seeing is spec-driven development where AI generates the specs with a human, then implements the specs. Humans can modify the specs, and AI can modify the implementation.

      This approach seems like it can get us to 99%, maybe.

  • kreskin@lemmy.world · 6↑ · edited · 16 hours ago

    I work at a company that is all-in on selling AI, and we are trying desperately to use this AI ourselves. We’ve concluded internally that AI can only be trusted with small use cases that are easily validated by humans, or for fast prototyping work… hack-day stuff to validate a possibility but not an actual high-quality, safe, and scalable implementation, or for writing tests of existing code to increase test coverage. Yes, I know that’s a bad idea, but QA blessed the result… so um… cool.

    The use case we zeroed in on is writing well-schema’d configs in YAML or JSON. Even then, a good percentage of the time the AI will miss very significant mandatory sections, or add hallucinations that are unrelated to the task at hand. We can then use AI to test AI’s work, several times, using several AIs. To a degree it’ll catch a lot of the issues, but not all. So we then code-review and lint with code we wrote that AI never touched, and send all the erroring configs to a human. It does work, but it can’t be used for mission-critical applications. And nothing about the AI or the process of using it is free. It’s also disturbingly non-deterministic: did it fail? Run it again a few times and it’ll pass. We think it still saves money when done at scale, but not as much as we promise external AI consumers. Senior leadership know it’s currently overhyped trash and pressure us to use it anyway on the expectation that it’ll improve in the future, so we give the mandatory crisp salute of alignment and we’re off.
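
    The deterministic check is nothing fancy; stripped down, it’s basically schema validation plus a human queue, something like this (the schema, field names, and file path here are made up, and it assumes the jsonschema package):

      import json
      from jsonschema import Draft202012Validator  # pip install jsonschema

      # Hypothetical schema; the real one encodes the mandatory sections the model tends to drop.
      SCHEMA = {
          "type": "object",
          "required": ["service", "replicas", "alerts"],
          "properties": {
              "service": {"type": "string"},
              "replicas": {"type": "integer", "minimum": 1},
              "alerts": {"type": "array", "items": {"type": "string"}},
          },
          "additionalProperties": False,  # flags hallucinated keys the task never asked for
      }

      def review_config(path):
          """Return a list of problems; empty means the config passes the deterministic check."""
          with open(path) as f:
              config = json.load(f)
          errors = Draft202012Validator(SCHEMA).iter_errors(config)
          return [f"{'/'.join(map(str, e.path)) or '<root>'}: {e.message}" for e in errors]

      problems = review_config("generated/service.json")  # made-up path
      if problems:
          # Anything the lint catches goes back to a person, as described above.
          print("needs human review:\n" + "\n".join(problems))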

    I will say it’s great for writing yearly personnel reviews. It adds nonsense and doesn’t get the whole review correct, but it writes very flowery stuff so managers don’t have to. So we use it for first drafts and then strip a lot of the true BS out of it. If it gets stuff wrong, oh well, human perception is flawed.

    This is our shared future. One of the biggest use cases identified for the industry is health care, because it’s hard to assign blame for errors when AI gets it wrong, and AI will do whatever the insurance middlemen tell it to do.

    I think we desperately need a law saying no AI use in health care decisions, before it’s too late. This half-assed tech is 100% going to kill a lot of sick people.

    • jj4211@lemmy.world · 2↑ · 18 hours ago

      At work there are a lot of rituals where processes demand that people write long internal documents no one will read. Management will at least open them, scroll through, and be happy to see such long documents with credible-looking diagrams, but never actually read them, maybe glancing at a sentence or two they don’t understand and nodding sagely.

      LLM can generate such documents just fine.

      Incidentally, an email went out to salespeople. It told them they didn’t need to know how to code or even have technical skills; they could just use Gemini 3 to code up whatever a client wants and then sell it to them. I can’t imagine the mind that thinks that would be a viable business strategy, even if it worked that well.

        • jj4211@lemmy.world · 2↑ · 16 hours ago

          Yeah, this one is going to hurt. I’m pretty sure my rather long career will be toast, since my company and most of my network of opportunities are companies that have bought so hard into the AI hype that I don’t know whether they will be able to survive it going away.

          • IronBird@lemmy.world · 1↑ · 16 hours ago

            If you don’t mind compromising your morals somewhat and have a moderate understanding of how the stock market casino works… loads of $ to be made when it pops, at least.

            • jj4211@lemmy.world · 3↑ · 15 hours ago

              Yeah, but mispredicting that would hurt. The market can stay irrational longer than I can stay solvent, as they say.

              • IronBird@lemmy.world · 1↑ · 15 hours ago

                eh, not if you know how it works. basic hedging and not shorting stuff limits your risk significantly.

                especially in a bull market where ratfucking and general fraud are out in the open for all to see

  • KazuyaDarklight@lemmy.world · 9↑ · 22 hours ago

    My big fear with this stuff is security. It just seems so “easy”, without knowledgeable people, for AI to write a product that functions from a user perspective but is wide open to attack.

    • AutistoMephisto@lemmy.world (OP) · 4↑ 1↓ · 22 hours ago
      22 hours ago

      What’s interesting is what he found out. From the article:

      I forced myself to use Claude Code exclusively to build a product. Three months. Not a single line of code written by me. I wanted to experience what my clients were considering—100% AI adoption. I needed to know firsthand why that 95% failure rate exists.

      I got the product launched. It worked. I was proud of what I’d created. Then came the moment that validated every concern in that MIT study: I needed to make a small change and realized I wasn’t confident I could do it. My own product, built under my direction, and I’d lost confidence in my ability to modify it.

      • very_well_lost@lemmy.world · 2↑ 1↓ · 21 hours ago

        Typical C-suite. It takes them three months to come to the same conclusion that would be blindingly obvious to anyone with half a brain: if you build something that no one understands, you’ll end up with something impossible to maintain.

  • jaykrown@lemmy.world · 2↑ · 19 hours ago

    I needed to make a small change and realized I wasn’t confident I could do it.

    Wouldn’t the point be to use AI to make the change, if you’re trying to do it 100% with AI? Who is really saying 100% AI adoption is a good idea though? All I hear about from everyone is how it’s not a good idea, just like this post.

  • NoiseColor@lemmy.world · 2↑ · 21 hours ago

    Wasn’t this obvious? He didn’t need to go “all-in on AI”, because there are hundreds of thousands of people who have tried the same thing already, and every one of them could tell him that’s not what AI can do.