A really bad idea for SW development
I got an automated email about a new tool that purports to automatically migrate a whole codebase to a different or upgraded framework, without developers having to do the work. This alarms me, because AI (specifically LLMs; I know there are other kinds of AI) is very prone to "hallucinating" (which should really be called "bullshitting") and SHOULD NOT BE TRUSTED WITHOUT FULL HUMAN VERIFICATION. I've worked on software projects with significant amounts of code tied to frameworks, and framework upgrades or replacements were always difficult and error-prone. They produce the kind of errors where everything looks like it should work, and only careful testing and review reveals that:
1. you have to do it differently now; or
2. the whole basic idea you were used to has been replaced; or
3. without warning, something in the new framework is broken (an actual bug); or
4. the upgraded/replaced framework causes other things to need upgrading, so the process is repeated ad infinitum.
SOMETIMES THESE PROBLEMS COULD ONLY BE FOUND BY HUMANS TESTING CAREFULLY.
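Here is a minimal sketch of failure mode 1 or 2, using entirely hypothetical function names: a framework upgrade keeps the same call signature but silently changes the semantics. An automated migration that only updates imports and call sites would leave this code running without any error, and only a human testing carefully would notice the different result.

```python
# Hypothetical example: the same function name and signature survive a
# framework upgrade, but the behavior changes silently. Nothing fails to
# import or raise; only testing reveals the difference.

def split_name_v1(full_name: str) -> tuple[str, str]:
    """Hypothetical v1 behavior: split on the FIRST space."""
    first, _, rest = full_name.partition(" ")
    return first, rest

def split_name_v2(full_name: str) -> tuple[str, str]:
    """Hypothetical v2 behavior: the framework authors changed it to
    split on the LAST space. Same signature, different semantics."""
    rest, _, last = full_name.rpartition(" ")
    return rest, last

# A call site that an automated migration would leave "unchanged":
name = "Mary Jane Watson"
print(split_name_v1(name))  # ('Mary', 'Jane Watson')
print(split_name_v2(name))  # ('Mary Jane', 'Watson')
```

Both versions run cleanly; no compiler, linter, or type checker flags the change. That is exactly why full human verification is not optional.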
Since almost all current software builds on frameworks, libraries, and other dependencies that are constantly changing, this use of AI is going to be very tempting, and it is a really BAD IDEA.
(Imagine your PM saying, "We need this completed ASAP, so use AI.")
People have been saying we should think of AI code generation as a dumb, erratic junior programmer whose work must be completely reviewed. Having the AI/LLM revise your entire codebase is far worse. We're going to be drowning in AI-generated and AI-modified code that no one is given sufficient time to review. And don't get me started on LLM-based "agents"...