The difficult part of software development has always been the continuing support. Did the chatbot set up a versioning system, a build system, a backup system, a ticketing system, unit tests, and help docs for users? Did it get conflicting requests from two different customers and intelligently resolve them? Was it given a vague problem description that it then had to figure out by getting on a call with the customer and hunting down what the customer actually wanted before devising/implementing a solution?
This is the expensive part of software development. Hiring an outsourced, low-tier programmer for almost nothing has always been possible; that programmer getting slightly cheaper doesn’t change the game in any meaningful way.
Yeah, I’m already quite content if I know upfront that our customer’s goal does not violate the laws of physics.
Obviously, there are also devs who code more run-of-the-mill stuff, like yet another business webpage, but those are still coded anew (and not just copy-pasted), because customers have different and complex requirements. So even those are still quite a bit more complex than designing just any Gomoku game.
Haha, this is so true and I don’t even work in IT. For me there’s bonus points if the customer’s initial idea is solvable within Euclidean geometry.
Now I am curious what the most outlandish request or goal has been so far?
Well, as per above, these are extremely complex requirements, so most don’t make for a good story.
One of the simpler examples is that a customer wanted a solution for connecting special hardware devices across the globe, which are normally only connected directly.
Then, when we talked to experts for those devices, we learnt that, for security reasons, these devices expect requests to complete within a certain timeframe. No one could tell us what these timeframes usually are, but it certainly sounded like the universe’s speed limit, a.k.a. the speed of light, could get in our way (light takes roughly 66 ms to travel halfway around the globe).
Eventually, we learned that the customer was actually aware of this problem and was fine with a solution, even if it only worked across short distances. But yeah, we didn’t know that upfront…
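For reference, that ~66 ms is the best case: a straight line in vacuum. Real routes are longer and light in fibre is roughly a third slower, so the budget is even tighter. A quick back-of-the-envelope check with round constants:

```python
# Back-of-the-envelope: one-way light travel time for half of Earth's
# circumference, straight line in vacuum (real fibre routes are longer and slower).
EARTH_CIRCUMFERENCE_KM = 40_075
SPEED_OF_LIGHT_KM_PER_S = 299_792

one_way_seconds = (EARTH_CIRCUMFERENCE_KM / 2) / SPEED_OF_LIGHT_KM_PER_S
print(f"{one_way_seconds * 1000:.1f} ms")  # ~66.8 ms, before any processing or routing overhead
```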
While I do agree that management is genuinely important in software dev:
If you can rewrite the codebase quickly enough, versioning matters a lot less. It’s the idea of “is it faster to just rewrite this function/package than to debug it?” but at a much larger scale. And while I would be concerned about regressions from full rewrites of the code… have you ever used software? Regressions happen near constantly even with proper version control and testing…
As for testing and documentation: This is actually what AI-enhanced tools are good for today. These are the simple tasks you give to junior staff.
Conflicting requests and iterating on descriptions: Have you ever futzed around with chatgpt? That is what it lives off of. Ask a question, then ask a follow up question, and so forth.
I am still skeptical of having no humans in the loop. But all of this is very plausible even with today’s technology and training sets.
Just to add a bit more to that. I don’t think having an AI operated company is a good idea. Even ignoring the legal aspects of it, there is a lot of value to having a human who can make irrational decisions because one customer will pay more in the long run and so forth.
But I can definitely see entire departments being a node in a rack. Customers talk to humans (or a different LLM) which then talk to the “Network Stack” node and the “UI/UX” node and so forth.
If you just let it do a full rewrite again and again, what protects against breaking changes in the API? Software doesn’t exist in a vacuum, there might be other businesses or people using a certain API and relying on it. A breaking change could be as simple as the same endpoint now being named slightly differently.
So if you now start to mark every API method as “please no breaking changes for this”, at what point do you need a full software developer again to take care of the AI?
I’ve also never seen AI modify an existing code base; it’s always new code getting spit out (80% correct or so, and it likes to hallucinate functions that don’t even exist). Sure, for run-of-the-mill templates you can use it, but even a developer on here who told me they rely heavily on ChatGPT said they need to verify all the code it spits out, because sometimes it’s garbage.
In the end it’s a damn language model that uses probability on what the next word should be. It’s fantastic for what it does, but it has no consistent internal logic and the way it works it never will.
You are literally describing constraints. They can be applied to an LLM the same way they can be applied to a dev team. And if you have never had to report an API change that breaks functionality… I wish I was you.
And if your full time software engineers are just running a unit test suite all day? … are you hiring?
As for modifications: Again, have you ever used an LLM? Have a conversation with ChatGPT. It will iterate on its responses. That is iterating on code.
In the end it’s a damn language model that uses probability on what the next word should be. It’s fantastic for what it does, but it has no consistent internal logic and the way it works it never will.
And that is demonstrably false and mostly just highlights that you don’t know what you are talking about. Or what language is, for that matter.
Mate, I’ve used ChatGPT before; it straight up hallucinates functions if you want anything more complex than a basic template or a simple program. And as things are in programming, if even one tiny detail is wrong, things straight up don’t work. Also, have fun putting ChatGPT answers into a real program you might have to compile: are you going to copy code into hundreds of files?
My example was public APIs, you might have an endpoint /v2/device that was generated the first time around. Now external customers/businesses built their software to access this endpoint. Next run around the AI generates /v2/appliance instead, everything breaks (while the software itself and unit tests still seem to work for the AI, it just changed a name).
If you don’t want that change you now have to tell the AI what to name things (or what to keep consistent), who is going to do that? The CEO? The intern? Who writes the perfect specification?
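For what it’s worth, this is where contract tests usually come in: the “please don’t rename this” constraint becomes an executable check rather than something somebody has to remember to tell the AI (or the intern). A minimal sketch, with made-up endpoint names and an assumed openapi.json export:

```python
# contract_test.py — minimal sketch (pytest style) of pinning the public API
# surface so a regenerated codebase cannot silently rename endpoints.
# The endpoint names and the openapi.json location are illustrative only.
import json
from pathlib import Path

FROZEN_ENDPOINTS = {"/v2/device"}  # the published contract external users rely on

def current_endpoints(spec_path: str = "openapi.json") -> set[str]:
    """Endpoints exposed by whatever code was (re)generated this round."""
    spec = json.loads(Path(spec_path).read_text())
    return set(spec.get("paths", {}))

def test_public_endpoints_are_stable():
    missing = FROZEN_ENDPOINTS - current_endpoints()
    assert not missing, f"Breaking change: endpoints removed or renamed: {missing}"
```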
Yes. ChatGPT is not perfect. Because it is a general-purpose LLM. Stuff like GitHub Copilot and other software-specific approaches are a LOT better at avoiding all the noise from bad answers on Stack Overflow and proposals. In large part because they have more focused training data.
But it can still do a remarkably good job so long as you have a human looking at it after the fact. Which… is how I would describe most software engineers I have ever worked with. Even the SSEs need someone to review their code. Which… is what is being described here. Combine that with a gitlab runner and you got yourself a stew.
As for APIs and the like: Again, it feels like nobody here has ever actually worked with public software, and people think regressions don’t exist. But these are literally constraints and would be put in the requirements document that you give either the dev team or the LLM.
As for who is going to make that document: The same people who already do? Management.
Management and sound technical specifications, that sounds to me like you’ve never actually worked in a real software company.
You just said what the main problem is: ChatGPT is not perfect. Code that isn’t perfect (compiles + has consistent logic) is worthless. If you need a developer to look over it you’ve already lost and it would be faster to have that developer write the code themselves.
Have you ever gotten a pull request with 10k lines of code? The AI could spit out that much code in an instant; no developer would be able to debug that mess or do a proper code review. They’ll just click “Approve” and throw whatever the AI decided to spit out onto the giant garbage heap.
If there’s a bug down the line (if you even get the whole thing to run), good luck finding it if no one in your developer team even wrote the code in the first place.
Management and sound technical specifications, that sounds to me like you’ve never actually worked in a real software company.
Worked at quite a few. Once you get out of college and start engaging with companies beyond “Ugh, how dare they want me to waste my precious time by talking to people” you start to learn the value of a strong management team.
And, more importantly, where those Jira tickets come from.
A bog standard development flow is “all pull requests are linked to a documented issue/ticket. All pull requests require tests to pass, code coverage to not decrease, and approval by a code owner”
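Spelled out as a check, that gate might look roughly like this (the MergeRequest fields, thresholds, and owner names are made up; every real CI system has its own way of expressing the same rules):

```python
# merge_gate.py — rough sketch of "linked issue, tests pass, coverage doesn't
# drop, code-owner approval". Not any specific CI system's API.
from dataclasses import dataclass, field

@dataclass
class MergeRequest:
    linked_issue: str | None      # e.g. an issue key; None means unlinked
    tests_passed: bool
    coverage_before: float        # percent on the target branch
    coverage_after: float         # percent with this MR applied
    approvals: set[str] = field(default_factory=set)

CODE_OWNERS = {"fred", "nancy"}   # hypothetical code owners

def may_merge(mr: MergeRequest) -> bool:
    return (mr.linked_issue is not None
            and mr.tests_passed
            and mr.coverage_after >= mr.coverage_before
            and bool(mr.approvals & CODE_OWNERS))
```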
How does that work in reality?
Issues/tickets (just going to say issues from here on out) are created by a combination of customer feedback, identified issues by the development team, and directives from on high (which is generally related to the overall roadmap). One or more developers work on a merge request, the person who best understands the appropriate code looks it over, it is tested, and it is merged in. After enough of those cycles happen, a release is prepared and a manager signs off on it.
How does that map to an “AI” based workflow?
Issues/tickets (just going to say issues from here on out) are created by a combination of customer feedback, identified issues by the development team, and directives from on high (which is generally related to the overall roadmap); that part still works because LLMs can provide feedback and uncertainty measurements once you get past Google Bard, and regression testing and nightly performance testing can highlight deficiencies. The issue is put into a template that includes all existing constraints, and the LLM generates a solution. Someone who understands the code checks to make sure that it looks sane, it is tested, and it is merged in. After enough of those cycles happen, a release is prepared and a manager signs off on it.
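As a rough sketch of that loop (the prompt template, the standing constraints, and generate_patch() are placeholders for whichever LLM backend and tooling is actually in use; the test and human-review gates stay exactly where they are today):

```python
# issue_to_mr.py — sketch of "issue + constraints template -> LLM -> normal MR gates".
from dataclasses import dataclass

PROMPT_TEMPLATE = """\
Issue: {title}
Description: {description}
Hard constraints (must not be violated): {constraints}
Return a unified diff against the current main branch.
"""

@dataclass
class Issue:
    title: str
    description: str

# Standing constraints attached to every generated change (illustrative).
STANDING_CONSTRAINTS = [
    "public endpoints keep their current paths",
    "test coverage must not decrease",
]

def generate_patch(prompt: str) -> str:
    """Placeholder for the actual LLM call; returns a unified diff as text."""
    raise NotImplementedError

def handle(issue: Issue) -> str:
    prompt = PROMPT_TEMPLATE.format(
        title=issue.title,
        description=issue.description,
        constraints="; ".join(STANDING_CONSTRAINTS),
    )
    patch = generate_patch(prompt)
    # From here the patch goes through the same gates as a human MR:
    # apply on a branch, run the test suite, require a code-owner approval.
    return patch
```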
And then it becomes a question of what level you start requiring humans. Because when I do a code review prior to a Release? I am relying VERY heavily on my team to have been doing their due diligence. I skim through the MRs and look for a few hot spots but it is mostly “Well, Fred and Nancy said this was good and it passes all the tests so…”
You just said what the main problem is: ChatGPT is not perfect. Code that isn’t perfect (compiles + has consistent logic) is worthless. If you need a developer to look over it you’ve already lost and it would be faster to have that developer write the code themselves.
I VEHEMENTLY disagree with this. If you don’t have developers looking over your code then you are not a software engineer. And if it takes them the same amount of time to review code as it does to write it? You aren’t working on interesting problems and are wasting vast amounts of money.
I can farm out a general task of “improve our code coverage” to an intern. They can spend a few days (or even weeks) doing that, and I can review their MRs in a few minutes. If something looks weird, I leave a comment and wait for them to get back to me. All the time I am working on much more interesting problems… or doing the same for my SSEs.
Once you stop worshiping the ground that “developers” walk on (which mostly comes from time and experience) you start to realize how many people spend most of their lives just filling out tickets with no understanding of “Why”. And how much work your management is putting in so that you don’t throw a temper tantrum or break the code base. Which… maps pretty well to an LLM.
Just to make it clear. I am not saying that all developers should strive to be managers. I actively disagree with that.
But if you aren’t interested in how management works? Whether it is because you want a heads up when crunch is coming, or want to understand the big picture, or just want to figure out when it is time to get going? Then you aren’t growing as a developer and are not an engineer. You are a monkey with a typewriter in the basement.
You misunderstood, I never said management is worthless. The product managers know what customers want. The product owners keep 8 out of 10 dumb ideas away from the development team. And management again leans on the development team to find out what is actually technically possible and in what time frame.
If management just threw every customer wish into a magic black box to get code out, even if that code was perfect, you wouldn’t have a product. You’d have a pile of steaming crap.
I’ve done plenty of code reviews, they only work if they are small human readable increments. Like they say: A code review of 100 lines might take an hour. A code review of 10000 lines takes thirty minutes.
AI would spit out so much code, with so little context for the developer, that it would be impossible to properly review.
Absolutely true, but many are moving in the direction of implementing those solutions with AIs.
Which is why plenty of companies merely pay lip service to it, or don’t do it at all and outsource it to ‘communities’.