
The rise of competent AI coding assistants could shift the role of software engineering managers considerably. Are you ready for that change?

Generative AI-powered programming tools are here – and they’re already effective. Tools like ChatGPT and GitHub Copilot can write usable code to solve both trivial and complex problems, refactor or convert code between languages, create documentation, or explain the function of unknown code.

While these tools aren’t able to replace a junior developer yet, is it only a matter of time? And what can engineering managers do to stay ahead of things?

The genie is out of the bottle

In a sense, it’s already too late to get ahead of AI. The current generation of tools is already being used in the real world, whether managers want them to be or not.

A survey conducted for GitHub found that 92% of developers are now using AI coding tools at work. According to the 500 US-based developers surveyed, “AI coding tools can help them meet existing performance standards with improved code quality, faster outputs, and fewer production-level incidents.” 

Piyush Tripathi, a lead engineer at fintech company Square, is one of these developers. His team uses ChatGPT, another GPT-based tool called Auto-GPT that he likens to “ChatGPT on steroids”, and GitHub Copilot to perform a variety of programming tasks on a daily basis. “I’ve seen engineers who were willing to learn to use AI get twice – or maybe three times – as productive,” he says. 

This creates an immediate window of opportunity for managers who are prepared to embrace these tools right away. “Everyone’s talking about AI and ChatGPT, and trying to include them in their products or use them,” he says. It just depends on “how fast you are and how flexible you are in using these AI tools to simplify your workflow.”

Understanding the risks

While Tripathi feels engineering managers need to understand and explore AI tools, he says they also need realistic expectations of what those tools can actually do. “AI is very experimental,” says Tripathi. “I’ve seen teams where people had high expectations and thought it was going to be a game changer, and then realized it’s buggy and has lots of issues.” He thinks that in the right environment, where it’s okay if things don’t work 100% of the time, the current crop of AI tools can make people more productive.

However, managers can’t simply expect engineers to “just magically learn everything”, Tripathi says. They will need to be trained to make the most of AI if managers want them to be more productive.

They should also anticipate some pushback. “People are going to oppose it, so introduce it in a way that tells people they are important,” he says.

Then there are the legal ramifications of inputting confidential data into these models, and of using the models’ suggestions in production. To complicate matters further, it’s not just existing laws that engineering managers need to consider, but also future legislation and compliance. Loose data handling decisions made now could have serious implications in a few years’ time, especially if the European Union takes a hardline stance on enforcing its proposed AI Act.

The end of programming as we know it?

In the longer term, things are likely to shift radically. If the current generation of AI-powered tools are already making engineers two or three times as effective, what are they going to be capable of in five years?

Matt Welsh, chief architect at AI startup Fixie.ai and a former professor of computer science at Harvard, believes that large language models (LLMs) will bring about “the end of programming.” 

In a widely-discussed Communications of the ACM article, Welsh argues that “the conventional idea of ‘writing a program’ is headed for extinction,” and that complex applications will be trained rather than programmed, relegating humans “to, at best, a supervisory role.”

Over Zoom, Welsh explained that it’s as much an economic argument as anything. “I did the math!” he says. “If you take a Silicon Valley engineer, one engineer is paid around $1,200 a day if you consider their salary and benefits and how many working days there are in a year. Imagine you had an AI that could do their job. The amount of money you pay for the same output is something like $0.12 a day, so you have like a 10,000X reduction in cost.”
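Welsh’s back-of-the-envelope math is easy to reproduce. The figures below are his illustrative ballpark numbers, not measured costs:

```python
# Reproducing Welsh's back-of-the-envelope cost comparison.
# Both figures are his illustrative assumptions, not measured costs.
engineer_cost_per_day = 1200.00  # salary plus benefits, spread over working days
ai_cost_per_day = 0.12           # hypothetical AI cost for the same output

reduction_factor = engineer_cost_per_day / ai_cost_per_day
print(f"Cost reduction: {reduction_factor:,.0f}X")  # prints "Cost reduction: 10,000X"
```

Even if, as Welsh concedes, the true ratio is off by orders of magnitude, the direction of the argument survives: divide any plausible daily engineer cost by any plausible daily compute cost and the gap remains enormous.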

Welsh thinks that even if he’s wildly off in his estimate and it’s only a 10X or even 2X cost saving, it’s still a compelling argument for “massive numbers of jobs being eliminated, because AI can ultimately do this work that we’ve been expecting highly paid, highly skilled, very expensive humans to do.”

Welsh thinks that junior developers are likely to be the most impacted. “If 30% of software projects that are done by humans today could be done by AI in the future, then that’s going to disproportionately impact the people that are doing more mundane things”, like integration and writing boilerplate code. Even for the highly skilled workers that do stick around, their jobs are likely to change as a result.

Hywel Carver, CEO of team coaching service Skillerwhale, believes that if the cost of writing code becomes cheaper, companies will just expect engineers to deliver more code and build out features faster. “Development is something you always want more of,” he says. “I think we will probably end up with a similar number of developers, doing quite different roles, but our expectations from them will massively increase.”

In either case, it’s clear that the role of an engineer will have to adjust as generative AI tools become more prevalent. “We’re seeing a lot of hand-wringing around this right now because people don’t really know what it means to be a software engineer in this world of AI systems being extremely capable,” Welsh says.

Moving up the stack

Hands-on-keyboard coding is of course a big part of software engineering, but it’s not the whole job. Architecture design, solving complex problems, debugging, planning, and collaborating across the entire organization are all important parts too, and LLMs won’t necessarily obviate any of them.

“Literacy around the content and operation of the structure of programs is going to be extremely important, even if I’m not the one writing it,” explains Welsh. “How do you instruct models to interface to software systems? How do you ensure safety? How do you ensure good performance? How do you monitor and measure quality?”

These have always been part of the job of software engineering, but typically for those later in their careers who have been able to move up the stack to tackle more complex problems. If this becomes the entire job, that has a massive potential impact on the profession as a whole.

Oguz Acar, a professor of marketing and innovation at King’s College London, similarly thinks that the role of engineers will shift to defining problems and making decisions. To make good use of AI going forward, he explained, “we want to get better at formulating problems and identifying their root causes… How do we frame it? How do we reframe it? How do we think about the constraints of it?”

Identifying what problems need to be solved, breaking them down into manageable chunks, and choosing what resources are most appropriate for tackling each one sounds a lot like the job of an engineering manager. In the age of AI, does this mean every engineer will need to learn these skills to be effective?

“One of the things that becomes hard is having skills that are strong enough to do a good review and to really understand what an AI has produced when you yourself are no longer writing code,” Carver says.

Preparing for change

It’s impossible to predict exactly how this will all play out, but either way, software development is going to undergo a dramatic shift in the next few years. 

In the short term, it seems likely that coding productivity will explode, whether because of improvements to the current generation of tools or new tools that allow smaller numbers of engineers to oversee LLM-powered coding programs.

Engineering managers will need to prepare for a time when they must review and ensure the quality of a significantly larger quantity of code, not all of it directly written by humans. Alongside that, they will need to pay special attention to potential regulatory, compliance, and data handling issues: sloppy AI practices today could become major legal headaches down the line.

After that, though, it’s really anyone’s guess. At one end of the spectrum you have Welsh, who foresees the end of programming as we know it, with AI programs trained to solve most coding problems. At the other end there are engineers who could certainly be accused of burying their heads in the sand and hoping that generative AI is the next Web3. Somewhere in the middle sits Carver’s position: managers who are informed enough to prepare for this change, but not yet ready to rip up the current playbook.

Either way, engineering managers will still need to bring traditional skills to a new set of challenges: defining problems, ensuring software works as it should, prioritizing product decisions, and working with other departments. The big question really is how much they will be managing humans – and how much they will be managing AI.