LLMs: The best software development tutor ever - with big caveats

I've recently started learning Python for fun, and I've manually copy-typed my way to my very first Streamlit app - CatGPT Nekomancer!

In the process, I've discovered some fascinating things about Large Language Models (LLMs) like ChatGPT, and how they fit into learning a new programming language.

I'm particularly tickled by how I (mostly) implemented CatGPT Nekomancer by blindly following instructions, and then used it to understand what I'd done. There's just something magical about making something, and then having it teach you how you made it.
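
For context, here's a rough sketch of what a minimal Streamlit chatbot along these lines might look like. To be clear, this is my own illustrative reconstruction, not the actual CatGPT Nekomancer source - it assumes the OpenAI Python client, and the model name and system prompt are just placeholders.

```python
import streamlit as st
from openai import OpenAI

st.title("CatGPT Nekomancer (sketch)")

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# st.chat_input returns the user's message, or None if nothing has been submitted yet
if prompt := st.chat_input("Ask the cat anything"):
    st.chat_message("user").write(prompt)

    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "You are a cat. Answer everything with feline flair."},
            {"role": "user", "content": prompt},
        ],
    )

    st.chat_message("assistant").write(response.choices[0].message.content)
```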

LLMs are great at explaining what a piece of code does

This is because they're functioning purely as "translators". Translation plays to the core strength of LLMs - statistical pattern matching. Good judgement isn't needed, because we're only asking "what", not "how" or "should".

Here's an example of CatGPT explaining its source code, in response to the prompt "explain what this code does", followed by the code.

This explanation was a little too high-level for me. I wanted to really understand what each line of code was doing. This was easily fixed with a different prompt: "explain this code line by line to a novice programmer". This gave me exactly the level of detail I needed.

I've worked with some pretty great software developers from all over the world who aren't native English speakers. My partner and I are both bilingual, and we'd previously experimented a little with using LLMs to translate recipes. We found that, as long as we steered around the LLM's tendency to interpolate (aka "hallucinate", aka "lie"), they did a much better job than Google Translate.

So when my partner asked, "What if we get it to explain in Croatian? This would have been HUGE for me when I was learning," it was a no-brainer to give it a try with this prompt: "explain this code line by line to a novice programmer, in Croatian." For Croatian, at least, my partner verified that CatGPT's translation and explanation were "brilliant".
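
If you'd rather drive this from code than from a chat window, the same prompts work via an API. Here's a minimal sketch assuming the OpenAI Python client - the model name is a placeholder, and the snippet being explained is just an example:

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def explain_code(code: str, language: str = "English") -> str:
    """Ask the model for a novice-friendly, line-by-line explanation."""
    prompt = (
        f"Explain this code line by line to a novice programmer, in {language}.\n\n"
        f"{code}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

print(explain_code('print("Hello, world!")', language="Croatian"))
```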

I can also ask the LLM to explain specific parts of the code that I don't understand, and learn new concepts that way. For example, I wasn't familiar with f-strings in Python, which I encountered while working on a different Python experiment. Thanks to CatGPT, I very quickly understood that f-strings are string literals that can embed expressions directly - nifty! I particularly love how the LLM fits so seamlessly into my personal learning "flow". Instead of having to go off to the greater interwebs and trawl through answers about f-strings in Python to figure it out from there, I have my own personal tutor.
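
Here's a tiny illustration of the idea (my own example, not CatGPT's): anything between the curly braces is evaluated and dropped straight into the string.

```python
name = "CatGPT"
lives = 9

# The expressions inside {} are evaluated when the string is built.
print(f"{name} has {lives} lives - that's {lives - 1} more than most chatbots.")
# -> CatGPT has 9 lives - that's 8 more than most chatbots.
```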

LLMs increase the value that good software developers bring to the table

It's generally agreed - at least among software development managers and similar roles - that a good developer can pick up a new language pretty quickly and competently. That's because the core of what makes a good developer isn't the knowledge of a particular language. Rather, it's their grasp of transferrable concepts and frameworks, and their understanding of best practices as principles. It's not about rote memorisation - it's about good judgement powered by understanding and experience.

Before LLMs exploded on the scene, a good developer could have everything I listed above, but when picking up a new language, or working with one they're rusty in, there'd still be a lag while they learned the basics of the language (syntax and so on). With LLMs, that lag is much smaller, which lets the developer bring their strengths to bear much faster.

Caveat 1: LLMs are bad at advising on how a feature or function should be implemented

LLMs are based on statistical pattern matching, which makes them great at translation. It's also what makes them bad at anything that requires judgement calls based on a larger and often ambiguous context. They're not always wrong about "shoulds"; they're just right far less often than the average human developer.

I believe this is also what makes LLMs very weak at the parts of software development that are presentational, or that aren't primarily about logic. HTML, CSS, and web accessibility all fall into that bucket: they're not about logic, and they operate in a large and ambiguous context. It also probably doesn't help that LLMs have ingested the interwebs, and even today there are probably far more sites styling button text with <span> tags than sites using an actual <button> element. It's not like the LLM can tell which approach is better. After all, it's not thinking - it's pattern matching based on statistics.

Caveat 2: LLMs can't coach or mentor

Real "personalisation" is needed for coaching and mentoring. Both of these require human judgement and experience about the subject matter, as well as the individual receiving the coaching or mentoring. They also require (arguably to a lesser extent) a wish on the part of the coach or mentor for the person they're working with to learn and succeed. The simulated thing that we've come to call "personalisation" (e.g. a script grabbing your name from a database) does not and cannot work in this context.

The key to using LLMs effectively when learning software development: Know what you need

If you need an explanation of what a piece of code does, LLMs are a great and reliable help. Even more so if you're not a native English speaker - LLMs can function as your own personal translator for both language and code.

If you need good advice on how you should implement something, then LLMs aren't going to help much. Quite the opposite. Since they lack judgement (indeed, they do NOT judge), what they come up with is likely to be misguided at best.

It all comes down to the age-old "common sense" wisdom of using the right tools for the job. :)