Ask HN: Anyone else struggle with how to learn coding in the AI era?

44Bulldog - 23h

I'm someone who got into building/programming in early 2025, when vibe coding tools became more usable. Since then, I'd like to think that I have developed a lot as a programmer, but I still have this deep sense of imposter syndrome / worry that AI is too much of a crutch and I'm not really learning.

I have shipped a few projects; I always review AI-suggested code, do daily coding practice without AI, watch YouTube videos, etc., but I still don't know if I'm striking the right balance or whether I can really call myself a programmer.

I often see people say that the solution is to just fully learn to code without AI (i.e., go "cold turkey"), which may be best, but I wonder if the optimal path is somewhere in between, given that AI is clearly changing the game here in terms of what it means to be a programmer.

I'm curious how you have all handled this balancing act in the past few years. More concretely, what strategies do you use to stay efficient and ship quickly while also taking the time to really process, understand, and learn what you are doing?

I've been programming for 60 years. I started working with LLMs in mid-2025 and have created a number of useful tools in my consulting business. For the programming part, and actually the planning part, I estimate that it increases my productivity by a factor of 5.

Additionally, I am always looking closely at the code, and I think I am an incrementally better programmer for it.

My two cents as a university teacher:

In my view AI tools are a sort of super-advanced interactive documentation. You can learn factual information (hallucinations excluded) by either asking or looking at the generated code and explanations of it. But just as documentation alone was not a sufficient learning tool before, AI is not one now.

What AI cannot give you, and what I suggest you learn through other resources:

- algorithmic proficiency, i.e. how to decompose your problems into smaller parts and compose a solution. You don’t necessarily need a full algorithms course (even though you can find good ones online for free), but familiarising yourself with at least some classical non-trivial algorithms (e.g. sorting or graph-related ones) is mind-changing (see the small sketch after this list).

- high-level design and architecture, i.e. how to design abstractions and use them to obtain a maintainable codebase when size grows. Here the best way is to look at the code of established codebases in your preferred programming language. A good writer is an avid reader. A good programmer reads a lot of other people’s code.

- how programming languages work, i.e. the different paradigms and ways of thinking about programming. This lets you avoid fixating on a single one and lets you pick the right tool for each task. I suggest learning both strongly-typed and dynamic languages, to get a feel for their pros and cons.
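To make the algorithms point concrete, here is a minimal merge sort sketch in Python (my own toy example, nothing more): decompose the list into halves, solve each half, then compose the results back together.

    # Merge sort: decompose into smaller parts, compose a solution.
    def merge_sort(items):
        if len(items) <= 1:
            return items  # 0 or 1 elements: already sorted
        mid = len(items) // 2
        left = merge_sort(items[:mid])
        right = merge_sort(items[mid:])
        # Compose: merge the two sorted halves into one sorted list.
        merged, i, j = [], 0, 0
        while i < len(left) and j < len(right):
            if left[i] <= right[j]:
                merged.append(left[i])
                i += 1
            else:
                merged.append(right[j])
                j += 1
        merged.extend(left[i:])
        merged.extend(right[j:])
        return merged

    print(merge_sort([5, 2, 9, 1, 7]))  # [1, 2, 5, 7, 9]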

That’s an incomplete list off the top of my head.

You can still use AI as a tool in learning these things, but good old books and online resources (like Coursera) worked really well for decades and are not obsolete at all.

And the last thing is the most important: curiosity about how things work and about how to make them better!

This is great advice and will give a good background in programming that mirrors what you would learn in a CS program.

I'd also like to suggest studying the practical side of building software that many university programs don't spend much time on. To help address this gap, John Ousterhout wrote A Philosophy of Software Design. He has retired from teaching, but captured the hard-won lessons in the book.

This type of book offers the perspective I wish I had developed more before working in software teams early on, as it would have made me a more valuable developer right off the bat. Instead, I went deep on architecture patterns and language theory, becoming somewhat insufferable to my peers (who were very tolerant and kind in return!) for the first few years. 20 years later, I can see that I was trying to hammer a CS "peg" into a business-software-shaped hole :)

I learned all of my programming outside of university and textbooks. It’s one way to learn. Not the only way though - and it has its limits - but you can get pretty far.

But here is my advice. Learning by doing with AI seems akin to copying source from somewhere else (i.e. view source, Stack Overflow).

My tips:

- Understand all of the code in a commit before committing it (per feature/bug).

- Learn by asking AI for other ways or patterns to accomplish the thing it suggests.

- Ask Claude Code to explain the code until you understand it.

- If code looks complex, ask if it can be simplified. Then ask why the simple solution is better.

- Tell AI that you’d like to use OOP, functional programming, etc.
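For that last tip, the same toy task steered both ways looks something like this (a hedged sketch in Python; the AI's actual output will of course vary):

    # One task, two paradigms: ask the AI for both and compare.

    # OOP style: state lives in an object.
    class Counter:
        def __init__(self):
            self.total = 0

        def add(self, n):
            self.total += n

    c = Counter()
    for n in [1, 2, 3]:
        c.add(n)
    print(c.total)  # 6

    # Functional style: no mutable state, just a fold.
    from functools import reduce
    print(reduce(lambda acc, n: acc + n, [1, 2, 3], 0))  # 6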

One way to measure if you’re learning is to pay attention to how often you accept AI’s first suggestion versus how many times you steer it in a different direction.

It’s really endless if your mindset is to build AND learn. I don’t think you need to worry about it based on the fact you’re here asking this question.

I have found I'm always having to steer it in the right direction. I will think I've given it the right amount of instructions but it tends to do dumb things in ways I haven't anticipated.

Good stuff, and I’d add one more trick from the old Zed Shaw books: if you want to learn something new, type it out yourself. Can you copy paste? Can you make the robot do it? Yes, but going through the motion helps embed it in your brain.

Once it’s deep in your memory, you can take all the shortcuts you want, but now it’s for speed instead of necessity.

Came here to type something similar and saw this comment. +1

Just repeat this until you understand a language's unique ways of implementing things, and understand why a language made those choices compared to others. I always pick one of these experiments to learn a new language, with or without LLM support:

1. Ray tracing

2. Game Boy emulator

3. Expression evaluation (JSONLogic or regex)

These are super easy to implement in hundreds of lines of code; however, if you want to optimize or perfect the implementation, it takes forever, and you need to know a language's nuances to get it right. Focus on performance tuning these implementations and see how far you can go.
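To give a feel for how small the starting point can be, here is a toy arithmetic evaluator in Python (a sketch only; a JSONLogic or regex engine grows out of the same recursive-descent idea):

    # Recursive-descent evaluator for + - * / and parentheses.
    import re

    def evaluate(src):
        tokens = re.findall(r"\d+|[()+\-*/]", src)
        pos = 0

        def peek():
            return tokens[pos] if pos < len(tokens) else None

        def take():
            nonlocal pos
            tok = tokens[pos]
            pos += 1
            return tok

        def expr():  # expr := term (("+" | "-") term)*
            value = term()
            while peek() in ("+", "-"):
                op, rhs = take(), term()
                value = value + rhs if op == "+" else value - rhs
            return value

        def term():  # term := factor (("*" | "/") factor)*
            value = factor()
            while peek() in ("*", "/"):
                op, rhs = take(), factor()
                value = value * rhs if op == "*" else value / rhs
            return value

        def factor():  # factor := number | "(" expr ")"
            if peek() == "(":
                take()           # consume "("
                value = expr()
                take()           # consume ")"
                return value
            return int(take())

        return expr()

    print(evaluate("2 * (3 + 4) - 5"))  # 9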

I think it's nearly impossible to "learn" to the same depth when someone else writes the code: it doesn't matter if it's your teacher, friend, coworker, or AI writing it. There absolutely is a difference when you have to toil to come up with an answer, fail, fail some more, work through design flaws, then eventually come up with the right answer. You learn a lot in the process.

Versus someone or something giving you the correct answer, or even several, and you picking one. You are given what works. But you don't know why, and you don't know what doesn't work.

Learning from AI coding probably is somewhere between traditional coding and just reading about coding. I'm not sure which one it's closer to though.

However, it may not be necessary to learn to that depth now that AI coding is here. I don't really know.

As someone who learned programming the hard way, I hated it even though I am really good at it. It got in the way of me building the useful products and services that I wanted to create.

Because of that I learned Lisp so I could do metaprogramming, because doing it manually would take multiple human lives to create what I want before I die, even (or especially) controlling a small group of people.

We use Claude Code and personally I love it. It is like an electric sawmill instead of humans cutting manually, sweating and exhausted after half an hour of work.

After decades programming I know how to tell the AI what I want, and how to control/log/assert/test/atomize everything so it behaves.

You can use AI to teach you programming; the problem is that you need to tell it what you want, and if you are not experienced you don't know what that is.

So do small projects and let the AI do 80% of the work, and spend the remaining 20% finishing by hand. Usually LLMs are amazing at approaching valid solutions but are really bad at making something perfect. They fix something and destroy something else. You do that part manually.
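As a sketch of what the control/assert/test point can look like in practice (slugify here is a made-up stand-in for a function the AI wrote):

    # Pin down what *you* meant with executable assertions, so
    # regressions from later AI edits fail fast.
    import re

    def slugify(title):
        # Imagine this body was AI-generated.
        slug = re.sub(r"[^a-z0-9]+", "-", title.lower())
        return slug.strip("-")

    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces   everywhere ") == "spaces-everywhere"
    assert slugify("already-a-slug") == "already-a-slug"
    assert slugify("") == ""
    print("all assertions passed")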

But "learning programming" in abstract is like "learning to drill", why do you want to drill? What do you want to drill? Where do you want to drill?

You need to make it concrete: specific projects, with specific timelines. "I want to play the piano" is abstract. "I want to play this version of the Feather theme in 3 months" is specific.

From your description it sounds like you have the most important stuff: variety, a willingness to do things, and a willingness to seek advice. I doubt anyone on HN really knows what it takes to learn to be a coder in the new vibe world; it is really too soon to have seen people 'grow up' and the paths that lead to success or not. In general, though, if you want to learn something you need to do stuff related to what you want to learn, you need to do stuff in many different ways, and you need to ask others what they are doing and have done to see if their paths can help you. Keep doing those things and you will likely be fine. The only other advice I can give you, you probably already know: find a passion project. For me it was (initially) fractals. Then it was a thousand other things. One passion project will get you through a lot of learning.

Learn how to program without AI. Learn about software engineering, including algorithms, data structures, software architecture and design, the lot. Work on projects entirely without AI, and once you have completed some projects, work on some more, also without AI. Continue until you have mastered software engineering.

Only once you can engineer and develop whatever you want _well_ should you use AI, as you will have to direct said AI to follow your design, understand everything the AI spits out, and clean up after the mistakes it makes -- things that require you to be able to program _better_ than the AI in the first place. Never make the mistake of thinking that the AI can come up with a worthwhile design for you.

Always remember that what AI produces fundamentally cannot be trusted as is -- generative AI 'hallucinates' by its very nature -- and that, unlike code you have written yourself, you won't truly understand what it is 'trying' to do off the bat. Sure, you may have written natural language directions for the AI, but natural languages are often ambiguous and often do not fully signal intent, so the AI may not have done what you actually intended for it to do.

Never think that quantity equals quality -- while generative AI may easily spit out reams of code, those reams of code will lack the fundamental quality of code you have hand-written yourself. Think of what the AI generates as if it were the output of a beginning junior developer chronically high on meth, with everything that entails.

Just don’t use an LLM for learning or for doing projects at first. I only use it for things I already know how to do or for research. I treat it like a teenage intern.

> I only use it [..] for research.

And it can be pretty great for that. But I'm not sure this works well for people who don't have experience reading API documentation or coding support sites like Stack Overflow. Beginners with a problem most likely don't know any abstract term for what they want to solve, so they'll feed their scenario meticulously to the LLM, causing it to output an already-tailored solution that obfuscates the core logical solution.

It's like starting to learn how to code by reading advanced code of a sophisticated project.

1. https://exercism.org/

2. disable Copilot

3. only "talk" about concepts and patterns with AI

I agree, use the AI to replace all of your typing, but none of your thinking.

Don't be afraid to go deep into simple-sounding topics. The modern world is so full of learning material that there's the temptation to ingest as much of it as possible, but true learning happens when you give yourself time with one topic at a time. And I'd say this is more important than ever, because generative AI is becoming great precisely at generating things, not so much at understanding complex topics.

That said, learning the fundamental topics can limit your thinking at first if they feel difficult, so it's an interesting question how to keep the naïve creativity of the beginner. That creativity can really help when building with AI, because there are fewer limitations in your thinking based on how things used to be.

If you can’t code without an AI then you don’t really know how to code. It’s important to learn skills manually before automating them.

If you can't do math without a calculator then you really don't know math, but when does that matter?

When you want to analyze functions or learn calculus.

We're all ultimately just learning what we need to to get the job done. After 20 years of programming, it is very clear that nobody knows everything. Everyone just knows their own little slice of the software world, and even then you have to 'use it or lose it'. If you're feeling imposter syndrome, keep a study side project going where you don't use any AI, something like NAND to Tetris that forces you to learn low-level concepts, and then just stay productive using AI for the rest of your work.
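To illustrate the opening idea of NAND to Tetris (my own Python sketch, not taken from the course): every logic gate can be built from NAND alone.

    # Every gate from NAND: the first exercise of NAND to Tetris.
    def nand(a, b):
        return not (a and b)

    def not_(a):
        return nand(a, a)

    def and_(a, b):
        return not_(nand(a, b))

    def or_(a, b):
        return nand(not_(a), not_(b))

    def xor(a, b):
        return and_(or_(a, b), nand(a, b))

    # Truth table for XOR, built purely out of NANDs.
    for a in (False, True):
        for b in (False, True):
            print(a, b, xor(a, b))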

I'd like to draw a parallel to carpentry:

A carpenter uses tools to shape wood into furniture. Each tool in the toolbox has different uses, but some are more efficient than others. For example, a table saw lets the carpenter cut more quickly and accurately than a hand saw. Nobody would say "that's not a real carpenter, he cheats by using a table saw".

A carpenter can also have an assistant (and I'm specifically not talking about an apprentice) who can help with certain tasks. The assistant might be trained by someone else and know how to perform complex tasks. When the carpenter builds something with the assistant's help, is that considered a team effort? Does the carpenter need to take responsibility for the assistant's mistakes, or does the trainer? Who gets credit for the work?

I don't have answers for these questions, but I think the parallel to software is straightforward: we have a new tool (assistant) that's available, and we're trying to use it effectively. Perhaps it's going to replace some of our older tools, and that's a good thing! Some of us will be lazy and offload everything to it, and that's bad.

I do think that learning the fundamentals is as necessary as ever, and AI is a great tool for that as well.

(Disclaimer: I've been programming for about 15 years, and haven't integrated AI into my workflow yet.)

I don’t try to ship quickly. I started learning programming in 2024; I’d say I’m pretty good with Python, proficient in vanilla web tech, OK with C, and I know the basics of React/full-stack. Starting from nothing I’d say I have progressed very fast; I follow a uni CS course. LLMs have certainly helped in explaining concepts and in learning, but I don’t use them to code pretty much at all.

I recognised that my weaknesses are more in understanding the mathematical fundamentals of computation, so now I’m mostly studying maths rather than coding, currently linear algebra and probability theory. Coding is the easy part, I’d say. Hopefully I get to concentrate on the study of my sworn enemy, algorithms, at some point.

I’d like to be able to do low-level and graphics/sound programming some day. Or maybe use that knowledge for some other cool stuff, if we are all replaced by robots anyway.

I recommend Zachtronics games. I wouldn't go as far as to claim direct knowledge or skill transfer to "real" programming, but it sure feels like they exercise the metaphorical muscles in a very different way.

Side note, I'm assuming you find joy in programming. If you don't, there's better ways to spend your time.

You can use AI to lead you to better sources. The issue I face is that whenever I search for something I want to understand in a search engine, the first 10 links are always low-quality SEO links or surface-level AI-generated tutorials. There is a treasure trove of high-quality blogs, books, and interactive tutorials out there that doesn't show up when you search for it. For example, if you wanted to learn socket programming, you'd be better off following Beej's guide to socket programming instead of 100 g4g pages. Similarly, for Bash, you'd actually understand how every word you write works, instead of just memorizing 20 commands, if you followed TLDP's book or lhunath's guide. How do you find these resources? Use Perplexity or Reddit's AI to search for high-quality resources.

I truly believe that even before vibe coding, the pile of abstractions one develops against has been in the way of learning programming and feeling good about it.

You do React + Redux or any other framework and feel like a lot of decisions have been made for you, without grasping the actual reasoning for these decisions.

The best learners I have encountered do the following, and for a year I have been trying to implement it myself:

Learn the platform you develop for on a side project. If you develop for the web, and more on the programming side: learn JS and HTML for the web. You will encounter state management, animations, events, DOM manipulation, etc. Solve them without libraries first.

You never stop learning to code. I'm experimenting with LeetCode and Claude Code - revising the landscape of DS&A (again) enough to steer the which, when, and what, and to validate the how. E.g. I know what a skip list does, when to use one, and fuzzily how it works (see the sketch below). The hardest things about LeetCode: 1) knowing you will forget it, 2) knowing you will never use it much, 3) knowing you are forgoing time to learn other stuff, 4) knowing that AI makes knowing it redundant.
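For the skip list, here is a minimal sketch of the "how" (illustrative only; no deletion, no production guarantees): sorted linked lists stacked in levels, where higher levels skip ahead so search is O(log n) on average.

    import random

    MAX_LEVEL = 8

    class Node:
        def __init__(self, value, level):
            self.value = value
            self.forward = [None] * level  # next node per level

    class SkipList:
        def __init__(self):
            self.head = Node(None, MAX_LEVEL)

        def _random_level(self):
            # Coin flips decide how tall a new node's tower is.
            level = 1
            while level < MAX_LEVEL and random.random() < 0.5:
                level += 1
            return level

        def insert(self, value):
            update = [self.head] * MAX_LEVEL
            node = self.head
            # Walk down from the top, remembering where we turned.
            for lvl in range(MAX_LEVEL - 1, -1, -1):
                while node.forward[lvl] and node.forward[lvl].value < value:
                    node = node.forward[lvl]
                update[lvl] = node
            new = Node(value, self._random_level())
            for lvl in range(len(new.forward)):
                new.forward[lvl] = update[lvl].forward[lvl]
                update[lvl].forward[lvl] = new

        def contains(self, value):
            node = self.head
            for lvl in range(MAX_LEVEL - 1, -1, -1):
                while node.forward[lvl] and node.forward[lvl].value < value:
                    node = node.forward[lvl]
            node = node.forward[0]
            return node is not None and node.value == value

    s = SkipList()
    for v in [3, 7, 1, 9, 4]:
        s.insert(v)
    print(s.contains(7), s.contains(5))  # True False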

Drop AI, open a basic editor, and write everything by hand without asking AI anything. Do searches by yourself. That’s how the world worked for decades pre-2022. Debug on your own, without asking AI anything, as well.

AI has changed nothing in terms of learning to program; it's every bit as complicated as it ever was (well, languages are better now compared to the 1960s, but it's still hard).

Becoming an expert takes years, if not decades. If someone has only started programming in 2025, then they still have a long way to go. I get that seeing others move fast with AI can be discouraging, and the only advice I can give is "ignore them". In fact, ignore everyone attempting to push LLMs upon you. If you're learning to program, you're not really ready for AI-assisted coding; wait ten years.

There's no really satisfying answer other than: Keep at it, you're probably doing better than you think, but it will take years.

> AI has changed nothing in terms of learning to program

In terms of what you should be doing when you learn to program, I fully agree.

In terms of the effects AI has on the activity of learning to program, I think it has: it has made it very tempting (and affordable, so far) to just make the AI build and even adapt the simple stuff for you that you'd otherwise be building and adapting by yourself. I suppose it can even give you the false feeling that you understand the stuff it has built for you by reading the generated code. But this makes you never go through the critical learning steps (trial and error, hard thinking about the thing, noticing what you are missing, etc.).

We already had the possibility to run web searches and copy-paste publicly available stuff, but that came with more friction, and the automated adaptation aspect was not there: you had to do it yourself. I think Gen AI has made it way easier to be lazy in the learning, and it's a trap.

But from the rest of your comment it seems we mostly agree.

+1

If you really can't drop the AI, ask it stuff when you are really blocked, but ask it not to provide code (you need to write the code yourself to understand and learn). Even then, I suspect you'd be better served by a regular web search and by reading tutorials written by human beings who crafted and optimized the writing for pedagogy.

It will probably feel slow and tedious, but that's actually a good, more efficient use of your time.

At this point of your journey, where your goal is above all to learn, I doubt the AI works in your interest. It's already unclear whether it provides a long-term productivity boost to people who are a bit more experienced but still need to improve their craft.

You don't need to optimize the time it takes to build something. You are the one to "optimize".

Is this like learning calligraphy in the typesetting era?

Before the AI era, I didn’t know much Bash, but I was a reasonably OK programmer besides that, I think. I found that by getting AI to write me a lot of Bash scripts, following along, and then making edits myself when I needed small things changed, I ended up with the ability to write Bash, and I actually kind of appreciate it as a language now, whereas before I thought it was confusing. YMMV.

Like anything, with enough dedication you can achieve what you want.

This is a strange analogy, because learning calligraphy is essential for any type designer worth their salt. Read The Stroke by Gerrit Noordzij.

I don’t mean type designers; I mean the Gutenberg press. Before mechanical printing, books were copied by monks using calligraphy, weren’t they?

It's not exactly the same thing.

When the Gutenberg press exists, knowing how to copy whole books by hand is 0% useful anymore, including for running off copies on a press. There's also virtually no advantage to hand-copying a book when you have a press.

You still need to know how to program to build something and maintain it in the long run. You need to be able to understand the Gen AI's output or you are in for some trouble, and to deeply understand the Gen AI's output you need to have practiced programming. What's more, you need to have practiced not only (generic) programming, but the specific stuff you are working on (the domain, the specific technologies, the specific codebase).

It was a little bit of a humorous tease. However, I think there’s a side to it you’re missing, as valid is what you say right now is.

> It was a little bit of a humorous tease

Whoops, missed that, sorry for this!

Not sure I understand the rest of your sentence; I take it you are saying that what I'm saying is only valid right now but could change as Gen AI keeps improving.

I personally think this stuff significantly improving will require a breakthrough / paradigm shift, and that the "sophisticated stochastic parrot" model, even with "reasoning" stuff "patched" on top might only go so far and might quickly plateau (this is not science, only mostly uninformed opinion though).

Hey bud I'm with you there on the next gen breakthroughs requiring more than the current models + reasoning, tho they do take it pretty far. Re the sentence: s/as valid is/as valid as/, but yeah you got me even with the error!

I think truly next gen requires embodiment so systems can learn emotions and consequences, plus reason from their own perspective. I also think the NLP processing can be radically simplified to make training/inference way lower cost. There's also probably another layer we haven't grokked yet, maybe something like NLP/transformers on abstract non linguistic symbolic reasoning, that emerges from linguistics and world models/embodied experience, to truly refine this to the ideal of intellect we are seeking. That should open the gate to AGI, tho there's probably some other magic x-factor step to take a perfectly intelligent individuated synthetic consciousness (in a robot body) to whatever we want from AGI tho. Idk, what do you think? :)