I learned to code while I was studying AI around 2016 or 2017. I wanted to work on neural networks at a young startup, and although I had gone through the entire catalog of Andrew Ng’s machine learning courses, I felt unprepared.

My AI knowledge was fine, but as a lover of theory, I knew very well that my weakness was practice. I was missing the tacit know-how I could only gain by getting my hands dirty with floats, strings, and function calls. I knew everything from the bias-variance trade-off (obsolete now, right?) to the strengths and perils of the different activation functions. But I knew nothing about coding.

So I did what every junior who meets AI for the first time does: I learned Python.

Well, saying that I “learned Python” would be an overstatement (surely no one would’ve hired me as a backend developer), but I can say, for what it’s worth given my limited expertise, that I liked one particular aspect of programming above all others: it helped me solve problems that I couldn’t solve otherwise.

Code, which at the time looked to me like half sentences, half hieroglyphics, was a unique toolset that allowed me to communicate with computers at a deeper level than what users are accustomed to.

It also made me feel like a hacker, like in the movies, but a grudging one. I never managed to read code the way pianists sight-read sheet music or chess grandmasters read a board at a glance. I liked the goal of programming, solving problems, but I never got used to the medium.

In a way, I was a problem solver (how pompous is that) who had some rudimentary notions of coding instead of a coder who happened to be tasked with solving some problems. I was never a hacker.

I didn’t stay up late digging through the documentation of some obscure API to build a side project that tracks the jets of rich people (which sounds both fun and laudable). Time and again, my only desire while coding was to finish so I could go read something else about neural networks.

I eventually got into my dream startup in early 2018 (which turned out to be a dud: we wanted to build a bidirectional machine translation system for sign language, but the project never took off; it seems we didn’t follow the scaling laws). I told my interviewer, who would become my boss, that although I knew how to code, I preferred a more “thinking type” job (the audacity, right?).

I passed, but only because I had written an article about DeepMind’s AlphaGo Zero that got published in a governmental engineering magazine (my first-ever publication!). I guess my passion as a thinking-type person, whatever that is, shone through enough that my demands weren’t met with a mocking laugh.

And I did just that. For three years straight. I avoided coding as much as I could. I never developed an attachment to it, much less to any particular programming language (including Python). I cared about problem-solving, but I realized, not without surprise given my then-introverted character, that I preferred to manage people. Much later I realized the deeper insight behind that preference: managing people is problem-solving at the highest level of abstraction. If you do it well, it’s much more valuable than coding itself because you can always instruct others to code things for you while keeping a broader vision. (The best option would have been to learn both skills, but I was not that smart.)

The startup hired a couple of engineers, both great coders and even better colleagues. I went on to take a more managerial role as a decision-maker, another of those abstract-sounding titles that was only necessary because no one above me knew anything about AI at all. (Funnily, it was around that time that we found the “Attention Is All You Need” paper that introduced transformers, not knowing that we were reading a crystal ball showing us the future, for free.)

My colleagues loved to code. They hated talking with bosses and making decisions, but they loved following instructions so they could bury their heads in the colorful scribbles written on their black-background screens (they always had at least two). They spent hours tweaking their programs, hacking their way past the next obstacle, reading documentation, trying new approaches, and discussing the bugs they couldn’t debug. When they finally cracked one, the satisfaction was palpable. Not just the relief that comes from solving a math puzzle but the fulfillment and self-realization that a painter or a writer feels when a masterpiece is finished.

I was in awe. Of their mastery, but also of their ability to withstand eight long hours submerged in that world of gibberish symbols and unintelligible error messages, only to emerge at the end of the day, fresh like just-rinsed lettuce and charged with more energy to come back the day after and keep trying to solve the exact. same. bug.

It was a kind of satisfaction I couldn’t understand. One I never tasted except, if not in provenance then at least in intensity, when one of my decisions, the risky kind that sends a company in an entirely new direction, turned out to be correct. They were tacticians. I was a strategist. We were all problem solvers. But I was never, and will never be, a coder.

For them, problem-solving was the goal. But coding was much more than that. Coding was their identity. That’s why, if they still live to code first and foremost (which I don’t know; I lost contact a while ago), they’re poised to lose a lot in the coming years.

“Over the course of the last 10 years or 15 years, almost everybody … would tell you that it’s vital that your children learn computer science, that everybody should learn how to program and in fact, it’s almost exactly the opposite. It is our job to create computing technologies such that nobody has to program. And that the programming language is human. Everybody in the world is now a programmer. … This is the miracle of artificial intelligence. … For the very first time … the technology divide has been completely closed.”

That’s Nvidia CEO Jensen Huang during the World Government Summit 2024, responding to the million-dollar question in the age of AI: What should people learn?

I’m not sure if veteran coders in the audience would call his prediction crazy, but I’d say it’s, at the very least, counterintuitive. So the world is fully digitalized, computers are everywhere, and every company and institution needs people who know how to communicate with those alien machines, but suddenly computer science is a bad career choice? How come Huang, whose hardware company has the biggest moat in the AI space (one growing by the day) thanks precisely to a software library (CUDA), is saying that software engineering is not something the coming generations should learn?


