Controversial take incoming: If you're building a generative AI product, you might want to think twice before rushing to hire that machine learning specialist. After months of interviewing AI professionals across the industry, from scrappy startups to tech giants, I've noticed a surprising pattern. The skills that made someone a rockstar in traditional ML often don't translate directly to success in the new world of large language models (LLMs) and generative AI.
In fact, for early-stage gen AI product development, a traditional ML background might be not just unnecessary but an active hindrance. Let's unpack why.
The Inverted Process of Gen AI Development
Developing products with generative AI turns the traditional ML workflow on its head. Instead of starting with data collection, model training, and then prototyping, gen AI often begins with rapid prototyping using commercial LLMs. Only later do teams worry about fine-tuning, gathering domain-specific data, or potentially training custom models.
This inversion is possible because off-the-shelf LLMs like GPT-4 are astonishingly capable out of the box, especially for natural language tasks. They often outperform specialist models that would have taken months to develop just a few years ago. This shift fundamentally changes the skills needed in early product development.
Prompting: The New Critical Skill
Enter prompting – the art and science of coaxing the desired behavior out of LLMs through carefully crafted instructions. Effective prompting has emerged as perhaps the most critical skill in gen AI development, and it requires a very different mindset from traditional ML.
The best prompters I've encountered share some key attributes:
Strong verbal and analytical skills
An ability to break complex problems into smaller, well-defined steps
Metacognition – the capacity to articulate tacit knowledge and "think about thinking"
Interestingly, these skills often align more closely with the strengths of generalists – product managers, UX researchers, or even philosophers – than with highly specialized ML engineers.
This creates what I've started calling an "impedance mismatch" between traditional ML skills and the needs of gen AI development. ML specialists are used to thinking in terms of data distributions, loss functions, and model architectures. But with gen AI, the focus shifts to natural language, cognitive task analysis, and understanding the nuances of human instruction.
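To make the decomposition idea concrete, here's a minimal sketch of what "breaking a complex problem into well-defined steps" looks like in practice. The task and step list are invented for illustration; in a real product the resulting prompt would be sent to an LLM API, which is omitted here.

```python
# Hypothetical example: turn a fuzzy task into an explicit, ordered prompt.
def build_stepwise_prompt(task: str, steps: list[str]) -> str:
    """Compose a prompt that walks the model through numbered steps."""
    lines = [f"Task: {task}", "", "Follow these steps in order:"]
    for i, step in enumerate(steps, start=1):
        lines.append(f"{i}. {step}")
    lines += ["", "Show your reasoning for each step before the final answer."]
    return "\n".join(lines)

prompt = build_stepwise_prompt(
    "Summarize a customer complaint email and classify its urgency",
    [
        "Identify the customer's core issue in one sentence.",
        "List any deadlines or financial amounts mentioned.",
        "Classify urgency as low, medium, or high, citing the evidence.",
    ],
)
print(prompt)
```

Notice that none of this is machine learning in the traditional sense. It's cognitive task analysis: making tacit human judgment explicit enough for a model to follow.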
The Role of Traditional Software Engineers
Of course, it's not just ML specialists facing challenges. Traditional software engineers often find themselves in unfamiliar territory with gen AI. The deterministic, rules-based thinking that serves them so well in most domains can be at odds with the probabilistic, fuzzy nature of LLM outputs.
Yet, solid software engineering practices remain crucial for building reliable, scalable gen AI products. The ability to design robust systems, handle errors gracefully, and create intuitive user interfaces is as important as ever. The best teams I've seen pair AI-savvy engineers with strong prompters, creating a powerful combination of technical rigor and creative problem-solving.
The Emerging AI Engineer Role
This new landscape is giving rise to a hybrid role that some are calling the "AI Engineer." The recent AI Engineer Summit highlighted the unique blend of skills required:
A strong foundation in software engineering
Familiarity with ML concepts and LLM capabilities
Expertise in prompt engineering and RAG (retrieval-augmented generation)
Product thinking and UX design sensibilities
It's a tall order, combining technical chops with creativity and strategic thinking. But for those who can master this skill set, the opportunities are immense.
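For readers unfamiliar with RAG, the core loop is simple: retrieve the most relevant document for a question, then splice it into the prompt as grounding context. The toy sketch below scores documents by word overlap; production systems use embedding similarity and an actual LLM call, both omitted here, and the corpus and question are invented for illustration.

```python
# Toy retrieval-augmented generation (RAG) sketch, assuming a tiny in-memory
# corpus. Real systems replace keyword overlap with embedding similarity.
def _words(text: str) -> set[str]:
    """Lowercase and strip trailing punctuation for crude matching."""
    return {w.strip("?.,!") for w in text.lower().split()}

def retrieve(question: str, corpus: list[str]) -> str:
    """Return the corpus document sharing the most words with the question."""
    q = _words(question)
    return max(corpus, key=lambda doc: len(q & _words(doc)))

def build_rag_prompt(question: str, corpus: list[str]) -> str:
    context = retrieve(question, corpus)
    return (
        "Answer using only the context below.\n\n"
        f"Context: {context}\n\n"
        f"Question: {question}"
    )

corpus = [
    "Refunds are processed within 5 business days of approval.",
    "Our API rate limit is 100 requests per minute per key.",
]
print(build_rag_prompt("What is the API rate limit?", corpus))
```

Even this toy version shows why the role is hybrid: the retrieval step is classic software engineering, while deciding what context the model needs and how to frame the instruction is prompt craft.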
When Traditional ML Backgrounds Become Valuable
Don't get me wrong – there's definitely a time and place for deep ML expertise in gen AI. As products mature and scale, skills like building robust evaluation pipelines, fine-tuning models on domain-specific data, and optimizing for latency and cost become crucial. This is where those with traditional ML backgrounds can really shine.
Additionally, some applications – like highly regulated industries or specialized scientific domains – may require custom models from the start. Here, domain knowledge combined with ML expertise is invaluable.
Implications and Future Outlook
For business leaders, this shift means rethinking hiring criteria for AI roles. Rather than focusing solely on ML credentials, consider candidates with strong analytical thinking, clear communication, and the ability to rapidly prototype and iterate.
Aspiring AI professionals should broaden their skill sets beyond just technical ML knowledge. Develop your prompt engineering abilities, study cognitive science and UX design, and cultivate a product mindset.
On an industry level, we may see a democratization of AI development. As the barriers to entry lower, a more diverse range of thinkers and problem-solvers can contribute to shaping the future of AI products.
However, this democratization also raises ethical concerns. As it becomes easier for anyone to create powerful AI systems, how do we ensure responsible development and deployment? The need for ethical guidelines and best practices in AI engineering has never been greater.
Conclusion
The world of gen AI is evolving at a breakneck pace, and our notions of what makes an "AI expert" need to evolve with it. While deep ML knowledge will always have its place, the immediate future belongs to those who can bridge the gap between human needs and AI capabilities.
So before you rush to hire that ML specialist for your gen AI product, take a step back. The skills you really need might be found in unexpected places. The next great AI innovator could be a linguist, a psychologist, or even a particularly insightful product manager. Keep an open mind, and you might just stumble upon the perfect blend of talent to bring your AI vision to life.