The VP of AI at Databricks says the war for AI talent is just beginning.


In our final issue of the year, we focus on the war for AI talent, a topic we’ve covered since this newsletter was launched nearly two years ago. Keep reading this week for the latest news from inside Google and Meta.

But first, we need your questions for the Mailbag issue, which we plan to publish as our first issue of 2025. Submit your question through this form or leave it in the comments.

“It’s like finding LeBron James.”

Databricks this week announced what is the largest known funding round for a private tech company in history. The enterprise AI company is in the final stages of raising $10 billion, with almost all of it going toward buying back vested employee shares.

How companies approach compensation is often kept hidden in the technology industry, despite the important role compensation strategy plays in determining which companies get ahead faster. As I’ve covered before, nowhere is this dynamic more intense than in the battle for AI talent.

To better understand what will drive the state of play in 2025, this week I spoke with Naveen Rao, vice president of AI at Databricks. Rao is one of my favorite people to talk to about the AI industry. He has deep technical chops, but he also has a business mind and has successfully sold several startups. His last company, MosaicML, was sold to Databricks in 2023 for $1.3 billion. He now oversees Databricks’ AI products and is closely involved in recruiting top talent.

The conversation below covers the logic behind Databricks’ massive funding round, what kind of AI talent is still scarce, why he thinks AGI isn’t imminent, and more.

The following conversation has been edited for length and clarity.

Why is this round primarily aimed at helping employees sell stock? $10 billion is an enormous amount. You could do a lot with it.

The company is a little over 11 years old. We have employees who have been here for a long time. This is how you ensure liquidity for them.

What most people don’t understand is that this doesn’t go onto Databricks’ balance sheet. It will primarily provide liquidity to former employees [and] liquidity for current and new employees. Because the stock already exists, it’s dilution-neutral. Shares are allocated to employees, and this allows them to sell enough to cover the taxes associated with those shares.

How much does the rapid rise in AI company valuations have to do with the war for talent?

It’s real. What’s important here is that it’s not just pure AI talent, the people who come up with the next big thing or the next big paper. We’re definitely trying to hire those people. But there is a whole infrastructure of software and cloud that needs to be built to support these things. Building models and trying to scale them is not really AI talent per se; it’s infrastructure talent.

The perceived bubble we have around AI has created an environment where all talent is intensively recruited. We must remain competitive.

Who is most aggressive in setting the market price for AI talent?

OpenAI, definitely. Anthropic. Amazon. Google. Meta. xAI. Microsoft. We are in constant competition with all of these companies.

Is it true that fewer than 1,000 researchers are capable of building new frontier models?

Yeah. That’s why the war for talent is so hot. The influence researchers have on organizations is unprecedented. One researcher’s idea can completely change a product. That’s kind of new. In semiconductors, the people who came up with new transistor architectures had that kind of influence.

This is what makes these researchers so sought after. Whoever unlocks the next big idea and the next big thing can have a huge impact on a company’s ability to win.

Do you see the talent pool expanding in the near future? Or do you think there will continue to be restrictions?

I can see the pool expanding. The role expands as it becomes possible to build and manage the appropriate infrastructure. The hard part is the top researchers. It’s like finding LeBron James. There just aren’t many humans who can do that.

I would say the Inflection-style acqui-hires are largely driven by this kind of thinking. These startups have a high concentration of top-tier talent, and the amounts companies pay for them sound outrageous. But it’s not nonsense. That’s why you saw Google hire back Noam Shazeer. It’s so hard to find another Noam Shazeer.

At Nervana, the previous company I started, we had a guy who was arguably the best GPU programmer in the world. He’s now at OpenAI. All the inference that happens in OpenAI’s models runs through his code. When you start calculating the downstream costs, it’s like, “Holy shit, this guy saved us $4 billion.”

“When you start calculating the downstream costs, it’s like, ‘Holy shit, this guy saved us $4 billion.’”

What advantages do you have when you’re trying to hire a researcher at Databricks?

You start to see selection bias among candidates. Some are AGI-or-bust, and that’s okay. That’s a huge motivator for the smartest people. We believe we’ll reach AGI through building products. When people use the technology, it gets better. That’s part of our pitch.

AI has huge hype behind it, but on the Gartner hype curve, it has peaked and is on the way down. I think we’re in that downward trend right now. Databricks, on the other hand, has built a very strong business. That’s very appealing to some people, because they’re not that sensitive to the hype.

Do the researchers you talk to actually believe that AGI is just around the corner? Is there a consensus on when that will come?

Honestly, there isn’t much consensus. I’ve been in this field for a very long time, and I’ve been very vocal that it’s not coming anytime soon. Large language models are a great technology. There is a tremendous amount of economic improvement and efficiency to be gained by building great products around them. But it is not what we used to call AGI, an intelligence like that of humans or even animals.

These things do not create magical intelligence. They make it easier to partition the space of what we call facts and patterns. That’s not the same as building a causal learner. They don’t really understand how the world works.

You may have seen what Ilya Sutskever said recently. We are all groping in the dark. Scaling was a big unlock, so it’s no wonder so many people were excited about it. It turns out we weren’t solving the right problem.

Is test-time compute, the “reasoning” approach, the new idea that gets us to AGI?

No. I think it will be an important part of improving performance. We can improve the quality of answers, perhaps reduce the likelihood of hallucinations, and increase the odds of getting a factual answer. It’s definitely a positive for the field. But can it solve the fundamental problem of the AGI mind? I don’t believe so. I’d be glad to be proven wrong, too.

Do you agree that while existing models perform well, compute and access constraints mean there is still room to build better products?

Yeah. Meta started a few years behind OpenAI and Anthropic and basically caught up, and xAI caught up very quickly. I think that’s because the pace of improvement has essentially stalled.

Nilay Patel compares AI model competition to early Bluetooth. Everyone keeps telling him there’s better Bluetooth, but his phone still won’t connect.

You see this in every product cycle. The first few versions of the iPhone were much better than their predecessors. Now you can’t tell the difference between a three-year-old phone and a new one.

I think that’s what we’re seeing here. How to leverage these LLMs and deploy them to solve business problems is the next frontier.

Elsewhere

  • Google is getting flatter. CEO Sundar Pichai told employees that the company’s string of layoffs has resulted in a 10 percent reduction in the number of managers, directors, and vice presidents, Business Insider reported, and I talked to several employees who heard the comments. Relatedly, Pichai also took the opportunity to add “scrappiness” to the internal definition of “Googleyness.” (Yes, really.) He demurred on the most upvoted employee question about whether layoffs would continue, though I hear he mentioned that “overall” headcount will increase next year.
  • Meta cuts down on perks. File this under “sad violin”: I hear that Meta will stop offering free EV charging at its Bay Area campuses starting in early January. Metamates, keep your heads held high.

Job board

Here are some notable moves this week:

  • Meta promoted John Hegeman to chief revenue officer, reporting to COO Javier Olivan. Another of Olivan’s reports, Justin Osofsky, was also promoted to head of partnerships across the company, including the company’s go-to-market strategy for Llama.
  • Alec Radford, the influential veteran OpenAI researcher who wrote the original GPT research paper, is leaving but is expected to keep working with the company in some capacity. Sivakumar Venkataraman was also recently hired away from Google to lead OpenAI’s search efforts.
  • Coda co-founder and CEO Shishir Mehrotra now also runs Grammarly as the two companies merge, with Grammarly CEO Rahul Roy-Chowdhury remaining a board member.
  • Tencent removed two of its directors, David Wallerstein and Ben Feder, from the Epic Games board after the Justice Department said their involvement violated antitrust laws.
  • Former Twitter CFO Ned Segal is becoming director of housing and economic development for the city of San Francisco.

Additional Links

If you aren’t already getting new issues of Command Line, don’t forget to subscribe to The Verge, which includes unlimited access to all of our stories and an improved ad experience across the web. You’ll also get full access to our archive of past issues.

As always, we’d love to hear from you, especially if you have tips or feedback. Reply here and I’ll get back to you, or ping me securely on Signal.
