Content provided by a16z and Andreessen Horowitz. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by a16z and Andreessen Horowitz or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.

Remaking the UI for AI

40:05
 

Manage episode 418534704 series 2546451

Make sure to check out our new AI + a16z feed: https://link.chtbl.com/aiplusa16z

a16z General Partner Anjney Midha joins the podcast to discuss what's happening with hardware for artificial intelligence. Nvidia might have cornered the market on training workloads for now, but he believes there's a big opportunity at the inference layer — especially for wearable or similar devices that can become a natural part of our everyday interactions.

Here's one small passage that speaks to his larger thesis on where we're heading:

"I think why we're seeing so many developers flock to Ollama is because there is a lot of demand from consumers to interact with language models in private ways. And that means that they're going to have to figure out how to get the models to run locally without ever leaving without ever the user's context, and data leaving the user's device. And that's going to result, I think, in a renaissance of new kinds of chips that are capable of handling massive workloads of inference on device.

"We are yet to see those unlocked, but the good news is that open source models are phenomenal at unlocking efficiency. The open source language model ecosystem is just so ravenous."

More from Anjney:

The Quest for AGI: Q*, Self-Play, and Synthetic Data

Making the Most of Open Source AI

Safety in Numbers: Keeping AI Open

Investing in Luma AI

Follow everyone on X:

Anjney Midha

Derrick Harris

Check out everything a16z is doing with artificial intelligence here, including articles, projects, and more podcasts.

Stay Updated:

Find a16z on Twitter: https://twitter.com/a16z

Find a16z on LinkedIn: https://www.linkedin.com/company/a16z

Subscribe on your favorite podcast app: https://a16z.simplecast.com/

Follow our host: https://twitter.com/stephsmithio

Please note that the content here is for informational purposes only; should NOT be taken as legal, business, tax, or investment advice or be used to evaluate any investment or security; and is not directed at any investors or potential investors in any a16z fund. a16z and its affiliates may maintain investments in the companies discussed. For more details please see a16z.com/disclosures.


a16z Podcast

51,307 subscribers
