Content provided by Machine Learning Street Talk (MLST). All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by Machine Learning Street Talk (MLST) or their podcast platform partner. If you believe someone is using your copyrighted work without permission, you can follow the process outlined here: https://id.player.fm/legal.
Cohere's SVP Technology - Saurabh Baji

1:30:25
 

Saurabh Baji discusses Cohere's approach to developing and deploying large language models (LLMs) for enterprise use.

* Cohere focuses on pragmatic, efficient models tailored for business applications rather than pursuing the largest possible models.

* They offer flexible deployment options, from cloud services to on-premises installations, to meet diverse enterprise needs.

* Retrieval-augmented generation (RAG) is highlighted as a critical capability, allowing models to leverage enterprise data securely (see the sketch after this list).

* Cohere emphasizes model customization, fine-tuning, and tools like reranking to optimize performance for specific use cases.

* The company has seen significant growth, transitioning from developer-focused to enterprise-oriented services.

* Major customers like Oracle, Fujitsu, and TD Bank are using Cohere's models across various applications, from HR to finance.

* Baji predicts a surge in enterprise AI adoption over the next 12-18 months as more companies move from experimentation to production.

* He emphasizes the importance of trust, security, and verifiability in enterprise AI applications.
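
The retrieve, rerank, then generate pattern described in the RAG and reranking bullets can be made concrete with a short example. The following is a minimal sketch using Cohere's Python SDK, not code from the episode: the model names ("rerank-english-v3.0", "command-r"), the sample documents, and the exact method signatures are assumptions based on the publicly documented v1-style client and may differ from Cohere's current API.

```python
# Minimal RAG sketch: rerank candidate passages, then generate a grounded answer.
# Model names and signatures are assumptions based on Cohere's public Python SDK;
# check the current documentation before relying on them.
import cohere

co = cohere.Client("YOUR_API_KEY")  # placeholder key

query = "What is our parental leave policy?"

# Candidate passages, e.g. retrieved from an internal document store.
docs = [
    "Employees are entitled to 16 weeks of paid parental leave.",
    "The travel policy requires manager approval for international trips.",
    "Parental leave can be split into two blocks within the first year.",
]

# Step 1: rerank the candidates by relevance to the query.
reranked = co.rerank(
    model="rerank-english-v3.0",  # assumed model name
    query=query,
    documents=docs,
    top_n=2,
)
top_docs = [{"snippet": docs[r.index]} for r in reranked.results]

# Step 2: generate an answer grounded in the top-ranked documents only.
response = co.chat(
    model="command-r",  # assumed model name
    message=query,
    documents=top_docs,
)
print(response.text)
```

In a real enterprise deployment the candidate passages would come from search over internal data, and the grounded response can also carry citations back to the source documents, which is what supports the verifiability Baji emphasizes.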

The interview provides insights into Cohere's strategy, technology, and vision for the future of enterprise AI adoption.

https://www.linkedin.com/in/saurabhbaji/

https://x.com/sbaji

https://cohere.com/

https://cohere.com/business

MLST is sponsored by Brave:

The Brave Search API covers over 20 billion webpages, built from scratch without Big Tech biases or the recent extortionate price hikes on search API access. Perfect for AI model training and retrieval-augmented generation. Try it now - get 2,000 free queries monthly at http://brave.com/api.
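
For readers who want to try the sponsor's API, a web-search request looks roughly like the sketch below. The endpoint, header name, and response shape are assumptions based on Brave's public documentation rather than anything stated in the episode; check http://brave.com/api for the current interface.

```python
# Sketch of a Brave Search API web-search request. Endpoint, header, and
# response fields are assumptions from Brave's public docs, not from the episode.
import requests

BRAVE_API_KEY = "YOUR_API_KEY"  # placeholder

resp = requests.get(
    "https://api.search.brave.com/res/v1/web/search",
    headers={
        "Accept": "application/json",
        "X-Subscription-Token": BRAVE_API_KEY,
    },
    params={"q": "retrieval-augmented generation", "count": 5},
    timeout=10,
)
resp.raise_for_status()

# Web results typically carry a title, URL, and description snippet, which can
# serve as candidate passages for a RAG pipeline like the one sketched above.
for result in resp.json().get("web", {}).get("results", []):
    print(result.get("title"), "-", result.get("url"))
```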

TOC (* marks the best bits)

00:00:00 1. Introduction and Background

00:04:24 2. Cloud Infrastructure and LLM Optimization

00:06:43 2.1 Model deployment and fine-tuning strategies *

00:09:37 3. Enterprise AI Deployment Strategies

00:11:10 3.1 Retrieval-augmented generation in enterprise environments *

00:13:40 3.2 Standardization vs. customization in cloud services *

00:18:20 4. AI Model Evaluation and Deployment

00:18:20 4.1 Comprehensive evaluation frameworks *

00:21:20 4.2 Key components of AI model stacks *

00:25:50 5. Retrieval Augmented Generation (RAG) in Enterprise

00:32:10 5.1 Pragmatic approach to RAG implementation *

00:33:45 6. AI Agents and Tool Integration

00:33:45 6.1 Leveraging tools for AI insights *

00:35:30 6.2 Agent-based AI systems and diagnostics *

00:42:55 7. AI Transparency and Reasoning Capabilities

00:49:10 8. AI Model Training and Customization

00:57:10 9. Enterprise AI Model Management

01:02:10 9.1 Managing AI model versions for enterprise customers *

01:04:30 9.2 Future of language model programming *

01:06:10 10. AI-Driven Software Development

01:06:10 10.1 AI bridging human expression and task achievement *

01:08:00 10.2 AI-driven virtual app fabrics in enterprise *

01:13:33 11. Future of AI and Enterprise Applications

01:21:55 12. Cohere's Customers and Use Cases

01:21:55 12.1 Cohere's growth and enterprise partnerships *

01:27:14 12.2 Diverse customers using generative AI *

01:27:50 12.3 Industry adaptation to generative AI *

01:29:00 13. Technical Advantages of Cohere Models

01:29:00 13.1 Handling large context windows *

01:29:40 13.2 Low latency impact on developer productivity *

Disclaimer: This is the fifth video from our Cohere partnership. We were not told what to say in the interview, and nothing was edited out. Filmed in Seattle in August 2024.
