
Tech Brewed
Welcome to Tech Brewed, your go-to podcast for the latest in technology products, training, and tips. Whether you're a tech enthusiast or a business professional, our show offers valuable insights into the ever-evolving world of technology.
What We Cover
Home and Business Technology Products
We delve into the latest gadgets and tools that can enhance your home and business environments. From smart home devices to enterprise-level solutions, we keep you updated on the best products for your needs.
Technology Training and Tips
Our episodes are packed with practical advice and training tips to help you maximize your tech investments. Whether you want to improve your cybersecurity or optimize your workflow, we've got you covered.
Creative Technology Software and AI
In today's digital age, creative technology software and artificial intelligence are game-changers. We explore how these innovations transform various industries and offer insights into their practical applications.
Featured Segments
AI and Its Impact
Discover how AI is changing the workplace and driving innovation. Our discussions range from improving audio quality with AI to extending life through advanced technologies.
Practical AI
Our goal is to make artificial intelligence accessible and practical for everyone. We break down complex topics into easy-to-understand segments, ensuring you stay ahead of the curve.
Join us on Tech Brewed for a deep dive into the tech world, where we blend expert knowledge with practical advice to help you navigate the digital landscape. Subscribe now and stay informed on the latest trends and innovations!
Tech Brewed
Seagate’s Game-Changing NVMe Hard Drives for Scalable AI Workloads
Welcome to Tech Brewed! In this episode, host Greg Doig takes you deep into the cutting edge of AI infrastructure, focusing on Seagate’s bold move to bring NVMe technology to traditional hard drives. As AI workloads balloon to petabyte—and even exabyte—scales, companies are facing a storage crisis: SSDs are fast but prohibitively expensive, while traditional hard drives and cloud storage come with their own frustrating trade-offs.
Greg breaks down how Seagate’s NVMe hard drives could change the game, combining the speed and streamlined software of NVMe with the cost-effective, high-capacity benefits of traditional HDDs. Discover how this technology promises to cut latency, simplify architectures, and enable direct GPU-to-storage access—all while slashing power usage and carbon footprint for more sustainable AI growth. Plus, hear real-world results from Seagate’s own smart factories where these innovations are already in action.
If you’re looking to stay ahead in the rapidly evolving world of AI and tech infrastructure, grab your favorite drink and tune in—you won’t want to miss this conversation!
Subscribe to the weekly tech newsletter at https://gregdoig.com
Welcome to Tech Brewed with your host, Greg Doig. We dive deep into the latest tech trends, innovations, and conversations that matter. Whether you're a tech enthusiast, an industry professional, or just curious about how technology is shaping our world, you've found the right place. So grab your favorite drink, settle in, and let's explore the fascinating world of technology together.

Hey everyone, and welcome back to the show. I'm Greg Doig, and today we're diving into something that's absolutely fascinating and, frankly, something I think is going to be a game changer for the AI industry. We're talking about NVMe hard drives. Seagate just released some details about their work on bringing NVMe technology to traditional hard drives, specifically for AI workloads. And after digging into this, I think it could solve one of the biggest infrastructure headaches facing AI companies today.

So let's break this down. First, let's talk about the elephant in the room. Hopefully there's not one in your room. But anyway, AI storage is becoming a massive problem. And I mean massive. Think about this for a second: machine learning data sets now require petabytes of storage, and some enterprises are dealing with exabyte-scale data sets. That's a million terabytes, folks. The scale is just mind-boggling. And here's the thing: it's not just about storing this data. You need to retrieve it efficiently, process it quickly, and do all of this at a cost that doesn't bankrupt your company.

Right now we've got three main approaches, and they all have serious limitations. First, there's the SSD route. SSDs are fast, blazingly fast. But here's the problem: they're expensive. Like, really expensive when you're talking about petabyte-scale storage. Most companies simply can't afford to store their entire AI training data sets on SSDs; it's just not economically viable right now.

Second, you've got traditional SAS and SATA hard drives. These are much cheaper per terabyte. But here's where it gets interesting: these interfaces, SAS and SATA, were designed decades ago. They rely on proprietary silicon, host bus adapters, and all these additional controller layers that weren't built with AI workloads in mind. So you end up with complexity, latency, and bottlenecks.

The third option is cloud storage. And while cloud storage sounds great in theory, in practice you're dealing with high data transfer costs, latency spikes, and unpredictable retrieval times. Your expensive GPU clusters end up sitting idle, waiting for data to arrive over the network. So you can see the problem here: fast but expensive, cheap but slow and complex, or flexible but unpredictable and costly.
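To put that cost trade-off in rough numbers, here's a quick back-of-the-envelope comparison in Python. The dollars-per-terabyte figures are illustrative assumptions for this sketch, not prices quoted in the episode or by Seagate.

```python
# Back-of-the-envelope media cost for petabyte- and exabyte-scale AI data sets.
# The $/TB figures are assumed for illustration only, not vendor pricing.

PETABYTE_TB = 1_000        # 1 PB in terabytes
EXABYTE_TB = 1_000_000     # 1 EB in terabytes ("a million terabytes")

SSD_COST_PER_TB = 80.0     # assumed enterprise NVMe SSD, $/TB
HDD_COST_PER_TB = 15.0     # assumed high-capacity hard drive, $/TB

def raw_media_cost(capacity_tb: float, cost_per_tb: float) -> float:
    """Raw media cost only; ignores servers, networking, power, and redundancy."""
    return capacity_tb * cost_per_tb

for label, capacity_tb in [("1 PB", PETABYTE_TB), ("1 EB", EXABYTE_TB)]:
    ssd_cost = raw_media_cost(capacity_tb, SSD_COST_PER_TB)
    hdd_cost = raw_media_cost(capacity_tb, HDD_COST_PER_TB)
    print(f"{label}: SSD ~${ssd_cost:,.0f} vs HDD ~${hdd_cost:,.0f} "
          f"({ssd_cost / hdd_cost:.1f}x premium for all-flash)")
```

Whatever prices you plug in, it's the ratio that matters: at exabyte scale, an all-flash tier multiplies the raw media bill several times over.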
So Seagate says they're working on a solution, and this is where their approach gets really interesting. They're essentially asking: what if we could bring the best of both worlds together? Their answer is developing NVMe technology specifically for high-capacity hard drives. Now, this might sound incremental, but it's actually pretty revolutionary when you think about it. See, NVMe was originally designed for SSDs; it's a protocol built for speed and low latency. But Seagate is pioneering bringing this technology to traditional hard drives, and the benefits are pretty compelling.

First, simplified deployment. With NVMe hard drives, you eliminate the host bus adapters, protocol bridges, and additional SAS infrastructure I mentioned earlier. You get a much cleaner, more streamlined architecture. Second, a unified software stack. Instead of managing separate software layers for your SSDs and hard drives, everything works through a single NVMe driver and OS stack. That's a huge operational simplification.

But here's where it gets really cool: direct GPU-to-storage access. With traditional storage, your data has to go through CPU-driven pipelines. The GPU wants data, so it asks the CPU; the CPU talks to the storage controller; the data comes back through the CPU; and then it finally gets to the GPU. That's a lot of stops along the way. With NVMe hard drives and DPUs (data processing units), you can create a direct path from GPU to storage, essentially bypassing those CPU bottlenecks entirely. And then there's NVMe over Fabrics, which has its own abbreviation, NVMe-oF. This lets these drives integrate into distributed AI storage architectures, so you can scale seamlessly across multiple racks and multiple data centers.

And the proof of concept? Well, here we go. This all sounds great in theory, but Seagate actually built a proof of concept to test it out, and the results are pretty impressive. According to the report I read, they integrated NVMe hard drives with NVMe SSDs, NVIDIA BlueField DPUs, and something called AIStore. AIStore is interesting: it's software that dynamically optimizes caching and tiering. So what did they find? Reduced latency: that direct GPU-to-storage communication via DPUs really did help reduce storage-related latency in AI workflows. A simplified architecture was another result: eliminating all that legacy SAS and SATA overhead made the system much cleaner and more efficient. They also saw optimized performance: the AIStore software was able to dynamically manage what data lived where, keeping hot data on SSDs and moving cooler data to the hard drives automatically. And seamless scaling: the NVMe-oF integration proved that you really could build composable, multi-rack AI storage clusters.
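To make that hot/cold tiering idea concrete, here's a minimal sketch of the kind of policy the episode attributes to AIStore: frequently read objects get promoted to the SSD tier, and idle ones get demoted back to high-capacity NVMe hard drives. The class names, thresholds, and logic are hypothetical illustrations, not AIStore's actual API or Seagate's implementation.

```python
import time
from dataclasses import dataclass, field

# Toy hot/cold tiering policy in the spirit of the SSD + NVMe hard drive setup
# described in the episode. Names and thresholds are hypothetical.

@dataclass
class StoredObject:
    name: str
    tier: str = "hdd"                       # "ssd" (hot) or "hdd" (cold)
    access_count: int = 0
    last_access: float = field(default_factory=time.time)

class TieringPolicy:
    def __init__(self, hot_after_hits=3, cold_after_seconds=3600.0):
        self.hot_after_hits = hot_after_hits          # promote after this many reads
        self.cold_after_seconds = cold_after_seconds  # demote if idle this long
        self.objects = {}                             # name -> StoredObject

    def record_access(self, name):
        obj = self.objects.setdefault(name, StoredObject(name))
        obj.access_count += 1
        obj.last_access = time.time()
        # Frequently read training data gets promoted to the SSD cache tier.
        if obj.tier == "hdd" and obj.access_count >= self.hot_after_hits:
            obj.tier = "ssd"
        return obj

    def demote_idle(self):
        # Objects that have gone cold move back to high-capacity hard drives.
        now = time.time()
        for obj in self.objects.values():
            if obj.tier == "ssd" and now - obj.last_access > self.cold_after_seconds:
                obj.tier = "hdd"
                obj.access_count = 0

policy = TieringPolicy()
for _ in range(3):
    policy.record_access("train_shard_0001")
print(policy.objects["train_shard_0001"].tier)   # "ssd" after repeated reads
```

A real tiering engine would also weigh object size, bandwidth, and per-tier capacity, but the core idea is the same: the software, not the operator, decides which tier each piece of training data lives on.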
And here's what I really love about this story. Seagate isn't just developing this technology in a lab; they're actually using it in their own smart factories. At their quantum antenna production facilities, they're using AI-driven defect detection, which requires high-speed image ingestion and rapid retrieval for model training. So they're essentially eating their own dog food, which gives me a lot more confidence in the technology. They need mass-capacity storage for high-definition images without compression, efficient long-term storage for AI training data sets, and seamless access for model retraining and continuous improvement. And they're finding that NVMe hard drives enable all of this in a way that's both cost-effective and performant.

Now, there's another angle to this story that I think is really important: sustainability. AI infrastructure consumes massive amounts of power, and as AI adoption scales, this is becoming a real problem. Data centers are already consuming significant portions of global electricity production. But here's where NVMe hard drives could make a real difference. They are ten times more efficient in terms of embodied carbon per terabyte, that's the carbon footprint of manufacturing and materials, and they use four times less operating power per terabyte compared to SSD-heavy architectures. And of course, they're significantly cheaper per terabyte, which reduces the total cost of ownership for AI storage at scale. When you're talking about exabyte-scale deployments, these efficiency gains really add up.

So where is this all heading? Seagate is working on several fronts. They're scaling their Mozaic platform, which currently ships 36-terabyte drives, to develop even higher-capacity NVMe hard drives. They're advancing NVMe-oF support to enable AI workloads to scale seamlessly across hybrid environments. And they're creating reference architectures so AI developers can deploy these optimized storage solutions more easily.

Look, I think this is one of those technologies that could really change the game. Right now, AI companies are making tough trade-offs between performance, cost, and scale when it comes to storage. Seagate's NVMe hard drive approach offers a path to having your cake and eating it too: getting the performance benefits of NVMe with the cost and capacity advantages of hard drives. And the fact that they're proving this out in their own production environments gives me confidence that this isn't just marketing hype. The AI revolution is creating unprecedented demands on storage infrastructure. Solutions like this, which can deliver both performance and cost effectiveness at scale, are going to be crucial for enabling the next wave of AI innovation.

That's all for today's deep dive. If you found this useful, make sure to subscribe and share it with your colleagues who are dealing with AI infrastructure challenges. And until next time, let's keep building the future.

Thank you for tuning in to another episode of Tech Brewed. If you enjoyed today's discussion, don't forget to subscribe wherever you get your podcasts. Have questions or suggestions for future topics? Reach out on our website or social media channels. Until next time, Greg asked me to remind you that the future of tech is brewing right now, and we're all part of that journey. Stay curious, stay connected, and we will catch you on our next episode.