PERIAN x Dagster: Reduce Cost, Save Time, and Boost Performance of Your ML Workflows

Introducing PERIAN for Dagster: A Smarter Kind of Compute Infra for Dagster Workflows


In the fast-moving world of AI, the need for scalable, budget-friendly cloud GPU resources just keeps growing. AI teams often find themselves trying to secure the compute resources they need without getting stuck in long-term contracts or blowing their budgets. For many teams using Dagster, this challenge is especially relevant as they orchestrate complex, compute-intensive pipelines that rely on consistent access to high-performance compute resources.


But here’s some good news for all teams using Dagster: the PERIAN Sky Platform now integrates with Dagster! This combination gives AI teams easy, on-demand access to serverless GPU resources, helping them boost the performance of their compute workloads while keeping costs reasonable.



Key Takeaways (TL;DR)


Here’s why you should be excited about this PERIAN and Dagster combo: 


  • Get the GPUs you need, when you need them, with no long-term contracts: The PERIAN Sky and Dagster integration provides easy, on-demand access to cost-efficient serverless GPUs, eliminating the need for reservations or commitments.

  • Cost-effective compute: With PERIAN’s pay-as-you-go model, AI teams only pay for the resources they use, avoiding wasted spend on idle or reserved instances. PERIAN finds the GPUs you need at the best rates.

  • Plug-and-play infra: The integration fits seamlessly into existing Dagster workflows, reducing cloud management overhead and freeing up teams to focus on AI development.



The AI Infra Struggle Is Real, Even for Dagster Users


We all know AI/ML projects need serious computational power, especially GPUs, to handle data processing and model training. After talking to many AI teams using Dagster, we’ve heard the same pain points over and over: those with bigger compute demands struggle to get the GPUs they need right when and where they need them. On-demand availability isn’t guaranteed, and without knowing whether resources will be there when required, planning becomes a guessing game. On top of that, costs for on-demand GPUs are high, especially with hyperscalers like AWS, Azure, and GCP.


The common advice from hyperscalers? Reserve and commit in advance. Otherwise, there’s no clear information for AI teams on when specific GPUs will be available on demand in the future. This lack of transparency leaves teams in the dark, often forced to take whatever accelerators are available rather than the ones best suited for their workload and budget. In addition, the complexity and effort of managing these cloud setups, instead of spending time on their models, pushes AI teams to look for better options to keep projects on track.



The Cheat Code: PERIAN-Dagster Combo


That’s where PERIAN’s integration for Dagster steps in. Dagster already has a solid reputation for managing data and ML pipelines, and with PERIAN’s serverless GPU capabilities added in, AI teams can get GPU power right when they need it, without dealing with reservations or contracts. The PERIAN Sky Platform aggregates GPUs from multiple clouds and automatically executes your Dagster jobs on the best-suited cloud resources for your requirements.



Key Features of the Integration


Teams can now clear their to-do lists and let PERIAN’s Dagster integration handle compute infra for them.


  • Workload dockerization: The integration packages your codebase into a Docker container, keeping your environment consistent and making it as simple as possible to move between environments.

  • Serverless execution: Once your code is containerized, the integration launches it on PERIAN’s scalable GPUs, scaling resources up or down based on demand. This way, you only pay for what you actually use.

  • Automatic cost-efficiency: Forget costly reserved instances that may sit idle. PERIAN’s optimizer automatically picks the most cost-efficient compute for every job with a pure pay-as-you-go model. No wasted resources and no surprise bills.

  • Smooth workflow integration: The integration plugs into existing Dagster workflows, so teams can add PERIAN’s GPU resources without making big changes, keeping development and deployment fast and flexible.



How It Works


Getting started is straightforward. First, make sure Docker is installed on your local machine, that you have access to the PERIAN Sky Platform, and that the dagster-perian package is installed. In your Dagster project, you can then package your codebase into a Docker container with containerize_codebase and run serverless compute jobs on PERIAN with create_perian_job. For full details, check out the documentation and the dagster-perian GitHub page.
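To make this concrete, here is a minimal sketch of how the two steps could be wired together in a Dagster job. It assumes that containerize_codebase and create_perian_job are exposed as ops by the dagster_perian package and can be composed in a standard @job; the configuration keys shown (dockerfile_path, gpu_type, gpu_count) are purely illustrative and may not match the package's actual schema, so check the documentation for the exact names.

```python
# Minimal sketch: wiring dagster-perian into a Dagster job.
# Assumes `pip install dagster dagster-perian` has been run and Docker is available locally.
from dagster import job
from dagster_perian import containerize_codebase, create_perian_job  # assumed to be ops


@job
def train_on_perian():
    # Step 1: package the local codebase into a Docker image.
    # Step 2: submit that image as a serverless GPU job on the PERIAN Sky Platform.
    create_perian_job(containerize_codebase())


if __name__ == "__main__":
    # Illustrative run config; the real config keys are defined by dagster-perian.
    train_on_perian.execute_in_process(
        run_config={
            "ops": {
                "containerize_codebase": {"config": {"dockerfile_path": "./Dockerfile"}},
                "create_perian_job": {"config": {"gpu_type": "A100", "gpu_count": 1}},
            }
        }
    )
```

Because the PERIAN steps behave like regular Dagster ops, a job like this can also be launched from the Dagster UI or on a schedule, just like any other job in your project.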



What’s In It for AI Teams?


It has never been easier to level up your Dagster operations while automatically optimizing your AI infra on the side.


  • Scalability: With PERIAN’s serverless compute, teams can handle any workload, big or small, without the limits of fixed-capacity infrastructure.

  • Flexibility: PERIAN’s resources are always available on demand, so teams can launch compute tasks whenever needed, with no reservations or planning required. Free yourself from lock-in to a single cloud provider and its pricing.

  • Cost savings: Get your compute at the best rates and save 30% or more compared to hyperscalers; pay only for active usage.

  • Reduced operational overhead: Integrating PERIAN with Dagster lets AI teams manage pipelines and compute resources in one place, reducing time spent on cloud infrastructure operations and management.



Any reason not to try it out?


The PERIAN Sky Platform and Dagster integration is a great solution for AI teams that want high-performance, on-demand GPU resources without breaking the bank. By tackling challenges like on-demand GPU unavailability, rising cloud bills, and cloud operations overhead, this integration lets AI teams focus on what they’re best at: building AI.


Want to see it in action? Signing up for the PERIAN Sky Platform is free, with no commitment. Head over to the documentation to get started! We're also happy to help you set up the integration and run a PoC together! Get in touch via [email protected].