Customer Development Interview: AI Cloud Compute Users

Posted 2026-05-06
Remote (USA) | Full-time | Immediate start

We are looking to speak with experienced AI practitioners who have hands-on experience using GPU cloud infrastructure for model training or inference.


This is a short research conversation about what has worked well and what has been painful in your past experience. The goal is to learn from practitioners and use those insights to shape a future product. It is not an evaluation of you; it is purely a learning conversation.

Who is a good fit?


You are:



  • An AI Engineer, ML Engineer, Applied AI Researcher, or Technical Founder

  • Currently working at:

    • An AI startup (Seed to Series B preferred), OR

    • An AI-heavy product company (gaming, video, agents, multimodal, LLM apps)



  • Directly involved in infrastructure decisions for:

    • Model training (fine-tuning, SFT, LoRA, QLoRA, etc.)

    • Inference workloads (batch or real-time)

    • Long-running AI agents or multimodal pipelines




Infrastructure Experience Required


You have used at least one of the following beyond AWS/GCP/Azure:



  • RunPod

  • CoreWeave

  • Lambda Labs

  • Paperspace

  • Vast.ai

  • Modal

  • Together.ai

  • Any other GPU cloud provider


Bonus if you've:



  • Switched providers due to pricing or reliability

  • Experienced scaling issues across multiple GPUs

  • Compared bare metal vs managed GPU solutions

  • Faced GPU availability shortages


We are especially interested if:


  • You manage AI compute budgets

  • You care about price/performance optimization

  • You've struggled with unpredictable costs

  • You've deployed production inference workloads

  • You've optimized GPU utilization


Not a Fit If:



  • You have only used AWS SageMaker once for a tutorial

  • You have no direct infrastructure decision-making involvement

  • You are not hands-on with model deployment

Research Interview Details



  • 30-minute structured interview

  • Remote (Google Meet)

  • Discussion topics:

    • GPU provider selection criteria

    • Pricing models and cost predictability

    • Performance bottlenecks

    • Workload types (training vs inference vs agents)

    • Switching costs and lock-in



To Apply


Please include:



  1. What AI infrastructure providers have you personally used?

  2. What type of workloads did you run?

  3. Approximate monthly compute spend?

  4. Your role in infrastructure decision-making?
