Data Engineer

Snoutplans
Braintree
đŸ¶ Our mission

Every pet should get the care they need, regardless of cost. Veterinary prices are up 40% since 2020, and pet owners are stuck choosing between their bank account and their furry friend. At Snout, we aim to solve this by enabling clinics to offer pet wellness plans that actually work.


🚀 The opportunity

Snout is one of the fastest-growing wellness plan providers in the veterinary space, trusted by clinics across the U.S. We’re a small but mighty startup team of fewer than 30 employees. We’re on the lookout for an experienced Data Engineer, based remotely in the USA, to architect and build the data infrastructure that will power our business for years to come.


💻 The role

We're at that exciting inflection point where data is becoming critical to how we understand our business, serve our customers, and make strategic decisions. We’re looking for a Data Engineer to build our data foundation from the ground up — someone who thrives on autonomy, loves solving problems, and wants to own something meaningful. This isn't just a “pipelines” job: you'll be the go-to expert for all things data, working closely with our ~10-person engineering team and directly supporting our leadership team with the insights that drive our strategy.


You'll build the data foundation that powers Snout's growth by creating scalable, automated pipelines that transform raw data from Stripe, APIs, and internal systems into reliable analytics infrastructure. Your work will encompass the full spectrum: designing optimized SQL schemas for both transactional and analytical workloads, building comprehensive financial data models for subscription metrics and revenue recognition, implementing robust monitoring and data quality systems, and creating clean, well‑documented data marts that empower stakeholders across the company. From automated board reporting pipelines to ad‑hoc analyses on customer behavior and revenue trends, you'll deliver the data solutions that drive marketing automation, customer segmentation, operational dashboards, and strategic decision‑making — establishing the infrastructure that scales with the company's ambitious growth trajectory.
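To make the kind of financial data modeling described above concrete, here is a minimal, purely illustrative Python sketch of how MRR and a simple churn rate might be computed from subscription records. The Subscription fields, sample values, and normalization rules are hypothetical placeholders for illustration only; they are not Snout's actual data model or Stripe integration.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical, simplified subscription record; a real Stripe subscription object
# carries far more detail (items, discounts, proration, currency, etc.).
@dataclass
class Subscription:
    customer_id: str
    plan_amount_cents: int        # price per billing interval, in cents
    interval: str                 # "month" or "year"
    status: str                   # e.g. "active", "canceled"
    canceled_at: date | None = None

def monthly_recurring_revenue(subs: list[Subscription]) -> float:
    """Normalize every active subscription to a monthly amount and sum it (in dollars)."""
    mrr_cents = 0.0
    for sub in subs:
        if sub.status != "active":
            continue
        if sub.interval == "month":
            mrr_cents += sub.plan_amount_cents
        elif sub.interval == "year":
            mrr_cents += sub.plan_amount_cents / 12  # spread annual plans across the year
    return mrr_cents / 100

def customer_churn_rate(start_count: int, churned_count: int) -> float:
    """Customers lost in a period divided by customers at the start of the period."""
    return churned_count / start_count if start_count else 0.0

if __name__ == "__main__":
    subs = [
        Subscription("c_1", 4_900, "month", "active"),
        Subscription("c_2", 49_900, "year", "active"),
        Subscription("c_3", 4_900, "month", "canceled", canceled_at=date(2024, 5, 1)),
    ]
    print(f"MRR: ${monthly_recurring_revenue(subs):,.2f}")   # $90.58
    print(f"Churn: {customer_churn_rate(3, 1):.1%}")         # 33.3%
```

A production model would of course handle proration, discounts, and plan changes, and would live in the warehouse rather than in application code; the point is simply the shape of the metric.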


Apply now for the chance to shape the data infrastructure that powers Snout's mission!


Snout is committed to building a diverse and inclusive team. We know that great candidates may not check every box — and that’s okay. If you're excited about this role and our mission, we encourage you to apply as long as you meet at least 75% of the ‘What we’re looking for’ list, including the first bullet point. If you need any accommodations during the application or interview process, please let us know — we’re happy to support you.


🛠 What you’ll do

  • Design and build data pipelines that extract, transform, and load data from Stripe, APIs, databases, and application logs into scalable infrastructure on AWS
  • Develop financial data models supporting subscription analytics, revenue tracking, and key business metrics (MRR, ARR, churn, cohort analysis)
  • Collaborate with leadership to translate business questions into data solutions and perform analyses on customer behavior, payment trends, and performance metrics
  • Maintain and optimize data infrastructure on AWS for performance, cost‑efficiency, and reliability while ensuring pipeline stability and data quality
  • Support executive reporting by delivering clean, accurate data for board decks, strategic planning, and investor updates
  • Establish data foundations by creating documentation and implementing best practices that scale with the company's growth
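As a rough, hedged illustration of the cohort analysis mentioned in the bullets above (not a description of Snout's actual stack, schemas, or tooling), here is a short pandas sketch that turns hypothetical invoice events into a cohort retention matrix:

```python
import pandas as pd

# Hypothetical invoice events; in practice these would come from Stripe exports or a
# warehouse table populated by the pipelines described above.
events = pd.DataFrame({
    "customer_id": ["c1", "c1", "c2", "c2", "c3"],
    "invoice_month": pd.to_datetime(
        ["2024-01-01", "2024-02-01", "2024-01-01", "2024-03-01", "2024-02-01"]
    ),
})

# A customer's cohort is the month of their first invoice.
events["cohort_month"] = events.groupby("customer_id")["invoice_month"].transform("min")

# Count distinct customers per (cohort month, activity month).
counts = (
    events.groupby(["cohort_month", "invoice_month"])["customer_id"]
    .nunique()
    .unstack(fill_value=0)
)

# Divide each row by its cohort size to express retention as a fraction.
cohort_sizes = events.groupby("cohort_month")["customer_id"].nunique()
retention = counts.divide(cohort_sizes, axis=0)
print(retention)
```

In practice the inputs would be warehouse tables fed by the pipelines above, but the shape of the analysis is the same.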

💡 What we're looking for

  • 5+ years building production data systems, with expert proficiency in SQL, Python, and JavaScript and hands‑on experience building ETL pipelines and orchestrating them with Airflow
  • Payment platform expertise with Stripe API or similar systems like PayPal or Braintree, including hands‑on experience with transaction processing and API integrations
  • SaaS financial acumen including subscription metrics (MRR, ARR, churn, cohort retention) and financial data modeling (ledgers, reconciliation, revenue tracking)
  • Excellent communication skills, with the ability to translate business questions into technical solutions and present data insights clearly to non‑technical stakeholders and executives
  • Ownership mentality with pragmatic execution — you proactively identify and solve problems, balance perfect vs. shipped, and make sound architectural decisions independently
  • Startup‑readiness — you thrive in ambiguous, fast‑changing environments and are comfortable wearing multiple hats and working with minimal guidance

🌟 What will make you stand out

  • Data transformation and quality expertise with frameworks like dbt, SQLMesh, or Dataform, plus experience with data quality tooling
  • Experience with BI and visualization platforms
  • Real‑time data streaming experience using tools like Kafka, AWS Kinesis, or similar platforms
  • AI/ML pipeline knowledge including data preparation and infrastructure for machine learning models
  • Experience at subscription companies, familiarity with revenue recognition and accounting principles, and experience building executive dashboards or board materials
  • Early‑stage startup experience as a first/solo data hire or in fast‑paced environments, with knowledge of CI/CD practices for data pipelines

💖 Why you should join Snout

  • Fast-growing, venture-capital-backed technology company focused on improving access to veterinary care
  • Ability to have a meaningful impact from Day 1, where your voice and opinions matter
  • Unlimited potential to grow in your career, learn, and expand your skill set
  • Collaborative, flexible, and friendly culture

❌ Why this role might not be a fit for you

  • You are not based in the USA or legally authorized to work there
  • You don’t live near an airport or have the flexibility to travel quarterly
  • A fast-moving startup culture feels overwhelming to you, and you aren’t comfortable with ambiguity or figuring things out as you go
  • You don't have 5+ years of hands‑on experience building production data systems, or you're not proficient in Python and SQL databases
  • You haven't worked with payment platforms like Stripe, PayPal, or Braintree, or lack experience with transaction processing and API integrations
  • You’re unfamiliar with SaaS subscription metrics (MRR, ARR, churn, cohort retention) or financial data modeling (ledgers, reconciliation, revenue tracking)
  • You struggle to translate business questions into technical solutions or communicate data insights to non‑technical stakeholders and executives
  • You prefer clearly defined tasks over proactive problem‑solving, need extensive guidance for architectural decisions, or prioritize perfection over pragmatic shipping

💾 Compensation

  • $120,000–$160,000 base salary
  • Equity

⚕ Benefits

  • Day 1 medical/dental/vision benefits
  • Flexible time off + 11 Snout calendar holidays
  • Paid parental leave

