Data Scientist

Association for Institutional Research (AIR)
Bucknell
3 months ago
Applications closed

Job Overview

Data Scientist role at Association for Institutional Research (AIR)


Job Duties

  • Technical Competence in AI Tools & Frameworks: Deploy generative AI platforms (ChatGPT, Claude, Midjourney, Hugging Face) to enhance data analysis, reporting, and decision-making. Use machine learning frameworks (TensorFlow, PyTorch, scikit-learn) to support applied analytics projects. Integrate AI models via APIs and cloud platforms (AWS, Azure, GCP) to scale institutional solutions. Apply AI techniques such as NLP and generative modeling to analyze unstructured data (e.g., survey comments). Ensure responsible AI use by documenting assumptions, limitations, and risks, and communicate results to non-technical stakeholders. Lead or support workshops promoting best practices in AI and analytics.
  • Data Management: Use WhereScape and Microsoft SQL Server for data warehouse automation and efficient data management. Generate standardized reports with Cognos. Clean, preprocess, and organize datasets for reporting and modeling. Maintain data integrity through standards, validation, and documentation. Contribute to data initiatives involving metadata, data quality, model design, extraction, dashboard development, analytics tool evaluation, system configuration, and end-user support.
  • Data Science & Advanced Analytics: Conduct advanced statistical, predictive, and prescriptive analysis using R, Python, and SQL. Develop and validate models (regression, random forests, neural networks) and scale prototypes into production. Apply forecasting and modeling to support academic and strategic planning. Write statistical designs, conduct analyses, and generate predictive insights. Analyze text data using NLP, topic modeling, and sentiment analysis. Run experiments (A/B testing) to evaluate institutional initiatives.
  • Cross-Functional Collaboration: Partner with offices such as the Registrar’s to promote data-informed decision-making. Collaborate with IT, architects, and analysts to design and manage analytics platforms aligned with strategic priorities. Serve as the data science expert on multi-department projects, guiding end-to-end analytics processes.
  • Basic Analytics & Reporting: Support operational and strategic initiatives through data analysis projects. Apply statistical methods, querying, scripting, and data modeling to generate reports, dashboards, and visualizations. Conduct exploratory data analysis to identify trends and anomalies. Develop automated workflows to ensure consistent data delivery. Create engaging dashboards that communicate insights to diverse audiences, including campus leadership.
  • Strategy Development: Contribute to developing and executing data and analytics strategies with campus leadership. Support initiatives involving the enterprise data lake, data warehouse, BI tools, machine learning, and AI. Provide consulting and analytic guidance to align campus projects with institutional goals.
  • Non-Essential Functions: Responsibilities listed are not exhaustive; additional tasks may be assigned as needed.
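To make the experimentation duty above concrete, here is a minimal sketch of the A/B-test evaluation the posting describes: a two-proportion z-test comparing a control and a treatment group. This is an illustrative example only; the scenario, group names, and counts are invented, and a production analysis would typically use a statistics library rather than a hand-rolled test.

```python
from math import sqrt, erf

def two_proportion_ztest(success_a: int, n_a: int, success_b: int, n_b: int):
    """Two-sided z-test comparing the success rates of two groups."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)          # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # Two-sided p-value from the standard normal CDF (via erf).
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: did a new advising workflow raise first-year retention?
z, p = two_proportion_ztest(150, 1000, 120, 1000)
print(f"z = {z:.2f}, p = {p:.3f}")
```

A small p-value (conventionally below 0.05) would suggest the observed difference in retention rates is unlikely to be due to chance alone.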

Job Qualifications

  • Bachelor’s degree and four (4) years of professional experience in data science or a related data-focused field OR Master’s degree in Analytics, Data Science, Statistics, Computer Science, Information Systems, or a related field and two (2) years of professional experience in data science or a related data-focused field.
  • Two (2) or more years of experience designing, implementing, and administering data lakes and warehouses; ETL or data warehouse automation solutions; enterprise reporting platforms (Cognos, Business Objects, MicroStrategy, or similar); data visualization platforms (Tableau Server/Cloud, Power BI, Qlik, or similar); and relational and NoSQL database platforms such as Microsoft SQL Server, MySQL, Oracle, MongoDB, and DocumentDB.
  • Proficiency in SQL and strong programming skills with experience in one or more modern programming languages (Python, R, Java or similar). Experience with data wrangling, cleaning, and preprocessing. Expertise with data visualization tools and best practices. Knowledge of various statistical models (generalized linear models, hierarchical models, nonparametric models, regression trees, random forests) and a solid understanding of descriptive and inferential statistics, probability, and experimental design. Ability to explain model assumptions, limitations, and results to non-technical audiences.
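As a sketch of the SQL proficiency and reporting work the qualifications describe, the following uses Python's built-in sqlite3 module to run an aggregate query of the kind that feeds a standardized report. The table, column names, and data are invented for illustration; an institutional setting would query SQL Server or a warehouse instead of an in-memory database.

```python
import sqlite3

# In-memory database with an invented enrollment table for illustration.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE enrollment (cohort TEXT, student_id INTEGER, retained INTEGER)"
)
conn.executemany(
    "INSERT INTO enrollment VALUES (?, ?, ?)",
    [("2022", 1, 1), ("2022", 2, 0), ("2022", 3, 1),
     ("2023", 4, 1), ("2023", 5, 1), ("2023", 6, 1)],
)

# Aggregate query: retention rate by cohort, the kind of metric a
# standardized institutional report would surface.
rows = conn.execute(
    "SELECT cohort, ROUND(AVG(retained), 2) AS retention_rate "
    "FROM enrollment GROUP BY cohort ORDER BY cohort"
).fetchall()
print(rows)  # one (cohort, retention_rate) pair per cohort
```

Because `retained` is stored as 0/1, `AVG(retained)` directly yields the retention rate per cohort.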

Preferred But Not Required

  • Familiarity with big data tools (Spark, Hadoop) and cloud environments (AWS, GCP, Azure).
  • Demonstrated experience with at least one version control tool such as Git, CVS, SVN, or similar.

Physical Requirements

  • This role is based in a typical office setting without any unique physical or environmental requirements.

Benefits

  • Flexible scheduling options determined by role.
  • Medical, prescription drug, vision, dental, life, and long-term disability insurance options.
  • 10% employer contribution to your retirement plan (no contribution requirement for non-exempt positions).
  • Generous paid time off, including vacation and sick time, a community service day, and 19 paid holidays (including two full weeks off for Winter Break).
  • Full-time and part-time members of the faculty and staff are eligible for tuition remission for themselves. Additionally, full-time members of the faculty and staff are eligible for tuition remission for their spouse/spousal equivalent and are eligible for various tuition programs for their children. Credit for full-time benefits eligible employment at other institutions of higher education will be applied to waiting periods.
  • A comprehensive employee wellness program including program incentives.
  • A myriad of other benefits, including parental leave, an employee assistance program, fitness center membership, and the power of your Bucknell ID card.

Application Procedure

Applications for this position are closed.



