HIGH PRIORITY - DATA ENGINEER

Kansas City Metropolitan Area, USA

Job Type

REMOTE - Local Kansas City candidates are preferred
Green Card holders or US citizens only, please.

ABOUT THE ROLE

PLEASE NOTE: This role is high priority with immediate start for the successful candidate!

Our client is a startup located in the Kansas City Metro area providing proprietary data science and machine learning solutions, working with many of the premier corporations in the Midwest and around the country.

This is an opportunity to join an exciting data science company in the Midwest and work on some of the most cutting-edge data science projects in the country. If you are looking to join a young company with GREAT culture, a fantastic market position, and an entrepreneurial mindset, this is it!

Our client is VERY focused on clear communication and technical ability, and successful candidates MUST be fluent in Python.

Great growth prospects and the opportunity to shape your career into that of a well-rounded data engineer. You will leverage your data engineering and architecture skills to build and maintain robust data pipelines and data platforms that drive valuable insights for our clients. You will have the opportunity to work on several different types of projects spanning multiple industry verticals (e.g., healthcare, finance, marketing, logistics/operations, etc.).

Responsibilities:

*Develop and maintain data pipelines and ETL processes, interacting with disparate data sources and multiple database servers.
*Bring analytical models and scripting to a production setting and push regular data/model updates to in-production projects.
*Set up automation related to data/model monitoring and performance evaluation.
*Develop and/or maintain client-facing applications including interactive dashboards and APIs.

Qualifications:

*Bachelor’s degree or equivalent experience in a quantitative field (Computer Science, Engineering, Statistics, Mathematics, etc.). A Master’s and/or PhD is preferred.

*2-3 years of experience working with relational databases, developing automated pipelines, deploying models/projects to production, and developing dashboards and APIs.

*SQL and database experience. Comfortable writing complex SQL queries and interacting with databases such as PostgreSQL, MySQL, and MongoDB.

*Strong fluency in Python is a requirement.

*Comfort working from the terminal in a Linux environment is a requirement.

*Comfortable with project/model deployments to production settings and with version control using GitLab/GitHub.

*Experience using the Python Flask framework to build and maintain dashboards and APIs is a requirement.

*Automation experience (bash scripting and job scheduling) is a requirement.

“Nice to haves” include:

*JavaScript experience and prior experience with machine learning.

*Comfortable working both independently and in a small team.

Compensation:
Base, bonus, stocks, health insurance, vacation, work from home, and other benefits.