Active TS/SCI clearance with a full-scope polygraph. PLEASE DO NOT APPLY UNLESS YOU HAVE AN ACTIVE FULL-SCOPE POLYGRAPH!
Bachelor’s Degree and 5+ years of experience (10 preferred) in data engineering and/or software development.
This Mid-Level Software Engineer will work as part of a team adapting new and established analytic techniques to large-scale data problems. The need is for developers who specialize in extraction, transformation, and loading (ETL) processes. (Our client) has two projects available – one in Reston, VA, and one in Annapolis Junction, MD.
In this role, the candidate will work at a customer site to support the agile development of tools and will leverage standard tools (particularly Apache NiFi) for extracting, transforming, and loading data as part of a larger system architecture. The successful candidate will create custom code to quickly extract, triage, and exploit data across domains in support of analytic work while supporting the strategic development of replicable processes. The candidate will conduct product usability tests and must work efficiently with cross-functional team members, including analysts, data scientists, project managers, and software solutions integrators.
Responsibilities and Duties:
Work with various team members (e.g., systems engineers, AI/ML experts, HPC experts)
Create prototypes and proofs of concept for iterative development
Learn new technologies and apply that knowledge in production systems
Monitor and troubleshoot performance issues on the enterprise data pipelines
Partner with various teams to define and execute data acquisition, transformation, and processing, and to make data actionable for operational and analytics initiatives
Develop software to accomplish extraction, translation, and loading of data from existing data sources (SQL and NoSQL databases) to custom formats required for HPC processing.
Identify issues with the integration of technology
Required Skills:
Experience with Apache NiFi, Kafka, and streaming data sources
UNIX
Proficiency in Java and Python
Expertise in designing/developing platform components like caching, messaging, event processing, automation, transformation, data quality, and tooling frameworks
Expertise transforming data in various formats, including JSON, XML, CSV, and zipped files
Preferred Experience/Skills:
Data engineering experience in Intelligence Community or other government agencies
Experience with Microservices architecture components, including Docker and Kubernetes.
Experience developing microservices to fit data cleansing, transformation and enrichment needs
Experience with AWS cloud services: EC2, S3, EMR, RDS, Redshift, Athena and/or Glue
Experience with Jira and Confluence, and extensive experience with Agile methodologies
Knowledge of security best practices
Knowledge of the Scrum Agile development process and its ceremonies, including daily scrums, planning events, backlog grooming, retrospectives, and demos
Experience developing flexible data ingest and enrichment pipelines that easily accommodate new and existing data sources
Experience with database and data lifecycle management
Experience with software configuration management tools such as Git/Gitlab, Salt, Confluence, etc.
Experience with continuous integration and deployment (CI/CD) pipelines and their enabling tools such as Jenkins, Nexus, etc.
Detail-oriented and self-motivated, with the ability to learn and deploy new technology quickly
Experience with HPC architectures and/or GPUs