• Analytics DevOps Engineer

    Job Locations US-VA-Tysons
    Posted Date: 10/16/2018
  • Overview

    The Analytics DevOps Engineer will work within LMI’s Advanced Analytics service line to design, develop, and maintain infrastructure that supports analytics operating environments. This position will work closely with advanced analytics professionals, as well as IT infrastructure, network, and security engineers and software developers, to deliver as-a-service capabilities to data science end users. Ideal candidates will demonstrate a strong foundation in the desired competencies, as well as the motivation to develop further technical and professional skills through hands-on experience, training, and mentorship.


    The Analytics DevOps Engineer will:

    • Work closely with data scientists, data engineers, data product engineers, infrastructure engineers, network engineers, security engineers, and software developers to integrate and stand up analytics operating environments.
    • Architect and set up high-availability infrastructure services for production data science and analytics environments.
    • Automate the build-test-deploy lifecycle for a wide variety of data science and analytics applications and services.
    • Use AWS and Microsoft Azure cloud services to enable high-performance computing (e.g., AWS EMR) and provision standard virtual machine instances (e.g., EC2).
    • Automate software configuration and provisioning using configuration management and infrastructure-as-code tools, such as Ansible, Terraform, or Puppet.
    • Explore, evaluate, and implement a wide variety of open source and commercial data analysis and visualization technologies, such as Tableau, Qlik, Superset, and Metabase.
    • Develop best practices on operational processes in a DevOps environment to efficiently deliver data science and analytics applications and services.
    • Design and evaluate IT and analytics architectures based on discussions and research with product owners and users, intuition, and an understanding of the desired outcomes of a requested solution.
    • Establish an awareness and understanding of technical constraints, resources, and opportunities, such as cost, speed to market, scalability, security, and operational constraints within the federal government.
    • Contribute to internal research and development efforts, identify and evaluate innovation, and find ways to apply innovative technology to develop practical solutions for internal and federal transformation initiatives.
    • Implement real-time alerting capabilities within production data science and analytics environments.


  • Qualifications

    • Degree (Master’s preferred) in science, technology, engineering, mathematics, computer science, economics, or other related business or technical discipline is required.
    • Minimum 3 years’ experience in infrastructure engineering.
    • Experience developing infrastructure architectures for delivering scalable analytics capabilities.
    • Experience deploying both on-premises and cloud-based infrastructure including AWS and Microsoft Azure.
    • Experience setting up and maintaining scalable, distributed high-performance computing environments, such as Apache Hadoop, Apache Spark, and Apache Storm.
    • Preferred experience with security requirements in a federal IT environment, including FedRAMP-certified providers and FISMA requirements for acquiring an Authority to Operate (ATO).
    • Strong background in Linux/Unix administration.
    • Experience with a variety of traditional relational, analytics, and NoSQL database management systems, such as PostgreSQL and Vertica.
    • A solid understanding of networking, core Internet protocols (e.g., TCP/IP, DNS, SMTP, and HTTP), and distributed networks.
    • Computer programming experience across multiple programming paradigms and languages, including JavaScript, Python, Ruby, Java, C#, and Scala.
    • Extensive knowledge of application programming interfaces (APIs) and the ability to design RESTful services and integrate with other data providers using both JSON and XML.
    • Experience using configuration management tools, such as Ansible, Terraform, and Puppet.
    • Understanding of data science, machine learning, and algorithm development.

