• Big Data Engineer

    Location US-CO-Englewood
    Job ID
    Information Technology
  • Summary

    DISH is a Fortune 250 company with more than $14 billion in annual revenue that continues to redefine the communications industry. Our legacy is innovation and a willingness to challenge the status quo, including reinventing ourselves. We disrupted the pay-TV industry in the mid-90s with the launch of the DISH satellite TV service, taking on some of the largest U.S. corporations in the process, and grew to be the fourth-largest pay-TV provider. We are doing it again with the first live, internet-delivered TV service – Sling TV – that bucks traditional pay-TV norms and gives consumers a truly new way to access and watch television.


    Now we have our sights set on upending the wireless industry and unseating the entrenched incumbent carriers.


    We are driven by curiosity, pride, adventure, and a desire to win – it’s in our DNA. We’re looking for people with boundless energy, intelligence, and an overwhelming need to achieve to join our team as we embark on the next chapter of our story.


    Opportunity is here. We are DISH.



    To learn more about our IT organization, please visit

    Job Duties and Responsibilities

    Primary responsibilities fall into the following categories:


    • Evangelize data engineering and data science cross-functionally
    • Optimize our data and data pipeline architecture
    • Support our software engineers and data scientists
    • Contribute to our cloud strategy based on prior experience
    • Understand the latest technologies in a rapidly innovative marketplace
    • Work with all stakeholders across the organization to deliver enhanced functionality
    • Design infrastructure flexible enough to support all existing database technologies (SQL, Oracle, Teradata, Netezza)
    • Design our machine learning platform

    Skills - Experience and Requirements

    A successful Data Engineer will have the following:

    • Advanced SQL experience across a broad range of publicly available database technologies
    • The ability to build CI/CD pipelines with large data sets and cloud technologies
    • Prior experience extracting data from disconnected data repositories
    • Functional expertise in data streaming, message queues, and scalable near-real-time data stores
    • Prior experience with data engineering tools, specifically Hadoop, Spark, Kafka, AWS (EC2, EMR, RDS, Redshift)



