Principal Data Engineer
Why Catalina? Catalina delivers omni-channel solutions to our customers, backed by a long-standing history of rich data assets, but our greatest asset is our people. Our guiding principles set the stage for winning in the markets we serve, and our potential is powerful. When you join the Catalina team, you will be part of an inclusive environment that embraces flexibility, community involvement, and work-life balance, as well as opportunities to grow professionally.
The Enterprise Data Management team defines and builds the core capabilities that Catalina Products and Data Solutions leverage in the market. We build capabilities once and reuse them across our solution suites. The team provides the data used to extract insights and drive our market models, delivering value to our clients.
Catalina is seeking a Principal Data Engineer to help us expand our cloud-based data platform and master data management solutions by developing cutting-edge, data-driven solutions for retailer and CPG customers. We will be migrating the world’s richest shopper-intelligence databases of consumer-packaged-goods purchase behavior in retail from on-premises to the cloud (Azure). The overall data environment consists of petabytes of data, trillions of rows of consumer transactions, and more than five years of shopper history. You will help us continue developing our master data management solutions to integrate multiple consumer data sources, enabling an understanding of the path to purchase across the channels where consumers are most active.
This is a great opportunity for a principal data engineer with a passion for launching innovative ad-tech solutions and deploying data-driven solutions. We are a team passionate about data technology.
This position will report to the Senior Director of Data Engineering within the organization of the Vice President, Product and Data Platform Management, and subsequently the CTO.
Responsibilities
Develop data solutions in the public cloud (Azure) using technologies such as Databricks, Apache Spark/Scala, Azure Data Factory, Azure DevOps, Confluent Kafka, and Snowflake, delivering stable, high-quality code within deadlines and following established processes
Develop additional data domains (Promotions, Consumers, Events) in the Data Lakehouse
Work with the Data Governance team to ensure new data pipelines and lineage are available in our data catalog
Build or enhance features within our internal frameworks such as data retention/destruction and security/audit
Develop/Unit test and deploy data solutions (mid/high complexity)
Migrate end-of-life on-premises solutions (Informatica, Unix/Linux scripting, Python, etc.) to cloud-based solutions
Support the production environment and previously deployed solutions through an on-call rotation
Write SQL code in database systems as required (Snowflake, Yellowbrick, Netezza, MySQL, Oracle, DB2), ensuring the code is optimized to the specified environment
Gain complete knowledge of the technology stack used by Catalina and make recommendations as required to improve our ability to solve data needs efficiently
Troubleshoot data issues within our petabyte+ environment, creating a permanent resolution to the root cause
Clearly communicate with leadership, product owners, and team on challenges and proposed solutions with concise documentation and presentations at the frequency requested
Build and maintain an in-depth knowledge of our data ecosystem and industry trends to understand where we can improve our system architecture and design
Develop/oversee standards such as Data Recipes, Checklists, Deployment Processes, etc. where needed to ensure efficiency
Stay current on new data solution tools and mentor others so that we can maintain a culture of continuous improvement
Participate in code review with fellow engineers to ensure the quality of code
Actively participate in project estimation/planning to provide input that improves outcomes
Participate in talent acquisition tasks such as resume review/interviews when requested
Qualifications
Bachelor’s Degree in Computer Science, Information Technology or related field or equivalent; minimum of 8 years of relevant data engineering experience
Experience with cloud data technologies: Azure, Databricks, Spark/Scala, Azure Data Factory, HDFS/Azure Data Lake, Hive, Azure DevOps, or other relevant cloud technologies, e.g., AWS or GCP
Experience with database systems (such as Snowflake, Yellowbrick, and Netezza)
Experience with Linux/Unix Systems and scripting
Experience working in terabyte+ data environments; hundreds of terabytes to petabytes of data preferred
Able to develop/unit test and deploy data solutions independently (mid to high complexity)
Positive, resilient attitude toward challenges
Working knowledge of Agile software engineering processes
Communication skills to present solutions clearly to the team and users
Able to mentor co-workers to expand overall team skills
Community developer presence preferred (GitHub, open-source projects, etc.)
Marketing Technology, Advertising Technology, Loyalty program experience helpful but not required
Catalina is a recognized leader in highly targeted, personalized digital media that drives, tracks and measures sales lift for leading CPG retailers and brands. Powered by the most extensive shopper database in the world, Catalina's mobile, online and in-store networks personalize the consumer's path to purchase, delivering $7.9 billion in relevant consumer value each year. Catalina has no higher priority than ensuring the privacy and security of the data entrusted to us and maintaining the consumer trust paramount to the continued success of our business partners and Catalina. Based in St. Petersburg, FL, Catalina has operations in the United States, Europe and Japan. To learn more, please visit www.catalina.com or follow us on Twitter @Catalina.
Diversity, Inclusion + Belongingness
Catalina is committed to investing in, empowering, and retaining a more inclusive community within our company. We are dedicated to hiring and cultivating diverse teams of the best and brightest from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has a seat at the table and a voice to be heard. Our goal is to ensure that all our talented professionals are equipped with support, resources, and the opportunity to excel.
The intent of this job description is to describe the major duties and responsibilities performed by incumbents of this job. Incumbents may be required to perform job-related tasks other than those specifically included in this description.
All duties and responsibilities are essential job functions and requirements and are subject to possible modification to reasonably accommodate individuals with disabilities.
This position may be performed as a remote, work from home position.
This role is to be filled outside of the state of Colorado.
We are proud to be an EEO employer M/F/D/V. We maintain a drug-free workplace.