
Data Engineer

INEOS Automotive – Grenadier – Built On Purpose

It’s a big task to launch a best-in-class 4x4 for those who depend on a vehicle as a working tool, and it’s our responsibility to do the best job possible. We’re building the Grenadier from the ground up, component by component. No corners cut and no easy options.

To make this vision a reality, we’ve assembled a team of world-class experts who are willing to roll up their sleeves and get stuck in. We need more doers who think big. More thinkers who dive in and do. More people who make things happen. We’re a diverse workforce of tenacious, straight-talking experts with engineering at our core. We’re growing our world-class team and looking for spirited innovators and disruptors - those who thrive on a gritty challenge and will work through adversity in the pursuit of success. We’re doing things differently.

If this sounds like you, let’s talk.


  • We are looking for a Data Engineer to join the INEOS Automotive Information Technology department. The department sits at the heart of the business, developing and supporting robust, easy-to-use, scalable tools and providing data, analysis, and insight to help shape the company’s strategy.
  • Ideally you come from a solid technical background in data engineering, and are now looking for the challenge of implementing, developing, managing, and supporting the data engineering application estate. Your ambition and passion for technical delivery will be a critical factor for success in this role.
  • Our data engineering will be powered by a blend of the Qlik Data Integration (QDI) suite, namely Attunity Replicate and Compose, and Azure Data Factory, so you will have a degree of expertise in these products but may also be familiar with the wider landscape of ETL products such as Informatica, Talend, Matillion, SAP Data Integrator, SQL Server Integration Services (SSIS), etc.
  • You will be familiar with Operational and Management Analytics environments and will have expertise in developing optimal data pipelines handling large volumes of data of varying variety and veracity, and ideally have exposure to industry-standard methodologies such as Boyce-Codd, Inmon, Kimball and Data Vault.



  • Responsible for designing, building, and deploying data pipelines using the functionality available in Qlik Replicate, Compose and Azure Data Factory
  • Responsible for designing and building database schemas and associated data pipelines (extract, transform and load) routines / processes to facilitate the end-user data and analytics requirements
  • Responsible for understanding database structures and articulating these using industry-standard methodologies such as Boyce-Codd (Relational), Inmon (Data Warehouse), Kimball (Dimensional Data Marts) and Dan Linstedt (Data Vault)
  • Responsible for scoping, designing, building, servicing, and supporting all new and existing data pipelines within the organisation with the goal of providing robust analytical solutions that support the strategic aims
  • Responsible for gathering, inspecting, cleaning, transforming, and modelling data with the goal of discovering useful information, suggesting conclusions, and supporting decision-making
  • Responsible for capturing data requirements to design, extend or maintain, logical and physical data models which support new and existing business initiatives
  • Responsible for creation and maintenance of the data artifacts (data catalogue and data dictionary) and system level documentation
  • Responsible for understanding functional and non-functional requirements to create appropriate and effective technical solutions
  • Responsible for designing and building workflows to facilitate process automation
  • Responsible for providing end user support, service, and delivery management
  • Responsible for the data acquisition estate and conducting approved technology upgrades, providing technical advice, product information and 3rd line support
  • Responsible for coding SQL and stored procedures, including testing and troubleshooting queries, execution plans, optimising models, and tuning code
  • Responsible for managing software licences, application security (roles and permissions) and application workspaces within the Qlik Replicate, Compose and Azure Data Factory platforms


  • Min 5 years of experience with the development of data warehouse solutions
  • Min 5 years of experience of working on complex business intelligence programmes across divisions
  • Min 5 years of experience of designing, developing, and implementing data pipelines, data lineage, taxonomies, catalogue, and metadata management processes
  • Min 5 years of experience of supporting and maintaining data pipelines, data lineage, taxonomies, catalogue, and metadata management processes
  • Min 5 years of experience with various forms of data storage, data warehouses and data lakes
  • Min 5 years of experience working with a wide variety of database platforms such as Oracle, Postgres, SQL Server, Snowflake, Redshift, etc.
  • Min 5 years of experience in time and project management using Agile or Waterfall principles
  • Must be highly proficient with the Microsoft Office suite, especially Excel and PowerPoint, and will be expected to compile polished content including presentations