
Ignate Careers

Unlock Potential

Join, Grow, and Transform Data Engineering Excellence
Open Positions

Join Our Innovation Team

Discover exciting career opportunities and be part of our mission to transform data integration.

Senior Spark Data Developer

Full-time
Remote

Responsibilities

Design, develop, and maintain robust data processing pipelines using Apache Spark.

Implement efficient algorithms and data structures to optimize performance and scalability.

Collaborate with data engineers and data scientists to integrate machine learning models into production pipelines.

Troubleshoot and debug issues in existing codebase and provide timely resolutions.

Conduct code reviews and provide constructive feedback to team members.

Stay up to date with the latest advancements in big data technologies and incorporate them into the development process.

Mentor junior developers and contribute to their professional growth.

Qualifications

Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Proven experience (5+ years) developing data processing applications with Scala and PySpark.

Strong proficiency in functional programming paradigms, particularly with the Cats library.

Proficiency in the Rust programming language is highly desirable.

Experience designing and optimizing distributed systems for large-scale data processing.

Basic knowledge of streaming technologies.

Familiarity with software design patterns.

Machine Learning (ML) Engineer

Full-time
Remote

Responsibilities

Design, develop, and implement machine learning algorithms and models that address complex business problems and enhance data-driven decision-making processes.

Perform comprehensive data analysis, cleansing, preprocessing, and feature engineering to ensure the quality and relevance of input data for optimal model performance.

Train, validate, and fine-tune machine learning models using appropriate techniques and methodologies, ensuring accurate predictions and optimal model generalization.

Identify and select relevant features to improve model efficiency, interpretability, and predictive accuracy, considering domain-specific insights.

Continuously refine and optimize machine learning models for improved performance, scalability, and efficiency, taking into account real-world constraints and considerations.

Qualifications

Bachelor's degree in Computer Science or a related field.

5+ years of experience in database support.

Solid understanding of machine learning algorithms, NLP techniques, and frameworks such as TensorFlow, PyTorch, spaCy, transformers, and scikit-learn.

Proficiency in programming languages such as Python, along with relevant libraries for data manipulation and analysis (e.g., NLTK, Pandas, NumPy).

Experience with pre-trained language models such as Llama, GPT, BERT, Bard, and Dolly that leverage transfer learning.

Knowledge of semantic analysis, sentiment analysis, topic modeling, and other NLP tasks.

MEARN Stack Software Engineer (MongoDB, Express, Angular, React, Node)

Full-time
Remote

Responsibilities

Design, develop, and maintain MEARN stack web applications.

Implement software solutions meeting business requirements and coding standards.

Collaborate with a team to add features, fix bugs, and enhance performance.

Troubleshoot and debug application issues, ensuring timely resolutions.

Perform unit testing for quality assurance.

Qualifications

Bachelor's degree in Computer Science or related field.

2+ years of experience in MEARN stack web application development.

Proficiency in JavaScript and TypeScript.

Strong front-end skills with HTML5, CSS3, and responsive design.

Familiarity with databases like MySQL and MongoDB.

Data Engineer

Full-time
Remote

Responsibilities

Design and implement scalable data pipelines for ETL processes.

Develop data warehouses for efficient data storage.

Analyze data to identify patterns and solutions.

Troubleshoot and resolve application issues.

Ensure compliance with data governance and privacy regulations.

Qualifications

Bachelor's degree in Computer Science or related field.

2+ years of experience with data engineering tools such as Hadoop and Spark, and languages such as Python, Java, and Scala.

Proficiency in ETL pipeline development.

Familiarity with data integration concepts.

Hands-on experience with databases like MySQL, PostgreSQL, and MongoDB.


Ready to Join Our Innovation Team?

Submit your application and take the first step toward an exciting career in data excellence.

Apply Today

Submit Your Application

Take the first step toward your career journey

Click to upload your resume. Maximum file size: 2MB.

Or send your resume directly to

[email protected]