Databricks Engineer

Everscale Group

As a Databricks Engineer, you will be responsible for designing, implementing, and maintaining scalable data processing and analytics solutions using the Databricks Unified Analytics Platform. The ideal candidate possesses a deep understanding of big data technologies, proficient coding skills, and a strong background in data engineering and analytics.


Architecture and Design:

  • Design and implement scalable and efficient data processing solutions using Databricks.
  • Collaborate with architects and data scientists to develop optimal data structures and architectures.

Development:

  • Write efficient and maintainable code in languages like Python, Scala, or Java.
  • Develop and implement ETL processes for data integration and transformation.
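The ETL work described above can be sketched in miniature. The dataset, column names, and aggregation below are entirely hypothetical, and a real Databricks pipeline would typically operate on PySpark DataFrames rather than plain Python; this is only an illustrative extract-transform-load shape.

```python
import csv
import io

# Hypothetical raw input; in practice this would come from cloud storage
# (e.g. S3, ADLS) and be read into a Spark DataFrame.
RAW = """order_id,amount,region
1,120.50,EMEA
2,80.00,AMER
3,200.25,EMEA
"""

def extract(raw: str) -> list[dict]:
    """Extract: parse raw CSV text into row dictionaries."""
    return list(csv.DictReader(io.StringIO(raw)))

def transform(rows: list[dict]) -> dict[str, float]:
    """Transform: aggregate order amounts per region."""
    totals: dict[str, float] = {}
    for row in rows:
        totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["amount"])
    return totals

def load(totals: dict[str, float]) -> list[tuple[str, float]]:
    """Load: emit sorted records ready for a downstream table."""
    return sorted(totals.items())

print(load(transform(extract(RAW))))  # [('AMER', 80.0), ('EMEA', 320.75)]
```

The same three-stage structure carries over directly to Spark, where extract becomes `spark.read`, transform becomes DataFrame operations, and load becomes a table or Delta write.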

Data Management:

  • Manage and optimize large-scale data storage and processing environments.
  • Ensure data quality and integrity throughout the data lifecycle.

Collaboration:

  • Work closely with data scientists, analysts, and business stakeholders to understand data requirements.
  • Collaborate with cross-functional teams to integrate data solutions into business processes.

Monitoring and Optimization:

  • Monitor performance and troubleshoot issues in Databricks clusters.
  • Optimize data processing workflows for improved efficiency and cost-effectiveness.

Documentation:

  • Create and maintain comprehensive documentation for data processing workflows and solutions.
  • Keep documentation up-to-date with changes and improvements.

Stay Current:

  • Stay abreast of the latest developments in big data technologies and Databricks features.
  • Evaluate and recommend new tools and technologies to enhance data processing capabilities.

Educational Background:

  • Bachelor’s degree in Computer Science, Information Technology, or a related field.

Technical Proficiency:

  • Strong expertise in Databricks Unified Analytics Platform.
  • Proficiency in programming languages such as Python, Scala, or Java.
  • Experience with big data technologies like Apache Spark.
  • Familiarity with cloud platforms, such as AWS, Azure, or Google Cloud.
  • Knowledge of data warehousing and ETL processes.

Experience:

  • Proven experience in designing, implementing, and maintaining data processing pipelines.
  • Hands-on experience with data modeling and database design.
  • Previous work with large-scale data storage and processing systems.

Communication Skills:

  • Excellent communication skills to collaborate with cross-functional teams.
  • Ability to convey technical concepts to non-technical stakeholders.

To apply for this job, email your details to