Lead Software Engineer - Spark/Scala
Experience: 10+ Years
Location: Bangalore
Role: Full-time / Technical Lead
Role Summary
We are seeking a highly capable Technical Lead with deep expertise in Big Data (Spark, Hadoop, Hive), Kubernetes platforms, Azure cloud, PostgreSQL, CI/CD (GitHub Actions/Jenkins), and DevOps automation. The ideal candidate will be a strong hands-on engineer who also brings mature leadership capabilities, sound architectural judgment, and exceptional software craftsmanship practices.
Key Responsibilities
1. Software Engineering & Craftsmanship
- Develop high‑quality, maintainable code using Software Craftsmanship principles such as:
  - Test-Driven Development (TDD)
  - Continuous Integration / Continuous Delivery / Continuous Deployment
  - Clean code, refactoring, and SOLID principles
- Participate actively in design reviews and code reviews to ensure engineering excellence.
- Implement quick POCs using the latest tech stack to validate ideas and technical feasibility.
- Apply design patterns and engineering best practices for scalable and maintainable solutions.
- Collaborate in all phases of the SDLC, with strong understanding of Agile/Scrum or continuous delivery environments.
2. Big Data Engineering & Distributed Systems
- Lead development of large-scale ETL/ELT pipelines using Apache Spark (Java/Scala).
- Oversee data processing frameworks in Hadoop and Hive, especially within Azure-based ecosystems.
- Optimize Spark/Hive workloads for cost, performance, and reliability.
3. Cloud, Kubernetes & Infrastructure Engineering
- Lead architecture and operations of distributed applications running on:
  - Kubernetes (on-prem and AKS)
  - Docker
- Drive adoption of Helm, container standards, and cluster best practices.
- Build and maintain Azure infrastructure using Terraform (IaC) with reusable, scalable modules.
- Ensure end-to-end observability of clusters using Prometheus, ELK, APM tools, and custom Python scripts.
4. DevOps Engineering & CI/CD Leadership
- Define and own the platform CI/CD strategy using:
  - GitHub Actions
  - Jenkins
  - Terraform automation
- Build standardized CI/CD frameworks for 50+ microservices, leveraging Helm and automated build workflows.
- Automate DNS creation, certificate management, APM integration, user management, Vault policy creation, and repository migrations (200+ repos).
- Champion an automation-first mindset across the engineering and SRE teams.
- Coordinate with developers and testers for smooth deployment and production readiness.
5. Database Engineering
- Provide leadership on PostgreSQL cluster management:
  - HA setup
  - Performance tuning
  - Failover mechanisms
  - Monitoring dashboards
- Automate PostgreSQL operations using Python and infrastructure scripts.
6. Observability, Stability & Support
- Ensure reliability and availability of services across stacks (Java, Angular, .NET).
- Implement monitoring and alerting using Prometheus, Grafana, ELK, Elastic APM.
- Oversee production stability through proactive dashboards and automated health checks.
- Provide guidance for L2/L3 support and participate in incident review and resolution.
7. Leadership & Collaboration
- Lead a cross-functional team of developers, DevOps engineers, and data engineers.
- Translate business requirements into technical designs and actionable backlogs.
- Mentor team members on coding practices, DevOps processes, and cloud-native patterns.
- Collaborate with distributed teams and communicate effectively with technical and non-technical stakeholders.
Required Skills & Qualifications
- 10+ years of experience in software engineering / data engineering / DevOps roles.
- Strong expertise in:
  - Apache Spark (Java/Scala)
  - Hadoop and Hive on Azure
  - Kubernetes, Docker
  - Azure Kubernetes Service (AKS)
  - Ansible
  - Jenkins and GitHub Actions
  - PostgreSQL (HA + automation)
- Strong understanding of CI/CD, SDLC, Agile, and continuous delivery.
- Good understanding of networking fundamentals, DNS, load balancers, and ingress controllers.
- Experience working on production support and high-availability systems.
Soft Skills
- Strong leadership and mentoring capabilities.
- Excellent communication skills with distributed teams.
- Ability to analyze complex problems and deliver quick, practical solutions.
- High sense of ownership and accountability.
Nice-to-Have
- Experience with Azure Data Lake, Databricks, or Synapse.
- Exposure to microservices architecture and domain-driven design.
Profile required
- 6+ years of relevant experience with Spark required
- Programming experience using Spark (Java/Scala)
- Working knowledge of CI/CD pipelines
- Experience with Hadoop and Hive on Azure is mandatory
- Participation in API development
- Strong expertise in Kubernetes, Docker, and container orchestration.
- Hands-on experience with Azure Kubernetes Service (AKS) and Infrastructure-as-Code using Terraform.
- Proficiency with GitHub Actions, Ansible, Jenkins, and CI/CD tooling.
- Experience with production support activities.
- Ability to work closely in a team environment.
Why join us
“We are committed to creating a diverse environment and are proud to be an equal opportunity employer. All qualified applicants receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, or veteran status.”
Business insight
At Société Générale, we are convinced that people are drivers of change, and that the world of tomorrow will be shaped by all their initiatives, from the smallest to the most ambitious. Whether you’re joining us for a period of months, years or your entire career, together we can have a positive impact on the future. Creating, daring, innovating, and taking action are part of our DNA. If you too want to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis and develop or strengthen your expertise, you will feel right at home with us!
Still hesitating?
You should know that our employees can dedicate several days per year to solidarity actions during their working hours, including mentoring people who are struggling with career orientation or professional integration, participating in the financial education of young apprentices, and sharing their skills with charities. There are many ways to get involved.
We are committed to supporting the acceleration of our Group’s ESG strategy by implementing ESG principles in all our activities and policies. These principles are reflected in our business activities (ESG assessment, reporting, project management, and IT activities), in our work environment, and in our responsible practices for environmental protection.