At Bose, we strive to Wow the customer and we are driven by curiosity and perseverance. We like to concentrate on the job at hand.
We value passionate, down-to-earth, can-do people who enjoy fine-tuning small details without losing sight of the big picture.
We are looking for the type of person who loves to challenge the status quo, who isn’t afraid to give honest feedback, and who feels uncomfortable when a day goes by without achieving something of impact.
We get excited when a challenge demands a creative solution. Above all else, we look for people who take pride in their work and are inspired and motivated by their role in improving the way millions of people listen to music worldwide.
As a software engineer focusing on Big Data, you will work with our IT team and Software Engineering team to develop data platforms that turn big data into big insights with tremendous value for our Bose China operations.
This data will enable business partners to make decisions more efficiently and with greater speed. As part of an agile delivery team, you will design, develop, deploy and support the data ingestion pipeline and the data access solutions for our Big Data Analytics platform.
This role requires knowledge of, and hands-on experience with, big data technologies used throughout the entire application stack, including Spark, Cloudera Data Hub, and the Python / Scala / R languages.
Design and develop ETL pipelines for connected devices, web applications, and mobile applications that support the customer experience.
Collaborate with front-end and mobile app development teams on user-facing features and services.
Work with platform architects on software and system optimizations, helping to identify and remove potential performance bottlenecks.
Focus on innovating better ways to create solutions that add value and amaze the end user, with a penchant for simple, elegant design in every aspect, from data structures and code to UI and systems architecture.
Stay up to date on relevant technologies, participate in user groups, and track trends and opportunities to ensure we are using the best techniques and tools.
Work with other software leads on developing continuous integration (CI) pipeline and unit test automation.
Document the work you do, especially APIs that you create.
Qualifications (demonstrated competence):
Delivered solutions through the full life cycle using Hadoop, AWS S3, and AWS EMR.
Delivered at least one Big Data solution using cloud services and open-source tools.
Expert knowledge of programming languages such as Python, Java, or Scala.
Ingested data using Big Data ETL tools (Apache Spark).
Supported data science and ML tools such as AWS SageMaker, Cloudera CDSW, and Google AI Platform.
Implemented data security and privacy in a cloud environment.
Delivered solutions using Agile methodology.
Highly desirable, but not required, skills:
Experience with cloud computing services (Amazon Web Services preferred, e.g. EC2, DynamoDB, S3, RDS).
Years of Experience and Education:
8+ years working in software development.
4+ years developing, deploying, and maintaining high-volume production big data solutions.
Bachelor's degree in Computer Science, or equivalent. Master's degree welcome, but not required.
Proficiency in written Simplified Chinese and spoken Mandarin is required.
Proficiency in written English (specifically technical documentation) is required. Spoken English is optional.