Responsibilities:
1. Apply statistical methods and machine learning to data analysis and to evaluating user needs and user lifetime value; develop models and leverage data across the entire user life cycle.
2. Build and deploy industry-leading machine learning models.
3. Participate in the R&D of data products and their applications: explore the commercial value of data, deliver a best-in-class data product experience, drive user growth, and improve the efficiency and capability of risk management.
4. Conduct fundamental research in deep learning, reinforcement learning, text and image processing, speech recognition, NLP, statistics, AI, and other fields.
Requirements:
1. Bachelor's degree or above and a minimum of 6 years of work experience.
2. A track record of leading the design of large-scale data platforms or data warehouses; solid theoretical knowledge of big data and data warehousing; proficiency in at least one programming language (Java/Python); mastery of common algorithms and data structures, and the ability to implement them independently.
3. In-depth understanding of the Hadoop big data ecosystem; hands-on experience developing applications with Hadoop, Hive, HBase, Spark, Storm, Kafka, ES, etc.; and a thorough understanding of their source code.
4. Exemplary learning, problem-analysis, and problem-solving abilities; a strong team spirit and excellent internal and external communication skills.
5. Technical experience related to IoT products (preferred).
6. Code contributions to the open-source community (preferred).