Recruitment (EN)

HKUST(GZ) DIGAI Lab Recruitment

Update: 2025-09-12

The Data Intelligence and Geometric AI Laboratory (DIGAI Lab) at HKUST(GZ) is actively seeking passionate students to join our cutting-edge AI research team. We offer opportunities for PhD studies and remote visiting research positions. Our research spans representation learning, LLMs, geometric learning, and information retrieval, with a focus on advancing geometric AI theory and its applications in network science and scientific discovery. We welcome students from diverse backgrounds and provide extensive research resources and international collaboration opportunities.

DIGAI Lab Logo

1. Primary Research Directions

Direction 1: Large Language Models (Fine-tuning, RAG, Reasoning)

This research direction focuses on understanding the fundamental mechanisms of large language models and on enhancing their reasoning capabilities, representation quality, and multi-modal integration. Our goal is to build more efficient, interpretable, and controllable intelligent systems.

Core Research Areas:

  • LLM Representation Analysis & Theoretical Foundations: We investigate the internal representation mechanisms and geometric structures of large language models, exploring semantic space distributions of token embeddings, hierarchical representation learning, and the theoretical underpinnings of attention mechanisms. We develop interpretability analysis tools to reveal knowledge storage patterns, reasoning pathways, and decision-making processes in LLMs, providing theoretical guidance for model optimization and safety alignment.

  • LLM Implicit Reasoning & Logic Enhancement: Our research examines the implicit reasoning capabilities and logical thinking mechanisms of large models, developing reasoning enhancement methods based on Chain-of-Thought, tool learning, and program synthesis. We explore complex cognitive tasks including mathematical reasoning, causal inference, and commonsense reasoning to build intelligent systems capable of multi-step logical deduction and problem-solving.

  • Multi-Modal Alignment & Fusion Learning: We construct unified multi-modal representation spaces to achieve deep alignment and semantic fusion between different modalities such as text, images, audio, and video. Our research focuses on cross-modal attention mechanisms, inter-modal knowledge transfer, and unified encoder architectures to develop large-scale foundation models capable of understanding and generating multi-modal content.

  • Reinforcement Learning Reasoning & Parameter-Efficient Fine-tuning (PEFT): We apply reinforcement learning, including Reinforcement Learning from Human Feedback (RLHF), to improve the reasoning accuracy and alignment of large models. We explore parameter-efficient fine-tuning methods such as LoRA, adapter networks, and prompt tuning to achieve rapid domain adaptation and task customization under limited computational resources while balancing model performance and computational efficiency.
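To make the parameter-efficient fine-tuning idea above concrete, here is a minimal NumPy sketch of the core mechanism behind LoRA: the pretrained weight stays frozen while only a low-rank correction is trained. All dimensions and the scaling factor are illustrative assumptions, not values from any of our projects.

```python
import numpy as np

rng = np.random.default_rng(0)

# Frozen pretrained weight of a single linear layer (d_out x d_in).
d_in, d_out, r = 16, 16, 2            # rank r << d_in is the low-rank bottleneck
W = rng.normal(size=(d_out, d_in))

# LoRA trains only a low-rank correction: W_adapted = W + (alpha / r) * B @ A.
A = rng.normal(scale=0.01, size=(r, d_in))  # down-projection (trainable)
B = np.zeros((d_out, r))                    # up-projection, zero-init => no change at start
alpha = 4.0

def lora_forward(x, W, A, B, alpha, r):
    """Forward pass of a LoRA-adapted linear layer; W itself is never updated."""
    return x @ W.T + (alpha / r) * (x @ A.T) @ B.T

x = rng.normal(size=(1, d_in))
y = lora_forward(x, W, A, B, alpha, r)

# Trainable parameters: r * (d_in + d_out) = 64, versus d_in * d_out = 256
# for full fine-tuning of this layer.
n_trainable = A.size + B.size
```

Because B is zero-initialized, the adapted layer reproduces the frozen layer exactly at the start of training, which is what makes the method a safe drop-in during fine-tuning.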

We welcome students and researchers interested in deep learning theory, natural language processing, computer vision, reinforcement learning, explainable AI, multi-modal learning, and related mathematical foundations and algorithmic optimization techniques to join our team.


Direction 2: Personalized Modeling, Recommendation Algorithms, Knowledge Graphs & Social Networks

This research direction integrates knowledge graphs, localized data, and geometric modeling techniques to build recommendation systems and personalized analysis frameworks. We focus on addressing key challenges in personalized applications of large models, including hallucination problems, recommendation accuracy, social network dynamics, and group behavior prediction.

Core Research Areas:

  • Retrieval-Augmented Generation (RAG, GraphRAG): We develop personalized large models based on knowledge graphs and localized data, leveraging RAG and GraphRAG to enhance model performance for specific domains and user groups. By combining structured knowledge with unstructured text, we achieve precise information retrieval and generation while effectively mitigating hallucination problems and enhancing personalization capabilities.

  • Personalized LLMs & Adaptive Learning: We investigate personalization techniques for large language models, including Parameter-Efficient Fine-Tuning (PEFT), Adapter Networks, Prompt Learning, and In-Context Learning methods. We explore user profiling, personal preference learning, and dynamic adaptation mechanisms to develop personalized intelligent assistants that continuously evolve based on user behavior and feedback, enabling personalized memory and knowledge accumulation in multi-turn conversations.

  • Information Retrieval & Recommendation Systems: We develop next-generation recommendation algorithms based on deep learning and geometric constraints, integrating user behavioral data, content features, and social relationships to build multi-modal recommendation models. We explore core techniques including collaborative filtering, content-based recommendation, and sequential recommendation to address key challenges such as cold start problems, data sparsity, and recommendation diversity.

  • Large-scale Networks, Graph Representation & Optimization: We focus on modeling, representation learning, and optimization problems for large-scale complex networks, developing efficient algorithms and theoretical frameworks. Our research emphasizes community detection, influence analysis, node classification, graph classification, and link prediction, combining graph neural networks with geometric deep learning techniques to capture hierarchical structures and dynamic evolution patterns in network structures.
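The retrieve-then-generate loop at the heart of RAG can be sketched in a few lines. The toy corpus, bag-of-words embedding, and prompt template below are illustrative stand-ins; a real system would use a dense neural encoder, a vector index, and an LLM generator.

```python
import numpy as np
from collections import Counter

# Tiny document store standing in for a knowledge graph / localized corpus.
corpus = [
    "DIGAI Lab studies hyperbolic representation learning and geometric AI.",
    "Retrieval-augmented generation grounds model answers in retrieved documents.",
    "Graph neural networks model relational data such as social networks.",
]

vocab = sorted({w for doc in corpus for w in doc.lower().split()})

def embed(text):
    """Toy bag-of-words embedding; production RAG uses a dense neural encoder."""
    counts = Counter(text.lower().split())
    return np.array([counts[w] for w in vocab], dtype=float)

doc_vecs = np.stack([embed(d) for d in corpus])

def retrieve(query, k=1):
    """Return the k documents most cosine-similar to the query."""
    q = embed(query)
    sims = doc_vecs @ q / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q) + 1e-9)
    return [corpus[i] for i in np.argsort(-sims)[:k]]

# The retrieved passage is prepended to the prompt so the generator answers
# from supplied evidence rather than from parametric memory alone.
query = "how does retrieval-augmented generation work"
context = retrieve(query, k=1)[0]
prompt = f"Context: {context}\nQuestion: {query}\nAnswer using only the context."
```

Grounding the prompt in retrieved evidence is precisely what mitigates the hallucination problem mentioned above: the generator is constrained to the supplied context instead of free-associating from its weights.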

We welcome students and researchers interested in recommendation systems, social network analysis, knowledge graphs, graph neural networks, complex network theory, information retrieval, personalized large models, parameter-efficient fine-tuning, and related data mining and machine learning technologies to join our team.


Direction 3: Hyperbolic Representation Learning & Non-Euclidean Geometric Machine Learning

This research direction explores the deep applications of non-Euclidean geometric spaces (particularly hyperbolic geometry) in machine learning and data representation. We aim to transcend the limitations of traditional Euclidean spaces, providing novel geometric perspectives and theoretical frameworks for hierarchical data, complex networks, and high-dimensional data modeling.

Core Research Areas:

  • Hyperbolic Deep Learning: We develop neural network architectures and optimization algorithms based on hyperbolic spaces, including hyperbolic convolutional neural networks, hyperbolic graph neural networks, and hyperbolic attention mechanisms. We explore deep learning models and reasoning systems that can naturally handle hierarchical structures and tree-like data.

  • Hierarchical Representation Learning & Embedding: We leverage the negative curvature properties of hyperbolic spaces to construct low-dimensional embedding representations for data with natural hierarchical structures (such as knowledge graphs, taxonomies, and organizational structures). We develop efficient hyperbolic embedding algorithms that preserve semantic similarity, encode hierarchical relationships, and enhance interpretability, particularly targeting hierarchical data in natural language processing, computer vision, and bioinformatics.

  • Manifold Learning & Geometric Optimization: We study optimization theory and algorithms on Riemannian manifolds, including hyperbolic spaces, spheres, and other non-Euclidean manifolds. We develop geometry-aware optimization methods and explore applications of geodesic gradient descent, exponential maps, and logarithmic maps in machine learning, providing theoretical foundations for constrained optimization and geometric deep learning.

  • Geometric Network Representation: We map complex networks into non-Euclidean geometric spaces to capture the intrinsic geometric structures and topological properties of networks. We study geometric invariance, scalability, and robustness of network embeddings, developing geometric representation methods that preserve network hierarchy, community structures, and dynamic evolution characteristics.
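To make the negative-curvature intuition above concrete, the snippet below evaluates the standard closed-form geodesic distance in the Poincaré ball, one common model of hyperbolic space (the sample points are illustrative):

```python
import numpy as np

def poincare_distance(x, y, eps=1e-9):
    """Geodesic distance in the Poincare ball model (requires ||x||, ||y|| < 1):

        d(x, y) = arcosh(1 + 2 ||x - y||^2 / ((1 - ||x||^2)(1 - ||y||^2)))

    Distances grow without bound near the boundary, which is why a
    low-dimensional ball can embed tree-like hierarchies with low distortion.
    """
    sq = np.sum((x - y) ** 2)
    denom = (1.0 - np.sum(x * x)) * (1.0 - np.sum(y * y))
    return float(np.arccosh(1.0 + 2.0 * sq / (denom + eps)))

origin = np.zeros(2)
near_origin = np.array([0.10, 0.0])
near_boundary_a = np.array([0.85, 0.0])
near_boundary_b = np.array([0.95, 0.0])

# The same Euclidean step of 0.1 costs far more hyperbolic distance
# when taken near the boundary of the ball than near its center.
d_center = poincare_distance(origin, near_origin)
d_edge = poincare_distance(near_boundary_a, near_boundary_b)
```

The exponential blow-up of distance toward the boundary is the geometric resource that hierarchical embedding methods exploit: leaves of a tree are placed near the boundary, where there is exponentially more "room" at a given radius.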

We welcome students and researchers interested in differential geometry, Riemannian optimization, graph theory, deep learning theory, computational geometry, and related mathematical foundations and application domains to join our team.


2. University Introduction

HKUST(GZ) is the first legally independent mainland-Hong Kong cooperative educational institution established under the Greater Bay Area development framework. Officially established in June 2022 with approval from China’s Ministry of Education, HKUST(GZ) focuses on interdisciplinary innovation and exploring new talent cultivation models. We aim to become a paradigm for integrated educational development and an internationally renowned university that cultivates innovative talents for the future.

HKUST(GZ) Bridge

HKUST(GZ) awards master’s and doctoral degrees from The Hong Kong University of Science and Technology. As of September 2024, we have over 300 academic staff, including 240+ tenured faculty. All faculty hold doctoral degrees, 98% have international experience, nearly 20% are national-level talent program recipients, nearly 50% are provincial/ministerial-level talent program selectees, and 15% are ranked among the global top 2% scientists.

Since establishment, HKUST(GZ) has been approved for 3 Guangdong Provincial Key Laboratories and 11 Guangzhou Municipal Key Laboratories. We have secured over 300 government-funded research projects, including 66 national-level projects and participation in 18 national key and major projects.

HKUST(GZ) Campus

The university has signed cooperation agreements with over 90 leading enterprises and research institutions, including Alibaba Cloud, GTA Semiconductor, and Shenzhen Bay Laboratory. We have established joint laboratories with nearly 10 industry leaders, supported over 100 entrepreneurship incubation projects, and registered 40+ enterprises. The HKUST(GZ) Innovation Zone is under construction. We have established a 1 billion yuan technology transfer fund with Guangzhou Industrial Investment Group, with total fund partnerships reaching 2.4 billion yuan.

HKUST(GZ) Logo

3. Team Introduction

DIGAI Lab is led by Dr. Menglin Yang (https://yangmenglinsite.github.io/), who holds a Ph.D. from The Chinese University of Hong Kong and conducted postdoctoral research at Yale University. He currently serves as an assistant professor in the AI Thrust at HKUST(GZ). Our team has extensive experience in machine learning, geometric AI, and scientific computing, maintaining close collaborations with top universities and research institutions worldwide. We are actively recruiting PhD students and research assistants to participate in cutting-edge research projects. Successfully admitted PhD students receive full scholarships. Our group is a vibrant academic family that emphasizes integrity and fairness, providing excellent office environments and computational support. Joining our team offers:

  • Publishing Excellence: Opportunities to publish in top-tier ML, data mining, and AI journals and conferences, with support for international conference participation
  • Mentoring & Academic Environment: Weekly one-on-one meetings with supervisor, harmonious group atmosphere, regular paper reading sessions, and systematic learning in ML, optimization, and statistics. We encourage student collaboration and respect individual research interests while providing domain guidance
  • Research Skills Development: For students pursuing research careers, we provide systematic training in research methodology and thinking to enable independent project leadership and lab management
  • Collaboration & Exchange: Extensive opportunities including long-term visits to HKUST Clear Water Bay campus, exchanges with world-class universities, and industry internships at leading companies like Tencent, Huawei, and Peng Cheng Laboratory

4. Recruitment Information

We welcome applicants with:

  • Strong interest in large models, recommendation systems, network science, AI4SCI, and related fields
  • Undergraduate/graduate backgrounds in computer science, mathematics, physics, bioinformatics, network science, etc.
  • Strong English communication skills (IELTS 6.5 or TOEFL 80 required for admission)
  • Solid programming skills in relevant languages

We also recruit research assistants (RAs) and interns, with competitive compensation based on experience and project involvement. Remote interns receive computational resources and comprehensive guidance, with opportunities to publish high-level papers. Outstanding performers receive priority consideration for doctoral admission.

PhD students can start in February or August 2026. Program duration is 3 years (with relevant research master’s degree) or 4 years (without). Tuition is 40,000 RMB/year, with full scholarships provided to all admitted students (~15,000 RMB/month).

📝 Master’s Student Application: Master’s applicants can apply directly through the university website without prior supervisor contact. If already admitted to a master’s program and interested in joining our lab, please contact us directly via email.

HKUST(GZ) master’s and undergraduate students are welcome to contact the supervisor directly.


5. How to Apply

Please send the following materials to menglinyang[at]hkust-gz.edu.cn or digailab[at]outlook.com.

  • Resume
  • Undergraduate and graduate transcripts
  • Professional ranking certificates (if available)
  • Recommendation letters (if available)
  • Representative papers or projects (if available)
  • Research proposal (if available)

Email subject format: Position Applied + Your Name + Degree + Graduation University + Major

We will conduct preliminary screening and arrange interviews upon receipt of materials.

Official application portal: https://fytgs.hkust-gz.edu.cn/