Portrait of Xiaolong Han

Xiaolong Han

Ph.D. student at the University of Surrey

Guildford, Surrey, UK · University of Surrey

Ph.D. Student in Computer Science


I am a Ph.D. student at the University of Surrey, advised by Prof. Ferrante Neri and Assistant Prof. Lu Yin.

My work focuses on parameter-centric machine learning: how model weights can be searched, analyzed, represented, and generated. I am currently most interested in weight space learning, neural architecture search, and functional representations for scientific data, and I actively collaborate with Zehong Wang on related problems.

My recent work asks three connected questions: how to search neural architectures more reliably, how to organize model weights into useful representations, and how to generate neural functions directly for structured domains such as molecules. I am particularly interested in methods that are efficient enough to reuse pretrained models rather than retraining from scratch.

Featured Projects

Overview figure for the Weight Space Learning survey
2026 · Survey · Weight Space Learning

A Survey of Weight Space Learning: Understanding, Representation, and Generation

Xiaolong Han, Zehong Wang, Bo Zhao, Binchi Zhang, Jundong Li, et al.

This survey organizes weight space learning into three axes: understanding, representation, and generation. It provides a unified map of how pretrained weights can be analyzed as structured objects and reused across retrieval, editing, federated learning, neural architecture search, and neural function generation.

Framework figure for W2T
2026 · arXiv · LoRA

W2T: LoRA Weights Already Know What They Can Do

Xiaolong Han, Ferrante Neri, Zijian Jiang, Fang Wu, Yanfang Ye, Lu Yin, Zehong Wang

W2T asks whether LoRA checkpoints can be understood directly from their weights, without training data or base-model inference. It resolves LoRA factorization ambiguity with a symmetry-aware QR-SVD canonicalization, then tokenizes rank-wise components with a transformer encoder for attribute classification, performance prediction, adapter retrieval, and transfer across LoRA collections.
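To make the factorization ambiguity concrete: a LoRA update ΔW = BA is unchanged by any invertible reparameterization (BR, R⁻¹A), so a canonical form must be invariant to it. The sketch below uses a plain SVD of the reassembled product as a stand-in canonicalization; the function name and procedure are illustrative, and the paper's symmetry-aware QR-SVD scheme is more refined:

```python
import numpy as np

def canonicalize_lora(A, B):
    """Map a LoRA pair (B, A) with Delta W = B @ A to a
    factorization-invariant form via SVD of the product.
    Hypothetical sketch, not the paper's exact QR-SVD procedure."""
    delta_w = B @ A                      # (d_out, d_in), rank <= r
    U, s, Vt = np.linalg.svd(delta_w, full_matrices=False)
    r = A.shape[0]
    # The top-r components are invariant to any invertible
    # reparameterization (B R, R^{-1} A) of the original factors.
    return U[:, :r], s[:r], Vt[:r, :]

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 16))
B = rng.normal(size=(8, 4))
R = rng.normal(size=(4, 4))              # invertible reparameterization
U1, s1, V1 = canonicalize_lora(A, B)
U2, s2, V2 = canonicalize_lora(np.linalg.inv(R) @ A, B @ R)
```

Both factorizations yield the same singular values, so downstream tokenization sees the same object regardless of how the checkpoint happened to be parameterized.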

Framework figure for MolField
2026 · Preprint · Molecular Learning

Molecular Representations in Implicit Functional Space via Hyper-Networks

Zehong Wang, Xiaolong Han, Qi Yang, Xiangru Tang, Fang Wu, et al.

MolField models molecules in implicit functional space rather than fixed graph embeddings. The framework combines SE(3)-aware canonicalization, structured weight tokenization, and a transformer hyper-network to generate task-conditioned neural fields for molecular dynamics, property prediction, and related downstream tasks.

Framework figure for SaDENAS
2024 · Swarm and Evolutionary Computation · NAS

SaDENAS: A Self-adaptive Differential Evolution Algorithm for Neural Architecture Search

Xiaolong Han, Yu Xue, Zehong Wang, Yong Zhang, Anton Muravev, Moncef Gabbouj

SaDENAS revisits continuous evolutionary NAS through self-adaptive differential evolution. It balances local exploitation and global exploration in architecture-encoding space to reduce premature convergence and the small-model trap that can dominate supernet-based search.
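The self-adaptive differential evolution idea can be sketched in a few lines: each individual carries its own mutation factor F and crossover rate CR, which are occasionally resampled (a jDE-style rule, used here as a generic illustration). The objective is a stand-in sphere function, not an architecture encoding:

```python
import numpy as np

def sade(f, dim=8, pop=20, gens=100, seed=0):
    """Minimal self-adaptive DE: per-individual F and CR survive only if
    the trial they produced wins selection. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (pop, dim))
    F = np.full(pop, 0.5)
    CR = np.full(pop, 0.9)
    fit = np.array([f(v) for v in x])
    for _ in range(gens):
        for i in range(pop):
            # Self-adaptation: resample control parameters with prob 0.1.
            Fi = rng.uniform(0.1, 0.9) if rng.random() < 0.1 else F[i]
            CRi = rng.random() if rng.random() < 0.1 else CR[i]
            a, b, c = rng.choice([j for j in range(pop) if j != i], 3,
                                 replace=False)
            mutant = x[a] + Fi * (x[b] - x[c])        # DE/rand/1 mutation
            cross = rng.random(dim) < CRi             # binomial crossover
            cross[rng.integers(dim)] = True           # keep >= 1 mutant gene
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft <= fit[i]:                          # greedy selection
                x[i], fit[i], F[i], CR[i] = trial, ft, Fi, CRi
    return x[fit.argmin()], fit.min()

best, val = sade(lambda v: float(np.sum(v ** 2)))
```

Letting F and CR adapt per individual is what lets the population shift between global exploration early and local exploitation late without hand-tuned schedules.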

Representative framework figure for GENAS
2024 · IEEE TNNLS · NAS

A Gradient-guided Evolutionary Neural Architecture Search

Yu Xue, Xiaolong Han, Ferrante Neri, Jiafeng Qin, Danilo Pelusi

GENAS combines evolutionary search with gradient-guided local refinement in a weight-sharing supernet. The method keeps the exploration strength of evolutionary NAS while using efficient local updates to improve candidate architectures without retraining every subnet from scratch.
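A minimal caricature of the hybrid loop: evolutionary mutation proposes candidates, a gradient step refines each one locally, and truncation selection keeps the best. Everything here (objective, gradient, rates) is a stand-in for the supernet setting, not the paper's algorithm:

```python
import numpy as np

def evo_grad_sketch(f, grad, dim=6, pop=10, gens=30, lr=0.05, seed=0):
    """Toy hybrid of evolutionary variation and gradient-guided local
    refinement over continuous encodings. Illustrative sketch."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, (pop, dim))
    for _ in range(gens):
        children = x + rng.normal(0, 0.1, x.shape)    # evolutionary mutation
        merged = np.vstack([x, children])
        # Gradient-guided refinement: one cheap local step per candidate,
        # instead of training every candidate from scratch.
        merged -= lr * np.array([grad(v) for v in merged])
        fit = np.array([f(v) for v in merged])
        x = merged[np.argsort(fit)[:pop]]             # truncation selection
    return x[0], float(f(x[0]))

best, val = evo_grad_sketch(lambda v: float(np.sum(v ** 2)),
                            lambda v: 2 * v)
```

The division of labor mirrors the paper's premise: evolution supplies exploration across the encoding space, while cheap gradient updates do the fine-grained exploitation.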

Framework figure for SWD-NAS
2024 · IEEE TII · Differentiable NAS

Self-adaptive Weight Based on Dual-attention for Differentiable Neural Architecture Search

Yu Xue, Xiaolong Han, Zehong Wang

SWD-NAS stabilizes differentiable neural architecture search with a dual-attention mechanism that reweights candidate operations more reliably than vanilla architecture parameters, reducing the bias and collapse behaviors often observed in DARTS-style search.
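As a toy version of attention-based operation reweighting: given per-operation feature summaries on an edge, derive the mixing weights from attention scores rather than from free architecture parameters. The query construction below is an illustrative guess, not the paper's dual-attention design:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def attention_op_weights(op_feats):
    """Mixing weights for candidate operations on a supernet edge, derived
    from attention over per-op feature summaries. Illustrative sketch:
    the shared mean query is a hypothetical choice."""
    q = op_feats.mean(axis=0)                          # shared query vector
    scores = op_feats @ q / np.sqrt(op_feats.shape[1]) # scaled dot-product
    return softmax(scores)                             # (n_ops,) weights

op_feats = np.random.default_rng(0).normal(size=(7, 16))
w = attention_op_weights(op_feats)
```

Because the weights are computed from the operations' own behavior rather than trained as free parameters, they track the current supernet state, which is the intuition behind avoiding DARTS-style bias and collapse.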

Earlier Publications

  • 2023
    Heterogeneous Graph Contrastive Multi-view Learning. SDM 2023.
  • 2022
    Temporal Graph Transformer for Dynamic Network. ICANN 2022.
  • 2022
    Prediction of Willingness to Pay for Airline Seat Selection Based on Improved Ensemble Learning. Aerospace.

Service and Recognition

I serve as a reviewer for IEEE Transactions on Neural Networks and Learning Systems, IEEE Transactions on Circuits and Systems for Video Technology, Swarm and Evolutionary Computation, Scientific Reports, and Knowledge Engineering Review.

Recent awards include the National Scholarship, the NUIST Graduate "Puxin" Elite Scholarship, the Principal Scholarship, and the First Prize Scholarship in 2024.