About me
I am a first-year PhD student at Imperial College London, advised by Prof. Wayne Luk and Dr. Hongxiang Fan, and a recipient of the President’s PhD Scholarship. Previously, I completed my bachelor’s and master’s degrees (MEng in Computing) at Imperial.
My research focuses on bridging the gap between advanced AI algorithms and hardware platforms through efficient hardware-aware algorithms, cost-effective model merging, ML-assisted hardware design, and algorithm-system co-design. My current research directions include:
- Multi-token generation for LLMs
- LLM quantization for edge deployment
- Model merging
- Code agents for hardware design
Education
- PhD in Computing Research, Imperial College London, 2024 - 2028 (expected)
- MEng in Computing, Imperial College London, 2020 - 2024
Selected Publications
Rethinking Optimal Verification Granularity for Compute-Efficient Test-Time Scaling
The Thirty-Ninth Annual Conference on Neural Information Processing Systems (NeurIPS), 2025. Hao Mark Chen, Guanxi Lu, Yasuyuki Okoshi, Zhiwen Mo, Masato Motomura, Hongxiang Fan
[Paper]
FW-Merging: Scaling Model Merging with Frank-Wolfe Optimization
International Conference on Computer Vision (ICCV), 2025. Hao Mark Chen, Shell Xu Hu, Wayne Luk, Timothy Hospedales, Hongxiang Fan
[Paper]
Exploring Code Language Models for Automated HLS-based Hardware Generation: Benchmark, Infrastructure and Analysis
Proceedings of the 30th Asia and South Pacific Design Automation Conference (ASP-DAC), 2025. Jiahao Gai, Hao (Mark) Chen, Zhican Wang, Hongyu Zhou, Wanru Zhao, Nicholas Lane, Hongxiang Fan
[Paper]
Enhancing LLM-based Quantum Code Generation with Multi-Agent Optimization and Quantum Error Correction
62nd ACM/IEEE Design Automation Conference (DAC), 2025. Charlie Campbell, Hao Mark Chen, Wayne Luk, Hongxiang Fan
[Paper]
Democratizing Agentic AI with Fast Test-Time Scaling on the Edge
arXiv preprint arXiv:2509.00195, 2025. Hao Mark Chen, Zhiwen Mo, Guanxi Lu, Shuang Liang, Lingxiao Ma, Wayne Luk, Hongxiang Fan
[Paper]
Advancing AI-assisted Hardware Design with Hierarchical Decentralized Training and Personalized Inference-Time Optimization
arXiv preprint arXiv:2506.00002, 2025. Hao Mark Chen, Zehuan Zhang, Wanru Zhao, Nicholas Lane, Hongxiang Fan
[Paper]
AdaBlock-dLLM: Semantic-Aware Diffusion LLM Inference via Adaptive Block Size
arXiv preprint arXiv:2509.26432, 2025. Guanxi Lu, Hao Mark Chen, Yuto Karashima, Zhican Wang, Daichi Fujiki, Hongxiang Fan
[Paper]
Parallel Prompt Decoding: A Cost-Effective and Memory-Efficient Approach for Accelerating LLM Inference
arXiv preprint, 2024. Hao Mark Chen, Wayne Luk, Hongxiang Fan, Roberto Bondesan
[Paper]
Progressive Mixed-Precision Decoding for Efficient LLM Inference
The Thirteenth International Conference on Learning Representations (ICLR), 2025. Hao Mark Chen, Fuwen Tan, Alexandros Kouris, Royson Lee, Hongxiang Fan, Stylianos I Venieris
[Paper]
Hardware-Aware Parallel Prompt Decoding for Memory-Efficient Acceleration of LLM Inference
Conference on Empirical Methods in Natural Language Processing (EMNLP), 2025. Hao (Mark) Chen, Wayne Luk, Ka Fai Cedric Yiu, Rui Li, Konstantin Mishchenko, Stylianos I Venieris, Hongxiang Fan
[Paper]
Hardware-Aware Neural Dropout Search for Reliable Uncertainty Prediction on FPGA
Proceedings of the 61st ACM/IEEE Design Automation Conference (DAC), 2024. Zehuan Zhang, Hongxiang Fan, Hao Chen, Lukasz Dudziak, Wayne Luk
[Paper]
Enhancing Dropout-based Bayesian Neural Networks with Multi-Exit on FPGA
arXiv preprint arXiv:2406.14593, 2024. Hao Mark Chen, Liam Castelli, Martin Ferianc, Hongyu Zhou, Shuanglong Liu, Wayne Luk, Hongxiang Fan
[Paper]
When Monte-Carlo Dropout Meets Multi-Exit: Optimizing Bayesian Neural Networks on FPGA
60th ACM/IEEE Design Automation Conference (DAC), 2023. Hongxiang Fan, Hao (Mark) Chen, Liam Castelli, Zhiqiang Que, He Li, Kenneth Long, Wayne Luk
[Paper]
Web-Based AI System for Medical Image Segmentation
Medical Image Understanding and Analysis (MIUA), 27th Annual Conference, Aberdeen, UK, 2023. Hao (Mark) Chen, Taowen Liu, Songyun Hu, Leyang Yu, Yiqi Li, Sihan Tao, Jacqueline Lee, Ahmed E. Fetit
[Paper]
