Advancing Computing Frontiers: The Research Landscape of Hao Zheng at UCF
The fields of computer architecture and machine learning are evolving rapidly, driven by an insatiable demand for faster, more efficient, and more powerful computational systems. As traditional silicon technology approaches its physical limits, researchers are compelled to explore novel architectures and methodologies to meet the escalating requirements of modern applications, particularly in artificial intelligence and machine learning. At the forefront of this innovation is Hao Zheng, an assistant professor in the Department of Electrical and Computer Engineering at the University of Central Florida (UCF), whose research is dedicated to revolutionizing current chip architectures.
The Imperative for Novel Architectures
The relentless advancement of technologies such as artificial intelligence and machine learning presents a dual challenge: these fields demand significantly faster processing and lower energy consumption, while traditional silicon technology struggles to scale into the sub-10-nanometer regime. This tension necessitates a fundamental rethinking of how computational hardware is designed and used. Specializing the underlying hardware architecture has emerged as a leading approach to meeting these escalating computational demands. However, existing specialized hardware, typically in the form of accelerators, faces a trade-off: accelerators are either fully customized for highly regular applications, and thus lack generality, or they remain general-purpose and lack the specific optimizations needed to run any given application efficiently. This gap highlights the critical need for architectures that can adapt to diverse and complex computing tasks.
The Polymorphic Chip Processor: A Transformative Concept
Hao Zheng's research is centered on addressing these limitations through the development of a transformative concept: the polymorphic chip processor. This innovative approach aims to support ubiquitous, irregular, and complex applications characterized by intensive data processing. The core idea is to invent a new class of chip processors, fundamentally grounded in graph theory, that possess the dynamic ability to adapt to irregular and complex workloads at runtime. This adaptability is crucial for handling the unpredictable and often sparse nature of data encountered in many modern computational tasks, especially those involving graph structures prevalent in machine learning and data analysis.
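To make the notion of an "irregular" workload concrete, the sketch below is illustrative only: it uses hypothetical graph data and generic Python, and is not drawn from Zheng's polymorphic processor design. It shows a neighbor-aggregation loop over a graph stored in compressed sparse row (CSR) form, where the number and location of memory accesses per node depend entirely on the input graph rather than being fixed at design time, which is exactly what rigid, fully customized accelerators struggle with.

```python
# Illustrative sketch (hypothetical data, not from the iCAT publications):
# why graph workloads are "irregular" from a hardware perspective.
import numpy as np

# A small graph in CSR (compressed sparse row) form.
indptr = np.array([0, 2, 3, 6, 7])        # node i's neighbors live in indices[indptr[i]:indptr[i+1]]
indices = np.array([1, 3, 2, 0, 1, 3, 2])  # neighbor ids: data-dependent, non-contiguous
features = np.random.rand(4, 8)            # one 8-dimensional feature vector per node

def aggregate(indptr, indices, features):
    """Sum each node's neighbor features; the gather addresses are data-dependent."""
    out = np.zeros_like(features)
    for v in range(len(indptr) - 1):
        nbrs = indices[indptr[v]:indptr[v + 1]]  # irregular: length and locations vary per node
        out[v] = features[nbrs].sum(axis=0)
    return out

print(aggregate(indptr, indices, features).shape)  # (4, 8)
```

Because both the amount of work per node and the memory locations touched are only known once the input arrives, hardware that can reconfigure itself at runtime has a natural advantage over fixed-function designs for this class of computation.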
The iCAT Laboratory: Pioneering Intelligent Computer Architecture
At the helm of this cutting-edge research is Zheng's Intelligent Computer Architecture and Technology (iCAT) Laboratory. The iCAT Lab is actively engaged in revolutionizing current chip architectures, with a particular focus on enhancing the capabilities of existing solutions like graphics processing units (GPUs) to better manage the rising complexity of modern AI workloads. Their work extends beyond theoretical concepts, aiming to translate these ideas into tangible prototypes and ultimately, real-world products. The lab's mission is to deliver end-to-end computing systems with industry-scale quality, bridging the gap between academic research and practical application.
A Foundation in Graph Theory and Geometric Optimization
A significant thread running through the research conducted at iCAT is the application of graph theory and geometric optimization principles to hardware design. Publications such as "Scaling Graph Neural Network Training via Geometric Optimization" (HPCA'26) and "Rethinking Tiling and Dataflow for SpMM Acceleration: A Graph Transformation Framework" (MICRO'25) underscore this focus. Graph Neural Networks (GNNs) are increasingly vital for processing complex relational data, and their efficient training and inference demand specialized architectural solutions. The work on geometric optimization and graph transformation frameworks aims to develop methods that can accelerate these computations by exploiting the inherent structure of graph data.
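As a rough intuition for why tiling and dataflow for sparse matrix-matrix multiplication (SpMM) matter so much to GNNs, the following minimal sketch uses a generic GCN-style layer with made-up data; it is not the framework from either paper. It expresses neighbor aggregation as a sparse-dense multiply followed by a dense weight multiply, which is the kernel such accelerators target.

```python
# Minimal sketch, assuming a standard GCN-style aggregation (hypothetical data):
# the aggregation step of a GNN layer is a sparse-dense matrix multiply (SpMM),
# so SpMM tiling and dataflow directly govern GNN training and inference speed.
import numpy as np
from scipy.sparse import csr_matrix

num_nodes, feat_dim = 4, 8
# Hypothetical adjacency matrix in CSR form (1 where an edge exists).
A = csr_matrix(np.array([
    [0, 1, 0, 1],
    [0, 0, 1, 0],
    [1, 1, 0, 1],
    [0, 0, 1, 0],
], dtype=np.float32))
H = np.random.rand(num_nodes, feat_dim).astype(np.float32)  # node features
W = np.random.rand(feat_dim, feat_dim).astype(np.float32)   # layer weights

# One GNN layer: SpMM (A @ H) aggregates neighbor features,
# then a dense GEMM applies the learned weights.
H_next = (A @ H) @ W
print(H_next.shape)  # (4, 8)
```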
Furthermore, research into accelerators for GNNs, including "DiTile-DGNN: An Efficient Accelerator for Distributed Dynamic Graph Neural Network Inference" (ISCA'25) and "I-DGNN: A Graph Dissimilarity-based Framework for Designing Scalable and Efficient DGNN Accelerators" (HPCA'25), demonstrates a commitment to building hardware that can handle the dynamic and distributed nature of graph data processing. The development of accelerators like "SCALE: A Structure-Centric Accelerator for Message Passing Graph Neural Networks" (MICRO'24) and "EGMA: Enhancing Data Reuse and Workload Balancing in Message Passing GNN Acceleration via Gram Matrix Optimization" (DAC'24) further highlights the lab's dedication to optimizing GNN computations through hardware-algorithm co-design. The paper "SAGA: Sparsity-Agnostic Graph Convolutional Network Acceleration with Near-optimal Workload Balance" (ICCAD'23) also points to the lab's efforts in creating flexible and efficient GNN accelerators.
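The workload-balancing concern these accelerators address can be illustrated with a simple software analogy. The sketch below uses a generic greedy longest-first heuristic over hypothetical row lengths; it is not the algorithm from SAGA, EGMA, or the other papers, only an illustration of why skewed graph degree distributions leave naive assignments with idle processing elements.

```python
# Illustrative sketch only (not the published algorithms): a greedy heuristic
# for balancing sparse rows across processing elements (PEs). Row lengths in
# real graphs are highly skewed, so naive round-robin assignment leaves some
# PEs idle; balancing by nonzero count narrows the load gap.
import heapq

def balance_rows(nnz_per_row, num_pes):
    """Assign rows (work = nonzero count) to PEs, always filling the least-loaded PE."""
    heap = [(0, pe) for pe in range(num_pes)]      # (current load, PE id)
    assignment = {pe: [] for pe in range(num_pes)}
    # Longest-processing-time-first: place heavy rows before light ones.
    for row in sorted(range(len(nnz_per_row)), key=lambda r: -nnz_per_row[r]):
        load, pe = heapq.heappop(heap)
        assignment[pe].append(row)
        heapq.heappush(heap, (load + nnz_per_row[row], pe))
    return assignment

# Hypothetical skewed degree distribution across 8 rows, split over 3 PEs.
print(balance_rows([50, 3, 2, 40, 1, 1, 30, 2], num_pes=3))
```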
Addressing Diverse Computational Challenges
Beyond GNNs, the iCAT Lab's research portfolio is broad, reflecting the multifaceted nature of modern computing challenges. For instance, "MetaLeak: Uncovering Side Channels in Secure Memory Architectures Exploiting Metadata" (ISCA'24) delves into critical security aspects of memory systems, an area of paramount importance in an increasingly connected world. The work on "CircuitSeer: RTL Post-PnR Delay Prediction via Coupling Functional and Structural Representation" (ICCAD'24) addresses the intricate domain of electronic design automation, focusing on improving the accuracy and efficiency of predicting circuit performance.
The lab's interest also extends to accelerating other complex computational tasks. "VITA: ViT Acceleration for Efficient 3D Human Mesh Recovery via Hardware-Algorithm Co-Design" (DAC'24) showcases a focus on accelerating Vision Transformer (ViT) models, which are crucial for advanced computer vision tasks. Similarly, "FDMAX: An Elastic Accelerator Architecture for Solving Partial Differential Equations" (ISCA'23) targets the acceleration of scientific computing problems, demonstrating the versatility of the lab's architectural innovations. The paper "Venus: A Versatile Deep Neural Network Accelerator Architecture Design for Multiple Applications" (DAC'23) further emphasizes the pursuit of general-purpose yet efficient deep learning accelerators.
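For a sense of the computation a PDE accelerator targets, here is a textbook Jacobi relaxation step for the 2D Laplace equation; this is a generic illustration with hypothetical grid data, not FDMAX's actual architecture or dataflow. The fixed five-point stencil and predictable memory access pattern are the kind of structure such hardware can exploit.

```python
# Generic illustration (not FDMAX): one Jacobi relaxation step for the
# 2D Laplace equation on a regular grid.
import numpy as np

def jacobi_step(u):
    """Average the four neighbors at every interior point; boundary values stay fixed."""
    v = u.copy()
    v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1] + u[1:-1, :-2] + u[1:-1, 2:])
    return v

# Hypothetical boundary condition: hot top edge, cold everywhere else.
grid = np.zeros((64, 64))
grid[0, :] = 1.0
for _ in range(100):
    grid = jacobi_step(grid)
print(round(float(grid[1, 32]), 3))  # interior point warms toward the boundary value
```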
A Collaborative and Supportive Research Environment
Hao Zheng emphasizes that the recognition and advancements made by his research group are a testament to the collective efforts of his entire team. He deeply appreciates the collaborative research culture at UCF, acknowledging the invaluable guidance and encouragement received from colleagues and the support of the department chair, Dr. Reza Abdolvand. The presence of exceptional Ph.D. students has been instrumental in achieving these research milestones.
The iCAT Lab is committed to providing a well-rounded experience for its students, encompassing solid theoretical studies, practical modeling and simulation experiments, performance evaluation, and physical implementation. Opportunities for students to collaborate with distinguished researchers nationwide and participate in industry-scale projects are abundant, facilitating the translation of research outcomes into real-world products. Each student receives support for annual conference travel, fostering professional development and exposure to the wider research community. The lab fosters a supportive and inclusive culture, prioritizing student growth, health, and future development. Their mentoring philosophy centers on providing all necessary resources for students to succeed and achieve impactful research, all while ensuring an enjoyable PhD journey.
Qualifications for Prospective Students
Prospective students aspiring to join the iCAT Lab are expected to be motivated and eager to learn new concepts at the forefront of cutting-edge research. A solid understanding of, or prior research experience in, areas such as computer architecture, algorithms, machine learning applications, FPGA/HLS programming, and computer/mathematical modeling is highly valued. Proficiency in programming languages such as C/C++, Python, Verilog, and scripting languages is also essential. The lab's current projects span the entire computing stack with the goal of delivering end-to-end computing systems of industry-scale quality, and they welcome students with diverse backgrounds. Graduate Research/Teaching Assistantships are available for prospective Ph.D. students.
UCF: A Hub for Advanced Research
UCF provides a robust platform for conducting state-of-the-art research, characterized by its comprehensive resources, rich educational opportunities, and a collaborative environment. Initiatives such as the university's AI Initiative and the Knights Digital Twin Initiative further underscore UCF's commitment to fostering innovation in critical technological domains. The College of Engineering and Computer Science (CECS) and the NanoScience Technology Center (NSTC) offer essential research facilities and infrastructure that empower research groups like iCAT to pursue unique and transformative research with potential societal impact. The strong graduate programs in computer engineering at UCF, recognized by U.S. News and World Report and CSRankings.org, further solidify its position as a leading institution for advanced research and development in computer architecture and related fields.

