
David Andrews is a Professor of Engineering and holds the Thomas Clinton Mullins Endowed Chair in the Department of Electrical Engineering and Computer Science at the University of Arkansas at Fayetteville. He earned a Doctor of Philosophy and joined the University of Arkansas in July 2008 as a full professor. Before this appointment, he was a full professor at the University of Kansas Information and Telecommunication Technology Center from July 2000 to July 2008; earlier in his career, he served on the faculty at the University of Arkansas and worked at the General Electric Company. His academic career spans more than three decades, with contributions ranging from embedded systems to high-performance computing.
Andrews' research centers on computer architecture, embedded systems, and reconfigurable computing, with a focus on programming models for hybrid CPU/FPGA systems, FPGA-based accelerators for machine learning, processor-in-memory architectures, and real-time constraints. He has published more than 85 journal articles and conference papers in leading venues such as IEEE Micro, IEEE Transactions on Parallel and Distributed Systems, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, FCCM, FPL, and FPGA. His highly cited publications include 'Seeking solutions in configurable computing' (Computer, 1997, 253 citations), 'Programming models for hybrid FPGA-CPU computational components: a missing link' (IEEE Micro, 2004, 190 citations), 'PAWS: A performance evaluation tool for parallel computing systems' (Computer, 2002, 109 citations), 'Hthreads: A computational model for reconfigurable devices' (2006, 107 citations), 'Achieving programming model abstractions for reconfigurable computing' (IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2008, 104 citations), and 'Reconfigurable computing cluster (RCC) project: Investigating the feasibility of FPGA-based petascale computing' (2007, 93 citations). Recent contributions include 'N-TORC: Native Tensor Optimizer for Real-Time Constraints' (FCCM, 2025), 'The BRAM is the Limit: Shattering Myths, Shaping Standards, and Building Scalable PIM Accelerators' (FCCM, 2024), 'IMAGine: An In-Memory Accelerated GEMV Engine Overlay' (FPL, 2024), 'ProTEA: Programmable Transformer Encoder Acceleration on FPGA' (SC Workshops, 2024), 'FPGA Processor In Memory Architectures (PIMs): Overlay or Overhaul?' (FPL, 2023), and 'A Runtime Programmable Accelerator for Convolutional and Multilayer Perceptron Neural Networks on FPGA' (ARC, 2022).
Andrews' publications have garnered over 2,374 citations on Google Scholar, reflecting his substantial influence in advancing scalable hardware solutions for complex computational challenges. He has secured research funding from sponsors including the United States Naval Research Laboratory.
