Session
Session B: 12:00-2:00 PM
Poster Assignment
113
Department
Computer Science - Engineering
Presenter(s)
Siddhi Mundhra, Prisha Bobde, Yutong Sui, Megan Fu
Mentor(s)
Professor Sherwood
Title
Mitigating Errors of LLM-Generated PyRTL Code
Abstract
Large Language Models (LLMs) can generate Hardware Description Language (HDL) code from natural-language specifications to produce Register-Transfer Level (RTL) designs, but most models struggle to produce correct HDL code due to the complexity of RTL design. Prior work on error analysis and mitigation for LLM-generated code focuses on traditional HDLs such as Verilog. PyHDL-Eval: An LLM Evaluation Framework for Hardware Design Using Python-Embedded DSLs introduces a benchmark for evaluating LLMs on Python-embedded domain-specific languages (DSLs) alongside traditional HDLs.
This work extends the error analysis and mitigation framework proposed in Understanding and Mitigating Errors of LLM-Generated RTL Code from Verilog to Python-embedded DSLs, with a focus on PyRTL. We adapt rule-based description refinement, retrieval-augmented generation (RAG), and simulation-driven two-stage debugging to operate on PyRTL rather than Verilog. Using PyHDL-Eval, we re-evaluate LLM performance under the adapted framework and analyze how errors differ with and without the workflow, as well as between Python-embedded DSLs and Verilog.
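To illustrate the simulation-driven stage, the sketch below shows one way a candidate PyRTL design could be checked against reference test vectors, with simulation mismatches routed back to the model for repair. This is a minimal sketch, not the authors' implementation: the function names (check_candidate, two_stage_debug), the caller-supplied query_llm placeholder for the model interface, and the test-vector format are all illustrative assumptions; only the pyrtl calls (reset_working_block, Simulation, step, inspect) are real PyRTL API.

    import pyrtl

    def check_candidate(code_text, test_vectors, expected_outputs):
        """Elaborate candidate PyRTL code and simulate it; return (passed, feedback)."""
        pyrtl.reset_working_block()            # discard hardware left by prior candidates
        try:
            exec(code_text, {"pyrtl": pyrtl})  # candidate code builds the circuit
            sim = pyrtl.Simulation()
            for inputs, expected in zip(test_vectors, expected_outputs):
                sim.step(inputs)               # drive one cycle of named input values
                for name, want in expected.items():
                    got = sim.inspect(name)
                    if got != want:
                        return False, (f"on inputs {inputs}, output {name} "
                                       f"was {got}, expected {want}")
        except Exception as err:               # elaboration or simulation errors
            return False, str(err)             # (e.g. pyrtl.PyrtlError) become feedback
        return True, ""

    def two_stage_debug(spec, query_llm, test_vectors, expected_outputs, max_repairs=3):
        """Stage 1: generate PyRTL from the refined description.
        Stage 2: repair using simulation feedback.

        query_llm is a hypothetical placeholder: it takes a prompt string
        and returns PyRTL source text.
        """
        code = query_llm(spec)                 # stage 1: initial generation
        for _ in range(max_repairs):
            passed, feedback = check_candidate(code, test_vectors, expected_outputs)
            if passed:
                return code
            # stage 2: feed the mismatch or error back to the model for repair
            code = query_llm(f"{spec}\n\nPrevious attempt:\n{code}\n\nError: {feedback}")
        return code

A caller would supply, for example, test_vectors = [{'a': 3, 'b': 5}] and expected_outputs = [{'sum': 8}] for a one-cycle adder check; keeping the checker separate from the repair loop lets the same simulation harness score designs with and without the workflow.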