Master's Thesis Defense: Christian Norman Madsen
Using a Large Language Model for Categorization of Quantum Computing Papers
Quantum computing as a research field is evolving so rapidly that even experts cannot keep up, let alone the layman, leaving both vulnerable to over-hyping and misinformation. In this work I investigate whether LLMs can make the field more accessible to experts and laymen alike by categorizing quantum computing papers. Using prompt engineering and optimization experiments, I customized an LLM to answer questions of increasing complexity, gaining an overview of the limits of current-generation LLMs' understanding while also exploring how to optimize performance, time, and cost. I found that LLMs have reached a point where the resulting models greatly exceed the understanding of the layman, but still cannot compete with the general understanding of experts; automated paper categorization is therefore useful for the layman, while experts gain more by applying their own cognitive abilities. I also found that there is an information sweet spot where LLMs perform best, optimizing not only time and cost but also performance. This serves as a proof of concept for LLMs as a tool for societal education, and as a tool that can help map quantum computing research in a larger context.
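The prompt-based categorization described above can be sketched roughly as follows. The category list, the prompt wording, and the `query_llm` stub are illustrative assumptions for this sketch, not the taxonomy, prompt, or model actually used in the thesis.

```python
# Minimal sketch of LLM-based categorization of quantum computing papers.
# CATEGORIES and query_llm are hypothetical placeholders, not the thesis's setup.

CATEGORIES = ["hardware", "algorithms", "error correction", "applications"]

def build_prompt(abstract: str) -> str:
    """Construct a single-label classification prompt from a paper abstract."""
    options = ", ".join(CATEGORIES)
    return (
        "Classify the following quantum computing paper into exactly one "
        f"of these categories: {options}.\n\n"
        f"Abstract: {abstract}\n\n"
        "Answer with the category name only."
    )

def query_llm(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g. a hosted chat-completion API).
    # Returns a fixed answer so the sketch runs offline.
    return "algorithms"

def categorize(abstract: str) -> str:
    """Send the prompt to the model and validate its answer against the taxonomy."""
    answer = query_llm(build_prompt(abstract)).strip().lower()
    return answer if answer in CATEGORIES else "unknown"

print(categorize("We present a variational quantum eigensolver for ..."))
```

Constraining the model to a fixed label set and validating its reply keeps the output machine-readable, which is what makes large-scale automated categorization feasible in the first place.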