Postgraduate research project

Large language models for military veteran mental health

Funding
Competition funded
Type of degree
Doctor of Philosophy
Entry requirements
2:1 honours degree
Faculty graduate school
Faculty of Engineering and Physical Sciences
Closing date

About the project

This project will explore recent distillation-based LLM training (e.g. DeepSeek-R1) for the mental health domain, including human-in-the-loop LLM training approaches such as adversarial training and rationale-based learning. These algorithms will be tested on a case study focussing on robust and safe self-help mental health applications for military veterans.

Recent advances in Large Language Models (LLMs) have the potential to revolutionize healthcare, but LLMs currently suffer from bias (e.g. due to underrepresented groups in training data) and error (often called hallucination by LLM vendors), which can lead medical practitioners to treat them with caution. Military veterans in particular can be reluctant to seek help, so self-help LLMs could provide a way for them to access early-stage support before formally seeking help from a medical practitioner.

This project will explore how recent distillation-based LLM training methods (e.g. DeepSeek-R1) can be adapted to fine-tune small but powerful LLMs for the mental health domain. You will develop novel training algorithms for small LLMs that are both robust and safe, investigating human-in-the-loop LLM training approaches such as adversarial training and rationale-based learning. These algorithms will be tested on a case study focussing on self-help mental health applications for military veterans with potential issues such as post-traumatic stress disorder (PTSD) and alcohol abuse.
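As background to the distillation training mentioned above, a minimal sketch of the classic soft-label knowledge-distillation objective, where a small student model is trained to match the temperature-softened output distribution of a larger teacher. This is an illustrative NumPy example only: the function names and the temperature value are assumptions for exposition, not details of the project or of DeepSeek-R1's specific pipeline.

```python
import numpy as np

def softmax(logits, temperature=1.0):
    """Temperature-softened softmax over a 1-D logit vector."""
    z = logits / temperature
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    as is conventional in knowledge distillation."""
    p = softmax(teacher_logits, temperature)
    q = softmax(student_logits, temperature)
    return float(temperature ** 2 * np.sum(p * (np.log(p) - np.log(q))))

teacher = np.array([2.0, 1.0, 0.1])
# a student that matches the teacher exactly incurs zero loss
print(distillation_loss(teacher, teacher))
# a mismatched student incurs a positive loss
print(distillation_loss(teacher, np.array([0.1, 1.0, 2.0])))
```

In practice this loss is usually combined with a standard cross-entropy term on ground-truth labels, and for reasoning-style distillation the "teacher signal" may instead be generated rationales used as fine-tuning targets.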