Proposal 1: Empowering LLMs with Optimal Planning Proficiency
Topic Description: This project focuses on integrating large language models with classical planning algorithms to enhance their ability to generate optimal and efficient plans from natural language instructions. The goal is to bridge the gap between high-level natural language understanding and low-level task execution.
Source Material:
Title: LLM+P: Empowering Large Language Models with Optimal Planning Proficiency
Link: https://arxiv.org/pdf/2304.11477 (GitHub Repository)
Year: 2023
Summary: LLM+P combines LLMs with classical planning methods via the Planning Domain Definition Language (PDDL). The LLM translates natural-language task descriptions into formal PDDL problem specifications, which a traditional planner then solves, yielding provably valid (and often optimal) plans.
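To make the pipeline concrete, here is a minimal sketch of the LLM+P idea. The LLM's role is only to translate a natural-language task into a PDDL problem file; a classical planner solves it. The LLM call is omitted here, and the `nl_to_pddl_problem` helper and the blocksworld example are illustrative assumptions, not the paper's actual code.

```python
# Sketch of the LLM+P translation step: assemble a PDDL problem file from
# structured facts. In LLM+P, this structure would be produced by prompting
# the LLM with the domain file, an example problem, and the NL task.

def nl_to_pddl_problem(objects, init, goal, domain="blocksworld"):
    """Build a PDDL problem string from object, init, and goal facts."""
    def fmt(preds):
        return "\n    ".join(f"({p})" for p in preds)

    return f"""(define (problem generated)
  (:domain {domain})
  (:objects {' '.join(objects)})
  (:init
    {fmt(init)})
  (:goal (and
    {fmt(goal)})))"""

# Hypothetical task: "Put block a on block b."
problem = nl_to_pddl_problem(
    objects=["a", "b"],
    init=["ontable a", "ontable b", "clear a", "clear b", "handempty"],
    goal=["on a b"],
)
print(problem)

# A classical planner (e.g. Fast Downward) would then be run on the domain
# and problem files, and the resulting plan translated back into natural
# language for the user.
```

The design point is the division of labor: the LLM handles language understanding, while soundness and optimality of the plan come from the symbolic planner.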
Proposal 2: Enhancing Zero-Shot Reasoning with Role-Play Prompting
Topic Description: This project investigates the use of role-play prompting to enhance the zero-shot reasoning capabilities of large language models. By prompting the model to adopt specific roles (e.g., teacher, mathematician), the approach aims to improve the model's problem-solving and reasoning performance.
Source Material:
Title: Better Zero-Shot Reasoning with Role-Play Prompting
Link: https://arxiv.org/pdf/2308.07702 (GitHub Repository)
Year: 2023
Summary: Role-play prompting involves instructing LLMs to assume specific roles, which implicitly guides their reasoning processes. This approach has been shown to significantly improve zero-shot performance on tasks requiring logical and commonsense reasoning.
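The mechanics can be sketched as a two-stage prompt: a role-setting turn asks the model to adopt a persona, a role-feedback turn has the model acknowledge it, and only then is the task posed. The exact wording below and the helper name are illustrative assumptions, not the paper's verbatim prompts.

```python
# Sketch of role-play prompting: construct the chat messages so the model
# answers the question while "immersed" in the assigned role.

def build_role_play_messages(role_description, role_feedback, question):
    """Construct chat messages for two-stage role-play prompting.

    role_description: user turn assigning the role.
    role_feedback: assistant turn acknowledging the role (in the paper,
    this acknowledgement is itself model-generated and then reused).
    question: the actual zero-shot task.
    """
    return [
        {"role": "user", "content": role_description},
        {"role": "assistant", "content": role_feedback},
        {"role": "user", "content": question},
    ]

messages = build_role_play_messages(
    role_description=(
        "From now on, you are an excellent math teacher who always "
        "explains problems to students step by step."
    ),
    role_feedback=(
        "That sounds great! As a math teacher, I will walk through each "
        "step clearly. Please share the problem."
    ),
    question="A shop sells pens at $2 each. How much do 7 pens cost?",
)
for m in messages:
    print(m["role"], ":", m["content"])
```

The messages list would then be sent to any chat-style LLM API; the claim under test in the project is that the role-conditioned context improves reasoning accuracy over a plain zero-shot prompt.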
Proposal 3: Few-Shot Grounded Planning for Embodied Agents
Topic Description: This project explores the use of large language models for few-shot grounded planning, enabling embodied agents (e.g., robots or virtual assistants) to perform dynamic and context-aware planning tasks. The focus will be on developing and testing methods to enhance agents' capabilities in real-world scenarios.
Source Material:
Title: LLM-Planner: Few-Shot Grounded Planning for Embodied Agents with Large Language Models
Link: https://arxiv.org/pdf/2212.04088 (GitHub Repository)
Year: 2023
Summary: The LLM-Planner framework leverages few-shot learning techniques to allow LLMs to generate plans grounded in the agent's current environment. It demonstrates how natural language can be used to guide complex planning tasks in a way that adapts to real-time observations.

Best regards,
Daniel
Approved.