neverminedio/algovera-agent

By neverminedio

Updated over 1 year ago


py-template

Python template

Includes ipykernel (for VS Code) and black (for formatting)

Install

pip install -e .

Prompting

Multi-step reasoning: a cost/performance tradeoff

Many prompts used in this project request that multiple steps of reasoning be performed in one shot. For example: "Search for findings about X and summarise them; if none are present, reply 'none'." A multi-step version would first ask "Does this text report findings about X? Reply with yes or no", and then, if the answer is yes, ask what the findings are. This requires the model to do less work in each step and will likely yield better results. However, it requires that the model process the entire text twice, roughly doubling the cost and the time taken to produce a result.
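The tradeoff above can be sketched as two prompt-building helpers (hypothetical function names; the model call itself is omitted so the logic stands alone):

```python
def one_shot_prompt(text: str, topic: str) -> str:
    """Single prompt asking for search + summary in one step."""
    return (
        f"Search the TEXT for findings about {topic} and summarise them. "
        "If none are present, reply 'none'.\n\n"
        f"TEXT: {text}"
    )


def multi_step_prompts(text: str, topic: str) -> list[str]:
    """Two simpler prompts; note the full TEXT is sent in each step,
    roughly doubling cost and latency."""
    step1 = (
        f"Does this text report findings about {topic}? "
        "Reply with yes or no.\n\n"
        f"TEXT: {text}"
    )
    step2 = f"What are the findings about {topic}?\n\nTEXT: {text}"
    return [step1, step2]
```

In the multi-step version, step 2 would only be sent if the model answers "yes" to step 1.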

Examples

Where possible, always provide examples of the output you want. Put these on new lines and explicitly label them as examples. If they are not clearly labelled, the model may pick up on language in the examples and treat it as part of the task rather than as illustrations of the output format.
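A minimal sketch of this labelling convention (the helper name and heading style are assumptions, not part of the template):

```python
def prompt_with_examples(task: str, examples: list[str]) -> str:
    """Build a prompt with each example on its own line, explicitly
    labelled so the model cannot confuse examples with the task text."""
    lines = [task, ""]
    for i, ex in enumerate(examples, start=1):
        lines.append(f"EXAMPLE {i}: {ex}")
    return "\n".join(lines)
```

For instance, `prompt_with_examples("Extract the drug names.", ["aspirin", "ibuprofen"])` places each example on a new line under an `EXAMPLE n:` label.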

Inserting context

Inserting context clearly is key. If you have example outputs followed by new data for the model to read, clearly distinguish them on new lines with uppercase headings that act as 'variables' the request can refer to. Again, this ensures the text describing the task isn't confused with the text the model should be processing.

Description of the task based on some TEXT.

TEXT: the text it should read.
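The template above could be assembled with a small helper (a sketch; the function name is hypothetical, and the uppercase headings serve as the 'variables' the task description refers to):

```python
def insert_context(task: str, **variables: str) -> str:
    """Append each piece of context under an uppercase heading on its
    own line, keeping the task description separate from the data."""
    parts = [task, ""]
    for name, value in variables.items():
        parts.append(f"{name.upper()}: {value}")
    return "\n".join(parts)
```

For example, `insert_context("Summarise the findings in the TEXT.", text="Patients improved.")` yields the task line, a blank line, and `TEXT: Patients improved.`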

Explanations

Asking for an explanation, even if you don't actually need it, can improve performance. However, sometimes you just want a simple list of results or a yes/no reply. The model has a bias towards being 'helpful & polite' and will therefore reply with things like "Sorry, I couldn't find any results", which is harder to parse than "None". In these cases you need to specify clearly: 'Don't explain your response'.
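A sketch of a suffix that suppresses this polite filler (hypothetical helper name):

```python
def no_explanation(prompt: str) -> str:
    """Append instructions that suppress polite filler, so replies are
    easy to parse ('None' instead of "Sorry, I couldn't find...")."""
    return (
        f"{prompt}\n"
        "Don't explain your response. "
        "If there are no results, reply with 'None'."
    )
```

The returned reply can then be matched against the literal string `None` rather than parsed for apologies.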

Docker Pull Command

docker pull neverminedio/algovera-agent