Beyond Basic Prompts: Leveraging Context and RAG
Author: Simon Håkansson (@0fflinedocs)
The content of a prompt
As discussed in my previous blog post, we can increase our chances of a successful result and higher-quality responses from LLMs by adding context: either by being detailed in the prompt itself, or by constraining the model's data using RAG (retrieval-augmented generation).
Depending on their training, large language models have access to vast amounts of data. Since the models use statistical probability to predict word sequences, clarity is key: it increases efficiency and decreases ambiguity for the LLM.
A prompt that clearly states your goal, the context, and your desired output increases your chances of a good result, since the LLM's output is more likely to align with your intention.
Prompt structure
As an example, let's say I want to view all user accounts and groups that are excluded from Conditional Access policies in Entra ID.
Ambiguous Prompt
List all Conditional Access exclusions using PowerShell
Depending on the training data and the model we use, the results will of course vary. Will it include all types of exclusions? What will the resulting output look like? Which authentication method will it suggest? None of this is specified in the prompt, so the model will "guess". We might get a good result, we might get a bad one. To increase our chances of success, let's add some context.
Refined Prompt
Objective:
List all Conditional Access policies using PowerShell, including all types of exclusions.
Check if each required module is installed; if not, install it with -Force for the current user.
Constraints & Context:
Use Microsoft Graph with an interactive login as authentication.
Ensure that scope permissions used are based on least privilege.
The output for each excluded value must be returned as the display name of the value, not the object ID (GUID).
Output:
Add extensive comments as documentation to the code.
Output the results from the script in a table.
NOTE
This refined prompt includes more detail, guiding the model towards what I want based on my knowledge and the output formatting I'd like.
Let's look at another example:
Ambiguous Prompt
List the MFA status of All Users
Refined Prompt
Objective:
Create a list of all licensed users in Entra ID & their registered authentication methods.
Constraints & Context:
Utilize the Microsoft Graph API and an interactive login for authentication.
Ensure that least privileges are used. Use the least amount of required modules.
Output:
Add comments to the code and format the output to a table.
Our intent is clear. The context rules out old, unsupported cmdlets (such as Get-MsolUser) and deprecated modules (such as AzureAD). By providing more context, we increase our chances of success and use security as a guiding principle.
RAG (Retrieval-Augmented Generation)
Even if our refined prompt helps us, the data pool from the training set might lead to unwanted results. Or perhaps we want to create an agent or application that is constrained to a specific data set. To do this we can use a capability called RAG (Retrieval-Augmented Generation). Briefly summarized, it consists of:
- Retrieving data based on the initial prompt
- Augmenting the prompt with the retrieved, grounded data
- Using an LLM to generate the grounded response
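The three steps above can be sketched in a few lines of code. The following is a minimal, illustrative example only (not tied to any Microsoft product): the in-memory document list is a stand-in for a real search index, retrieval is a naive keyword-overlap ranking, and `generate` is a stub standing in for an actual LLM call.

```python
# Minimal RAG sketch: retrieve -> augment -> generate.
# DOCUMENTS and generate() are illustrative assumptions, not a real product API.

DOCUMENTS = [
    "Conditional Access policies can exclude users, groups, and roles.",
    "Microsoft Graph PowerShell supports interactive login with Connect-MgGraph.",
    "RAG grounds model output in retrieved documents.",
]

def retrieve(query: str, top_k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(
        DOCUMENTS,
        key=lambda doc: len(terms & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:top_k]

def augment(query: str, context: list[str]) -> str:
    """Prepend the retrieved context to the original prompt."""
    grounding = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{grounding}\n\nQuestion: {query}"

def generate(prompt: str) -> str:
    """Stub for the LLM call that would produce the grounded response."""
    return f"[LLM response grounded in a prompt of {len(prompt)} characters]"

query = "What can Conditional Access policies exclude?"
answer = generate(augment(query, retrieve(query)))
```

In a real implementation, `retrieve` would query a search service (for example Azure AI Search) and `generate` would call a model endpoint; the flow, however, stays the same.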
The following Microsoft products offer different RAG capabilities:

| Generative AI Product | RAG Capabilities |
|---|---|
| Microsoft 365 Copilot | Built-in (Microsoft 365 organizational data) |
| Copilot Studio Agent | Built-in (SharePoint) or custom using Azure AI Search |
| Azure AI Foundry | Custom using Azure AI Search |
| Security Copilot | Custom knowledge base plugin |
| GitHub Copilot | Custom model in Enterprise preview |
Benefits of utilizing RAG
- Accuracy: The AI can reference specific facts instead of guessing
- Up-to-date: You can add new information without retraining the entire AI
- Transparency: Sources can be clearly referenced in the output
Summary
We have explored some examples of how to refine a prompt to achieve better results, and briefly covered the concept of RAG and its benefits. Sometimes a refined prompt is all that is needed for a good result; at other times, RAG is a requirement to ensure quality results over time. Remember: always validate the information against multiple sources.