
Challenge 020 | Prompt Engineering for Power Platform

Last month, I had to skip a challenge, but this month we continue where we left off. We've built a ChatGPT-like Canvas App that we can start using. But just like you need Google skills to get valuable output from a Google search, you need prompting skills to get valuable output from ChatGPT. We didn't learn Googling in a day; we refined it over the years while using these powerful search engines. I am certain the same will hold for these generative AI models. This challenge will cover some basics of Prompt Engineering, show how it can be valuable for us Power Platform Developers, and add readily available prompts to the app we created in the last challenge.

Challenge Objectives

🎯 Learn about Prompt Engineering

🎯 Get a better understanding of how these Large Language Models (LLMs) can be used

🎯 Trial some prompts with the Azure OpenAI Service

🎯 Add valuable prompts to our ChatGPT-like Canvas App

Introduction

By now, it is almost impossible that you haven't tried out ChatGPT yourself. No other new technology has reached a user base of 100 million users in such a short timespan.

As a user, you've probably also noticed that the response can be mind-blowing, but sometimes also pretty underwhelming, not to mention the chance of the infamous so-called hallucinations.

As magical as these LLMs sometimes seem, their output is generated by a computer. A powerful computer, but still just a computer. That's why I like to think of them as a function, but with a less predictable output.

Just like Power Fx functions, they need an input. In the case of an LLM, your text, also called a prompt, is the input. To get the desired output, it is essential to write a good prompt. Crafting these is called Prompt Engineering or Prompt Design. A Prompt Engineer combines words, symbols, formats, etc. to get the desired output, which can be much more than just a factual sentence scraped from Wikipedia. We will look into some of the tactics used, to get a better understanding of how it can be applied.

Prompting Tactics

Last May, I had the honor of speaking at the Automation Summit 2023 in London about the Creator Kit. As I opted to travel by train, I had 3 hours each way to kill: a good moment to take the free short course ChatGPT Prompt Engineering for Developers, which I did. I highly recommend following it. It took me 3-4 hours to finish instead of the declared 1 hour, including playing around with the prompts to make it stick. But it is a great crash course on Prompt Engineering.

The course discusses two major principles.

Principle 1: Write clear and specific instructions

Principle 2: Give the model time to think

I will only share the first three tactics of the first principle, as I think this already contains loads of valuable info for us Power Platform developers, and I want to encourage you to take the full course.

You can try the example prompts in the following sections using the Chat Playground of the Azure OpenAI Service model we deployed during Challenge 019. You can leave the system message set to Default. To get a more predictable output, it is recommended to set the temperature to 0.
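If you prefer trying the prompts from code instead of the Playground, the request boils down to a messages array plus a temperature. Below is a minimal sketch of that payload shape; the default system message shown is an assumption, so substitute whatever your deployment uses.

```python
def build_chat_request(user_prompt,
                       system_message="You are an AI assistant that helps people find information.",
                       temperature=0):
    """Build a chat-completions payload; temperature 0 keeps the output predictable."""
    return {
        "messages": [
            {"role": "system", "content": system_message},
            {"role": "user", "content": user_prompt},
        ],
        "temperature": temperature,
    }

payload = build_chat_request("Summarize this challenge in one sentence.")
```

This is only the request body; sending it to your deployed model is left to whichever client library you use.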


Principle 1: Write clear and specific instructions

To get the desired output, it is essential to instruct the model as clearly and specifically as possible on what you want it to do. This means that in many cases a longer prompt will give the model more context on what it should do. Below are some tactics that describe how to achieve this.

Tactic 1: Use delimiters

To be really clear on what is an instruction and what is the text the instructions should be applied to, using delimiters is a great tactic. The delimiters act just like quotes in a string: they clearly separate specific pieces of text from the rest of the prompt. As regular text paragraphs use punctuation marks, it can help to clearly indicate which punctuation marks to look for as a delimiter.

Delimiter        | Example
-----------------|------------
Triple quotes    | """
Triple backticks | ```
Triple dashes    | ---
Angle brackets   | <>
XML tags         | <tag></tag>

Example
Summarize the text delimited by triple dashes into a single sentence. 
---
Last month, I had to skip a challenge, but this month we continue where we left off. We've built a ChatGPT-like Canvas App that we can start using. But just like you need Google skills to get valuable output from a Google search, you need prompting skills to get valuable output from ChatGPT. We didn't learn Googling in a day; we refined it over the years while using these powerful search engines. I am certain the same will hold for these generative AI models. This challenge will cover some basics of Prompt Engineering, show how it can be valuable for us Power Platform Developers, and add readily available prompts to the app we created in the last challenge.
---
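The same pattern is easy to generate programmatically. Here is a small sketch (the helper name is my own) that wraps arbitrary input text in triple-dash delimiters, so the instruction and the text can never bleed into each other:

```python
def delimited_prompt(instruction, text, delimiter="---"):
    """Combine an instruction and input text, fencing the text with delimiters."""
    return f"{instruction}\n{delimiter}\n{text}\n{delimiter}"

prompt = delimited_prompt(
    "Summarize the text delimited by triple dashes into a single sentence.",
    "We've built a ChatGPT-like Canvas App that we can start using.",
)
```

Because the user text is injected between delimiters, even input that itself contains instructions stays clearly marked as data, not as commands.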

Tactic 2: Ask for structured output

Instead of getting a piece of text in natural language as an output, we can ask for it to be structured, for example as XML or JSON. Especially the JSON option is great for us as Power Platform developers.

Example
Generate a list of three made-up challenges in JSON format. The array should be named challenges, and each challenge will have the following properties:
challenge_id, title, topic.
Only give the JSON.
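Because we asked for JSON only, the response can be parsed straight into a data structure. A sketch of that step, using a hand-written sample in place of a live model response (real output will vary):

```python
import json

# A plausible response for the prompt above; illustrative, not real model output.
sample_response = """
{
  "challenges": [
    {"challenge_id": 1, "title": "Prompt Like a Pro", "topic": "Prompt Engineering"},
    {"challenge_id": 2, "title": "Flow State", "topic": "Power Automate"},
    {"challenge_id": 3, "title": "Canvas Painter", "topic": "Canvas Apps"}
  ]
}
"""

# json.loads turns the model's text into a dictionary we can work with directly.
data = json.loads(sample_response)
titles = [c["title"] for c in data["challenges"]]
```

In practice you should still guard the parse with error handling, since the model can occasionally wrap the JSON in extra prose despite the instruction.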

Tactic 3: Ask to use conditions

Just like we can use a condition in a Power Automate flow, we can use a condition in our prompt by asking for it in natural language. What the model should do must be described for both branches. Two examples are given below. The instructions are identical, but each input text should lead to a different one of the instructed outputs.

Example 1: Condition = true
You will be provided with text delimited by triple dashes. 
If it contains a sequence of instructions, re-write those instructions in the following format:
Step 1 - ...
Step 2 - ...
...
Step N - ...
If the text does not contain a sequence of instructions, then simply write "No steps provided."
---
Making a cup of tea is easy! First, you need to get some water boiling. While that's happening, grab a cup and put a tea bag in it. Once the water is hot enough, just pour it over the tea bag. Let it sit for a bit so the tea can steep. After a few minutes, take out the tea bag. If you like, you can add some sugar or milk to taste. And that's it! You've got yourself a delicious cup of tea to enjoy.
---
Example 2: Condition = false
You will be provided with text delimited by triple dashes. 
If it contains a sequence of instructions, re-write those instructions in the following format:
Step 1 - ...
Step 2 - ...
...
Step N - ...
If the text does not contain a sequence of instructions, then simply write "No steps provided."
---
The sun is shining brightly today, and the birds are singing. It's a beautiful day to go for a walk in the park. The flowers are blooming, and the trees are swaying gently in the breeze. People are out and about, enjoying the lovely weather. Some are having picnics, while others are playing games or simply relaxing on the grass. It's a perfect day to spend time outdoors and appreciate the beauty of nature.
---
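Because the prompt pins the output to one of two known shapes, the response becomes easy to validate in code. A small sketch (this helper is my own, not part of the course) that classifies a response:

```python
def classify_response(response):
    """Return 'steps' for a numbered step list, 'no-steps' for the fallback text."""
    lines = [ln.strip() for ln in response.strip().splitlines() if ln.strip()]
    if lines and all(ln.startswith("Step ") for ln in lines):
        return "steps"
    if response.strip() == "No steps provided.":
        return "no-steps"
    return "unexpected"

sample = "Step 1 - Boil some water.\nStep 2 - Put a tea bag in a cup."
```

Constraining the output like this is what makes LLM responses usable downstream: anything that falls into the "unexpected" bucket can be retried or flagged.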

And now what?

I hope you understand how these tactics can improve your prompting skills. Let's see how this can be of direct value for us in the Power Platform realm.

As mentioned, I think the tactic of asking for structured output can be a great timesaver for us. We have discussed the Creator Kit many times before. We know that many of those controls require a table as an input (e.g. the command bar). Instead of typing everything from scratch, we can use our LLM to give us the required structure based on just a few words. In the prompt below I describe the format of the desired output. It is heavily based on the example given on the Microsoft Docs page of the command bar.

You will get a list of words delimited by triple dashes. Each word is separated by a comma. Make sure that every ItemKey is in lower case, and that every ItemDisplayName starts with a capital. 
Respond in the following format:
table(
    {
        ItemKey: "word-1",
        ItemDisplayName: "word-1",
        ItemIconName: "word-1"
    },{
        ItemKey: "word-2",
        ItemDisplayName: "word-2",
        ItemIconName: "word-2"
    },{
        ItemKey: "word-n",
        ItemDisplayName: "word-n",
        ItemIconName: "word-n"
    }
)
---
train, car, plane
---
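To sanity-check the model's answer (or to skip the model entirely for trivial cases), the same transformation can also be written deterministically. A sketch that produces the Power Fx table() text from a comma-separated word list (capitalizing ItemIconName is my own assumption, since the prompt leaves it unspecified):

```python
def words_to_commandbar_table(words):
    """Turn 'train, car, plane' into the table() text the command bar expects."""
    items = []
    for word in (w.strip() for w in words.split(",")):
        items.append(
            "{\n"
            f'        ItemKey: "{word.lower()}",\n'
            f'        ItemDisplayName: "{word.capitalize()}",\n'
            f'        ItemIconName: "{word.capitalize()}"\n'
            "    }"
        )
    return "table(\n    " + ",".join(items) + "\n)"

result = words_to_commandbar_table("train, car, plane")
```

The point of the LLM version is of course that it scales to fuzzier requests, like "give me sensible icon names for these actions", where a deterministic script cannot follow.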

As you can see, we get a nice list of all the items we asked for, exactly in the format expected by the command bar. We can now simply copy-paste it into the Items property. Sweet, isn't it?

Although already powerful, it is still a bit cumbersome to get this prompt right from an end-user perspective. This is where we can start using the system message. Earlier, we used the default system message, which means we need to specify the instructions in our prompt. If we move the instructions to the system message, things become much easier to use.

The snippet below is the system message we will enter. As you can see, it is almost identical, but we leave out the delimiter instruction, as the system message already separates the instructions from the prompt itself.

You will get a list of words. Each word is separated by a comma.   
Make sure that every ItemKey is in lower case, and that every ItemDisplayName starts with a capital.  
Respond in the following format:  
table(  
    {  
        ItemKey: "word-1",  
        ItemDisplayName: "word-1",  
        ItemIconName: "word-1"  
    },{  
        ItemKey: "word-2",  
        ItemDisplayName: "word-2",  
        ItemIconName: "word-2"  
    },{  
        ItemKey: "word-n",  
        ItemDisplayName: "word-n",  
        ItemIconName: "word-n"  
    }  
)

As you can see, our text input is now much easier to enter. In the last part of this challenge, we will update the solution from the last challenge. We need to be able to save system messages to Dataverse, and adjust the app so that we can select them. This way we will still be able to use the app like ChatGPT, but now we can save our prompts to gain development productivity. I am excited to make this work. I hope you are too!
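In API terms, moving the instructions to the system message changes where they live in the request, not what they say. A sketch of the resulting messages array (the variable names are mine, and the format description is abbreviated here):

```python
SYSTEM_MESSAGE = (
    "You will get a list of words. Each word is separated by a comma. "
    "Make sure that every ItemKey is in lower case, and that every "
    "ItemDisplayName starts with a capital. Respond in the following format: "
    'table({ItemKey: "word-1", ItemDisplayName: "word-1", ItemIconName: "word-1"}, ...)'
)

def build_messages(user_input):
    """The user types only the word list; the instructions ride along in the system role."""
    return [
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": user_input},
    ]

messages = build_messages("train, car, plane")
```

This separation is exactly what we will persist to Dataverse next: the system role content becomes a reusable template, while the user role stays free-form.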

Extend our Solution

To really get a deep understanding of what you are doing, I recommend you finish Challenge 019 first. If you just want to continue from where I left off, you can import the unmanaged solution below.

Download: ChatGPT_1_0_0_1.zip (3.70 MB)

Add Table

We will need a new table in our solution to save the system messages. I named the table System Message Template. We will add the following custom columns to the table:

Display name   | Data type
---------------|--------------------------------
System Message | Single line of text - Text area
Hint Text      | Single line of text - Text
Temperature    | Number - Decimal


Make all these columns required, and make sure to set the maximum character count of the System Message column much higher (e.g. 4,000).

Add System Message Templates

We can already add two System Message Templates, so we have some sample data. Below you will see the two that I've added. The system message from the Creator Kit - CommandBar is the same as the snippet provided earlier.

Update Security Role

The new table we've created should be accessible to users. We could add all permissions to the same security role, but I prefer to give only read permissions to the ChatGPT User role and create a new role that has permission to manage the System Message Templates. This way we have much more control. We can copy the security role and name it System Message Template Admin.

In the security role editor, we can remove the privileges on the Chat and Message tables, and turn on everything for the System Message Template table. The Chat and Message permissions were scoped to User, so that users only see and interact with their own chats. The System Message Template Admin role can be granted to multiple individuals; that's why I set its scope to Organization.

Make sure that this new security role is included in your solution.

Update Canvas App

In our app, we need to be able to see the different System Message Templates and select the one we want. The goal is to add a button to the command bar for adjusting the settings (which system message template to use, but also manually adjusting the temperature and potentially other model properties); the rest of the functionality will follow from there.

So let's start with the command bar. Update the Items property to the snippet below. As you can see, we will add a settings button that will stay on the right side.

Table(
    {
        ItemKey: "back",
        ItemDisplayName: "All chats",
        ItemIconName: "ChevronLeft",
        ItemVisible: App.ActiveScreen.Size = ScreenSize.Small
    },
    {
        ItemKey: "settings",
        ItemDisplayName: "Settings",
        ItemIconName: "Settings",
        ItemFarItem: true
    }
)

Update the OnSelect of the command bar to the snippet below. As you can see, we use a Switch() on the selected item key to toggle a local variable. This variable will be used to switch between the chat and the settings later.

Switch(Self.Selected.ItemKey,
    //Back button
    "back", UpdateContext({lclItemSelected: false}),

    //Settings button
    "settings", UpdateContext({lclSettingsVisible: !lclSettingsVisible}),

    //in case the button isn't defined yet
    Notify("An unsupported button has been pressed.", NotificationType.Warning)
)

To make sure that the variable is false by default, update the OnVisible of the Chat Screen to the following function.

UpdateContext({lclSettingsVisible: false})

The settings page will show all the predefined system messages, where the user can select the message of choice. To make it work, go to the Controls Screen, copy the RadioGroup1 control, and paste it into conVerticalChat. Its Items property will be set to the System Message Templates table we created earlier. You do need to add this table to your app first.

To make our app toggle between the message and the settings, we only need to update three Visible properties.

//for galMessages & conUserInput
!lclSettingsVisible

//for radSystemMessage
lclSettingsVisible

Now, let's make sure the send button uses the predefined system messages as an input. The current OnSelect value is shown below.

UpdateContext({lclSendingMessage: true, lclMessage: txtMessageInput.Value, lclMessageCount: CountRows(galMessages.AllItems), lclScope: "You are a helpful assistant"});
Reset(txtMessageInput);
Patch(Messages, Defaults(Messages), {Question: lclMessage, Chat: galChats.Selected});
Set(gblCompletionResponse, 'Get-Completion'.Run(galChats.Selected.Chat, gblDeploymentID, lclMessage, lclScope).answer);
Patch(Messages, Last(galMessages.AllItems), {Answer: gblCompletionResponse});
If(lclMessageCount = 0, Set(gblSummaryResponse, 'Get-Summary'.Run(gblDeploymentID, lclMessage).summary);
Patch(Chats, galChats.Selected, {Name:gblSummaryResponse}));
UpdateContext({lclSendingMessage: false});

The lclScope variable contains the system message. Currently it is hard-coded. For now, we will only update this part to the following to make it dynamic. Easy.

lclScope: radSystemMessage.Selected.'System Message'

We also created a hint text that instructs the user on what input is expected. This is also based on the radSystemMessage.Selected item. Set the hint text of txtMessageInput to the following.

radSystemMessage.Selected.'Hint Text'

If you play around, you will see the far-right button of the command bar change, as well as the hint text. Pretty nice.

But there is still a bit of a problem. If we leave it as is, we can change the system message during a conversation, which will surely break things. So we need to save which system message is used for a particular chat. We do have a Chat table, but there is no column for that yet. Let's add a lookup column to the System Message Template table, so we can patch it to the Chats table later.

After creating it, you will need to refresh the Chats table from the data panel. We can now get back to the Send button we updated earlier. In the second-to-last step we ask the Azure OpenAI Service to summarize our question in a few words and patch that to the title of the chat, so we can clearly distinguish the different chats. This is only done with the first message. We can extend the Patch() function to include the System Message as well.

If(lclMessageCount = 0, Set(gblSummaryResponse, 'Get-Summary'.Run(gblDeploymentID, lclMessage).summary);
Patch(Chats, galChats.Selected, {Name:gblSummaryResponse, 'System Message': radSystemMessage.Selected}));

You can test whether it works by creating a new chat, selecting a system message, and sending a request. In the Chats table in Dataverse, the System Message column should contain the selected System Message Template. We could create a Model-driven app for this, but for now the Dataverse table view also does the trick.

We can now set the DisplayMode of the radSystemMessage control to the snippet below. This makes sure we cannot select an item when the Chat already contains a System Message.

If(
    IsBlank(galChats.Selected.'System Message'),
    DisplayMode.Edit,
    DisplayMode.Disabled
)

The last step is to set the default selected item of the same control. This way we can see which System Message Template has been selected for a previously created chat.

With(
    {lclSystemMessage: galChats.Selected.'System Message'},
    If(
        IsBlank(lclSystemMessage),
        LookUp(
            'System Message Templates',
            Name = "Default"
        ),
        lclSystemMessage
    )
)

There are two things I want to explain about the snippet above. The first is the With() function. What I am trying to achieve is to set the default value to the previously selected item of the selected chat. I regularly see people sticking to the If() function, which would look like the snippet below. As you can see, galChats.Selected.'System Message' is then evaluated twice. For this record reference it is already in cache, so performance-wise it wouldn't make too much of a difference, but when you use the same LookUp() multiple times in a formula, it becomes quite inefficient. That's where the With() function comes in handy: we store the value in a local variable that is much faster to access. The reason I use the With() function here is partly for learning purposes, but also to keep a uniform approach to how I use these functions.

The second thing I want to show is that I use a LookUp() to make sure the default selected item is the Default System Message Template. The result is that a new chat works just as you know from ChatGPT, but if you want different functionality, you can simply select the option of your choice.

If(
    IsBlank(galChats.Selected.'System Message'),
    LookUp(
        'System Message Templates',
        Name = "Default"
    ),
    galChats.Selected.'System Message'
)

If you go to the chat you created earlier for testing (the one that patched the selected System Message Template to the Chat record), you will see that the control is disabled and has the used System Message Template selected.

Power Platform Prompts

We now have a better understanding of Prompt Engineering, and we have created an app that helps us easily use powerful prompts. It is now up to you to create valuable prompts for your workflow. I have given you an example of how it can be used to generate output for the Items property of the Creator Kit's command bar. You can expand it to facilitate every control, and maybe even multiple properties. But you can do much more, and I am really curious what you will come up with.

The Power Platform Advocate Team came up with the brilliant idea to create a GitHub repository for prompts that can be used in the Power Platform. As with any open-source project, it is great fun to contribute, and you can find great inspiration for yourself. I want to highlight one that we can directly implement into the solution we've created: the PowerFx Interpreter. With this quite straightforward prompt, we can use our self-made app to paste in a Power Fx function and have it tell us what it is actually doing. You should be able to add it to your System Message Template table yourself by now. Please also contribute to this repo, as it is still in its infancy. We can all learn from each other.

Key Takeaways

👉🏻 LLMs can do much more than act as a Wikipedia chatbot

👉🏻 We can create prompts that can help us as a Power Platform Developer

👉🏻 Let's share the prompts we love open-source
