Boost your Prompt Engineering Efficiency with Prompty and VS Code!
Hello! I’m Wada (@cognac_n), an AI Evangelist at KINTO Technologies.
How do you manage your prompts? Today, I will introduce Prompty, a tool that simplifies creating, editing, testing, implementing, and organizing prompts!
1. What is Prompty?
Prompty is a tool designed to streamline prompt development for large language models (LLMs). It centralizes prompts and their parameters in a single file with YAML front matter, which makes prompts easy to version-control on GitHub and improves collaboration in team environments. Using the Visual Studio Code (VS Code) extension can greatly improve the efficiency of prompt engineering.
Benefits of Introducing Prompty
Although integration with Azure AI Studio and Prompt Flow offers benefits, this article will focus on the integration with VS Code.
Who should consider using Prompty:
- Those looking to speed up prompt development
- Developers who need version control for prompts
- Teams collaborating on prompt creation
- Anyone wanting to simplify prompt execution on the application side
https://github.com/microsoft/prompty
2. Prerequisites
Requirements (at the time of writing)
- Python 3.9 or higher
- VS Code (if using the extension)
- OpenAI API Key or Azure OpenAI Endpoint (depending on the LLM in use)
Installation and initial setup
- Install the VS Code extension
- Install the required library with pip or another package manager:
pip install prompty
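Once installed, you can sanity-check the setup by running a prompt file from Python with the prompty package. A minimal sketch, assuming the execute API shown in the project README; the file name chat.prompty and the input value are hypothetical:

import prompty
import prompty.azure  # loads the Azure OpenAI invoker (may require: pip install "prompty[azure]")

# Execute the prompt file, substituting the {{question}} placeholder
result = prompty.execute(
    "chat.prompty",  # hypothetical path to your .prompty file
    inputs={"question": "What can you tell me about your tents?"},
)
print(result)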
3. Try It Out
3-1. Create a New Prompty File
Right-click in the Explorer tab and select "New Prompty" to create a template.
New Prompty
The created template is as follows:
---
name: ExamplePrompt
description: A prompt that uses context to ground an incoming question
authors:
  - Seth Juarez
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
sample:
  firstName: Seth
  context: >
    The Alpine Explorer Tent boasts a detachable divider for privacy,
    numerous mesh windows and adjustable vents for ventilation, and
    a waterproof design. It even has a built-in gear loft for storing
    your outdoor essentials. In short, it's a blend of privacy, comfort,
    and convenience, making it your second home in the heart of nature!
  question: What can you tell me about your tents?
---
system:
You are an AI assistant who helps people find information. As the assistant,
you answer questions briefly, succinctly, and in a personable manner using
markdown and even add some personal flair with appropriate emojis.
# Customer
You are helping {{firstName}} to find answers to their questions.
Use their name to address them in your responses.
# Context
Use the following context to provide a more personalized response to {{firstName}}:
{{context}}
user:
{{question}}
In the area enclosed by ---, specify the parameters. Below this section, add the main content of the prompt. You can define roles using system: or user:.
Basic Parameter Overview
Parameter | Description
---|---
name | Specifies the name of the prompt
description | Provides a description of the prompt
authors | Lists the creators of the prompt
model | Details the AI model used by the prompt
sample | If the prompt contains placeholders such as {{context}}, the content specified here is substituted during testing
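The {{...}} placeholders follow Jinja2-style template syntax. To illustrate what the sample block does during testing, here is a standalone sketch that mimics the substitution with the jinja2 library (Prompty performs this step for you; the body string below is abridged):

from jinja2 import Template

# An abridged version of the prompt body with its placeholders
body = "You are helping {{firstName}}.\nContext: {{context}}\nQuestion: {{question}}"

# Values from the sample: section are substituted at test time
sample = {
    "firstName": "Seth",
    "context": "The Alpine Explorer Tent boasts a detachable divider for privacy.",
    "question": "What can you tell me about your tents?",
}
print(Template(body).render(**sample))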
3-2. Configuring API Keys and Parameters
There are several ways to set the required API keys, endpoint information, and execution parameters.
[Option 1] Specifying in the .prompty file
This involves directly adding these details to the .prompty file.
model:
  api: chat
  configuration:
    type: azure_openai
    azure_endpoint: ${env:AZURE_OPENAI_ENDPOINT}
    azure_deployment: <your-deployment>
  parameters:
    max_tokens: 3000
You can also reference environment variables, such as ${env:AZURE_OPENAI_ENDPOINT}. Note, however, that azure_openai_api_key cannot be configured this way.
azure_openai_api_key cannot be written directly in the .prompty file
[Option 2] Configuring with settings.json
Another approach is to use VS Code’s settings.json. If the settings are incomplete and you click the play button in the upper-right corner, you will be prompted to edit settings.json. You can define multiple configurations beyond the default and switch between them during testing. When type is set to azure_openai and api_key is left empty, you will be directed to authenticate with Azure Entra ID, as explained later.
{
  "prompty.modelConfigurations": [
    {
      "name": "default",
      "type": "azure_openai",
      "api_version": "2023-12-01-preview",
      "azure_endpoint": "${env:AZURE_OPENAI_ENDPOINT}",
      "azure_deployment": "",
      "api_key": "${env:AZURE_OPENAI_API_KEY}"
    },
    {
      "name": "gpt-3.5-turbo",
      "type": "openai",
      "api_key": "${env:OPENAI_API_KEY}",
      "organization": "${env:OPENAI_ORG_ID}",
      "base_url": "${env:OPENAI_BASE_URL}"
    }
  ]
}
[Option 3] Configuring with a .env file
By creating a .env file, environment variables can be read directly from it. Note that the .env file must be located in the same directory as the .prompty file you are using. This setup is especially convenient for local testing.
AZURE_OPENAI_API_KEY=YOUR_AZURE_OPENAI_API_KEY
AZURE_OPENAI_ENDPOINT=YOUR_AZURE_OPENAI_ENDPOINT
AZURE_OPENAI_API_VERSION=YOUR_AZURE_OPENAI_API_VERSION
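When running outside VS Code, the variables from the .env file must end up in the process environment for the ${env:...} references to resolve. A minimal sketch, assuming the python-dotenv package (not part of Prompty itself):

import os
from dotenv import load_dotenv  # pip install python-dotenv

load_dotenv()  # read .env from the current directory into os.environ
print(os.environ["AZURE_OPENAI_ENDPOINT"])  # now resolvable via ${env:AZURE_OPENAI_ENDPOINT}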
[Option 4] Configuring with Azure Entra ID
By signing in with an Azure Entra ID that has the appropriate permissions, you can access the API.
I haven’t tested this option yet.
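For reference, the standard keyless pattern for Azure OpenAI relies on the azure-identity package; the VS Code extension handles this for you when api_key is left empty, but on the application side it would look roughly like this sketch (untested here, matching the author's caveat; the endpoint variable and api_version value are assumptions):

import os
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Tokens come from your Azure sign-in (Azure CLI, VS Code, managed identity, ...)
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",  # assumption: use the API version you use elsewhere
)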
3-3. Running Prompts in VS Code
You can easily execute prompts by clicking the play button in the upper-right corner. The results are displayed in the OUTPUT section. To view raw data, including placeholder substitution and token usage, select "Prompty Output (Verbose)" from the dropdown in the OUTPUT panel. This option is useful for checking detailed information.
Use the Play button in the upper right to run the prompt
Results can be seen in the OUTPUT section
3-4. Other Parameters
Various parameters are introduced on the following page. Defining options like inputs and outputs, especially when using JSON mode, improves prompt visibility, so be sure to set them; a short JSON-mode sketch follows the example below.
inputs:
  firstName:
    type: str
    description: The first name of the person asking the question.
  context:
    type: str
    description: The context or description of the item or topic being discussed.
  question:
    type: str
    description: The specific question being asked.
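When JSON mode is in play, declared outputs tell the application side exactly which fields to parse from the result. A minimal sketch; the field names answer and confidence and the raw_result string are hypothetical stand-ins for an actual JSON-mode response:

import json

# Hypothetical stand-in for the string returned by a JSON-mode prompt execution
raw_result = '{"answer": "The Alpine Explorer Tent is waterproof.", "confidence": 0.9}'

# With outputs declared, these field names are documented in the .prompty file itself
parsed = json.loads(raw_result)
print(parsed["answer"], parsed["confidence"])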
3-5. Integrating with an Application
The syntax for integration may vary depending on the library used in your application. As Prompty is frequently updated, be sure to check the latest documentation regularly. Here’s an example code snippet demonstrating the use of Prompty with Prompt Flow. This allows for simple prompt execution.
from promptflow.core import Prompty, AzureOpenAIModelConfiguration

# Set up the configuration used to load the Prompty file
configuration = AzureOpenAIModelConfiguration(
    azure_deployment="gpt-4o",  # Deployment name for Azure OpenAI
    api_key="${env:AZURE_OPENAI_API_KEY}",  # Retrieve the API key from environment variables
    api_version="${env:AZURE_OPENAI_API_VERSION}",  # Retrieve the API version from environment variables
    azure_endpoint="${env:AZURE_OPENAI_ENDPOINT}",  # Retrieve the Azure endpoint from environment variables
)

# Override model parameters; here, max_tokens is overridden as an example
override_model = {"configuration": configuration, "max_tokens": 2048}

# Load the Prompty file with the overridden model settings
prompty = Prompty.load(
    source="to_your_prompty_file_path",  # Path to the .prompty file to use
    model=override_model,  # Apply the overridden model settings
)

# Execute the prompt with values for its placeholders and obtain the result
first_name = "Seth"
context = "The Alpine Explorer Tent boasts a detachable divider for privacy."
question = "What can you tell me about your tents?"
result = prompty(firstName=first_name, context=context, question=question)
print(result)
4. Summary
Prompty is a powerful tool that can significantly streamline prompt engineering tasks. In particular, its development environment integrated with VS Code allows for seamless creation, testing, implementation, and management of prompts, making it highly user-friendly. Mastering Prompty can greatly enhance both the efficiency and the quality of prompt engineering. I encourage everyone to give it a try!
Benefits of Introducing Prompty (Repost)
We Are Hiring!
At KINTO Technologies, we are seeking colleagues to help drive the adoption of generative AI in our business. We are open to casual interviews, so if you’re even slightly interested, please contact us via the link below or through a DM on X. We look forward to hearing from you! https://hrmos.co/pages/kinto-technologies/jobs/1955878275904303115 Learn more about how we work with generative AI here. https://blog.kinto-technologies.com/posts/2024-01-26-GenerativeAIDevelopProject/
Thank you for reading this far.