Prompt Engineering & Orchestration

Dhiraj Patra
Nov 25, 2023


Prompt engineering has become a buzzword, especially in software development.

Today we are going to learn by building a very simple application: we will assemble a set of prompts into a working prototype service, using orchestration tools to link multiple AI calls together.

Python code below

import json
import requests

# Define the AI endpoints
ai_endpoints = {
    "text_generation": "https://api.openai.com/v1/engines/davinci/completions",
    "image_generation": "https://api.openai.com/v1/images/generations"
}

# Define the orchestration tool
class Orchestrator:
    def __init__(self):
        self.ai_endpoints = ai_endpoints

    def call_ai(self, endpoint, prompt):
        headers = {
            "Authorization": "Bearer YOUR_API_KEY",
            "Content-Type": "application/json"
        }
        data = json.dumps({"prompt": prompt})
        response = requests.post(endpoint, headers=headers, data=data)
        return response.json()

    def assemble_service(self, prompts):
        service = {}
        for name, prompt in prompts.items():
            # Look up the endpoint for this prompt name and call it
            response = self.call_ai(self.ai_endpoints[name], prompt)
            # NOTE: real OpenAI responses nest results under "choices" (text)
            # or "data" (images); "result" is kept here for simplicity.
            service[name] = response["result"]
        return service

# Create an orchestrator object
orchestrator = Orchestrator()

# Define the prompts
prompts = {
    "text_generation": "Write a poem about a cat",
    "image_generation": "Generate an image of a cat"
}

# Assemble the service
service = orchestrator.assemble_service(prompts)

# Print the service output
print(service)

This code will call the OpenAI Text Completion and Image Generation endpoints to generate a poem and an image of a cat. The results of the AI calls are then assembled into a single service output.

This is just a simple example, of course. More complex services can be assembled by linking multiple AI calls together in a sequence or pipeline. For example, you could use the text generation output to generate an image caption, or you could use the image generation output to train a new AI model.
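For instance, here is a minimal pipeline sketch that chains the two calls. It reuses the Orchestrator class and ai_endpoints dictionary from above and keeps the same simplified "result" key, so it is an illustration rather than production code:

orchestrator = Orchestrator()

# Step 1: generate a poem with the text endpoint
poem = orchestrator.call_ai(
    ai_endpoints["text_generation"], "Write a poem about a cat"
)["result"]

# Step 2: feed the poem into the image prompt, linking the two calls
image = orchestrator.call_ai(
    ai_endpoints["image_generation"],
    f"Generate an image illustrating this poem: {poem}"
)["result"]

print({"poem": poem, "image": image})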

Orchestration tools simplify the process of linking multiple AI calls together. They typically let you define workflows in Python and provide a web interface for monitoring and managing runs. Some popular orchestration tools include:

Prefect

Airflow

Kubeflow Pipelines

These tools can help you to automate the execution of your workflows and manage the dependencies between different AI calls.
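As a rough illustration, here is what the same cat pipeline could look like as a Prefect flow (assuming Prefect 2.x is installed; the task bodies are stubs standing in for the call_ai requests above):

from prefect import flow, task

@task
def generate_poem(prompt: str) -> str:
    # Stub: in practice this would call the text generation endpoint
    return f"(poem for: {prompt})"

@task
def generate_image(caption: str) -> str:
    # Stub: in practice this would call the image generation endpoint
    return f"(image for: {caption})"

@flow
def cat_content_pipeline():
    poem = generate_poem("Write a poem about a cat")
    image = generate_image(f"Illustrate this poem: {poem}")
    return {"poem": poem, "image": image}

if __name__ == "__main__":
    print(cat_content_pipeline())

Prefect then takes care of scheduling, retries, and dependency tracking between the tasks, instead of you wiring the calls together by hand.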

Now let's create a small, more concrete implementation as well.

Python code below

import json
import requests

class DCChargeManagement:
    def __init__(self, station_id):
        self.station_id = station_id

        # Define the AI endpoints (both prompts here are text prompts,
        # so both point at the text completion endpoint)
        self.ai_endpoints = {
            "predict_demand": "https://api.openai.com/v1/engines/davinci/completions",
            "optimize_charging": "https://api.openai.com/v1/engines/davinci/completions"
        }

        # Reuse the orchestration tool defined earlier
        self.orchestrator = Orchestrator()

    def predict_demand(self):
        prompt = f"Predict the demand for DC charging at station {self.station_id} in the next hour."
        response = self.orchestrator.call_ai(self.ai_endpoints["predict_demand"], prompt)
        return response["result"]

    def optimize_charging(self, demand_prediction):
        prompt = (
            f"Optimize the charging schedule for station {self.station_id} "
            f"based on the following demand prediction: {demand_prediction}"
        )
        response = self.orchestrator.call_ai(self.ai_endpoints["optimize_charging"], prompt)
        return response["result"]

    def manage_charging(self):
        demand_prediction = self.predict_demand()
        charging_schedule = self.optimize_charging(demand_prediction)
        # Send the charging schedule to the charging station controller
        # …

To use this class, you would first create an instance, passing in the station ID as an argument. You would then call the predict_demand() method to get a prediction of DC charging demand at the station over the next hour, and pass that prediction to optimize_charging() to get an optimized charging schedule. Finally, you would send the charging schedule to the charging station controller.
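A short usage sketch (the station ID below is made up for illustration):

# Hypothetical station ID, for illustration only
station = DCChargeManagement(station_id="DC-STATION-42")

# Step by step: predict demand, then optimize the schedule
demand = station.predict_demand()
schedule = station.optimize_charging(demand)

# Or run the whole sequence in one call
station.manage_charging()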

This is just a basic example, of course. You could extend the class to include additional functionality, such as:

Support for multiple AI endpoints

Support for different orchestration tools

Support for multiple charging stations (see the sketch after this list)

Integration with other systems, such as a billing system or a customer relationship management (CRM) system
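For the multi-station idea, one possible sketch (the ChargingFleet wrapper below is hypothetical, not part of the code above):

class ChargingFleet:
    """Hypothetical wrapper that manages several charging stations."""

    def __init__(self, station_ids):
        self.stations = [DCChargeManagement(sid) for sid in station_ids]

    def manage_all(self):
        # Run the predict -> optimize -> dispatch sequence for each station
        for station in self.stations:
            station.manage_charging()

fleet = ChargingFleet(["DC-STATION-1", "DC-STATION-2"])
fleet.manage_all()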

You could also use the class to develop a more sophisticated application, such as a mobile app that allows users to manage their DC charging sessions.
