LLMs Are a New Kind of Software
This article presents a comparative view of traditional software vs. LLMs (Large Language Models) to highlight how LLMs represent a fundamental shift in the software paradigm.
Comparison Table
| Traditional Software | LLMs (Large Language Models) |
|---|---|
| Deterministic | Non-deterministic |
| Fast | Slow |
| Cheap | Expensive |
| Rigid | Flexible |
| If → then | Reasoning |
Detailed Explanation
1. Deterministic vs Non-deterministic
- Traditional Software follows a fixed, rule-based approach. Given the same input, it always produces the same output.
- LLMs are probabilistic. Their outputs can vary for the same input because they are based on learned patterns from data rather than hard-coded rules.
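The contrast can be simulated in a few lines of Python. Here a rule-based function always maps the same input to the same output, while a toy stand-in for LLM token sampling draws from a probability distribution (the labels and weights below are invented for illustration, not real model probabilities):

```python
import random

def rule_based(text):
    # Deterministic: the same input always maps to the same output.
    return "Greeting" if "hi" in text.lower() else "Unknown"

def sampled(text):
    # Toy stand-in for LLM decoding: the output is drawn from a
    # probability distribution, so repeated calls may differ.
    labels = ["Greeting", "Order", "Complaint"]
    weights = [0.7, 0.2, 0.1]  # imagined model probabilities
    return random.choices(labels, weights=weights)[0]

print(rule_based("Hi there"))                     # always "Greeting"
print({sampled("Hi there") for _ in range(20)})   # may contain several labels
```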
2. Fast vs Slow
- Traditional programs execute instructions quickly because they’re optimized and compiled to run directly on machines.
- LLMs involve complex computations (e.g., matrix multiplications in neural networks), often requiring GPUs or TPUs, which can make them slower, especially for larger prompts or models.
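A rough sense of the gap can be had with only the standard library: compare a dictionary lookup (typical of traditional logic) against a single small pure-Python matrix multiply. Real LLM inference chains thousands of far larger multiplications per generated token, so this sketch understates the difference:

```python
import time

def timed(fn, *args):
    # Measure one call's wall-clock duration in seconds.
    start = time.perf_counter()
    fn(*args)
    return time.perf_counter() - start

# Traditional lookup: effectively constant time.
table = {"hi": "Greeting", "buy": "Order"}
lookup_time = timed(table.get, "hi")

# One 100x100 matrix multiply = a million multiply-adds in pure Python.
n = 100
a = [[1.0] * n for _ in range(n)]
b = [[1.0] * n for _ in range(n)]

def matmul(x, y):
    return [[sum(x[i][k] * y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

matmul_time = timed(matmul, a, b)
print(f"lookup: {lookup_time:.6f}s, 100x100 matmul: {matmul_time:.6f}s")
```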
3. Cheap vs Expensive
- Once built, traditional software is inexpensive to run at scale.
- LLMs are resource-intensive and require significant compute power, making them expensive to run, especially at high volume or with low latency requirements.
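A back-of-envelope calculation makes the cost gap concrete. All prices below are illustrative assumptions, not current vendor rates:

```python
# Back-of-envelope daily cost comparison (all prices are assumed, not quoted).
requests_per_day = 1_000_000

# Traditional service: cost dominated by a few commodity servers.
servers = 2
server_cost_per_day = 5.00  # assumed cloud VM price per server
traditional_daily = servers * server_cost_per_day

# LLM API: cost scales with tokens processed, so it grows with volume.
tokens_per_request = 500
price_per_1k_tokens = 0.002  # assumed per-token rate
llm_daily = requests_per_day * tokens_per_request / 1000 * price_per_1k_tokens

print(f"traditional: ${traditional_daily:.2f}/day")
print(f"llm api:     ${llm_daily:.2f}/day")
```

Under these assumptions the token-metered service costs two orders of magnitude more per day, and unlike the fixed server bill, the LLM bill keeps climbing with traffic.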
4. Rigid vs Flexible
- Traditional software is inflexible: it needs explicit updates for new logic or edge cases.
- LLMs are adaptable and can respond to a broad range of tasks (e.g., translation, coding, summarization) without being explicitly programmed for each.
5. If → Then vs Reasoning
- Traditional software logic is hard-coded using conditional statements.
- LLMs can “reason” based on their training — they generalize from massive data and can infer context or patterns to provide intelligent outputs, mimicking reasoning.
Conclusion
This comparison emphasizes the paradigm shift from rule-based programming to data-driven intelligence. LLMs open new possibilities where software can handle fuzzy, open-ended, and unstructured tasks — something traditional software struggles with — making them revolutionary in fields like AI assistance, content generation, and natural language interaction.
Here’s a practical example demonstrating both the Traditional Software (Deterministic) and LLM-based (Non-Deterministic) approaches to a simple task: intent classification from user input.
🧠 Use Case: Detecting User Intent (e.g., Greeting, Order, Complaint)
1. Traditional Software Approach (Deterministic)
```python
def classify_intent(text):
    text = text.lower()
    if "hello" in text or "hi" in text:
        return "Greeting"
    elif "order" in text or "buy" in text:
        return "Order"
    elif "not working" in text or "problem" in text:
        return "Complaint"
    else:
        return "Unknown"

# Test
print(classify_intent("Hi there"))                  # Greeting
print(classify_intent("I want to buy a laptop"))    # Order
print(classify_intent("My device is not working"))  # Complaint
```
🔹 Deterministic: Same input = same output
🔹 Rigid: Needs an explicit if-else for every condition
🔹 Fast and Cheap
2. LLM-based Approach (Non-Deterministic)
```python
import openai  # legacy openai<1.0 SDK interface

openai.api_key = "your-api-key"

def classify_intent_with_llm(text):
    prompt = f"What is the user's intent in the following message?\nMessage: \"{text}\"\nIntent:"
    response = openai.Completion.create(
        engine="gpt-3.5-turbo-instruct",  # or any available completion model
        prompt=prompt,
        max_tokens=10,
        temperature=0.3,
    )
    return response.choices[0].text.strip()

# Test
print(classify_intent_with_llm("Hi there"))                  # Greeting
print(classify_intent_with_llm("I want to buy a laptop"))    # Order
print(classify_intent_with_llm("My device is not working"))  # Complaint
```
🔹 Non-Deterministic: May vary slightly each time
🔹 Flexible: Can handle unseen or ambiguous phrasing
🔹 Expensive & Slower
The future is not about LLMs replacing traditional software, but about merging both into a new paradigm, often referred to as:
🌐 Agentic Software Systems / Cognitive Architectures
🔁 The Future Is Hybrid
| Traditional Software | 🤝 LLMs / Agentic Models |
|---|---|
| Deterministic logic | + Reasoning & flexibility |
| APIs, databases | + Language, code, tools |
| Speed, control | + Adaptivity, learning |
🧠 Key Concepts Emerging:
- Agentic Workflows: LLMs act as agents that reason, plan, and call APIs/tools (e.g., ReAct, AutoGPT, LangGraph, CrewAI).
- Tool-Using LLMs: LLMs delegate precise computation to traditional tools (e.g., calculators, DBs, API calls).
- Event-driven Agents: Instead of "if → then", agents can "observe → think → act".
- Prompt Engineering + Function Calling: Structured prompts plus calls to specific functions bring control and predictability to LLMs.
- Orchestration Frameworks: LangChain, Semantic Kernel, AutoGen, and CrewAI are emerging to orchestrate LLMs + code + tools into reliable systems.
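The "observe → think → act" pattern from the list above can be sketched as a minimal loop. The model call is stubbed out with a hard-coded routing function, and the tool names and prompt logic are invented for illustration:

```python
# Minimal "observe -> think -> act" agent loop. `fake_llm` stands in for a
# real model call; the tool registry and routing are illustrative only.

def calculator(expression):
    # Precise arithmetic is delegated to traditional code.
    return eval(expression, {"__builtins__": {}})

TOOLS = {"calculator": calculator}

def fake_llm(observation):
    # A real agent would prompt an LLM to choose a tool and its arguments.
    if any(ch.isdigit() for ch in observation):
        return {"tool": "calculator", "args": observation}
    return {"tool": None, "answer": "I need a question with numbers."}

def agent_step(observation):
    thought = fake_llm(observation)                        # think
    if thought["tool"] in TOOLS:
        result = TOOLS[thought["tool"]](thought["args"])   # act
        return f"The answer is {result}"
    return thought["answer"]

print(agent_step("12 * 7"))  # -> The answer is 84
```

The division of labor mirrors the hybrid idea: the (stubbed) model decides *what* to do, while deterministic code does the actual computation.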
📌 Future Software Engineering Stack (Agentic)
[ UI / App ]
↓
[ Event → LLM Agent ]
↓
[ Plans → Tools / APIs ]
↓
[ Executes → Validates → Stores ]
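The stack above can be sketched end-to-end in a few lines. Here the agent call is a stub that returns a structured plan, and the action names and validation rules are assumptions for illustration:

```python
import json

# Sketch of the stack: an event flows to a (stubbed) LLM agent that emits a
# structured plan; deterministic code validates, executes, and stores it.

def llm_plan(event):
    # Stand-in for a function-calling LLM response.
    return json.dumps({"action": "create_order", "item": event["item"], "qty": 1})

ALLOWED_ACTIONS = {"create_order"}  # assumed whitelist
STORE = []                          # stand-in for a database

def handle(event):
    plan = json.loads(llm_plan(event))         # Event -> LLM Agent -> Plan
    if plan["action"] not in ALLOWED_ACTIONS:  # Validates
        raise ValueError(f"unknown action: {plan['action']}")
    record = {"action": plan["action"], "item": plan["item"], "qty": plan["qty"]}
    STORE.append(record)                       # Executes -> Stores
    return record

print(handle({"item": "laptop"}))
```

Validating the plan against a whitelist before execution is what restores predictability: the non-deterministic layer proposes, the deterministic layer disposes.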
🧭 Think of It As:
- Traditional code = Muscle
- LLMs/Agents = Brain
- Together = Intelligent System
✅ Summary
| Feature | Traditional Software | LLM-based Software |
|---|---|---|
| Rule-based | ✔ | ❌ |
| Learns from data | ❌ | ✔ |
| Same output always | ✔ | ❌ (depends on temperature, context) |
| Handles fuzzy input | ❌ | ✔ |
The future of software is agentic, hybrid, modular, and tool-aware. Developers will write code and design workflows for reasoning agents that combine the best of both worlds.