Near Realtime Application with Protobuf and Kafka

Disclaimer: This is a hypothetical demo application to explain certain technologies. Not related to any real world scenario.

Think Different - Dhiraj Patra
13 min read · Feb 24, 2024

The Poultry Industry’s Quest for Efficiency: Sexing Eggs in Real-Time with AI

The poultry industry faces constant pressure to optimize production and minimize waste. One key challenge is determining the sex of embryos early in the incubation process. Traditionally, this involved manual candling, a labor-intensive and error-prone technique. But what if there was a faster, more accurate way?

Enter the exciting world of near real-time sex prediction using AI and MRI scans. This innovative technology promises to revolutionize the industry by:

  • Boosting Efficiency: Imagine processing thousands of eggs per second, automatically identifying female embryos for optimal resource allocation. No more manual labor, no more missed opportunities.
  • Improving Accuracy: AI models trained on vast datasets can achieve far greater accuracy than human candlers, leading to less waste and more efficient hatchery operations.
  • Real-Time Insights: Get instant feedback on embryo sex, enabling quick decision-making and batch-level analysis for informed management strategies.
  • Data-Driven Optimization: Track trends and insights over time to optimize hatching conditions and maximize yield, leading to long-term improvements.

This article dives deep into the intricate details of this groundbreaking technology, exploring the:

  • Technical architecture: From edge scanners to cloud-based processing, understand the intricate network that makes real-time sex prediction possible.
  • Deep learning models: Discover the powerful algorithms trained to identify sex with high accuracy, even in complex egg MRI scans.
  • Data security and privacy: Learn how sensitive data is protected throughout the process, ensuring compliance and ethical use.
  • The future of the poultry industry: Explore the transformative potential of this technology and its impact on efficiency, sustainability, and animal welfare.

First, we need to gather more details before going deeper into a solution.

Specific Requirements and Constraints:

  • MRI Modality: What type of MRI scanner will be used (e.g., T1-weighted, T2-weighted, functional MRI)?
  • Data Volume and Frequency: How much data will be generated per scan, and how often will scans be performed?
  • Latency Requirements: What is the acceptable delay from image acquisition to analysis results?
  • Security and Compliance: Are there any HIPAA or other regulatory requirements to consider?

Performance and Scalability:

  • Expected Number of Concurrent Users: How many users will be accessing the application simultaneously?
  • Resource Constraints: What are the available computational resources (CPU, GPU, memory, network bandwidth) in your cloud environment?

Analytical Purposes:

  • Specific Tasks: What are the intended downstream applications or analyses for the processed data (e.g., diagnosis, segmentation, registration)?
  • Visualization Needs: Do you require real-time or interactive visualization of results?

Additional Considerations:

  • Deployment Environment: Where will the application be deployed (public cloud, private cloud, on-premises)?
  • Training Data Availability: Do you have a labeled dataset for training the deep learning model?
  • Monitoring and Logging: How will you monitor application performance and troubleshoot issues?

Once you have a clearer understanding of these requirements, you can dive into the design. Here’s a general outline of the end-to-end application solution, incorporating the latest technologies and addressing potential issues:


  1. MRI Acquisition:
  • Use the DICOM (Digital Imaging and Communications in Medicine) standard for data acquisition and transmission.
  • Consider pre-processing on the scanner, if feasible, to reduce data transmission size.
  2. Data Ingestion and Preprocessing:
  • Use a lightweight, scalable message queue (e.g., Apache Kafka, RabbitMQ) to buffer incoming MRI data.
  • Employ a microservice for initial data validation and format conversion (if necessary).
  • Implement a preprocessing microservice for tasks like skull stripping, normalization, and intensity standardization.
  3. Near Real-Time Deep Learning Inference:
  • Choose a containerized deep learning serving framework (e.g., TensorFlow Serving, TorchServe) for efficient deployment and scaling.
  • Consider cloud-based GPU instances for faster inference, especially for large models.
  • Implement a microservice for model loading, inference, and result post-processing.
  • Explore edge computing options (e.g., NVIDIA Triton Inference Server) if latency is critical.
  4. Data Storage and Retrieval:
  • Use a high-performance database (e.g., Apache Cassandra, Amazon DynamoDB) for storing processed MRI data and analysis results.
  • Consider object storage (e.g., Amazon S3, Azure Blob Storage) for archiving raw MRI data.
  • Implement a microservice for data access, retrieval, and query-based filtering.
  5. Analytics and Visualization:
  • Integrate with existing analytical tools or create a custom microservice for data visualization (e.g., using Plotly, Bokeh).
  • Offer interactive visualizations or dashboards for exploring and interpreting results.
  6. Monitoring and Logging:
  • Implement centralized logging and monitoring for all microservices using tools like Prometheus and Grafana.
  • Track key metrics (e.g., latency, resource utilization, errors) for proactive issue detection and troubleshooting.

Technologies and Best Practices:

  • FastAPI: Use FastAPI for building RESTful APIs for microservices due to its ease of use, performance, and integration with async/await for concurrency.
  • Protobuf: Employ Protobuf for data serialization and RPC communication between microservices because of its efficiency and platform-neutrality.
  • Cloud-Based Deployment: Utilize cloud services like AWS, Azure, or GCP for scalability, flexibility, and managed infrastructure.
  • Security: Implement robust security measures like authentication, authorization, and encryption to protect sensitive patient data.
  • Containerization: Use Docker containers for packaging and deploying microservices to ensure consistency and portability.
  • API Gateway: Consider an API gateway (e.g., Kong, Tyk) to manage API traffic, security, and versioning.
  • Continuous Integration and Delivery (CI/CD): Automate build, test, and deployment processes for faster iteration and updates.

Remember that this is a high-level overview, and the specific implementation will depend on your requirements and constraints.

Based on my hypothetical requirements, I have prepared the following design, architecture, and solution highlights.


Data Acquisition:

  1. Edge Scanner:
  • Use a lightweight, high-throughput framework (e.g., OpenCV, scikit-image) on the edge scanner for basic pre-processing (e.g., resizing, normalization) to reduce data transmission size.
  • Employ an edge-based message queue (e.g., RabbitMQ, Apache Pulsar) for buffering MRI data efficiently.
  • Implement edge security measures (e.g., authentication, encryption) to protect data before sending.
  2. Data Ingestion and Preprocessing:
  • Use Kafka as a high-throughput, scalable message queue to buffer incoming MRI data from multiple edge scanners.
  • Implement a microservice for initial data validation, format conversion (if necessary), and security checks.
  • Run a preprocessing microservice for essential tasks like skull stripping, normalization, and intensity standardization.
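To make the edge pre-processing step concrete, here is a minimal, illustrative sketch of 2x downsampling and min-max normalization. It uses plain Python lists as a stand-in; a real edge pipeline would operate on NumPy or OpenCV image arrays, and the function names here are hypothetical.

```python
# Illustrative edge pre-processing: 2x downsampling plus min-max
# normalization on a 2D list-of-lists "image" (a stand-in for a real
# image array).

def downsample_2x(image):
    """Keep every second row and column of a 2D image."""
    return [row[::2] for row in image[::2]]

def normalize(image):
    """Scale pixel intensities into [0, 1] via min-max normalization."""
    flat = [p for row in image for p in row]
    lo, hi = min(flat), max(flat)
    span = (hi - lo) or 1  # avoid division by zero on constant images
    return [[(p - lo) / span for p in row] for row in image]

raw = [
    [0, 50, 100, 150],
    [50, 100, 150, 200],
    [100, 150, 200, 250],
    [150, 200, 250, 300],
]
pre = normalize(downsample_2x(raw))  # smaller payload, standardized range
```

Shrinking and standardizing scans at the edge like this is what keeps the per-message payload small enough for the queue to sustain high throughput.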

Near Real-Time Deep Learning Inference:

  1. Model Selection:
  • Carefully choose a suitable deep learning model architecture and training dataset based on your specific requirements (e.g., accuracy, speed, resource constraints). Consider models like U-Net, DeepLab, or custom architectures tailored for egg image segmentation.
  2. Model Training:
  • Train and validate the model on a representative dataset of labeled egg MRI scans with embryo sex annotations. Ensure high-quality data and address potential biases.
  3. Distributed Inference:
  • Use TensorFlow Serving or TorchServe for efficient model deployment and distributed inference across multiple GPUs or TPUs in a hybrid cloud environment.
  • Explore edge inference options (e.g., NVIDIA Triton Inference Server) for latency-critical tasks if feasible.
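As an illustration of the serving side, TensorFlow Serving exposes a REST predict endpoint of the form `/v1/models/<name>:predict` that accepts an `{"instances": [...]}` payload and returns `{"predictions": [...]}`. The sketch below builds such a request and maps a hypothetical response to sex labels; the host, model name, feature vectors, and output shape are assumptions for this demo.

```python
import json

# Sketch of a client for TensorFlow Serving's REST predict API. The
# model name and host are hypothetical; the request/response shape
# ("instances" in, "predictions" out) follows TF Serving's REST API.

def build_predict_request(host: str, model: str, instances: list):
    """Return the URL and JSON body for a TF Serving predict call."""
    url = f"http://{host}:8501/v1/models/{model}:predict"
    body = json.dumps({"instances": instances})
    return url, body

def parse_sex_prediction(response_json: dict, threshold: float = 0.5):
    """Map each class-probability output to a 'female'/'male' label."""
    return [
        "female" if p[0] >= threshold else "male"
        for p in response_json["predictions"]
    ]

url, body = build_predict_request("inference-host", "egg_sex_model",
                                  [[0.1, 0.2], [0.9, 0.4]])
labels = parse_sex_prediction({"predictions": [[0.91], [0.12]]})
```

In practice you would POST `body` to `url` with an HTTP client and feed the parsed labels downstream; keeping request construction and response parsing as pure functions makes them easy to unit-test without a live model server.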

Data Storage and Retrieval:

  1. NoSQL Database:
  • Use a fast and scalable NoSQL database like MongoDB or Cassandra for storing pre-processed MRI data and analysis results.
  • Consider partitioning and indexing to optimize query performance.
  2. Object Storage:
  • Archive raw MRI data in an object storage service like Amazon S3 or Azure Blob Storage for long-term retention and potential future analysis.
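On the archival side, a deliberate object key scheme keeps lifecycle rules and batch queries cheap, because they can target narrow prefixes. Here is a hypothetical key layout; the bucket prefix, field names, and `.dcm` extension are illustrative.

```python
from datetime import datetime, timezone

# Hypothetical object-storage key scheme for archived raw scans,
# partitioned by date and scanner so lifecycle policies and batch
# reads can target a narrow prefix instead of scanning the bucket.

def archive_key(scanner_id: str, egg_id: str, ts: datetime) -> str:
    return (
        f"raw-scans/{ts:%Y/%m/%d}/"
        f"scanner={scanner_id}/egg={egg_id}.dcm"
    )

key = archive_key("edge-07", "egg-000123",
                  datetime(2024, 2, 24, tzinfo=timezone.utc))
```

A key like this would then be used with the storage SDK of your cloud provider (e.g., an S3 `put_object` call) when archiving each raw scan.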

Analytics and Visualization:

  1. Interactive Visualization:
  • Integrate with a real-time visualization library like Plotly.js or Bokeh for interactive visualization of embryo sex predictions and batch analysis.
  • Allow users to filter, zoom, and explore results for informed decision-making.
  2. Dashboards:
  • Create dashboards to display key metrics, trends, and batch-level summaries for efficient monitoring and decision support.
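As a sketch of the batch-level summaries such a dashboard would render, the function below aggregates per-batch counts and female ratios from prediction records. The record fields (`batch_id`, `sex`) are hypothetical names for this demo.

```python
from collections import Counter

# Sketch of the batch-level aggregation a dashboard could render.
# Record fields ("batch_id", "sex") are hypothetical.

def batch_summary(records):
    """Per-batch totals and female ratio from prediction records."""
    per_batch = {}
    for rec in records:
        counts = per_batch.setdefault(rec["batch_id"], Counter())
        counts[rec["sex"]] += 1
    return {
        batch: {
            "total": sum(c.values()),
            "female_ratio": c["female"] / sum(c.values()),
        }
        for batch, c in per_batch.items()
    }

demo = [
    {"batch_id": "B1", "sex": "female"},
    {"batch_id": "B1", "sex": "male"},
    {"batch_id": "B1", "sex": "female"},
    {"batch_id": "B2", "sex": "male"},
]
stats = batch_summary(demo)
```

A dashboard would typically recompute this over a sliding window so operators can spot a drifting sex ratio per scanner or per batch quickly.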

Monitoring and Logging:

  1. Centralized Logging and Metrics:
  • Use Prometheus and Grafana to collect and visualize metrics, and a centralized logging system (e.g., the ELK stack) to aggregate logs from all components (edge scanners, microservices, inference servers).
  • Track key metrics (e.g., latency, throughput, errors) for proactive issue detection and troubleshooting.
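To illustrate one latency metric worth tracking, here is a simple nearest-rank percentile over recent samples. In practice you would expose this as a Prometheus histogram or summary rather than compute it by hand; the sample values are made up.

```python
import math

# Nearest-rank percentile over a window of latency samples. A real
# service would export this via a Prometheus histogram/summary; this
# pure-Python version just shows the metric itself.

def percentile(samples, pct):
    """Nearest-rank percentile of a list of latency samples (ms)."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(pct / 100 * len(ordered)))
    return ordered[rank - 1]

latencies_ms = [12, 15, 11, 90, 14, 13, 16, 15, 14, 200]
p95 = percentile(latencies_ms, 95)
```

Tail percentiles like p95/p99 surface the slow outliers (here, the 200 ms sample) that an average would hide, which is exactly what matters for a near real-time latency budget.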

Hybrid Cloud Deployment:

  1. Edge Scanners:
  • Deploy lightweight pre-processing and data buffering services on edge scanners to minimize data transmission and latency.
  2. Cloud Infrastructure:
  • Use a combination of public cloud services (e.g., AWS, Azure, GCP) and private cloud infrastructure for scalability, flexibility, and cost optimization.
  • Consider managed services for databases, message queues, and other infrastructure components.

Additional Considerations:

  • Data Security:
  • Implement robust security measures throughout the pipeline, including encryption at rest and in transit, secure authentication and authorization mechanisms, and vulnerability management practices.
  • Scalability and Performance:
  • Continuously monitor and optimize your system for scalability and performance, especially as data volume and user demand increase. Consider auto-scaling mechanisms and load balancing.
  • Monitoring and Logging:
  • Regularly review and analyze logs to identify and address potential issues proactively.
  • Model Maintenance:
  • As your dataset grows or requirements evolve, retrain your deep learning model periodically to maintain accuracy and performance.
  • Ethical Considerations:
  • Ensure responsible use of the technology and address potential ethical concerns related to data privacy, bias, and decision-making transparency.

By carefully considering these factors and tailoring the solution to your specific needs, you can build a robust, scalable, and secure end-to-end application for near real-time sex prediction in egg MRI scans.

Alternatively, here is a near-alternative line of thought: a high-level design you can take a dive into.

Architecture Overview:

1. Frontend Interface:

- Users interact through a web interface or mobile app.

- FastAPI or a lightweight frontend framework like React.js for the web interface.

2. Load Balancer and API Gateway:

- Utilize services like AWS Elastic Load Balancing or NGINX for load balancing and routing.

- API Gateway (e.g., AWS API Gateway) to manage API requests.

3. Microservices:

- Image Processing Microservice:

- Receives MRI images from the frontend/customer via the edge devices.

- Utilizes deep learning models for image processing.

- Dockerize the microservice for easy deployment and scalability.

- Communicates asynchronously with other microservices using message brokers like Kafka or AWS SQS.

- Data Processing Microservice:

- Receives processed data from the Image Processing microservice.

- Utilizes Protocol Buffers for efficient data serialization.

- Performs any necessary data transformations or enrichments.

- Storage Microservice:

- Handles storing processed data.

- Utilize cloud-native databases like Amazon Aurora or DynamoDB for scalability and reliability.

- Ensures data integrity and security.

4. Deep Learning Model Deployment:

- Use frameworks like TensorFlow Serving or TorchServe for serving deep learning models.

- Deployed as a separate microservice or within the Image Processing microservice.

- Containerized using Docker for easy management and scalability.

5. Cloud Infrastructure:

- Deploy microservices on a cloud provider like AWS, Azure, or Google Cloud Platform (GCP).

- Utilize managed Kubernetes services like Amazon EKS or Google Kubernetes Engine (GKE) for container orchestration.

- Leverage serverless technologies for auto-scaling and cost optimization.

6. Monitoring and Logging:

- Implement monitoring using tools like Prometheus and Grafana.

- Centralized logging with ELK stack (Elasticsearch, Logstash, Kibana) or cloud-native solutions like AWS CloudWatch Logs.

7. Security:

- Implement OAuth2 or JWT for authentication and authorization.

- Utilize HTTPS for secure communication.

- Implement encryption at rest and in transit using services like AWS KMS or Azure Key Vault.

8. Analytics and Reporting:

- Utilize data warehouses like Amazon Redshift or Google BigQuery for storing analytical data.

- Implement batch processing or stream processing using tools like Apache Spark or AWS Glue for further analytics.

- Utilize visualization tools like Tableau or Power BI for reporting and insights.

This architecture leverages the latest technologies and best practices for near real-time processing of MRI images, ensuring scalability, reliability, and security. We can also combine it with a data pipeline with federated data ownership.

Incorporating a data pipeline with federated data ownership into the architecture can enhance data management and governance. Here’s how you can integrate it:

Data Pipeline with Federated Data Ownership:

1. Data Ingestion:

- Implement data ingestion from edge scanners into the data pipeline.

- Use Apache NiFi or AWS Data Pipeline for orchestrating data ingestion tasks.

- Ensure secure transfer of data from edge devices to the pipeline.

2. Data Processing and Transformation:

- Utilize Apache Spark or AWS Glue for data processing and transformation.

- Apply necessary transformations on the incoming data to prepare it for further processing.

- Ensure compatibility with federated data ownership model, where data ownership is distributed among multiple parties.

3. Data Governance and Ownership:

- Implement a federated data ownership model where different stakeholders have control over their respective data.

- Define clear data ownership policies and access controls to ensure compliance and security.

- Utilize tools like Apache Ranger or AWS IAM for managing data access and permissions.

4. Data Storage:

- Store processed data in a federated manner, where each stakeholder has ownership over their portion of the data.

- Utilize cloud-native storage solutions like Amazon S3 or Google Cloud Storage for scalable and cost-effective storage.

- Ensure data segregation and encryption to maintain data security and privacy.

5. Data Analysis and Visualization:

- Use tools like Apache Zeppelin or Jupyter Notebook for data analysis and exploration.

- Implement visualizations using libraries like Matplotlib or Plotly.

- Ensure that visualizations adhere to data ownership and privacy regulations.

6. Data Sharing and Collaboration:

- Facilitate data sharing and collaboration among stakeholders while maintaining data ownership.

- Implement secure data sharing mechanisms such as secure data exchange platforms or encrypted data sharing protocols.

- Ensure compliance with data privacy regulations and agreements between stakeholders.

7. Monitoring and Auditing:

- Implement monitoring and auditing mechanisms to track data usage and access.

- Utilize logging and monitoring tools like ELK stack or AWS CloudWatch for real-time monitoring and analysis.

- Ensure transparency and accountability in data handling and processing.

By incorporating a data pipeline with federated data ownership into the architecture, you can ensure that data is managed securely and in compliance with regulations while enabling collaboration and data-driven decision-making across multiple stakeholders.
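As a toy illustration of the federated-ownership check described above, the sketch below grants owners full access to their batches and lets other stakeholders act only under an explicit sharing grant. In a real deployment this policy would live in a tool like Apache Ranger or cloud IAM; the in-memory structures and stakeholder names here are hypothetical.

```python
# Hypothetical federated-ownership access check. Owners get full
# access to their own batches; other stakeholders need an explicit
# (stakeholder, batch) -> actions grant.

OWNERSHIP = {
    "hatchery-a": {"batch-001", "batch-002"},
    "hatchery-b": {"batch-003"},
}

GRANTS = {  # explicit cross-stakeholder sharing agreements
    ("hatchery-b", "batch-001"): {"read"},
}

def can_access(stakeholder: str, batch: str, action: str) -> bool:
    """Owners get full access; others need an explicit grant."""
    if batch in OWNERSHIP.get(stakeholder, set()):
        return True
    return action in GRANTS.get((stakeholder, batch), set())
```

Keeping grants explicit per (stakeholder, batch) pair mirrors the governance goal: sharing is possible, but only through an auditable agreement rather than by default.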

Now I am going to deep dive into a POC application for this, with a detailed architectural view.

Architecture Overview:

1. Edge Scanner:

- Utilize high-speed imaging devices for scanning eggs.

- Implement edge computing devices for initial processing if necessary.

2. Edge Processing:

- If required, deploy lightweight processing on edge devices to preprocess data before sending it to the cloud.

3. Message Queue (Kafka or RabbitMQ):

- Introduce Kafka or RabbitMQ to handle the high throughput of incoming data (1000 eggs/scans per second).

- Ensure reliable messaging and decoupling of components.

4. FastAPI Backend:

- Implement a FastAPI backend to handle REST API requests from users.

- Deploy multiple instances to handle simultaneous requests (100+).

5. Microservices:

- Image Processing Microservice:

- Receives egg scan data from the message queue.

- Utilizes deep learning models to determine the sex of the embryo.

- Utilize Docker for containerization and scalability.

- Data Processing Microservice:

- Receives processed data from the Image Processing microservice.

- Stores data in MongoDB or a NoSQL database for fast and efficient storage.

- Visualization Microservice:

- Provides near real-time visualization of the output to users.

- Utilizes WebSocket connections for real-time updates.

6. Hybrid Cloud Setup:

- Utilize Google Cloud Platform (GCP) or AWS for the public cloud backend.

- Ensure seamless integration and data transfer between edge devices and the cloud.

- Implement data replication and backup strategies for data resilience.

7. Security:

- Implement secure communication protocols (HTTPS) for data transfer.

- Encrypt data at rest and in transit.

- Utilize role-based access control (RBAC) for user authentication and authorization.

8. Monitoring and Logging:

- Implement monitoring using Prometheus and Grafana for real-time monitoring of system performance.

- Utilize centralized logging with ELK stack for comprehensive log management and analysis.

9. Scalability and Resource Management:

- Utilize Kubernetes for container orchestration to manage resources efficiently.

- Implement auto-scaling policies to handle varying loads.

This architecture ensures high throughput, low latency, data security, and scalability for processing egg scans to determine the sex of embryos. It leverages Kafka/RabbitMQ for handling high throughput, FastAPI for serving REST APIs, MongoDB/NoSQL for efficient data storage, and hybrid cloud setup for flexibility and resilience. Additionally, it includes monitoring and logging for system visibility and management.
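A quick back-of-envelope sizing for the 1000 scans/second requirement: the number of Kafka partitions should at least cover the target rate divided by what a single consumer can sustain, plus some headroom. The per-consumer rate below is a hypothetical figure that you would replace with measured throughput.

```python
import math

# Back-of-envelope Kafka partition sizing for the 1000 scans/second
# target. The per-consumer throughput is an assumed placeholder; the
# headroom factor leaves room for spikes and consumer restarts.

def partitions_needed(target_msgs_per_s: float,
                      per_consumer_msgs_per_s: float,
                      headroom: float = 1.5) -> int:
    """Partitions ~= target rate / per-consumer rate, with headroom."""
    return math.ceil(target_msgs_per_s * headroom / per_consumer_msgs_per_s)

n = partitions_needed(1000, per_consumer_msgs_per_s=120)
```

Since Kafka parallelism is capped by the partition count (one consumer per partition within a group), getting this number roughly right up front avoids a disruptive repartitioning later.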

Below is a simplified implementation example of the backend serverless function using Lambda, FastAPI, Kafka, and Protocol Buffers for the given application:

Python code:

```python
# Lambda function handler (a FastAPI app, which can be exposed through
# a Lambda adapter such as Mangum)
import asyncio
import json
from typing import Dict

from fastapi import FastAPI
from kafka import KafkaConsumer, KafkaProducer
from pydantic import BaseModel

app = FastAPI()

class EggScan(BaseModel):
    egg_id: str
    scan_data: bytes

@app.post("/process-egg-scan")
async def process_egg_scan(egg_scan: EggScan):
    # Send egg scan data to the Kafka topic (in production, create the
    # producer once at startup instead of per request)
    producer = KafkaProducer(bootstrap_servers="your_kafka_broker:9092")
    producer.send("egg-scans", egg_scan.json().encode("utf-8"))
    return {"message": "Egg scan data processed successfully"}

# Kafka consumer handler
async def process_egg_scan_background(egg_scan: Dict):
    # Implement your processing logic here
    print("Processing egg scan:", egg_scan)

@app.on_event("startup")
async def startup_event():
    # Start the Kafka consumer. In production, run this loop in a
    # separate worker process or thread so it does not block the
    # event loop.
    consumer = KafkaConsumer(
        "egg-scans",
        bootstrap_servers="your_kafka_broker:9092",
        group_id="egg-processing-group",
    )
    for message in consumer:
        egg_scan = json.loads(message.value.decode("utf-8"))
        # Schedule processing without blocking message consumption
        asyncio.create_task(process_egg_scan_background(egg_scan))
```

Protocol Buffers implementation (protobuf files and code generation). Example protobuf definition (`egg_scan.proto`):

```proto
syntax = "proto3";

message EggScan {
  string egg_id = 1;
  bytes scan_data = 2;
}
```

Compile the protobuf definition to Python code:

```shell
protoc -I=. --python_out=. egg_scan.proto
```

Generated Python code usage:

```python
from egg_scan_pb2 import EggScan

egg_scan = EggScan()
egg_scan.egg_id = "123"
egg_scan.scan_data = b"example_scan_data"

# Serialize to bytes
egg_scan_bytes = egg_scan.SerializeToString()

# Deserialize from bytes
deserialized_egg_scan = EggScan()
deserialized_egg_scan.ParseFromString(egg_scan_bytes)
```


In this example:

The FastAPI application receives egg scan data via HTTP POST requests at the /process-egg-scan endpoint. Upon receiving the data, it sends it to a Kafka topic named ‘egg-scans’.

The Kafka consumer runs on the FastAPI server, started at application startup. It consumes messages from the ‘egg-scans’ topic and schedules their processing as background tasks.

Protocol Buffers are used for serializing and deserializing the egg scan data efficiently.

Please note that this is a simplified example for demonstration purposes. In a production environment, you would need to handle error cases, implement proper serialization/deserialization, configure Kafka for production use, handle scaling and concurrency issues, and ensure proper security measures are in place.
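On the Kafka-configuration point, here is a sketch of production-leaning settings for kafka-python's `KafkaProducer`. The values are illustrative starting points, not tuned recommendations; the dict would be unpacked as `KafkaProducer(**PRODUCER_CONFIG)`.

```python
# Illustrative production-leaning settings for kafka-python's
# KafkaProducer. Values are starting points to tune, not prescriptions.

PRODUCER_CONFIG = {
    "bootstrap_servers": ["your_kafka_broker:9092"],
    "acks": "all",               # wait for all in-sync replicas
    "retries": 5,                # retry transient send failures
    "linger_ms": 10,             # small batching window for throughput
    "compression_type": "gzip",  # cut network transfer for large scans
    "max_in_flight_requests_per_connection": 1,  # keep ordering with retries
}

def describe(config: dict) -> str:
    """One-line summary of the durability settings, for startup logs."""
    return f"acks={config['acks']} retries={config['retries']}"
```

The `acks="all"` plus retries combination trades a little latency for durability, which fits a pipeline where losing a scan means losing a hatchery decision.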

Below are simplified examples of worker process scripts for two microservices: one for processing and saving data, and another for serving customer/admin requests related to the data.

Microservice 1: Processing and Saving Data



```python
from kafka import KafkaConsumer
from pymongo import MongoClient

from egg_scan_pb2 import EggScan  # generated from egg_scan.proto

# Kafka consumer configuration
consumer = KafkaConsumer(
    "egg-scans",
    bootstrap_servers="your_kafka_broker:9092",
    group_id="egg-processing-group",
)

# MongoDB client initialization
mongo_client = MongoClient("mongodb://your_mongodb_uri")
db = mongo_client["egg_scans_db"]
egg_scans_collection = db["egg_scans"]

def process_egg_scan(egg_scan: EggScan) -> dict:
    # Placeholder for the actual inference/processing logic
    return {"egg_id": egg_scan.egg_id, "sex": "unknown"}

# Processing and saving loop
for message in consumer:
    # Deserialize the Protobuf payload
    egg_scan = EggScan()
    egg_scan.ParseFromString(message.value)

    # Process egg scan data
    processed_data = process_egg_scan(egg_scan)

    # Save processed data to MongoDB
    egg_scans_collection.insert_one(processed_data)
```



Microservice 2: Serving Customer/Admin Requests



```python
from fastapi import FastAPI
from pymongo import MongoClient

app = FastAPI()

# MongoDB client initialization
mongo_client = MongoClient("mongodb://your_mongodb_uri")
db = mongo_client["egg_scans_db"]
egg_scans_collection = db["egg_scans"]

@app.get("/egg-scans/{egg_id}")
async def get_egg_scan(egg_id: str):
    # Retrieve egg scan data from MongoDB (exclude the non-JSON-
    # serializable `_id` field from the response)
    egg_scan_data = egg_scans_collection.find_one({"egg_id": egg_id}, {"_id": 0})
    if egg_scan_data:
        return egg_scan_data
    return {"message": "Egg scan data not found"}

@app.get("/egg-scans")
async def get_all_egg_scans():
    # Retrieve all egg scan data from MongoDB
    all_egg_scans = egg_scans_collection.find({}, {"_id": 0})
    return list(all_egg_scans)
```


In these examples:

- Microservice 1 listens to the Kafka topic `egg-scans`, processes incoming egg scan data, and saves the processed data to a MongoDB database.

- Microservice 2 is a FastAPI application that provides HTTP endpoints for retrieving egg scan data from MongoDB. It has two endpoints: one for retrieving data for a specific egg ID (`/egg-scans/{egg_id}`) and another for retrieving all egg scan data (`/egg-scans`).

These scripts are simplified for demonstration purposes. In a production environment, you would need to handle error cases, implement proper logging, configure authentication and authorization, and consider scalability and performance optimizations. Additionally, you may want to deploy these microservices in containers for easier management and scalability.
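One concrete scalability fix for the `/egg-scans` endpoint above is skip/limit pagination, so responses stay bounded as the collection grows. The sketch below uses a plain list in place of the MongoDB cursor; with pymongo the equivalent is `collection.find().skip(skip).limit(limit)`.

```python
# Skip/limit pagination sketch. A plain list stands in for the MongoDB
# cursor; the page shape ("items", "next_skip") is a hypothetical
# response contract for this demo.

def paginate(items, skip: int = 0, limit: int = 20):
    """Return one page of items plus the offset of the next page
    (None when there are no more items)."""
    page = items[skip:skip + limit]
    next_skip = skip + limit if skip + limit < len(items) else None
    return {"items": page, "next_skip": next_skip}

scans = [{"egg_id": f"egg-{i}"} for i in range(45)]
page1 = paginate(scans, skip=0, limit=20)
page3 = paginate(scans, skip=40, limit=20)
```

The client simply follows `next_skip` until it is None, which keeps each response small regardless of how many scans are stored.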

Hope this gives you an idea to start thinking about real solutions.


