Monday, December 15, 2025

This table provides a chatbot prompt for each of the SQL queries in the benchmark: https://github.com/ravibeta/ezbenchmark

ezbenchmark SQL queries and AI prompts, side by side:

| SQL Query File | Natural Language Prompt |
|----------------|-------------------------|
| q1_object_counts.sql | For each drone mission, tell me how many objects of each type were detected, and also give me the average number of detections per mission for comparison. |
| q2_mission_duration.sql | Identify the drone missions that lasted the longest and covered the largest geographic area, and show me their mission IDs and summary statistics. |
| q3_payload_accuracy.sql | Compare how different payload configurations (like thermal camera vs. RGB camera) performed in terms of detection accuracy, and rank them by effectiveness. |
| q4_terrain_breakdown.sql | Break down object detections by terrain type (forest, urban, water) and highlight which terrain produced the highest detection counts during missions. |
| q5_weather_reliability.sql | Show me how mission success rates vary depending on weather conditions like clear skies, rain, or high wind. |
| q6_time_of_day.sql | Analyze detection counts across different time windows (morning, afternoon, evening) and tell me which time of day yields the most reliable detections. |
| q7_external_layers.sql | Correlate drone mission detections with external map layers, like vegetation density or building footprints, and show me where detections align most strongly. |
| q8_cost_efficiency.sql | Estimate the compute and storage costs for each drone mission, and compare them to the detection yield so I can see cost-effectiveness. |
| q9_latency_comparison.sql | Compare the latency of object detection when run on edge devices versus cloud servers, and tell me which pipeline is faster and by how much. |
| q10_top_missions.sql | Rank all drone missions by detection quality and throughput, and list the top missions that achieved the highest benchmarks. |
| q11_anomaly_patterns.sql | Identify drone missions that experienced repeated anomalies, such as dropped frames or failed detections, and summarize the failure patterns. |
| q12_synthetic_vs_real.sql | Compare the synthetic workload generator outputs with real mission telemetry, and tell me how closely they match in terms of detection counts and mission duration. |
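To make the first pairing concrete, here is a minimal sketch of the kind of SQL that q1_object_counts.sql and its prompt describe, run via Python's sqlite3 against a tiny in-memory table. The schema (a `detections` table with `mission_id` and `object_type` columns) is an assumption for illustration; the actual ezbenchmark schema and query text may differ.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# Assumed toy schema: one row per detection, tagged by mission and object type.
conn.executescript("""
CREATE TABLE detections (mission_id TEXT, object_type TEXT);
INSERT INTO detections VALUES
  ('m1', 'vehicle'), ('m1', 'vehicle'), ('m1', 'person'),
  ('m2', 'vehicle');
""")

# Per-mission, per-type detection counts.
per_type = conn.execute("""
  SELECT mission_id, object_type, COUNT(*) AS n
  FROM detections
  GROUP BY mission_id, object_type
  ORDER BY mission_id, object_type
""").fetchall()

# Average number of detections per mission, for comparison.
avg_per_mission = conn.execute("""
  SELECT AVG(n) FROM (
    SELECT COUNT(*) AS n FROM detections GROUP BY mission_id
  )
""").fetchone()[0]

print(per_type)
print(avg_per_mission)  # 2.0 (4 detections across 2 missions)
```

A chatbot given the corresponding prompt would be expected to produce SQL equivalent to these two aggregations; the benchmark then compares that output against the reference query.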

References:

Previous article: https://1drv.ms/w/c/d609fb70e39b65c8/IQDqrulu7lAcRrfqGYWxXmzpAQzvynSprwP0lbMQqGEaD1w?e=8RC7vg

