I have three separate background processes. Two of them run independently, but the third depends on the results of the first two. How do I postpone the execution of the third process until both of the first two have finished and emitted their results?
This is a common situation when working with background threads in Qt: you have tasks that can run in parallel, but a later task needs the results of the earlier ones before it can begin. Since threads can finish in any order, you can't simply start all three at once and hope for the best.
The solution is to use signals combined with a small piece of state tracking. Each time one of the first two workers finishes, you store its result and check whether the other has also completed. Only when both results are available do you launch the third worker.
Let's walk through how to set this up.
The challenge with dependent threads
When you start two QRunnable workers on a QThreadPool, they run concurrently. One might finish in 2 seconds, the other in 10. There's no built-in Qt mechanism that says "wait for these two signals, then do something." You need to track that state yourself.
This might sound complicated, but the pattern is straightforward once you see it. The idea is:
- Start Worker A and Worker B.
- Connect both of their finished signals to the same slot.
- In that slot, store the result and check: do we have results from both A and B?
- If yes, start Worker C with those results.
Setting up the worker signals
Since QRunnable doesn't inherit from QObject, it can't have signals directly. The standard approach is to create a small QObject subclass to hold the signals, and attach an instance of it to each worker. If you're new to this pattern, our guide to multithreading with QThreadPool covers the fundamentals in detail.
You might wonder: can't you just use multiple inheritance (QRunnable, QObject) to add signals directly? Unfortunately, this tends to crash: QRunnable and QObject are two unrelated C++ classes, and the Python bindings don't reliably support inheriting from more than one Qt class at a time. The separate signals object is the reliable approach.
```python
from PySide6.QtCore import QObject, Signal


class WorkerSignals(QObject):
    finished = Signal(str, object)  # worker_id, result data
```
Here we define a finished signal that carries two pieces of information: a string identifying which worker finished, and the result data (as a generic object).
Creating the workers
Next, define a base runner class that accepts some input data, does its work in run(), and emits the result along with its worker ID.
```python
import time

from PySide6.QtCore import QRunnable, Slot


class BaseRunner(QRunnable):
    def __init__(self, data):
        super().__init__()
        self.data = data
        self.signals = WorkerSignals()

    @Slot()
    def run(self):
        print(f"Running {self.worker_id}...")
        time.sleep(self.delay)  # simulate work
        result = self.data + " [processed]"
        print(f"{self.worker_id} done.")
        self.signals.finished.emit(self.worker_id, result)
```
Each BaseRunner instance gets its own WorkerSignals object. This matters — if you put the signals object on the class itself (as a class attribute), all instances would share the same signals, and your connections would pile up in unexpected ways.
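The difference is easy to demonstrate without Qt at all. In the sketch below, a plain Python class stands in for WorkerSignals (all names here are illustrative): a class-level attribute is one shared object, while an attribute assigned in `__init__` is created fresh for each instance.

```python
class FakeSignals:
    """Stand-in for WorkerSignals: just records 'connections' in a list."""
    def __init__(self):
        self.connections = []


class SharedRunner:
    signals = FakeSignals()  # class attribute: ONE object shared by all instances


class OwnRunner:
    def __init__(self):
        self.signals = FakeSignals()  # per-instance: each runner gets its own


a, b = SharedRunner(), SharedRunner()
a.signals.connections.append("slot_for_a")
print(b.signals.connections)  # b sees a's connection -- they share one object

c, d = OwnRunner(), OwnRunner()
c.signals.connections.append("slot_for_c")
print(d.signals.connections)  # d is unaffected: []
```

With real Qt signals the symptom of the shared version is the same: every connection made for one worker fires for all of them.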
Now create specific subclasses for each task:
```python
class RunnerA(BaseRunner):
    worker_id = "A"
    delay = 5  # simulates a 5-second task


class RunnerB(BaseRunner):
    worker_id = "B"
    delay = 10  # simulates a 10-second task


class RunnerC(BaseRunner):
    worker_id = "C"
    delay = 3  # simulates a 3-second task
```
In a real application, run() would do actual work — database queries, file processing, API calls — rather than sleeping. The delays here just make it easy to observe the timing behavior.
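Real work can also fail, and an unhandled exception inside run() vanishes silently in the thread pool. One common refinement is to wrap the work and report errors explicitly. The sketch below is framework-free so it runs without Qt: the callback parameters stand in for signal emissions, and all names are illustrative.

```python
def run_worker(worker_id, task, on_finished, on_error):
    """Run task(), reporting success or failure through callbacks."""
    try:
        result = task()
    except Exception as exc:  # report failures instead of losing them
        on_error(worker_id, exc)
    else:
        on_finished(worker_id, result)


done, errors = [], []
run_worker("A", lambda: "dataset-one".upper(),
           lambda wid, res: done.append((wid, res)),
           lambda wid, exc: errors.append((wid, exc)))
run_worker("B", lambda: 1 / 0,
           lambda wid, res: done.append((wid, res)),
           lambda wid, exc: errors.append((wid, exc)))
print(done)    # [('A', 'DATASET-ONE')]
print(errors)  # one ZeroDivisionError reported for "B"
```

In the Qt version you would add an error signal to WorkerSignals and emit it from the except branch instead of calling a callback.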
Coordinating the workers
The coordination happens in your main window. When you click a button, workers A and B start. Both have their finished signal connected to the same method, check_and_start_third. That method stores results in a dictionary and checks whether both A and B have reported in. For more on how signals and slots work in PySide6, including connecting multiple signals to a single slot, see our dedicated tutorial.
```python
from PySide6.QtCore import QThreadPool
from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()

        button = QPushButton("Start")
        button.pressed.connect(self.start_initial_runners)
        self.setCentralWidget(button)

        self.results = {}
        self.threadpool = QThreadPool()

    def start_initial_runners(self):
        self.results = {}  # Clear any previous results

        runner_a = RunnerA("dataset-one")
        runner_b = RunnerB("dataset-two")
        runner_a.signals.finished.connect(self.check_and_start_third)
        runner_b.signals.finished.connect(self.check_and_start_third)
        self.threadpool.start(runner_a)
        self.threadpool.start(runner_b)

    def check_and_start_third(self, worker_id, data):
        print(f"Received result from {worker_id}: {data}")
        self.results[worker_id] = data

        if "A" in self.results and "B" in self.results:
            print("Both results ready. Starting C...")
            combined_data = f"{self.results['A']} + {self.results['B']}"
            runner_c = RunnerC(combined_data)
            runner_c.signals.finished.connect(self.handle_final_result)
            self.threadpool.start(runner_c)
        else:
            print("Waiting for the other worker to finish...")

    def handle_final_result(self, worker_id, data):
        print(f"Final result from {worker_id}: {data}")
```
The self.results dictionary is the state tracker. The first time check_and_start_third is called, only one entry exists, so the if check fails and we wait. The second time it's called, both entries are present and Worker C starts with the combined data.
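The coordination logic itself doesn't depend on Qt, which makes it easy to reason about and unit-test in isolation. Here is a minimal framework-free sketch of the same idea; the Coordinator class and its names are our own, not part of the Qt code above.

```python
class Coordinator:
    """Collect results by ID and fire a callback once all required IDs report."""
    def __init__(self, required, on_ready):
        self.required = set(required)  # worker IDs we must hear from
        self.results = {}
        self.on_ready = on_ready       # called once every result is in

    def receive(self, worker_id, data):
        self.results[worker_id] = data
        if self.required <= self.results.keys():
            self.on_ready(dict(self.results))


ready = []
coord = Coordinator({"A", "B"}, ready.append)
coord.receive("A", "one")
print(ready)  # [] -- still waiting for B
coord.receive("B", "two")
print(ready)  # [{'A': 'one', 'B': 'two'}]
```

In the Qt version, check_and_start_third plays the role of receive, and starting Worker C is the on_ready action.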
How the output looks
If you run the complete example below, you'll see output like this:
```
Workers A and B started.
Running A...
Running B...
A done.
Received result from A: dataset-one [processed]
Waiting for the other worker to finish...
B done.
Received result from B: dataset-two [processed]
Both results ready. Starting C...
Running C...
C done.
Final result from C: dataset-one [processed] + dataset-two [processed] [processed]
```
Worker A finishes first (5-second delay vs. B's 10 seconds), but Worker C doesn't start until B finishes too.
Why there's no built-in for this
You might expect Qt to have a "wait for multiple signals" feature. The reason it doesn't is that this pattern involves maintaining state — you need to remember what's happened so far and make decisions based on it. That's inherently application-specific. How many signals to wait for, what data to collect, what conditions must be met — these all vary from case to case. The dictionary-based approach shown here is flexible enough to handle most situations.
If your tasks were purely sequential (A → B → C, each depending only on the previous one), you could simplify things by connecting each worker's finished signal directly to a method that starts the next worker. You could even use a QThreadPool with setMaxThreadCount(1) to force serial execution, though you'd still need signals to pass data between stages.
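A purely sequential chain can be sketched without Qt as a series of "finished" callbacks, each one starting the next stage and passing its result along. The function names below are illustrative, not part of the Qt code in this article.

```python
def make_stage(transform, on_finished):
    """Build a stage: process the data, then hand the result to the next step."""
    def run(data):
        on_finished(transform(data))
    return run


final = []
# Wire the chain back to front: C feeds `final`, B feeds C, A feeds B.
stage_c = make_stage(lambda d: d + " [C]", final.append)
stage_b = make_stage(lambda d: d + " [B]", stage_c)
stage_a = make_stage(lambda d: d + " [A]", stage_b)

stage_a("input")
print(final)  # ['input [A] [B] [C]']
```

In Qt terms, each `on_finished` corresponds to connecting one worker's finished signal to a method that constructs and starts the next worker.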
Complete working example
Here's everything together in a single file you can copy and run:
```python
import sys
import time

from PySide6.QtCore import QObject, QRunnable, QThreadPool, Signal, Slot
from PySide6.QtWidgets import QApplication, QMainWindow, QPushButton


class WorkerSignals(QObject):
    finished = Signal(str, object)  # worker_id, result data


class BaseRunner(QRunnable):
    def __init__(self, data):
        super().__init__()
        self.data = data
        self.signals = WorkerSignals()

    @Slot()
    def run(self):
        print(f"Running {self.worker_id}...")
        time.sleep(self.delay)
        result = self.data + " [processed]"
        print(f"{self.worker_id} done.")
        self.signals.finished.emit(self.worker_id, result)


class RunnerA(BaseRunner):
    worker_id = "A"
    delay = 5


class RunnerB(BaseRunner):
    worker_id = "B"
    delay = 10


class RunnerC(BaseRunner):
    worker_id = "C"
    delay = 3


class MainWindow(QMainWindow):
    def __init__(self):
        super().__init__()
        self.setWindowTitle("Thread Coordination Example")

        button = QPushButton("Start")
        button.pressed.connect(self.start_initial_runners)
        self.setCentralWidget(button)

        self.results = {}
        self.threadpool = QThreadPool()

    def start_initial_runners(self):
        self.results = {}

        runner_a = RunnerA("dataset-one")
        runner_b = RunnerB("dataset-two")
        runner_a.signals.finished.connect(self.check_and_start_third)
        runner_b.signals.finished.connect(self.check_and_start_third)
        self.threadpool.start(runner_a)
        self.threadpool.start(runner_b)
        print("Workers A and B started.")

    def check_and_start_third(self, worker_id, data):
        print(f"Received result from {worker_id}: {data}")
        self.results[worker_id] = data

        if "A" in self.results and "B" in self.results:
            print("Both results ready. Starting C...")
            combined_data = f"{self.results['A']} + {self.results['B']}"
            runner_c = RunnerC(combined_data)
            runner_c.signals.finished.connect(self.handle_final_result)
            self.threadpool.start(runner_c)
        else:
            print("Waiting for the other worker to finish...")

    def handle_final_result(self, worker_id, data):
        print(f"Final result from {worker_id}: {data}")


app = QApplication(sys.argv)
window = MainWindow()
window.show()
app.exec()
```
Click the "Start" button and watch the console output. You'll see A and B run in parallel, and C starts only after both have finished.
Extending this pattern
This same approach scales well. If you had four prerequisite tasks instead of two, you'd check for all four keys in your results dictionary before proceeding. You could also wrap this pattern into a reusable class that accepts a list of runners and a "continuation" runner, triggering the continuation automatically once all prerequisites are satisfied.
For purely linear chains — where each step depends only on the previous one — connecting each worker's finished signal to a method that starts the next worker is simpler and doesn't require the dictionary at all. You can also transmit extra data with your signals to pass results from one stage to the next without needing shared state.
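As a point of comparison, outside a Qt event loop the standard library expresses "wait for A and B, then run C" directly with futures. This is not a drop-in replacement for the signal-based approach: .result() blocks the calling thread, which is exactly what you must avoid on a GUI thread, but it shows the same dependency structure in plain Python.

```python
from concurrent.futures import ThreadPoolExecutor


def work(data):
    return data + " [processed]"


with ThreadPoolExecutor() as pool:
    fut_a = pool.submit(work, "dataset-one")
    fut_b = pool.submit(work, "dataset-two")
    # .result() blocks until each future completes -- fine in a script,
    # but this blocking is why the signal-based pattern is used in a GUI.
    combined = f"{fut_a.result()} + {fut_b.result()}"
    result_c = pool.submit(work, combined).result()

print(result_c)
```

If you wanted this style inside a Qt application, you would still need some mechanism (such as signals) to deliver the final result back to the main thread without blocking it.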