Yes. Excel maintains a dependency graph of the cells in a workbook. It uses this to determine what cells need to be recalculated when a cell changes, and also to determine the best order in which to evaluate cells when calculating the workbook.
For example, if you had a simple sheet where:
A1 = functionA()
A2 = functionB(A1)
Excel needs to evaluate A1 before it can evaluate A2, as A2 depends on A1.
Now suppose you had a (slightly) more complex example:
A1 = functionA()
A2 = functionB(A1)
A3 = functionC(A1)
A4 = functionD(A2, A3)
Here, A1 must be calculated before A2 and A3 can be calculated, and both A2 and A3 must be calculated before A4 can be calculated. You can visualise this as a dependency graph where A4 depends on A2 and A3, and A2 and A3 depend on A1.
A1 --> A2 --> A4
  \--> A3 ----^
In this case, once A1 is calculated then A2 and A3 can be calculated in parallel. Excel supports this type of parallelism, as long as the functions that are to be called in parallel are thread safe.
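To make this concrete, the calculation ordering can be sketched as a topological sort of the dependency graph. This is only an illustration of the idea, not Excel's actual algorithm, and it uses Python's standard graphlib module (Python 3.9+):

```python
from graphlib import TopologicalSorter

# Dependency graph from the example above: each cell maps to the set
# of cells it depends on (its precedents).
deps = {
    "A1": set(),
    "A2": {"A1"},
    "A3": {"A1"},
    "A4": {"A2", "A3"},
}

ts = TopologicalSorter(deps)
ts.prepare()

# Cells become "ready" in batches. Cells within the same batch do not
# depend on each other, so they could safely be calculated in parallel.
batches = []
while ts.is_active():
    ready = sorted(ts.get_ready())
    batches.append(ready)
    ts.done(*ready)

print(batches)  # [['A1'], ['A2', 'A3'], ['A4']]
```

The second batch is exactly the pair A2 and A3 that can be evaluated in parallel once A1 is done.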
A function is thread safe if it can safely be called from a background thread at the same time as other code is executing (possibly even with the same function executing concurrently on different threads).
To mark a Python function as thread-safe, use the "thread_safe" option to @xl_func, e.g.:

from pyxll import xl_func

@xl_func(thread_safe=True)
def functionB(input):
    # ... thread safe calculation ...
    return result
A note about multi-threading in Python
Python supports running code across multiple threads but, because of its implementation, this does not usually give the performance improvement you might expect. Python has something called the GIL, or Global Interpreter Lock, which prevents Python code from running concurrently: only one thread can run the Python interpreter at a time.
For CPU bound code, some C extension modules release the GIL before starting intensive work, which allows the program to make better use of the available CPU cores. Plain Python code, however, will not do this, and you may find that performance is actually slightly worse when run across multiple threads. One solution is to use Python's multiprocessing module to run CPU intensive work across multiple Python processes, feeding the results back to the main Python/Excel process.
AsyncIO and IO bound operations
For operations that are IO bound rather than CPU bound, Python's asyncio package can help. This allows tasks to yield to other tasks while they wait for an external (typically IO) routine to complete.
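A minimal asyncio sketch of that idea (the fetch function and its delays are placeholders standing in for real IO calls such as HTTP requests or database queries):

```python
import asyncio

async def fetch(name, delay):
    # Stand-in for an IO-bound call. The await yields control so that
    # other tasks can run while this one waits.
    await asyncio.sleep(delay)
    return f"{name}: done"

async def main():
    # All three "requests" run concurrently, so the total time is
    # roughly the longest delay, not the sum of all three.
    return await asyncio.gather(
        fetch("a", 0.1),
        fetch("b", 0.2),
        fetch("c", 0.3),
    )

results = asyncio.run(main())
print(results)  # ['a: done', 'b: done', 'c: done']
```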
For details of how PyXLL supports async functions, see https://www.pyxll.com/docs/userguide/udfs/asyncfuncs.html.
A final note...
Before making any optimisations, it's important to understand what's going on in your application and where the time is going. Without doing that, you may waste a lot of time 'optimising' only to find that your application still doesn't run any faster!
See this blog post https://www.pyxll.com/blog/how-to-profile-python-code-in-excel/ for help getting started profiling your Excel application.