PyXLL uses the same Python runtime that you use in a Python console or when executing a Python script, so with exactly the same inputs the code will run at the same speed.
Having said that, there are a few things that can slow down your Python code:
1. Attaching a debugger to Excel. If you are running under a debugger, it will slow down your Python code.
2. PyXLL "allow_abort" feature. If you have "allow_abort" enabled, either in your pyxll.cfg file or in your @xl_func function decorator, that will cause your Python code to run more slowly. This is because periodically while your Python code is running, PyXLL has to interrupt it to check if the user has aborted the function.
3. PyXLL's deep reload import tracker. This is a very minor overhead and in most situations will not cause any slowdown in your Python code at all. PyXLL adds an import hook that runs each time a module is imported to track the dependencies between modules, so that it can reload them in the correct order. If your code is doing a lot of imports and you suspect this may be slowing things down, you can disable it by setting "deep_reload_disable=1" in the PYXLL section of your pyxll.cfg file. See also the footnote below for PyXLL 4.3.0 and 4.3.1 users.
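As a rough illustration of point 2, here is a minimal sketch of turning off the abort check for a single function using the @xl_func keyword. The function itself is hypothetical; the same setting, and deep_reload_disable, can also be set in the PYXLL section of pyxll.cfg as described above.

    from pyxll import xl_func

    # Hypothetical worksheet function, purely to illustrate the option.
    # allow_abort=False disables the periodic abort check for this function
    # only; allow_abort (and deep_reload_disable) can also be set in the
    # PYXLL section of pyxll.cfg.
    @xl_func("int n: float", allow_abort=False)
    def slow_sum(n):
        # A deliberately CPU-bound loop so any overhead would be measurable.
        total = 0.0
        for i in range(n):
            total += i * i
        return total

Whether to disable the abort check is a trade-off: with it off the function runs slightly faster, but the user can no longer interrupt a long-running calculation part-way through.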
Sometimes when people find that their code is running slower in Excel, it is because the inputs are different and so the same thing is not being measured. This can be subtle: for example, the difference between int and float inputs, or the dtypes of a pandas.DataFrame, can be what causes it. To determine whether the inputs really are identical, it can be helpful to log the inputs to the function you are timing so you can check what is actually being passed to it. Another option is to use pickle to serialise the inputs to your function and then call it from Python with exactly the same inputs, as sketched below.
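If you go the pickle route, a minimal sketch might look like the following. The file path, function name and arguments are all hypothetical; the point is simply to capture the exact objects Excel passes in and replay them from a Python prompt.

    import pickle
    import time

    # Inside the function exposed to Excel: dump the arguments so exactly
    # the same inputs can be replayed outside Excel. (Path and calculation
    # are hypothetical.)
    def my_function(df, threshold):
        with open(r"C:\temp\my_function_inputs.pkl", "wb") as fh:
            pickle.dump((df, threshold), fh)
        return (df > threshold).sum().sum()

    # Later, from an ordinary Python prompt: load the captured inputs and
    # time the call with exactly the same data Excel passed in.
    if __name__ == "__main__":
        with open(r"C:\temp\my_function_inputs.pkl", "rb") as fh:
            df, threshold = pickle.load(fh)

        start = time.perf_counter()
        my_function(df, threshold)
        print("Took %.3f seconds" % (time.perf_counter() - start))

Comparing that timing with the time taken in Excel tells you whether the difference is really in the function or in the inputs.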
Another thing that sometimes causes confusion is the number of times functions are called. If the inputs are changing, your Python functions will be called each time they change, and this may mean you are actually doing more work in Excel than in your Python script. One simple check is to keep a count of the calls, as in the sketch below.
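One quick way to count calls is to log each one from inside the function. The names below are hypothetical, and the profiler described next gives the same call counts in much more detail.

    import logging

    logger = logging.getLogger(__name__)

    _call_count = 0

    # Hypothetical worksheet function: keep a running count of calls so
    # the log shows how often Excel actually recalculates it.
    def my_slow_calc(x, y):
        global _call_count
        _call_count += 1
        logger.info("my_slow_calc call #%d with x=%r, y=%r", _call_count, x, y)
        return x * y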
To really understand what's going on in your Excel workbook you can use a Python performance profiler to measure what's actually being called. That way you can see how many times each function is being called and identify hotspots in your code. If you are concerned that it is not running at the same speed as when run from a Python prompt, you can use the Python profiler to profile the code running inside and outside of Excel and compare the results.
The following blog post explains how you can profile your Python code:
https://www.pyxll.com/blog/how-to-profile-python-code-in-excel/
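The post above covers profiling inside Excel in detail. As a bare-bones sketch (not necessarily the approach the post takes), the standard library's cProfile can be wrapped around a single function so that exactly the same measurement can be made inside and outside Excel; the decorator and function names here are hypothetical.

    import cProfile
    import io
    import logging
    import pstats
    from functools import wraps

    logger = logging.getLogger(__name__)

    def profiled(func):
        """Profile each call to 'func' with cProfile and log the hotspots."""
        @wraps(func)
        def wrapper(*args, **kwargs):
            profiler = cProfile.Profile()
            profiler.enable()
            try:
                return func(*args, **kwargs)
            finally:
                profiler.disable()
                buffer = io.StringIO()
                stats = pstats.Stats(profiler, stream=buffer)
                stats.sort_stats("cumulative").print_stats(10)
                logger.info("Profile for %s:\n%s", func.__name__, buffer.getvalue())
        return wrapper

    # Hypothetical usage: the same decorated function can be called from a
    # Python prompt and from Excel, and the two profiles compared.
    @profiled
    def my_function(values):
        return sorted(values)

The ncalls column in the profiler output also shows how many times each function was called, which ties back to the point about recalculation above.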
Footnotes
PyXLL versions 4.3.0 and 4.3.1 had a problem with the deep reload import hook that could cause a performance issue when doing lots of imports. This was fixed in 4.3.2, so if you are using either of those two versions of PyXLL we would recommend upgrading.