From 85344f5fd4ca8e5e6bef80832f4c19eae560f705 Mon Sep 17 00:00:00 2001
From: aseembits93
Date: Thu, 22 Jan 2026 17:26:25 -0800
Subject: [PATCH] keep editing

---
 docs/support-for-jit/index.mdx | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/docs/support-for-jit/index.mdx b/docs/support-for-jit/index.mdx
index 9aa91a6a4..ee03d5220 100644
--- a/docs/support-for-jit/index.mdx
+++ b/docs/support-for-jit/index.mdx
@@ -18,9 +18,8 @@ Each framework uses different compilation strategies to accelerate Python code:
 
 Numba compiles Python functions to optimized machine code using the LLVM compiler infrastructure. Codeflash can suggest Numba optimizations that use:
 
-- **`@jit`** - General-purpose JIT compilation with optional flags.
+- **`@jit`** - General-purpose JIT compilation with optional flags. Here is a non-exhaustive list of options that Codeflash may apply to the function to optimize it via Numba JIT compilation.
   - **`nopython=True`** - Compiles to machine code without falling back to the Python interpreter.
-  - **`parallel=True`** - Enables automatic thread-level parallelization of the function across multiple CPU cores (no GIL!).
   - **`fastmath=True`** - Uses aggressive floating-point optimizations via LLVM's fastmath flag.
   - **`cache=True`** - Numba writes the result of function compilation to disk, which significantly reduces future compilation times.
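As a minimal sketch of the flags the patched bullets describe: a function decorated with Numba's `@njit` (shorthand for `@jit(nopython=True)`) plus `fastmath` and `cache`. The function `sum_of_squares` and the no-op fallback decorator are illustrative, not part of the patch; the fallback only lets the sketch run where Numba is not installed.

```python
import numpy as np

try:
    # @njit is shorthand for @jit(nopython=True)
    from numba import njit
except ImportError:
    # Illustrative fallback: a no-op decorator so the sketch
    # still runs in environments without Numba installed.
    def njit(*args, **kwargs):
        if args and callable(args[0]):
            return args[0]
        return lambda func: func

# fastmath=True permits aggressive floating-point optimizations;
# cache=True persists the compiled machine code to disk so later
# runs skip recompilation.
@njit(fastmath=True, cache=True)
def sum_of_squares(values):
    total = 0.0
    for i in range(values.shape[0]):
        total += values[i] * values[i]
    return total

print(sum_of_squares(np.arange(4.0)))  # 0 + 1 + 4 + 9 = 14.0
```

The first call triggers compilation for the argument types seen; subsequent calls with the same types reuse the compiled machine code.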