Just-in-time (JIT) compilation is a runtime technique where code is compiled to machine code on the fly, right before it executes, to improve performance. Codeflash supports optimizing numerical code with JIT compilation by leveraging the JIT compilers from the **Numba**, **PyTorch**, **TensorFlow**, and **JAX** frameworks.
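
For instance, Numba can compile a plain-Python numeric loop to machine code with a single decorator. A minimal, self-contained sketch (the function below is illustrative, not Codeflash output):

```python
import numpy as np
from numba import njit  # Numba's no-Python-mode JIT decorator

@njit
def pairwise_l2(points):
    # Nested Python loops like these are slow when interpreted,
    # but compile to fast machine code under Numba's JIT.
    n = points.shape[0]
    out = np.empty((n, n), dtype=np.float64)
    for i in range(n):
        for j in range(n):
            acc = 0.0
            for k in range(points.shape[1]):
                d = points[i, k] - points[j, k]
                acc += d * d
            out[i, j] = np.sqrt(acc)
    return out

# The first call triggers compilation; later calls run the cached machine code.
distances = pairwise_l2(np.random.rand(100, 3))
```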
## How Codeflash Optimizes with JIT
When Codeflash identifies a function that could benefit from JIT compilation, it:
1. Rewrites the code in a JIT-compatible format, which may involve breaking a complex function into separate JIT-compiled components (see the sketch after this list).
2. Generates appropriate tests that are compatible with JIT-compiled code, carefully handling data types since JIT compilers have stricter input type requirements.
3. Disables JIT compilation when running coverage and the tracer. Both rely on Python bytecode execution, which JIT-compiled code bypasses, so leaving JIT enabled would prevent accurate coverage and trace data.
4. Disables the Line Profiler for JIT-compiled code. It would be possible to disable JIT compilation and run the Line Profiler anyway, but the resulting line timings would not reflect the compiled code and could misguide the optimization process.
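
As a hedged illustration of steps 1 and 2, here is roughly what such a rewrite can look like with Numba. The function names are hypothetical and the split shown is one possible decomposition, not Codeflash's exact output:

```python
import numpy as np
from numba import njit

# Before: one function mixing Python-level conveniences with a hot numeric loop.
def moving_average(values, window):
    data = np.asarray(values, dtype=np.float64)  # flexible Python-level input handling
    result = np.empty(len(data) - window + 1, dtype=np.float64)
    for i in range(len(result)):
        result[i] = data[i:i + window].mean()
    return result.tolist()

# After: the hot loop is extracted into a JIT-compiled kernel...
@njit
def _moving_average_kernel(data, window):
    result = np.empty(data.shape[0] - window + 1, dtype=np.float64)
    for i in range(result.shape[0]):
        acc = 0.0
        for j in range(window):
            acc += data[i + j]
        result[i] = acc / window
    return result

# ...while a thin wrapper keeps the flexible Python-facing behavior.
def moving_average_jit(values, window):
    data = np.asarray(values, dtype=np.float64)
    return _moving_average_kernel(data, window).tolist()

# A JIT-friendly test pins input dtypes explicitly, since the compiled
# kernel is specialized per dtype rather than accepting arbitrary inputs.
def test_moving_average_jit():
    data = np.array([1.0, 2.0, 3.0, 4.0], dtype=np.float64)
    assert np.allclose(_moving_average_kernel(data, 2), [1.5, 2.5, 3.5])
```

For step 3, JIT frameworks generally expose a switch for this kind of bypass: Numba, for example, honors the `NUMBA_DISABLE_JIT=1` environment variable, which makes `@njit` functions run as ordinary Python so coverage and tracing see real bytecode.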
## Configuration
JIT compilation support is **enabled automatically** in Codeflash. You don't need to modify any configuration to enable JIT-based optimizations. Codeflash will automatically detect when JIT compilation could improve performance and suggest appropriate optimizations.
## When JIT Compilation Helps
JIT compilation is most effective for:

- Numerically intensive functions dominated by tight Python loops
- Array and tensor operations built on NumPy, PyTorch, TensorFlow, or JAX
- Workloads that can be offloaded to GPUs
TensorFlow uses `@tf.function` to compile Python functions into optimized TensorFlow graphs.
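
A minimal sketch of `@tf.function` in use (the loss function below is illustrative only):

```python
import tensorflow as tf

@tf.function  # traces the Python function into a cached TensorFlow graph
def mse(y_true, y_pred):
    return tf.reduce_mean(tf.square(y_true - y_pred))

# The first call with a given input signature triggers tracing and compilation;
# later calls with the same dtypes and shapes reuse the compiled graph.
loss = mse(tf.constant([1.0, 2.0, 3.0]), tf.constant([1.0, 2.5, 2.0]))
```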
JAX uses XLA to JIT-compile pure functions into optimized machine code. It emphasizes functional programming patterns and traces side-effect-free operations for optimization.
- **`@jax.jit`** - JIT compiles functions using XLA with automatic operation fusion.
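
For example (a hypothetical pure function, not taken from the Codeflash docs):

```python
import jax
import jax.numpy as jnp

@jax.jit  # XLA compiles this chain of element-wise ops and fuses them into one kernel
def tanh_gelu(x):
    # tanh approximation of GELU; every operation here is pure and side-effect-free
    return 0.5 * x * (1.0 + jnp.tanh(0.7978845608 * (x + 0.044715 * x ** 3)))

x = jnp.linspace(-3.0, 3.0, 8)
y = tanh_gelu(x)  # first call: tracing + XLA compilation
y = tanh_gelu(x)  # subsequent calls reuse the compiled, fused kernel
```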