Revolutionizing compiler technology with Reactant.jl and MLIR to transform Julia into the ultimate low-level, memory-safe language for embedded systems.
```julia
using Reactant
using MLIR

@reactant function kernel(x)
    y = x * 2.5f0
    z = y + 1.0f0
    return z
end

# Compiles to optimized MLIR with full AOT capabilities
ir = lower_to_mlir(kernel)
```
We're bridging the gap between Julia's high-level productivity and low-level performance by leveraging MLIR for world-class AOT compilation. Our stack brings Mojo/Taichi/Triton-style optimizations to Julia's ecosystem.
- Zero-cost abstractions with Rust-like memory guarantees
- Full ahead-of-time compilation for embedded targets (see the sketch below)
- Direct optimization through MLIR's compiler infrastructure
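As a rough end-to-end picture, the intended AOT flow looks like the sketch below. It reuses the `@reactant`/`lower_to_mlir` API from the example above; the final `emit_object` call, its `target` keyword, and the output filename are hypothetical placeholders for whatever object-emission entry point the stack ultimately exposes.

```julia
using Reactant
using MLIR

# Same traced kernel as in the example above.
@reactant function kernel(x)
    y = x * 2.5f0
    z = y + 1.0f0
    return z
end

# Step 1: lower the traced Julia function to MLIR.
ir = lower_to_mlir(kernel)

# Step 2 (hypothetical, for illustration only): run MLIR's optimization
# passes and emit a linkable object file for a bare-metal ARM target.
# `emit_object` and its `target` keyword are assumed names, not a
# confirmed API.
# emit_object(ir, "kernel.o"; target = "thumbv7em-none-eabihf")
```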
Our compiler transforms idiomatic Julia code into highly optimized MLIR, enabling performance that rivals hand-written C++ while maintaining Julia's expressiveness.
```mlir
// -----// IR Dump After Optimization //----- //
func.func @kernel(%arg0: f32) -> f32 {
  %cst = arith.constant 2.500000e+00 : f32
  %cst_0 = arith.constant 1.000000e+00 : f32
  %0 = arith.mulf %arg0, %cst : f32
  %1 = arith.addf %0, %cst_0 : f32
  return %1 : f32
}
```
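The lowered IR computes exactly what the Julia function computes: one multiply followed by one add. A quick sanity check against a plain-Julia version of the same kernel (no compiler API assumed):

```julia
# Plain-Julia reference version of the kernel above.
plain_kernel(x) = x * 2.5f0 + 1.0f0

# arith.mulf then arith.addf on 2.0f0: 2.0 * 2.5 = 5.0, 5.0 + 1.0 = 6.0
@assert plain_kernel(2.0f0) == 6.0f0
```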
We're positioning Julia as the ultimate alternative to Rust and C++ for embedded and high-performance computing.
| Feature | Julia + Reactant | Rust | C++ |
|---|---|---|---|
| Memory Safety | ✅ | ✅ | ❌ |
| AOT Compilation | ✅ | ✅ | ✅ |
| MLIR Integration | ✅ | ❌ | ❌ |
| Embedded Ready | ✅ | ✅ | ✅ |
| High-Level Productivity | ✅ | ❌ | ❌ |
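The Memory Safety row above reflects, in part, Julia's default bounds checking: out-of-range accesses raise an error rather than touching arbitrary memory, and unchecked access is an explicit, local opt-in. A minimal plain-Julia illustration (no Reactant-specific API involved):

```julia
v = zeros(Float32, 4)

# An out-of-bounds access throws a BoundsError instead of silently
# reading past the end of the array, as an unchecked C++ indexing
# expression could.
try
    v[5]
catch err
    @assert err isa BoundsError
end

# Skipping the check is an explicit opt-in via @inbounds, as used in
# the vecadd example further down the page.
@inbounds v[4]  # safe: index 4 is within bounds
```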
Join us in building the future of Julia as a memory-safe, embedded powerhouse with world-class performance.
```julia
using Reactant
using MLIR

@reactant function vecadd(a, b)
    c = similar(a)
    @inbounds for i in eachindex(a)
        c[i] = a[i] + b[i]
    end
    return c
end

# Compile to MLIR and run
a = rand(Float32, 1000)
b = rand(Float32, 1000)
compiled = lower_to_mlir(vecadd)
result = compiled(a, b)
```
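Assuming `compiled(a, b)` returns an array comparable to a plain `Vector{Float32}`, as the sketch above implies, the result can be checked against ordinary Julia broadcasting:

```julia
# Reference result using plain Julia broadcasting.
expected = a .+ b

# The compiled kernel's output should agree elementwise.
@assert result ≈ expected
```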