| Conference: | Verification Futures 2025 |
| Speaker: | Georg Meinhardt |
| Presentation Title: | Rethinking AI Inference and Hardware Verification through Differentiable Logic Gate Networks (difflogic) |
| Abstract: | Efficient AI inference is critical for edge computing and real-time systems. However, current hardware inference solutions such as binarized neural networks or quantization-based methods still rely heavily on resource-intensive operations like matrix multiplications and frequent memory access, limiting their throughput and power efficiency and increasing latency. In this presentation, we provide an introduction to differentiable logic gate networks (difflogic), a new neural network (AI) architecture designed specifically to address these limitations while enabling formal and functional verification directly at the logic-gate level. Logic gate networks are neural network architectures built only from fundamental digital circuit elements such as AND, OR, and XOR gates, completely eliminating matrix multiplications, integer arithmetic, and RAM-based weight storage. Previously, logic gate-based models required combinatorial optimization techniques for training, which limited their scalability and practical deployment. By employing differentiable relaxation techniques, logic gate networks can now be trained effectively using standard gradient descent methods. Using differentiable relaxations and end-to-end gradient-based learning leads to advances in both inference and verification efficiency. |
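To make the core idea concrete, here is a minimal, dependency-free sketch of the differentiable-relaxation technique the abstract describes. It is not the speaker's implementation: each Boolean gate is replaced by a real-valued surrogate that agrees with the gate on {0, 1} inputs and is smooth in between, and a "learnable gate" is a softmax mixture over candidate gates that gradient descent can push toward a single hard gate. The gate set, loss, and training loop here are illustrative assumptions.

```python
import math

# Real-valued relaxations of logic gates (exact on Boolean inputs,
# differentiable on [0, 1]^2).
def soft_and(a, b): return a * b
def soft_or(a, b):  return a + b - a * b
def soft_xor(a, b): return a + b - 2 * a * b

def softmax(logits):
    m = max(logits)
    e = [math.exp(x - m) for x in logits]
    s = sum(e)
    return [x / s for x in e]

def soft_gate(a, b, logits):
    """A learnable gate: a softmax mixture over candidate gate relaxations.
    Training drives the logits toward a one-hot choice, after which the
    gate can be hardened into a single discrete AND/OR/XOR for hardware."""
    w = softmax(logits)
    return (w[0] * soft_and(a, b)
            + w[1] * soft_or(a, b)
            + w[2] * soft_xor(a, b))

# Train one gate to fit the XOR truth table by gradient descent on
# squared error; numerical gradients keep the sketch self-contained.
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]

def loss(logits):
    return sum((soft_gate(a, b, logits) - y) ** 2 for a, b, y in data)

logits = [0.0, 0.0, 0.0]
eps, lr = 1e-5, 1.0
for _ in range(200):
    grad = []
    for i in range(3):
        bumped = logits[:]
        bumped[i] += eps
        grad.append((loss(bumped) - loss(logits)) / eps)
    logits = [p - lr * g for p, g in zip(logits, grad)]

# Harden: keep only the most likely gate (this is what maps onto a
# physical logic element and what gate-level verification operates on).
best = max(range(3), key=lambda i: logits[i])
print(["AND", "OR", "XOR"][best])
```

Because the hardened network is just a fixed netlist of AND/OR/XOR gates, with no weights in RAM and no arithmetic, it can be checked exhaustively or handed to standard gate-level formal verification flows, which is the verification angle the talk highlights.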
| Speaker Bio: | Georg Meinhardt is a Founding Engineer at DiffLogic Inc, the team commercializing differentiable logic gate networks for the financial industry. A mathematician (Oxford) turned hardware-AI specialist, he bridges algorithm design and chip implementation, pushing sub-10 ns neural inference on FPGAs. |