Neuromorphic Computing in 2026: Where Intel's Loihi and IBM's NorthPole Actually Stand


Every few years, the chip industry falls in love with a new paradigm that’s supposed to change everything. Quantum computing had its moment. Now it’s neuromorphic computing’s turn in the spotlight — and honestly, this one might actually deserve the attention.

Neuromorphic chips mimic the structure of biological neural networks. Instead of processing instructions sequentially like traditional CPUs, or crunching matrix math like GPUs, they use artificial neurons and synapses that fire asynchronously, processing information more like a brain does. The promise? Orders-of-magnitude improvements in energy efficiency for AI workloads.
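
To make “fire asynchronously” concrete, here is a minimal sketch of the leaky integrate-and-fire (LIF) model that most of these chips implement in silicon. It’s plain Python with illustrative parameters, not any vendor’s API:

    # Minimal leaky integrate-and-fire (LIF) neuron, the basic building block
    # most neuromorphic chips implement in silicon. Parameters are illustrative.
    def lif_step(v, input_current, decay=0.9, threshold=1.0):
        """Advance one timestep; return new membrane potential and spike flag."""
        v = v * decay + input_current   # leak a little, then integrate the input
        if v >= threshold:              # fire when the threshold is crossed...
            return 0.0, True            # ...and reset the membrane potential
        return v, False

    # Sparse, event-driven input: mostly nothing, punctuated by a brief burst.
    inputs = [0.0, 0.0, 0.6, 0.0, 0.0, 0.0, 0.7, 0.5, 0.0, 0.0]
    v = 0.0
    for t, current in enumerate(inputs):
        v, spiked = lif_step(v, current)
        if spiked:
            print(f"spike at timestep {t}")

Real chips run millions of these units in parallel and spend energy mainly when spikes actually arrive, which is where the efficiency claims come from.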

In early 2026, two chips dominate the conversation: Intel’s Loihi 2 and IBM’s NorthPole. Let’s look at where they actually are — not where press releases say they are.

Intel Loihi 2: The Research Darling

Intel’s neuromorphic research lab in Portland has been working on Loihi since 2017. The second-generation Loihi 2 chip, built on Intel 4 process technology, packs about 1 million artificial neurons per chip. Intel’s Hala Point system — their largest neuromorphic deployment — strings together 1,152 Loihi 2 chips for a total of 1.15 billion neurons.

That sounds impressive. And for specific tasks, it genuinely is.

Loihi 2 excels at sparse, event-driven workloads. Think sensor fusion, anomaly detection, and robotic control — tasks where most of the input data is “nothing’s happening” punctuated by brief bursts of activity. For these workloads, Loihi 2 uses roughly 1/100th the energy of an equivalent GPU solution.
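
That ratio follows from the computation model more than from exotic physics: a conventional accelerator evaluates the whole network every frame, while an event-driven chip only touches the neurons that actually receive a spike. A back-of-the-envelope sketch, with illustrative numbers rather than Intel’s measurements:

    # Back-of-the-envelope: why event-driven execution pays off on sparse input.
    # All numbers are illustrative, not measurements of Loihi 2 or of any GPU.
    timesteps = 10_000     # e.g. ten seconds of a sensor stream at 1 kHz
    neurons = 1_000_000    # roughly one Loihi 2 chip's worth of neurons
    activity = 0.001       # fraction of neurons that actually spike per step

    dense_ops = timesteps * neurons                   # evaluate everything, every step
    event_ops = int(timesteps * neurons * activity)   # touch only neurons that spiked

    print(f"dense: {dense_ops:.2e} ops, event-driven: {event_ops:.2e} ops")
    print(f"savings: {dense_ops / event_ops:.0f}x fewer operations")

Real energy ratios depend on per-operation costs and overheads, but the shape of the argument holds: when almost nothing is happening, an event-driven chip does almost nothing.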

Where Loihi struggles is generality. Intel’s Lava software framework for programming the chip has improved, but it’s still a specialist tool. You can’t just port a PyTorch model onto Loihi and expect it to work. You need to rethink your approach from scratch using spiking neural networks (SNNs), and the talent pool for that is tiny.

Intel’s commercial strategy for Loihi remains unclear. They’ve shipped evaluation systems to dozens of research institutions and a handful of defence contractors, but there’s no sign of a mass-market product. The Gaudi accelerator line gets the commercial AI focus. Loihi remains a research project with incredible potential and limited real-world deployment.

IBM NorthPole: The Efficiency Champion

IBM took a different approach with NorthPole, announced in late 2023 and now in its second revision. Rather than trying to fully replicate biological neurons, NorthPole borrows specific ideas from brain architecture — particularly the tight coupling of memory and compute — while keeping enough conventional design to be more programmable.

The results are striking. NorthPole achieves roughly 25 times better energy efficiency than leading GPUs on inference tasks like image classification. It does this by sidestepping the memory bottleneck — the von Neumann bottleneck — that plagues traditional architectures: the entire model’s weights sit in on-chip memory next to the compute units, so inference never has to reach out to external DRAM.
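
The underlying intuition is that in modern silicon, moving data costs far more energy than computing on it. A rough sketch using commonly cited ballpark figures (not IBM’s published numbers):

    # Why keeping weights on-chip matters: data movement dominates inference energy.
    # Per-operation energies are widely cited orders of magnitude, in picojoules,
    # not measurements of NorthPole or of any particular GPU.
    E_MAC_PJ = 1.0        # one multiply-accumulate
    E_SRAM_PJ = 5.0       # fetching an operand from nearby on-chip SRAM
    E_DRAM_PJ = 640.0     # fetching the same operand from off-chip DRAM

    macs = 4e9            # roughly one inference pass of a mid-sized CNN

    arithmetic = macs * E_MAC_PJ
    weights_off_chip = macs * E_DRAM_PJ   # weights streamed in from DRAM every pass
    weights_on_chip = macs * E_SRAM_PJ    # weights parked next to the compute units

    for label, pj in [("arithmetic only", arithmetic),
                      ("weights from off-chip DRAM", weights_off_chip),
                      ("weights in on-chip memory", weights_on_chip)]:
        print(f"{label:28s} {pj * 1e-9:7.0f} mJ")

The absolute numbers are crude, but the gap between the last two lines is the whole design argument for fusing memory and compute.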

NorthPole’s limitation is that it’s inference-only. You can’t train models on it. And it’s optimised for specific neural network architectures — convolutional networks work beautifully, but large transformer models don’t map well to the architecture.

IBM has been more commercially aggressive than Intel, partnering with several automotive companies for edge AI applications. If you need to run a vision model in a self-driving car at minimal power consumption, NorthPole is genuinely compelling. But it’s not going to replace your data centre GPUs anytime soon.

The Competitive Landscape Beyond the Big Two

It’d be a mistake to focus only on Intel and IBM. Several other players are doing interesting work.

BrainChip’s Akida is arguably the most commercially mature neuromorphic processor today. The Australian-listed company (yes, neuromorphic’s biggest commercial bet trades on the ASX in Sydney) has its second-generation Akida chip in production and shipping to customers in automotive, IoT, and defence. It’s less powerful than Loihi or NorthPole, but it’s real silicon you can actually buy.

SynSense, a startup spun out of Zurich’s Institute of Neuroinformatics, is focused on ultra-low-power edge applications. Its Xylo chip targets hearing aids and wearable health monitors — applications where the power budget is measured in microwatts.

GrAI Matter Labs (acquired by Snap in 2023) went the consumer route, building neuromorphic processors for AR/VR applications. Whether Snap will actually productise this technology remains to be seen.

The Fundamental Question

Here’s what I keep coming back to: do we actually need neuromorphic chips?

NVIDIA’s conventional GPU architecture keeps getting more efficient. AMD is catching up. Even Google’s TPUs handle inference workloads at respectable power levels. The software ecosystem around these platforms is mature, the talent pool is deep, and the performance is good enough for most applications.

Neuromorphic chips shine at the edges — literally. Edge computing, autonomous vehicles, space applications, battery-powered devices — anywhere power efficiency matters more than raw throughput. For cloud data centres running massive transformer models, conventional architectures probably win for the foreseeable future.

The most likely path forward isn’t neuromorphic replacing conventional compute. It’s heterogeneous systems that use the right processor for each workload. Your phone might have a neuromorphic co-processor handling always-on sensing while the main GPU handles heavy lifting.

What to Watch in 2026

Three things will determine whether neuromorphic goes mainstream or stays niche:

  1. Software tooling. If someone builds a compiler that can automatically convert standard neural networks to spiking neural networks without manual intervention, the adoption barrier drops dramatically (a toy sketch of the usual rate-coding approach follows this list).

  2. Killer application. Neuromorphic needs its “AlexNet moment” — a demonstration so compelling that the industry can’t ignore it. Autonomous vehicles might be it, but we’re still waiting.

  3. Manufacturing scale. Chips get cheap when you make billions of them. Neuromorphic volumes are still tiny, keeping costs high.
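
On the first point, the most common research approach today is rate coding: copy a trained network’s weights into integrate-and-fire neurons and let each neuron’s firing rate over a time window approximate its original activation. A toy sketch for a single ReLU layer, illustrative only and nowhere near a production converter:

    import numpy as np

    # Toy ANN-to-SNN conversion by rate coding. Illustrative only: real
    # converters also rescale weights and thresholds layer by layer.
    rng = np.random.default_rng(0)
    W = rng.normal(size=(4, 8)) * 0.1     # "trained" weights of one ReLU layer
    x = rng.uniform(size=8)               # one input sample

    relu_out = np.maximum(W @ x, 0.0)     # the ANN activation we want to mimic

    T = 1000                              # simulation window in timesteps
    v = np.zeros(4)                       # membrane potentials
    spike_counts = np.zeros(4)
    threshold = 1.0

    for _ in range(T):
        v += W @ x                        # constant input current each step
        fired = v >= threshold
        spike_counts += fired
        v[fired] -= threshold             # "soft reset" keeps the residual charge

    snn_out = spike_counts * threshold / T  # firing rate approximates ReLU(W @ x)
    print("ANN:", np.round(relu_out, 3))
    print("SNN:", np.round(snn_out, 3))

The catch is that this kind of conversion tends to lose accuracy and need long simulation windows as networks get deeper, which is exactly the gap better tooling would have to close.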

I’m cautiously optimistic. The physics argument for neuromorphic computing is sound — brains process incredible amounts of information on about 20 watts. We should be able to build chips that capture some of that efficiency. Whether the industry has the patience to see it through is another question entirely.