Forget quantum computing – Fujitsu has a better idea

Fujitsu has teamed up with the University of Toronto to develop a new computing architecture which is thousands of times faster than a conventional machine, and which offers advantages over a quantum computer when it comes to swiftly solving real-world problems that require heavyweight analysis.

The new architecture sticks with conventional semiconductor tech, and is built on a ‘basic optimisation circuit’ implemented using FPGAs (field-programmable gate arrays). It offers flexible circuit configurations, and these basic circuits can be implemented in parallel at high densities.

The end result is that the architecture, which has already been prototyped, can perform computations around 10,000 times faster than a conventional computer, and it can also deal better with ‘combinatorial optimisation problems’ than a quantum computer.

Combinatorial what-now? Basically, this refers to complex real-world problems such as planning disaster recovery operations, formulating economic policy, optimising investment portfolios, or simultaneously managing multiple projects on a strict budget.

In other words, problems which involve a huge number of factors and elements that must be considered and evaluated in relation to each other, all weighed up together in an attempt to make the optimal decision. Real head-scratchers, as they might otherwise be known.
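To get a feel for why these problems are so tough, here's a minimal sketch in Python (our own illustration, with invented numbers – not Fujitsu's code): even a toy budget-constrained portfolio problem means sifting through every subset of the options, which is 2^n candidates for n assets, and that spirals out of control long before you reach the 1,024-bit problems Fujitsu's prototype targets.

```python
from itertools import combinations

# Toy budget-constrained 'portfolio' problem: pick assets maximising
# expected return without blowing the budget. Values are invented for
# illustration - nothing here comes from Fujitsu's research.
assets = {"A": (4, 10), "B": (3, 7), "C": (5, 12), "D": (2, 4)}  # name: (cost, return)
budget = 9

best_return, best_picks = 0, ()
# Brute force: n assets means 2**n subsets to check, which is why
# problems at the scale Fujitsu is targeting need something smarter
# than exhaustive search.
for r in range(len(assets) + 1):
    for picks in combinations(assets, r):
        cost = sum(assets[p][0] for p in picks)
        gain = sum(assets[p][1] for p in picks)
        if cost <= budget and gain > best_return:
            best_return, best_picks = gain, picks

print(best_picks, best_return)  # ('A', 'C') 22
```

Fujitsu's stated goal of handling problems of 100,000 bits to one million bits gives a sense of how far beyond brute force a practical machine needs to reach.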

Quantum annealing 

While quantum computers can tackle such problems way, way faster than a conventional computer using a process called ‘quantum annealing’, their weakness is that they can’t handle a wide range of problems due to the way they are built – the qubits in current annealing machines are only sparsely connected to one another, so many problems can’t be mapped onto the hardware directly.

Fujitsu’s new architecture, on the other hand, uses parallelisation to great effect, and unlike quantum computers it boasts a fully connected structure that allows signals to move freely between the basic optimisation circuits. That makes it capable of dealing with a wide range of problems and factors, while still offering the sort of speed seen with quantum computers.

Fujitsu says it has implemented basic optimisation circuits using an FPGA to handle combinations which can be expressed in 1,024 bits, and that when running a ‘simulated annealing’ process, these circuits tackled the aforementioned thorny combinatorial optimisation problems around 10,000 times faster than conventional processors.
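Simulated annealing itself is a classical algorithm that any processor can run – Fujitsu's advance is baking it into dense, parallel hardware. As a rough idea of what the algorithm does, here's a minimal Python sketch (our own illustration with made-up parameters, not Fujitsu's implementation): flip one bit of a candidate solution at a time, always keep improvements, and occasionally accept worse solutions with a probability that shrinks as a 'temperature' parameter cools, which lets the search climb out of local optima.

```python
import math
import random

def simulated_anneal(cost, n_bits, steps=10_000, t_start=5.0, t_end=0.01):
    """Minimise cost(state) over bit-strings of length n_bits.

    Generic simulated annealing sketch - illustrative only, not
    Fujitsu's FPGA implementation.
    """
    state = [random.randint(0, 1) for _ in range(n_bits)]
    energy = cost(state)
    for step in range(steps):
        # Cool the temperature exponentially from t_start towards t_end.
        t = t_start * (t_end / t_start) ** (step / steps)
        # Propose flipping one random bit.
        i = random.randrange(n_bits)
        state[i] ^= 1
        new_energy = cost(state)
        delta = new_energy - energy
        # Keep improvements; accept worse moves with probability e^(-delta/t).
        if delta <= 0 or random.random() < math.exp(-delta / t):
            energy = new_energy
        else:
            state[i] ^= 1  # revert the flip
    return state, energy

# Toy example: find a bit-string whose bits sum to a target value.
target = 12
best, e = simulated_anneal(lambda s: abs(sum(s) - target), n_bits=32)
print(sum(best), e)
```

On a CPU this loop evaluates one bit-flip at a time; Fujitsu's claimed speedup comes from implementing many such basic optimisation circuits in parallel at high densities, rather than running the algorithm in software.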

The company says it will work on improving the architecture going forward, and by the fiscal year 2018, it expects “to have prototype computational systems able to handle real-world problems of 100,000 bits to one million bits that it will validate on the path toward practical implementation”.

So a real machine which can pull off this sort of heavy-duty analysis trickery might not be so far from realisation.
