Winbond introduces CUBE architecture for Edge AI devices


Winbond Electronics has unveiled an enabling technology designed to make Edge AI computing more affordable in mainstream use cases.

The company’s new customised ultra-bandwidth elements (CUBE) memory technology is optimised for running generative AI in hybrid edge/cloud applications.

CUBE has been designed to enhance the performance of front-end 3D structures such as chip on wafer (CoW) and wafer on wafer (WoW), as well as back-end 2.5D/3D chip on Si-interposer on substrate and fan-out solutions. It aims to meet the growing demands of edge AI computing devices, supporting memory densities from 256Mb to 8Gb on a single die. It can also be 3D stacked to increase bandwidth while reducing data transfer power consumption.

Winbond said that CUBE will enable seamless deployment across various platforms and interfaces - the technology is suited to advanced applications such as wearable and edge server devices, surveillance equipment, ADAS, and co-robots.

"The CUBE architecture enables a paradigm shift in AI deployment," said a spokesperson for Winbond. "We believe that the integration of cloud AI and powerful edge AI will define the next phase of AI development. With CUBE, we are unlocking new possibilities and paving the way for improved memory performance and cost optimisation on powerful Edge AI devices."

Key features of CUBE include:

  • Power efficiency: CUBE consumes less than 1pJ/bit, ensuring extended operation and optimised energy usage.
  • Improved performance: With bandwidth capabilities ranging from 32GB/s to 256GB/s per die, CUBE ensures accelerated performance that exceeds industry standards.
  • Compact size: CUBE offers a range of memory capacities from 256Mb to 8Gb per die, based on a 20nm process today and 16nm in 2025, allowing CUBE to fit seamlessly into smaller form factors. The introduction of through-silicon vias (TSVs) further enhances performance by improving signal and power integrity. TSVs also reduce the IO area through a smaller pad pitch and improve heat dissipation, especially when the SoC is placed on the top die and CUBE on the bottom die.
  • Cost-Effective Solution with High Bandwidth: The CUBE IO offers a data rate of up to 2Gbps across a total of 1K IO. When paired with legacy foundry processes such as 28nm/22nm SoC, CUBE can deliver ultra-high bandwidth of 32GB/s to 256GB/s (comparable to HBM2 bandwidth), equivalent to the combined bandwidth of 4 to 32 LP-DDR4x parts running at 4266Mbps with x16 IO.
  • Reduction in SoC Die Size for Improved Cost Efficiency: By stacking the SoC (top die, without TSVs) atop the CUBE (bottom die, with TSVs), the SoC die size can be minimised, eliminating any TSV penalty area. This not only enhances cost advantages but also contributes to overall efficiency, including a smaller form factor for Edge AI devices.
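The bandwidth figures quoted in the list above can be sanity-checked with quick arithmetic. The sketch below uses only the numbers from the article (1K IO at up to 2Gbps per pin, and LP-DDR4x at 4266Mbps with a x16 interface); the LP-DDR4x comparison is approximate:

```python
# Illustrative arithmetic only; figures are taken from the article above.
IO_COUNT = 1024        # "1K IO"
DATA_RATE_GBPS = 2.0   # up to 2Gbps per IO pin

# Peak CUBE bandwidth: pins * rate, converted from bits to bytes.
peak_bw_gbs = IO_COUNT * DATA_RATE_GBPS / 8
print(f"Peak CUBE bandwidth: {peak_bw_gbs:.0f} GB/s")  # 256 GB/s

# One LP-DDR4x x16 channel at 4266Mbps per pin.
lpddr4x_bw_gbs = 4.266 * 16 / 8
print(f"LP-DDR4x x16 bandwidth: {lpddr4x_bw_gbs:.2f} GB/s")  # ~8.53 GB/s

# How many LP-DDR4x parts match the peak figure (roughly the
# "32pcs" upper bound quoted in the article).
print(f"Equivalent LP-DDR4x parts: {peak_bw_gbs / lpddr4x_bw_gbs:.0f}")
```

At the 256GB/s upper end this works out to roughly 30 LP-DDR4x parts, in line with the 4 to 32 range Winbond cites across the 32GB/s to 256GB/s span.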

"CUBE can unleash the full potential of hybrid edge/cloud AI to elevate system capabilities, response time, and energy efficiency," according to Winbond.

Winbond is actively engaging with partner companies to establish the 3DCaaS platform, which will leverage CUBE's capabilities.