Model factories


Simulation now makes it possible to develop scalable, cost-effective virtual worlds that eliminate costly product and manufacturing iterations.

At first sight, Nvidia and Siemens do not seem natural bedfellows: one a chipmaker that grew on the back of the computer-gaming industry and its continuing push for realism in 3D graphics, the other an industrial conglomerate with almost no commercial interest in the entertainment business.

But things have changed. Nvidia now sees much of its business coming from the rise of machine learning and, thanks to sales of high-end accelerators, is putting much more emphasis on industrial simulation. Siemens, for its part, has gradually built up its own simulation interests, with purchases ranging from Mentor Graphics to MultiMechanics and Nextflow.

At the end of June, Nvidia and Siemens announced a deal to work together on the “industrial metaverse”, with public backing from manufacturers such as BMW and from infrastructure operators such as rail company Govia Thameslink and Norwegian electricity-distribution operator Elvia.

The focus of that announcement was the concept of the digital twin: a virtual representation of a complete factory, of the substations and delivery points that make up a typical electricity grid, or of the trains, signalling and sensors that make up a rail network.

In a panel on digital twins at the Design Automation Conference (DAC) in San Francisco earlier in the summer, Bryan Ramirez, director of industries, solutions and ecosystems at Siemens EDA, explained their value to design teams: “By designing, verifying and optimising products and production, before taking them to the real world, you can reduce costs and improve efficiency. By developing scalable cost-effective virtual worlds, you can get to market faster, eliminating costly product and manufacturing iterations. And you can reduce risks by ensuring designs are fully functional and that they satisfy requirements.”

It is not just product design on which Siemens has set its sights, but the factories used to make them. Speaking at the event to mark the Nvidia-Siemens deal, BMW management board member Milan Nedeljković said, “To bring everything as digital twins into a virtual world opens up a completely new dimension. We are getting the opportunity to set up our systems, plan the systems and eventually even steer the whole plant with it. It's a huge, huge field.”

Digital twin modelling

The future Siemens and others in the industrial space anticipate is one in which the digital twin models changes made at the software level and pushes them out automatically across the hardware infrastructure when they are deemed ready, mimicking the way online-services companies such as Amazon push changes out to the thousands of servers in their data centres.

Though such updates cannot alter the hardware on the shopfloor, they can change how those machines and robots interact, provided they are designed to be updated or, more likely, contain a basic set of functions that are managed and coordinated by upstream computers.

Whereas machine tools today are often the result of bespoke projects carried out by a specialist integrator for a customer, the direction is towards the “white box” model now found in telecommunications.

What gets physically delivered is a fairly generic piece of hardware that receives software updates over its lifetime that activate functions and support new services when needed. A second element is that, if these machines are able to use high-speed networks such as gigabit ethernet or private-5G cellular, they can call upon other local or edge servers to deal with functions, such as machine-learning models, that need more compute resources.
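As a rough illustration of that second element, the sketch below shows how a machine controller might hand a quality-inspection inference call to a nearby edge server over the plant network. The endpoint address, route and response fields are all hypothetical, not drawn from any vendor’s API.

```python
# Hypothetical sketch: a shopfloor controller offloading a quality-
# inspection inference to a nearby edge server instead of running the
# model on its own embedded board. Endpoint and fields are invented.
import requests

EDGE_URL = "http://edge-server.local:8000/infer"  # placeholder address

def inspect_part(image_bytes: bytes) -> bool:
    """Send a captured image to the edge inference service and report
    whether the part passes inspection."""
    resp = requests.post(
        EDGE_URL,
        files={"image": ("part.jpg", image_bytes, "image/jpeg")},
        timeout=0.5,  # keep latency bounded for the production line
    )
    resp.raise_for_status()
    # "defect_probability" is an assumed response field
    return resp.json()["defect_probability"] < 0.05
```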

“Artificial intelligence is the next big thing for automation in a production field: we can use it for camera systems for quality inspection, for logistics and transport, in maintenance and machine steering,” says Nedeljković.

Dirk Didascalou, CTO of Siemens Digital Industries, says these approaches were used at the company’s electronics factory in Amberg in southern Germany.

“We deployed smart robotics, AI-powered processes and controls, predictive-maintenance algorithms and all that to achieve 140 per cent factory output with double the product complexity. The most interesting part was that it was without needing additional resources.”

Didascalou points to machine-tool builder Heller as an early example of what he calls the “hardware as a service” market for programmable machine tools.

According to Heller, there is a lot you can do as long as you have the computing resources nearby. In the machining of metal engine blocks for road vehicles, for example, the robot needs access to a large variety of tools that are stored in magazines. A mechanical arm inside the machine removes the tools from their shelves in a known sequence and places them in a transfer compartment; from there they go into the spindle to perform the job before being replaced. With a predictable sequence, you can easily predict where each tool needs to be.

In more flexible production environments, the waiting time for tool movements can exceed the actual machining time. To improve the flow, Heller developed algorithms, running on an edge server close to the machine, that calculate the optimum sequence of movements and speed up production, a cheaper alternative to upgrading the machinery itself.
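To see why this matters, consider a toy version of the scheduling problem. The sketch below is illustrative only, not Heller’s actual algorithm: it compares fetching each tool on demand with pre-fetching the next tool into the transfer compartment while the current one is cutting, so the spindle only waits when a fetch takes longer than the ongoing cut.

```python
# Toy model: compare on-demand tool fetching with pre-staging the next
# tool during the current cut. Times are in seconds and invented.
from dataclasses import dataclass

@dataclass
class Tool:
    fetch_time: float  # arm time to pull the tool from the magazine
    cut_time: float    # spindle time using the tool

def time_on_demand(jobs: list[Tool]) -> float:
    """Fetch each tool only when the spindle asks for it."""
    return sum(t.fetch_time + t.cut_time for t in jobs)

def time_prestaged(jobs: list[Tool]) -> float:
    """Fetch the next tool while the current one cuts; the spindle only
    waits when a fetch outlasts the ongoing cut."""
    if not jobs:
        return 0.0
    total = jobs[0].fetch_time  # the first tool always has to be fetched
    for current, nxt in zip(jobs, jobs[1:]):
        total += current.cut_time + max(0.0, nxt.fetch_time - current.cut_time)
    total += jobs[-1].cut_time
    return total

jobs = [Tool(8.0, 30.0), Tool(12.0, 5.0), Tool(6.0, 40.0)]
print(time_on_demand(jobs))  # 101.0: every fetch stalls the spindle
print(time_prestaged(jobs))  # 84.0: only the short cut causes a wait
```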

“Now they go even further,” says Didascalou. “They have a completely new business model where a customer can buy or lease a machine at a much lower price with the basic functionality and then later when they need it, or if they just want to try it, upgrade to premium features at the press of a button. It's literally the equivalent of in-app purchase made possible nowadays for industrial operations thanks to digitalisation.”

Deployment

At the deployment end, industrial suppliers are turning to technologies that come from Nvidia’s adopted home of the cloud data centre. A major element of this digitalised factory is the idea that tasks can easily be parcelled up and despatched to whatever hardware around the factory has capacity and the required level of communications to the machines they will be coordinating. The cloud’s answer to that, which has been taken up by Arm, among others, through Project Cassini, is to use containerisation as a way of packaging the software.

Containers parcel applications together with the specific libraries and operating-system functions they need so that they can be deployed to any compatible computer, whether it is an embedded module or a cloud server. The main requirement is that it runs Linux or a similar operating system and that there is some kind of hardware virtualisation in place. Though industrial systems present developers with a much wider range of I/O than a data-centre application, much of that hardware sensitivity can, in principle, be confined to the boards inside the robots and machines while the higher-performance computers that coordinate them run the vanilla Linux environments able to receive the containers.
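In practice, the deployment step can be as simple as pointing a container client at each target host. The sketch below uses the Docker SDK for Python; the host addresses and image tag are hypothetical, but the pattern of pushing one packaged application to an embedded module and an edge server alike is the point.

```python
# Sketch: deploying the same container image to two very different
# Linux targets. Host addresses and image tag are placeholders.
import docker

TARGETS = {
    "robot-cell-3": "tcp://10.0.1.23:2375",  # embedded module on a machine
    "edge-rack-a": "tcp://10.0.0.5:2375",    # edge server in a nearby cabinet
}
IMAGE = "registry.local/line-control:1.4.2"  # placeholder image tag

for name, url in TARGETS.items():
    client = docker.DockerClient(base_url=url)
    client.images.pull(IMAGE)  # fetch the app plus its bundled libraries
    client.containers.run(
        IMAGE,
        detach=True,
        name="line-control",
        restart_policy={"Name": "always"},  # survive reboots on the shopfloor
    )
    print(f"deployed {IMAGE} to {name}")
```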

In a model built around the digital twin, developers would first create applications for the virtual model running somewhere in the cloud, running tests to make sure a change does not upset some process down the line. Once they are satisfied, orchestration tools deploy the new or reworked applications inside their containers to each of the target systems, whether they are on the shopfloor, in nearby edge-server cabinets or some way away in a data centre.
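That flow can be expressed as a simple gate, sketched below with hypothetical service endpoints and response fields: the candidate image is first run through simulated scenarios on the cloud-hosted twin, and only rolled out to the physical targets once the virtual run passes.

```python
# Sketch of a twin-gated rollout. The twin and orchestrator endpoints,
# routes and response fields are all assumptions for illustration.
import requests

TWIN_API = "https://twin.example.com/plant-9"    # placeholder twin service
ORCHESTRATOR = "https://deploy.example.com/api"  # placeholder orchestrator

def validate_on_twin(image: str) -> bool:
    """Run the candidate image through simulated production scenarios
    on the virtual model and report pass/fail."""
    r = requests.post(f"{TWIN_API}/simulate", json={"image": image}, timeout=600)
    r.raise_for_status()
    return r.json()["all_scenarios_passed"]  # assumed response field

def release(image: str, targets: list[str]) -> None:
    """Deploy only after the twin signs off on the change."""
    if not validate_on_twin(image):
        raise RuntimeError(f"{image} failed twin validation; not deploying")
    for target in targets:  # shopfloor, edge cabinets, data centre
        requests.post(f"{ORCHESTRATOR}/deploy",
                      json={"image": image, "target": target}, timeout=60)
```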

The next phase in the evolution is to ensure that this approach to development and integration works from beginning to end. Siemens CEO Roland Busch says it is about making real-time decisions with confidence, using the digital twin to check everything before hitting the deployment button. “It means you really change the real world, and you better get it right first time.”