3D Virtual Model

3D Engine

Metaverse Tools

List

Platforms/Engines

3D Modeling Tools

Avatar creation

Augmented Reality

Metaverse Environment

Metaverse Map

Python - Simulation Connection

CFD

HMI Python

Tutorial

Motor Simulator

Physics Simulation

Experience

  • Matplotlib is very slow given the amount of data I try to display.
  • Modern browser-based solutions like Bokeh are likely to have similar limitations and are meant mostly for 2D data.
  • Several libraries based on OpenGL are missing step-by-step tutorials, have sparse docs, or are deprecated.
  • There are still solutions like PyQtGraph, VisPy or Mayavi that I could try (which is the best to begin with for the described purpose? See the sketch after this list.)
  • A pure PyOpenGL solution would lack all the goodies of simple plotting capabilities, but I could potentially write them myself.
  • A library like Open3D could be nice, but the way it handles data transformations makes it infeasible for my use case.
  • ROS/RViz or other well-known robotics tools.
  • Pygame for 2D animations.
  • Panda3D for 3D animations.
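
Of those candidates, PyQtGraph is probably the lowest-friction place to start: it sits on Qt and OpenGL but still feels like a plotting library. A minimal sketch, assuming pyqtgraph >= 0.12 with a Qt binding (PyQt5/PyQt6) and PyOpenGL installed; the point cloud and update rate are placeholders:

```python
# Minimal PyQtGraph/OpenGL sketch: animate a large 3D point cloud.
# Assumes pyqtgraph >= 0.12 with a Qt binding (PyQt5/PyQt6) and PyOpenGL installed.
import numpy as np
import pyqtgraph as pg
import pyqtgraph.opengl as gl
from pyqtgraph.Qt import QtCore

app = pg.mkQApp("3D scatter demo")

view = gl.GLViewWidget()
view.setWindowTitle("GLScatterPlotItem demo")
view.setCameraPosition(distance=40)
view.show()

# 100k random points -- far more than Matplotlib's 3D axes handle comfortably.
pos = np.random.normal(size=(100_000, 3)) * 10
scatter = gl.GLScatterPlotItem(pos=pos, size=2.0, color=(0.2, 0.8, 1.0, 0.6))
view.addItem(scatter)

def update():
    # Rotate the whole cloud a little each frame to stand in for streaming data.
    global pos
    theta = np.radians(1.0)
    rot = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                    [np.sin(theta),  np.cos(theta), 0.0],
                    [0.0, 0.0, 1.0]])
    pos = pos @ rot.T
    scatter.setData(pos=pos)

timer = QtCore.QTimer()
timer.timeout.connect(update)
timer.start(16)  # ~60 FPS

if __name__ == "__main__":
    pg.exec()
```

VisPy covers similar ground at a lower level; if this GLScatterPlotItem/GLMeshItem style of workflow turns out to be too limited, that would be the next thing to try.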

Discrete Event Simulation

Python Blog

SCADA HMI

Digital Twin Blog

Embedded Machine Learning Platform

Edge Impulse
Welcome - Expert Projects

Digital Twin Philosophy

Digital twins are an interesting idea that, like "cognitive computing", is easily abused by marketing, and will probably rake in a lot of consulting fees for people like the authors of this piece (Accenture Research) and companies like IBM.
The essence of a digital twin is a simulation complex enough to be useful in making predictions in the real world. (That's the "twin" part.) Making complex simulations, as you might imagine, is difficult. It requires effort, deep domain knowledge (rare talent), good feedback mechanisms with the real situation in question, and some means of managing that complexity.
Digital twins do exist in deployment. What differentiates them from, say, any old machine-learning model you might use for predictions is that a "digital twin" is probably used for a more complex task than just classification. That is, it's probably used to direct the actions of a system. The words imply a larger solution.
So one thing you see is simulations that embed machine-learning models and predict what actions to take in a given state. Think of it like AlphaGo applied to business scenarios.
What are the pitfalls? Real-world data in these environments is non-stationary and messy, so signal may be low, or the ways you find signal might change over time.
To make the "digital twin" useful you are probably integrating with large software systems not entirely in your control, which may be hard to reason about (ERP systems like SAP).
The digital twin idea, insofar as it includes large parametric models that depend on algorithms like deep reinforcement learning, matters now, because those models are able to find structure in complexity, and make ever more accurate predictions about what to do. That is, we're able to identify optimal actions in more complex situations, with techniques more sophisticated than expert systems.
All that aside, this sort of thing is already getting deployed under the right circumstances, and you could argue that it is the future of a lot of business operations in supply chain and manufacturing.

Companies like GE, SAP, MathWorks, Dassault, PTC and Siemens all have digital twin platforms used by major manufacturers. Initially there was a period (2003-2013) when twin systems were built by specialist media developers using third-party authoring tools and integrating with different system simulation tools. While this still happens for specialist or niche projects (or sometimes on very large projects with specific deadlines that require outsourced help), most manufacturers increasingly build digital twins directly from design assets (CAD, CAM, systems simulations, etc.) as part of the in-house product development process, using extensions to their existing design and simulation tooling.

The one distinguishing feature (in theory) of digital twins is it is supposed to be such a hyper accurate model that it can be used to predict absolutely anything about the system in question. No changing of model setup, it's a "perfect" representation.
The downside is that everything explodes exponentially - setup time, mesh count, solve time; and we usually get worse results than more focused simulations because we can't squeeze enough detail in across the board.
It generally starts because some manager hears that we've created 8 different specialized models of something due to different areas of interest, and has the bright idea of "let's just create a single super-accurate model we can use for everything". I've been fighting against them my entire career, although 10 years ago it was "virtual mockups".
The next buzzword in the pipeline seems to be "virtual lab" which I can't figure out either. I've been simulating laboratory tests for over a decade and no one can explain to me why that isn't exactly what we're already doing.
None of this is to say that this team isn't doing great work, but somewhere along the way it got wrapped up in some marketing nonsense.

Though it's a buzzword now, the idea behind 'digital twins' was that you not only have a detailed and faithful model (of an item, or process, or system, or network, etc.) whose granularity is congruent with the level of granularity that interests you about the real thing, but you also have bi-directional movement of data between the 'real' thing and its model.
So you can have sensor and measurement data from the real thing be streamed to the model in (ideally) real-time, you can make decisions off of the state of the model, and have those decisions be sent back out into the real world to make a change happen.
The specific wording of digital twins originated from a report discussing innovations in manufacturing, but I find that railway systems and operations make for some of the best examples to explain the concept, because they manage a diverse set of physical assets over which they have partial direct control, and apply conceptual processes on top of them.

I work as a Computational Researcher at Stanford Med. My work is quite literally translating 3D scans of the eyes (read MRI) into "digital twins" (read FEA Models).
I think that there is a subtlety in differentiating a digital twin from a model/simulation in intent. Our intent is to quite literally figure out how to use the digital twin specifically, NOT the scan that it is based on, as a way to replace more invasive diagnostics.
Of course, in the process, we figure out more about diagnosing medical problems as a function of just the scans themselves too.

I am familiar with it in the aerospace industry. Digital twin implies a higher degree of fidelity in importing data from sensors and modeling the physics than just "model" or "simulation" might imply, even though it is a model and a simulation.
For GE's digital twins of their jet engines, they will build a high-fidelity representation of each individual engine based on as-built parts, and then they will simulate every flight based on the accelerometers, force sensors, humidity sensors, and temperature and pressure sensors which they have placed in the engine. This is different from a general model or simulation, which would be built from CAD, run through a series of expected flight simulations, and used to predict the life of the engine.

I work with digital twins in chemical manufacturing, and there the term is directly coupled with Model Predictive Control. The basic idea is that you build a model of the system (e.g. a chemical plant) you want to control, use that model to optimize controller behavior, apply the results to real controllers in the real system, and then sample the system to reground the model. Rinse, repeat. Such a model is called the "digital twin" of the real system - the idea is that it exists next to the system and is continuously updated to match the real world.
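
The loop itself is simple enough to sketch. Everything below (the toy first-order model, the one-step optimizer, the fake plant) is a placeholder introduced for illustration, not any particular MPC library:

```python
# Hypothetical sketch of the digital-twin / MPC loop described above:
# predict with the model, optimize the control move, apply it to the plant,
# then re-fit ("reground") the model against fresh measurements.
import numpy as np
from scipy.optimize import minimize_scalar

class TwinModel:
    """Toy first-order model x[k+1] = a*x[k] + b*u[k] standing in for the plant model."""
    def __init__(self, a=0.9, b=0.5):
        self.a, self.b = a, b

    def predict(self, x, u):
        return self.a * x + self.b * u

    def reground(self, xs, us, xs_next):
        # Least-squares re-fit of (a, b) from logged plant data.
        A = np.column_stack([xs, us])
        self.a, self.b = np.linalg.lstsq(A, xs_next, rcond=None)[0]

def optimize_control(model, x, setpoint):
    # Pick u minimizing the predicted deviation from the setpoint (1-step horizon).
    cost = lambda u: (model.predict(x, u) - setpoint) ** 2
    return minimize_scalar(cost, bounds=(-1.0, 1.0), method="bounded").x

def plant_step(x, u):
    # The "real" plant: similar to the model but not identical, plus noise.
    return 0.92 * x + 0.48 * u + np.random.normal(scale=0.01)

x, setpoint = 0.0, 1.0
model, log = TwinModel(), []
for k in range(50):
    u = optimize_control(model, x, setpoint)     # optimize against the twin
    x_next = plant_step(x, u)                    # apply to the real system
    log.append((x, u, x_next))                   # sample the system
    if len(log) >= 3:                            # need a few samples before re-fitting
        xs, us, xs_next = map(np.array, zip(*log))
        model.reground(xs, us, xs_next)          # update the twin to match reality
    x = x_next
print(f"final state = {x:.3f} (setpoint {setpoint})")
```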

Digital twins aren't simulations, they're just data. The idea is that it's openly available and easily queryable so that people have good data to run their models and simulations on.
Even real-life twins don't exactly match. No digitization process can be error-free; the "digital twin" language is there to indicate that it's supposed to be a realistic model rather than an editorialized/interpreted environment.

A digital twin uses sensor data plus machine learning against a digital representation of the system to predict failures and tune performance.
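
As a rough illustration of that pattern (the library choice, sensor features and thresholds are all assumptions, not from any particular product), an anomaly detector trained on healthy telemetry and run against new readings:

```python
# Illustrative sketch: anomaly detection over sensor readings as a stand-in for
# "sensor data + machine learning to predict failures". Data and thresholds are synthetic.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Healthy operation: vibration, temperature, pressure around nominal values.
healthy = rng.normal(loc=[1.0, 70.0, 3.0], scale=[0.05, 1.5, 0.1], size=(5000, 3))

# Fit the detector on data recorded while the asset was known to be healthy.
detector = IsolationForest(contamination=0.01, random_state=0).fit(healthy)

# New window of telemetry: mostly nominal, plus a drifting bearing (rising vibration/temp).
nominal = rng.normal(loc=[1.0, 70.0, 3.0], scale=[0.05, 1.5, 0.1], size=(95, 3))
drifting = rng.normal(loc=[1.6, 78.0, 3.0], scale=[0.05, 1.5, 0.1], size=(5, 3))
window = np.vstack([nominal, drifting])

flags = detector.predict(window)  # +1 = normal, -1 = anomalous
print(f"flagged {np.sum(flags == -1)} of {len(window)} samples as potential failures")
```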

Real-time modeling that includes real-time data is starting to be called digital twinning in various industries, e.g. each car coming off a line has the specific torque of each bolt traveling along with it as a digital twin.
Power plants with sensors on all the equipment, used to predict preventive maintenance needs, are being called digital twins too.

Though the term digital twin was occasionally in use in the virtual reality and manufacturing community as a concept in the mid-1990s, Dr. Michael Grieves explicitly defined the term in 2001 for a digital version of a physical system as part of an overall product lifecycle process.
The term has been widely used in manufacturing since then and has been a key sales message for companies like IBM, Dassault Systèmes and Siemens, amongst others. United Technologies, for example, have been demonstrating digital twin concepts for routing in-flight engine telematics into 3D virtual engine simulations for troubleshooting and maintenance since 2005.

The term regularly crops up in manufacturing, maintenance, smart cities, IoT and more recently AI related proposals and projects and generally means a virtual simulation that uses real data to simulate a complex system.

As pointed out by many over the years, the concepts are also described by Prof. David Gelernter in his book 'Mirror Worlds: Or the Day Software Puts the Universe in a Shoebox…How It Will Happen and What It Will Mean', first published in 1991.

Digital Twin Software - GE Digital
How Digital Twins Can Transform Track Maintenance - Railway Age
Realizing the Potential for Digital Twins in Rail - Railway Age

The CEO of Ford Motor Company, Jim Hackett, has claimed that for autonomous vehicles to work resiliently at scale they require:

  1. vehicle-centered sensors,
  2. a way for vehicles to share their sensors with nearby vehicles,
  3. detailed models of the cities in which they operate, and
  4. wireless networks capable of transporting all this data to where it's needed.

He calls all this the "mobility operating system."
In the interview I heard (I wish I could remember where, sorry) Mr. Hackett said he believes the defensible intellectual property for autonomous cars will be those detailed city models.
So, I am delighted to hear that municipalities are getting ahead of this issue with respect to city models.

Freecad Animation with Python

VPython Animation with Python

Animation not Python

Candidate List

Youtube

SVG Animation

3D BLDC Motor

BLDC Motor Simulation

HMI Python

Python GUI

Python Graph

Python Game Library
Pygame, PyOpenGL, Pyglet, and Panda3D

Seeed Studio Hardware

Machine Learning Platform

Freecad Headless

Digital Twin Project

Web based SCADA

Web 3D

IoT Platform

The Route:
Use Python, build the model on top of headless FreeCAD, with a PyQt GUI.
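
A rough starting sketch of that route, assuming FreeCAD is importable as a Python module (the library path and the Part::Box stand-in geometry are assumptions, not the final model):

```python
# Sketch of the planned route: drive headless FreeCAD from Python to build/update the
# model geometry, then export a mesh that a PyQt/pyqtgraph view can load and display.
# The FreeCAD library path below is an assumption -- adjust it to your installation.
import sys
sys.path.append("/usr/lib/freecad-python3/lib")  # e.g. on Debian/Ubuntu; varies by platform

import FreeCAD as App
import Part   # ensures the Part workbench types are registered
import Mesh

def build_motor_housing(length=60.0, width=40.0, height=30.0, out_path="housing.stl"):
    """Create a simple parametric box as a stand-in for the real motor housing model."""
    doc = App.newDocument("DigitalTwin")
    box = doc.addObject("Part::Box", "Housing")
    box.Length = length
    box.Width = width
    box.Height = height
    doc.recompute()

    # Export as STL so the PyQt GUI (e.g. a pyqtgraph GLMeshItem) can render it.
    Mesh.export([box], out_path)
    App.closeDocument(doc.Name)
    return out_path

if __name__ == "__main__":
    print("exported", build_motor_housing())
```

The same script structure also runs inside the FreeCADCmd console, which keeps the geometry-building step completely headless while the GUI only consumes exported meshes.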
