AI is a game changer
Monday, February 6, 2023
Austin Harris
Jonathan Girroir explains why AI will be a game changer in 2023, why companies will start using analysis tools at all points of the design process, how the new technology called neural radiance fields will be used, whether tech companies will put a freeze on hiring or conduct mass layoffs, and much more.
Even though the industry has been talking about AI for many years, the trifecta of massive amounts of data available through the rise of the internet, the dramatic increase in processing power fueled by GPUs originally designed for gaming, and steady advances in software algorithms to analyze and use that data means the power of AI will be more fully realized across almost all markets. AI is 100% the number one game changer in the coming years; it will change the way we interact with computers just as Google Search did 20 years ago.
AI is a game changer
Generative design is one great example. The ability to use text prompts, leverage huge datasets, and apply natural language recognition is now driving increased use of generative design, and I think we'll see much more of that in 2023. But it goes beyond generative design and will have a big impact on Computer Aided Engineering (CAE) in general. For example, you will be able to generate large data sets and then have the system learn from them, predict the outcome of deformations or anything else, give an advance preview of post-processing results, and even suggest certain workflows for optimum efficiency. Or you could give the system a basic 2D image and have it generate a 3D design from scratch.
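To make the "learn from large data sets" idea concrete, here is a minimal sketch of a simulation surrogate: a model trained on precomputed analysis results that can then predict an outcome (peak deformation, in this toy example) almost instantly for new candidate designs. The dataset, parameter names, and model choice are all illustrative assumptions, not from any specific CAE product.

```python
# Minimal surrogate-model sketch: learn from past simulation runs, then
# predict outcomes for new designs without rerunning the full solver.
# All parameters and data here are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split

# Hypothetical history of CAE runs: design parameters -> peak deformation
rng = np.random.default_rng(0)
params = rng.uniform(size=(5000, 4))  # e.g. thickness, load, two material props
deform = params @ np.array([0.8, 1.5, -0.3, 0.6]) + 0.05 * rng.normal(size=5000)

X_train, X_test, y_train, y_test = train_test_split(params, deform, test_size=0.2)

surrogate = GradientBoostingRegressor().fit(X_train, y_train)
print("R^2 on held-out designs:", surrogate.score(X_test, y_test))

# Instant prediction for a new candidate design, usable at concept stage
print("predicted deformation:", surrogate.predict([[0.5, 0.2, 0.7, 0.1]]))
```

The design choice that matters here is the split of cost: the expensive solver runs happen once, offline, and the cheap learned model answers "what if" questions interactively during design.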
In addition to CAE, AI will completely revolutionize how realistic digital-twin assets are created. For example, designers used to sit for hours generating and applying complex repeating textures to objects and environments. This can now be done with AI, resulting in a far greater volume and variety of assets.
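As one illustration of this workflow (my sketch, not a tool named in the article), an off-the-shelf text-to-image diffusion model can draft texture candidates from a short prompt; the model checkpoint and prompt below are assumptions for the example.

```python
# Illustrative sketch: drafting texture candidates with an off-the-shelf
# text-to-image diffusion model via Hugging Face diffusers. The model ID
# and prompt are assumptions, not tools named in the article.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # assumed public checkpoint
    torch_dtype=torch.float16,
).to("cuda")

prompt = "seamless tileable weathered concrete texture, flat lighting, 4k"
for i in range(4):                      # generate several variants to choose from
    image = pipe(prompt).images[0]
    image.save(f"concrete_texture_{i}.png")
```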
Analysis from beginning to end
The same trifecta of powerful tools, massive amounts of data, and the processing power to use that data is also driving more integrated use of analysis, from the beginning of the design process to the end of production. Why have these tools suddenly become so powerful? Because of the massive amount of data AI now has to learn from. With AI able to learn more, and learn faster, we now have the power to analyze design ideas at the very beginning of the process.
We are seeing more and more companies analyze everything starting at the very beginning of the design process: running simulations, predicting outcomes, and making better decisions in all aspects. Instead of running analysis far down the design path, once you have a close-to-final design, and then iterating until it meets specifications, we are seeing the evolution of having analysis tools at your fingertips at all points of the process, from the initial conception of an idea through to the finished design.
A different way to capture reality
When it comes to reality capture, the current options are to use a scanning device to create point clouds, or photogrammetry, where a 3D mesh is calculated from images taken from different angles and positions. However, we see a new technology called neural radiance fields (NeRFs) taking hold. Like photogrammetry, it takes images or video as input, but it then uses deep learning to essentially "learn" the scene, and it can generate high-quality novel views that can be turned into triangles or rendered directly.
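To make the idea concrete, here is a heavily simplified sketch of the core NeRF machinery in PyTorch: a small network maps a 3D point and viewing direction to color and density, and a pixel is rendered by compositing samples along a camera ray. Real NeRFs add positional encoding, hierarchical sampling, and training against the captured photos; every name and number below is illustrative.

```python
# Simplified NeRF sketch: an MLP represents the scene as a field of
# color and density; novel views come from volume rendering along rays.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=128):
        super().__init__()
        # Input: 3D position + 3D view direction (positional encoding omitted)
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),   # outputs: RGB (3) + density (1)
        )

    def forward(self, xyz, view_dir):
        out = self.mlp(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])
        sigma = torch.relu(out[..., 3])
        return rgb, sigma

def render_ray(model, origin, direction, near=0.1, far=4.0, n_samples=64):
    """Composite color along one camera ray (classic volume rendering)."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction           # sample points along the ray
    dirs = direction.expand(n_samples, 3)
    rgb, sigma = model(pts, dirs)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma * delta)         # opacity of each sample
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)
    trans = torch.roll(trans, 1)
    trans[0] = 1.0                                  # transmittance to each sample
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)      # final pixel color

# Usage: render one pixel of a novel view from an untrained toy model
model = TinyNeRF()
color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```

Training consists of rendering rays through the pixels of the input photos and minimizing the difference between rendered and captured colors, which is how the network "learns" the scene.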
More metaverse platforms
Most people assume the Metaverse has to be immersive, and also assume a Digital Twin can only reside on a desktop workstation. As we see Microsoft and Amazon invest in technology to support Digital Twins and the other big players continue to define and refine the Metaverse, we will see the line between these arenas blur. Maybe you experience these through a headset, maybe you don’t. The key will be the ability to utilize engineering applications within a Metaverse environment to collaborate on designs and monitor Digital Twins. This can be done on a workstation or a mobile device. The Covid-19 pandemic forced us all to collaborate remotely, and this forcing function also showed us that the Metaverse and Digital Twins don’t have to be restricted to specific platforms. They can be cloud-based and easy to work with.
Licensing vs hiring
It’s no secret that there is a recession looming. Meta, Microsoft, Alphabet, and so many other tech companies have either put a freeze on hiring or conducted mass layoffs. The boom initiated by the pandemic has now recoiled, and companies are looking for ways to cut costs while maintaining growth. One key way to do this on the technology development front is to license component technology rather than maintain large teams of engineers and developers. For example, instead of having an army of graphics developers, you can license a graphics engine. Utilizing component technologies for specific functions is efficient and cost-effective, especially as today’s components are solid and reliable.
Jonathan Girroir
Jonathan has a passion for 3D software, CAD, and innovation. For the last 22 years, Jonathan has been helping software developers build and utilize 3D visualization tools.