
How Medicine, Batteries and Carbon Capture Benefit from Generative AI

Image Credit: Your_photo/BigStockPhoto.com

From the cuddly to the ludicrous to the downright nightmarish, art generated by artificial intelligence is sweeping the internet. What may give casual users a good laugh or haunt them forever is created by a specific type of artificial intelligence: generative AI.

Generative AI is opening doors to growth, progress, and innovation. It’s behind AI content generators such as DALL-E and ChatGPT, and has the potential to dramatically improve many processes, including battery design and drug discovery. As this technology evolves, the industry may even take a small step towards moonshot innovations such as artificial general intelligence (AGI). However, technologists and researchers must understand generative AI models to ensure they’re applied properly and harness their full potential.

The frameworks behind generative AI

Generative AI is a type of machine learning (ML) framework consisting of several model architectures. The most common is the generative adversarial network (GAN), which pits two neural networks against each other: a generator that produces synthetic data and a discriminator that tries to tell it apart from real data. Because each network trains against the other, a GAN can learn without labeled examples, and this adversarial process gives the model the ability to generate realistic text, image, audio, and video data. The popular AI breakout ChatGPT is built on top of Generative Pre-Trained Transformer 3 (GPT-3), a 175 billion-parameter autoregressive model that captures dependencies between generated components, making it well suited for text generation, language and sentiment analysis, translation, and Q&A applications.
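The adversarial setup is easiest to see in a toy example. The sketch below is illustrative only, not how production GANs are built: a one-parameter-pair generator learns to imitate a 1-D Gaussian, while a logistic discriminator tries to tell its samples from the real thing. All constants (REAL_MEAN, learning rates, batch size) are invented for the demonstration.

```python
import math
import random

random.seed(0)

# Toy 1-D GAN: the generator must learn to imitate samples drawn from
# N(REAL_MEAN, REAL_STD). All numbers are invented for illustration.
REAL_MEAN, REAL_STD = 4.0, 0.5

w, b = 1.0, 0.0   # generator: x = w*z + b, latent noise z ~ N(0, 1)
a, c = 0.0, 0.0   # discriminator: D(x) = sigmoid(a*x + c), P(x is real)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def mean(xs):
    return sum(xs) / len(xs)

LR_D, LR_G, BATCH = 0.05, 0.02, 64
b_hist = []
for _ in range(3000):
    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    real = [random.gauss(REAL_MEAN, REAL_STD) for _ in range(BATCH)]
    zs = [random.gauss(0, 1) for _ in range(BATCH)]
    fake = [w * z + b for z in zs]
    d_r = [sigmoid(a * x + c) for x in real]
    d_f = [sigmoid(a * x + c) for x in fake]
    a -= LR_D * (mean([-(1 - d) * x for d, x in zip(d_r, real)])
                 + mean([d * x for d, x in zip(d_f, fake)]))
    c -= LR_D * (mean([-(1 - d) for d in d_r]) + mean(d_f))

    # Generator step: move fakes toward the region the discriminator
    # currently labels "real" (non-saturating loss, -log D(fake)).
    zs = [random.gauss(0, 1) for _ in range(BATCH)]
    fake = [w * z + b for z in zs]
    d_f = [sigmoid(a * x + c) for x in fake]
    w -= LR_G * mean([-(1 - d) * a * z for d, z in zip(d_f, zs)])
    b -= LR_G * mean([-(1 - d) * a for d in d_f])
    b_hist.append(b)

# Adversarial training oscillates, so average the generator offset
# over the last 1000 steps before sampling.
b_avg = mean(b_hist[-1000:])
samples = [w * random.gauss(0, 1) + b_avg for _ in range(1000)]
print(f"generated mean ~ {mean(samples):.2f} (target {REAL_MEAN})")
```

Note the two-player structure: neither network ever sees a labeled answer, only the other network's behavior, which is what lets GANs train unsupervised.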

Two other commonly used generative AI tools are DALL-E and Phenaki. DALL-E uses a 12 billion-parameter version of GPT-3 to generate art from text descriptions. Phenaki synthesizes realistic videos from open-domain video snippets and text descriptions. Both are now widely used recreationally by the general public and professionally by content creators.

The end of trial and error?

Trial and error is what makes materials research so time-consuming. Battery design, carbon capture, and medical innovations are especially prone to lengthy research timelines. For instance, it took more than two decades for researchers to discover and perfect lithium-ion batteries.

Generative AI has the potential to transform the field of scientific research. Instead of manually setting up each trial, waiting for the outcome, and refining the tested method or design, researchers can use generative AI to run through permutations and predict outcomes more accurately than previous methods. This frees them to concentrate on fewer, better designs and ultimately run fewer experiments with better outcomes.

IBM is already using generative AI models, high-performance computing, and cloud technology to speed up its discovery of materials for carbon dioxide capture and storage. IBM rapidly ruled out millions of possible CO2 adsorbents at the nanoparticle level. Discovering a new material can take years, whereas AI models can produce millions of predictions far faster than first-principles simulations. This allows researchers to focus their efforts on just a handful of high-likelihood adsorbents, giving environmental scientists valuable time back in the fight against climate change.
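IBM's actual pipeline isn't described in detail here, but the general screening pattern can be sketched: a cheap learned model ranks a large candidate pool so that only a shortlist ever reaches the expensive first-principles simulation. In the minimal sketch below, both the "learned predictor" and the "simulation" are invented stand-in formulas, and the two descriptors (pore size, binding energy) are hypothetical.

```python
import random

random.seed(1)

# Hypothetical pool of CO2-adsorbent candidates, each described by two
# invented descriptors: pore size (nm) and binding energy (eV).
candidates = [(random.uniform(0.3, 2.0), random.uniform(-1.5, -0.1))
              for _ in range(100_000)]

def cheap_model_score(pore, energy):
    # Stand-in for a fast learned predictor of CO2 uptake; the formula
    # is invented (prefers moderate pores and strong binding).
    return -abs(pore - 0.8) - energy

def expensive_simulation(pore, energy):
    # Stand-in for a slow first-principles calculation, which in
    # reality might take hours or days per candidate.
    return -1.2 * abs(pore - 0.8) - 0.9 * energy

# The funnel: score every candidate cheaply, then run the expensive
# simulation only on the top 10.
ranked = sorted(candidates, key=lambda cand: cheap_model_score(*cand),
                reverse=True)
shortlist = ranked[:10]
results = [(cand, expensive_simulation(*cand)) for cand in shortlist]
best_cand, best_uptake = max(results, key=lambda r: r[1])
print(f"simulated {len(shortlist)} of {len(candidates)} candidates; "
      f"best predicted uptake {best_uptake:.2f}")
```

The time saving comes entirely from the funnel shape: the costly step runs 10 times instead of 100,000, which is the same logic behind screening millions of adsorbents down to a handful.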

Medicine is another area that could benefit from a similar approach for drug discovery and vaccine modeling. Furthermore, with the digitization of medical records, AI can empower personalized medicine by encoding many different types of data into a predictive model.

Data responsibility

Artificial intelligence models are only as good as the data they're built on: flawed data will result in incorrect or misleading output. Responsible AI is a hot topic, with data at the heart of it. This is particularly important as humans come to rely on more technologies built with generative AI. Data must be free of bias and error at every stage of the AI lifecycle.

Cleaning data may sound simple, but in 2021, 67% of companies drew on more than 20 data sources to power their AI. Every source must be carefully vetted so that errors and biases aren't perpetuated, and cleaning the data can consume as much as 80% of the total project effort. This is becoming especially important as more AI-generated content is published, since future generations of AI models will inevitably incorporate some of that content into their own training datasets. Ultimately, adoption is about trust, and people will only trust AI-based technologies that are accurate and unbiased.

From analyzers to creators

AI has evolved from a tool that simply analyzes data and spits out a prediction into a creator capable of producing dynamic, original content. This evolution has taken AI from developers who hide it under the hood to a general audience that intentionally seeks it out to create eye-catching art or answer questions thoughtfully. The hype will build lasting momentum, and the resulting investment in generative AI will fuel even greater transformation and discoveries for engineers, scientists, and researchers.

Author

Michael Krause is Data Science Director at Beyond Limits, a pioneering artificial intelligence engineering company creating advanced software solutions that go beyond conventional AI. Michael specializes in industrial AI with experience from bespoke AI solutions at small businesses to digital transformation at large enterprises. Prior to joining Beyond Limits, Michael was Director of Analytics at Tiandi Energy in Beijing, China, and later at Energective in Houston, Texas. Michael holds a Ph.D. in Energy Resources Engineering from Stanford University.
