Models / Libraries / Frameworks – NVIDIA Technical Blog
News and tutorials for developers, data scientists, and IT admins
https://developer.nvidia.com/blog/feed/

Douglas Moore, "Accelerate Medical Imaging AI Operations with Databricks Pixels 2.0 and MONAI" (2025-02-28)
https://developer.nvidia.com/blog/?p=96530

According to the World Health Organization (WHO), 3.6 billion medical imaging tests are performed every year globally to diagnose, monitor, and treat various conditions. Most of these images are stored in a globally recognized standard called DICOM (Digital Imaging and Communications in Medicine). Imaging studies in DICOM format are a combination of unstructured images and structured metadata.
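
As a concrete illustration of that split, here is a minimal sketch (not from the post) that reads one file with pydicom; the path and tags shown are illustrative, since exact tags vary by study.

```python
import pydicom

# Load one DICOM file (hypothetical path); a study is many such files.
ds = pydicom.dcmread("study/slice_001.dcm")

# Structured metadata: standardized DICOM tags on the dataset object.
print(ds.PatientID, ds.Modality, ds.StudyDate)

# Unstructured image data: the pixel array, exposed as a NumPy ndarray.
pixels = ds.pixel_array
print(pixels.shape, pixels.dtype)
```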

Tom Augspurger, "High-Performance Remote IO With NVIDIA KvikIO" (2025-02-27)
https://developer.nvidia.com/blog/?p=96582

Workloads processing large amounts of data, especially those running on the cloud, will often use an object storage service (S3, Google Cloud Storage, Azure Blob Storage, etc.) as the data source. Object storage services can store and serve massive amounts of data, but getting the best performance can require tailoring your workload to how remote object stores behave. This post is for RAPIDS users…
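
A hedged sketch of what that tailoring can look like: KvikIO performs remote reads on a thread pool whose size is controlled by the KVIKIO_NTHREADS environment variable, and cuDF can use it under the hood for object-store paths. The bucket, file, and thread count below are illustrative assumptions, not recommendations from the post.

```python
import os

# Size KvikIO's thread pool before the libraries initialize; more threads
# generally means more parallel range requests against the object store.
os.environ["KVIKIO_NTHREADS"] = "16"

import cudf

# Read Parquet straight from object storage into GPU memory
# (hypothetical bucket/key; credentials come from standard AWS config).
df = cudf.read_parquet("s3://my-bucket/data.parquet")
print(df.head())
```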

Anu Srivastava, "Latest Multimodal Addition to Microsoft Phi SLMs Trained on NVIDIA GPUs" (2025-02-26)
https://developer.nvidia.com/blog/?p=96519

Large language models (LLMs) have permeated every industry and changed the potential of technology. However, due to their massive size, they are not practical for the resource constraints many companies currently face. The rise of small language models (SLMs) bridges quality and cost by creating models with a smaller resource footprint. SLMs are a subset of language models that tend to…

Anton Anders, "NVIDIA cuDSS Advances Solver Technologies for Engineering and Scientific Computing" (2025-02-25)
https://developer.nvidia.com/blog/?p=96466

NVIDIA cuDSS is a first-generation sparse direct solver library designed to accelerate engineering and scientific computing. cuDSS is increasingly adopted in data centers and other environments and supports single-GPU, multi-GPU and multi-node (MGMN) configurations. cuDSS has become a key tool for accelerating computer-aided engineering (CAE) workflows and scientific computations across…

Michelle Horton, "AI for Climate, Energy, and Ecosystem Resilience at NVIDIA GTC 2025" (2025-02-20)
https://developer.nvidia.com/blog/?p=95520

From mitigating climate change to improving disaster response and environmental monitoring, AI is reshaping how we tackle critical global challenges. Advancements in fast, high-resolution climate forecasting, real-time monitoring, and digital twins are equipping scientists, policy-makers, and industry leaders with data-driven tools to understand, plan for, and respond to a warming planet.

Allyson Vasquez, "Bring NVIDIA ACE AI Characters to Games with the New In-Game Inferencing SDK" (2025-02-20)
https://developer.nvidia.com/blog/?p=96051

NVIDIA ACE is a suite of digital human technologies that bring game characters and digital assistants to life with generative AI. ACE on-device models enable…

Kyle Tretina, "Understanding the Language of Life’s Biomolecules Across Evolution at a New Scale with Evo 2" (2025-02-19)
https://developer.nvidia.com/blog/?p=95589

AI has evolved from an experimental curiosity to a driving force within biological research. The convergence of deep learning algorithms, massive omics datasets, and automated laboratory workflows has allowed scientists to tackle problems once thought intractable—from rapid protein structure prediction to generative drug design, increasing the need for AI literacy among scientists.

Ram Cherukuri, "Spotlight: BRLi and Toulouse INP Develop AI-Based Flood Models Using NVIDIA Modulus" (2025-02-13)
https://developer.nvidia.com/blog/?p=95990

Flooding poses a significant threat to 1.5 billion people, making it the most common cause of major natural disasters. Floods cause up to $25 billion in global economic damage every year. Flood forecasting is a critical tool in disaster preparedness and risk mitigation. Numerical methods that provide accurate simulations of river basins have long been in development. With these, engineers such as those…

Rick Ratzel, "Using NetworkX, Jaccard Similarity, and cuGraph to Predict Your Next Favorite Movie" (2025-02-13)
https://developer.nvidia.com/blog/?p=95820

As the amount of data available in the world increases, making informed decisions becomes increasingly difficult for consumers. Fortunately, large datasets are a beneficial component for recommendation systems, which can make a sometimes overwhelming decision much easier. Graphs are excellent choices for modeling the relationships inherent in the data that fuel…
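
A tiny self-contained sketch of the core idea (synthetic data, not the post's dataset): the Jaccard similarity of two users is the overlap of their movie sets divided by the union, which NetworkX computes directly; the post pairs this with the cuGraph backend so the same call runs on GPUs.

```python
import networkx as nx

# Users connected to movies they rated highly.
G = nx.Graph()
G.add_edges_from([
    ("alice", "Inception"), ("alice", "Interstellar"),
    ("bob", "Inception"), ("bob", "Interstellar"), ("bob", "Tenet"),
])

# Jaccard: |shared neighbors| / |all neighbors of either node|.
for u, v, score in nx.jaccard_coefficient(G, [("alice", "bob")]):
    print(f"{u} vs {v}: {score:.2f}")  # 2 shared of 3 total -> 0.67
```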

Pranav Marathe, "Just Released: Tripy, a Python Programming Model For TensorRT" (2025-02-10)
https://developer.nvidia.com/blog/?p=95947

Experience high-performance inference, usability, intuitive APIs, easy debugging with eager mode, clear error messages, and more.

Allison Ding, "Get Started with GPU Acceleration for Data Science" (2025-02-06)
https://developer.nvidia.com/blog/?p=95894

In data science, operational efficiency is key to handling increasingly complex and large datasets. GPU acceleration has become essential for modern workflows, offering significant performance improvements. RAPIDS is a suite of open-source libraries and frameworks developed by NVIDIA, designed to accelerate data science pipelines using GPUs with minimal code changes.
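
To give a flavor of the "minimal code changes" claim, a small cuDF sketch with synthetic data; the DataFrame API mirrors pandas while executing on the GPU.

```python
import cudf

df = cudf.DataFrame({
    "category": ["a", "b", "a", "c", "b", "a"],
    "value": [1.0, 2.0, 3.0, 4.0, 5.0, 6.0],
})

# Familiar pandas-style operations, GPU-accelerated.
print(df.groupby("category")["value"].mean())
```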

David Hart, "Render Path-Traced Hair in Real Time with NVIDIA GeForce RTX 50 Series GPUs" (2025-02-06)
https://developer.nvidia.com/blog/?p=95790

Hardware support for ray tracing triangle meshes was introduced as part of NVIDIA RTX in 2018. But ray tracing for hair and fur has remained a compute-intensive problem that has been difficult to further accelerate. That is, until now. NVIDIA GeForce RTX 50 Series GPUs include a major advancement in the acceleration of ray tracing for hair and fur: hardware ray tracing support for the linear…

Christoph Kubisch, "NVIDIA RTX Mega Geometry Now Available with New Vulkan Samples" (2025-02-06)
https://developer.nvidia.com/blog/?p=95842

Geometric detail in computer graphics has increased exponentially in the past 30 years. To render high quality assets with higher instance counts and greater triangle density, NVIDIA introduced RTX Mega Geometry. RTX Mega Geometry is available today through NVIDIA RTX Kit, a suite of rendering technologies to ray trace games with AI, render scenes with immense geometry, and create game characters…

Michelle Horton, "AI Foundation Model Enhances Cancer Diagnosis and Tailors Treatment" (2025-02-04)
https://developer.nvidia.com/blog/?p=95722

A new study and AI model from researchers at Stanford University is streamlining cancer diagnostics, treatment planning, and prognosis prediction. Named MUSK (Multimodal transformer with Unified maSKed modeling), the research aims to advance precision oncology, tailoring treatment plans to each patient based on their unique medical data. “Multimodal foundation models are a new frontier in…

Matthew Nicely, "Just Released: CUTLASS 3.8" (2025-02-03)
https://developer.nvidia.com/blog/?p=95716

CUTLASS 3.8 provides support for the NVIDIA Blackwell SM100 architecture. CUTLASS is a collection of CUDA C++ templates and abstractions for implementing high-performance GEMM computations.

Matthew Nicely, "Just Released: NVIDIA cuDNN 9.7" (2025-01-31)
https://developer.nvidia.com/blog/?p=95670

Bringing support for NVIDIA Blackwell architecture across data center and GeForce products, NVIDIA cuDNN 9.7 delivers speedups of up to 84% for FP8 Flash Attention operations and optimized GEMM capabilities with advanced fusion support to accelerate deep learning workloads.

Prem Sagar Gali, "Mastering the cudf.pandas Profiler for GPU Acceleration" (2025-01-30)
https://developer.nvidia.com/blog/?p=95351

In the world of Python data science, pandas has long reigned as the go-to library for intuitive data manipulation and analysis. However, as data volumes grow, CPU-bound pandas workflows can become a bottleneck. That’s where cuDF and its pandas accelerator mode, cudf.pandas, step in. This mode accelerates operations with GPUs whenever possible, seamlessly falling back to the CPU for unsupported…
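
A minimal sketch of the workflow, assuming the notebook magics described in the cudf.pandas documentation (they are notebook-only, so they appear as comments here):

```python
# In Jupyter, enable the accelerator before importing pandas:
#   %load_ext cudf.pandas
# Then profile a cell to see which operations ran on GPU vs. fell back to CPU:
#   %%cudf.pandas.profile
import pandas as pd

df = pd.DataFrame({"key": ["x", "y", "x"], "val": [1, 2, 3]})
print(df.groupby("key")["val"].sum())
```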

Matt Ahrens, "Accelerating JSON Processing on Apache Spark with GPUs" (2025-01-29)
https://developer.nvidia.com/blog/?p=95298

JSON is a popular format for text-based data that allows for interoperability between systems in web applications as well as data management. The format has been in existence since the early 2000s and came from the need for communication between web servers and browsers. The standard JSON format consists of key-value pairs that can include nested objects. JSON has grown in usage for storing web…
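
A hedged PySpark sketch of GPU-accelerated JSON reading: the plugin class and config key are the documented RAPIDS Accelerator for Apache Spark names, but jar deployment is omitted, and the input path and nested fields are hypothetical.

```python
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("gpu-json")
    # RAPIDS Accelerator plugin; its jar must be on the Spark classpath.
    .config("spark.plugins", "com.nvidia.spark.SQLPlugin")
    .config("spark.rapids.sql.enabled", "true")
    .getOrCreate()
)

# One JSON object per line, with nested key-value structure (hypothetical).
df = spark.read.json("events.jsonl")
df.select("user.id", "action").show()
```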

Michelle Horton, "Advancing Rare Disease Detection with AI-Powered Cellular Profiling" (2025-01-29)
https://developer.nvidia.com/blog/?p=95498

Rare diseases are difficult to diagnose due to limitations in traditional genomic sequencing. Wolfgang Pernice, assistant professor at Columbia University, is using AI-powered cellular profiling to bridge these gaps and advance personalized medicine. At NVIDIA GTC 2024, Pernice shared insights from his lab’s work with diseases like Charcot-Marie-Tooth (CMT) and mitochondrial disorders.

Brian Tepera, "Accelerating Time Series Forecasting with RAPIDS cuML" (2025-01-16)
https://developer.nvidia.com/blog/?p=95127

Time series forecasting is a powerful data science technique used to predict future values based on data points from the past. Open-source Python libraries like skforecast make it easy to run time series forecasts on your data. They allow you to “bring your own” regressor that is compatible with the scikit-learn API, giving you the flexibility to work seamlessly with the model of your choice.
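
The "bring your own regressor" idea can be sketched without skforecast itself: build lag features by hand and fit any estimator with a scikit-learn-compatible API, such as cuML's RandomForestRegressor. Data and hyperparameters below are placeholders.

```python
import numpy as np
import pandas as pd
from cuml.ensemble import RandomForestRegressor  # scikit-learn-style API

# Synthetic series standing in for historical observations.
y = pd.Series(np.sin(np.linspace(0, 20, 500)).astype(np.float32))

# Lagged copies of the series become tabular features.
lags = pd.concat({f"lag_{k}": y.shift(k) for k in range(1, 8)}, axis=1).dropna()
X, target = lags.values, y.loc[lags.index].values

model = RandomForestRegressor(n_estimators=100)
model.fit(X, target)
print(model.predict(X[-1:]))  # one-step-ahead forecast
```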

Kyle Tretina, "Evaluating GenMol as a Generalist Foundation Model for Molecular Generation" (2025-01-13)
https://developer.nvidia.com/blog/?p=94836

Traditional computational drug discovery relies almost exclusively on highly task-specific computational models for hit identification and lead optimization. Adapting these specialized models to new tasks requires substantial time, computational power, and expertise—challenges that grow when researchers simultaneously work across multiple targets or properties.

Kyle Tretina, "Accelerate Protein Engineering with the NVIDIA BioNeMo Blueprint for Generative Protein Binder Design" (2025-01-13)
https://developer.nvidia.com/blog/?p=94851

Designing a therapeutic protein that specifically binds its target in drug discovery is a staggering challenge. Traditional workflows are often a painstaking trial-and-error process—iterating through thousands of candidates, each synthesis and validation round taking months if not years. Considering the average human protein is 430 amino acids long, the number of possible designs translates to…

Dan Su, "Announcing Nemotron-CC: A Trillion-Token English Language Dataset for LLM Pretraining" (2025-01-09)
https://developer.nvidia.com/blog/?p=94818

NVIDIA is excited to announce the release of Nemotron-CC, a 6.3-trillion-token English language Common Crawl dataset for pretraining highly accurate large language models (LLMs), including 1.9 trillion tokens of synthetically generated data. One of the keys to training state-of-the-art LLMs is a high-quality pretraining dataset, and recent top LLMs, such as the Meta Llama series…

Pranjali Joshi, "Advancing Physical AI with NVIDIA Cosmos World Foundation Model Platform" (2025-01-09)
https://developer.nvidia.com/blog/?p=94577

As robotics and autonomous vehicles advance, accelerating development of physical AI—which enables autonomous machines to perceive, understand, and perform complex actions in the physical world—has become essential. At the center of these systems are world foundation models (WFMs)—AI models that simulate physical states through physics-aware videos, enabling machines to make accurate decisions and…

Anish Maddipoti, "One-Click Deployments for the Best of NVIDIA AI with NVIDIA Launchables" (2025-01-07)
https://developer.nvidia.com/blog/?p=94569

AI development has become a core part of modern software engineering, and NVIDIA is committed to finding ways to bring optimized accelerated computing to every developer that wants to start experimenting with AI. To address this, we’ve been working on making the accelerated computing stack more accessible with NVIDIA Launchables: preconfigured GPU computing environments that enable you to…

Peter Entschev, "Accelerating GPU Analytics Using RAPIDS and Ray" (2024-12-20)
https://developer.nvidia.com/blog/?p=94495

RAPIDS is a suite of open-source GPU-accelerated data science and AI libraries that are well supported for scale-out with distributed engines like Spark and Dask. Ray is a popular open-source distributed Python framework commonly used to scale AI and machine learning (ML) applications. Ray particularly excels at simplifying and scaling training and inference pipelines and can easily target both…
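
A minimal sketch of the pattern, with hypothetical shard paths and column names: each Ray task reserves one GPU and runs its cuDF work independently, and Ray schedules tasks across whatever GPUs the cluster exposes.

```python
import ray
import cudf

ray.init()

@ray.remote(num_gpus=1)
def gpu_aggregate(path: str):
    # Per-shard groupby on the GPU; return a small pandas result to the driver.
    df = cudf.read_parquet(path)
    return df.groupby("key")["value"].sum().to_pandas()

futures = [gpu_aggregate.remote(p) for p in ["shard0.parquet", "shard1.parquet"]]
print(ray.get(futures))
```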

Jenn Yonemitsu, "NVIDIA Hackathon Winners Share Strategies for RAPIDS-Accelerated ML Workflows" (2024-12-20)
https://developer.nvidia.com/blog/?p=94393

Approximately 220 teams gathered at the Open Data Science Conference (ODSC) West this year to compete in the NVIDIA hackathon, a 24-hour machine learning (ML) competition. Data scientists and engineers designed models that were evaluated based on accuracy and processing speed. The top three teams walked away with prize packages that included NVIDIA RTX Ada Generation GPUs, Google Colab credits…

Michelle Horton, "Top Posts of 2024 Highlight NVIDIA NIM, LLM Breakthroughs, and Data Science Optimization" (2024-12-16)
https://developer.nvidia.com/blog/?p=93566

2024 was another landmark year for developers, researchers, and innovators working with NVIDIA technologies. From groundbreaking developments in AI inference to empowering open-source contributions, these blog posts highlight the breakthroughs that resonated most with our readers. Among them: “NVIDIA NIM Offers Optimized Inference Microservices for Deploying AI Models at Scale,” introduced in…

Miles Macklin, "Introducing Tile-Based Programming in Warp 1.5.0" (2024-12-14)
https://developer.nvidia.com/blog/?p=94002

With the latest release of Warp 1.5.0, developers now have access to new tile-based programming primitives in Python. Leveraging cuBLASDx and cuFFTDx, these new tools provide developers with efficient matrix multiplication and Fourier transforms in Python kernels for accelerated simulation and scientific computing. In this blog post, we’ll introduce these new features and show how they can be used…

Nick Becker, "Harnessing GPU Acceleration for Multi-Label Classification with RAPIDS cuML" (2024-12-12)
https://developer.nvidia.com/blog/?p=93575

Modern classification workflows often require classifying individual records and data points into multiple categories instead of just assigning a single label. Open-source Python libraries like scikit-learn make it easier to build models for these multi-label problems. Several models have built-in support for multi-label datasets, and a simple scikit-learn utility function enables using those…
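
A compact sketch using those scikit-learn utilities; the GPU-accelerated path the post describes comes from swapping a cuML estimator (same scikit-learn API) into the base-estimator slot.

```python
from sklearn.datasets import make_multilabel_classification
from sklearn.linear_model import LogisticRegression
from sklearn.multioutput import MultiOutputClassifier

# Each sample can carry several labels at once (one column of y per label).
X, y = make_multilabel_classification(n_samples=1000, n_classes=5, random_state=0)

# One binary classifier is fit per label column.
clf = MultiOutputClassifier(LogisticRegression(max_iter=1000)).fit(X, y)
print(clf.predict(X[:3]))
```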

Anjali Shah, "NVIDIA TensorRT-LLM Now Accelerates Encoder-Decoder Models with In-Flight Batching" (2024-12-11)
https://developer.nvidia.com/blog/?p=93516

NVIDIA recently announced that NVIDIA TensorRT-LLM now accelerates encoder-decoder model architectures. TensorRT-LLM is an open-source library that optimizes inference for diverse model architectures. The addition of encoder-decoder model support further expands TensorRT-LLM capabilities, providing highly optimized inference for an even broader range of…

Bharath Thotakura, "NVIDIA CUDA-Q Runs Breakthrough Logical Qubit Application on Infleqtion QPU" (2024-12-10)
https://developer.nvidia.com/blog/?p=93486

Infleqtion, a world leader in neutral atom quantum computing, used the NVIDIA CUDA-Q platform to first simulate, and then orchestrate the first-ever demonstration of a material science experiment on logical qubits, on their Sqale physical quantum processing unit (QPU). Qubits, the basic units of information in quantum computing, are prone to errors, and far too unreliable to make meaningful…
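
For context on the programming model, here is a generic CUDA-Q Python kernel (a Bell-state sample, not the logical-qubit experiment itself): the same kernel that runs on a simulator can be pointed at hardware with cudaq.set_target.

```python
import cudaq

@cudaq.kernel
def bell():
    # Hadamard then CNOT entangles two qubits; measure both.
    qubits = cudaq.qvector(2)
    h(qubits[0])
    x.ctrl(qubits[0], qubits[1])
    mz(qubits)

# Defaults to simulation; cudaq.set_target(...) retargets to a QPU backend.
print(cudaq.sample(bell))
```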

Joanne Chang, "Just Released: NVIDIA VILA VLM" (2024-12-09)
https://developer.nvidia.com/blog/?p=93512

Now available in preview, NVIDIA VILA is an advanced multimodal VLM that provides visual understanding of multi-images and video.

Bhoomi Gadhia, "Just Released: NVIDIA Modulus v24.12" (2024-12-06)
https://developer.nvidia.com/blog/?p=93475

The new release includes new network architectures for external aerodynamics applications as well as for climate and weather prediction.

Prem Sagar Gali, "Unified Virtual Memory Supercharges pandas with RAPIDS cuDF" (2024-12-05)
https://developer.nvidia.com/blog/?p=93438

cudf.pandas, introduced in a previous post, is a GPU-accelerated library that accelerates pandas to deliver significant performance improvements, up to 50x faster, without requiring any changes to your existing code. As part of the NVIDIA RAPIDS ecosystem, cudf.pandas acts as a proxy layer that executes operations on the GPU when possible, and falls back to the CPU (via pandas) when necessary.
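
Zero-code-change usage looks like this, a short sketch of the documented entry points:

```python
# From the shell, run an unmodified pandas script under the accelerator:
#   python -m cudf.pandas my_script.py
#
# Or opt in explicitly at the top of a script before importing pandas:
import cudf.pandas
cudf.pandas.install()

import pandas as pd  # now proxied: GPU when possible, CPU fallback otherwise

df = pd.DataFrame({"a": range(10)})
print(df["a"].sum())
```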

Jonathan Litt, "Optimize GPU Workloads for Graphics Applications with NVIDIA Nsight Graphics" (2024-12-05)
https://developer.nvidia.com/blog/?p=93418

One of the great pastimes of graphics developers and enthusiasts is comparing specifications of GPUs and marveling at the ever-increasing counts of shader cores, RT cores, teraflops, and overall computational power with each new generation. Achieving the maximum theoretical performance represented by those numbers is a major focus in the world of graphics programming. Massive amounts of rendering…

Yarkin Doroz, "Introducing NVIDIA cuPQC for GPU-Accelerated Post-Quantum Cryptography" (2024-12-03)
https://developer.nvidia.com/blog/?p=92737

In the past decade, quantum computers have progressed significantly and could one day be used to undermine current cybersecurity practices. If run on a quantum computer, for example, an algorithm discovered by the theoretical computer scientist Peter Shor could crack common encryption schemes, including the Rivest-Shamir-Adleman (RSA) encryption algorithm. Post-quantum cryptography (PQC) is…

Vega Shah, "In-Silico Antibody Development with AlphaBind Using NVIDIA BioNeMo and AWS HealthOmics" (2024-12-03)
https://developer.nvidia.com/blog/?p=92757

Antibodies have become the most prevalent class of therapeutics, primarily due to their ability to target specific antigens, enabling them to treat a wide range of diseases, from cancer to autoimmune disorders. Their specificity reduces the likelihood of off-target effects, making them safer and often more effective than small-molecule drugs for complex conditions. As a result…

Pradnya Khalate, "Accelerated Quantum Supercomputing with the NVIDIA CUDA-Q and Amazon Braket Integration" (2024-12-02)
https://developer.nvidia.com/blog/?p=92875

As quantum computers scale, tasks such as controlling quantum hardware and performing quantum error correction become increasingly complex. Overcoming these challenges requires tight integration between quantum processing units (QPUs) and AI supercomputers, a paradigm known as accelerated quantum supercomputing. Increasingly, AI methods are being used by researchers up and down the quantum…

Joanne Chang, "Just Released: NVIDIA DeepStream 7.1" (2024-11-25)
https://developer.nvidia.com/blog/?p=92695

The new release introduces Python support in Service Maker to accelerate real-time multimedia and AI inference applications with a powerful GStreamer abstraction layer.

Ben Zaitlen (https://www.linkedin.com/in/benjamin-zaitlen-62ab7b4/), "Best Practices for Multi-GPU Data Analysis Using RAPIDS with Dask" (2024-11-21)
https://developer.nvidia.com/blog/?p=92480

As we move toward denser computing infrastructure, with more compute, more GPUs, accelerated networking, and so forth, multi-GPU training and analysis grow in popularity. Developers and practitioners moving from CPU to GPU clusters need both tools and best practices. RAPIDS is a suite of open-source GPU-accelerated data science and AI libraries. These libraries can easily scale out for…
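
A minimal multi-GPU sketch, with hypothetical data paths and column names: dask-cuda starts one worker per local GPU, and dask_cudf partitions the DataFrame across them; the same client API scales out to multi-node clusters.

```python
from dask.distributed import Client
from dask_cuda import LocalCUDACluster
import dask_cudf

cluster = LocalCUDACluster()  # one worker per visible GPU
client = Client(cluster)

ddf = dask_cudf.read_parquet("data/*.parquet")
print(ddf.groupby("key")["value"].mean().compute())
```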

Michelle Horton, "AI Unlocks Early Clues to Alzheimer’s Through Retinal Scans" (2024-11-21)
https://developer.nvidia.com/blog/?p=92565

Your eyes could hold the key to unlocking early detection of Alzheimer’s and dementia, according to a groundbreaking AI study. The deep learning framework, called Eye-AD, analyzes high-resolution retinal images, identifying small changes in vascular layers linked to dementia that are often too subtle for human detection. The approach offers rapid, non-invasive screening for cognitive decline…

Elias Wolfberg, "AI Research Delivers Rapid, Accurate Prostate Cancer Predictions" (2024-11-19)
https://developer.nvidia.com/blog/?p=92407

Prostate cancer researchers unveiled a new AI-powered model that can quickly analyze MRIs to accurately predict how prostate cancer tumors may develop and potentially metastasize over time. The technology uses a segmentation algorithm to quickly analyze MRIs of prostates and outline—in detail—the contours of any cancerous tumors. The model can then calculate the volume of the tumors it…

John Linford, "Rapidly Create Real-Time Physics Digital Twins with NVIDIA Omniverse Blueprints" (2024-11-18)
https://developer.nvidia.com/blog/?p=91997

Everything that is manufactured is first simulated with advanced physics solvers. Real-time digital twins (RTDTs) are the cutting edge of computer-aided engineering (CAE) simulation, because they enable immediate feedback in the engineering design loop. They empower engineers to innovate freely and rapidly explore new designs by experiencing in real time the effects of any change in the simulation.

Wonchan Lee, "Effortlessly Scale NumPy from Laptops to Supercomputers with NVIDIA cuPyNumeric" (2024-11-18)
https://developer.nvidia.com/blog/?p=91682

Python is the most common programming language for data science, machine learning, and numerical computing. It continues to grow in popularity among scientists and researchers. In the Python ecosystem, NumPy is the foundational Python library for performing array-based numerical computations. NumPy’s standard implementation operates on a single CPU core, with only a limited set of operations…
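
The drop-in usage is a one-line import change; a sketch, assuming the operations used are among those cuPyNumeric implements:

```python
import cupynumeric as np  # instead of: import numpy as np

# Unchanged NumPy-style code now runs on the GPUs available to the process.
x = np.linspace(0, 1, 1_000_000).reshape(1000, 1000)
y = (x @ x.T).sum(axis=1)
print(y[:5])
```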

Michelle Horton, "Deep Learning Model Boosts Accuracy in Long-Range Weather and Climate Forecasting" (2024-11-14)
https://developer.nvidia.com/blog/?p=91943

Dale Durran, a professor in the Atmospheric Sciences Department at the University of Washington, introduces a breakthrough deep learning model that combines atmospheric and oceanic data to set new climate and weather prediction accuracy standards. In this NVIDIA GTC 2024 session, Durran presents techniques that reduce reliance on traditional parameterizations, enabling the model to bypass…

Graham Lopez, "Just Released: NVIDIA HPC SDK v24.11" (2024-11-14)
https://developer.nvidia.com/blog/?p=91930

The new release includes several enhancements to the Math Libraries and improvements for C++ programming.

Chris Alexiuk, "An Introduction to Model Merging for LLMs" (2024-10-28)
https://developer.nvidia.com/blog/?p=90842

One challenge organizations face when customizing large language models (LLMs) is the need to run multiple experiments, which produces only one useful model. While the cost of experimentation is typically low, and the results well worth the effort, this experimentation process does involve “wasted” resources, such as compute assets spent without their product being utilized…
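
One of the simplest merging strategies, parameter-wise (linear) averaging of checkpoints that share an architecture, can be sketched in a few lines of PyTorch; the checkpoint filenames are hypothetical, and this is one technique among several the topic covers, not the post's only method.

```python
import torch

def merge_linear(state_dicts, weights=None):
    """Weighted parameter-wise average of same-architecture checkpoints."""
    weights = weights or [1.0 / len(state_dicts)] * len(state_dicts)
    return {
        name: sum(w * sd[name] for w, sd in zip(weights, state_dicts))
        for name in state_dicts[0]
    }

# Hypothetical usage with two fine-tuning runs of one base model:
# merged = merge_linear([torch.load("run_a.pt"), torch.load("run_b.pt")])
# model.load_state_dict(merged)
```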

Charlie Huang, "Scale High-Performance AI Inference with Google Kubernetes Engine and NVIDIA NIM" (2024-10-16)
https://developer.nvidia.com/blog/?p=90198

The rapid evolution of AI models has driven the need for more efficient and scalable inferencing solutions. As organizations strive to harness the power of AI, they face challenges in deploying, managing, and scaling AI inference workloads. NVIDIA NIM and Google Kubernetes Engine (GKE) together offer a powerful solution to address these challenges. NVIDIA has collaborated with Google Cloud to…
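
Once a NIM is serving, clients talk to its OpenAI-compatible endpoint; a sketch, where the in-cluster service hostname and model identifier are hypothetical placeholders for a specific GKE deployment.

```python
from openai import OpenAI

client = OpenAI(
    base_url="http://nim-llm.nim.svc.cluster.local:8000/v1",  # hypothetical
    api_key="not-used",  # placeholder; in-cluster access shown without auth
)

resp = client.chat.completions.create(
    model="meta/llama-3.1-8b-instruct",  # whichever NIM is deployed
    messages=[{"role": "user", "content": "Say hello from GKE."}],
)
print(resp.choices[0].message.content)
```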

Source

]]>