AI Topic Category

This page maps the Industry, Applications and Infrastructure portion of the Lexicon Labs AI encyclopedia. It brings together the main concepts in this category, the tracks that organize them, and the related books and guides that make the topic easier to study.

Entries (100): AI lexicon entries currently assigned to this category.
Tracks: Taxonomy tracks that sit inside this category.
Top Entry Types: The most common entry types appearing in this topic cluster.
Industry, Applications and Infrastructure is one of the active taxonomy categories in the Lexicon Labs AI encyclopedia. The current dataset includes 100 entries in this area, which makes it large enough to function as a real discovery surface rather than a placeholder page.
Use the sample entries as a fast orientation layer, then move into the AI encyclopedia preview or the related paperbacks and bundles if you want a longer learning path.
NVIDIA is a leading technology company specializing in designing graphics processing units (GPUs), chipsets, and related software. It's a key enabler for AI, gaming, data centers, and professional visualization markets globally.
Jensen Huang is the co-founder, President, and CEO of NVIDIA. Under his leadership, NVIDIA pioneered the modern GPU and became a dominant force in the AI hardware that accelerates machine learning and deep learning.
Bill Dally is a renowned computer scientist and Chief Scientist at NVIDIA. He is a leading expert in high-performance computing, parallel processing, and interconnection networks, significantly influencing GPU architecture and AI acceleration.
CUDA (Compute Unified Device Architecture) is NVIDIA's parallel computing platform and programming model. It enables software developers to harness the power of Graphics Processing Units (GPUs) for general-purpose processing, significantly accelerating computationally intensive tasks.
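To illustrate the programming model CUDA exposes, the plain-Python sketch below mimics how a vector-add kernel is indexed: every (block, thread) pair computes one output element from a global index. On a real GPU these pairs run concurrently; here they are looped sequentially purely to show the index math. The function names, grid size, and block size are illustrative, not part of any NVIDIA API.

```python
# Plain-Python sketch of CUDA's grid/block/thread indexing model.
# Names (vector_add_kernel, launch) and sizes are hypothetical.

def vector_add_kernel(block_idx, thread_idx, block_dim, a, b, out):
    """One 'thread' of work: compute a single output element."""
    i = block_idx * block_dim + thread_idx  # global index, as in CUDA
    if i < len(out):                        # bounds guard for ragged grids
        out[i] = a[i] + b[i]

def launch(grid_dim, block_dim, a, b):
    """Mimic a kernel launch by visiting every (block, thread) pair."""
    out = [0.0] * len(a)
    for block in range(grid_dim):
        for thread in range(block_dim):
            vector_add_kernel(block, thread, block_dim, a, b, out)
    return out

a = [1.0, 2.0, 3.0, 4.0, 5.0]
b = [10.0, 20.0, 30.0, 40.0, 50.0]
result = launch(grid_dim=2, block_dim=3, a=a, b=b)  # 6 threads cover 5 elements
```

The bounds guard matters because a launch usually rounds the thread count up to a multiple of the block size, so some threads fall past the end of the data.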
GPU Computing harnesses Graphics Processing Units (GPUs) to perform numerous calculations in parallel. This method dramatically accelerates complex computational tasks, especially for AI model training, scientific simulations, and large-scale data processing.
NVIDIA DGX systems are integrated AI supercomputers designed by NVIDIA for high-performance deep learning training and analytics. They combine multiple GPUs, high-speed networking, and specialized software to accelerate complex AI workloads.
The NVIDIA H100 is a powerful graphics processing unit (GPU) designed for accelerating artificial intelligence workloads, high-performance computing, and data center operations. It is built on the Hopper architecture, which delivers significant performance gains over previous generations.
The NVIDIA A100 is a powerful data center GPU, based on the Ampere architecture, designed for AI training, inference, and high-performance computing. It features Tensor Cores for accelerated matrix operations, crucial for deep learning workloads.
The NVIDIA GH200 Grace Hopper Superchip combines a Grace CPU and a Hopper H100 GPU with high-bandwidth memory. It's designed for demanding AI and high-performance computing, accelerating large model training and inference.
Tensor Cores are specialized processing units within NVIDIA GPUs, designed to rapidly accelerate matrix multiplication and accumulation operations. They are crucial for deep learning training and inference, significantly boosting AI workload performance through mixed-precision computing.
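A minimal NumPy sketch of the mixed-precision multiply-accumulate pattern Tensor Cores implement in hardware: half-precision (float16) inputs with products accumulated in single precision (float32), i.e. D = A·B + C. This mimics only the numerics, not the hardware; the array shapes are arbitrary illustrations.

```python
import numpy as np

# Mixed-precision matrix multiply-accumulate, the operation Tensor Cores
# accelerate: half-precision operands, single-precision accumulation.
A = np.random.rand(4, 8).astype(np.float16)  # half-precision operand
B = np.random.rand(8, 4).astype(np.float16)  # half-precision operand
C = np.zeros((4, 4), dtype=np.float32)       # float32 accumulator

# Promote to float32 before multiplying so partial sums keep full
# precision, matching the Tensor Core D = A @ B + C pattern.
D = A.astype(np.float32) @ B.astype(np.float32) + C
```

Accumulating in float32 is what keeps long dot products stable even though the stored operands carry only float16 precision; this is the trade-off mixed-precision training relies on.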
NVIDIA AI Enterprise is an end-to-end software platform accelerating AI development and deployment for businesses. It integrates NVIDIA's AI frameworks, libraries, and tools, optimized for NVIDIA GPUs and infrastructure, enabling scalable enterprise AI solutions.
cuDNN (CUDA Deep Neural Network library) is an NVIDIA-developed GPU-accelerated library for deep neural networks. It provides highly optimized primitives for common deep learning operations, significantly speeding up training and inference on NVIDIA GPUs.
AI Hub
This hub connects the main AI learning surfaces on Lexicon Labs into one path: the encyclopedia preview, student-friendly books, themed bundles, and the tools that help readers turn concepts into working understanding.
Paperback Hub
This page groups together Lexicon Labs paperback titles that help younger readers understand artificial intelligence, computation, and the people behind modern computing.
Turn messy notes into study-ready flashcards and CSV exports for spaced-repetition apps.
Transform notes into visual diagrams and export them for sharing or studying.
Create citations for papers fast, with APA/MLA formatting and copy-ready output.
Analyze clarity in essays, emails, and articles with readability scores and instant issue flags.
An accessible primer on quantum computing fundamentals, from qubits and superposition to real-world applications.
Learn core Python programming with approachable examples designed for teen learners and first-time coders.
Discover the ideas and influence of one of the most brilliant minds behind computing, game theory, and modern science.
A practical introduction to coding concepts for young learners and beginners.
Books that explain artificial intelligence clearly for young and curious readers.
Modern scientific minds who shaped computing and physics.