- Published 2026
- No. of Pages: 120+
- 20% Customization available
Edge AI GPU Systems Market | Latest Analysis, Demand Trends, Growth Forecast
Edge AI GPU Systems Market demand base shaped by local inference, robotics, video analytics and industrial automation
Edge AI GPU systems are compact or ruggedized computing platforms that use GPU acceleration to run AI inference, sensor fusion, video analytics, digital twins, robotics control, and generative AI workloads close to the point where data is produced. These systems are used in factories, retail stores, hospitals, vehicles, telecom sites, warehouses, smart-city nodes and defense-grade field environments where latency, bandwidth costs, privacy, or operational continuity make full cloud processing inefficient. The Edge AI GPU Systems Market is estimated at USD 8.5–9.5 billion in 2026, positioned as a high-performance subset of the broader edge AI hardware and edge AI infrastructure market. This estimate aligns with the expansion of the global edge AI market to about USD 47.59 billion in 2026 and edge AI hardware revenue of more than USD 26 billion in 2025, with GPU-based systems capturing demand from workloads that exceed the capability of low-power NPUs, microcontrollers and single-purpose vision chips.
The Edge AI GPU Systems Market is not driven by one application. The strongest demand comes from high-frame-rate video, multi-camera perception, autonomous mobile robots, industrial inspection, medical imaging, AI-enabled traffic systems, edge data centers, and localized enterprise copilots. In these use cases, GPUs remain preferred because they can process multiple AI models, support frequent model updates, and handle mixed workloads such as computer vision, natural language processing, sensor fusion and simulation. NVIDIA’s Jetson Thor platform, made generally available in August 2025, shows the technical direction of this market: up to 2,070 FP4 TFLOPS, 128 GB memory and a 40–130 W power range, with 7.5 times the AI performance of AGX Orin. That level of embedded GPU compute moves edge systems closer to factory-floor robotics, warehouse automation and machine-vision decisions that previously required server-class infrastructure.
| Demand driver | 2026 relevance for Edge AI GPU Systems Market | Typical customers |
| --- | --- | --- |
| Multi-camera video analytics | High GPU need for real-time object detection, tracking and anomaly detection | Retail chains, cities, logistics hubs, airports |
| Robotics and autonomous machines | Rising requirement for sensor fusion and local decision-making | Automakers, warehouses, agriculture equipment firms |
| Industrial quality inspection | GPU systems process high-resolution images at production-line speed | Electronics, automotive, semiconductor, food processing |
| Healthcare edge AI | Local imaging, ultrasound, surgical assistance and patient monitoring | Hospitals, imaging centers, medical device OEMs |
| Edge data centers and telecom edge | GPU nodes support low-latency inference for enterprise and public workloads | Telecom operators, cloud providers, governments |
| Defense and critical infrastructure | Rugged GPU systems support field analytics and surveillance | Defense agencies, energy utilities, transport authorities |
Edge AI GPU Systems Market demand by geography reflects industrial density, AI infrastructure funding and customer-side automation
North America remains the most commercially mature demand region for the Edge AI GPU Systems Market because AI adoption is tied to warehouse automation, defense modernization, autonomous vehicle testing, healthcare imaging, retail analytics and enterprise edge data centers. The United States has a particularly strong demand pull because large cloud and AI infrastructure investments are now filtering into distributed inference. In January 2025, OpenAI, Oracle and SoftBank announced the Stargate AI infrastructure initiative with an initial USD 100 billion commitment and a stated plan to scale toward USD 500 billion over four years. While this project is centered on large-scale AI data centers, its demand impact extends to edge GPU systems because trained models need deployment points in hospitals, factories, stores, logistics yards and telecom locations where inference must occur near data sources.
U.S. enterprise demand is visible in hardware launches as well. In May 2025, Dell introduced AI servers powered by NVIDIA Blackwell Ultra chips, with configurations supporting up to 192 Blackwell Ultra GPUs and custom setups reaching 256 chips. This type of system primarily addresses enterprise and data-center AI, but it also strengthens the downstream market for smaller edge GPU appliances because customers want a consistent AI stack from core data center to branch, plant and field site. Dell NativeEdge is also positioned around edge orchestration, zero-touch onboarding, zero-trust security and distributed application management, which are important procurement criteria when edge GPU systems are deployed across hundreds or thousands of locations.
The U.S. customer base is broad. Amazon Robotics, Caterpillar, Meta Platforms and Figure were named among early users or evaluators of NVIDIA Jetson Thor-class systems, while John Deere was reported as evaluating the technology. This gives the Edge AI GPU Systems Market a demand signal across warehouse automation, construction machinery, robotics, agriculture and industrial AI rather than only IT infrastructure. A warehouse using autonomous mobile robots needs on-device perception and navigation; a construction equipment OEM needs rugged inference for terrain, safety and operator-assistance functions; an agricultural equipment maker needs localized processing because rural connectivity cannot always support cloud-only AI.
Europe’s demand profile is more policy-linked and industrial. Germany, France, Italy, the Nordics and the Netherlands are the most relevant markets because of advanced manufacturing, robotics, energy systems, automotive production, smart logistics and strict data governance. In December 2024, the European Commission selected seven consortia to establish AI Factories, representing EUR 1.5 billion in combined national and EU funding. These facilities are built around EuroHPC supercomputing resources, but their market effect is wider: European manufacturers and public-sector users gain access to AI development capacity, and many trained models will require local inference on GPU-equipped edge servers in factories, hospitals, transport systems and research facilities.
For the Edge AI GPU Systems Market, Europe is especially important in applications where data sovereignty and industrial reliability matter. A German automotive plant using machine vision for weld inspection cannot afford cloud latency during production; a French hospital network running imaging AI must consider patient-data rules; a port operator in the Netherlands needs real-time container and vehicle monitoring without sending every video stream to the cloud. These conditions support demand for GPU-based edge appliances with local storage, cybersecurity, deterministic connectivity and lifecycle management.
Asia Pacific is the fastest-scaling demand region because it combines electronics manufacturing, robotics deployment, smart-city programs, automotive electrification, and government-backed AI capacity. China is the largest industrial robotics market and a major demand center for edge GPU systems used in factories, logistics automation, surveillance analytics and embodied AI. The International Federation of Robotics reported in its World Robotics 2025 data that China accounted for 54% of annual industrial robot installations worldwide. This directly supports demand for edge AI GPU systems because robot cells, autonomous guided vehicles, machine-vision inspection stations and humanoid robotics platforms need local GPU acceleration for perception and control.
Japan and South Korea are more concentrated in precision manufacturing, automotive electronics, machine tools, robotics and semiconductor ecosystems. Demand in these countries is not only for generic edge boxes; customers often require compact systems with long product lifecycles, thermal stability, high reliability, and compatibility with industrial protocols. In South Korea, semiconductor fabs, display plants, battery factories and electronics assembly sites are natural buyers of GPU-based inspection and process-monitoring systems. In Japan, robotics, factory automation and medical equipment are the stronger demand channels.
India is emerging as a demand market rather than only a software services base. In March 2024, the Government of India approved the IndiaAI Mission with an outlay of about ₹10,371 crore and a plan to build AI compute capacity of 10,000 or more GPUs through public-private partnership. Although this is primarily national AI compute infrastructure, it expands the pipeline of Indian AI models, startups and applied AI deployments. The downstream effect for the Edge AI GPU Systems Market is visible in smart manufacturing, public-sector analytics, healthcare screening, railways, traffic monitoring and telecom edge use cases where localized inference is more practical than centralized cloud processing.
Demand in Southeast Asia is smaller but increasingly relevant. Singapore is a regional hub for AI governance, financial services infrastructure and smart-city deployments. Malaysia, Vietnam and Thailand are gaining electronics manufacturing capacity, which increases demand for machine-vision systems, yield monitoring and factory automation. These countries are less likely to buy the highest-end edge GPU systems at scale immediately, but the installed base of cameras, sensors, robotics and industrial IoT is moving upward, creating mid-range demand for GPU modules and compact inference servers.
The Middle East is becoming a selective but high-value demand geography. The UAE and Saudi Arabia are investing in AI data centers, smart-city platforms, logistics automation, energy infrastructure monitoring and security analytics. Edge GPU systems are relevant here because many use cases involve real-time video, mobility, energy assets and public infrastructure. The region’s demand is less volume-driven than Asia Pacific but can be higher in system value because projects often require ruggedization, cybersecurity, integration services and centralized fleet management.
Application mix shows why GPU-based edge systems are gaining share over fixed-function accelerators
In 2026, the Edge AI GPU Systems Market is strongest where workloads are multimodal or frequently updated. Retailers use GPU systems for shelf analytics, checkout monitoring, loss prevention and customer-flow measurement. Logistics operators deploy them for package sorting, dock monitoring, robotic picking and yard safety. Hospitals use edge GPU systems for imaging assistance, ultrasound AI, patient monitoring and surgical video analytics. Manufacturers deploy them for defect detection, worker safety, process optimization and predictive maintenance.
The segment mix also explains why GPU systems are not being replaced by NPUs. NPUs are efficient for defined inference tasks, but GPU systems are preferred when customers need multiple models on the same box, rapid model retraining, high-resolution video streams, or support for generative AI at the edge. Gartner’s widely cited edge data forecast, referenced in Dell’s retail edge AI material, indicates that 75% of enterprise-managed data is expected to be created and processed outside centralized data centers or cloud environments. That shift strengthens the economic case for local GPU inference because sending all raw video, sensor and operational data to the cloud increases bandwidth cost and response time.
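The bandwidth argument above can be made concrete with a back-of-envelope calculation. The sketch below compares monthly uplink volume for streaming all camera feeds to the cloud versus sending only local-inference metadata; every figure in it (camera count, bitrates) is an illustrative assumption, not sourced data from this report.

```python
# Hypothetical back-of-envelope: cloud-streaming uplink vs. edge-inference
# uplink for a multi-camera site. All input figures are assumptions chosen
# for illustration, not data from the market report.

CAMERAS = 40          # assumed cameras at one retail/logistics site
BITRATE_MBPS = 4.0    # assumed per-camera 1080p H.264 stream
HOURS_PER_DAY = 24
DAYS_PER_MONTH = 30

def monthly_gb(mbps: float, streams: int) -> float:
    """Total monthly data volume in GB for continuous streams at `mbps` each."""
    seconds = HOURS_PER_DAY * 3600 * DAYS_PER_MONTH
    return mbps * streams * seconds / 8 / 1000  # Mbit -> MByte -> GB

# Sending raw video for every camera to the cloud:
raw_video = monthly_gb(BITRATE_MBPS, CAMERAS)

# With on-site GPU inference, only detection events and thumbnails leave
# the site; assume ~0.02 Mbps average per camera for that metadata.
metadata = monthly_gb(0.02, CAMERAS)

print(f"Cloud-streaming uplink: {raw_video:,.0f} GB/month")   # ~51,840 GB
print(f"Edge-inference uplink:  {metadata:,.1f} GB/month")    # ~259.2 GB
print(f"Reduction factor:       {raw_video / metadata:,.0f}x")  # 200x
```

Under these assumed numbers, local inference cuts uplink traffic by two orders of magnitude, which is the economic intuition behind the 75%-of-data-at-the-edge forecast cited above.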
Edge AI GPU Systems Market technology shift from compact inference boxes to multi-workload GPU platforms
The technology base of the Edge AI GPU Systems Market is changing from single-purpose acceleration toward configurable, software-defined edge compute. Earlier edge AI deployments were often limited to basic camera analytics, barcode recognition, predictive maintenance alerts, or simple object detection. In 2026, the requirement is broader: one edge system may need to run computer vision, sensor fusion, anomaly detection, speech models, warehouse robotics control, and a lightweight generative AI interface at the same site.
This is why GPU-based systems are gaining relevance against fixed-function accelerators. NPUs and ASICs are efficient for narrow workloads, but GPU systems provide greater flexibility when enterprises need to update models, run multiple neural networks, support mixed precision, and process video, text, LiDAR, radar, and machine data together. NVIDIA’s Jetson Thor module shows this direction clearly: 2,070 FP4 TFLOPS, 128 GB memory, and 40–130 W configurable power, with 7.5x higher AI compute and 3.5x better energy efficiency than AGX Orin. This specification moves embedded edge AI closer to robotics-grade inference rather than only camera analytics.
The other technical shift is the movement of data-center AI design principles into edge systems. Blackwell-class architectures are built for generative AI, inference, confidential AI, federated learning, and energy-efficient accelerated computing, and the same software approach is now being adapted for distributed edge environments. For the Edge AI GPU Systems Market, this means customers are no longer buying only “GPU boxes”; they are buying platforms that include CUDA or ROCm compatibility, containerized deployment, remote fleet management, security, thermal design, industrial I/O, and validated AI frameworks.
Edge AI GPU Systems becoming more modular across embedded, rugged and micro data center formats
The market is splitting into three technical formats. The first is embedded GPU modules used in robots, drones, autonomous mobile robots, medical devices, retail cameras, and industrial machines. These systems prioritize power efficiency, small form factor, and long lifecycle availability. The second is rugged edge GPU servers used in factories, transport corridors, mining, oil and gas, military vehicles, railways, and telecom edge sites. These systems prioritize thermal tolerance, vibration resistance, redundant storage, and secure networking. The third is micro edge data center infrastructure, where GPU nodes are deployed in telecom hubs, retail clusters, hospitals, ports, and enterprise campuses.
This technology evolution is directly tied to data movement. By 2026, large volumes of enterprise data are being generated outside centralized data centers, and video-heavy workloads are too expensive to continuously stream to cloud infrastructure. Edge GPU systems reduce uplink bandwidth, improve response time, and allow sensitive industrial or patient data to remain local. In healthcare, this supports imaging and surgical video analytics. In manufacturing, it supports defect detection and robot-cell control. In retail, it supports computer vision without sending every video stream to a remote cloud.
The most important technical improvement is not only higher TOPS or TFLOPS. It is workload consolidation. A factory that previously used one controller for machine vision, one industrial PC for predictive maintenance, and another server for safety analytics can now consolidate multiple AI workloads on a GPU-enabled edge system. This improves utilization but raises requirements for memory, virtualization, model isolation, cooling and orchestration.
Production dynamics show Asia as hardware base while the United States leads GPU architecture and software stacks
Production in the Edge AI GPU Systems Market is geographically layered. The United States dominates GPU architecture, AI software ecosystems, reference designs, and high-value platform control through companies such as NVIDIA, AMD, Intel, Dell, HPE, Supermicro and other server or edge infrastructure suppliers. However, physical manufacturing of boards, modules, enclosures, power systems and assembled edge servers remains heavily concentrated in Asia.
Taiwan is central because of semiconductor foundry leadership, advanced packaging, ODM server manufacturing, motherboard production and industrial PC ecosystems. Many edge GPU systems depend on Taiwanese design and manufacturing depth, even when sold under U.S., European or Japanese brands. South Korea contributes through memory, HBM, NAND storage, display electronics and semiconductor component supply. Japan remains important for industrial automation hardware, precision components, machine vision and robotics ecosystems. China remains a large electronics assembly base and a major domestic market for industrial edge AI, although export controls and localization policies are reshaping sourcing decisions.
SEMI’s semiconductor equipment billings data, released in 2026, confirms the broader manufacturing concentration behind this supply chain: China, Taiwan and Korea together accounted for 79% of global semiconductor equipment spending in 2025, up from 74% in 2024. This matters for edge AI GPU systems because GPU modules, advanced boards, power management, memory and rugged computing platforms depend on the same regional semiconductor and electronics manufacturing base.
China’s production and demand role is especially complex. It is both a large manufacturer of electronics systems and the largest industrial robotics demand market. The International Federation of Robotics reported that China represented 54% of global industrial robot installations in 2024, with 295,000 units installed and an operational stock above 2 million robots. This creates local demand for machine vision, robot control, autonomous logistics and industrial inspection systems, but restrictions on advanced AI chips push Chinese system builders toward domestic accelerators, lower-tier GPUs, and localized edge AI platforms.
Regional manufacturing concentration and supply-side risk in Edge AI GPU Systems Market
North America’s production role is strongest in system architecture, enterprise platforms, software orchestration and high-margin edge infrastructure. U.S.-based suppliers often control the platform roadmap, GPU ecosystem, server design and enterprise customer relationships. Actual production, however, frequently relies on Taiwan, Malaysia, Mexico, China, Vietnam and other electronics manufacturing locations. Mexico is gaining relevance for North American nearshoring of electronics assembly, especially for industrial, telecom and automotive edge systems where customers want shorter lead times and reduced tariff exposure.
Europe is not a volume manufacturing center for GPU modules, but it has strength in industrial edge integration, automation, automotive electronics, machine vision, rail, energy systems and defense-grade computing. Germany, France, Italy, Sweden and Finland are more relevant as system integrator and end-use engineering markets than as GPU production hubs. In December 2024, seven European AI Factory consortia were selected with EUR 1.5 billion in national and EU funding, and broader EuroHPC-linked investment is expected to reach EUR 10 billion over 2021–2027. This supports local AI model development and creates downstream demand for edge inference infrastructure in factories, healthcare systems and public-sector applications.
India is still a small producer of high-end edge GPU systems, but its market position is improving through electronics manufacturing incentives, AI compute programs and industrial digitization. The IndiaAI Mission was approved in March 2024 with ₹10,371.92 crore over five years and a public AI compute infrastructure target of 10,000 or more GPUs. A 2025 PIB update referenced more than 38,000 GPUs deployed under India’s AI compute expansion, which indicates faster capacity build-out than the initial minimum target. This does not make India a large GPU producer, but it increases local demand for edge AI systems in public services, manufacturing, mobility and healthcare.
Edge AI GPU Systems Market segmentation highlights by system type, deployment model and application
The Edge AI GPU Systems Market can be estimated at USD 8.5–9.5 billion in 2026 as the GPU-intensive portion of the broader edge AI hardware and edge AI infrastructure market. Broader edge AI hardware estimates vary by definition, with one 2026 estimate placing edge AI hardware at USD 11.15 billion and another placing it at USD 30.74 billion, depending on whether smartphones, AI PCs and device-level accelerators are included. For a narrower GPU-system view, the market is concentrated in embedded modules, rugged GPU servers and micro edge data center systems.
| Segment | Estimated 2026 share | Estimated 2026 value | Why the segment leads or lags |
| --- | --- | --- | --- |
| Embedded GPU modules and carrier-board systems | 32–36% | USD 2.8–3.3 billion | Strong use in robotics, drones, AMRs, medical devices and machine vision |
| Rugged industrial edge GPU servers | 26–30% | USD 2.3–2.8 billion | High demand from factories, energy, transport, defense and outdoor analytics |
| Enterprise branch and retail edge GPU appliances | 16–19% | USD 1.4–1.8 billion | Used for store analytics, local copilots, security and inventory intelligence |
| Telecom and micro edge data center GPU nodes | 13–16% | USD 1.1–1.5 billion | Supports distributed inference, private 5G and low-latency services |
| Developer kits and evaluation platforms | 4–6% | USD 350–550 million | Important for ecosystem growth but lower in system value |
By application, industrial automation and robotics form the largest demand block, with an estimated 28–32% share of the Edge AI GPU Systems Market in 2026. The share is supported by China’s robotics scale, Japan’s automation depth, South Korea’s electronics factories, and North American warehouse automation. Video analytics and smart infrastructure account for roughly 20–24%, helped by transport, retail, campus security and smart-city deployments. Healthcare and medical imaging represent 10–13%, but system value per deployment is high because of regulatory, reliability and data-governance requirements. Automotive and mobility account for 14–17%, led by ADAS validation, autonomous machines, in-vehicle compute and fleet analytics. Telecom and enterprise edge data centers contribute 12–15%, while defense, energy and other rugged applications account for the balance.
| Application segment | Estimated 2026 share | Key demand geography |
| --- | --- | --- |
| Industrial automation and robotics | 28–32% | China, Japan, South Korea, Germany, United States |
| Video analytics and smart infrastructure | 20–24% | United States, China, UAE, Singapore, Europe |
| Automotive, mobility and autonomous machines | 14–17% | United States, Germany, Japan, South Korea, China |
| Telecom and enterprise edge data centers | 12–15% | United States, Europe, India, Middle East |
| Healthcare and medical imaging edge AI | 10–13% | United States, Europe, Japan, South Korea |
| Defense, energy and rugged field systems | 8–11% | United States, Europe, Middle East, India |
The strongest segment growth through 2030 is expected in robotics, autonomous machines, and micro edge data centers. These applications require continuous model upgrades, higher memory bandwidth, and multi-sensor processing. Lower-cost vision-only deployments will remain price-sensitive, but high-compute edge inference will continue shifting toward GPU-based systems because buyers need flexible platforms rather than fixed-function hardware.
Competitive structure of Edge AI GPU Systems Market led by GPU platform control and industrial system integration
The Edge AI GPU Systems Market is moderately concentrated at the GPU-platform layer but more fragmented at the system-integration layer. NVIDIA holds the strongest position because most commercial edge GPU systems are built around Jetson, RTX, IGX, L4/L40S-class GPUs, and NVIDIA AI Enterprise software. In 2026, NVIDIA-linked platforms are estimated to support 65–70% of commercial GPU-accelerated edge AI system value, especially in robotics, machine vision, healthcare edge AI, retail analytics and industrial inference. This does not mean NVIDIA sells every finished edge system directly; rather, its modules, GPUs and software stack sit inside systems supplied by Advantech, ADLINK, AAEON, Supermicro, Dell, HPE, OnLogic and other OEMs.
NVIDIA’s product relevance is clearest in embedded AI and robotics. Jetson AGX Orin remains widely used in high-performance edge inference, offering up to 275 TOPS in systems such as Advantech’s MIC-733-AO. Jetson Thor raises the performance ceiling for robotics and physical AI, with up to 2,070 FP4 TFLOPS, 128 GB memory and configurable 40–130 W power. This gives NVIDIA a strong position in edge deployments where customers need real-time sensor fusion, generative AI at the edge, humanoid robotics, autonomous machines, or multi-camera perception.
Edge AI GPU Systems Market share by major system players and platform suppliers
Market share in this category is best assessed by system value, not only chip revenue, because finished edge AI GPU systems include GPU modules, carrier boards, rugged chassis, storage, networking, thermal design, orchestration software and services. NVIDIA is the dominant platform supplier, but finished-system share is distributed across industrial PC firms, server OEMs and edge infrastructure vendors.
| Company / supplier group | Estimated 2026 share by system value | Relevant products and positioning |
| --- | --- | --- |
| NVIDIA ecosystem platforms | 65–70% platform influence | Jetson Orin, Jetson Thor, RTX GPUs, IGX, NVIDIA AI Enterprise, Isaac, Metropolis, Holoscan |
| Advantech | 8–11% | MIC-733-AO, MIC-AI systems, industrial edge AI systems using Jetson Orin |
| ADLINK Technology | 6–8% | DLAP-211, DLAP-411-Orin, ROScube, AXE AI GPU servers |
| AAEON | 4–6% | BOXER and embedded AI systems based on NVIDIA Jetson modules, AI panel PCs |
| Supermicro | 5–7% | Compact edge servers, GPU edge AI systems, NVIDIA RTX-based edge platforms |
| Dell Technologies | 4–6% | Dell NativeEdge, Precision edge/workstation systems, edge orchestration with NVIDIA AI Enterprise |
| HPE and other enterprise OEMs | 3–5% | HPE edge-to-cloud and AI infrastructure systems, private AI and accelerated compute platforms |
| Others | 8–12% | OnLogic, Vecow, Neousys, Kontron, Axiomtek, Cincoze, IEI and regional industrial PC suppliers |
Advantech is one of the most visible industrial players in the Edge AI GPU Systems Market. Its MIC-733-AO is a compact fanless AI system based on NVIDIA Jetson AGX Orin, with up to 275 TOPS, multiple GbE ports, optional PoE, USB 3.2, mPCIe and remote monitoring support. Advantech also offers NEMA TS2-oriented edge AI inference systems such as MIC-733-TS, which supports traffic and outdoor infrastructure deployments with -34°C to 74°C operating temperature capability. These specifications are relevant for smart intersections, roadside analytics, factory safety and industrial vision rather than generic office AI workloads.
ADLINK competes through robotics, medical, industrial and server-class edge AI platforms. Its DLAP-211 development kit uses NVIDIA Jetson Orin NX or Orin Nano modules for robotics, manufacturing and retail proof-of-concept development. ADLINK also lists DLAP-411-Orin as an NVIDIA Jetson AGX Orin-powered edge AI inference platform, while its ROScube line targets robotics control. For healthcare edge AI, ADLINK’s MLB-IGX Medical Box PC is powered by NVIDIA IGX with NVIDIA RTX 6000, supporting low-latency inference in surgical and medical scenarios.
AAEON has a strong position in embedded and rugged edge computing. Its NVIDIA AI Solutions portfolio uses Jetson modules ranging from Jetson Nano to Jetson Thor, and the company has highlighted Jetson Orin Nano-based systems for smart retail and object classification applications. AAEON’s competitive position is strongest where customers need industrial form factors, long lifecycle availability, panel PCs, machine vision boxes and ruggedized edge hardware rather than large enterprise servers.
Supermicro is more important in edge server and micro data center formats. Its Edge AI Solutions portfolio targets retail, quick-service restaurants, manufacturing, healthcare, smart spaces and transportation. Supermicro’s compact edge servers support real-time AI processing close to operations, and its edge AI white paper gives a practical retail example using an NVIDIA RTX 6000 Ada GPU to analyze camera streams, customer movement and sales data on-site. This places Supermicro in the higher-value edge server segment rather than the low-power embedded module segment.
Dell Technologies participates through edge orchestration and enterprise-grade infrastructure. Dell NativeEdge is positioned as a full-stack edge operations platform with zero-touch onboarding, zero-trust security and workload orchestration. Dell also states that NativeEdge automates delivery of NVIDIA AI Enterprise, including NVIDIA NIM and other microservices, which is important for customers deploying production-grade AI across distributed stores, factories, clinics and campuses. In the Edge AI GPU Systems Market, Dell’s advantage is less about embedded modules and more about enterprise deployment, management and integration.
AMD and Intel are relevant but have smaller direct share in GPU-based edge AI systems. AMD’s embedded and server GPU ecosystem is stronger in enterprise and data-center acceleration than in rugged edge AI boxes, although ROCm and Instinct expansion can influence future micro edge data center deployments. Intel participates through CPUs, integrated graphics, OpenVINO, NPUs in Core Ultra platforms and edge servers, but many Intel-based edge systems still use discrete NVIDIA GPUs when high AI throughput is required.
Recent industry developments influencing competitive positioning
- In March 2025, ADLINK showcased edge AI and generative AI systems at NVIDIA GTC, including DLAP-211 for robotics, manufacturing and retail, and MLB-IGX Medical Box PC for real-time surgical AI inferencing.
- In March 2025, Advantech demonstrated MIC-733-AO Jetson AGX Orin systems integrated with NVIDIA Agent Studio and NanoOWL for factory safety applications such as mask and helmet recognition, including deployment in Advantech’s own factory.
- In May 2025, Dell highlighted NativeEdge support for GPU-driven enterprise clusters and distributed AI workloads, strengthening its role in managed edge infrastructure.
- In August 2025, NVIDIA made Jetson Thor available with 2,070 FP4 TFLOPS and 128 GB memory, lifting the compute ceiling for robotics and physical AI at the edge.
- In September 2025, Supermicro expanded its AI-optimized portfolio with newer NVIDIA GPU systems and liquid-cooling options that can reduce data center power consumption by up to 40%, supporting edge-to-data-center AI continuity.
“Every organization is different and so are their requirements” - Datavagyanik