Enterprise

Nvidia will now let you rent a DGX Station A100 mini supercomputer

It’s not meant for gaming though


Today, many services are based on a subscription model, whether it’s music streaming or ordering monthly coffee brew packs. Even the gaming industry is gradually moving to a subscription-based business. So, what’s left? How about subscribing to a plan that gives you access to a supercomputer?

Nvidia is trying to bring that trend to the supercomputer world by selling access via a subscription model. The equipment is costly and requires a lot of upfront investment, which discourages smaller companies and individual developers.

Its DGX Station A100 is a new cloud-native supercomputer that delivers 2.5 petaflops of AI training power and 5 petaOPS of INT8 inferencing horsepower. It is also notable for supporting MIG (Multi-Instance GPU) technology, which partitions each GPU into isolated instances so multiple workloads can run in parallel. The computing resources can be shared among up to 28 scientists at once.
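As a rough sketch of how that 28-way sharing arithmetic works, assuming the DGX Station A100's four A100 GPUs and MIG's limit of seven instances per GPU (neither figure is stated in the article):

```python
# Sketch: how MIG partitioning yields 28 shareable instances.
# Assumes four A100 GPUs per DGX Station A100 and a maximum of
# seven MIG instances per GPU (figures not stated in the article).
GPUS_PER_STATION = 4
MAX_MIG_INSTANCES_PER_GPU = 7

def max_concurrent_users(gpus: int, instances_per_gpu: int) -> int:
    """Each MIG instance is an isolated GPU slice one user can own."""
    return gpus * instances_per_gpu

print(max_concurrent_users(GPUS_PER_STATION, MAX_MIG_INSTANCES_PER_GPU))  # 28
```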

Each A100 system has dual AMD EPYC 7742 CPUs with 64 cores each, supports up to 2TB of memory, and has eight A100 GPUs.

A DGX SuperPod, on the other hand, is an AI supercomputer built from 20 or more Nvidia DGX A100 systems connected over Nvidia InfiniBand HDR networking. With it, Nvidia intends to open AI to more enterprise customers for drug research, autonomous vehicles, and more.

The bare-metal server features 80 GB A100 Tensor Core GPUs, delivering 25 percent faster inference and two times faster data analytics performance. This rig clearly isn't meant for gaming; it is designed for research, complex calculations, and content creation.

It’s the first time Nvidia is trying a subscription model, and it genuinely makes a lot of sense. DGX Stations start at US$ 149,000, while the DGX SuperPod starts at US$ 7 million and scales to US$ 60 million. This makes it a herculean task for a small team to source the gear. A subscription starts at US$ 9,000 a month, and even though it may sound like a lot for a “processor,” it isn’t.
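A back-of-the-envelope comparison of renting versus buying, using the list prices above (the break-even point is an illustration, not Nvidia's math):

```python
# Back-of-the-envelope: months of $9,000/month rental before the
# cumulative cost exceeds the $149,000 DGX Station purchase price.
PURCHASE_PRICE = 149_000   # DGX Station list price (USD)
MONTHLY_RENT = 9_000       # subscription price (USD/month)

months_to_break_even = PURCHASE_PRICE / MONTHLY_RENT
print(f"{months_to_break_even:.1f} months")  # 16.6 months
```

In other words, a team that needs the hardware for roughly a year and a half or less comes out ahead by renting.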

Computers

Lenovo accelerates production-ready enterprise AI with NVIDIA

From AI inferencing to gigawatt-scale AI factories


Lenovo has unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments.
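Time-to-first-token (TTFT) is the latency from submitting a prompt until the first output token arrives. A minimal way to measure it against any token stream; the generator here is a stand-in, not Lenovo's or NVIDIA's stack:

```python
import time

def time_to_first_token(token_stream):
    """Measure seconds from request to the first yielded token.

    `token_stream` is any iterable of tokens; a real deployment
    would stream from an inference server instead of this stand-in.
    """
    start = time.perf_counter()
    first = next(iter(token_stream))
    return time.perf_counter() - start, first

def dummy_model():
    # Stand-in generator: pretend the model "thinks" briefly.
    time.sleep(0.05)
    yield "Hello"
    yield "world"

ttft, token = time_to_first_token(dummy_model())
print(f"TTFT: {ttft * 1000:.0f} ms, first token: {token!r}")
```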

Building on the inferencing acceleration introduced at Lenovo Tech World, this next phase of hybrid AI execution expands the solutions from device to data center to gigawatt-scale AI cloud deployments.

This enables real-time decision-making, operational efficiency, and intelligent automation across industries at global scale. The solutions boost productivity, agility, and innovation by enabling faster AI deployment.

The development comes as AI is seen moving from training models to powering real-time decisions. As organizations will need infrastructure to support production-scale inferencing, Lenovo is prepared to address the demand for validated hybrid AI platforms.

In fact, Lenovo’s Hybrid AI Advantage with NVIDIA solutions are now delivering ROI in less than six months. The new inferencing-optimized ThinkSystem and ThinkEdge servers are being utilized for real-time inferencing across retail, manufacturing, healthcare, sports, and smart city scenarios.

The expanded portfolio includes:

  • two Lenovo Hybrid AI platforms, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition and Blackwell Ultra
  • Hybrid AI inferencing starter platform with RTX PRO 4500 Blackwell Server Edition
  • Lenovo ThinkAgile HX650a with Nutanix Enterprise AI and Nutanix Kubernetes Platform
  • Lenovo Hybrid AI platforms with Cloudian

Bringing inferencing directly to professionals

Lenovo and NVIDIA are bringing AI from development environments to real-world production at a global scale. This is thanks to new Lenovo AI inferencing platforms with NVIDIA Dynamo and NVIDIA NIM.

Meanwhile, Lenovo AI Cloud gigafactory platforms are powered by NVIDIA Vera Rubin NVL72. Industry-specific agentic AI solutions are also built with NVIDIA Blueprints and software.

For individual professionals, there are next-generation NVIDIA RTX PRO Blackwell-powered mobile and desktop workstations. These will be rolled out across the ThinkPad P14s Gen 7, ThinkPad P16s Gen 5, and ThinkPad P1 Gen 1 lineups.

ThinkStation P5 Gen 2 desktops, meanwhile, will get up to two RTX PRO 6000 Blackwell Max-Q GPUs. They will also have support for NVIDIA OpenShell.

For gigawatt-scale scenarios, the next-gen Vera Rubin platform accelerates deployment for hyperscale and sovereign AI cloud providers.

These fully liquid-cooled, rack-scale AI systems are engineered for faster deployment and dramatically improved token economics. They can achieve up to 10x higher throughput and up to 10x lower cost per token.


Automotive

How the Ford Ranger is powering community resilience

Through machine and technology, Ford Philanthropy is helping Gawad Kalinga bridge the gap for remote communities.


Strong communities aren’t just built with bricks and mortar. They are sustained by the hands that reach out and the wheels that get them there.

For Gawad Kalinga (GK), reaching the most isolated provinces in the Philippines is often the biggest hurdle to delivering hope.

To bridge this gap, Ford Philanthropy and Ford Philippines recently handed over the keys to a brand-new Ford Ranger Sport 4×4.

During the launch of the “Ford Building Together” initiative at the GK Headquarters in Mandaluyong, the Ranger was introduced as a vital partner for GK’s nationwide relief operations.

The Ranger provides the performance and off-road capability needed when every second counts.

More than a mission

“Strong communities are built through strong partnerships,” said Mary Culler, President of Ford Philanthropy.

Alongside Pedro Simoes, Managing Director of Ford Philippines, Culler highlighted how this initiative unites dealers, employees, and owners.

It’s a collective effort to scale the heart of what Ford does: moving people forward.

Through Operation Walang Iwanan, Ford has already equipped disaster response hubs across six regions with essential tech: from Starlink Mini satellite internet kits and EcoFlow solar power stations to water filtration systems.

Between 2024 and 2025, these tools supported over 11,500 individuals through fires and natural disasters.

Investing in the everyday

The impact stretches into the daily moments of community life. Since 2015, Ford’s partnership with GK has reached 15,000 patients through medical missions and trained 1,100 health champions.

Through the Kusina ng Kalinga program, children receive the nutrition they need to stay focused in school. Meanwhile, the new READ program provides 12 weeks of literacy support for students in Caloocan.

Even food security is getting a tech-driven boost. Ford has renewed its collaboration with Scholars of Sustenance Philippines, using mobility to rescue surplus food. It is then redistributed to families experiencing hunger in Nueva Vizcaya.

In the end, technology lives inside these real moments. By combining grassroots action with reliable mobility, Ford and Gawad Kalinga are ensuring that no community is ever truly out of reach.


Enterprise

AMD poised to lead agentic AI era with high-performance CPUs


AMD is prepared to lead the industry in its agentic AI era with its high-performance CPU strategy.

As the industry pivots from simple AI models to agentic AI systems that are capable of independent planning and decision-making, the CPU is reclaiming its role as the critical “head coach” of the data center.

This was noted by AMD CEO and Chair Dr. Lisa Su during the AMD Advancing AI event last year. The rise of autonomous agents has transformed inference into a complex and multi-step workflow that demands sophisticated logic and orchestration.

And while high-performance GPUs are necessary to generate insights in real time, the surrounding infrastructure is just as important.

This is where CPUs enter the picture. Their performance and efficiency are more important than ever in the overall performance of modern AI infrastructure.

And AMD delivers an advantage with its offerings. In recently published data, a 5th Gen AMD EPYC CPU-based system is estimated to perform up to 2.1x better per core than an NVIDIA Grace Superchip-based system.

The same AMD-based system also delivers up to a 2.26x uplift on SPECpower, which measures operations per watt.
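To see what those multipliers imply for sizing a deployment, here is a hedged sketch. Only the 2.1x per-core and 2.26x performance-per-watt ratios come from the figures above; the workload target and baseline numbers are invented placeholders:

```python
# Sketch: what the quoted uplifts imply for sizing. Only the 2.1x
# per-core and 2.26x perf-per-watt ratios come from the article;
# the baseline core count and power draw are invented placeholders.
PER_CORE_UPLIFT = 2.1
PERF_PER_WATT_UPLIFT = 2.26

baseline_cores_needed = 1_000   # cores on the Grace-based system
baseline_power_kw = 20.0        # power draw for the same workload

epyc_cores_needed = baseline_cores_needed / PER_CORE_UPLIFT
epyc_power_kw = baseline_power_kw / PERF_PER_WATT_UPLIFT

print(f"cores: {baseline_cores_needed} -> {epyc_cores_needed:.0f}")
print(f"power: {baseline_power_kw} kW -> {epyc_power_kw:.1f} kW")
```

The same throughput would, under these assumptions, need roughly half the cores and well under half the power.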

The x86 CPU architecture gives customers the advantage of a broad, proven software ecosystem that can run existing workloads natively.

This avoids the costly refactoring and code-base duplication often required when switching to Arm-based alternatives.

Looking ahead, AMD is doubling down on the balanced system philosophy. Future architectures such as the “Venice” CPUs will power the “Helios” rack-scale AI design.

By integrating EPYC CPUs with Instinct GPUs and the ROCm software stack, AMD aims to maximize cluster-level performance and lower the total cost of ownership in the agentic era.

