
Computers

XPG Mars 980 Blade review: High-speed storage that finally makes sense

Fast, reliable, and no longer offensively expensive


A few years ago, adding high-speed storage to your PS5 or gaming rig meant coughing up a small fortune — sometimes nearly half the price of the console itself. Back then, PCIe Gen4 NVMe drives were bleeding-edge tech, and prices reflected that. But now that the dust has settled, more affordable options are emerging without compromising performance. One of those options is the XPG Mars 980 Blade, and it’s a drive that gets the job done without the unnecessary frills.

We tested the 980 Blade primarily as a PS5 storage expansion, and the experience was so seamless, it was almost forgettable — in the best way. Games launched quickly, performance remained stable, and the system treated it as if it were native storage. That’s a high compliment for any aftermarket SSD.

But beyond raw performance, what makes the Mars 980 Blade stand out is how reasonably priced it is. The 1TB model goes for $135 USD, while the 2TB version has an SRP of $205 USD. That’s a far cry from the $400+ Gen4 SSD we purchased just a year into the PS5’s life cycle. In many ways, this is the drive we wished we had back then.

Designed to disappear (in a good way)

The Mars 980 Blade isn’t flashy, and that’s part of its appeal. There’s no RGB, no oversized heat pipes, and no branding overload. Instead, you get a low-profile aluminum heatsink that makes it compatible with the PS5’s expansion slot — no modification required. It slid in easily and was recognized by the system almost instantly. The included heatsink keeps things cool enough without getting in the way, which also makes it a strong contender for slim laptops or handhelds.

There’s something refreshing about a component that’s meant to be installed once and forgotten. The Mars 980 Blade leans into that philosophy — performance where it matters, no extra distractions.

Real-world PS5 use

The real test was daily usage. We moved a few of our heavier titles — including Call of Duty: Black Ops 6 and Spider-Man 2 — from the PS5’s internal SSD onto the Mars 980 Blade. The result? No noticeable difference in performance. Boot times, in-game loading, and texture streaming all felt identical.

This is what makes the Mars 980 Blade a smart buy. You don’t have to sacrifice performance for price. And if you’ve been juggling which games to keep installed, this SSD becomes an easy way to expand your library without babysitting your storage every week.

It also helps that the Mars 980 Blade supports sequential read speeds of up to 7400 MB/s — more than enough to meet Sony’s requirements for PS5-compatible drives. And for those who care about durability, XPG uses TLC NAND, which typically offers better endurance and sustained speed than cheaper QLC-based alternatives.
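Rated sequential speeds like 7400 MB/s are easy to sanity-check on a PC before the drive goes into a console. A minimal sketch in Python (note that the OS page cache inflates results for small or recently written files, so dedicated tools like fio or CrystalDiskMark give more trustworthy numbers):

```python
import os
import tempfile
import time

def seq_read_mb_s(path: str, block_size: int = 1 << 20) -> float:
    """Read `path` sequentially in 1 MB chunks and return throughput in MB/s."""
    start = time.perf_counter()
    total = 0
    with open(path, "rb") as f:
        while chunk := f.read(block_size):
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Demo against a small temporary file. This mostly measures the OS
# page cache; point `path` at a multi-gigabyte file on the target
# drive for a meaningful result.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(os.urandom(64 << 20))  # 64 MB of random data
    path = f.name

print(f"{seq_read_mb_s(path):.0f} MB/s")
os.remove(path)
```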

What about gaming laptops and handhelds?

While we focused our testing on the PS5, it’s not hard to project how this SSD will perform on modern gaming laptops and handhelds like the Legion Go 7 Gen9. With its PCIe Gen4 x4 interface and slim thermal profile, the Mars 980 Blade should integrate seamlessly into most M.2 slots — even in space-constrained builds.

Games installed on internal NVMe drives typically launch faster, stream assets more smoothly, and reduce system hiccups, especially in open-world or live-service titles. That’s exactly where the Mars 980 Blade fits in. Whether you’re booting into Windows or loading up a massive game like Starfield or Cyberpunk 2077, you’ll benefit from the fast sequential speeds and consistent write performance.

Plus, unlike relying on microSD cards or external drives, having a fast internal Gen4 SSD means you’re getting the full performance potential of your hardware. For power users who multitask with game capture, editing, or large file transfers, the Mars 980 Blade should hold its own.

Price vs Performance: A turning point

Let’s talk value — because that’s where this drive really shines. When the PS5 launched, the earliest compatible SSDs often sold for well over $300 to $400 USD, especially if you wanted 2TB with a heatsink. Now, we’re finally seeing drives like the Mars 980 Blade deliver that same tier of performance at nearly half the cost.

At $205 USD for 2TB, this drive undercuts many premium-branded options while offering the same Gen4 speed, excellent thermal control, and broad compatibility. You’re not paying for flashy marketing — just solid performance. For comparison, popular alternatives like the WD_BLACK SN850X or Samsung 980 Pro may come with slightly faster random read/write speeds or longer warranties, but for everyday gaming use, the difference is marginal at best.
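The value claim is easy to put in numbers with cost per gigabyte, using only the prices quoted in this review (street prices fluctuate, so treat these as rough reference points):

```python
# Cost per gigabyte, using the prices quoted in this review.
# (Street prices fluctuate; the "launch-era" entry is the ~$400
# 2TB price mentioned above, included as a reference point.)
drives = {
    "Mars 980 Blade 1TB": (135, 1000),   # (price USD, capacity GB)
    "Mars 980 Blade 2TB": (205, 2000),
    "Launch-era 2TB Gen4": (400, 2000),
}

for name, (usd, gb) in drives.items():
    print(f"{name}: ${usd / gb:.4f}/GB")
# The 2TB Blade works out to roughly $0.10/GB, about half the
# launch-era ~$0.20/GB.
```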

The Mars 980 Blade hits a practical sweet spot for gamers who just want more room to play — whether they’re on a console, laptop, or handheld.

Is the XPG Mars 980 Blade your storage expansion match?

The XPG Mars 980 Blade isn’t trying to reinvent storage. Instead, it focuses on what matters: delivering fast, stable performance at a price that makes sense in 2025. Whether you’re upgrading your PS5 or adding more muscle to a handheld gaming PC, this drive holds its own.

It may not come with flashy extras or aggressive marketing claims, but it doesn’t need to. It performs exactly how you want it to — and at a price that doesn’t feel like a punch to the gut.

The XPG Mars 980 Blade is one of the best-value Gen4 SSDs for console and PC gamers who just want to play more — and worry less.



MINIX launches T4000, T5000 Generative AI Mini Workstations

For businesses and creators


MINIX has launched the T4000 and T5000 Generative AI Mini Workstations.

These powerful and space-saving solutions are built for professional generative AI, local large language model (LLM) inference, content creation, on-premise enterprise deployment, and lightweight model training.

The desktops are powered by the NVIDIA Jetson AGX Thor series modules with flagship Blackwell architecture. As such, they deliver exceptional on-device AI horsepower in a small desktop form factor.

The build features a durable metal-and-plastic chassis, plus a twin-turbo intercooler for sustained performance.

The new offerings are engineered for professionals, developers, creators, and IT teams, redefining edge and on-premise AI without bulky server hardware.

At the core of the T4000 and T5000 is NVIDIA’s cutting-edge compute platform:

  • T4000: Up to 1200 Sparse FP4 TFLOPs AI performance
  • T5000: Up to 2070 Sparse FP4 TFLOPs AI performance
  • Blackwell GPU with 1,536 to 2,560 cores and fifth-generation Tensor Cores
  • Multi-Instance GPU (MIG) for parallel task efficiency
  • NVIDIA PVA 3.0 dedicated vision processing engine

The workstations natively support smooth local inference for 7B-70B parameter LLMs. This makes private, low-latency AI accessible for businesses and creators.

In addition, the offerings pair high-core-count Arm processing with large, fast memory: up to 128GB of DDR5 on a 12-core or 14-core Arm Neoverse-V3AE 64-bit CPU.
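A back-of-the-envelope check shows why 128GB of memory lines up with that 7B-70B range, and why quantization matters at the top end. This sketch counts weight storage only; the KV cache, activations, and runtime overhead come on top, so these are lower bounds:

```python
# Approximate weight-only memory footprint of an LLM:
# parameters x bytes per parameter. The KV cache and activations
# add more on top, so treat these figures as lower bounds.
BYTES_PER_PARAM = {"FP16": 2.0, "INT8": 1.0, "FP4": 0.5}

def weights_gb(params_billions: float, dtype: str) -> float:
    return params_billions * 1e9 * BYTES_PER_PARAM[dtype] / 1e9

for model in (7, 70):
    for dtype in ("FP16", "INT8", "FP4"):
        gb = weights_gb(model, dtype)
        fits = "fits" if gb < 128 else "exceeds"
        print(f"{model}B @ {dtype}: ~{gb:.0f} GB ({fits} 128 GB)")
# A 70B model at FP16 (~140 GB) exceeds 128 GB, but INT8 (~70 GB)
# or FP4 (~35 GB) leaves headroom for the KV cache.
```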

Designed for professional workflows, the mini workstations also include enterprise-grade networking and flexible expansion:

  • Dual 10GbE Ethernet
  • Wi-Fi 6E
  • Bluetooth 5.3
  • 2x HDMI 2.1 TMDS (4K@60Hz)
  • 4x USB 3.2 Gen 1 Type-A
  • 1x USB 3.2 Gen 2 Type-C
  • 24V DC input, up to 200W max power

Ideal use cases for the MINIX T4000 and T5000 include local LLM inference, generative AI creation, on-device AI computing, and lightweight model training.



Lenovo accelerates production-ready enterprise AI with NVIDIA

From AI inferencing to gigawatt-scale AI factories


Lenovo has unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments.

Building on the inferencing acceleration introduced at Lenovo Tech World, this next phase of hybrid AI execution expands the solutions from device to data center to gigawatt-scale AI cloud deployments.

This enables real-time decision-making, operational efficiency, and intelligent automation across industries at global scale. The solutions boost productivity, agility, and innovation by enabling faster AI deployment.

The development comes as AI moves from training models to powering real-time decisions. As organizations will need infrastructure built for production-scale inferencing, Lenovo is prepared to address that demand with validated hybrid AI platforms.

In fact, Lenovo’s Hybrid AI Advantage with NVIDIA solutions are now delivering ROI in less than six months. The new inferencing-optimized ThinkSystem and ThinkEdge servers are being utilized for real-time inferencing across retail, manufacturing, healthcare, sports, and smart city scenarios.

The expanded portfolio includes:

  • two Lenovo Hybrid AI platforms, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition and Blackwell Ultra
  • Hybrid AI inferencing starter platform with RTX PRO 4500 Blackwell Server Edition
  • Lenovo ThinkAgile HX650a with Nutanix Enterprise AI and Nutanix Kubernetes Platform
  • Lenovo Hybrid AI platforms with Cloudian

Bringing inferencing directly to professionals

Lenovo and NVIDIA are bringing AI from development environments to real-world production at a global scale. This is thanks to new Lenovo AI inferencing platforms with NVIDIA Dynamo and NVIDIA NIM.

Meanwhile, Lenovo AI Cloud gigafactory platforms are powered by NVIDIA Vera Rubin NVL72. Industry-specific agentic AI solutions are also built with NVIDIA Blueprints and software.

For consumers, there are next-generation NVIDIA RTX Pro Blackwell-powered mobile and desktop workstations. These will be rolled out across the ThinkPad P14s Gen 7, ThinkPad P16s Gen 5, and ThinkPad P1 Gen 1 lineups.

ThinkStation P5 Gen 2 desktops, meanwhile, will get up to two RTX PRO 6000 Blackwell Max-Q GPUs. They will also have support for NVIDIA OpenShell.

For gigawatt-scale scenarios, the next-gen Vera Rubin platform accelerates deployment for hyperscale and sovereign AI cloud providers.

These fully liquid-cooled, rack-scale AI systems are engineered for faster deployment and dramatically improved token economics. They can achieve up to 10x higher throughput and up to 10x lower cost per token.



CIPTA debuts AI GPU server, edge workstation at CloudFest 2026

Malaysia-made AI infrastructure


CIPTA Industrial Sdn Bhd steps onto the global stage with its European debut at CloudFest 2026, introducing high-density AI infrastructure and edge-ready systems built for modern enterprise workloads.

Held at Europa-Park in Rust, Germany from March 23 to 26, the event marks the company’s first major international showcase under its own brand. Backed by InWin Development Inc., CIPTA positions itself as a new-generation EMS provider focused on AI, cloud, and enterprise systems.

At Booth R41, the company is highlighting two key platforms: the RG658 PRO GPU server developed with Phison, and the cubePRO edge workstation created in collaboration with Accordance.

Built for scalable AI workloads

Leading the showcase is the RG658 PRO, a high-density GPU server designed to handle large-scale AI training and inference without pushing costs out of reach for enterprises.

The system supports up to eight high-performance GPUs and integrates Phison’s aiDAPTIV alongside its Pascari enterprise SSD lineup. This combination aims to improve data throughput, reduce latency, and streamline AI pipelines.

Thermal performance is a key focus. The RG658 PRO uses a dual-chamber design to separate heat zones, paired with up to 14 high-speed PWM fans for sustained cooling under heavy workloads. Power delivery is handled by a 3+1 redundant configuration of 80PLUS Titanium PSUs, scaling up to 9600W.
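The power spec implies some useful arithmetic. Assuming the quoted 9600W is the combined capacity of all four supplies in the 3+1 configuration (an assumption on our part; the datasheet may define it differently), each PSU delivers 2400W, and the system can still supply 7200W with one unit failed:

```python
# 3+1 redundant PSU arithmetic, assuming the quoted 9600W is the
# combined capacity of all four supplies (a hypothetical reading of
# the spec; the actual datasheet may define the figure differently).
total_psus = 4                 # 3 active + 1 redundant
combined_watts = 9600
per_psu = combined_watts / total_psus                 # watts per supply
usable_with_failure = per_psu * (total_psus - 1)      # watts after one failure

print(f"Per PSU: {per_psu:.0f} W")
print(f"Usable after one failure: {usable_with_failure:.0f} W")
```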

The result is a platform built to scale AI deployments on-site while maintaining efficiency and reliability.

Edge computing without downtime

Alongside its GPU server, CIPTA is introducing the cubePRO, a compact edge workstation designed for environments where uptime and data integrity are critical.

The system supports up to four PCIe slots for GPU configurations, making it suitable for AI workloads at the edge. It also features high-capacity multi-SSD setups and optimized airflow for continuous 24/7 operation.

Through its partnership with Accordance, the cubePRO integrates the Disk Array ARAID M500 solution, enabling high-availability storage and data protection. This ensures uninterrupted performance for use cases such as industrial systems, remote nodes, and enterprise branch deployments.

The focus here is clear: bring AI processing closer to where data is generated, without sacrificing reliability.

Strengthening Malaysia’s role in AI infrastructure

CIPTA’s debut also reflects a broader shift in global supply chains. Operating from Malaysia, the company offers end-to-end services—from concept to production—along with flexible manufacturing cycles and cost-efficient operations tailored for Southeast Asia and international markets.

With access to InWin’s server chassis ecosystem and infrastructure solutions, CIPTA combines global platform capabilities with localized integration. The goal is to help enterprises deploy AI and cloud infrastructure faster while diversifying their supply chain footprint.

As demand for AI systems continues to grow, CIPTA is positioning Malaysia as a key hub for scalable, production-ready infrastructure.

Visitors can find CIPTA at Booth R41 during CloudFest 2026 in Europa-Park, Rust, Germany.

