Computers
AMD launches Ryzen 5000 desktop processors with major generational improvements
Available on November 5
AMD recently took the wraps off its Ryzen 5000 desktop processors. The new processors tout improvements across the board, from improved performance-per-watt to increased instructions-per-clock.
The processors still use the same 7nm node as the Ryzen 3000 CPUs. However, Ryzen 5000 boasts a new design that further reduces latency and gives cores more direct access to the L3 cache. These design improvements also translate to 24% more performance-per-watt for the new Ryzen processors compared to their predecessors.
Gamers stand to benefit the most from these improvements, with AMD touting a 19% generational increase in instructions-per-clock for Ryzen 5000 processors. The company says it is the largest single-generation increase since the introduction of the original Zen processors.
There are no Ryzen 4000 desktop processors, as AMD wants to streamline its processor offerings. Ryzen 4000 mobile processors, the ones used in laptops, are built on the Zen 2 architecture, while Ryzen 5000 desktop processors use Zen 3. The naming thus signifies generational improvements in both architecture and performance.
Headlining the Ryzen 5000 lineup is the top-of-the-line Ryzen 9 5950X. It comes with 16 cores and 32 threads, a base frequency of 3.4GHz, and a boost frequency of up to 4.9GHz. Its total cache is 72MB, and its TDP is rated at 105W.
Then, there’s the rest of the lineup. The AMD Ryzen 9 5900X has 12 cores, 24 threads, and a base frequency of 3.7GHz with a boost frequency of up to 4.8GHz. Its total cache is 70MB, and its TDP maxes out at 105W. For this processor, the company touts 7% faster 1080p gaming than its closest competitor, the Intel Core i9-10900K.
There’s also the AMD Ryzen 7 5800X, an 8-core, 16-thread CPU with a base frequency of 3.8GHz that boosts up to 4.7GHz. Its TDP is the same as the other higher-end processors at 105W, while its total cache maxes out at 36MB.
Finally, there’s the Ryzen 5 5600X. It is a 6-core, 12-thread CPU with a base frequency of 3.7GHz that maxes out at 4.6GHz. The total cache is 35MB. Its TDP is significantly lower than the other three’s, coming in at 65W.
Pricing and availability
All Ryzen 5000 processors will be available to order on November 5. The top-of-the-line Ryzen 9 5950X starts at US$ 799, while the Ryzen 9 5900X will retail for US$ 549. The Ryzen 7 5800X and Ryzen 5 5600X come in at more affordable prices of US$ 449 and US$ 299, respectively.
It’s worth noting that AMD will also launch the AMD Radeon RX 6000 GPUs sometime before the Ryzen 5000 processors hit the shelves. These GPUs are expected to use the RDNA 2 architecture, the same one used in the Xbox Series X and Sony PlayStation 5. An October 28 announcement is already set in stone for the Radeon GPUs.
Source: Niche Gamer and Neowin
Lenovo accelerates production-ready enterprise AI with NVIDIA
From AI inferencing to gigawatt-scale AI factories
Lenovo has unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments.
Building on the inferencing acceleration introduced at Lenovo Tech World, this next phase of hybrid AI execution expands the solutions from device to data center to gigawatt-scale AI cloud deployments.
This enables real-time decision-making, operational efficiency, and intelligent automation across industries at global scale. The solutions boost productivity, agility, and innovation by enabling faster AI deployment.
The development comes as AI shifts from training models to powering real-time decisions. As organizations will need infrastructure to support production-scale inferencing, Lenovo is prepared to address the demand for validated hybrid AI platforms built for that purpose.
In fact, Lenovo’s Hybrid AI Advantage with NVIDIA solutions are now delivering ROI in less than six months. The new inferencing-optimized ThinkSystem and ThinkEdge servers are being utilized for real-time inferencing across retail, manufacturing, healthcare, sports, and smart city scenarios.
The expanded portfolio includes:
- two Lenovo Hybrid AI platforms, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition and Blackwell Ultra
- Hybrid AI inferencing starter platform with RTX PRO 4500 Blackwell Server Edition
- Lenovo ThinkAgile HX650a with Nutanix Enterprise AI and Nutanix Kubernetes Platform
- Lenovo Hybrid AI platforms with Cloudian
Bringing inferencing directly to professionals
Lenovo and NVIDIA are bringing AI from development environments to real-world production at a global scale. This is thanks to new Lenovo AI inferencing platforms with NVIDIA Dynamo and NVIDIA NIM.
Meanwhile, Lenovo AI Cloud gigafactory platforms are powered by NVIDIA Vera Rubin NVL72. Industry-specific agentic AI solutions are also built with NVIDIA Blueprints and software.
For individual professionals, there are next-generation NVIDIA RTX PRO Blackwell-powered mobile and desktop workstations. These will be rolled out across the ThinkPad P14s Gen 7, ThinkPad P16s Gen 5, and ThinkPad P1 Gen 1 lineups.
ThinkStation P5 Gen 2 desktops, meanwhile, will get up to two RTX PRO 6000 Blackwell Max-Q GPUs. They will also have support for NVIDIA OpenShell.
For gigawatt-scale scenarios, the next-gen Vera Rubin platform accelerates deployment for hyperscale and sovereign AI cloud providers.
These fully liquid-cooled, rack-scale AI systems are engineered for faster deployment and dramatically improved token economics. They can achieve up to 10x higher throughput and up to 10x lower cost per token.
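The "cost per token" framing above is simple arithmetic: operating cost per unit of time divided by tokens generated in that time. A minimal Python sketch illustrates why a 10x throughput gain at roughly constant operating cost implies roughly 10x lower cost per token (all figures below are hypothetical, not Lenovo or NVIDIA numbers):

```python
# Illustrative token-economics arithmetic. The dollar and throughput
# figures are made up for demonstration; only the relationship matters.

def cost_per_token(cost_per_hour_usd: float, tokens_per_second: float) -> float:
    """Operating cost in USD per generated token."""
    tokens_per_hour = tokens_per_second * 3600
    return cost_per_hour_usd / tokens_per_hour

# Same hourly operating cost, 10x higher throughput...
baseline = cost_per_token(cost_per_hour_usd=40.0, tokens_per_second=5_000)
upgraded = cost_per_token(cost_per_hour_usd=40.0, tokens_per_second=50_000)

# ...yields 10x lower cost per token.
print(f"baseline: ${baseline:.2e}/token, upgraded: ${upgraded:.2e}/token")
print(f"improvement: {baseline / upgraded:.0f}x lower cost per token")
```

In practice the two effects compound: denser racks raise throughput while liquid cooling and better power efficiency hold the hourly operating cost down, which is how vendors arrive at headline "up to 10x" claims.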
CIPTA debuts AI GPU server, edge workstation at CloudFest 2026
Malaysia-made AI infrastructure
CIPTA Industrial Sdn Bhd steps onto the global stage with its European debut at CloudFest 2026, introducing high-density AI infrastructure and edge-ready systems built for modern enterprise workloads.
Held at Europa-Park in Rust, Germany from March 23 to 26, the event marks the company’s first major international showcase under its own brand. Backed by InWin Development Inc., CIPTA positions itself as a new-generation EMS provider focused on AI, cloud, and enterprise systems.
At Booth R41, the company is highlighting two key platforms: the RG658 PRO GPU server developed with Phison, and the cubePRO edge workstation created in collaboration with Accordance.
Built for scalable AI workloads
Leading the showcase is the RG658 PRO, a high-density GPU server designed to handle large-scale AI training and inference without pushing costs out of reach for enterprises.
The system supports up to eight high-performance GPUs and integrates Phison’s aiDAPTIV technology alongside its PASCARI enterprise SSD lineup. This combination aims to improve data throughput, reduce latency, and streamline AI pipelines.
Thermal performance is a key focus. The RG658 PRO uses a dual-chamber design to separate heat zones, paired with up to 14 high-speed PWM fans for sustained cooling under heavy workloads. Power delivery is handled by a 3+1 redundant configuration of 80PLUS Titanium PSUs, scaling up to 9600W.
The result is a platform built to scale AI deployments on-site while maintaining efficiency and reliability.
Edge computing without downtime
Alongside its GPU server, CIPTA is introducing the cubePRO, a compact edge workstation designed for environments where uptime and data integrity are critical.
The system supports up to four PCIe slots for GPU configurations, making it suitable for AI workloads at the edge. It also features high-capacity multi-SSD setups and optimized airflow for continuous 24/7 operation.
Through its partnership with Accordance, the cubePRO integrates the Disk Array ARAID M500 solution, enabling high-availability storage and data protection. This ensures uninterrupted performance for use cases such as industrial systems, remote nodes, and enterprise branch deployments.
The focus here is clear: bring AI processing closer to where data is generated, without sacrificing reliability.
Strengthening Malaysia’s role in AI infrastructure
CIPTA’s debut also reflects a broader shift in global supply chains. Operating from Malaysia, the company offers end-to-end services—from concept to production—along with flexible manufacturing cycles and cost-efficient operations tailored for Southeast Asia and international markets.
With access to InWin’s server chassis ecosystem and infrastructure solutions, CIPTA combines global platform capabilities with localized integration. The goal is to help enterprises deploy AI and cloud infrastructure faster while diversifying their supply chain footprint.
As demand for AI systems continues to grow, CIPTA is positioning Malaysia as a key hub for scalable, production-ready infrastructure.
Visitors can find CIPTA at Booth R41 during CloudFest 2026 in Europa-Park, Rust, Germany.
AMD expands Ryzen AI Embedded P100 series lineup
Scalable, efficient AI compute for industrial, edge solutions
AMD has recently announced the expansion of its AMD Ryzen AI Embedded P100 Series processor lineup.
This enables scalable, power-efficient AI compute purpose-built for industrial and edge AI systems. Scenarios include factory automation, physical AI in mobile robotics, and other AI-driven edge applications.
With eight to 12 high-performance Zen 5 cores, AMD ROCm support, and up to 80 total system TOPS, the new x86 embedded APUs deliver up to:
- 2x more CPU core counts
- 8x higher GPU compute
- 36% higher system TOPS
This way, developers and system designers get an expanded, scalable portfolio of power-efficient edge computing solutions. These processors support real-time AI spanning vision, control, and reasoning, while also offering advanced graphics capabilities.
On a single chip, clients get up to 80 TOPS physical AI acceleration, AMD RDNA 3.5 graphics for real-time visualization, and an NPU based on the AMD XDNA 2 architecture.
Moreover, the processors can withstand industrial temperature ranges (-40°C to 105°C) and can support continuous 24/7 operation over life cycles of up to 10 years, along with low-latency, power-efficient AI inference.
Real-life applications include intelligent factories, autonomous robots, and medical imaging devices. For instance, the processors can deliver CPU performance required for real-time inspection and process optimization.
For mobile robots, meanwhile, the processors can manage navigation, motion control, and route planning while the GPU processes multi-camera feeds for spatial awareness.
Furthermore, for 3D health imaging, the processors can power 3D imaging for ultrasounds, endoscopes, tissue classification, and tumor detection at the edge, using models like U-Net, nnU-Net, and MONAI.
The processors then accelerate image-to-report workflows with MedSigLIP and support clinical reasoning and Q&A with Med-PaLM 2.