Computers

All new Mac mini M4, M4 Pro now official

First carbon neutral Mac

The Week of Mac continues! Following the release of the new M4 iMac is the all-new Mac mini. Predictably, it's powered by M4 chips: specifically, the M4 and the M4 Pro. Beyond running the latest M-series processors, it also has the distinction of being the first carbon neutral Mac.

Pint-sized Powerhouse

For the unfamiliar, the Mac mini is Apple's take on the mini PC: a small desktop you connect to your own peripherals. Think of it as a tiny desktop PC, except it's a Mac.

This one in particular measures just 5×5 inches, less than half the size of its predecessor. Some might say that's not very sizable, and that's exactly the point. Don't be discouraged, because even at this size, it's a powerhouse when it comes to performance.

The Mac mini with M4 features a 10-core CPU and a 10-core GPU, and it now starts with 16GB of unified memory. What does that mean for you? Breezy multitasking, from everyday productivity apps to creative projects like video editing, music production, and writing and compiling code.

When compared to the Mac mini with Intel Core i7, Mac mini with M4:

  • Applies up to 2.8x more audio effect plugins in a Logic Pro project.
  • Delivers up to 13.3x faster gaming performance in World of Warcraft: The War Within.
  • Enhances photos with up to 33x faster image upscaling performance in Photomator.

When compared to the Mac mini with M1, Mac mini with M4:

  • Performs spreadsheet calculations up to 1.7x faster in Microsoft Excel.
  • Transcribes with on-device AI speech-to-text up to 2x faster in MacWhisper.
  • Merges panoramic images up to 4.9x faster in Adobe Lightroom Classic.
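Multipliers like these are easier to feel as time saved. A quick sketch (the one-hour baseline and the `faster` helper are illustrative, not from any benchmark):

```python
def faster(old_minutes: float, speedup: float) -> float:
    """Time a task takes after an N-times speedup, in minutes."""
    return old_minutes / speedup

# A hypothetical one-hour workload, replayed at two of the
# claimed multipliers above (13.3x gaming, 33x image upscaling):
print(round(faster(60, 13.3), 1))  # 4.5 minutes
print(round(faster(60, 33), 1))    # 1.8 minutes
```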

Naturally, the M4 Pro is even more powerful. It has up to 14 cores, including 10 performance cores and four efficiency cores. With up to 20 cores, the M4 Pro GPU is up to twice as powerful as the GPU in the M4. Both chips bring hardware-accelerated ray tracing to the Mac mini for the first time.

It also supports Apple Intelligence, with a Neural Engine that's 3x more powerful than the one in the Mac mini with M1 (as it ought to be). On-device Apple Intelligence models run at blazing speed. The M4 Pro supports up to 64GB of unified memory and 273GB/s of memory bandwidth, twice as much bandwidth as any AI PC chip, for accelerating AI workloads. It also supports Thunderbolt 5, which delivers up to 120 Gb/s data transfer speeds on the Mac mini, more than double the throughput of Thunderbolt 4.
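Those link rates translate directly into transfer times. A best-case sketch, ignoring protocol overhead (Thunderbolt 4 tops out at 40 Gb/s; the file size is an arbitrary example):

```python
def transfer_seconds(size_gb: float, link_gbps: float) -> float:
    """Ideal transfer time: gigabytes times 8 bits/byte, over the link rate."""
    return size_gb * 8 / link_gbps

# Moving a 100GB project over Thunderbolt 4 vs. Thunderbolt 5:
print(transfer_seconds(100, 40))             # 20.0 seconds
print(round(transfer_seconds(100, 120), 1))  # 6.7 seconds
```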

When compared to the Mac mini with Intel Core i7, Mac mini with M4 Pro:

  • Performs spreadsheet calculations up to 4x faster in Microsoft Excel.
  • Executes scene-edit detection up to 9.4x faster in Adobe Premiere Pro.
  • Transcribes with on-device AI speech-to-text up to 20x faster in MacWhisper.
  • Processes basecalling for DNA sequencing in Oxford Nanopore MinKNOW up to 26x faster.

When compared to the Mac mini with M2 Pro, Mac mini with M4 Pro:

  • Applies up to 1.8x more audio effect plugins in a Logic Pro project.
  • Renders motion graphics to RAM up to 2x faster in Motion.
  • Completes 3D renders up to 2.9x faster in Blender.

Ports Galore

Due to its reduced size, some ports had to move up front, which also adds a little convenience. Up front are two USB-C ports with USB 3 support, and an audio jack that supports high-impedance headphones.

The rest of the ports are at the back. They differ for the M4 and the M4 Pro.

M4: Three Thunderbolt 4 ports, with support for up to two 6K displays and one 5K display.

M4 Pro: Three Thunderbolt 5 ports, with support for up to three 6K displays at 60Hz, for a total of over 60 million pixels.

The remaining ports at the back are the same on both models: Gigabit Ethernet, configurable up to 10Gb Ethernet for faster networking speeds, and an HDMI port for easy connection to a TV or HDMI display without an adapter.
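The "over 60 million pixels" claim for the M4 Pro checks out with quick arithmetic, assuming "6K" means a 6016×3384 panel (the Pro Display XDR's resolution):

```python
# Three 6K displays at the Pro Display XDR's native resolution:
per_display = 6016 * 3384
total = 3 * per_display
print(f"{total:,} pixels")  # 61,074,432 pixels, just over 60 million
```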

First Carbon Neutral Mac

The Mac mini is made with over 50 percent recycled content overall. This includes 100 percent recycled aluminum in the enclosure, 100 percent recycled gold plating in all Apple-designed printed circuit boards, and 100 percent recycled rare earth elements in all magnets.

The electricity used to manufacture the Mac mini comes from 100 percent renewable sources. And, to address 100 percent of the electricity customers use to power the Mac mini, Apple has invested in clean energy projects around the world.

Apple prioritized lower-carbon modes of shipping, like ocean freight, to further reduce emissions from transportation.

Together, these actions have reduced the carbon footprint of Mac mini by over 80 percent. For the small amount of remaining emissions, Apple applies high-quality carbon credits from nature-based projects, like those generated by its innovative Restore Fund.

Price and availability

Mac mini with M4 starts at $599 (U.S.), or $499 (U.S.) for education. Mac mini with M4 Pro starts at $1,399 (U.S.), or $1,299 (U.S.) for education. Additional technical specifications are available at apple.com/mac-mini.

Customers can pre-order the new Mac mini with M4 and M4 Pro starting today, Tuesday, October 29, on apple.com/store and in the Apple Store app in 28 countries and regions, including the U.S. It will start arriving to customers, and in Apple Store locations and Apple Authorized Resellers, beginning Friday, November 8.

Lenovo accelerates production-ready enterprise AI with NVIDIA

From AI inferencing to gigawatt-scale AI factories

Lenovo has unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments.
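Time-to-first-token (TTFT) is simply the wall-clock delay between submitting a prompt and the first streamed token arriving. A minimal way to measure it; `fake_stream` is a stand-in generator, not a real model client:

```python
import time

def measure_ttft(token_stream):
    """Seconds of wall-clock time until the first token arrives."""
    start = time.monotonic()
    first = next(iter(token_stream))
    return first, time.monotonic() - start

def fake_stream():
    """Stand-in for a streaming LLM client."""
    time.sleep(0.05)  # simulated prefill and network delay
    yield "Hello"
    yield " world"

token, ttft = measure_ttft(fake_stream())
print(f"first token {token!r} after {ttft:.3f}s")  # roughly 0.05s here
```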

Building on the inferencing acceleration introduced at Lenovo Tech World, this next phase of hybrid AI execution expands the solutions from the device, to the data center, to gigawatt-scale AI cloud deployments.

This enables real-time decision-making, operational efficiency, and intelligent automation across industries at global scale. By speeding up AI deployment, the solutions boost productivity, agility, and innovation.

The development comes as AI is seen moving from training models to powering real-time decisions. Lenovo is prepared to address the demand for validated hybrid AI platforms built for production-scale inferencing, as organizations will need infrastructure that can support it.

In fact, Lenovo’s Hybrid AI Advantage with NVIDIA solutions are now delivering ROI in less than six months. The new inferencing-optimized ThinkSystem and ThinkEdge servers are being utilized for real-time inferencing across retail, manufacturing, healthcare, sports, and smart city scenarios.

The expanded portfolio includes:

  • two Lenovo Hybrid AI platforms, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition and Blackwell Ultra
  • Hybrid AI inferencing starter platform with RTX PRO 4500 Blackwell Server Edition
  • Lenovo ThinkAgile HX650a with Nutanix Enterprise AI and Nutanix Kubernetes Platform
  • Lenovo Hybrid AI platforms with Cloudian

Bringing inferencing directly to professionals

Lenovo and NVIDIA are bringing AI from development environments to real-world production at a global scale. This is thanks to new Lenovo AI inferencing platforms with NVIDIA Dynamo and NVIDIA NIM.

Meanwhile, Lenovo AI Cloud gigafactory platforms are powered by NVIDIA Vera Rubin NVL72. Industry-specific agentic AI solutions are also built with NVIDIA Blueprints and software.

On the client side, there are next-generation NVIDIA RTX PRO Blackwell-powered mobile and desktop workstations. These will be rolled out across the ThinkPad P14s Gen 7, ThinkPad P16s Gen 5, and ThinkPad P1 Gen 1 lineups.

ThinkStation P5 Gen 2 desktops, meanwhile, will get up to two RTX PRO 6000 Blackwell Max-Q GPUs. They will also have support for NVIDIA OpenShell.

For gigawatt-scale scenarios, the next-gen Vera Rubin platform accelerates deployment for hyperscale and sovereign AI cloud providers.

These fully liquid-cooled, rack-scale AI systems are engineered for faster deployment and dramatically improved token economics. They can achieve up to 10x higher throughput and up to 10x lower cost per token.
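"Token economics" reduces to cost per token: the same hourly infrastructure spend divided across throughput. A back-of-envelope sketch (all dollar and token figures are invented for illustration):

```python
def cost_per_million_tokens(dollars_per_hour: float, tokens_per_second: float) -> float:
    """Infrastructure cost amortized over generated tokens."""
    return dollars_per_hour / (tokens_per_second * 3600) * 1_000_000

baseline = cost_per_million_tokens(98.0, 1_000)   # hypothetical rack today
improved = cost_per_million_tokens(98.0, 10_000)  # same spend, 10x throughput
print(round(baseline / improved, 1))  # 10.0: 10x throughput cuts cost/token 10x
```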

CIPTA debuts AI GPU server, edge workstation at CloudFest 2026

Malaysia-made AI infrastructure

CIPTA Industrial Sdn Bhd steps onto the global stage with its European debut at CloudFest 2026, introducing high-density AI infrastructure and edge-ready systems built for modern enterprise workloads.

Held at Europa-Park in Rust, Germany from March 23 to 26, the event marks the company’s first major international showcase under its own brand. Backed by InWin Development Inc., CIPTA positions itself as a new-generation EMS provider focused on AI, cloud, and enterprise systems.

At Booth R41, the company is highlighting two key platforms: the RG658 PRO GPU server developed with Phison, and the cubePRO edge workstation created in collaboration with Accordance.

Built for scalable AI workloads

Leading the showcase is the RG658 PRO, a high-density GPU server designed to handle large-scale AI training and inference without pushing costs out of reach for enterprises.

The system supports up to eight high-performance GPUs and integrates Phison’s aiDAPTIV+ alongside its PASCARI enterprise SSD lineup. This combination aims to improve data throughput, reduce latency, and streamline AI pipelines.

Thermal performance is a key focus. The RG658 PRO uses a dual-chamber design to separate heat zones, paired with up to 14 high-speed PWM fans for sustained cooling under heavy workloads. Power delivery is handled by a 3+1 redundant configuration of 80PLUS Titanium PSUs, scaling up to 9600W.
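The "3+1 redundant ... up to 9600W" spec implies four supplies where any three must carry the full load. A quick sanity check (the 3200W per-unit rating is inferred from those figures, not stated by CIPTA):

```python
total_psus = 4                 # 3 active plus 1 redundant
active_psus = total_psus - 1   # full load must survive one failure
unit_watts = 3200              # inferred: 9600W across 3 active units
usable_watts = active_psus * unit_watts
print(usable_watts)  # 9600, matching the quoted maximum with one spare
```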

The result is a platform built to scale AI deployments on-site while maintaining efficiency and reliability.

Edge computing without downtime

Alongside its GPU server, CIPTA is introducing the cubePRO, a compact edge workstation designed for environments where uptime and data integrity are critical.

The system supports up to four PCIe slots for GPU configurations, making it suitable for AI workloads at the edge. It also features high-capacity multi-SSD setups and optimized airflow for continuous 24/7 operation.

Through its partnership with Accordance, the cubePRO integrates the Disk Array ARAID M500 solution, enabling high-availability storage and data protection. This ensures uninterrupted performance for use cases such as industrial systems, remote nodes, and enterprise branch deployments.

The focus here is clear: bring AI processing closer to where data is generated, without sacrificing reliability.

Strengthening Malaysia’s role in AI infrastructure

CIPTA’s debut also reflects a broader shift in global supply chains. Operating from Malaysia, the company offers end-to-end services—from concept to production—along with flexible manufacturing cycles and cost-efficient operations tailored for Southeast Asia and international markets.

With access to InWin’s server chassis ecosystem and infrastructure solutions, CIPTA combines global platform capabilities with localized integration. The goal is to help enterprises deploy AI and cloud infrastructure faster while diversifying their supply chain footprint.

As demand for AI systems continues to grow, CIPTA is positioning Malaysia as a key hub for scalable, production-ready infrastructure.

Visitors can find CIPTA at Booth R41 during CloudFest 2026 in Europa-Park, Rust, Germany.

AMD expands Ryzen AI Embedded P100 series lineup

Scalable, efficient AI compute for industrial, edge solutions

AMD has recently announced the expansion of its AMD Ryzen AI Embedded P100 Series processor lineup.

This enables scalable, power-efficient AI compute purpose-built for industrial and edge AI systems. Scenarios include factory automation, physical AI in mobile robotics, and other AI-driven edge applications.

With eight to 12 high-performance Zen 5 cores, AMD ROCm support, and up to 80 total system TOPS, the new x86 embedded APUs deliver up to:

  • 2x the CPU core count
  • 8x higher GPU compute
  • 36% higher system TOPS

This gives developers and system designers an expanded, scalable portfolio of power-efficient edge computing solutions. These processors support real-time AI, from vision to control to reasoning, and offer advanced graphics capabilities.

On a single chip, clients get up to 80 TOPS physical AI acceleration, AMD RDNA 3.5 graphics for real-time visualization, and an NPU based on the AMD XDNA 2 architecture.

Moreover, the processors can withstand industrial temperature ranges (-40°C to 105°C) and support continuous 24/7 operation across life cycles of up to 10 years, along with low-latency, power-efficient AI inference.

Real-life applications include intelligent factories, autonomous robots, and medical imaging devices. For instance, the processors can deliver CPU performance required for real-time inspection and process optimization.

For mobile robots, meanwhile, the processors can manage navigation, motion control, and route planning while the GPU processes multi-camera feeds for spatial awareness.

Furthermore, for 3D health imaging, the processors can power 3D imaging for ultrasounds, endoscopes, tissue classification, and tumor detection at the edge. This is done with models like U-Net, nnU-Net, and MONAI.

The processors then accelerate image-to-report workflows with MedSigLIP and support clinical reasoning and Q&A with Med-PaLM 2.
