Computers
24-inch iMac gets the M3 chip treatment
With 4.5K Retina Display
MacBook Pros aren’t the only Macs getting the M3 family of chips. Apple is also bringing its new hyper-powered silicon to the 24-inch iMac.
Supercharged by M3
The M3 chip features an 8-core CPU, up to a 10-core GPU, and support for up to 24GB of unified memory. It’s up to 2x faster than the previous generation with M1. Users will feel the speed and power of M3 in everything they do, from multitasking across everyday productivity apps to exploring creative passions like editing high-resolution photos or multiple streams of 4K video.
Featuring the next-generation GPU of M3, iMac supports hardware-accelerated mesh shading and ray tracing. This provides more accurate lighting, reflections, and shadows for extremely realistic gaming experiences. This also makes three-dimensional design and creation even faster.
With a 16-core Neural Engine and the latest media engine, iMac also delivers blazing machine learning and video performance.
The iMac delivers phenomenal productivity to small businesses, students, gamers, and everyday consumers. The M3-powered iMac enables the following:
- Safari, the world’s fastest browser, performs up to 30 percent faster.
- Productivity apps like Microsoft Excel perform up to 30 percent faster.
- Games load even faster, and users will experience up to 50 percent faster frame rates.
From content creation to video editing or photography, iMac is perfect for aspiring creatives.
- Edit and play back up to 12 streams of 4K video, which is 3x more than before.
- Produce video projects in Final Cut Pro and Adobe Premiere Pro up to 2x faster.
- Process photos in apps like Adobe Photoshop up to 2x faster.
Everything you love from the iMac is still here
Standout features from this Apple all-in-one are still present.
Expansive Retina display — 24-inch, 4.5K Retina display with 11.3 million pixels, a P3 wide colour gamut, over a billion colors, and 500 nits of brightness.
Advanced connectivity — Wi-Fi 6E, Bluetooth 5.3, four USB‑C ports (including two Thunderbolt ports), Gigabit Ethernet standard on select models, and support for an external display up to 6K.
Camera, mics, and speakers — 1080p FaceTime camera and studio-quality mics. It also has a six-speaker sound system with support for Spatial Audio when playing music or video with Dolby Atmos.
Standout design — Still in green, yellow, orange, pink, purple, blue, and silver. Still strikingly thin at just 11.5 millimetres.
macOS Sonoma
macOS Sonoma brings a rich set of features to the Mac for work and play. Watch our video.
Price and availability
The new 24-inch iMac with M3 is available to order Wednesday, November 1, on apple.com/store and in the Apple Store app in 27 countries and regions, including the U.S. It will begin arriving to customers, and will be available in Apple Store locations and at Apple Authorised Resellers, starting Tuesday, November 7.
iMac with 8-core GPU starts at US$1,299 / SG$1,899, and US$1,249 / SG$1,829 for education. It is available in green, pink, blue, and silver.
iMac with 10-core GPU starts at US$1,499 / SG$2,199, and US$1,399 / SG$2,049 for education. It is available in green, yellow, orange, pink, purple, blue, and silver.
Both feature an 8-core CPU, 8GB of unified memory, 256GB SSD, two Thunderbolt ports, two additional USB 3 ports, Magic Keyboard with Touch ID, Magic Mouse, and Gigabit Ethernet.
Computers
MINIX launches T4000 and T5000 Generative AI Mini Workstations
Powered by NVIDIA Jetson AGX Thor
MINIX has launched the T4000 and T5000 Generative AI Mini Workstations.
These powerful and space-saving solutions are built for professional generative AI, local large language model (LLM) inference, content creation, on-premise enterprise deployment, and lightweight model training.
The desktops are powered by the NVIDIA Jetson AGX Thor series modules with flagship Blackwell architecture. As such, they deliver exceptional on-device AI horsepower in a small desktop form factor.
The build features a durable metal and plastic chassis, plus a twin turbo intercooler for sustained performance.
The new offerings are engineered for professionals, developers, creators, and IT teams, redefining edge and on-premise AI without bulky server hardware.
At the core of the T4000 and T5000 is NVIDIA’s cutting-edge compute platform:
- T4000: Up to 1200 Sparse FP4 TFLOPs AI performance
- T5000: Up to 2070 Sparse FP4 TFLOPs AI performance
- Blackwell GPU with 1,536 to 2,560 cores and fifth-generation Tensor Cores
- Multi-Instance GPU (MIG) for parallel task efficiency
- NVIDIA PVA 3.0 dedicated vision processing engine
The workstations natively support smooth local inference for 7B-70B parameter LLMs. This makes private, low-latency AI accessible for businesses and creators.
In addition, the offerings combine high-core-count Arm processing with large, fast memory: up to 128GB of DDR5 on a 12-core or 14-core Arm Neoverse-V3AE 64-bit CPU.
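As a rough sanity check (our own back-of-envelope figures, not from MINIX’s spec sheet), a model’s weight footprint is approximately its parameter count times bytes per parameter, which is why 128GB of memory comfortably fits a 70B-parameter model at 4-bit precision but not at full 16-bit:

```python
def weights_gb(params_billion: float, bits_per_param: float) -> float:
    """Approximate weight memory in GB: parameters × bits / 8.
    Ignores KV cache and activation overhead, so real usage is higher."""
    return params_billion * 1e9 * bits_per_param / 8 / 1e9

# A 70B model at common precision levels (illustrative only)
for bits in (16, 8, 4):
    print(f"70B @ {bits}-bit ≈ {weights_gb(70, bits):.0f} GB")
# → 140 GB at 16-bit, 70 GB at 8-bit, 35 GB at 4-bit
```

In practice, inference runtimes also reserve memory for the KV cache, so the quantized figure is a lower bound rather than the full budget.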
Designed for professional workflows, the mini workstations also include enterprise-grade networking and flexible expansion:
- Dual 10GbE Ethernet
- Wi-Fi 6E
- Bluetooth 5.3
- 2x HDMI 2.1 TMDS (4K@60Hz)
- 4x USB 3.2 Gen 1 Type-A
- 1x USB 3.2 Gen 2 Type-C
- 24V DC input, up to 200W max power
Ideal use cases for the MINIX T4000 and T5000 include local LLM inference, generative AI creation, on-device AI computing, and lightweight model training.
Computers
Lenovo accelerates production-ready enterprise AI with NVIDIA
From AI inferencing to gigawatt-scale AI factories
Lenovo has unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments.
Building on the inferencing acceleration introduced at Lenovo Tech World, this next phase of hybrid AI execution expands the solutions from device to data center to gigawatt-scale AI cloud deployments.
This enables real-time decision-making, operational efficiency, and intelligent automation across industries at global scale. The solutions boost productivity, agility, and innovation by enabling faster AI deployment.
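For context on the TTFT metric mentioned above (this is our own illustration, not Lenovo’s benchmark code), time-to-first-token is simply the wall-clock delay between submitting a prompt and receiving the first generated token. It can be measured against any streaming generator; here `fake_stream` is a hypothetical stand-in for a model API:

```python
import time

def fake_stream(prompt: str):
    """Hypothetical stand-in for a streaming LLM API."""
    time.sleep(0.05)  # simulated prefill latency before the first token
    for tok in ["Hello", ",", " world"]:
        yield tok

def time_to_first_token(stream):
    """Return the first token and the seconds elapsed until it arrived."""
    start = time.perf_counter()
    first = next(stream)
    return first, time.perf_counter() - start

tok, ttft = time_to_first_token(fake_stream("Hi"))
print(f"first token {tok!r} after {ttft * 1000:.0f} ms")
```

Because TTFT is dominated by prompt prefill, it responds directly to the kind of inferencing acceleration these platforms target.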
The development comes as AI moves from training models to powering real-time decisions. As organizations need infrastructure to support production-scale inferencing, Lenovo is prepared to address the demand for validated hybrid AI platforms.
In fact, Lenovo’s Hybrid AI Advantage with NVIDIA solutions are now delivering ROI in less than six months. The new inferencing-optimized ThinkSystem and ThinkEdge servers are being utilized for real-time inferencing across retail, manufacturing, healthcare, sports, and smart city scenarios.
The expanded portfolio includes:
- two Lenovo Hybrid AI platforms, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition and Blackwell Ultra
- Hybrid AI inferencing starter platform with RTX PRO 4500 Blackwell Server Edition
- Lenovo ThinkAgile HX650a with Nutanix Enterprise AI and Nutanix Kubernetes Platform
- Lenovo Hybrid AI platforms with Cloudian
Bringing inferencing directly to professionals
Lenovo and NVIDIA are bringing AI from development environments to real-world production at a global scale. This is thanks to new Lenovo AI inferencing platforms with NVIDIA Dynamo and NVIDIA NIM.
Meanwhile, Lenovo AI Cloud gigafactory platforms are powered by NVIDIA Vera Rubin NVL72. Industry-specific agentic AI solutions are also built with NVIDIA Blueprints and software.
For consumers, there’s next-generation NVIDIA RTX Pro Blackwell-powered mobile and desktop workstations. These will be rolled out across the ThinkPad P14s Gen 7, ThinkPad P16s Gen 5, and ThinkPad P1 Gen 1 lineups.
ThinkStation P5 Gen 2 desktops, meanwhile, will get up to two RTX PRO 6000 Blackwell Max-Q GPUs. They will also have support for NVIDIA OpenShell.
For gigawatt-scale scenarios, the next-gen Vera Rubin platform accelerates deployment for hyperscale and sovereign AI cloud providers.
These fully liquid-cooled, rack-scale AI systems are engineered for faster deployment and dramatically improved token economics. They can achieve up to 10x higher throughput and up to 10x lower cost per token.
Computers
CIPTA debuts AI GPU server, edge workstation at CloudFest 2026
Malaysia-made AI infrastructure
CIPTA Industrial Sdn Bhd steps onto the global stage with its European debut at CloudFest 2026, introducing high-density AI infrastructure and edge-ready systems built for modern enterprise workloads.
Held at Europa-Park in Rust, Germany from March 23 to 26, the event marks the company’s first major international showcase under its own brand. Backed by InWin Development Inc., CIPTA positions itself as a new-generation EMS provider focused on AI, cloud, and enterprise systems.
At Booth R41, the company is highlighting two key platforms: the RG658 PRO GPU server developed with Phison, and the cubePRO edge workstation created in collaboration with Accordance.
Built for scalable AI workloads
Leading the showcase is the RG658 PRO, a high-density GPU server designed to handle large-scale AI training and inference without pushing costs out of reach for enterprises.
The system supports up to eight high-performance GPUs and integrates Phison’s aiDAPTIV technology alongside its PASCARI enterprise SSD lineup. This combination aims to improve data throughput, reduce latency, and streamline AI pipelines.
Thermal performance is a key focus. The RG658 PRO uses a dual-chamber design to separate heat zones, paired with up to 14 high-speed PWM fans for sustained cooling under heavy workloads. Power delivery is handled by a 3+1 redundant configuration of 80PLUS Titanium PSUs, scaling up to 9600W.
The result is a platform built to scale AI deployments on-site while maintaining efficiency and reliability.
Edge computing without downtime
Alongside its GPU server, CIPTA is introducing the cubePRO, a compact edge workstation designed for environments where uptime and data integrity are critical.
The system supports up to four PCIe slots for GPU configurations, making it suitable for AI workloads at the edge. It also features high-capacity multi-SSD setups and optimized airflow for continuous 24/7 operation.
Through its partnership with Accordance, the cubePRO integrates the Disk Array ARAID M500 solution, enabling high-availability storage and data protection. This ensures uninterrupted performance for use cases such as industrial systems, remote nodes, and enterprise branch deployments.
The focus here is clear: bring AI processing closer to where data is generated, without sacrificing reliability.
Strengthening Malaysia’s role in AI infrastructure
CIPTA’s debut also reflects a broader shift in global supply chains. Operating from Malaysia, the company offers end-to-end services—from concept to production—along with flexible manufacturing cycles and cost-efficient operations tailored for Southeast Asia and international markets.
With access to InWin’s server chassis ecosystem and infrastructure solutions, CIPTA combines global platform capabilities with localized integration. The goal is to help enterprises deploy AI and cloud infrastructure faster while diversifying their supply chain footprint.
As demand for AI systems continues to grow, CIPTA is positioning Malaysia as a key hub for scalable, production-ready infrastructure.
Visitors can find CIPTA at Booth R41 during CloudFest 2026 in Europa-Park, Rust, Germany.