Computers

Dell launches their improved, TV-inspired Inspiron AiOs

No need to plug your laptop into the TV anymore


All-in-one computers have certainly been a great alternative to bulky desktop setups over the years. Despite the smaller form factor, these machines can perform just as powerfully as their heavier, bulkier counterparts. Dell recognized this, and decided to go even further with their improved Inspiron AiOs for Computex 2019.

The improved Inspiron 24 5490 and Inspiron 27 7790 are the next-generation, sleeker versions of Dell’s all-in-one Inspiron lineup. Both models will sport the latest Intel processors and an optional NVIDIA MX110 graphics card. You can expect greater all-around performance, as well as decent gaming or graphics-intensive performance.

Both models also have an InfinityEdge FHD display, fit for a more theater-like experience. The display sits on top of a metallic stand and speaker grille, which acts as its main speaker system. An HDMI-in port is also available at the back of the device, if you want to plug a console into the display. With all of this, your all-in-one device will feel more like a TV than just a regular computer in any room.

One key difference between the two models is the integrated yet hidden webcam: the Inspiron 5490 comes with a 720p webcam, while the bigger Inspiron 7790 comes with a 1080p one. Both have an option for an IR facial recognition lens. The webcam is hidden behind the main display, giving you a full screen to work on with nothing getting in the way.

The Dell Inspiron 24 5490 starts at US$ 699, while the Dell Inspiron 27 7790 starts at US$ 949. Although there’s no pricing for Singapore yet, Dell says both models will arrive in mid-August.

Computers

Select GIGABYTE Intel motherboards now support HUDIMM

Offering budget-conscious builders more flexibility, accessibility


GIGABYTE announced a comprehensive BIOS update for its Intel 800, 700, and 600 series motherboards. These motherboards now support the new HUDIMM memory standard, enabling “One Sub-channel DDR5” technology.

The specification is designed to reduce the high retail costs associated with modern memory by utilizing a single 32-bit sub-channel rather than the standard dual-channel configuration.

This update primarily targets budget-conscious builders. Even system integrators, who have been restricted by DDR5 market pricing, should benefit.

HUDIMM provides a more accessible entry point for those building on modern Intel platforms by reducing the DRAM chip count per module, without requiring the premium investment typically demanded by high-bandwidth kits.

Beyond initial builds, the update facilitates unconventional upgrade paths for mainstream users. The firmware allows for asymmetric mixing.

In other words, a user can pair a low-cost 8 GB HUDIMM with an existing 16 GB standard module.

This configuration allows for a 24 GB total capacity, providing a middle-ground performance boost that utilizes three combined sub-channels.
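
As a rough sketch of the mixing math above (the helper name and the one-sub-channel-per-HUDIMM assumption are mine for illustration, not part of GIGABYTE's documentation):

```python
# Sketch of the asymmetric mixing described above.
# Assumption: a HUDIMM exposes one 32-bit DDR5 sub-channel, while a
# standard DDR5 module exposes two; names here are illustrative.

def mixed_config(modules):
    """Sum capacity (GB) and active DDR5 sub-channels for a list of
    (capacity_gb, sub_channels) module tuples."""
    capacity = sum(gb for gb, _ in modules)
    sub_channels = sum(sc for _, sc in modules)
    return capacity, sub_channels

# One 8 GB HUDIMM (1 sub-channel) + one 16 GB standard module (2 sub-channels)
cap, subs = mixed_config([(8, 1), (16, 2)])
print(cap, subs)  # 24 GB total across 3 sub-channels
```

This is only the capacity arithmetic; actual performance of a three-sub-channel configuration would depend on how the memory controller schedules across the asymmetric modules.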

GIGABYTE confirmed the BIOS firmware is available immediately via its official website. The company also stated that the update ensures seamless detection and stable operation of the new modules across its entire compatible Intel motherboard lineup.


Computers

MINIX launches T4000, T5000 Generative AI Mini WorkStations

For businesses and creators


MINIX has launched the T4000 and T5000 Generative AI Mini Workstations.

These powerful and space-saving solutions are built for professional generative AI, local large language model (LLM) inference, content creation, on-premise enterprise deployment, and lightweight model training.

The desktops are powered by the NVIDIA Jetson AGX Thor series modules with flagship Blackwell architecture. As such, they deliver exceptional on-device AI horsepower in a small desktop form factor.

The build features a durable metal and plastic chassis, plus a twin turbo intercooler for sustained performance.

The new offerings are engineered for professionals, developers, creators, and IT teams, redefining edge and on-premise AI without bulky server hardware.

At the core of the T4000 and T5000 is NVIDIA’s cutting-edge compute platform:

  • T4000: Up to 1200 Sparse FP4 TFLOPs AI performance
  • T5000: Up to 2070 Sparse FP4 TFLOPs AI performance
  • Blackwell GPU with 1536 to 2560 cores and fifth-generation Tensor Cores
  • Multi-Instance GPU (MIG) for parallel task efficiency
  • NVIDIA PVA 3.0 dedicated vision processing engine

The workstations natively support smooth local inference for 7B-70B parameter LLMs. This makes private, low-latency AI accessible for businesses and creators.

In addition, the offerings pair high-core-count Arm processing with large, fast memory: up to 128GB of DDR5 on a 12-core or 14-core Arm Neoverse-V3AE 64-bit CPU.
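
A back-of-envelope check helps explain why 7B to 70B models fit on this hardware (this estimate and the quantization assumption are mine, not MINIX's sizing guidance):

```python
# Rough weight-memory estimate for an LLM:
# footprint ≈ parameter_count × bytes_per_parameter.
# This ignores KV cache, activations, and runtime overhead.

def weight_memory_gb(params_billion, bytes_per_param):
    """Approximate weight footprint in GB."""
    return params_billion * 1e9 * bytes_per_param / 1e9

# FP4 quantization stores ~0.5 bytes per parameter.
for b in (7, 70):
    print(f"{b}B @ FP4 ≈ {weight_memory_gb(b, 0.5):.1f} GB")
```

At FP4, a 70B model needs roughly 35 GB for weights, comfortably within 128GB of DDR5, whereas the same model at FP16 (2 bytes per parameter) would need around 140 GB and not fit.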

Designed for professional workflows, the mini workstations also include enterprise-grade networking and flexible expansion:

  • Dual 10GbE Ethernet
  • Wi-Fi 6E
  • Bluetooth 5.3
  • 2x HDMI 2.1 TMDS (4K@60Hz)
  • 4x USB 3.2 Gen 1 Type-A
  • 1x USB 3.2 Gen 2 Type-C
  • 24V DC input, up to 200W max power

Ideal use cases for the MINIX T4000 and T5000 include local LLM inference, generative AI creation, on-device AI computing, and lightweight model training.


Computers

Lenovo accelerates production-ready enterprise AI with NVIDIA

From AI inferencing to gigawatt-scale AI factories


Lenovo has unveiled new Lenovo Hybrid AI Advantage with NVIDIA solutions designed to accelerate AI adoption, reduce time-to-first-token (TTFT), and deliver measurable business results across personal, enterprise, and cloud environments.
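
Time-to-first-token is simply the latency between submitting a request and receiving the first streamed token. A minimal way to measure it against any streaming generator (the fake stream below is a stand-in of my own, not Lenovo's or NVIDIA's stack):

```python
import time

def time_to_first_token(stream):
    """Return (ttft_seconds, tokens): latency until the first token
    arrives from a streaming generator, plus the collected output."""
    start = time.perf_counter()
    ttft = None
    tokens = []
    for tok in stream:
        if ttft is None:
            ttft = time.perf_counter() - start
        tokens.append(tok)
    return ttft, tokens

# Stand-in for a model's streaming API: first token arrives after a
# simulated prefill delay, later tokens follow immediately.
def fake_stream():
    time.sleep(0.05)
    yield "Hello"
    yield " world"

ttft, toks = time_to_first_token(fake_stream())
print(f"TTFT ≈ {ttft * 1000:.0f} ms, output: {''.join(toks)}")
```

The same wrapper works with any real streaming inference client, since it only requires an iterable of tokens.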

Building on the inferencing acceleration introduced at Lenovo Tech World, this next phase of Hybrid AI execution expands the solutions from device to data center to gigawatt-scale AI cloud deployments.

This enables real-time decision-making, operational efficiency, and intelligent automation across industries at global scale. The solutions boost productivity, agility, and innovation by enabling faster AI deployment.

The development comes as AI is seen moving from training models to powering real-time decisions. Lenovo is prepared to address the demand for validated hybrid AI platforms built for production-scale inferencing, as organizations will need infrastructure to support such workloads.

In fact, Lenovo’s Hybrid AI Advantage with NVIDIA solutions are now delivering ROI in less than six months. The new inferencing-optimized ThinkSystem and ThinkEdge servers are being utilized for real-time inferencing across retail, manufacturing, healthcare, sports, and smart city scenarios.

The expanded portfolio includes:

  • two Lenovo Hybrid AI platforms, featuring NVIDIA RTX PRO 6000 Blackwell Server Edition and Blackwell Ultra
  • Hybrid AI inferencing starter platform with RTX PRO 4500 Blackwell Server Edition
  • Lenovo ThinkAgile HX650a with Nutanix Enterprise AI and Nutanix Kubernetes Platform
  • Lenovo Hybrid AI platforms with Cloudian

Bringing inferencing directly to professionals

Lenovo and NVIDIA are bringing AI from development environments to real-world production at a global scale. This is thanks to new Lenovo AI inferencing platforms with NVIDIA Dynamo and NVIDIA NIM.

Meanwhile, Lenovo AI Cloud gigafactory platforms are powered by NVIDIA Vera Rubin NVL72. Industry-specific agentic AI solutions are also built with NVIDIA Blueprints and software.

For consumers, there are next-generation NVIDIA RTX Pro Blackwell-powered mobile and desktop workstations. These will be rolled out across the ThinkPad P14s Gen 7, ThinkPad P16s Gen 5, and ThinkPad P1 Gen 1 lineups.

ThinkStation P5 Gen 2 desktops, meanwhile, will get up to two RTX PRO 6000 Blackwell Max-Q GPUs. They will also have support for NVIDIA OpenShell.

For gigawatt-scale scenarios, the next-gen Vera Rubin platform accelerates deployment for hyperscale and sovereign AI cloud providers.

These fully liquid-cooled, rack-scale AI systems are engineered for faster deployment and dramatically improved token economics. They can achieve up to 10x higher throughput and up to 10x lower cost per token.

