Why is USB Type-C so important?

Over the past decade, devices using the Universal Serial Bus (USB) standard have become part of our daily lives. From transferring data to charging our devices, this standard has continued to evolve over time, with USB Type-C being the latest version. Here’s why you should care about it.

First, here’s a little history

Chances are you’ve encountered devices that have a USB port, such as a smartphone or computer. But what exactly is the USB standard? Simply put, it’s a communication protocol that lets devices talk to one another through a standardized port and connector. It’s basically what a common language is for humans.

Here’s an example of a USB hub that uses Type-A connectors (Image credit: Anker)

When USB was first introduced to the market, the connectors used were known as USB Type-A. You’re likely familiar with this connector; it’s rectangular and can only be plugged in a certain orientation. To be able to make a connection, a USB Type-A connector plugs into a USB Type-A port just like how an appliance gets connected to a wall outlet. This port usually resides on host devices such as computers and media players, while Type-A connectors are usually tied to peripherals such as keyboards or flash drives.

There are also USB Type-B connectors, and these usually go on the other end of a USB cable that plugs into devices like a smartphone. Due to the different sizes of external devices, there are a few different designs for Type-B connectors. Printers and scanners use the Standard-B port, older digital cameras and phones use the Mini-B port, and recent smartphones and tablets use the Micro-B port.

Samples of the different USB Type-B connectors. From left to right: Standard-B, Mini-B, and Micro-B (Image credit: Amazon)

Specifications improved through the years

Aside from the type of connectors and ports, another integral part of the USB standard lies in its specifications. As with all specifications, these document the capabilities of the different USB versions.

The first-ever version of USB, USB 1.0, specified a transfer rate of up to 1.5Mbps (megabits per second), but it never made it into consumer products. Instead, the first revision, USB 1.1, was released in 1998. It was the first version to be widely adopted and is capable of a maximum transfer rate of 12Mbps.

The next version, USB 2.0, was released in 2000 with a significantly higher transfer rate of up to 480Mbps. Both versions can also supply power at 5V: devices draw 100mA by default and can negotiate up to 500mA.

Next up was USB 3.0, which was introduced in 2008 and defines a transfer rate of up to 5Gbps (gigabits per second), roughly a tenfold increase over the previous version. This feat was achieved by doubling the pin count. To make them easier to spot, USB 3.0 connectors and ports are usually colored blue, compared to the usual black or gray of USB 2.0 and below. USB 3.0 also improves on power delivery with a rating of 5V, 900mA.

In 2013, USB was updated to version 3.1. This version doubles the bandwidth of USB 3.0, reaching up to 10Gbps. The bigger change comes in its power delivery specification, which now provides up to 20V, 5A (100W), enough to power even notebooks. Power is also bidirectional this time around, meaning either the host or the peripheral can provide it; before, only the host device could.

Here’s a table of the different USB versions:

Version       Bandwidth        Power Delivery                            Connector Type
USB 1.0/1.1   1.5Mbps/12Mbps   5V, 500mA                                 Type-A to Type-A, Type-A to Type-B
USB 2.0       480Mbps          5V, 500mA                                 Type-A to Type-A, Type-A to Type-B
USB 3.0       5Gbps            5V, 900mA                                 Type-A to Type-A, Type-A to Type-B
USB 3.1       10Gbps           5V up to 2A; 12V up to 5A; 20V up to 5A   Type-C to Type-C, Type-A to Type-C

Now that we’ve established the background of how USB has evolved from its initial release, there are two things to keep in mind: One, each new version of USB usually just bumps its transfer rate and power delivery, and two, there haven’t been any huge changes regarding the ports and connectors aside from the doubling of pin count when USB 3.0 was introduced. So, what’s next for USB?
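To put those bandwidth figures in perspective, here’s a minimal back-of-the-envelope sketch in Python that estimates how long a 1GB file would take to transfer at each version’s theoretical maximum. These are spec ceilings; real-world throughput is lower because of protocol overhead.

```python
# Rough transfer-time comparison across USB versions,
# using each spec's theoretical maximum bandwidth.
SPEEDS_MBPS = {
    "USB 1.1": 12,
    "USB 2.0": 480,
    "USB 3.0": 5_000,
    "USB 3.1": 10_000,
}

FILE_SIZE_GB = 1  # example payload

for version, mbps in SPEEDS_MBPS.items():
    # 1GB = 8 gigabits = 8,000 megabits
    seconds = FILE_SIZE_GB * 8_000 / mbps
    print(f"{version}: ~{seconds:,.1f} seconds")
```

At those rates, the same file drops from roughly 11 minutes on USB 1.1 to under a second on USB 3.1.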

USB Type-C isn’t your average connector

After USB 3.1 was announced, the USB Implementers Forum (USB-IF), the group that maintains the USB standards, followed it up with a new connector: USB Type-C. The new design promised to fix the age-old issue of orientation when plugging a connector into a port; there’s no “wrong” way to plug in a Type-C connector since it’s reversible. Another issue it addresses is how older connectors hinder the creation of thinner devices, something the Type-C connector’s slim profile solves.

Here’s what a USB Type-C connector looks like. Left: Type-A to Type-C cable, Right: Type-C to Type-C cable (Image credit: Belkin)

From the looks of it, the Type-C connector could become the only connector you’ll ever need on a device. It has high bandwidth for transferring 4K content and other large files, as well as power delivery that can run even most 15-inch notebooks. It’s also backwards compatible with previous USB versions, although you might have to use a Type-A-to-Type-C cable, which is becoming more common anyway.

Another big thing about USB Type-C is that it can support different protocols through its Alternate Mode. As of last year, Type-C ports are capable of outputting video via DisplayPort or HDMI, though you’ll have to use the appropriate adapter and cable to do so. Intel’s Thunderbolt 3 technology is also an Alternate Mode partner for USB Type-C. If you aren’t familiar with Thunderbolt, it’s basically a high-speed input/output (I/O) protocol that carries both data and video over a single cable. Newer laptops have it built in.

A USB Type-C Thunderbolt 3 port (with compatible dock/adapter) does everything you’ll ever need when it comes to I/O ports (Image credit: Intel)

Rapid adoption of the Type-C port has already begun, as seen on notebooks such as Chromebooks, Windows convertibles, and the latest Apple MacBook Pro line. Smartphones using the Type-C connector are also increasing in number.

Summing things up, the introduction of USB Type-C is a huge step forward when it comes to I/O protocols, as it can support almost everything a consumer would want for their gadgets: high-bandwidth data transfer, video output, and charging.

SEE ALSO: SSD and HDD: What’s the difference?

Explainers

Play more, charge less: Huawei’s GPU Turbo explained

Better visuals without sacrificing battery life?

Aside from using your phone to call, text, and take pictures, you now have the power to access the internet and play games with others. Instead of limiting yourself to Snake and Bounce, you now have online games such as PUBG Mobile and Mobile Legends.

There’s just one problem: Not all games are playable across all smartphones. With the gaming world now expanding to the mobile scene, you need a smartphone with the latest hardware and software, and one that can handle long hours of gaming as well. It’s an intense fight over what matters to you the most: performance versus efficiency.

Fortunately, the choice shouldn’t be very difficult thanks to Huawei’s latest mobile advancement: GPU Turbo.

What’s GPU Turbo all about?

GPU Turbo processing technology aims to enhance the gaming experience across Huawei’s smartphones. Executives promise that the tech will boost gaming performance while maintaining the phone’s efficiency. This means you can play games on your smartphone without sacrificing much — like battery life, for example.

The technology looks at the graphical capabilities of your phone and adjusts them accordingly, especially for gaming. With GPU Turbo, workloads such as 4D gaming and both augmented and virtual reality (AR and VR) are taken care of. Huawei claims that GPU Turbo boosts graphical performance by up to 60 percent and can let even budget phones run graphically intensive games.

Apart from boosting visual performance, GPU Turbo also helps smartphones maximize efficiency. One common problem across all smartphones is that the battery depletes quickly while you’re gaming. Pair that with an ineffective cooling solution inside the phone, and gaming becomes punishing for the device. Huawei says GPU Turbo extends your phone’s battery life by 30 percent and keeps your device relatively cool while playing.

Implications for Huawei smartphones

One of the key insights Huawei executives received was about consumer demand for a smoother mobile gaming experience. Because people want to play the latest mobile games seamlessly, they would want to buy smartphones that are capable of doing so. Graphical performance should not suffer in the slightest, especially for multiplayer online battle arena (MOBA) and battle royale games.

The fun doesn’t stop there: With Huawei smartphones supporting GPU Turbo, other technologies such as AR and VR get a chance to truly shine. Huawei executives claim that GPU Turbo opens up opportunities for innovations like online shopping through AR or telemedicine through VR. At this rate, in theory, you could have a truly complete smartphone experience on your hands.

As of writing, GPU Turbo ships first on Huawei’s latest smartphones, like the new Huawei Nova 3 series. Older smartphones running the latest EMUI will receive the upgrade as well. (View the list here.)

If you’ve been dying to have the full mobile gaming experience, GPU Turbo is definitely something to watch out for.

Illustrations by MJ Jucutan

The future for all games: Ray Tracing explained

The magic behind NVIDIA’s RTX series

NVIDIA seems to have struck gold with the announcement of its brand-new graphics cards for gamers. These cards are set to take gaming to new heights thanks to the technologies behind them. The company calls them the RTX series, and the biggest feature inside these graphics cards is real-time ray tracing technology.

But what is ray tracing? What about this technology made NVIDIA want to produce a new line of cards to house it? Will it really change the gaming experience as a whole?

Ray tracing in a nutshell

Ray tracing is a rendering technique that simulates rays of light to draw objects on your screen. These rays determine the colors, reflections, refractions, and shadows that objects possess. The results are more accurate, more realistic images, because every ray is traced back to a light source in the scene. To put it simply, ray tracing renders a scene the way your eyes see the world: as light bouncing off objects.
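Here’s a minimal sketch of that idea in Python. It shoots one ray per “pixel,” tests it against a single sphere, and shades each hit by tracing back toward a light source; the scene and names are illustrative, not any particular engine’s API.

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Distance along the ray to the nearest sphere hit, or None on a miss."""
    oc = [o - c for o, c in zip(origin, center)]
    b = 2 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * c  # direction is normalized, so the quadratic's a == 1
    if disc < 0:
        return None
    t = (-b - math.sqrt(disc)) / 2
    return t if t > 0 else None

light = (5, 5, -5)             # a single point light
center, radius = (0, 0, 3), 1  # one sphere in front of the camera

for y in range(-2, 3):
    row = ""
    for x in range(-5, 6):
        # Shoot a ray from the camera at the origin through this "pixel".
        d = (x * 0.2, y * 0.2, 1.0)
        mag = math.sqrt(sum(v * v for v in d))
        d = tuple(v / mag for v in d)
        t = ray_sphere((0, 0, 0), d, center, radius)
        if t is None:
            row += "."             # ray escaped the scene
            continue
        # Trace back toward the light: shade by the angle between the
        # surface normal at the hit point and the direction to the light.
        hit = tuple(v * t for v in d)
        normal = tuple((h - c) / radius for h, c in zip(hit, center))
        to_light = tuple(lv - h for lv, h in zip(light, hit))
        lmag = math.sqrt(sum(v * v for v in to_light))
        shade = max(0.0, sum(n * lv / lmag for n, lv in zip(normal, to_light)))
        row += "#" if shade > 0.5 else "+"
    print(row)
```

Run it and you get a tiny ASCII-shaded sphere. Real ray tracing performs this same test for millions of rays per frame, with reflections and refractions spawning even more rays.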

The technology isn’t that new; in fact, it has been used since the 1960s for movies and television. Ray tracing is the main technique behind CGI, where most special effects are rendered to recreate realistic backgrounds with accurate coloring. In 2008, Intel showed a demonstration of the game Enemy Territory: Quake Wars running with ray tracing powered by a Xeon Tigerton processor. Today, there are applications that let you render video effects using ray tracing, such as Adobe’s After Effects.

Shifting from rasterization to ray tracing

For the longest time, NVIDIA has worked with multiple companies to produce game-grade graphics cards for consumers. The main technology behind these graphics cards is rasterization. In a nutshell, rasterization converts the shapes that make up a scene directly into pixels on the screen; those pixels are then given colors that mimic the reflections and shadows such objects would produce. The technique doesn’t use up much processing power relative to the image quality it delivers, which lets gamers play at smooth frame rates while still getting convincing visuals.
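For contrast with the ray tracing sketch above, here’s a minimal sketch of rasterization in Python: a triangle is turned straight into pixels by testing which grid cells fall inside its three edges. No rays and no light sources are involved; the names are illustrative.

```python
def edge(ax, ay, bx, by, px, py):
    """Signed area test: same sign for all points on one side of edge a -> b."""
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax)

# One triangle in "screen" coordinates, vertices in counter-clockwise order.
v0, v1, v2 = (2, 1), (17, 4), (8, 10)

for y in range(12):
    row = ""
    for x in range(20):
        # A pixel is inside if it sits on the same side of all three edges.
        inside = (edge(*v0, *v1, x, y) >= 0 and
                  edge(*v1, *v2, x, y) >= 0 and
                  edge(*v2, *v0, x, y) >= 0)
        row += "#" if inside else "."
    print(row)
```

GPUs run this kind of edge test in hardware, massively in parallel, for every triangle in a scene, which is why rasterization is so much cheaper than tracing rays.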

However, NVIDIA wanted to take things up a notch for the next generation of graphics cards aimed at the modern-day gamer. The company wanted to improve the gaming experience by any means, and so it brought ray tracing to its graphics cards. With ray tracing, colors and lighting are more accurate, allowing for a more immersive gaming experience, at least as the company explains it. This is clearly seen in their exclusive gameplay demo of Shadow of the Tomb Raider.

This technology became the backbone of the new RTX graphics cards, which put much emphasis on real-time interactions within games. The RTX cards pack greater memory capacity and processing speeds to keep up with the demands of the technology inside them. With NVIDIA’s Turing architecture, the new cards make ray tracing processes much faster while using fewer computations.

Risks of going for ray tracing

Of course, with new technologies come risks to consider before buying into them. First, ray tracing relies on an enormous number of calculations to generate accurate images on your screen. In the past, computers and graphics cards simply weren’t powerful enough to produce quality ray-traced images immediately; rendering them could take days, weeks, or even months, as seen in movies that lean heavily on CGI.

When ray tracing is applied to modern games, graphics cards tend to struggle. The computational requirement can be more than the graphics card’s video memory (VRAM) can handle. Of course, it depends on how much VRAM your graphics card includes; even then, ray tracing consumes more energy than the card is optimized for. These are the risks NVIDIA is trying to address with its new RTX cards.

There is still a lot of work needed to prove that ray tracing is the future of gaming. While the technology promises the most immersive gaming experience yet, it also comes at a heavy cost, and not just to your wallet. Let’s hope the RTX series is worth the wait.

The importance of artificial intelligence in smartphones

Is this still the future of technology?

Have you ever wondered what smartphone brands actually mean when they tell you that their cameras use artificial intelligence (AI)?

With AI now becoming a significant part of our daily lives, we start to look into how this technology found its way into the market, and see whether or not AI truly is the future.

What is Artificial Intelligence?

Artificial intelligence, or AI for short, is not exactly a new concept in the world of technology. What it basically means is that machines are given human-like intelligence through systems of information and programs or applications built into them.

Machines with built-in AI can perform a variety of tasks mostly associated with human intuition, like problem solving, gathering knowledge, and logical reasoning, among others. It’s basically about making machines smarter and, in a way, more human-like.

Illustrations by Kimchi Lee

AI has been a part of many devices over the past few years, from smart homes to applications on your smartphone. Companies like Amazon and Google have come up with smart home assistants, such as Alexa and Google Assistant, that help people with their day-to-day tasks.

Businesses with an online presence through company websites have also integrated chatbots and online assistance bots that automatically answer customer concerns based on the information given.

How AI found its way to smartphones

Artificial intelligence was often associated with creating robots that perform human-like functions at a much faster, more efficient rate, as heavily portrayed in mainstream media. Through AI, these machines learn more about the environment they’re in and carefully adjust to meet the needs of their users. That process is called machine learning.

Nowadays, machine learning isn’t just limited to robots that learn what people are doing; it has branched out to what people are thinking, inquiring about, and saying to one another. AI has slowly made its way into devices that are much more accessible to us, primarily through the internet.

Machine learning is now incorporated into smart home devices, online video streaming services like YouTube and Netflix, and social media websites such as Facebook and Twitter. Basically, the technology behind AI constantly learns more about people, their interests, and their day-to-day activities.

The newest members of the AI-integrated club are smartphones themselves. Companies like Apple and Google have integrated AI into the processors of their flagship phones, the iPhone and Pixel series, respectively. Early 2018 saw most Android smartphone brands integrate AI within their phones to enhance the user experience even further; Huawei and ASUS released new flagship lines whose cameras use AI to respond more intelligently to the environment around the user.

It’s quite possible that smartphones could very well lead the transition of all devices towards machine learning and AI in the near future.

Smartphones with AI

Two companies in particular have integrated AI into their smartphones to provide enhanced user experiences in a totally different way. One of them is ASUS, with its recently released ZenFone 5 series of smartphones with AI-powered cameras. Its shooters focus primarily on taking better photos by adjusting to the environment around you. The ZenFone 5’s AI Photo Learning lets the phone learn how you like your photos and adjust the settings accordingly so you don’t have to.

Apart from its cameras, the ZenFone 5 series uses AI to boost overall performance. The base model is powered by a Qualcomm Snapdragon 636 processor, which enables the full utilization of the phone’s AI features. The AI Boost technology gives the handset an instant jump in performance when running heavy-duty applications and games. AI in the ZenFone 5 also predicts which apps you’ll use next and learns which apps you use regularly.

Another company that integrates AI in its smartphones is Huawei, with the Mate 10 and P20 series. They’re powered by the Kirin 970 processor, which boosts overall performance and efficiency using integrated AI. This means the phones adjust to how you use them and maximize performance every step of the way. They also come with Huawei’s EMUI 8.0 and its own set of AI features, such as Smart Screen for multitasking and real-time translation during calls.

Much like the ZenFone 5, the Huawei Mate 10 and P20 phones also have cameras powered by AI. This powers the phones’ dual-lens camera setups for scene and object recognition, automatically adjusting the camera’s settings to suit the situation. Huawei also emphasizes producing professional-grade photos by allowing the AI to adjust the camera’s focus on the subject. That way, you are able to achieve a perfect-looking selfie or portrait — without the need to manually adjust the settings for a long period of time.

What we get from AI

Artificial intelligence opens up many opportunities for technology to process thoughts and insights the way humans do. AI lets machines learn more about us and tailor-fit their processes and capabilities to match, from search engines to smarter applications. When treated properly, AI can deliver better, more efficient ways of dealing with the problems people face almost every single day.

The downside is that AI has the potential to invade one’s privacy, especially through one’s smartphone. Because the technology is constantly learning more about its user through his or her devices, the door opens for that data to be retrieved by, quite literally, anyone on the internet.

And because people nowadays check their smartphones almost every chance they get, those who truly know how AI works could abuse that knowledge for their own personal gain, whether through malicious activities like cyberstalking and cyberbullying, or online attacks like hacking and phishing.

The future of AI

2018 is looking like the year of AI with the unveiling of smartphones and revamped smart devices to upgrade the user experience. The possibilities for artificial intelligence are endless, given its wide usage across any available platform.

For now, it’s intelligent cameras on smartphones that adjust settings for you, saving you the hassle of getting the perfect image. Sometime in the future, AI could very well exist even in a gaming controller or a mirrorless camera, adjusting to your needs. However, we have to be aware of the dangers of relying on AI too fully, as it can also enable careless behavior of our own.

Indeed, the future is bright for artificial intelligence — as long as we use it for the right reasons.
