Over the past decade, devices using the Universal Serial Bus (USB) standard have become part of our daily lives. From transferring data to charging our devices, this standard has continued to evolve over time, with USB Type-C being the latest version. Here’s why you should care about it.
First, here’s a little history
Chances are you’ve encountered devices that have a USB port, such as a smartphone or computer. But what exactly is the USB standard? Simply put, it’s a communication protocol that allows devices to talk to one another through a standardized port and connector. It’s essentially what a shared language is for humans.
When USB was first introduced to the market, the connectors used were known as USB Type-A. You’re likely familiar with this connector; it’s rectangular and can only be plugged in one orientation. To make a connection, a USB Type-A connector plugs into a USB Type-A port just like how an appliance gets connected to a wall outlet. This port usually resides on host devices such as computers and media players, while Type-A connectors are usually tied to peripherals such as keyboards or flash drives.
There are also USB Type-B connectors, and these usually go on the other end of a USB cable that plugs into devices like a smartphone. Due to the different sizes of external devices, there are a few different designs for Type-B connectors. Printers and scanners use the Standard-B port, older digital cameras and phones use the Mini-B port, and recent smartphones and tablets use the Micro-B port.
Specifications improved through the years
Aside from the type of connectors and ports, another integral part of the USB standard lies in its specifications. As with all specifications, these document the capabilities of the different USB versions.
The first-ever version of USB, USB 1.0, specified a transfer rate of up to 1.5Mbps (megabits per second), but this version never made it into consumer products. Instead, the first revision, USB 1.1, was released in 1998. It’s the first version to be widely adopted, capable of a maximum transfer rate of 12Mbps.
The next version, USB 2.0, was released in 2000. This version had a significantly higher transfer rate of up to 480Mbps. Both versions can also act as power sources, supplying 5V at 500mA (or 100mA for low-power devices).
Next up was USB 3.0, which was introduced in 2008 and defines a transfer rate of up to 5Gbps (gigabits per second), a tenfold increase over the previous version. This feat was achieved by doubling the pin count. To make them easier to spot, these new connectors and ports are usually colored blue, compared to the usual black or gray of USB 2.0 and below. USB 3.0 also improves power delivery with a rating of 5V, 900mA.
In 2013, USB was updated to version 3.1. This version doubles the bandwidth of USB 3.0, reaching up to 10Gbps. The big change comes in its power delivery specification, now providing up to 20V, 5A (100W), which is enough to power even notebooks. Apart from the higher power delivery, power direction is now bidirectional, meaning either the host or the peripheral can provide power; previously, only the host could.
Here’s a table of the different USB versions:
| Version | Bandwidth | Power Delivery | Connector Type |
|---|---|---|---|
| USB 1.0/1.1 | 1.5Mbps/12Mbps | 5V, 500mA | Type-A to Type-A, Type-A to Type-B |
| USB 2.0 | 480Mbps | 5V, 500mA | Type-A to Type-A, Type-A to Type-B |
| USB 3.0 | 5Gbps | 5V, 900mA | Type-A to Type-A, Type-A to Type-B |
| USB 3.1 | 10Gbps | 5V, up to 2A; 12V, up to 5A; 20V, up to 5A | Type-C to Type-C, Type-A to Type-C |
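To put these bandwidth figures in perspective, here’s a quick back-of-the-envelope calculation of how long a 1GB file would take to transfer at each version’s theoretical maximum. (Real-world speeds are lower due to protocol overhead and device limits; this is just the arithmetic.)

```python
# Rough transfer-time estimates for a 1GB (gigabyte) file at each
# USB version's theoretical maximum bandwidth, using decimal units.
# Real-world throughput is lower due to protocol overhead.

FILE_SIZE_BITS = 8 * 10**9  # 1 GB = 8 gigabits

speeds_mbps = {
    "USB 1.1": 12,
    "USB 2.0": 480,
    "USB 3.0": 5_000,
    "USB 3.1": 10_000,
}

for version, mbps in speeds_mbps.items():
    seconds = FILE_SIZE_BITS / (mbps * 10**6)
    print(f"{version}: {seconds:.1f} s")
```

At 12Mbps the transfer takes over 11 minutes; at USB 3.1’s 10Gbps it drops to under a second, which is why the version bumps matter more than the identical-looking ports suggest.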
Now that we’ve established the background of how USB has evolved from its initial release, there are two things to keep in mind: One, each new version of USB usually just bumps its transfer rate and power delivery, and two, there haven’t been any huge changes regarding the ports and connectors aside from the doubling of pin count when USB 3.0 was introduced. So, what’s next for USB?
USB Type-C isn’t your average connector
After USB 3.1 was announced, the USB Implementers Forum (USB-IF), the body that maintains the USB standards, followed it up with a new connector: USB Type-C. The new design promised to fix the age-old issue of orientation when plugging a connector into a port; there’s no “wrong” way to plug in a Type-C connector since it’s reversible. Another issue it addresses is how older connectors hinder the creation of thinner devices, which isn’t a problem for the Type-C connector’s slim profile.
From the looks of it, the Type-C connector could become the only connector you’ll ever need. It has high bandwidth for transferring 4K content and other large files, as well as power delivery that can power even most 15-inch notebooks. It’s also backwards compatible with previous USB versions, although you might have to use a Type-A-to-Type-C cable; these are becoming more common anyway.
Another big thing about USB Type-C is that it can support different protocols in its alternate mode. As of last year, Type-C ports are capable of outputting video via DisplayPort or HDMI, but you’ll have to use the necessary adapter and cable to do so. Intel’s Thunderbolt 3 technology is also listed as an alternate mode partner for USB Type-C. If you aren’t familiar with Thunderbolt, it’s basically a high-speed input/output (I/O) protocol that supports the transfer of both data and video on a single cable. Newer laptops have this built in.
Rapid adoption of the Type-C port has already begun, as seen on notebooks such as Chromebooks, Windows convertibles, and the latest Apple MacBook Pro line. Smartphones using the Type-C connector are also increasing in number.
Summing things up, the introduction of USB Type-C is a huge step forward when it comes to I/O protocols, as it can support almost everything a consumer would want for their gadgets: high-bandwidth data transfer, video output, and charging.
Basics of cryptocurrency: Risks and benefits
Should you buy in on the craze?
For a while, cryptocurrencies were the talk of the town across the internet. People all over the world saw the potential of what is essentially “virtual money,” starting a frenzy of investments, theories, and yes, memes, particularly around one of the more popular cryptocurrencies: Bitcoin.
But do we really understand the power these cryptocurrencies wield, and how that power can affect the whole world?
What are cryptocurrencies?
Cryptocurrencies are virtual currencies that are exchanged online without interference from anyone, not even governments. Transactions are secured by cryptography and recorded through a shared ledger system known as a blockchain.
No one regulates the exchanges and no one controls how much of a cryptocurrency should be out there, but the blockchain keeps all of the exchanges transparent and fair for everyone. Think of it as openly trading a slice of your pizza to a friend in exchange for money, with your other friends keeping track of the exchange. They make sure you actually have a slice to give, that your friend has the money he promised, and that these items really come from each of you and not from someone else.
Because numerous cryptocurrencies have been created all over the internet, a virtual market has formed where interested and invested people trade among themselves. Groups of people have also made an effort to produce their own cryptocurrency from their computers through cryptomining. Cryptomining, much like regular mining, means creating cryptocurrency tokens (an online version of coins) and adding them to the blockchain to be traded; it’s like printing your own money, except it’s done from a computer and shared online.
Take Bitcoin, for example: people who want to contribute to its blockchain and earn a share of the cryptocurrency do so through cryptomining. While mining is one of the primary ways of creating and gaining Bitcoin, it’s also one of the more expensive, since most mining setups require computers with the latest, fastest hardware. Anyone who wishes to mine would spend a ton of money on the necessary hardware alone, all just to mine their own Bitcoin.
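The mining idea can be sketched in a few lines of Python. This is a drastically simplified illustration, not how Bitcoin actually works: each block stores the hash of the previous one, and “mining” means searching for a nonce whose hash starts with a few zeros (a toy proof-of-work). All names here (`mine`, `DIFFICULTY`, the transaction strings) are illustrative.

```python
import hashlib
import json

DIFFICULTY = 3  # toy setting: number of leading hex zeros required

def block_hash(block: dict) -> str:
    """Hash a block deterministically by serializing it with sorted keys."""
    return hashlib.sha256(json.dumps(block, sort_keys=True).encode()).hexdigest()

def mine(prev_hash: str, transactions: list) -> dict:
    """Try nonces until the block's hash meets the difficulty target."""
    nonce = 0
    while True:
        block = {"prev": prev_hash, "txs": transactions, "nonce": nonce}
        if block_hash(block).startswith("0" * DIFFICULTY):
            return block  # a valid block; in a real system the miner earns a reward
        nonce += 1

genesis = mine("0" * 64, ["reward -> miner A"])
block2 = mine(block_hash(genesis), ["A pays B 1 token"])

# Tampering with genesis would change its hash and break the link stored
# in block2, which is why participants can verify the ledger wasn't altered.
assert block2["prev"] == block_hash(genesis)
```

The expensive part is the brute-force nonce search: raising the difficulty makes valid blocks exponentially harder to find, which is where the real-world hardware costs come from.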
Where did the hype come from?
Late 2017 (October to December) saw people get into a frenzy over cryptocurrencies and their perceived value. People not only became invested (pun intended) in learning about cryptocurrencies in general; searches for “Bitcoin” also spiked.
With more people understanding cryptocurrencies, investments in these virtual currencies (particularly Bitcoin) increased, expanding the market by a whopping 1,200 percent. Imagine getting 15,000 shares on your Facebook post about your dog within two days; that’s how quickly it blew up.
Another phenomenon that contributed to the rise of cryptocurrencies is the initial coin offering (ICO). An ICO is a public, unregulated way of raising funds in cryptocurrency and is widely used by startups to bypass the usual fundraising routes for capital; ICOs are much like crowdfunding (think Kickstarter or GoFundMe), except no one controls how the funding goes.
ICO funds are usually raised in Bitcoin; these are used to start projects or applications that people have ideas for but initially no money to operate. Because the internet is one of the fastest ways to develop and spread an idea, more and more people fund their projects through ICOs instead of getting bank loans or using their own money.
Effects of cryptocurrencies
The impact of cryptocurrencies plays out on a grand scale, especially in an economic context. People continually join the hype, so much so that it drives demand. Trading cryptocurrencies online is faster than trading in the stock market, and it’s easily accessible since it is unregulated.
As such, some governments are pushing for cryptocurrencies as a means of payment to add convenience for consumers, especially those with plans to go paperless with their money. The Indian government, for example, is learning to embrace Bitcoin within its monetary system after taking measures against tax evasion in black markets; it is also looking into regulating Bitcoin and other cryptocurrencies in the near future.
The risk of partaking in cryptocurrencies lies in their greatest feature: being an organic form of virtual currency. Because no entity controls cryptocurrencies, governments included, these virtual currencies are prone to online attacks, most commonly hacking, which can hamper their growth and reduce their value significantly. With a large number of people trading cryptocurrencies online, the risk of hacking rises, and when worse comes to worst, those people stand to lose a lot of money.
Another threat posed by that same feature is that bad actors can promise high returns to entice new investors to purchase tokens. Because there is no body regulating online trading, scammers take advantage of new investors who aren’t properly guided in the virtual currency market, even though the currencies themselves are secured by cryptography.
These schemes make the trade unfair, despite efforts to keep things equal for everyone. One example is the Bitcoin Savings and Trust Ponzi scheme, which started in 2011 and was shut down in 2012 after its operator, Trendon Shavers, was accused of raising around 700,000 BTC, all from new investors who didn’t know any better.
Cryptocurrencies at present
At the moment, Bitcoin remains the top-traded cryptocurrency in the market, valued at US$ 151.1 billion, in spite of its decline over the past few months. Countries are starting to either accept Bitcoin as part of their national economies or reject it along with its risks. Litecoin, dubbed an alternative to Bitcoin, has not performed as well over the past month, culminating in a so-far failing venture with digital wallet service Abra. Ethereum, one of Bitcoin’s closest competitors, has risen quickly thanks to its value to customers.
Some countries think cryptocurrencies can pull them back from total economic collapse and keep them afloat. Venezuela, for instance, released its own cryptocurrency, the Petro, after its national currency lost its value. Other struggling nations such as Iran and Turkey are looking to follow suit, but would need enough investment to acquire the equipment for creating their own cryptocurrencies.
Even with the possibility of countries going paperless with their currencies, some still fear the effects and have not wholeheartedly embraced cryptocurrencies. Despite the aforementioned efforts from the Indian government to shift to cryptocurrency-based payments, the Reserve Bank still considers engaging in cryptocurrencies illegal, to the point of barring banks from dealing in them. Reports of ransomware spreading in the United States and hijacking computers to mine Bitcoin also raise security concerns for investors.
Should you be worried?
Whether you are currently investing in cryptocurrencies or not, the risks of these virtual currencies will remain as long as people keep increasing their investments in them. The value of these cryptocurrencies continues to be unstable to this day, especially with the hype slowly dying down as people learn more and more about cryptocurrencies and their possible (and real) dangers.
For those who wish to invest in these cryptocurrencies, the call is to practice caution. Do some research, get to know the terminology used in the world of cryptocurrencies, look at news reports; with the internet at your disposal, it’s better to know what you’re getting into, should you want to get into it. And anyone who wishes to mine their own cryptocurrency might want to start saving up as early as now for all the hardware.
Should you be worried? Yes, to an extent, but it helps to be prepared.
How Google’s Android Go is different from Android One
Don’t let their names confuse you
When shopping for an Android phone, people often ask about the software version. Google keeps this easy to remember by naming versions after desserts, in alphabetical order. For example, Android 6.0 is Marshmallow, 7.0 is Nougat, and now we have Oreo for version 8.0, the latest publicly available version. Android P (no dessert name as of writing) is still in the works, so let’s not worry about it for now.
Most Android phones don’t get the latest version, though. Usually, only the expensive flagship phones receive them, leaving other affordable devices in the dust. This is where Android One comes into the picture.
What is Android One?
Android One is not new; it was announced by Google back in 2014 as a platform to bring a smooth Android experience to emerging markets. Android One was made for low-cost, low-spec devices that get major OS updates for two years and security updates for three, ship with the core Google services, and run a stock Android interface.
The introduction of Android One was a relief since manufacturers had been pre-installing bloatware on their phones. Android One phones were known as the “poor man’s Nexus” since they were priced around US$ 150 — you practically got the software support and lag-free performance of Nexus phones for cheap.
Android One came back to the spotlight with the announcement of the Xiaomi Mi A1 last year. Google’s newfound partnership with Xiaomi gave us hope for the Android One program. But, this was also when we noticed that Android One isn’t focused on affordable devices anymore.
Android One became a platform for manufacturers to give consumers a pure and fresh Android experience. Nokia made the smart move to make all their new phones — midrange or high-end — embrace Android One. We honestly hope others will follow.
So, what happens with cheap devices now? That’s what Android Go is for.
What about Android Go?
Android Go was originally announced in May 2017, although we didn’t get to see any device running it until Mobile World Congress 2018 in Barcelona. Now referred to as Android Oreo (Go edition), it picks up where Android One left off — well, kind of. It’s a stripped-down version of Android (specifically Android Oreo) built to run on devices with 1GB of memory or even less.
The goal now is to make really cheap devices. Expect them to be priced under US$ 100, or in some cases less than US$ 50. Examples of smartphones with Android Oreo (Go edition) are the ZTE Tempo Go, Nokia 1, and Alcatel 1X — all entry-level devices.
How can Google make sure Android works okay despite the limited hardware?
Every core Android app, from Gmail to Maps to Assistant, has been rebuilt and stripped of extra features. They’re streamlined and now labeled with “Go” (e.g. Gmail Go and Maps Go). To highlight apps that’ll work best with 1GB of memory or less, the Play Store for Android Oreo (Go edition) is tweaked to showcase such apps like Facebook Lite.
Since Android Oreo (Go edition) is designed for truly low-cost phones, it features data management tools for both internal storage and mobile data. To help with limited storage, Android Go is nearly half the size of “stock” Android, which means more room for apps, especially if the phone only has 8GB of storage. Go and Lite apps are also 50 percent smaller in file size; some need just 1MB to install. Moreover, the OS helps users save data by restricting background data access.
One thing to note about Android Oreo (Go edition) is that it has no promised updates, hence the specific Oreo label. Perhaps when Android P gets announced, we’ll then have Android P (Go edition).
Let’s be clear: Android Go is not necessarily a replacement for Android One.
Android One is a line of phones defined and managed by Google. Android Go is just software that can run on entry-level devices. Android Go stretches the original purpose of Android One by making sure that the Android OS can run even if your phone is very basic.
Android Go bridges the gap between feature phones and smartphones. Hopefully, if pricing is right, consumers in developing markets will just buy Android Go-powered phones instead of feature phones.
Where did the 18:9 ratio come from?
And should you get an 18:9 phone?
2018 will be the year of the notched, bezel-less display. Along with it comes a new number that not everyone knows what to make of yet — the 18:9 aspect ratio.
Smartphones are quickly adopting the new 18:9 standard. Since the ratio is still in its relative infancy, though, you’ve probably heard more about its predecessor, 16:9. Since the arrival of widescreen TVs, everyone has accepted 16:9 as the industry standard. With 18:9 on the horizon, should we care that our phones are getting taller?
What is aspect ratio?
First, let’s define what an aspect ratio is. The two numbers describe the shape of a device’s screen or a piece of media: how wide it is relative to how tall. By convention, the first number is the width and the second the height, so 16:9 means 16 units of width for every 9 of height. (Phone makers quote the longer side first, which is why an 18:9 phone looks tall when held in portrait.)
For example, the square photos on Instagram have an aspect ratio of 1:1. As the shape gets wider relative to its height, the numbers shift accordingly, from the old 4:3 to the ubiquitous 16:9 to the new 18:9.
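In practice, you can work out a screen’s aspect ratio from its pixel resolution by dividing both dimensions by their greatest common divisor. A quick sketch:

```python
from math import gcd

def aspect_ratio(width: int, height: int) -> str:
    """Reduce a pixel resolution to its simplest width:height ratio."""
    d = gcd(width, height)
    return f"{width // d}:{height // d}"

print(aspect_ratio(1920, 1080))  # 16:9 (Full HD)
print(aspect_ratio(2160, 1080))  # 2:1
print(aspect_ratio(2880, 1440))  # 2:1
```

Note that the “18:9” panels on phones reduce to 2:1; manufacturers write it as 18:9 so the comparison against the familiar 16:9 is obvious at a glance.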
Where did it come from?
The history of aspect ratios has always been intimately linked to the movie industry. The art of experimenting with aspect ratios began as cinematographers trying to perfect their film’s vision. With every new experiment, a new aspect ratio is born. Sometimes, their experiments become popular enough to become an industry standard.
The world’s first documented aspect ratio, 4:3, is also one of the world’s most popular ones. Remember those bulky CRT monitors you used to have? Those used the 4:3 ratio, which was adapted from old-school cinema. The first films used the film negative’s perforations (or holes) to measure their screens. In 4:3’s case, the screen was three perforations tall and four perforations wide.
When television was invented, the world of cinema faced tremendous competition. At the time, TVs also started off with a 4:3 display. Naturally, the homely convenience of a TV placed it at an advantage over the inconvenience of driving to a movie theater. The film industry had to compete.
Going head-to-head with TV, cinematographers invented wider aspect ratios. From this era came the 70mm film format, whose huge frames could fit more content on the screen. These wide formats became so popular that 70mm is still a standard in use today.
Our old friend, 16:9, arrived shortly after this boom. With aspect ratios popping up everywhere, there came a need for a standard everyone could follow. For this, Dr. Kerns Powers proposed the 16:9 format as a compromise between the industry’s most used ratios. With this format, you can watch either a TV show or a movie with minimal letterboxing (the black bars on the edges of your screen).
Its flexibility skyrocketed the ratio into ubiquity, and many screens adopted it as a result. Until 2017, almost every device used a 16:9 display. Even now, the ratio we’re most familiar with is 16:9.
Now, if 16:9 is so effective, where did 18:9 come from?
The birth of 18:9
Because of the massive popularity of 16:9, 4:3 displays became obsolete; today, finding a 4:3 device involves a trip to the nostalgia store. 16:9 began as a compromise, but since everyone adopted it, it became the norm. The feud now was between HDTVs and cinemas.
Hence, the gap between portable 16:9 screens and cinema’s 70mm still exists. The next logical step is to create a compromise between 16:9 and the current 70mm cinema standard.
In 1998, cinematographer Vittorio Storaro addressed this by inventing the Univisium film format, or what we now know as 18:9. Seeing the need for a new standard, he envisioned 18:9 as one that could make both cinemas and TVs happy.
At the time of its invention, only a handful of films used the new standard; in fact, most of them, like Exorcist: The Beginning, were films Storaro himself shot. Univisium would lay low for over a decade.
The rise of 18:9
In 2013, Univisium entered a renaissance with hit streaming show House of Cards, which was shot in the format. Having found a new home, Univisium crawled its way into other shows like Stranger Things and Star Trek: Discovery. In some circles, 18:9 was already known as the “streaming ratio.”
With the effectiveness of 18:9 proven, devices started adopting the new ratio. In 2017, the LG G6 and the Samsung Galaxy S8 launched with 18:9 in tow. (The S8 would use a slightly adjusted 18.5:9.)
After the trendsetters, more phones joined the trend. Google, OnePlus, and Huawei would soon adopt the new ratio. Even the Apple iPhone X uses a taller ratio: 19.5:9.
Should you get an 18:9 phone?
As the standard is still in its infancy, the usefulness of 18:9 isn’t fully apparent yet. However, the ratio already carries a flurry of benefits for early adopters.
Firstly, getting an 18:9 phone ensures future-proofing for a standard that’s quickly gaining traction. More shows are using 18:9, and even Hollywood is testing the waters: the 2015 film Jurassic World used the ratio.
Secondly, Univisium optimizes existing smartphone features like split-screen view. With more screen real estate, two apps can easily share the screen for a true multitasking experience. Even without that feature, phones can display more content on one screen without scrolling.
It’s likely that you won’t see the benefits until further down the line. The shows that use 18:9 are still too few to call it a true standard. When you watch a contemporary video on an 18:9 screen, you’ll still notice letterboxing. Despite their flexibility, some apps might even have trouble stretching to 18:9.
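The amount of letterboxing is easy to estimate: fit the content inside the screen while preserving its proportions, and whatever is left over becomes black bars. A small sketch:

```python
def unused_fraction(screen_w: float, screen_h: float,
                    content_w: float, content_h: float) -> float:
    """Fraction of the screen left black when content is fitted inside it
    without distortion."""
    screen_ar = screen_w / screen_h
    content_ar = content_w / content_h
    if content_ar < screen_ar:
        # Content is narrower than the screen: bars on the sides (pillarboxing).
        return 1 - content_ar / screen_ar
    # Content is wider than the screen: bars on top and bottom (letterboxing).
    return 1 - screen_ar / content_ar

# A 16:9 video on an 18:9 (2:1) screen leaves about 11% of it unused.
print(f"{unused_fraction(18, 9, 16, 9):.1%}")  # 11.1%
```

So even on a brand-new 18:9 phone, roughly a ninth of the screen goes dark when playing today’s 16:9 content, which is exactly the bar you’ll notice on the sides.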
The vision of 18:9 is still in the future. It will have its growing pains. Critics will even put it down as a fad. However, the new ratio shows a lot of promise in uniting content under one pleasing ratio.