Yesterday, Google announced the latest “feature drop” for its Pixel line of Android phones. It’s part of an effort to get people to realize that the Pixel gets software updates ahead of other Android phones and that some of the features it receives stay exclusive to the Pixel. And yesterday’s “drop” epitomizes so many things that are good (and bad) about Google’s hardware efforts, so I wanted to dwell on it for a moment today.
First and foremost, saying that these features were “released” yesterday is only vaguely accurate. Instead, the rollout began yesterday and should theoretically be completed for all users in a couple of weeks. That’s significantly better than the last (and first) feature drop, which trickled out to Pixel owners much more slowly.
Google has sensible reasons for not distributing its updates to everybody on day one, but the staggered rollout undercuts whatever excitement people feel when a feature is announced, since there's an indeterminate wait before it actually arrives. I covered all this in the newsletter last December with the first feature drop.
You are reading Processor, a newsletter about computers by Dieter Bohn.
So let’s look at what’s new in this month’s update, courtesy of this rundown from Chris Welch. There are some basic quality-of-life (to borrow a term from video games) tweaks: dark mode can be scheduled, adaptive brightness has been improved, and you can set up little actions based on which Wi-Fi networks you’re connected to. There’s a new gesture for the Pixel 4’s Motion Sense chip, new emoji, and new AR effects for Duo video chats. All fine.
But there was one line on Google’s support page for the update that caught my eye (emphasis mine): “In addition to long press, you can now firmly press to get more help from your apps more quickly.”
“Firmly press” sets off alarm bells because it sounds a lot like the iPhone’s 3D Touch, which enables different actions depending on how hard you press on the touchscreen. It was a beloved feature for some people because it gave faster access to the cursor mode on the iPhone’s keyboard (I think long-pressing the space bar works fine for that, but I get that people love it). It’s also gone on the latest versions of the iPhone — Apple has seemingly abandoned it because the hardware to support it was too expensive/thick/complex/finicky/whatever.
But now, it seems that Google has done the same thing for the touchscreen that it does with the camera: use its software algorithms to make commodity parts do something special. That is a very Googley thing to do, but not quite as Googley as the fact that there was virtually no information about this feature to be found anywhere on the internet beyond a speculative note over at XDA Developers.
After a few hours of back and forth, I finally got more details from Google. Here’s what this feature does, according to Google:
Long Press currently works in a select set of apps and system user interfaces such as the app Launcher, Photos, and Drive. This update accelerates the press to bring up more options faster. We also plan to expand its applications to more first party apps in the near future.
Essentially, this new feature lets you press harder to bring up long-press menus faster. In fact, Google's documentation for Android's Deep Press API explicitly says it should never trigger a new action; it should only be a faster way to execute a long press. And why does it only work in certain apps? Because a lot of Android developers aren't using the standard APIs for long-press actions. Because Android.
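That contract — a firm press may only accelerate the existing long-press action, never trigger a different one — can be sketched in a few lines. This is a toy model, not Android's actual API; the class, callback, and the 500 ms timeout are illustrative stand-ins:

```python
# Toy sketch of the Deep Press contract: the firm press and the timed
# long press both invoke the SAME callback -- one path is just faster.

LONG_PRESS_TIMEOUT_MS = 500  # illustrative value, not Android's constant

class PressTracker:
    """Fires a single long-press callback, either when the usual timeout
    elapses or early, as soon as a touch frame is classified as a deep press."""

    def __init__(self, on_long_press):
        self.on_long_press = on_long_press
        self.fired = False

    def on_touch_frame(self, elapsed_ms, is_deep_press):
        if self.fired:
            return
        # Same action either way: deep press is only a shortcut in time.
        if is_deep_press or elapsed_ms >= LONG_PRESS_TIMEOUT_MS:
            self.fired = True
            self.on_long_press()
```

The key design point is that there is no second callback to register: an app that handles long presses through the standard path gets the speedup for free, which is also why apps using custom long-press code don't.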
Okay, but how does it work? It turns out my hunch was correct: Google has figured out how to use machine learning algorithms to detect a firm press, something Apple had to use hardware for.
Tap your screen right now, and think about how much of your fingertip is getting registered by the capacitive sensors. Then press hard and note how your finger smushes down on the screen — more gets registered. The machine learning comes in because Google needs to model thousands of finger sizes and shapes and it also measures how much changes over a short period of time to determine how hard you’re pressing. The rate of smush, if you will.
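To make the "rate of smush" idea concrete, here's a crude sketch. Google's real system is a learned model trained across thousands of finger shapes; this toy version just measures how fast the reported contact area grows and applies a made-up threshold, which is the intuition, not the implementation:

```python
# Toy "rate of smush" detector. The threshold and units are invented for
# illustration; the real detection is a trained machine-learning model.

def smush_rate(areas, dt_ms):
    """Average growth in touch contact area per millisecond, given a list of
    area samples taken dt_ms apart."""
    if len(areas) < 2:
        return 0.0
    return (areas[-1] - areas[0]) / (dt_ms * (len(areas) - 1))

FIRM_RATE_THRESHOLD = 0.8  # area units per ms; illustrative value

def classify_press(areas, dt_ms=8):
    """A firm press smushes the fingertip onto the glass quickly, so the
    contact area grows fast; a normal press grows slowly."""
    return "firm" if smush_rate(areas, dt_ms) > FIRM_RATE_THRESHOLD else "normal"
```

A rapidly growing contact footprint (say, area samples of 10, 22, 40, 70) classifies as firm, while a nearly flat one classifies as normal. The hard part — the reason Google needs machine learning rather than a threshold — is that "fast growth" looks completely different for a child's pinky than for an adult's thumb.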
I have no idea if Google’s machine-learning smush detection algorithms are as precise as 3D Touch on the iPhone, but since they’re just being used for faster detection of long presses I guess it doesn’t matter too much yet. Someday, though, maybe the Pixel could start doing things that the iPhone used to be able to do.
(For the record, Apple’s GarageBand has a sort of software-based detector for how hard you are pressing, but it uses the accelerometer.)
So Google made long pressing take not so long. It also finally brought some updates to Google Pay. Specifically, it figured out that people might want to switch between cards more easily, so it added a shortcut that brings them up when you long-press the power button. It's a little catch-up to Apple Wallet.
Getting passes of all kinds into Apple Wallet is easy and common — essentially every airline gives you a button to do so. It's so much better than Android's method, which requires opening the app or saving a screenshot and then hoping you can find it quickly later. But integration with Google Pay has been lacking. Google announced boarding pass support a year and a half ago, and virtually no airline uses it. (As an aside, I'd prefer it be called Google Wallet, but that brand was already used up, so they call it Google Pay. Because Google.)
This annoyance has been going on for years, but now there’s finally an answer for Pixel users that is very Google. Instead of convincing partners to also add a Google Pay button, Google lets you take a screenshot of your boarding pass in your airline’s app. When the screenshot system sees a QR code, the notification gives you a button to save the boarding pass in your Google Pay wallet. It also lets the Google Assistant know you care about that flight so it will send you updates.
Both the screenshot boarding pass and the firm press detectors share a common bond: they are very clever software solutions that take unique advantage of Google’s machine learning strengths to solve problems. They are also problems that, bluntly, Apple solved via more traditional methods before Google.
Still, credit where it’s due, Google is catching up and, in some cases, innovating. The automatic car crash detection looks like it could be a literal lifesaver, for example. And in everyday things, Google is making progress on fixing Android’s little annoyances piece by piece and doing so throughout the entire year instead of in one giant operating system update. Now if it could just do a better job distributing both kinds of updates to non-Pixel owners, we’d be cooking with gas.
If you’re on the hunt for the best e-reader that won’t cost you too much, your search ends with the latest Amazon Kindle Paperwhite. It usually costs $130, but it’s discounted to $85 right now at Amazon. Compared to the $60 standard Kindle, this e-reader is worth considering if you want one that’s waterproof and has a front-lit display that renders text as crisply as ink on paper.
Other coronavirus news:
Under the best of circumstances, testing would lag anyway — because most people don’t show symptoms of COVID-19 for a few days. So positive tests are essentially snapshots of where the virus was several days ago. But by keeping the test criteria narrow, the CDC lost valuable time to prevent outbreaks like the one at Life Care. Now health officials are scrambling to catch up.
The two big reviews (and one big video) yesterday were Nilay Patel’s looks at the Mac Pro and the Apple Pro Display XDR. I think both products fall into a kind of Pro Uncanny Valley. They’re wildly more powerful than what’s been available to Mac users before, sure. But unless you are a specific kind of user, it’s unlikely you’ll get full value for their price. They’re too expensive to be aspirational purchases for most semi-pro users, and yet the software isn’t quite ready for full-on pro users (at least in the media creation space).
This is a situation that will either resolve itself to the relief of everybody as software catches up… or it won’t. The latter option is a bit of a worst case scenario, coming on the heels of the bad Trashcan Mac years and the years waiting for this new modular design.
I’ll just say it again: there’s a version of this Mac Pro that starts at, say, $2,500 — albeit with more consumer-grade components. Apple clearly doesn’t believe that it’s worth making that kind of tower computer anymore.
Like so many things Apple, it’s a bit of a walled garden: if you live in Apple’s pro apps, and use Apple’s preferred formats, the Mac Pro will be very fast. But step outside Apple’s ecosystem, and things revert to more familiar territory. The good news is that this Mac Pro seems likely to inspire some optimizations, but it’s hard to say how long those will take.
So this is a puzzle: Apple has to convince all of the people who gasped at the idea of a $5,000 monitor and $1,000 stand that the Pro Display XDR is worth the upgrade, and convince the people picky enough to spend $43,000 on a reference monitor for color-accurate work that this display can hit the marks. To be completely honest with you, I have no idea how that’s going to go.
┏ The Verge tech survey 2020. Casey Newton will have more to say in his newsletter, The Interface. I’ll just note that I think people overestimate how beloved Apple is and underestimate how beloved Amazon and Google are.
By default, Apple will offer $25 to any current or former owner of a covered iPhone. Named class members will receive $1,500 or $3,500, and around $90 million will go toward attorneys.
┏ A Final Fantasy VII Remake demo is out now for PS4. Here are looks at what’s new in the remake from Megan Farokhmanesh and Nick Statt.
┏ Imagine a world without YouTube. Incredible piece of writing by Adi Robertson. YouTube seems like a piece of the internet that’s always been there, been the way it is. But the reality is that it could have gone many other ways.
┏ AT&T TV now available nationwide with Android TV set-top box — and a two-year contract. How many ways is this ridiculous? The two-year contract. The price hikes after a year. The $120 price of a second box. The confusing name. That’s four off the top of my head. If this service is any kind of success, I think it makes the case for AT&T having too much market power. In no sane marketplace does this thing even get off the ground.
┏ Nvidia’s GeForce Now is becoming an important test for the future of cloud gaming. In earlier newsletters I’ve presented this as analogous to the channel carriage fights we see on cable TV. That’s still true, but it leaves out the fact that in gaming there are lots of smaller developers who also have concerns. Nick Statt’s evenhanded look at all of the controversy is the definitive take on the subject right now.