It's a big day for Google in India. The company is launching its Pixel smartphones in the country today. For the occasion, Google ran a two-page ad in the Times of India newspaper showcasing one of the Pixel's most interesting features, Google Assistant. Its only sin: the almighty artificial intelligence bot got the facts wrong.
In the ad, Google Assistant is shown responding to a user's query about their flight to London. The plane, United Airlines Flight 83, is shown departing from DEL (New Delhi) and arriving at LHR (London Heathrow, United Kingdom). That seems fine, except that United Airlines Flight 83 doesn't actually fly to LHR. It flies to EWR (Newark Liberty International Airport).
In Google's defense, its Assistant probably knows all of this, and the fault likely lies with the people tasked with creating the ad. We checked Google Assistant on Allo for United Airlines Flight 83, and it did show the destination as EWR, not LHR.
It also doesn't help that the Times of India is the country's most circulated English newspaper, so people are going to notice. Oh well.
Google Assistant is the headline feature of the new Pixel smartphones. It uses artificial intelligence to understand what users are saying and responds conversationally with the most relevant and accurate answers. Google Assistant will be exclusive to Google's Pixel and Pixel XL until next year.
In news that should make anyone who’s experienced an Apple Maps fail a little less angry, Bloomberg reports that unnamed sources say that Apple is taking steps to overhaul its Maps service.
The report’s sources claim Apple is building a new team of robotics and data-collection experts with the directive to use drones to capture and update map information. Up to now, Maps data has been collected by a fleet of street-bound cars, so taking to the sky would immediately expand the effort.
The drones would be especially helpful for up-to-the-minute road monitoring and accurate traffic information, an area where Apple Maps has lagged behind Google Maps. The data collected will be sent to Apple teams tasked with updating the app for the highest level of accuracy possible. According to Bloomberg's sources, at least one person from Amazon's Prime Air division has been brought in for the work.
But do we really want a bunch of flying Apple cameras patrolling the skies across the country? The company will have to abide by the Federal Aviation Administration’s commercial drone-use regulations, which Apple committed to when they were rolled out back in August.
Those regulations might make the drone initiative in cities near impossible, since flying over people and buildings are two of its strongest prohibitions. But in countries where there aren’t commercial restrictions, Apple can fly all it wants.
Along with the drones, Bloomberg's sources said Apple is also developing new Maps features for use indoors and for its in-car navigation service. In a move that went largely under the radar last year, Apple acquired Finnish startup Indoor.io, a deal that has now been confirmed.
Google’s DoubleClick ad network had an issue with the Media Rating Council’s updated rules.
Image: Associated Press/David Goldman
In the wake of revelations that Facebook overestimated a key video metric for years, Google, the other half of the mobile advertising duopoly, is having troubles of its own.
While Google’s issue was more of a technicality than intentionally misleading advertisers, it seems the search giant’s publisher ad network, DoubleClick, failed to update its measurement model to account for new rules from a major trade group in April. This led to the suspension of two of its mobile viewership metrics.
The Media Rating Council, the benchmark accreditor for how online ad impressions are measured, announced the decision last month, Business Insider first reported.
The group rewrote its rules in spring to mandate that ad impressions only be counted after “reasonable assurance that the ad was rendered on the device.” The previous standard measured each time an ad was served.
Because of the scope of the project, Google wasn’t able to meet the new criteria within the allowed 30-day window, and accreditation of its mobile web impression measurement and viewability metric used to verify impressions was put on hold until the company is able to address the issue.
A Google spokesperson told Mashable the company is hoping to do so by the end of the year.
In the meantime, the Alphabet-owned site still has dozens of other metrics with the requisite papers in order.
“As the industry transitions to new metrics for how they count ads, we're working closely with publisher partners to make sure they continue to thrive,” the spokesperson said in an emailed statement. “We're updating the methodology for our publisher ad server [DoubleClick For Publishers] to reflect the change.”
Proper viewership gauges have become an increasingly big topic of conversation in the ad industry, as ad fraud (networks of bots designed to mimic human behavior and scam advertisers) and technical load problems have thrown into question the true value of display advertising.
Two years ago, Google revealed that half of all digital ads served are never seen by actual people for these reasons.
The MRC, an obscure, half-century-old agency with an outsized influence, has been leading the charge for a new viewability standard along with other industry trade bodies like the Interactive Advertising Bureau, the Association of National Advertisers and the American Association of Advertising Agencies.
A consensus was eventually reached: At least half of every ad must be seen for one second, or two seconds for video, in order to qualify.
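That consensus standard reduces to a simple predicate. Here is a minimal sketch of how it could be encoded; the function and parameter names are ours, while the 50 percent, one-second and two-second thresholds come from the rule described above:

```python
def is_viewable(visible_fraction: float, seconds_in_view: float,
                is_video: bool = False) -> bool:
    """Hypothetical encoding of the MRC viewability consensus:
    at least 50% of the ad's pixels in view, continuously, for at
    least 1 second (2 seconds for video)."""
    required_seconds = 2.0 if is_video else 1.0
    return visible_fraction >= 0.5 and seconds_in_view >= required_seconds

# A banner half-visible for 1.2 seconds counts as viewable;
# the same exposure for a video ad does not.
print(is_viewable(0.5, 1.2))                 # True
print(is_viewable(0.5, 1.2, is_video=True))  # False
```

The key shift from the old standard is that the inputs here describe what actually rendered on the device, not merely whether the ad server responded to a request.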
“We did a lot of research around this focused on where you had evidence that ads were in view and had been recognized by a user,” David Gunzerath, senior VP and associate director at the MRC, said in an interview at the time of the update. “With all our measurement standards, we’re always sort of revisiting them and re-challenging them over time.”
The MRC has accredited a total of 18 different companies for ad measurement, including giants like Nielsen and Rentrak, but not all of them have been vetted since the change due to the intervals of the group’s audits.
The suspension is expected to have little impact on Google’s day-to-day business.
With Samsung's Galaxy Note 7 effectively dead for now, Google phones are emerging as a strong alternative, along with the iPhone 7 Plus.
“The first pure Google-branded phones could not have arrived at a better time. Google's new Pixel phone will be an attractive option to high-end Android phone owners,” Bob O'Donnell, president and founder of TECHnalysis Research, told FoxNews.com in an email.
The larger version, the 5.5-inch Pixel XL, is priced at $869 (128GB), very close to the 64GB Galaxy Note 7, which was priced at around $850 at most U.S. carriers.
Like the Note 7, the Pixel XL sports an AMOLED display with a 2,560-by-1,440 resolution. Other internal specs are similar, if not identical, to the Note 7's, including the latest Qualcomm Snapdragon quad-core 820 processor (Google lists the Pixel's processor as the 821), 4GB of RAM, a USB-C connector, and a 3.5mm headphone jack.
Google company headquarters in Mountain View, California.
Image: Marcio Jose Sanchez/AP Photo
Working for Google may sound fun, but the interview process sure doesn’t.
After applying for a director of engineering role at the company, Pierre Gauthier, a computer engineer who started his own tech company 18 years ago, was asked some pretty intimidating questions in a phone interview.
After failing to give the Google recruiter the “right answers,” he decided to write a blog post on Gwan.com sharing the challenging questions, his responses and his candid thoughts with the public.
Though Gauthier managed to answer the first four questions correctly, it was all downhill from there. Gauthier soon found himself arguing his answers with the recruiter, and by the ninth question, he frustratedly asked, “What’s the point of this test?”
Basically, if Google ever calls you for an interview, here are ten questions you’ll want to know the answers to:
1. What is the opposite function of malloc() in C?
2. What Unix function lets a socket receive connections?
3. How many bytes are necessary to store a MAC address?
4. Sort the time taken by: CPU register read, disk seek, context switch, system memory read.
5. What is a Linux inode?
6. What Linux function takes a path and returns an inode?
7. What is the name of the KILL signal?
8. Why Quicksort is the best sorting method?
9. There’s an array of 10,000 16-bit values, how do you count the bits most efficiently?
10. What is the type of the packets exchanged to establish a TCP connection?
Those sound like a joy, right?
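To give a flavor of what the recruiter was fishing for, question 9 is classically answered with a precomputed popcount lookup table: build a table of bit counts for every possible 16-bit value once, then counting the bits of each array element is a single index. A minimal sketch (in Python for readability; the interview context presumably expects C, where the same table trick applies):

```python
# Precompute the number of set bits for every 16-bit value (65,536 entries).
# Building the table is a one-time cost; each lookup afterward is O(1).
POPCOUNT_16 = [bin(i).count("1") for i in range(1 << 16)]

def count_bits(values):
    """Total number of set bits across an array of 16-bit values."""
    return sum(POPCOUNT_16[v] for v in values)

# 0xFFFF has 16 set bits, 0x0001 has 1, 0x00F0 has 4: 21 in total.
print(count_bits([0xFFFF, 0x0001, 0x00F0]))  # 21
```

For the hypothetical 10,000-value array in the question, that is 10,000 table lookups instead of 160,000 bit tests, which is the kind of trade-off the question is probing.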
And just in case you didn’t think Gauthier was properly qualified for the position, he began his blog post by summarizing his many years of experience:
For the sake of the discussion, I started coding 37 years ago (I was 11 years old) and never stopped since then. Beyond having been appointed as R&D Director 24 years ago (I was 24 years old), among (many) other works, I have since then designed and implemented the most demanding parts of TWD’s R&D projects…
Following his less-than-satisfying interview experience, Gauthier posed the question, “Is Google raising the bar too high or is their recruiting staff seriously lacking the skills they are supposed to rate?”
The imagery gives you quite an amazing view into various processes that change the shape of our planet: deforestation, glacial motion, urbanization, war. Google offers a curated selection of interesting locations and events, such as the reconstruction of the Oakland Bay Bridge in San Francisco or the movement of the Hourihan Glacier in Antarctica.
You can, however, point the map to any location in the world and see how it changed over time (though the imagery might not be of the same quality everywhere).
See a YouTube playlist with all of Google’s curated Timelapse examples, below.
Google has shared an interesting insight on its blog into how Timelapse was created: it took three quadrillion pixels and more than 5,000,000 satellite images to do it. Check out the details here.
“I like good strong words that mean something,” Louisa May Alcott writes as Jo March in Little Women.
With our current political climate, this quote from Alcott's iconic novel, which is loosely based on her own childhood, holds even more weight.
The novelist was born on Nov. 29, 1832, and this Tuesday is her 184th birthday. As such, Google is celebrating the life and wise words of the author who brought us the March family and so much more … with a Doodle!
The Doodle, by Sophie Diao, shows sisters Beth, Jo, Amy, and Meg, and Jo’s best friend Laurie (played by the delicious Christian Bale in the film).
Outside of her writing, Alcott was a suffragist, abolitionist, and feminist. She was a volunteer nurse during the American Civil War and her family’s home was a station on the Underground Railroad. An active member of the women’s suffrage movement, Alcott was the first woman to register to vote in Concord, Massachusetts.
“I want to do something splendid before I go into my castle, something heroic or wonderful that won’t be forgotten after I’m dead,” Jo March says in Little Women. “I don’t know what, but I’m on the watch for it, and mean to astonish you all some day.”
Google’s artificial intelligence can play the ancient game of Go better than any human. It can identify faces, recognize spoken words, and pull answers to your questions from the web. But the promise is that this same kind of technology will soon handle far more serious work than playing games and feeding smartphone apps. One day, it could help care for the human body.
“We were able to take something core to Google—classifying cats and dogs and faces—and apply it to another sort of problem,” says Lily Peng, the physician and biomedical engineer who oversees the project at Google.
But the idea behind this AI isn’t to replace doctors. Blindness is often preventable if diabetic retinopathy is caught early. The hope is that the technology can screen far more people for the condition than doctors could on their own, particularly in countries where healthcare is limited, says Peng. The project began, she says, when a Google researcher realized that doctors in his native India were struggling to screen all the locals that needed to be screened.
In many places, doctors are already using photos to diagnose the condition without seeing patients in person. “This is a well validated technology that can bring screening services to remote locations where diabetic retinal eye screening is less available,” says David McColloch, a clinical professor of medicine at the University of Washington who specializes in diabetes. That could provide a convenient on-ramp for an AI that automates the process.
Peng’s project is part of a much wider effort to detect disease and illness using deep neural networks, pattern recognition systems that can learn discrete tasks by analyzing vast amounts of data. Researchers at DeepMind, a Google AI lab in London, have teamed with Britain’s National Health Service to build various technologies that can automatically detect when patients are at risk of disease and illness, and several other companies, including Salesforce.com and a startup called Enlitic, are exploring similar systems. At Kaggle, an internet site where data scientists compete to solve real-world problems using algorithms, groups have worked to build their own machine learning systems that can automatically identify diabetic retinopathy.
Peng is part of Google Brain, a team inside the company that provides AI software and services for everything from search to security to Android. Within this team, she now leads a group spanning dozens of researchers that focuses solely on medical applications for AI.
The work on diabetic retinopathy started as a “20 percent project” about two years ago, before becoming a full-time effort. Researchers began working with India's Aravind and Sankara eye hospitals, which were already collecting retinal photos for doctors to examine. Then the Google team asked more than four dozen doctors in India and the US to identify photos where mini-aneurysms, hemorrhages, and other issues indicated that diabetic patients could be at risk for blindness. At least three doctors reviewed each photo before Peng and her team fed about 128,000 of these images into their neural network.
Ultimately, the system identified the condition slightly more consistently than the original group of doctors. At its most sensitive, the system avoided both false negatives and false positives more than 90 percent of the time, exceeding the National Institutes of Health’s recommended standard of at least 80 percent accuracy and precision for diabetic retinopathy screens.
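The two figures behind that claim are sensitivity (how rarely the system misses an at-risk patient, i.e. avoids false negatives) and specificity (how rarely it flags a healthy one, i.e. avoids false positives). A minimal sketch of how these screening metrics fall out of a confusion matrix; the counts below are illustrative only, not Google's data:

```python
def sensitivity(true_pos: int, false_neg: int) -> float:
    """True-positive rate: fraction of genuinely at-risk eyes the screen flags."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg: int, false_pos: int) -> float:
    """True-negative rate: fraction of healthy eyes the screen correctly clears."""
    return true_neg / (true_neg + false_pos)

# Illustrative counts for 1,000 hypothetical screens:
# 180 true positives, 15 false negatives, 790 true negatives, 15 false positives.
sens = sensitivity(180, 15)   # 180/195
spec = specificity(790, 15)   # 790/805
print(f"sensitivity={sens:.3f}, specificity={spec:.3f}")
```

A screen clearing the bar described above would need both numbers above 0.90; the NIH's 80 percent recommendation sets the corresponding floor at 0.80.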
Given the success of deep learning algorithms with other machine vision tasks, the results of the original trial aren’t surprising. But Yaser Sheikh, a professor of computer science at Carnegie Mellon who is working on other forms of AI for healthcare, says that actually moving this kind of thing into the developing world can be difficult. “It is the kind of thing that sounds good, but actually making it work has proven to be far more difficult,” he says. “Getting technology to actually help in the developing world—there are many, many systematic barriers.”
But Peng and her team are pushing forward. She says Google is now running additional trials with photos taken specifically to train its diagnostic AI. Preliminary results, she says, indicate that the system once again performs as well as trained doctors. The machines, it seems, are gaining new kinds of sight. And some day, they might save yours.