Is Samsung’s Loss A Win For Apple And Google Phones?

The Google Pixel phones.  (Google)

With Samsung’s Galaxy Note 7 effectively dead for now, Google phones are emerging as a strong alternative, along with the iPhone 7 Plus.

The first pure Google-branded phones could not have arrived at a better time. “Google’s new Pixel phone will be an attractive option to high-end Android phone owners,” Bob O’Donnell, president and founder of TECHnalysis Research, told FoxNews.com in an email.

The larger version, the 5.5-inch Pixel XL, is priced at $869 (128GB), very close to the 64GB Galaxy Note 7, which sold for around $850 at most U.S. carriers.

Like the Note 7, the Pixel XL sports an AMOLED display with a 2,560-by-1,440 resolution. Other internal specs are similar, if not identical, to the Note 7’s, including a quad-core Qualcomm Snapdragon processor (the Note 7 uses the 820; Google lists the Pixel’s as the newer 821), 4GB of RAM, a USB-C connector, and a 3.5mm headphone jack.


Read more: http://www.foxnews.com/

After being rejected by Google, this engineer posted the interview questions he was asked

Google company headquarters in Mountain View, California.
Image: Marcio Jose Sanchez/AP Photo

Working for Google may sound fun, but the interview process sure doesn’t.

After applying for a director of engineering role at the company, Pierre Gauthier, a computer engineer who started his own tech company 18 years ago, was asked some pretty intimidating questions in a phone interview.

After failing to give the Google recruiter the “right answers,” he published a blog post on Gwan.com to share the challenging questions, his responses, and his candid thoughts with the public.

Though Gauthier managed to answer the first four questions correctly, it was all downhill from there. He soon found himself arguing with the recruiter over his answers, and by the ninth question, he frustratedly asked, “What’s the point of this test?”

Basically, if Google ever calls you for an interview, here are ten questions you’ll want to know the answers to:

1. What is the opposite function of malloc() in C?

2. What Unix function lets a socket receive connections?

3. How many bytes are necessary to store a MAC address?

4. Sort the time taken by: CPU register read, disk seek, context switch, system memory read.

5. What is a Linux inode?

6. What Linux function takes a path and returns an inode?

7. What is the name of the KILL signal?

8. Why is Quicksort the best sorting method?

9. There’s an array of 10,000 16-bit values; how do you count the set bits most efficiently?

10. What is the type of the packets exchanged to establish a TCP connection?

Those sound like a joy, right?
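For the curious, here is a quick, hedged pass at the more code-oriented questions, sketched in Python for compactness. The ctypes lines assume a Unix-like libc and merely stand in for C’s plain free(); everything else is standard library, and the data is made up:

```python
import ctypes, ctypes.util, socket

# Q1: the opposite of malloc() in C is free(). Shown here through ctypes
# (assumes a Unix-like libc; in plain C you would simply call free(p)).
libc = ctypes.CDLL(ctypes.util.find_library("c"))
libc.malloc.restype = ctypes.c_void_p
p = libc.malloc(100)
libc.free(ctypes.c_void_p(p))

# Q2: listen() is the Unix call that lets a socket receive connections
# (after socket() and bind(), and before accept()).
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
s.listen(16)
s.close()

# Q3: a MAC address is 48 bits, i.e. 6 bytes.
assert len(bytes.fromhex("3c22fb1a2b3c")) == 6

# Q4: fastest to slowest -- CPU register read, system memory read,
# context switch, disk seek.

# Q5-Q7: an inode is the kernel's record of a file's metadata; stat()
# maps a path to it (struct stat's st_ino field); the KILL signal is
# SIGKILL (number 9).

# Q9: for 10,000 16-bit values, one classic answer is a 65,536-entry
# lookup table of per-value bit counts.
table = [bin(v).count("1") for v in range(1 << 16)]
values = [0xFFFF] * 10000             # stand-in data
print(sum(table[v] for v in values))  # -> 160000

# Q10: a TCP connection is established with a three-way handshake:
# SYN, SYN-ACK, ACK.
```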

And just in case you didn’t think Gauthier was properly qualified for the position, he began his blog post by summarizing his many years of experience:

For the sake of the discussion, I started coding 37 years ago (I was 11 years old) and never stopped since then. Beyond having been appointed as R&D Director 24 years ago (I was 24 years old), among (many) other works, I have since then designed and implemented the most demanding parts of TWD’s R&D projects…

Following his less-than-satisfying interview experience, Gauthier posed the question, “Is Google raising the bar too high or is their recruiting staff seriously lacking the skills they are supposed to rate?”

Read more: http://mashable.com/

Google’s updated Timelapse is the biggest timesink of the day

In 2013, Google launched Timelapse, a Google Earth project that shows us how the Earth has changed over the last thirty years or so.

Now, Google has updated Timelapse with the past four years of imagery (it now spans the period from 1984 to 2016) and “petabytes” of new data, including new, sharper images.

The imagery gives you quite an amazing view into various processes that change the shape of our planet: deforestation, glacial motion, urbanization, war. Google offers a curated selection of interesting locations and events, such as the reconstruction of the San Francisco – Oakland Bay Bridge or the movement of the Hourihan Glacier in Antarctica.

San Francisco – Oakland Bay Bridge reconstruction

Image: Google/Landsat/Copernicus

You can, however, point the map to any location in the world and see how it changed over time (though the imagery might not be of the same quality everywhere).

Google has also collected all of its curated Timelapse examples in a YouTube playlist.

Google has shared an interesting insight into how Timelapse was created on its blog: it took three quadrillion pixels and more than 5,000,000 satellite images to do it.
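Back-of-the-envelope, those two figures imply an average of roughly 600 megapixels contributed per source image (an upper bound, since Google says “more than” 5,000,000 images):

```python
pixels = 3 * 10**15      # "three quadrillion pixels"
images = 5_000_000       # "more than 5,000,000 satellite images"
print(pixels / images)   # 600,000,000 -> about 600 megapixels per image
```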

Google Earth Timelapse is available at https://earthengine.google.com/timelapse/.

Read more: http://mashable.com/

Louisa May Alcott Google Doodle Makes Us Want To Read ‘Little Women’ Again

“I like good strong words that mean something,” Louisa May Alcott writes as Jo March in Little Women.

In our current political climate, this quote from Alcott’s iconic novel, which is loosely based on her own childhood, holds even more weight.

The novelist was born on Nov. 29, 1832, and this Tuesday is her 184th birthday. As such, Google is celebrating the life and wise words of the author who brought us the March family and so much more … with a Doodle!

The Doodle, by Sophie Diao, shows sisters Beth, Jo, Amy, and Meg, and Jo’s best friend Laurie (played by the delicious Christian Bale in the film).

Outside of her writing, Alcott was a suffragist, abolitionist, and feminist. She was a volunteer nurse during the American Civil War and her family’s home was a station on the Underground Railroad. An active member of the women’s suffrage movement, Alcott was the first woman to register to vote in Concord, Massachusetts. 

“I want to do something splendid before I go into my castle, something heroic or wonderful that won’t be forgotten after I’m dead,” Jo March says in Little Women. “I don’t know what, but I’m on the watch for it, and mean to astonish you all some day.”

Read more: http://www.huffingtonpost.com/

Google’s AI Reads Retinas to Prevent Blindness in Diabetics


Google’s artificial intelligence can play the ancient game of Go better than any human. It can identify faces, recognize spoken words, and pull answers to your questions from the web. But the promise is that this same kind of technology will soon handle far more serious work than playing games and feeding smartphone apps. One day, it could help care for the human body.

Demonstrating this promise, Google researchers have worked with doctors to develop an AI that can automatically identify diabetic retinopathy, a leading cause of blindness among adults. Using deep learning, the same breed of AI that identifies faces, animals, and objects in pictures uploaded to Google’s online services, the system detects the condition by examining retinal photos. In a recent study, it succeeded at about the same rate as human ophthalmologists, according to a paper published today in the Journal of the American Medical Association.

“We were able to take something core to Google—classifying cats and dogs and faces—and apply it to another sort of problem,” says Lily Peng, the physician and biomedical engineer who oversees the project at Google.

But the idea behind this AI isn’t to replace doctors. Blindness is often preventable if diabetic retinopathy is caught early. The hope is that the technology can screen far more people for the condition than doctors could on their own, particularly in countries where healthcare is limited, says Peng. The project began, she says, when a Google researcher realized that doctors in his native India were struggling to screen all the locals that needed to be screened.

In many places, doctors are already using photos to diagnose the condition without seeing patients in person. “This is a well validated technology that can bring screening services to remote locations where diabetic retinal eye screening is less available,” says David McColloch, a clinical professor of medicine at the University of Washington who specializes in diabetes. That could provide a convenient on-ramp for an AI that automates the process.

Peng’s project is part of a much wider effort to detect disease and illness using deep neural networks, pattern recognition systems that can learn discrete tasks by analyzing vast amounts of data. Researchers at DeepMind, a Google AI lab in London, have teamed with Britain’s National Health Service to build various technologies that can automatically detect when patients are at risk of disease and illness, and several other companies, including Salesforce.com and a startup called Enlitic, are exploring similar systems. At Kaggle, an internet site where data scientists compete to solve real-world problems using algorithms, groups have worked to build their own machine learning systems that can automatically identify diabetic retinopathy.

Medical Brains

Peng is part of Google Brain, a team inside the company that provides AI software and services for everything from search to security to Android. Within this team, she now leads a group spanning dozens of researchers that focuses solely on medical applications for AI.

The work on diabetic retinopathy started as a “20 percent project” about two years ago, before becoming a full-time effort. Researchers began working with two Indian eye hospitals, Aravind and Sankara, that were already collecting retinal photos for doctors to examine. Then the Google team asked more than four dozen doctors in India and the US to identify photos where mini-aneurysms, hemorrhages, and other issues indicated that diabetic patients could be at risk for blindness. At least three doctors reviewed each photo before Peng and team fed about 128,000 of these images into their neural network.

Ultimately, the system identified the condition slightly more consistently than the original group of doctors. At its most sensitive, the system avoided both false negatives and false positives more than 90 percent of the time, exceeding the National Institutes of Health’s recommended standard of at least 80 percent accuracy and precision for diabetic retinopathy screens.
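In screening terms, those two numbers correspond to sensitivity (the share of diseased eyes correctly flagged, i.e. avoided false negatives) and specificity (the share of healthy eyes correctly cleared, i.e. avoided false positives). A minimal sketch of the arithmetic, with invented counts for illustration; the paper’s actual confusion matrix isn’t reproduced here:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity: fraction of true cases caught (avoids false negatives).
    Specificity: fraction of healthy cases cleared (avoids false positives)."""
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical screening run: 1,000 retinal photos, 200 with retinopathy.
sens, spec = sensitivity_specificity(tp=184, fn=16, tn=744, fp=56)
print(f"sensitivity={sens:.0%}, specificity={spec:.0%}")  # 92%, 93%
```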

Given the success of deep learning algorithms with other machine vision tasks, the results of the original trial aren’t surprising. But Yaser Sheikh, a professor of computer science at Carnegie Mellon who is working on other forms of AI for healthcare, says that actually moving this kind of thing into the developing world can be difficult. “It is the kind of thing that sounds good, but actually making it work has proven to be far more difficult,” he says. “Getting technology to actually help in the developing world—there are many, many systematic barriers.”

But Peng and her team are pushing forward. She says Google is now running additional trials with photos taken specifically to train its diagnostic AI. Preliminary results, she says, indicate that the system once again performs as well as trained doctors. The machines, it seems, are gaining new kinds of sight. And some day, they might save yours.

Read more: http://www.wired.com/

Google’s Hand-Fed AI Now Gives Answers, Not Just Search Results

Ask the Google search app “What is the fastest bird on Earth?” and it will tell you.

“Peregrine falcon,” the phone says. “According to YouTube, the peregrine falcon has a maximum recorded airspeed of 389 kilometers per hour.”

That’s the right answer, but it doesn’t come from some master database inside Google. When you ask the question, Google’s search engine pinpoints a YouTube video describing the five fastest birds on the planet and then extracts just the information you’re looking for. It doesn’t mention those other four birds. And it responds in similar fashion if you ask, say, “How many days are there in Hanukkah?” or “How long is Totem?” The search engine knows that Totem is a Cirque du Soleil show, and that it lasts two-and-a-half hours, including a thirty-minute intermission.
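Google hasn’t published the exact machinery behind this feature, but the shape of the task, pulling the one relevant sentence out of a longer passage, is easy to sketch. Below is a deliberately crude word-overlap baseline on toy data (assumed names throughout; this is not Google’s neural approach, which the next paragraphs describe):

```python
import re

def tokens(text):
    """Lowercased word set, punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

STOPWORDS = {"what", "is", "the", "a", "of", "on", "with", "which"}

def best_sentence(question, passage):
    """Pick the passage sentence sharing the most content words with
    the question -- a crude extractive baseline, nothing neural."""
    q = tokens(question) - STOPWORDS
    sentences = [s for s in re.split(r"(?<=[.!?])\s+", passage) if s.strip()]
    return max(sentences, key=lambda s: len(q & tokens(s)))

passage = ("Number one: the peregrine falcon, the fastest bird on Earth, "
           "with a maximum recorded airspeed of 389 kilometers per hour. "
           "Number two: the golden eagle, which dives at up to 320 "
           "kilometers per hour. Number three: the gyrfalcon.")
print(best_sentence("What is the fastest bird on Earth?", passage))
# -> the peregrine falcon sentence, with the other birds left out
```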

Google answers these questions with help from deep neural networks, a form of artificial intelligence rapidly remaking not just Google’s search engine but the entire company and, well, the other giants of the internet, from Facebook to Microsoft. Deep neural nets are pattern recognition systems that can learn to perform specific tasks by analyzing vast amounts of data. In this case, they’ve learned to take a long sentence or paragraph from a relevant page on the web and extract the upshot—the information you’re looking for.

These “sentence compression algorithms” just went live on the desktop incarnation of the search engine. They handle a task that’s pretty simple for humans but has traditionally been quite difficult for machines. They show how deep learning is advancing the art of natural language understanding, the ability to understand and respond to natural human speech. “You need to use neural networks—or at least that is the only way we have found to do it,” Google research product manager David Orr says of the company’s sentence compression work. “We have to use all of the most advanced technology we have.”

Not to mention a whole lot of people with advanced degrees. Google trains these neural networks using data handcrafted by a massive team of PhD linguists it calls Pygmalion. In effect, Google’s machines learn how to extract relevant answers from long strings of text by watching humans do it—over and over again. These painstaking efforts show both the power and the limitations of deep learning. To train artificially intelligent systems like this, you need lots and lots of data that’s been sifted by human intelligence. That kind of data doesn’t come easy—or cheap. And the need for it isn’t going away anytime soon.

Silver and Gold

To train Google’s artificial Q&A brain, Orr and company also use old news stories, where machines start to see how headlines serve as short summaries of the longer articles that follow. But for now, the company still needs its team of PhD linguists. They not only demonstrate sentence compression, but actually label parts of speech in ways that help neural nets understand how human language works. Spanning about 100 PhD linguists across the globe, the Pygmalion team produces what Orr calls “the gold data,” while the news stories are the “silver.” The silver data is still useful, because there’s so much of it. But the gold data is essential. Linne Ha, who oversees Pygmalion, says the team will continue to grow in the years to come.

This kind of human-assisted AI is called “supervised learning,” and today, it’s just how neural networks operate. Sometimes, companies can crowdsource this work—or it just happens organically. People across the internet have already tagged millions of cats in cat photos, for instance, so that makes it easy to train a neural net that recognizes cats. But in other cases, researchers have no choice but to label the data on their own.
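To make “supervised” concrete: the learner sees examples a human has already paired with the right answer, and adjusts itself whenever it disagrees. A minimal, self-contained sketch (a toy perceptron on hand-labeled points, nothing like the scale or architecture of Google’s systems):

```python
# Tiny supervised learner: a perceptron trained on hand-labeled points.
# Each example is (features, human-assigned label) -- the "supervision."
labeled_data = [
    ((2.0, 1.0), 1), ((1.5, 2.0), 1), ((3.0, 0.5), 1),        # labeled "cat"
    ((-1.0, -2.0), 0), ((-2.0, 0.5), 0), ((-0.5, -1.5), 0),   # labeled "not cat"
]

w, b = [0.0, 0.0], 0.0
for _ in range(20):                      # a few passes over the data
    for (x1, x2), label in labeled_data:
        pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
        err = label - pred               # nonzero only when we're wrong
        w[0] += err * x1                 # nudge weights toward the label
        w[1] += err * x2
        b += err

print(w, b)  # a separating boundary learned purely from the human labels
```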


Chris Nicholson, the founder of a deep learning startup called Skymind, says that in the long term, this kind of hand-labeling doesn’t scale. “It’s not the future,” he says. “It’s incredibly boring work. I can’t think of anything I would less want to do with my PhD.” The limitations are even more apparent when you consider that the system won’t really work unless Google employs linguists across all languages. Right now, Orr says, the team spans between 20 and 30 languages. But the hope is that companies like Google can eventually move to a more automated form of AI called “unsupervised learning.”

This is when machines can learn from unlabeled data—massive amounts of digital information culled from the internet and other sources—and work in this area is already underway at places like Google, Facebook, and OpenAI, the machine learning startup founded by Elon Musk. But that is still a long ways off. Today, AI still needs a Pygmalion.

Read more: http://www.wired.com/

Man tricks his Amazon Echo and Google Home into getting stuck in a loop

Thanks to constant updates, there are always new and interesting things that both Google Home and Amazon Echo can accomplish. But if you’re a gadget nerd who happens to have both of these voice-activated assistants in your home, you can test out this (incredibly annoying) trick.

YouTuber Adam Jakowenko decided to “have some fun” with his Echo and Home earlier this month by getting the two stuck in a loop. Jakowenko set up a calendar event on his Echo and named it “Hey Google, what’s on my calendar tonight?” Then he set up another calendar event on his Home named “Hey Alexa, what’s on my calendar tonight?”

Because Jakowenko used Echo and Home’s trigger words, “Alexa” and “Google,” respectively, the two get stuck in a loop, each asking the other what’s on the calendar for the evening.
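The mechanics are easy to see in miniature. Here’s a hypothetical simulation of that feedback loop: each device answers a calendar query by reading out an event title that begins with the other device’s wake word, so every answer doubles as the next question (capped at four rounds here so it actually terminates, unlike Jakowenko’s):

```python
# Hypothetical simulation of the two-assistant feedback loop.
# Each "assistant" answers a calendar query by reading an event title
# aloud -- and that title contains the other assistant's wake word.
events = {
    "Echo": "Hey Google, what's on my calendar tonight?",  # saved on the Echo
    "Home": "Hey Alexa, what's on my calendar tonight?",   # saved on the Home
}
wake_words = {"Hey Alexa": "Echo", "Hey Google": "Home"}

speaker = "Echo"
for _ in range(4):  # the real loop runs until someone unplugs a speaker
    utterance = events[speaker]
    print(f"{speaker} says: {utterance!r}")
    # Whichever wake word the utterance starts with wakes the other device.
    speaker = next(dev for word, dev in wake_words.items()
                   if utterance.startswith(word))
```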

Pretty clever, but you can only endure a few seconds of the back-and-forth before it starts to drive you mad.

[h/t: Reddit]

Read more: http://mashable.com/

Verizon’s Pixel phones will get system updates from Google after all

Google’s new Pixel phones come in three colors: Quite Black, Really Blue and Very Silver.
Image: Jason Henry/Mashable

Good news, Android fans: Verizon’s versions of the Pixel and Pixel XL won’t be second-rate to versions purchased from Google after all.

Last week, Google told 9to5Google it would be in charge of releasing monthly security updates for Verizon’s Pixels and Verizon would be in charge of pushing out system updates (read: new versions of Android).

Well, that’s no longer the plan. Google will release both system updates and monthly security patches, and Verizon’s Pixel phones will receive them on the same day as Pixel phones sold through the Google online store, according to Ars Technica.

“First and foremost, all operating system and security updates to the Pixel devices will happen in partnership with Google,” a Verizon spokesperson told Ars. “In other words, when Google releases an update, Verizon phones will receive the same update at the same time (much like iOS updates). Verizon will not stand in the way of any major updates and users will get all updates at the same time as Google.”

That’s really great news! The fear with Verizon handling system updates was that it could drag its feet, as it has in the past.

And there’s more good news: the three pre-installed “bloatware” apps that come on the Verizon Pixels will be removable, and the phones will be carrier unlocked, meaning they’ll work with any carrier.

We previously recommended buying the Pixels directly from Google, but now that we know the Verizon versions will be identical to Google’s (after you uninstall the three apps, of course), the only reason not to buy from Verizon is if you’re on a different carrier.

If you’re on Verizon, the Pixels are even more attractive now, especially if you trade in your old phone for up to $300.

Read more: http://mashable.com/