Innovation

Google AI evolves with humanity

Google held its annual developer conference, Google I/O 2018, on May 8. CEO Sundar Pichai opened with the story of the controversial hamburger emoji redesign (laughs). I suspect a Google designer simply fixed it by hand, but the anecdote was framed as an example of how hard (and time-consuming) artificial intelligence (AI) still is, which made it an amusing topic for the "AI First" Google (^ ^). The announcements at Google I/O covered fields where AI already exceeds human capability, such as medicine and autonomous driving, as well as many fields pursuing friendlier interfaces, such as speech that cannot be distinguished from a person. Google I/O 2018 let us imagine the direction and future of artificial intelligence (AI).

An overview of Cloud TPU 3.0, the third generation of the TPU (Tensor Processing Unit) for the machine learning (ML) library TensorFlow, was also announced. A TPU 3.0 pod is said to deliver up to 100 petaFLOPS of processing performance. Since last year's TPU 2.0 pod delivered 11.5 petaFLOPS (1.15 × 10^16 operations per second), that is roughly a ninefold improvement in computing capacity. Google is also considering practical applications of quantum computers, but the TPU is the backbone of its AI First strategy, supporting new functions and the continued evolution of Google Assistant, Google Photos, Google Lens, Gmail and more. Artificial intelligence (AI) has grown into an infrastructure indispensable to modern society, and I am looking forward to seeing how "Google AI will evolve with humanity" (^ ^).

Watch the full Google I/O Keynote (YouTube)
10 things announced at Google I/O 2018: AI is already too great #io18 (GIZMODO)
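The announced figures are easy to sanity-check; a one-line comparison using only the pod-level peak numbers quoted above:

```python
# Pod-level peak performance figures quoted in the announcements.
tpu2_pod_flops = 11.5e15   # TPU 2.0 pod: 11.5 petaFLOPS
tpu3_pod_flops = 100.0e15  # TPU 3.0 pod: up to 100 petaFLOPS

speedup = tpu3_pod_flops / tpu2_pod_flops
print(f"TPU 3.0 pod vs. TPU 2.0 pod: about {speedup:.1f}x")  # about 8.7x
```

So "about nine times" in the announcement is a slight round-up of roughly 8.7×.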

By Nobuyuki
Innovation

TensorFlow supports JavaScript and Raspberry Pi (Google)

On March 30, Google held the developer conference TensorFlow Dev Summit 2018 in Silicon Valley and announced that deep learning with the machine learning library TensorFlow is now available even on smartphones and inexpensive IoT devices. Two releases drew particular attention: the JavaScript version, TensorFlow.js, and the lightweight version for smartphones, TensorFlow Lite, which now supports the Raspberry Pi ("Raspi") and similar boards in addition to the conventional Android and iOS.

TensorFlow Lite uses the hardware accelerators for deep learning carried by mobile devices, and runs models after "quantizing" the neural network's parameters to 8-bit integers or the like. Deep learning inference combined with this quantization is said to be about three times faster than with the regular TensorFlow. Support has also been added for the embedded Linux distributions adopted by the Raspberry Pi and others. "Deep learning will be available even on small devices sold for 20 to 30 dollars, and its use will advance in drones and the like."

With TensorFlow.js, users can deploy machine learning models trained with the regular version of TensorFlow in a web browser and execute inference there. The demo video, in which you hunt for designated emoji in the real world using your smartphone camera, is great fun.

Google first announced TensorFlow in November 2015. In the more than two years since, not only the execution environment but also its capabilities as a development tool have been greatly enhanced, and it is steadily evolving into a machine learning "platform".

TensorFlow.js: http://js.tensorflow.org/ You can try out 4 different demos (^ ^)
Check out the demo: https://emojiscavengerhunt.withgoogle ...
Get the code: https://github.com/google/emoji-scave ...
Check out more AI Experiments: https://experiments.withgoogle.com/ai
Read more ...
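As a rough illustration of the quantization idea (not TensorFlow Lite's actual implementation, which adds many refinements), 8-bit quantization maps floating-point weights onto integers via a scale and zero point. A minimal affine-quantization sketch in plain Python:

```python
def quantize_int8(values):
    """Affine-quantize a list of floats to int8: a simplified sketch of
    the kind of scheme TensorFlow Lite applies to model parameters."""
    lo, hi = min(values), max(values)
    if hi == lo:                      # degenerate case: all values equal
        return [0] * len(values), 1.0, 0
    scale = (hi - lo) / 255.0         # one float step per int8 step
    zero_point = round(-128 - lo / scale)
    q = [max(-128, min(127, round(v / scale) + zero_point)) for v in values]
    return q, scale, zero_point

def dequantize(q, scale, zero_point):
    """Map int8 codes back to approximate floats."""
    return [(qi - zero_point) * scale for qi in q]

weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, zp = quantize_int8(weights)
restored = dequantize(q, scale, zp)
print(q)         # int8 codes, each in [-128, 127]
print(restored)  # close to the original weights
```

Each weight now fits in one byte instead of four, which is what enables fast integer arithmetic on mobile accelerators, at the cost of a small rounding error bounded by the scale.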

By Nobuyuki
Media

Google commits $300 million to a "News Initiative" supporting journalism

On March 20, Google announced the launch of the Google News Initiative (GNI), an initiative to "build the future of journalism". CEO Sundar Pichai said, "Google believes that the spread of knowledge enriches people's lives. Spreading knowledge is Google's mission, and it is also the mission of the press and journalists. In other words, the future we are aiming for is closely tied to theirs."

News Initiative - Building the Future of Journalism (Japanese version)

GNI is an effort to promote collaboration with the media industry to open up the future of journalism in the digital age. Google plans to invest 300 million dollars over the next three years, working on three fronts: strengthening high-quality journalism, developing business models for sustainable growth, and technological innovation for news media. A "Disinfo Lab" will be launched to counter disinformation around elections and breaking news. In collaboration with third-party institutions such as the Poynter Institute and Stanford University, Google is also developing MediaWise, a project to improve digital information literacy in the United States and help users judge the authenticity of information.

For sustainable business growth, Google will launch a service where, if you have a Google Account, you can subscribe to a news outlet with a single click, with no further troublesome login or payment processing at all. On the technology side, Google is offering a new AMP format and a proprietary VPN called Outline that gives journalists a secure Internet connection.

The Google News Initiative: Building a stronger future for news (blog.google)
Introducing Subscribe with Google (blog.google)
Elevating quality journalism on the open web (blog.google)

By Nobuyuki
Innovation

Trying out Google-promoted AMP on this blog

Sites that have introduced AMP (Accelerated Mobile Pages) are a hot topic for their display speed, and the number of sites and blogs writing about it is increasing. Google has also long been promoting mobile-first indexing, and its search algorithms seem to take mobile friendliness and AMP into account. It has also become easy to introduce AMP.

https://kokai.jp/?am (AMP version)

Though still trial and error, I have introduced AMP on this blog (kokai.jp). I would like to pursue a layout and design that takes advantage of the display speed (^ ^). The category and tag pages use thumbnails; since they are easy to scan and display quickly, I use the same layout for the PC browser version as well.

AMP Project (Twitter)
AMP Project (Official Site)
Documentation and tools have been enriched as well (^ ^), and the AMP Project publishes success stories from sites that have released AMP pages.

By Nobuyuki
Innovation

Google announces Accelerated Mobile Pages (AMP) for Gmail

On February 13, Google announced that it will bring AMP (Accelerated Mobile Pages), the open source framework it launched in 2015 to make the mobile Internet experience faster and more comfortable, to Gmail. The AMP project has drafted "AMP for Email", which extends AMP for e-mail, and Gmail is planned to support it in the latter half of 2018.

With the free scheduling service Doodle, you will be able to select your available dates for an event directly in the mail, without opening a web page (^ ^). Doodle, a free service for scheduling with friends, is convenient! Easy usage guide (Swiss Channel). It works as an attendance-sheet tool that lets members confirm event details and adjust the schedule simply by receiving a URL. With Pinterest, you can check favorite items, then select and save them right inside the e-mail. At Booking.com, you can check not only photos of a hotel but also the latest availability in the e-mail. All of this looks useful (^ ^).

Since "AMP for Email" lets a message update its content and information interactively inside the mail itself, new and interesting uses of AMP mail should keep expanding (^ ^). To use the developer preview of "AMP for Email" with Gmail, you can sign up at the links below.

Bringing the power of AMP to email
Sign up

By Nobuyuki
Innovation

"Cloud AutoML" for easy AI customization announced (Google)

On January 17, 2018, Google announced Cloud AutoML, which lets users easily customize the AI (artificial intelligence) services Google offers (via the Google Cloud Machine Learning Engine). The first release adds the new Cloud AutoML Vision to the Cloud Vision API for image recognition, allowing machine learning models to be customized easily. First, you register labeled training data (images with metadata) through a web-based UI. Cloud AutoML Vision then has the AI learn the new subjects using transfer learning. Because an already-trained image recognition model is adapted to the new subjects, Cloud AutoML Vision does not need big data for learning: registering just dozens of training examples is enough for new subjects to be recognized. With Cloud AutoML Vision, companies will be able to build image recognition AI specialized for their industry and tasks without programming. Fei-Fei Li, chief scientist of the Google Cloud AI division, said, "The biggest challenge for ordinary companies using AI is that they cannot secure AI talent and budget. The aim of Cloud AutoML is to let them enjoy AI even without experts."

Cloud AutoML: Making AI accessible to every Read more ...
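The transfer learning idea (reuse a pretrained feature extractor, then fit only a small classifier on a few labeled examples) can be sketched in plain Python. The "embeddings" and labels below are invented stand-ins for the feature vectors a pretrained vision model would produce, not anything from the actual Cloud AutoML service:

```python
# Toy nearest-centroid classifier over pretrained "embeddings": a minimal
# stand-in for the transfer-learning step Cloud AutoML Vision automates.
# In a real pipeline the vectors would come from a pretrained image model.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def classify(embedding, centroids):
    """Assign the label whose class centroid is closest (squared Euclidean)."""
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(centroids, key=lambda label: dist2(embedding, centroids[label]))

# A few labeled examples per new subject (cf. "dozens" in the article).
train = {
    "ramen": [[0.9, 0.1, 0.0], [0.8, 0.2, 0.1]],
    "gyoza": [[0.1, 0.9, 0.2], [0.2, 0.8, 0.1]],
}
centroids = {label: vecs for label, vecs in
             ((lbl, centroid(vs)) for lbl, vs in train.items())}

print(classify([0.85, 0.15, 0.05], centroids))  # lands near the "ramen" centroid
```

Because the heavy lifting (learning good features) was already done by the pretrained model, only this tiny final step needs the user's handful of examples, which is why "dozens" can be enough.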

By Nobuyuki
Innovation

Google's fun AIY Vision Kit and AIY Voice Kit

DIY means do-it-yourself, and fun-looking kits have been released from Google's AIY (do-it-yourself Artificial Intelligence) project: the Vision Kit (the product is an AI camera) and the Voice Kit (the product is a Google Assistant speaker). The enclosures are made of cardboard so that people around the world can try AI services in various devices as easily and cheaply as possible. You can feel both Google-style intellectual production and the AI First policy here (^ ^).

aiyprojects.withgoogle.com (Website)

These two products are kits corresponding to eyes, ears and a mouth connected to artificial intelligence (AI). The image-recognition Vision Kit is scheduled for release on December 31 and separately requires a Raspberry Pi Zero W, a Raspberry Pi Camera 2, a microSD card, a power adapter and so on. The price is $44.99 (about 5,000 yen). The Voice Kit, which corresponds to a smart speaker and is already on sale, is sold online at $24.99 (about 3,000 yen). These are assembly kits that let you enjoy learning about cutting-edge AI technology (^ ^). Beyond the functions the kits have out of the box, you can extend and customize them using the SDK (software development kit) and TensorFlow (the machine learning library provided by Google). With a Raspberry Pi, you can add state-of-the-art AI technology that much more easily. This is an important field of 21st-century digital skills, with infinite possibilities that put our creativity to the test.

Google AIY Vision Kit (Micro Center Web Store) Reservation inquiries
YouTube has an explanatory video about the Google AIY Voice Kit (1/3).

By Nobuyuki
Innovation

Google's new-idea AI camera and 40-language translation AI earphones draw attention

On October 5, at its "Made by Google 2017" event, Google announced products built on new ideas with AI (artificial intelligence) as the keyword: the new Pixel 2 / XL smartphones, which use AI to realize a portrait mode (sharp subject, blurred background) with a single-lens camera; the Pixelbook 2-in-1 notebook; Google Clips, a small camera with an AI cameraman built in that automatically captures the decisive moment; and Google Pixel Buds, wireless earphones offering simultaneous real-time translation across 40 languages. (See YouTube.)

Particularly noteworthy are the AI camera (Google Clips) and the 40-language translation AI earphones (Google Pixel Buds). These are fun new products embodying Google's "transition to an AI First world" (^ ^). As with Google's search engine and self-driving cars, products that understand the user's intent and frustrations and "capture every scene" will draw attention as a new-idea product line. Whether the AI mini camera Google Clips turns out to be a "master photographer" or a "stray photographer" is the part to watch (^ ^).

The Verge (YouTube) explains the AI mini camera Google Clips (6 min 42 s) and Google Pixel Buds (3 min 43 s) in an easy-to-understand way (continuous playback). It is a bit disappointing that the AI earphones are compatible only with the Pixel series ....

The best hardware, software and Read more ...

By Nobuyuki
Innovation

Google acquires AIMATTER for mobile AI real-time image processing

Google has acquired AIMATTER, a startup specializing in mobile AI real-time image processing. Based in Belarus, AIMATTER makes the selfie image-processing application FABBY, which uses AI to perform processing such as color adjustment, masking, background replacement and makeup design. In FABBY LOOK, hair color can be switched instantly. In the demonstration movie, FABBY is on the left and FABBY LOOK on the right. The AIMATTER site claims "the world's fastest mobile AI real-time image processing", said to be three times faster than the open source deep learning libraries Caffe and TensorFlow. Real-time image processing with machine learning (AI) in a mobile environment seems to require program optimization quite different from cloud-based AI image processing (^ ^).

We're excited to announce that the team is joining Google! (AIMATTER News)
AIMATTER (Website) Android / iOS applications available
Google acquires AIMatter, maker of the computer vision application Fabby, for advertising technology innovation (TechCrunch Japan)

By Nobuyuki
Media

Code Lab function added to AI robot toy Cozmo

Anki's Cozmo is an adorable AI robot toy that recognizes your face, interacts with you, plays games with you, and builds a relationship by getting to know you. Looking at Cozmo's richly expressive face, it looks a lot like Eve from the movie WALL-E (^ ^). On June 26, Anki announced an update to the Cozmo application that adds a new "Code Lab" feature for children. Code Lab is based on Scratch Blocks, developed by the MIT Media Lab and Google. With ease of understanding for children in mind, it starts by providing dozens of programming blocks in the "horizontal" style, which makes coding easy.

Scratch for developers (scratch.mit.edu)

Anki also plans a full-scale update this autumn, adding the "vertical" style, which allows more complex programming, to Code Lab, and providing a fully featured Python SDK in the Cozmo SDK released in 2016. Anki Read more ...

By Nobuyuki
Innovation

Google's AutoML automatically optimizes machine learning (AI)

Developing a machine learning (AI) program suited to a given purpose used to require many specialized software engineers to design and code an optimal algorithm. Google is advancing an approach to optimal machine learning design called AutoML (Auto Machine Learning). AutoML automatically generates machine learning programs, and there is already talk of it making specialized software engineers unnecessary (^ ^). Google is researching and testing the use of AutoML to search for optimal machine learning program designs for language learning and image recognition. The results so far: in image recognition, designs produced with AutoML are comparable to those of experts, and in language translation they completely surpass the experts. AutoML is used to develop machine learning programs that execute specific tasks, including applications such as Google Lens, unveiled at Google I/O. Furthermore, after repeating the process of building machine learning programs with AutoML thousands of times, new learning effects seem to emerge, such as recognition functions being reinforced and unexpected variables being compensated for. Asked about the timing of AutoML's commercialization, Google's Sundar Pichai said that within the next five years or so, people will be able to design machine learning programs tailored to their own purposes without knowing any coding or computer languages at all (^ ^).

Using Machine Learning to Explore Neural Network Architecture (Google Research Blog)
AI Creates AI!? Google Launches a New Approach "AutoML" (Robotire)
"Cloud AutoML", which can be easily customized, unveiled (Google)
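Google's actual AutoML uses reinforcement learning to search over neural network architectures, but the core loop (propose a candidate design, evaluate it, keep the best) can be sketched as a toy random search in plain Python. The search space and scoring function below are invented for illustration only:

```python
import random

# Toy stand-in for AutoML's search loop: propose candidate "architectures",
# score each one, keep the best. A real system trains and validates a neural
# network per candidate; here the score is an invented, deterministic proxy.

SEARCH_SPACE = {
    "layers":  [2, 4, 8, 16],
    "width":   [32, 64, 128, 256],
    "dropout": [0.0, 0.2, 0.5],
}

def sample_architecture(rng):
    """Pick one option per dimension of the search space."""
    return {k: rng.choice(v) for k, v in SEARCH_SPACE.items()}

def score(arch):
    """Invented proxy for validation accuracy: rewards capacity,
    penalizes heavy dropout. A real evaluator would train the model."""
    return (arch["layers"] * arch["width"]) ** 0.5 - 100 * arch["dropout"] ** 2

def random_search(trials=50, seed=0):
    """Evaluate `trials` random candidates and return the best one."""
    rng = random.Random(seed)
    return max((sample_architecture(rng) for _ in range(trials)), key=score)

best = random_search()
print(best, score(best))
```

Replacing random sampling with a learned controller that is rewarded for high-scoring candidates is, in essence, the step that turns this toy loop into the approach described in the Google Research Blog post linked above.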

By Nobuyuki
Innovation

Waymo ends development of Google's "koala car" Firefly

The "Google Car" that Google developed fully in-house from 2014, body and all, was nicknamed the "koala car" or "gumdrop" for the cute styling of its two-seater cabin with no steering wheel or pedals (^ ^). Waymo, now under Alphabet, calls it by the nickname Firefly, and has announced that it will end development of this fully self-driving car of advanced design. From now on, Waymo will focus on the mass-produced Chrysler Pacifica Hybrid minivan, capable of switching between manual and automatic driving, built on a new AI platform that inherits Firefly's self-driving system with greatly improved performance. The sight of the cute koala car driving around seems to have softened the high-tech image of the artificial intelligence (AI) self-driving car and established it as a friendly technology for children, people with disabilities and the elderly. It also, of course, contributed to accumulating the driving data needed for autonomous operation and to improving the hardware and software. Several Fireflies are to be displayed at California's Computer History Museum and the London Design Museum. It is a bit sad, but this design may make a comeback some day (^ ^).

From post-it note to prototype: The journey of our Firefly (Waymo / Medium)
Google's Waymo retires "Firefly", its self-driving car with no steering wheel or pedals (Gigazine)
The new "Google Car" is developed in-house, body and all

By Nobuyuki