
Why the US Is Losing Ground on the Next Generation of Powerful Supercomputers


 

“I feel the need — the need for speed.”

The tagline from the 1980s movie Top Gun could be the mantra of the high-performance computing world these days. The next milestone in the endless race to build faster and faster machines is standing up the first exascale supercomputer.

Exascale might sound like an alternative universe in a science fiction movie, and judging by all the hype, one could be forgiven for thinking that an exascale supercomputer might be capable of opening up wormholes in the multiverse (if you subscribe to that particular cosmological theory). In reality, exascale computing is at once more prosaic — a really, really fast computer — and packs the potential to change how we simulate, model and predict life, the universe and pretty much everything.

First, the basics: exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, more than ten times faster than the most powerful supercomputer in existence today. Systems capable of at least one exaFLOPS (a quintillion floating point operations per second) carry additional significance: it’s estimated that such a machine would roughly match the processing power required to simulate the human brain.
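To put that in context using the list’s own numbers (China’s TaihuLight, the current TOP500 leader at 93 petaflops, cited below), the arithmetic works out to roughly an order of magnitude:

```latex
1~\text{exaFLOPS} = 10^{18}~\text{FLOPS} = 1000~\text{petaFLOPS},
\qquad
\frac{1000~\text{petaFLOPS}}{93~\text{petaFLOPS}} \approx 10.8
```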

Of course, as with any race, there is a healthy amount of competition, which Singularity Hub has covered over the last few years. The supercomputer version of NFL Power Rankings is the TOP500 List, a compilation of the most super of the supercomputers. The 48th edition of the list was released last week at the International Conference for High Performance Computing, Networking, Storage and Analysis, more succinctly known as SC16, in Salt Lake City.

In terms of pure computing power, China and the United States are pretty much neck and neck. Both nations now claim 171 HPC systems apiece in the latest rankings, accounting for two-thirds of the list, according to TOP500.org. However, China holds the top two spots with its Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops.

Michael Feldman, managing editor of TOP500, wrote earlier this year about what he characterized as a four-way race to exascale supremacy between the United States, China, Japan and France. The United States, he wagers, is bringing up the rear of the pack, as most of the other nations project to produce an exascale machine by about 2020. He concedes that, with enough money and power, the race could be won today.

“But even with that, one would have to compromise quite a bit on computational efficiency, given the slowness of current interconnects relative to the large number of nodes that would be required for an exaflop of performance,” he writes. “Then there’s the inconvenient fact there are neither applications nor system software that are exascale-ready, relegating such a system to a gargantuan job-sharing cluster.”

Dimitri Kusnezov, chief scientist and senior advisor to the Secretary of the US Department of Energy, takes the long view when discussing exascale computing. What’s the use of all that speed, he argues, if you don’t know where you’re going?

“A factor of 10 or 100 in computing power does not give you a lot in terms of increasing the complexity of the problems you’re trying to solve,” he said during a phone interview with Singularity Hub.

“We’re entering a new world where the architecture, as we think of exascale, [is] not just faster and more of the same,” he explained. “We need things to not only do simulation, but we need [them] at the same time to reach deeply into the data and apply cognitive approaches — AI in some capacity — to distill from the data, together with analytical methods, what’s really in the data that can be integrated into the simulations to help with the class of problems we face.”

“There aren’t any architectures like that today, and there isn’t any functionality like that today,” he added.

In July 2015, the White House announced the National Strategic Computing Initiative, which established a coordinated federal effort in “high-performance computing research, development, and deployment.”

The DoE Office of Science and the DoE National Nuclear Security Administration are in charge of one cornerstone of that plan — the Exascale Computing Project (ECP) — with involvement from Argonne, Lawrence Berkeley, Oak Ridge, Los Alamos, Lawrence Livermore, and Sandia national labs.

Since September of this year, DoE has handed out nearly $90 million in awards as part of ECP.

More than half of the money will go toward what DoE calls four co-design centers. Co-design, it says, “requires an interdisciplinary engineering approach in which the developers of the software ecosystem, the hardware technology, and a new generation of computational science applications are collaboratively involved in a participatory design process.”

Another round of funds will support 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. The modeling and simulation applications that were funded include projects ranging from “deep-learning and simulation-enabled precision medicine for cancer” to “modeling of advanced particle accelerators.”

The timeline — Feldman offers 2023 for a US exascale system — is somewhat secondary to functionality from Kusnezov’s perspective.

“The timeline is defined by the class of problems that we’re trying to solve and the demands they will have on the architecture, and the recognition that those technologies don’t yet exist,” he explains. “The timeline is paced by the functionality we’d like to include and not by the traditional benchmarks like LINPACK, which are likely not the right measures of the kinds of things we’re going to be doing in the future.

“We are trying to merge high-end simulation with big data analytics in a way that is also cognitive, that you can learn while you simulate,” he adds. “We’re trying to change not just the architecture but the paradigm itself.”

Kusnezov says the US strategy is certainly only one of many possible paths toward an exascale machine.

“There isn’t a single kind of architecture that will solve everything we want, so there isn’t a single unique answer that we’re all pushing toward. Each of the countries is driven by its own demands in some ways,” he says.

To illustrate his point about a paradigm shift, Kusnezov talks at length about President Barack Obama’s announcement during his State of the Union address earlier this year that the nation would pursue a cancer moonshot program. Supercomputers will play a key role in the search for a cure, according to Kusnezov, and the work has already forced DoE to step back and reassess how it approaches rich, complex data sets and computer simulations, particularly as it applies to exascale computing.

“A lot of the problems are societal, and getting an answer to them is in everyone’s best interest,” he notes. “If we could buy all of this stuff off the shelf, we would do it, but we can’t. So we’re always looking for good ideas, we’re always looking for partners. We always welcome the competition in solving these things. It always gets people to innovate — and we like innovation.”

This post originally appeared on Singularity Hub.

November 28, 2016
How Root Wants to Bring Coding to Every Classroom

Root educational robot
Image: Root

 

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

 

The push to teach coding in U.S. schools has been growing: Thanks to initiatives like the White House’s CS for All program, computer science is now recognized as a core skill for today’s students. A new study by Gallup and Google revealed that 90 percent of parents want their child to learn CS, yet only 40 percent of K-12 school districts offer some kind of CS course. Teacher recruitment and training efforts are beginning to solve the problem at the high-school level, but in K-8 schools (where very few schools offer CS and many teachers are generalists) the challenges are different. Many teachers without much coding experience understandably feel anxious about integrating this new literacy into their classrooms.

Our team at Harvard University is hoping to change that with Root. Root is a new kind of robot that colors outside the lines of the educational robotics category by providing unique capabilities along with a programming interface that grows with its user, bringing coding to life for all ages. After nearly three years of development, Root and its companion app, Root Square, have emerged as a solution to ease teachers’ anxiety about adding coding to the lessons that they teach.

 


 

 

Hardware

Root is intentionally designed as one single piece, setting it apart from many other educational robotics platforms, because students can dive right into the programming and computational problem solving without having to grapple with parts assembly. In our pilot classrooms, teachers who were previously intimidated by complex and messy boxes of components approached Root with a sense of ease and playfulness. The robot is ready to go right out of the box, is easy to put away after class, and is much easier to share between students and classes, which makes it significantly more affordable for schools.

 

Root educational robot hardware overview. Image: Root.

 
Another way that Root stands out in classrooms is by taking advantage of the great robot arena already at the front of most classrooms: whiteboards. Root is a magnetic robot; although it can work perfectly well on a table, a floor, or a piece of paper or poster board, it can also drive vertically on a metal-backed, magnetic whiteboard. This was a significant technical hurdle for us to solve because of the need to compensate for drift due to gravity when driving on vertical surfaces. What may look like a small, simple package actually has sophisticated sensors and firmware inside: Root uses high-resolution encoders, a 3D accelerometer, and a 3D gyroscope to accurately interpret speed, orientation, and wheel position. This helps us correct the motor commands in real time for drift due to gravity (actually caused by stretching of the tires).
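Root’s firmware isn’t public, so the following is only a rough sketch of what such gravity compensation can look like; the gain constant, function names, and sensor inputs are assumptions for illustration, not the shipping algorithm:

```python
import math

# Illustrative sketch only: Root's real firmware is not public, so the
# structure, names, and gain below are assumptions, not the actual code.

GRAVITY_GAIN = 0.15  # proportional gain for gravity-drift correction (assumed)

def corrected_wheel_speeds(base_left, base_right, heading_rad, tilt_rad):
    """Adjust differential-drive wheel speeds while driving on a whiteboard.

    heading_rad: heading relative to 'straight up', fused from gyro/encoders.
    tilt_rad:    how far the surface leans back from vertical (accelerometer).
    """
    # Gravity's sideways pull is strongest when the board is fully vertical
    # (tilt = 0) and the robot is driving horizontally (heading = 90 degrees).
    drift = math.sin(heading_rad) * math.cos(tilt_rad)
    correction = GRAVITY_GAIN * drift
    # Speed up the downhill wheel and slow the uphill wheel to hold the line.
    return base_left + correction, base_right - correction
```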

As a result of the attention we gave to tuning Root’s self-correcting driving algorithm, Root can draw with high precision using its on-board marker. With a nod to the robotic “turtles” that came before it, Root has a gripper driven by an internal motor in its geometric center. The gripper holds a standard marker that can be programmed to lift and drop. And for those who also want help cleaning the board, Root can lift and drop its embedded eraser. In our pilot testing of Root we’ve found that the writing surface is as much a part of the programming experience as the code and the robot—in essence, it brings the inputs and outputs into a medium that is not only tangible, but familiar to anyone who has ever drawn a picture.


One of Root’s most important capabilities comes from the row of 32 color sensors on its underside. This is similar to a 1D camera, or a small color scanner. Color sensing is a great way to get kids to interact with Root, both through the commands they give it and through the environment they create for it on the whiteboard. Root’s color sensing capability isn’t just engaging; it’s also versatile enough to be used to solve the kinds of complex problems (SLAM, path planning, and more) that students might encounter in a college-level class.
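To see why a 32-element color bar is enough for classic problems like line following, here is a minimal sketch of one common approach (my illustration; the value ranges and the idea of reducing a scan to a single steering signal via a weighted centroid are assumptions about how one could use such a sensor, not Root’s documented API):

```python
def line_position(readings):
    """Estimate where a dark line sits under a 32-element reflectance bar.

    readings: 32 values in [0, 1] (0 = dark ink, 1 = bare whiteboard).
    Returns a position in [-1, 1] (-1 = far left, +1 = far right),
    or None if no line is visible. Illustrative only.
    """
    darkness = [1.0 - r for r in readings]
    total = sum(darkness)
    if total < 1e-6:
        return None  # nothing but bare board under the robot
    centroid = sum(i * d for i, d in enumerate(darkness)) / total
    return (centroid - 15.5) / 15.5  # map sensor index 0..31 onto [-1, 1]
```

A steering loop can then feed this value straight into a proportional controller.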

For connectivity and control, Root can talk to any Bluetooth LE device, and Root is also expandable: third-party boards and other accessories (like Raspberry Pi, Arduino, BBC Micro:Bit, cameras, sensors, etc.) can be connected through a USB-C connector on the robot’s back.

Software

It’s not just the physical robot that matters in robotics education — the programming interface can make all the difference when working with newcomers to coding or robotics. Most programming tools overwhelm novices; programming with Root is approachable because it is scaffolded: beginners use a graphical interface with a limited instruction set that lets them get right to programming without having to learn syntax and structure. Regardless of age, people’s faces light up when they tap Play and see Root move according to a simple program they wrote themselves in just a few minutes. Reaching a quick win encourages learners to keep going.

However, as their skills advance, learners need tools that are “just right” for their developmental level. For that reason, a big portion of our research effort focused on developing the software for programming Root. Root’s programming interface (“Root Square”) is multi-level, ending with a choice of textual languages including Python, JavaScript, and Swift. Here’s how it works:

 

Root educational robot programming levels. Image: Root.

 

 

Level 1 Programming

Root Square’s Level 1 interface is designed to be accessible to kids as young as 4, and for novices of any age who have never experienced coding before. There are a deliberately limited number of blocks, no words, and no numbers greater than twenty. This means that kids can start coding even before they learn how to read, and adults are not intimidated by an overly-complex interface. Some unique features that make this entry-level interface easy to program with are:

  • The user’s program can be modified even while it’s running. Instructions (blocks) can be added, deleted, or modified even inside a running loop; the program simply keeps running with the new modifications. This makes Root Square ideal for working with young kids, who are really playing with the code while it runs.
  • It’s been optimized for touch screens. While graphical programming languages are not new, many follow a paradigm that has not changed in over a decade. Root Square’s user interface intentionally breaks some of these norms by optimizing for touch-screen interfaces, which we think many children will be most comfortable with.
  • Powerful programs can be written with surprisingly few blocks. Root Square Level 1 is completely event-based, so with very few blocks it’s possible to solve complex problems that would require far more syntactic complexity in other environments. However, brevity does not mean that overpowered blocks hide important details: the student must still create each rule and program each response for the robot.

 

Level 2 and Level 3 Programming

For more advanced users, and for beginners who are ready to go beyond Level 1’s limits, Root Square provides a powerful instruction set in its Level 2 graphical programming language. Level 2 introduces variables, sensor values, units, flow control statements, arithmetic operations, recursion, and parallelism. Both Level 1 and 2 are accessed from the same app, and previously written Level 1 code can be automatically converted to Level 2 to ease the transition.

Programs created with Level 1 and Level 2 can also be converted into Python, JavaScript, or Swift code (Level 3). Root offers a rich and open API and a development kit (SDK) that opens the door to all kinds of advanced applications and interactions with other devices.
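The article doesn’t show what converted code looks like, so the following is a purely hypothetical sketch: the root_sdk module and every call in it (connect, on_color, set_marker, rotate, drive, run) are invented for illustration, and the real API may differ entirely. It is meant only to convey the event-driven, one-handler-per-rule shape that Level 1 programs would map onto in Python:

```python
# Hypothetical sketch only: 'root_sdk' and all of its calls are invented
# names for illustration; the real Root API may differ entirely.
from root_sdk import Robot

robot = Robot.connect()  # pair with Root over Bluetooth LE

@robot.on_color(sensor="any", color="red")   # rule: red ink detected below
def lift_and_turn(event):
    robot.set_marker(up=True)                # lift the marker,
    robot.rotate(degrees=90)                 # then turn away from the line

@robot.on_bumper_pressed                     # rule: bumped into something
def back_off(event):
    robot.drive(distance_mm=-50)

robot.run()  # event loop: handlers fire as sensor events arrive
```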

 

Root educational robot
Image: Root

 

 

How to Bring Root to Your Classroom and Home

Working with Root can lead students to the type of “wow moment” that has become more elusive as daily life has been increasingly saturated with technology. The experience of physically interacting with Root, dynamically creating and changing its environment with markers on a whiteboard, and programming it in an understandable language is likely to be new and exciting for most students.

Perhaps most important, we hope that enabling early experiences with robotics and programming through Root will help to close the diversity gap in technology fields. Root’s appeal to all ages, genders, and backgrounds, plus the ease with which teachers can bring it into the classroom, make it a strong candidate to help more kids experience these foundational learning moments.

Root is the result of more than three years of research and development. Now that it has been prototyped and pilot-tested extensively, it’s ready for production on a large scale. We’re excited to take the project out of Harvard’s research labs and into classrooms and homes everywhere. You can support the effort (and order your own Root!) on Kickstarter today.

Julián da Silva Gillig is the lead software developer for Root Square, the multi-level programming interface for the Root robot. He was founder and lead engineer of Multiplo and miniBloq, open source robotics and physical computing projects. Julián is currently a research associate at Harvard University’s Wyss Institute for Biologically Inspired Engineering, in Cambridge, Mass.

Shaileen Crawford Pokress is the head of education for the Root project. She is co-author of the recently released K-12 Computer Science Framework and former Education Director for MIT App Inventor. Shaileen is currently a visiting scholar at Harvard University’s Wyss Institute for Biologically Inspired Engineering, in Cambridge, Mass.

Raphael Cherney is the lead engineer and co-founder of the Root project. Raphael is currently a research assistant at Harvard University’s Wyss Institute for Biologically Inspired Engineering, in Cambridge, Mass. [IEEE]

 

 

November 24, 2016
Miniature WiFi device developed by Stanford engineers supplies missing link for the Internet of Things

Until now, there’s been no way to control all sorts of devices, wirelessly, via the internet because there’s been no two-way radio smart and small enough to make this possible. A new technology called HitchHike could change that.

Futurists and technology prognosticators have been known to make starry-eyed projections about the so-called Internet of Things. It’s a vision of a world where everything – from implantable biosensors and wearable devices to smart cars and smart home sensors – confers and collaborates wirelessly to make the world a better and more interconnected place.

 

A new technology developed at Stanford that hitchhikes on radio signals could provide a way to control the tiny devices that will comprise the Internet of Things. (Image credit: iStock/a-image)

To date, this remains largely a dream. To become reality, the Internet of Things will require a new class of tiny, energy-efficient WiFi radios to pass commands to and from the network to a myriad of devices.

That is the idea behind HitchHike, a tiny, ultra-low-energy wireless radio from a Stanford research team led by Sachin Katti, an associate professor of electrical engineering and of computer science, and Pengyu Zhang, a postdoctoral researcher in Katti’s lab.

The researchers are describing and demonstrating HitchHike in a paper being presented at the Association for Computing Machinery’s SenSys Conference on Nov. 16.

 


 

“HitchHike is the first self-sufficient WiFi system that enables data transmission using just micro-watts of energy – almost zero,” Zhang said. “Better yet, it can be used as-is with existing WiFi without modification or additional equipment. You can use it right now with a cell phone and your off-the-shelf WiFi router.”

HitchHike is so low-power that a small battery could drive it for a decade or more, the researchers say. It even has the potential to harvest energy from existing radio waves and use that electromagnetic energy, plucked from its surroundings, to power itself, perhaps indefinitely.

“HitchHike could lead to widespread adoption in the Internet of Things,” Katti said. “Sensors could be deployed anywhere we can put a coin battery that has existing WiFi. The technology could potentially even operate without batteries. That would be a big development in this field.”

The researchers say HitchHike could be available to be incorporated into wireless devices in the next three to five years.

Clever design

The HitchHike prototype is a processor and radio in one. It is about the size of a postage stamp, but the engineers believe they can make it smaller – perhaps even smaller than a grain of rice for use in implanted bio-devices like a wireless heart rate sensor (see video).

 

 

With a range of up to 50 meters and able to transmit up to 300 kilobits per second – several times faster than the fastest dial-up modem of yore – HitchHike could be a major steppingstone along the journey to the Internet of Things.

The system was named HitchHike for its clever design that hitchhikes on incoming radio waves from a smartphone or a laptop. It translates those incoming signals to its own message and retransmits its own data on a different WiFi channel.

The big Achilles’ heel of any efforts in this direction to date has been energy usage. HitchHike draws about one ten-thousandth the current of a conventional WiFi radio. It can operate for years on a simple coin battery, but the researchers say future versions might use tiny solar panels or even harvest the energy of incoming WiFi radio waves.

HitchHike is a variation on what is known in engineering circles as a backscatter radio. It is actually more a reflector than a radio. HitchHike merely bounces WiFi signals back into the atmosphere – a signal that is known as backscatter.

Translating and transferring

To work as a true radio, however, HitchHike must first make the all-important leap from simply reflecting an existing message to actually producing a meaningful message of its own. To do that, HitchHike’s designers developed what they call “code word translation.”

On the processor front, HitchHike is a simple translation device. In the binary digital world, a WiFi signal is little more than an endless stream of 1s and 0s, which standard WiFi transmits through a set of predefined code words. HitchHike cleverly translates the incoming code words into its own data. If, for instance, the incoming code word indicates a zero and HitchHike wants it to remain a zero, it passes that code word unaltered. If, however, HitchHike wants to change that zero to a one, or vice versa, it translates it to the alternate code word.
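As a toy model of that translation step (my own sketch: real WiFi code words are much longer chip sequences, shortened here to 4-bit stand-ins), the tag’s per-symbol decision looks like this:

```python
# Toy model of HitchHike's code word translation; real WiFi code words are
# longer chip sequences, abbreviated here to 4-bit stand-ins.

CODEWORDS = {0: 0b0101, 1: 0b1010}           # stand-in code words for 0 and 1
BIT_FOR = {v: k for k, v in CODEWORDS.items()}

def backscatter(incoming_codewords, tag_bits):
    """Reflect each incoming code word either unaltered or swapped for the
    alternate code word, so the reflection carries the tag's own bits."""
    reflected = []
    for received, wanted in zip(incoming_codewords, tag_bits):
        if BIT_FOR[received] == wanted:
            reflected.append(received)            # pass through unaltered
        else:
            reflected.append(CODEWORDS[wanted])   # translate to the other word
    return reflected
```

A receiver that also hears the original transmission can recover the tag’s bits by comparing the two streams, and, as described next, the reflection is shifted to another channel so the two don’t collide.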

The next piece of the puzzle requires the avoidance of radio interference between the original signal and the new data stream coming from HitchHike—both of which are transmitted at the same time and on the same channel if unmodified. HitchHike instead shifts its new signal to another WiFi channel. And it does it all using almost no power.

“HitchHike opens the doors for widespread deployment of low-power WiFi communication using widely available WiFi infrastructure and, for the first time, truly empower the Internet of Things,” Zhang said.

[Stanford University]

November 24, 2016
Internet of Things to Drive the Fourth Industrial Revolution: Industrie 4.0 — Companies Endorse New Interoperable IIoT Standard
The Industrial Internet of Things (IIoT) will be the primary driver of the fourth industrial revolution, commonly referred to as Industrie 4.0, and Cisco and other companies are at the forefront.

“Industrie 4.0 is not digitization or digitalization of mechanical industry, because this is already there,” said Prof. Dr.-Ing. Peter Gutzmer, Deputy CEO and CTO of Schaeffler AG. “Industrie 4.0 is getting the data real-time information structure in this supply and manufacturing chain.”

 

 

“If we use IoT data in a different way we can be more flexible so we can adapt faster and make decisions if something unforeseen happens, even in the cloud and even with cognitive systems,” says Gutzmer.

From the 2013 Siemens video below:

“In intelligent factories, machines and products will communicate with each other, cooperatively driving production. Raw materials and machines are interconnected within an internet of things. The objective: highly flexible, individualized and resource-friendly mass production. That is the vision for the fourth industrial revolution.”

 

 

“The excitement surrounding the fourth industrial revolution or Industrie 4.0 is largely due to the limitless possibilities that come with connecting everything, everywhere, with everyone,” said Martin Dube, Global Manufacturing Leader in the Digital Transformation Group at Cisco, in a blog post today. “The opportunities to improve processes, reduce downtime and increase efficiency through the Industrial Internet of Things (IIoT) is easy to see in manufacturing, an industry heavily reliant on automation and control, core examples of operational technology.”

Connectivity between machines is vital for the success of Industrie 4.0, but it is far from simple. “The manufacturing environment is full of connectivity and communication protocols that are not interconnected and often not interoperable,” notes Dube. “That’s why convergence and interoperability are critical if this revolution is to live up to (huge) expectations.”

Dube explains that convergence is the concept of connecting machines so that communication is possible, while interoperability is the use of a standard technology that enables that communication.

 

Cisco Announces Interoperable IIoT Standard

Cisco announced today that a number of key tech companies have agreed on an interoperable IIoT standard. The group, which includes ABB, Bosch Rexroth, B&R, Cisco, General Electric, KUKA, National Instruments, Parker Hannifin, Schneider Electric, SEW-EURODRIVE and TTTech, is aiming for an open, unified, standards-based and interoperable IIoT solution for communication between industrial controllers and to the cloud, according to Cisco:

 

ABB, Bosch Rexroth, B&R, CISCO, General Electric, KUKA, National Instruments (NI), Parker Hannifin, Schneider Electric, SEW-EURODRIVE and TTTech are jointly promoting OPC UA over Time Sensitive Networking (TSN) as the unified communication solution between industrial controllers and to the cloud.

Based on open standards, this solution enables industry to use devices from different vendors that are fully interoperable. The participating companies intend to support OPC UA TSN in their future generations of products.

 

 


[WebProNews]

November 24, 2016
AirSelfie. The only portable flying camera integrated in your phone cover


 

AirSelfie is a revolutionary pocket-size flying camera that connects with your smartphone to let you take boundless HD photos of you, your friends, and your life from the sky. Its turbo-fan propellers can carry it up to 20 meters into the air, letting you capture wide, truly original photos and videos on your device. The anti-vibration shock absorber and 5 MP camera ensure the highest quality images. And its ultra-light 52 g body, which slips into a special phone cover that doubles as a charger, means you can keep AirSelfie on you at all times. Say hello to the future of selfies.

Its aeronautical-grade anodized aluminium case, four turbo-fan propellers powered by brushless motors, 5-megapixel HD video camera, and weight of roughly 52 g make it an easy drone to carry. The case also houses a battery that recharges the drone in 30 minutes.

 


 

Check out their Kickstarter campaign: they set out to raise €45,000 for production, and at the time of writing they’ve already raised €145,090.

 

Source: AirSelfie

November 24, 2016
The marketing genius behind Snap’s new Spectacles


Spectacles.com

 

We all want a pair.

 

One week ago, I had virtually zero interest in owning a pair of Snap Spectacles, the company’s new video-recording sunglasses.

On Saturday, I contemplated the six-hour drive from San Francisco to LA to buy a pair out of a vending machine. What a difference a week can make.

The rollout of Spectacles has been, well, a spectacle. Everywhere Snap drops a Snapbot, one of the big yellow vending machines that serve as temporary storefronts for the glasses, crowds line up, dozens of people deep, and spend hours waiting in line, posting and tweeting about how excited they are to get their hands on some Spectacles.

It’s been a touch of marketing genius.

Snap isn’t going to make much money selling smart glasses one vending machine-full at a time. But that’s not the point. Instead, what the company has done is create the kind of buzz and excitement around a product — and thus the Snap brand, which is prepping for an IPO — that we haven’t seen in a long, long time.

How, exactly, did that happen?

  • Snapchat did a great job of setting expectations. From the get-go, Snap positioned its new glasses as a “toy,” which immediately differentiated Spectacles from Google Glass, the search giant’s failed smart glasses that made everyone question the future of wearables altogether. Spectacles are cool, dude. They’re for filming your friends partying at the football game, not for answering email. Who cares if there isn’t a killer use case? Toys don’t need one. Even Robert Scoble wearing Spectacles in the shower won’t kill Snap’s momentum. (Probably …)
  • Snap has done a great job creating perceived demand. After Snap drops a vending machine somewhere, it’s followed shortly by photos and videos of long lines, and eventually a bunch of sad customers once the machine sells out. But that has made Spectacles the hottest product in town — the $130 glasses are selling for thousands on eBay. Snap is likely selling just dozens of glasses per day, but it feels like it’s cleaning out the warehouse.
  • Snap’s rollout strategy is generating a lot of free press, both from users in line (see above) and more traditional media outlets. Instead of just one press cycle — the first day Spectacles went on sale — the press has covered each and every new Snapbot location. Users are eager to buy the glasses, and the press is happy to point them in the right direction in exchange for a few clicks.

The reality is that Spectacles aren’t going to be big business for Snap, at least not anytime soon. The company wouldn’t sell them out of vending machines if it was trying to make money here.

But Spectacles are giving Snap a new wave of momentum just before it plans to IPO — and the idea that it could sell a lot of glasses has been planted in everyone’s mind. And that feeling isn’t ephemeral.

[Original Article: Recode.net]

November 23, 2016
Understanding the four types of AI, from reactive robots to self-aware beings


Robots will need to teach themselves. Robot reading via shutterstock.com  

 
Arend Hintze, Michigan State University

The common, and recurring, view of the latest breakthroughs in artificial intelligence research is that sentient and intelligent machines are just on the horizon. Machines understand verbal commands, distinguish pictures, drive cars and play games better than we do. How much longer can it be before they walk among us?

The new White House report on artificial intelligence takes an appropriately skeptical view of that dream. It says the next 20 years likely won’t see machines “exhibit broadly-applicable intelligence comparable to or exceeding that of humans,” though it does go on to say that in the coming years, “machines will reach and exceed human performance on more and more tasks.” But its assumptions about how those capabilities will develop missed some important points.

As an AI researcher, I’ll admit it was nice to have my own field highlighted at the highest level of American government, but the report focused almost exclusively on what I call “the boring kind of AI.” It dismissed in half a sentence my branch of AI research, into how evolution can help develop ever-improving AI systems, and how computational models can help us understand how our human intelligence evolved.

The report focuses on what might be called mainstream AI tools: machine learning and deep learning. These are the sorts of technologies that have been able to play “Jeopardy!” well, and beat human Go masters at the most complicated game ever invented. These current intelligent systems are able to handle huge amounts of data and make complex calculations very quickly. But they lack an element that will be key to building the sentient machines we picture having in the future.

We need to do more than teach machines to learn. We need to overcome the boundaries that define the four different types of artificial intelligence, the barriers that separate machines from us – and us from them.

 

Type I AI: Reactive machines

The most basic types of AI systems are purely reactive, and have the ability neither to form memories nor to use past experiences to inform current decisions. Deep Blue, IBM’s chess-playing supercomputer, which beat international grandmaster Garry Kasparov in the late 1990s, is the perfect example of this type of machine.

Deep Blue can identify the pieces on a chess board and know how each moves. It can make predictions about what moves might be next for it and its opponent. And it can choose the best move from among the possibilities.

But it doesn’t have any concept of the past, nor any memory of what has happened before. Apart from a rarely used chess-specific rule against repeating the same position three times, Deep Blue ignores everything before the present moment. All it does is look at the pieces on the chess board as it stands right now, and choose from possible next moves.

This type of intelligence involves the computer perceiving the world directly and acting on what it sees. It doesn’t rely on an internal concept of the world. In a seminal paper, AI researcher Rodney Brooks argued that we should only build machines like this. His main reason was that people are not very good at programming accurate simulated worlds for computers to use, what is called in AI scholarship a “representation” of the world.

The current intelligent machines we marvel at either have no such concept of the world, or have a very limited and specialized one for their particular duties. The innovation in Deep Blue’s design was not to broaden the range of possible moves the computer considered. Rather, the developers found a way to narrow its view, to stop pursuing some potential future moves, based on how it rated their outcome. Without this ability, Deep Blue would have needed to be an even more powerful computer to actually beat Kasparov.
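Deep Blue’s actual search was far more sophisticated (alpha-beta pruning with hand-tuned extensions), but a minimal sketch of the narrowing idea, with generic placeholder interfaces for the game rules and a static rating function, might look like this:

```python
# Minimal sketch of 'narrowing the view' in game-tree search. Deep Blue's
# real algorithm was far more sophisticated; rate/legal_moves/apply_move are
# generic placeholders, with rate() scoring from the maximizing player's side.

PRUNE_THRESHOLD = -2.0  # abandon branches already rated clearly lost (assumed)

def minimax(pos, depth, maximizing, rate, legal_moves, apply_move):
    if depth == 0:
        return rate(pos)  # static evaluation at the search horizon
    children = [apply_move(pos, m) for m in legal_moves(pos)]
    if not children:
        return rate(pos)  # no legal moves: fall back to the static score
    if maximizing:
        # The narrowing step: stop pursuing moves whose rated outcome is
        # already poor, unless that would prune every branch.
        kept = [c for c in children if rate(c) >= PRUNE_THRESHOLD] or children
        return max(minimax(c, depth - 1, False, rate, legal_moves, apply_move)
                   for c in kept)
    return min(minimax(c, depth - 1, True, rate, legal_moves, apply_move)
               for c in children)
```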

Similarly, Google’s AlphaGo, which has beaten top human Go experts, can’t evaluate all potential future moves either. Its analysis method is more sophisticated than Deep Blue’s, using a neural network to evaluate game developments.

These methods do improve the ability of AI systems to play specific games better, but they can’t be easily changed or applied to other situations. These computerized imaginations have no concept of the wider world – meaning they can’t function beyond the specific tasks they’re assigned and are easily fooled.

They can’t interactively participate in the world, the way we imagine AI systems one day might. Instead, these machines will behave exactly the same way every time they encounter the same situation. This can be very good for ensuring an AI system is trustworthy: You want your autonomous car to be a reliable driver. But it’s bad if we want machines to truly engage with, and respond to, the world. These simplest AI systems won’t ever be bored, or interested, or sad.

 

Type II AI: Limited memory

This Type II class contains machines that can look into the past. Self-driving cars do some of this already. For example, they observe other cars’ speed and direction. That can’t be done in just one moment; it requires identifying specific objects and monitoring them over time.

These observations are added to the self-driving cars’ preprogrammed representations of the world, which also include lane markings, traffic lights and other important elements, like curves in the road. They’re included when the car decides when to change lanes, to avoid cutting off another driver or being hit by a nearby car.

But these simple pieces of information about the past are only transient. They aren’t saved as part of the car’s library of experience it can learn from, the way human drivers compile experience over years behind the wheel.
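To make the divide between the first two types concrete, here is a small sketch of my own (not from the article): a reactive policy is a pure function of the current observation, while a limited-memory agent also consults a short, transient window of recent observations, the way a self-driving car estimates another car’s speed:

```python
from collections import deque

def reactive_policy(obs):
    """Type I: the decision depends only on what is sensed right now."""
    return "brake" if obs["gap_m"] < 5.0 else "cruise"

class LimitedMemoryAgent:
    """Type II: keeps a short, transient window of the recent past, but
    builds no lasting library of experience to learn from."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # old observations simply fall out

    def act(self, obs):
        self.history.append(obs)
        if len(self.history) >= 2:
            # Estimate how fast the gap to the car ahead is closing.
            closing = self.history[-2]["gap_m"] - self.history[-1]["gap_m"]
            if closing > 0.5:
                return "brake"  # the car ahead is slowing; react early
        return reactive_policy(obs)
```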

So how can we build AI systems that build full representations, remember their experiences and learn how to handle new situations? Brooks was right in that it is very difficult to do this. My own research into methods inspired by Darwinian evolution can start to make up for human shortcomings by letting the machines build their own representations.

 

Type III AI: Theory of mind

We might stop here, and call this point the important divide between the machines we have and the machines we will build in the future. However, it is better to be more specific and discuss the types of representations machines need to form, and what they need to be about.

Machines in the next, more advanced, class not only form representations about the world, but also about other agents or entities in the world. In psychology, this is called “theory of mind” – the understanding that people, creatures and objects in the world can have thoughts and emotions that affect their own behavior.

This is crucial to how we humans formed societies, because such understanding allowed us to have social interactions. Without understanding each other’s motives and intentions, and without taking into account what somebody else knows either about me or the environment, working together is at best difficult, at worst impossible.

If AI systems are indeed ever to walk among us, they’ll have to be able to understand that each of us has thoughts and feelings and expectations for how we’ll be treated. And they’ll have to adjust their behavior accordingly.

 

Type IV AI: Self-awareness

The final step of AI development is to build systems that can form representations about themselves. Ultimately, we AI researchers will have to not only understand consciousness, but build machines that have it.

This is, in a sense, an extension of the “theory of mind” possessed by Type III artificial intelligences. Consciousness is also called “self-awareness” for a reason. (“I want that item” is a very different statement from “I know I want that item.”) Conscious beings are aware of themselves, know about their internal states, and are able to predict feelings of others. We assume someone honking behind us in traffic is angry or impatient, because that’s how we feel when we honk at others. Without a theory of mind, we could not make those sorts of inferences.

While we are probably far from creating machines that are self-aware, we should focus our efforts toward understanding memory, learning and the ability to base decisions on past experiences. This is an important step to understand human intelligence on its own. And it is crucial if we want to design or evolve machines that are more than exceptional at classifying what they see in front of them.

The Conversation

Arend Hintze, Assistant Professor of Integrative Biology & Computer Science and Engineering, Michigan State University

This article was originally published on The Conversation. Read the original article.

November 17, 2016
WorldViz is Creating a VR Platform for Enterprise Collaboration – Road to VR


 

WorldViz, provider of virtual reality solutions for the enterprise and public sectors, is launching a new VR communication platform for businesses. Based on their popular Vizard software, it enables complex visual ideas to be assessed remotely, through VR presentations and virtual meetings.

Virtual Reality is considered by many to be the ‘final compute platform’, and a key promise of our VR-enriched future is communication. The ability to socially engage and interact within a shared virtual environment, particularly when the participants are spread across the world, is a complex hardware and software challenge, and WorldViz is on the forefront of this area of development.

Today, WorldViz revealed their new VR communication platform, codenamed ‘Skofield’, aimed at businesses looking for new, immersive ways to collaborate on complex projects and ideas. Currently on show at the Autodesk University event this week, it is pitched as the VR equivalent of GoToMeeting, the popular video conferencing and desktop-sharing solution used by businesses across the world. Video conferencing is an invaluable tool, but until now there has been no substitute for being physically present with your team, particularly when assessing complex objects, environments, schematics, and the like. Skofield uses VR’s inherent ability to provide user ‘presence’ within a virtual environment to bring people together in a natural way, dramatically reducing the need to physically travel in order to assess visual content. If you recall last month’s predictions of the next five years of VR by Oculus Chief Scientist Michael Abrash, you’ll note that his idea of the ideal VR workspace sounds quite similar.

Built on WorldViz’s device-agnostic rapid-prototyping development software, Vizard, which includes a physics engine that supports rigid body dynamics, vehicle and robot simulation, Skofield promises to be a powerful creation and presentation tool. It incorporates a what-you-see-is-what-you-get editor within its ‘Presentation Designer’ software, allowing the creator to quickly drag and drop elements into a VR presentation, setting proximity triggers, defining which objects have interactivity, inserting PDFs or PPTs within the scene to work as training manuals or fact sheets.

Once created, the presenter can then invite attendees of the virtual meeting to join a session immediately or at a later date via email or text. Moving about in the space, using a virtual laser pointer, zooming in on objects, annotating and measuring them will all be possible through the included tools. Skofield also provides telephony for voice as well as gaze tracking, and can record meetings for later reference. Such a platform could be revolutionary for complex industries such as aerospace and construction, particularly if hazardous environments are involved.

Thanks to Vizard’s built-in ‘VizConnect’ feature, Skofield will support many VR hardware systems, including the Oculus Rift and HTC Vive, 3D displays, CAVE projection systems, input devices, and all the mobile devices WorldViz already supports. Other headsets will be added over time depending on demand.

“Accurately conveying visual ideas to remote decision makers is still a huge challenge for companies,” said WorldViz CEO and co-founder, Andrew Beall. “We see it all the time – modern communication technologies such as telephony, video conference calls, and PowerPoint sharing simply can’t bring people together in a collaborative setting or enable decision makers to experience complex concepts, designs, and spaces first hand. Companies are spending a staggering $1.25 trillion globally on business travel to circumvent this limitation. We believe Skofield is the answer to that challenge.”

Pricing has yet to be announced, but beta testing starts today and any interested company can sign up to give Skofield a try.

[RoadtoVR]

November 17, 2016 / by / in , , , , , , , , , , ,
EyeSim – Medical Training for Ophthalmology

 

THE CHALLENGE

 

Medical students studying ophthalmology view the complex structures and functions of the human eye through two-dimensional teaching aids and traditional teaching methods. Deliberate practice is necessary for mastery learning. Currently this takes place through physical simulators purpose-built for specific procedures, but how do students master the basic concepts on which these simulators are based? For the complex concepts of ophthalmology to be mastered, there needs to be innovation in the ways students learn.

 

  • 2D Representations of 3D Problems
    Complex subject matter, such as Visual Pathways, is poorly represented in standard teaching material and the time to master these subjects could be shortened.
  • Not Enough Early “Hands On” Practice
    Students currently practice on real patients or dissect a cadaver. Both of these are suboptimal, as a cadaver does not function like a live subject and practicing on real patients can potentially compromise the safety of the patient. This limits the amount of practice students receive while in the classroom.
  • Limited Amount of Dysfunctions That Can Be Simulated
    Currently, instructors are limited in how dysfunctions and diseases are demonstrated in classroom settings. These examples are often presented as case studies with limited hands on exploration.

 

THE SOLUTION

EyeSim, developed together with Dr. Anuradha Khanna from A Nu Reality, is a Virtual Reality ophthalmic training simulator application designed for educators to use in the classroom for learners to achieve mastery learning through deliberate practice. Currently available modules include ocular anatomy, pupil simulator, ocular motility simulator, and a visual pathway simulator.

 

Virtual Reality Improves Learning

Complex concepts can be accurately modeled to facilitate learning and lead to quicker understanding than older two-dimensional teaching aids. Complex subjects, like Visual Pathways, can be represented in a much more realistic and accurate fashion. This enables students to quickly understand what’s going on rather than requiring them to reconstruct a comparable model in their head from 2D representations.

 

Unlimited “Hands On” Practice

By bringing Virtual Reality to ophthalmology education, students can get hands on experience and gain valuable practice all without touching a patient or a cadaver. Structures and functions are realistically replicated within a virtual environment resulting in the ability to practice procedures and understand anatomical functions without repeated dissections or patient interactions.

 

Combine Dysfunctions for Improved Practice

Through Virtual Reality, instructors can select from any number of dysfunctions and have their students perform a diagnosis. More important, however, an instructor can combine several dysfunctions and create rare patient cases for their students to practice with.

 


 

THE RESULTS

 


Virtual Reality presents the perfect platform for advanced medical education. With VR, students can get hands on practice to help them master complex and difficult tasks. EyeSim is available on Mobile Platforms, Desktop, Ibench Mobile, and Icatcher.

“During my twenty years of experience in academic ophthalmology, I have seen teaching tools evolve from transparencies to digital slides to PowerPoint, and I see Virtual Reality as the next step. Research in many specialties has shown that simulation-based medical education combined with deliberate practice enables mastery learning. EyeSim will provide my colleagues with a platform on which to build a simulation-based ophthalmic educational curriculum, and an opportunity for learners to achieve mastery learning through deliberate practice in a safe environment.”

Anuradha Khanna MD, Associate Professor of Ophthalmology, Loyola University
November 17, 2016
Artificial-intelligence system surfs web to improve its performance

Information extraction — or automatically classifying data items stored as plain text — is a major topic of artificial-intelligence research. Image: MIT News.

 
“Information extraction” system helps turn plain text into data for statistical analysis.

 

Of the vast wealth of information unlocked by the Internet, most is plain text. The data necessary to answer myriad questions — about, say, the correlations between the industrial use of certain chemicals and incidents of disease, or between patterns of news coverage and voter-poll results — may all be online. But extracting it from plain text and organizing it for quantitative analysis may be prohibitively time consuming.

Information extraction — or automatically classifying data items stored as plain text — is thus a major topic of artificial-intelligence research. Last week, at the Association for Computational Linguistics’ Conference on Empirical Methods in Natural Language Processing, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory won a best-paper award for a new approach to information extraction that turns conventional machine learning on its head.

Read more >> MIT News

November 17, 2016 / by / in , , , , , , , , , ,
