
Hotels vs. Airbnb: Positive Rivalry Drives Innovation

 

What could a global hotel executive have to say about Airbnb? The rule is typically: ‘If you don’t have anything nice to say, don’t say anything at all.’ Since peer-to-peer accommodation start-up Airbnb launched in 2008, the mood has been tense between traditional lodging providers and the DIY movement that Airbnb represents.

However, Kimo Kippen, the former Chief Learning Officer at Hilton Worldwide, has a view on Airbnb defined by one word: exciting. Airbnb may not own hotel rooms, valuable property, or even a long-standing reputation, but what it does have is an ingenious platform that grants far more autonomy and choice to its users. Kippen sees this competition as inspiration and is pushing Hilton to make greater efforts to innovate and keep up, for example through an integrated app that allows digital check-in, greater room control, and digital room keys.

There are countless studies demonstrating that competition increases motivation – as far back as 1898, psychologist Norman Triplett found that the presence of another cyclist made his study participants pedal faster.

The rivalry between companies like Apple and Microsoft has led to ever-advancing technology for the public, the result of two competitors spring-boarding off one another and pushing each other to innovate.

The hotel business is booming, with the industry showing all-time-high performance and growth projections in 2015, according to competitive benchmarking firm STR. Supply is climbing, and the pace of hotel closings is slowing. This is even as a study from Boston University in June 2016 found that Airbnb has contributed to a reduction in “aggressive hotel room pricing, an impact that benefits all consumers, not just participants in the sharing economy.” That likely hurts hotels’ bottom line, and yet they have, on the whole, been resourceful enough to have their best year ever. In turn, changes are being forced on Airbnb, most recently through a new law in New York that only permits room rentals if the host is also living in the apartment and prohibits rentals in multi-unit buildings for less than 30 days; violations are punishable by a $7,500 fine. This is controversial for many reasons, and no doubt hinders Airbnb’s ability to function. Will it find ways to remain competitive?

Hotels and peer-to-peer accommodation will find themselves in a beneficial rivalry only if the focus is on self-improvement, as opposed to the destruction of the other. When the latter happens, it punishes the client and hinders the spirit of innovation. [BigThink]

December 2, 2016
Why Very Smart People Are Happiest Alone


Quality time = alone time. (LUIGI MORANTE)

 

In a just-published study about how our ancestral needs impact our modern feelings, researchers uncovered something that will surprise few among the highly intelligent. While most people are happier when they’re surrounded by friends, smart people are happier when they’re not.

The researchers, Norman P. Li of Singapore Management University and Satoshi Kanazawa of the London School of Economics and Political Science, were investigating the “savannah theory” of happiness.

The savannah theory — also called the “evolutionary legacy hypothesis” and the “mismatch hypothesis” — posits that we react to circumstances as our ancestors would, having evolved psychologically based on our ancestors’ needs in the days when humankind lived on the savannah.

 

Savannah (BJØRN CHRISTIAN TØRRISSEN)

The study analyzed data from interviews conducted by the National Longitudinal Study of Adolescent Health (Add Health) in 2001–2002 with 15,197 individuals aged 18–28. The researchers looked for a correlation between where an interviewee lived — in a rural or urban area — and his or her life satisfaction. They were interested in assessing how population density and friendships affect happiness.

 

How We Feel About Being in Large Groups

 

Crowded (KEVIN CASE)

 

The study found that people in general were less happy in areas of greater population density. The report’s authors see this as support for the savannah theory because we would naturally feel uneasy in larger groups if — as evidence they cite suggests — our brains evolved for functioning in groups of about 150 people:

  • Comparing the size of our neocortex to other primates and the sizes of the groups in which they dwell suggests the natural size of a human group is 150 people (Dunbar, 1992).
  • Computer simulations show that the evolution of risk aversion happens only in groups of about 150 people (Hintze, Olson, Adami, & Hertwig, 2013).
  • The average size of modern hunter-gatherer societies is 148.4 people (Dunbar, 1993).
  • Neolithic villages in Mesopotamia had from 150–200 people (Oates, 1977).
  • When a group of people exceeds 150–200 people, it will tend to break into two in order to facilitate greater cooperation and reciprocity among its members (Chagnon, 1979).
  • The average personal network, as suggested by the typical number of holiday cards sent per person per year, is 153.5 people (Hill & Dunbar, 2003).

The study found, though, that the negative effect of being around lots of people is more pronounced among people of average intelligence. The researchers propose that our smartest ancestors were better able to adapt to larger groups on the savannah thanks to greater strategic flexibility and innate ingenuity, and so their descendants feel less stressed by urban environments today.

 

You’ve Got to Have Friends. Or Not.

 

BFFs (SONNY ABESAMIS)

 

While it seems self-evident that good friendships increase life satisfaction for most people, Li and Kanazawa note, surprisingly, that they know of only a single study that looked at why this is true. That study concluded that friendships satisfy psychological needs such as relatedness, the need to be needed, and an outlet for sharing experiences. Still, why a person has those needs in the first place remains unexplained.

Li and Kanazawa feel that we need look no further than the savannah. They say that friendships/alliances were vital for survival, in that they facilitated group hunting and food sharing, reproduction, and even group child-rearing.

The data they analyzed supports the assumption that good friendships — and a few good ones are better than lots of weaker ones — do significantly increase life satisfaction for most people.

In highly intelligent people, though, the finding is reversed: Smart people feel happier alone than when others, even good friends, are around. A “healthy” social life actually leaves highly intelligent people with less life satisfaction. Is it because their desires are more aspirational and goal-oriented, and other people are annoyingly distracting?

However, just in case this makes too much sense, the study also found that spending more time socializing with friends is actually an indicator of higher intelligence! That sits oddly with the finding above, to say the least. Unless these smart people are not so much social as they are masochistic. [BigThink]

December 2, 2016
Brain Implants that Augment the Human Brain Using AI

You probably clicked on this article because the idea of using brain implants to allow artificial intelligence (AI) to read your brain sounds futuristic and fascinating. It is fascinating, but it’s not as futuristic as you might think. Before we start talking about brain implants and how to augment the human brain using AI, we need to put some context around human intelligence and why we might want to tinker with it.

We have floated the idea before that gene-editing techniques could allow us to promote genetic intelligence through edits at the germline. That’s one approach. Controversial as it might be, some solid scientific research shows that genetics does play a role in intelligence. For those of us who are already alive and well, though, this sort of intelligence enhancement won’t work. This is where we might look toward augmented intelligence. Such augmentation of the brain will at first be preventative, looking to assist those with age-associated brain disorders, for example. For augmented intelligence to be feasible, though, we need a read/write interface to the human brain. One company called Kernel might be looking to address this with a technology that takes a page out of science fiction.

 

The advanced intelligence of tomorrow is a collaboration between the natural and the artificial. United, unheard of possibilities abound. We’re building off two decades of breakthrough research, working closely with private partners and scientists to get usable solutions in the hands of people everywhere. We’re starting with potential applications for patients with cognitive disorders.

 

To understand Kernel, we must first understand its founder, Bryan Johnson, a 39-year-old man who exemplifies the American entrepreneurial success story. Growing up in the small city of Provo, Utah, he hustled his way up as a serial entrepreneur, from selling cell phones to establishing a VOIP company. He came up with his biggest idea while working a part-time job selling credit card processing services to businesses. The end result was a payment processing company called Braintree, which he sold to eBay for $800 million in 2013.

Mr. Johnson then took some of his proceeds and founded a VC fund called OS Fund. “OS” stands for “operating system,” and the fund set out to invest in “entrepreneurs who are working towards quantum-leap discoveries that promise to reinvent the operating systems of life.” It managed to do just that by investing in ambitious startups like artificial intelligence pioneer Vicarious, drone delivery startup Matternet, organism-engineering company Ginkgo Bioworks, and Human Longevity, which wants to extend the human lifespan. Not content to rest on his laurels, Mr. Johnson went on to sink $100 million into Kernel, a new startup he founded this year that wants to do nothing less than augment the human brain with artificial intelligence. In August of this year, Kernel came out of stealth mode and posted this cryptic video on its website:

 

If you can’t be bothered to spend 44 seconds watching the video, here’s what it says, along with some cool futuristic animations:

 

Exploring our universe is extending the life of our earth. Understanding our genetic code is extending the life of our body. And now, we are unlocking our neural code to extend the life of our mind. So, what will it mean to live?

Kernel’s technology is centered around a researcher named Theodore Berger, who has been working for the past 35 years to learn how to store brain memories on computer chips. Sound crazy? In a recent interview with MIT Technology Review, he stated, “They told me I was nuts a long time ago.” The article goes on to state that “Berger is shedding the loony label and increasingly taking on the role of a visionary pioneer.” If $100 million in backing isn’t a total vindication against that “loony label,” then what is? That’s equivalent to the amount of money Illumina sunk into Helix.

Dr. Berger’s research has moved across the spectrum, from giving monkeys cocaine and seeing how they recall memories to testing the memory processes of people with epilepsy who have electrodes temporarily implanted in their brains. In the human tests, these electrodes were used to record signals sent to the hippocampus (highlighted in the diagram below):

 

Female Hippocampus Brain Anatomy

Why the hippocampus? If we start to think of the brain as a sort of computer, the hippocampus is where short-term memories, held in the brain’s “RAM,” are converted into long-term memories and written to its “hard drive.” It’s those long-term memories that Dr. Berger is targeting. The ability to create a bridge between the hippocampus and a chip will allow for “memory implants” that can enhance the memory of those suffering from the memory loss that accompanies aging. Since the working brain is often seen as a black box with 100 billion neurons firing away, Kernel is using machine learning to figure out how the brain goes about writing and retrieving memories. Kernel is, in effect, using artificial intelligence to understand natural intelligence, and that understanding is what will lead to brain augmentation in the form of brain implants.
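To make the learning problem concrete, here is a minimal, purely illustrative sketch in the spirit of the multi-input/multi-output models used in memory-prosthesis research: fit a model that maps signals recorded on the way into the hippocampus to the signals it sends out. The electrode counts and data are invented placeholders; this is not Kernel’s or Dr. Berger’s actual code.

```python
# Toy sketch: learn the mapping from a brain region's input signals to its
# output signals, the core task behind a "memory implant".
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)

# Hypothetical recordings: 1,000 time windows of activity on 32 input
# electrodes, and the 16 output channels they drive.
X_in = rng.normal(size=(1000, 32))           # signals entering the region
true_map = 0.5 * rng.normal(size=(32, 16))   # the unknown neural transform
Y_out = X_in @ true_map + rng.normal(scale=0.1, size=(1000, 16))

# Fit a model of the region's input -> output code.
model = Ridge(alpha=1.0).fit(X_in, Y_out)

# Given fresh input activity, predict what the region would output -- the
# signal a prosthesis would need to write back into downstream tissue.
print("fit quality (R^2):", model.score(X_in, Y_out))
```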

In an article he wrote on Medium about this incredible endeavor, Mr. Johnson states that the quest to enhance human intelligence “may be the largest market in history”. He also writes that he plans to “optimize for long term value creation by raising approximately a billion dollars from public and private sources” and that “each market approved product we create will require approximately $200M and 7–10 years”.

If Kernel can learn to interpret the signals being sent to the hippocampus with 100% accuracy, then the “read/write” ability is covered. At the moment, he claims to be more in the 80% range. Kind of like a four-or-five-drink night out. If this whole thing works out, we’ll all be able to walk around with brain implants that give us the memory of an elephant and hopefully never have to worry about where we put our car keys. But that’s not all this means. Here’s where things get a whole lot more interesting.

There’s been a lot of banter these days about the idea that we might be living in a simulation. The general idea is that if we can engineer simulations of our own that are indistinguishable from our present reality, then it becomes likely that we are in a simulation ourselves. When Elon Musk came out recently and said that we’re almost certainly living in a simulation, everyone started to think that maybe the idea isn’t so loony. If we think about what’s needed for this to happen in our present reality, virtual reality goggles aren’t going to cut it. You can take the goggles off anytime and you’re back in your living room.

The one thing that would make a simulation truly convincing would be a brain interface that allows every single one of your sensory inputs to be fed a stream of data. That’s the ultimate brain augmentation: the ability to plug a real-time data feed into our brains. That’s the direction Kernel is heading, because if we can give the brain a place to store memories, we can speak the brain’s language and begin making it remember things that never happened, or forget things that did. Maybe psychologists are going the way of radiologists, or maybe we’ve read too many science fiction books, but some of the possible directions this could take are truly amazing to think about. This is exactly the sort of potential that led Kernel’s founder, Mr. Bryan Johnson, to state, “We are at one of the most exciting moments in history.”

[Nanalyze]

November 29, 2016
Why the US Is Losing Ground on the Next Generation of Powerful Supercomputers


 

“I feel the need — the need for speed.”

The tagline from the 1980s movie Top Gun could be seen as the mantra of the high-performance computing world these days. The next milestone in the endless race to build ever-faster machines is standing up the first exascale supercomputer.

Exascale might sound like an alternative universe in a science fiction movie, and judging by all the hype, one could be forgiven for thinking that an exascale supercomputer might be capable of opening up wormholes in the multiverse (if you subscribe to that particular cosmological theory). In reality, exascale computing is at once more prosaic — a really, really fast computer — and packs the potential to change how we simulate, model and predict life, the universe and pretty much everything.

First, the basics: exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, roughly ten times faster than the most powerful supercomputer in existence today. Computing systems capable of at least one exaFLOPS (a quintillion floating-point operations per second) have additional significance: it’s estimated that such an achievement would roughly match the processing power required to simulate the human brain.
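For a sense of scale, here is a quick back-of-the-envelope check in Python using the TOP500 figures cited below:

```python
# FLOPS scales, using numbers from the November 2016 TOP500 list.
exaflops = 1e18       # 1 exaFLOPS: a billion billion operations per second
taihulight = 93e15    # Sunway TaihuLight, the current #1: 93 petaflops
tianhe2 = 34e15       # Tianhe-2, the current #2: 34 petaflops

print(exaflops / taihulight)   # ~10.8x faster than today's fastest machine
print(exaflops / tianhe2)      # ~29.4x the second-place system
```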

Of course, as with any race, there is a healthy amount of competition, which Singularity Hub has covered over the last few years. The supercomputer version of NFL Power Rankings is the TOP500 List, a compilation of the most super of the supercomputers. The 48th edition of the list was released last week at the International Conference for High Performance Computing, Networking, Storage and Analysis, more succinctly known as SC16, in Salt Lake City.

In terms of pure computing power, China and the United States are pretty much neck and neck. Both nations now claim 171 HPC systems apiece in the latest rankings, accounting for two-thirds of the list, according to TOP500.org. However, China holds the top two spots with its Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops.

Michael Feldman, managing editor of TOP500, wrote earlier this year about what he characterized as a four-way race to exascale supremacy between the United States, China, Japan and France. The United States, he wagers, is bringing up the rear of the pack, as most of the other nations project to produce an exascale machine by about 2020. He concedes the race could be over today with enough money and power.

“But even with that, one would have to compromise quite a bit on computational efficiency, given the slowness of current interconnects relative to the large number of nodes that would be required for an exaflop of performance,” he writes. “Then there’s the inconvenient fact there are neither applications nor system software that are exascale-ready, relegating such a system to a gargantuan job-sharing cluster.”

Dimitri Kusnezov, chief scientist and senior advisor to the Secretary of the US Department of Energy, takes the long-term view when discussing exascale computing. What’s the use of all that speed, he argues, if you don’t know where you’re going?

“A factor of 10 or 100 in computing power does not give you a lot in terms of increasing the complexity of the problems you’re trying to solve,” he said during a phone interview with Singularity Hub.

“We’re entering a new world where the architecture, as we think of exascale, [is] not just faster and more of the same,” he explained. “We need things to not only do simulation, but we need [them] at the same time to reach deeply into the data and apply cognitive approaches — AI in some capacity — to distill from the data, together with analytical methods, what’s really in the data that can be integrated into the simulations to help with the class of problems we face.”

“There aren’t any architectures like that today, and there isn’t any functionality like that today,” he added.

In July 2015, the White House announced the National Strategic Computing Initiative, which established a coordinated federal effort in “high-performance computing research, development, and deployment.”

The DoE Office of Science and DoE National Nuclear Security Administration are in charge of one cornerstone of that plan, the Exascale Computing Project (ECP), with involvement from Argonne, Lawrence Berkeley, Oak Ridge, Los Alamos, Lawrence Livermore, and Sandia national labs.

Since September of this year, DoE has handed out nearly $90 million in awards as part of ECP.

More than half of the money will go toward what DoE calls four co-design centers. Co-design, it says, “requires an interdisciplinary engineering approach in which the developers of the software ecosystem, the hardware technology, and a new generation of computational science applications are collaboratively involved in a participatory design process.”

Another round of funds will support 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. The modeling and simulation applications that were funded include projects ranging from “deep-learning and simulation-enabled precision medicine for cancer” to “modeling of advanced particle accelerators.”

The timeline — Feldman offers 2023 for a US exascale system — is somewhat secondary to functionality from Kusnezov’s perspective.

“The timeline is defined by the class of problems that we’re trying to solve and the demands they will have on the architecture, and the recognition that those technologies don’t yet exist,” he explains. “The timeline is paced by the functionality we’d like to include and not by the traditional benchmarks like LINPACK, which are likely not the right measures of the kinds of things we’re going to be doing in the future.

“We are trying to merge high-end simulation with big data analytics in a way that is also cognitive, that you can learn while you simulate,” he adds. “We’re trying to change not just the architecture but the paradigm itself.”

Kusnezov says the US strategy is certainly only one of many possible paths toward an exascale machine.

“There isn’t a single kind of architecture that will solve everything we want, so there isn’t a single unique answer that we’re all pushing toward. Each of the countries is driven by its own demands in some ways,” he says.

To illustrate his point about a paradigm shift, Kusnezov talks at length about President Barack Obama’s announcement during his State of the Union address earlier this year that the nation would pursue a cancer moonshot program. Supercomputers will play a key role in the search for a cure, according to Kusnezov, and the work has already forced DoE to step back and reassess how it approaches rich, complex data sets and computer simulations, particularly as it applies to exascale computing.

“A lot of the problems are societal, and getting an answer to them is in everyone’s best interest,” he notes. “If we could buy all of this stuff off the shelf, we would do it, but we can’t. So we’re always looking for good ideas, we’re always looking for partners. We always welcome the competition in solving these things. It always gets people to innovate — and we like innovation.”

This post originally appeared on SingularityHub

November 28, 2016
Antimatter changed physics, and the discovery of antimemories could revolutionise neuroscience

Antimemory, the yin to memory’s yang. Naeblys/shutterstock.com

 

Harriet Dempsey-Jones, University of Oxford

One of the most intriguing physics discoveries of the last century was the existence of antimatter, material that exists as the “mirror image” of subatomic particles of matter, such as electrons, protons and quarks, but with the opposite charge. Antimatter deepened our understanding of our universe and the laws of physics, and now the same idea is being proposed to explain something equally mysterious: memory.

When memories are created and recalled, new and stronger electrical connections are created between neurons in the brain. The memory is represented by this new association between neurons. But a new theory, backed by animal research and mathematical models, suggests that at the same time that a memory is created, an “antimemory” is also spawned – that is, connections between neurons are made that provide the exact opposite pattern of electrical activity to those forming the original memory. Scientists believe that this helps maintain the balance of electrical activity in the brain.

The growth of stronger connections between neurons, known as an increase in excitation, is part of the normal process of learning. Like the excitement that we feel emotionally, a little is a good thing. However, also like emotional excitement, too much of it can cause problems.

The levels of electrical activity in the brain are, in fact, finely and delicately balanced, and any excessive excitation disrupts this balance. Indeed, electrical imbalance is thought to underlie some of the cognitive problems associated with psychiatric and psychological conditions such as autism and schizophrenia.

In trying to understand the effects of imbalance, scientists reached the conclusion that there must be a second process in learning that acts to rebalance the excitation caused by the new memory and keep the whole system in check. The theory is that, just as we have matter and antimatter, so there must be an antimemory for every memory. This precise mirroring of the excitation of the new memory with its inhibitory antimemory prevents a runaway storm of brain activity, ensuring that the system stays in balance. While the memory is still present, the activity it caused has been subdued. In this way, antimemories work to silence the original memory without erasing it.
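As a toy illustration of that balancing act (ours, not the study’s actual mathematics), here is the idea in a few lines of Python: a learned excitatory association, a matched inhibitory antimemory that silences it, and the association re-emerging when inhibition is weakened.

```python
# Toy model: a memory stored as excitatory weight, silenced by a matched
# inhibitory "antimemory", and revealed again when inhibition is reduced.
import numpy as np

stimulus = np.array([1.0, 0.0])      # show the red square only
memory = np.array([[0.0, 1.0],       # learned excitatory link:
                   [1.0, 0.0]])      # red <-> green association
antimemory = -memory                 # matched inhibition, formed after learning

# With the antimemory intact, the association is silenced:
print(stimulus + (memory + antimemory) @ stimulus)        # -> [1. 0.]

# Weaken inhibition (as the brain stimulation in the study did) and the
# hidden association re-emerges: red now reactivates green.
print(stimulus + (memory + 0.2 * antimemory) @ stimulus)  # -> [1. 0.8]
```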

What does an antimemory do?

The evidence for antimemories so far comes only from experimental work in rats and mice and from mathematical modelling. These experiments require direct recordings from inside the brain using electrodes, and given that putting metal probes into human brains is typically frowned upon, scientists have not yet been able to directly confirm the presence of antimemories in humans. In a paper just published in the journal Neuron, a team of researchers from the University of Oxford and University College London has come up with a clever method to determine whether human memory operates along similar lines to that of our animal cousins.

Test subjects were asked to learn a task that created a new memory. When the researchers used fMRI brain scanning to examine the brain a few hours after learning, however, they found no trace of the memory, as it had been quietened by the antimemory. They then applied a weak flow of electricity in the area of the brain where the memory had formed (using a safe technique called anodal transcranial direct current stimulation). This allowed them to reduce inhibitory brain activity in this area – disrupting the inhibitory antimemory and thus revealing the hidden memory.

 


How the antimemory counters the brain activity of a memory.
HC Barron et al/Neuron

 

This diagram shows four coloured shapes that will be paired together by the test participant during a memory task. The two pairs of shapes are learned, with the memory represented by the orange connections between them. Having learned this pairing, the excitation in the brain caused by learning and creating the memory is balanced out by an inhibitory antimemory, represented by the new grey lines.

The yellow boxes below represent the rate of firing of neurons during this learning process. At first, before pairing, they respond only to the red square. After learning the pairing of the red and green squares, the neurons fire to either stimulus. As the antimemory develops this association is silenced and neurons activate only in response to the red stimulus. Finally, after temporarily disturbing the antimemory, the underlying association is evident once again, with the neurons activating to either stimulus.

So it seems that in humans as well as in animals, antimemories are critical to prevent a potentially dangerous build-up of electrical excitation in the brain, something that could lead to epileptic-like brain states and seizures. It’s thought antimemories may also play an important role in stopping memories from spontaneously activating each other, which would lead to confusion and severely disordered thought processes.

Just as the mathematical theory of antimatter and its later discovery in nature and creation in a lab was hugely important to 20th century physics, it seems that the investigation of these enigmatic antimemories will be potentially revolutionary for our understanding of the brain and an important focus for the coming century.

The Conversation

Harriet Dempsey-Jones, Researcher in Clinical Neurosciences, University of Oxford

This article was originally published on The Conversation. Read the original article.

November 28, 2016
How Root Wants to Bring Coding to Every Classroom

Root educational robot
Image: Root

 

This is a guest post. The views expressed here are solely those of the authors and do not represent positions of IEEE Spectrum or the IEEE.

 

The push to teach coding in U.S. schools has been growing: Thanks to initiatives like the White House’s CS for All program, computer science is now recognized as a core skill for today’s students. A new study by Gallup and Google revealed that 90 percent of parents want their child to learn CS, yet only 40 percent of K-12 school districts offer some kind of CS course. Teacher recruitment and training efforts are beginning to solve the problem at the high-school level, but in K-8 schools (where very few schools offer CS and many teachers are generalists) the challenges are different. Many teachers without much coding experience understandably feel anxious about integrating this new literacy into their classrooms.

Our team at Harvard University is hoping to change that with Root. Root is a new kind of robot that colors outside the lines of the educational robotics category by providing unique capabilities along with a programming interface that grows with its user, bringing coding to life for all ages. After nearly three years of development, Root and its companion app, Root Square, have emerged as a solution to ease teachers’ anxiety about adding coding to the lessons that they teach.

 


 

 

Hardware

Root is intentionally designed as one single piece, setting it apart from many other educational robotics platforms because students can dive right into the programming and computational problem solving without having to grapple with parts assembly. In our pilot classrooms, teachers who were previously intimidated by complex and messy boxes of components approached Root with a sense of ease and playfulness. The robot is ready to go right out of the box, it’s easy to put away after class, and it’s much easier to share it between students and classes, which makes it significantly more affordable for schools.

 

Root educational robot hardware overview. Image: Root.

 
Another way that Root stands out in classrooms is by taking advantage of the great robot arena already at the front of most classrooms: whiteboards. Root is a magnetic robot; although it can work perfectly well on a table, a floor, or a piece of paper or poster board, it can also drive vertically on a metal-backed, magnetic whiteboard. This was a significant technical hurdle for us to solve because of the need to compensate for drift due to gravity when driving on vertical surfaces. What may look like a small, simple package actually has sophisticated sensors and firmware inside: Root uses high-resolution encoders, a 3D accelerometer, and a 3D gyroscope to accurately interpret speed, orientation, and wheel position. This helps us correct the motor commands in real time for drift due to gravity (actually caused by stretching of the tires).
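As a rough illustration of that control problem, here is a hypothetical sketch (not Root’s actual firmware) of one step of a self-correcting drive loop; the sensor and motor functions are placeholders standing in for the real hardware interfaces.

```python
# Hypothetical sketch of gravity-drift correction while driving vertically.
def drive_straight_step(target_heading_deg, read_gyro_heading,
                        set_motor_speeds, base_speed=0.2, k_heading=0.8):
    """One control-loop step: steer back toward the commanded heading.

    On a vertical whiteboard the robot sags downhill as its tires stretch,
    so the measured heading slowly diverges from the commanded one. A
    proportional correction on the heading error counters the drift; a
    fuller loop would also fuse encoder and accelerometer readings.
    """
    heading = read_gyro_heading()              # degrees, from the 3D gyro
    error = target_heading_deg - heading       # drift accumulated so far
    correction = k_heading * error             # proportional steering term
    set_motor_speeds(base_speed - correction,  # slow one wheel and speed up
                     base_speed + correction)  # the other to steer back
```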

As a result of the attention we gave to tuning Root’s self-correcting driving algorithm, Root can draw with high precision using its on-board marker. With a nod to the robotic “turtles” that came before it, Root has a gripper driven by an internal motor in its geometric center. The gripper holds a standard marker that can be programmed to lift and drop. And for those who also want help cleaning the board, Root can lift and drop its embedded eraser. In our pilot testing of Root we’ve found that the writing surface is as much a part of the programming experience as the code and the robot—in essence, it brings the inputs and outputs into a medium that is not only tangible, but familiar to anyone who has ever drawn a picture.


One of Root’s most important capabilities comes from the row of 32 color sensors on its underside. This is similar to a 1D camera, or a small color scanner. Color sensing is a great way to get kids to interact with Root both through the commands they give it and also through the environment they create for it on the whiteboard. Root’s color sensing capability isn’t only engaging, it’s also versatile enough to be used to solve the kind of complex problems (SLAM, path planning, and more) that students might encounter in a college-level class.
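For a flavor of what programming against that 1D array looks like, here is a hypothetical sketch of the classic line-following exercise; the function names are illustrative placeholders, not Root’s real SDK.

```python
# Hypothetical sketch: line following with a row of 32 downward color sensors.
def follow_line_step(read_color_row, set_motor_speeds,
                     base_speed=0.2, k_steer=0.02):
    """Steer toward the darkest region seen by the 1D sensor array."""
    row = read_color_row()                  # 32 brightness values, 0..255
    darkest = min(range(32), key=lambda i: row[i])  # where the line is
    error = darkest - 15.5                  # offset from the array's center
    set_motor_speeds(base_speed + k_steer * error,  # turn toward the line
                     base_speed - k_steer * error)
```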

For connectivity and control, Root can talk to any Bluetooth LE device, and Root is also expandable: third-party boards and other accessories (like Raspberry Pi, Arduino, BBC Micro:Bit, cameras, sensors, etc.) can be connected through a USB-C connector on the robot’s back.

Software

It’s not just the physical robot that matters in robotics education — the programming interface can make all the difference when working with newcomers to coding or robotics. While most programming tools are overwhelming to novices, programming with Root is approachable because it is scaffolded: beginners use a graphical interface with a limited instruction set that lets them get right to programming without having to learn syntax and structure. Regardless of age, people’s faces light up when they tap Play and see Root move according to a simple program they wrote themselves in just a few minutes. Reaching a quick win encourages learners to keep going.

However, as skills advance, learners need tools that are “just right” for their developmental level. For that reason, a big portion of our research effort focused on developing the software for programming Root. Root’s programming interface (“Root Square”) is multi-level, ending with a choice of textual languages including Python, JavaScript, and Swift. Here’s how it works:

 

Root educational robot programming levels. Image: Root

 

 

Level 1 Programming

Root Square’s Level 1 interface is designed to be accessible to kids as young as 4, and for novices of any age who have never experienced coding before. There are a deliberately limited number of blocks, no words, and no numbers greater than twenty. This means that kids can start coding even before they learn how to read, and adults are not intimidated by an overly-complex interface. Some unique features that make this entry-level interface easy to program with are:

  • The user’s program can be modified even while it’s running. Adding, deleting, or modifying instructions (blocks) even inside a running loop is absolutely possible; the user’s program will just keep running with the new modifications. This capability makes it ideal for working with young kids, as they are really playing with the code while it runs.
  • It’s been optimized for touch screens. While graphical programming languages are not new, many follow a paradigm that has not changed in over a decade. Root Square’s user interface intentionally breaks some of these norms by optimizing for touch-screen interfaces, which we think many children will be most comfortable with.
  • Powerful programs can be written with surprisingly few blocks. Root Square Level 1 is completely events-based and thus, with very few blocks, it’s possible to solve complex problems that would require far more syntactic complexity in other environments (as sketched below). However, brevity does not mean that overpowered blocks hide important details: the student must still create each rule and program each response for the robot.
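To show in textual form what “events-based with very few blocks” means, here is a small hypothetical sketch of the same rule/response pattern in Python (Level 1 itself is graphical and has no textual syntax; the event names are invented).

```python
# Toy event/rule dispatcher in the spirit of Level 1 programs.
rules = []

def when(event):
    """Register a response to run whenever `event` fires."""
    def register(response):
        rules.append((event, response))
        return response
    return register

@when("bumper_pressed")
def back_up():
    print("reverse and turn")

@when("saw_red")
def celebrate():
    print("play a sound, flash the lights")

def dispatch(event):
    # The runtime's whole job: fire every response whose rule matches.
    for name, response in rules:
        if name == event:
            response()

dispatch("saw_red")   # -> play a sound, flash the lights
```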

 

Level 2 and Level 3 Programming

For more advanced users, and for beginners who are ready to go beyond Level 1’s limits, Root Square provides a powerful instruction set in its Level 2 graphical programming language. Level 2 introduces variables, sensor values, units, flow control statements, arithmetic operations, recursion, and parallelism. Both Level 1 and 2 are accessed from the same app, and previously written Level 1 code can be automatically converted to Level 2 to ease the transition.

Programs created with Level 1 and Level 2 can also be converted into Python, JavaScript, or Swift code (Level 3). Root offers a rich and open API and a development kit (SDK) that opens the door to all kinds of advanced applications and interactions with other devices.

 

Root educational robot
Image: Root

 

 

How to Bring Root to Your Classroom and Home

Working with Root can lead students to the type of “wow moment” that has become more elusive as daily life has been increasingly saturated with technology. The experience of physically interacting with Root, dynamically creating and changing its environment with markers on a whiteboard, and programming it in an understandable language is likely to be new and exciting for most students.

Perhaps most important, we hope that enabling early experiences with robotics and programming through Root will help to close the diversity gap in technology fields. Root’s appeal to all ages, genders, and backgrounds, plus the ease with which teachers can bring it into the classroom, make it a strong candidate to help more kids experience these foundational learning moments.

Root is the result of more than three years of research and development. Now that it has been prototyped and pilot-tested extensively, it’s ready for production on a large scale. We’re excited to take the project out of Harvard’s research labs and into classrooms and homes everywhere. You can support the effort (and order your own Root!) on Kickstarter today.

Julián da Silva Gillig is the lead software developer for Root Square, the multi-level programming interface for the Root robot. He was founder and lead engineer of Multiplo and miniBloq, open source robotics and physical computing projects. Julián is currently a research associate at Harvard University’s Wyss Institute for Biologically Inspired Engineering, in Cambridge, Mass.

Shaileen Crawford Pokress is the head of education for the Root project. She is co-author of the recently released K-12 Computer Science Framework and former Education Director for MIT App Inventor. Shaileen is currently a visiting scholar at Harvard University’s Wyss Institute for Biologically Inspired Engineering, in Cambridge, Mass.

Raphael Cherney is the lead engineer and co-founder of the Root project. Raphael is currently a research assistant at Harvard University’s Wyss Institute for Biologically Inspired Engineering, in Cambridge, Mass. [IEEE]

 

 

November 24, 2016
Internet of Things to Drive the Fourth Industrial Revolution: Industrie 4.0 — Companies Endorse New Interoperable IIoT Standard
The Industrial Internet of Things (IIoT) will be the primary driver of the fourth industrial revolution, commonly referred to as Industrie 4.0, and Cisco and other companies are at the forefront.

“Industrie 4.0 is not digitization or digitalization of mechanical industry, because this is already there,” said Prof. Dr.-Ing. Peter Gutzmer, Deputy CEO and CTO of Schaeffler AG. “Industrie 4.0 is getting the data real-time information structure in this supply and manufacturing chain.”

 

 

“If we use IoT data in a different way we can be more flexible so we can adapt faster and make decisions if something unforeseen happens, even in the cloud and even with cognitive systems,” says Gutzmer.

From the 2013 Siemens video below:

“In intelligent factories, machines and products will communicate with each other, cooperatively driving production. Raw materials and machines are interconnected within an internet of things. The objective: highly flexible, individualized and resource-friendly mass production. That is the vision for the fourth industrial revolution.”

 

 

“The excitement surrounding the fourth industrial revolution or Industrie 4.0 is largely due to the limitless possibilities that come with connecting everything, everywhere, with everyone,” said Martin Dube, Global Manufacturing Leader in the Digital Transformation Group at Cisco, in a blog post today. “The opportunities to improve processes, reduce downtime and increase efficiency through the Industrial Internet of Things (IIoT) is easy to see in manufacturing, an industry heavily reliant on automation and control, core examples of operational technology.”

Connectivity between machines is vital for the success of Industrie 4.0, but it is far from simple. “The manufacturing environment is full of connectivity and communication protocols that are not interconnected and often not interoperable,” notes Dube. “That’s why convergence and interoperability are critical if this revolution is to live up to (huge) expectations.”

Dube explains that convergence is the concept of connecting machines so that communication is possible, and interoperability is the use of a standard technology enabling that communication.

 

Cisco Announces Interoperable IIoT Standard

Cisco announced today that a number of key tech companies have agreed on an interoperable IIoT standard. The group, which includes ABB, Bosch Rexroth, B&R, Cisco, General Electric, KUKA, National Instruments, Parker Hannifin, Schneider Electric, SEW-EURODRIVE and TTTech, is aiming for an open, unified, standards-based and interoperable IIoT solution for communication between industrial controllers and to the cloud, according to Cisco:

 

ABB, Bosch Rexroth, B&R, CISCO, General Electric, KUKA, National Instruments (NI), Parker Hannifin, Schneider Electric, SEW-EURODRIVE and TTTech are jointly promoting OPC UA over Time Sensitive Networking (TSN) as the unified communication solution between industrial controllers and to the cloud.

Based on open standards, this solution enables industry to use devices from different vendors that are fully interoperable. The participating companies intend to support OPC UA TSN in their future generations of products.

 

 


[WebProNews]

November 24, 2016
AirSelfie. The only portable flying camera integrated in your phone cover


 

AirSelfie is a revolutionary pocket-size flying camera that connects with your smartphone to let you take boundless HD photos of you, your friends, and your life from the sky. Its turbo-fan propellers can thrust it up to 20 meters into the air, letting you capture wide, truly original photos and videos on your device. The anti-vibration shock absorber and 5 MP camera ensure the highest-quality images. And its ultra-light 52 g body, which slips into a special phone cover and charger, means you can keep AirSelfie on you at all times. Say hello to the future of selfies.

Its aeronautical-grade anodized aluminium case, four turbo-fan propellers powered by brushless motors, 5-megapixel HD video camera, and weight of just 52 g make it an easy drone to carry. The case also houses a battery that recharges the drone in 30 minutes.

 


 

Check out their Kickstarter campaign: they set out to raise €45,000 for production, and at the time of writing they had already raised €145,090.

 

Source: AirSelfie

November 24, 2016
The marketing genius behind Snap’s new Spectacles


Spectacles.com

 

We all want a pair.

 

One week ago, I had virtually zero interest in owning a pair of Snap Spectacles, the company’s new video-recording sunglasses.

On Saturday, I contemplated the six-hour drive from San Francisco to LA to buy a pair out of a vending machine. What a difference a week can make.

The rollout of Spectacles has been, well, a spectacle. Everywhere Snap drops one of its Snapbots, the big yellow vending machines that serve as temporary storefronts for the glasses, crowds line up, dozens of people deep, and spend hours waiting in line, posting and tweeting about how excited they are to get their hands on some Spectacles.

It’s been a touch of marketing genius.

Snap isn’t going to make much money selling smart glasses one vending machine-full at a time. But that’s not the point. Instead, what the company has done is create the kind of buzz and excitement around a product — and thus the Snap brand, which is prepping for an IPO — that we haven’t seen in a long, long time.

How, exactly, did that happen?

  • Snapchat did a great job of setting expectations. From the get-go, Snap positioned its new glasses as a “toy,” which immediately differentiated Spectacles from Google Glass, the search giant’s failed smart glasses that made everyone question the future of wearables altogether. Spectacles are cool, dude. They’re for filming your friends partying at the football game, not for answering email. Who cares if there isn’t a killer use case? Toys don’t need one. Even Robert Scoble wearing Spectacles in the shower won’t kill Snap’s momentum. (Probably …)
  • Snap has done a great job creating perceived demand. After Snap drops a vending machine somewhere, it’s followed shortly by photos and videos of long lines, and eventually a bunch of sad customers once the machine sells out. But that has made Spectacles the hottest product in town — the $130 glasses are selling for thousands on eBay. Snap is likely selling just dozens of glasses per day, but it feels like it’s cleaning out the warehouse.
  • Snap’s rollout strategy is generating a lot of free press, both from users in line (see above) and more traditional media outlets. Instead of just one press cycle — the first day Spectacles went on sale — the press has covered each and every new Snapbot location. Users are eager to buy the glasses, and the press is happy to point them in the right direction in exchange for a few clicks.

The reality is that Spectacles aren’t going to be big business for Snap, at least not anytime soon. The company wouldn’t sell them out of vending machines if it was trying to make money here.

But Spectacles are giving Snap a new wave of momentum just before it plans to IPO — and the idea that it could sell a lot of glasses has been planted in everyone’s mind. And that feeling isn’t ephemeral.

[Original Article: Recode.net]

November 23, 2016
How the blockchain will radically transform the economy

 

Say hello to the decentralized economy — the blockchain is about to change everything. In this lucid explainer of the complex (and confusing) technology, Bettina Warburg describes how the blockchain will eliminate the need for centralized institutions like banks or governments to facilitate trade, evolving age-old models of commerce and finance into something far more interesting: a distributed, transparent, autonomous system for exchanging value.
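As a toy illustration of the trust mechanism (ours, not from the talk), here is a tiny hash chain in Python: each block commits to the previous block’s hash, so any participant can detect a rewritten history without asking a central authority.

```python
# Toy hash chain: tamper-evidence without a central record-keeper.
import hashlib
import json

def make_block(data, prev_hash):
    body = {"data": data, "prev": prev_hash}
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

genesis = make_block("alice pays bob 5", "0" * 64)
second = make_block("bob pays carol 2", genesis["hash"])

# Tampering with the first block breaks the link to the second:
genesis["data"] = "alice pays bob 500"
recomputed = hashlib.sha256(
    json.dumps({"data": genesis["data"], "prev": genesis["prev"]},
               sort_keys=True).encode()).hexdigest()
print(recomputed == second["prev"])   # -> False: the chain exposes the edit
```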


Bettina Warburg is a blockchain researcher, entrepreneur and educator. A political scientist by training, she has a deep passion for the intersection of politics and technology.

Why you should listen

A graduate of both Georgetown and Oxford, Bettina Warburg started her career as a political scientist and public foresight researcher at a prominent Silicon Valley think tank. Today, she has taken her skills as a researcher and scientist and is applying them toward an entrepreneurial career by co-founding a venture studio business called Animal Ventures. There, she spends most of her time incubating new startup ideas, advising Fortune 500 clients, governments and universities in developing minimum viable products, and strategizing around blockchain, artificial intelligence, industrial internet of things and digital platforms.

Warburg is the executive producer of a Silicon Valley tech show called Tech on Politics, interviewing some of the world’s most influential political operatives, entrepreneurs, government officials, and the creators of some of the most exciting digital products on the market.

Warburg recently launched a new Blockchain education course called “The Basics of Blockchain.” The hope is that this course will help spread the body of blockchain knowledge, inspire a new generation of entrepreneurs and get more people ready for the coming revolution. [TED]

November 23, 2016
