Lifestyle

Visual Intelligence: Make Better High-Stakes Decisions

 

Amy Herman created and conducts all sessions of ‘The Art of Perception’, an education program that was initially used to help medical students improve their observation skills. Often in diagnostics, you’re not looking for what you can see, but for what you can’t – this is called the ‘pertinent negative’. The same goes for investigations, so the program was adapted for the New York City Police Department and for intelligence agencies. Really, Herman says, it’s about fine-tuning something we take as a given: our visual intelligence. The term refers to the fact that we see more than we can possibly process; what we register is just a fraction of the world around us. So how can we see more? Like any other skill or muscle, visual intelligence needs training to get the most and best use out of it.

According to Herman, we need to think more consciously about what we see and deliberately take information in so that we can do our jobs more effectively and live our lives more purposefully. To that end, she runs us through a building block of ‘The Art of Perception’ course: The Four A’s.

Tune into the video above for four practical steps to make more perceptive and informed decisions. Amy Herman is the author of Visual Intelligence: Sharpen Your Perception, Change Your Life. [BigThink]

December 2, 2016
Hotels VS. Airbnb: Positive Rivalry Drives Innovation

 

What could a global hotel executive have to say about Airbnb? The rule is typically: ‘If you don’t have anything nice to say, don’t say anything at all.’ Since peer-to-peer accommodation start-up Airbnb launched in 2008, the mood has been tense between traditional lodging providers and the DIY movement that Airbnb represents.

However, Kimo Kippen, former Chief Learning Officer at Hilton Worldwide, has a view on Airbnb that is defined by one word: exciting. Airbnb may not own hotel rooms, valuable property, or even a long-standing reputation, but what it does have is an ingenious platform that grants far more autonomy and choice to its users. Kippen sees this competition as inspiration and is pushing Hilton to make greater efforts to innovate and keep up, for example through an integrated app that allows digital check-in, greater room control, and digital room keys.

There are countless studies which demonstrate that competition increases motivation – as far back as 1898, psychologist Norman Triplett found that the presence of another cyclist made his study participants pedal faster.

The rivalry between companies like Apple and Microsoft has led to ever-advancing technology for the public, the result of two competitors spring-boarding off one another and pushing each other to innovate.

The hotel business is booming, with the industry showing all-time-high performance and growth projections in 2015, according to competitive benchmarking firm STR. Supply is climbing, and the pace of hotel closings is slowing. This is despite a June 2016 study from Boston University finding that Airbnb has contributed to a reduction in “aggressive hotel room pricing, an impact that benefits all consumers, not just participants in the sharing economy.” That likely hurts the bottom line of hotels, and yet they have, on the whole, been resourceful enough to have their best year ever. In turn, changes are being forced on Airbnb, most recently through a new law in New York that only permits room rentals if the host is also living in the apartment, and prohibits rentals in multi-unit buildings for less than 30 days – violations are punishable by a $7,500 fine. This is controversial for many reasons, and no doubt hinders Airbnb’s ability to function. Will they find ways to remain competitive?

Hotels and peer-to-peer accommodation will find themselves in a beneficial rivalry only if the focus is on self-improvement, as opposed to the destruction of the other. When the latter happens, it punishes the client and hinders the spirit of innovation. [BigThink]

December 2, 2016
Why Very Smart People Are Happiest Alone


Quality time = alone time. (LUIGI MORANTE)

 

In a just-published study about how our ancestral needs impact our modern feelings, researchers uncovered something that will surprise few among the highly intelligent. While most people are happier when they’re surrounded by friends, smart people are happier when they’re not.

The researchers, Norman P. Li of Singapore Management University and Satoshi Kanazawa of the London School of Economics and Political Science, were investigating the “savannah theory” of happiness.

The savannah theory — also called the “evolutionary legacy hypothesis” and the “mismatch hypothesis” — posits that we react to circumstances as our ancestors would, having evolved psychologically based on our ancestors’ needs in the days when humankind lived on the savannah.

 

Savannah (BJØRN CHRISTIAN TØRRISSEN)

The study analyzed data from interviews conducted by the National Longitudinal Study of Adolescent Health (Add Health) in 2001-2002 with 15,197 individuals aged 18–28. The researchers looked for a correlation between where an interviewee lived — in a rural or urban area — and his or her life satisfaction. They were interested in assessing how population density and friendships affect happiness.

 

How We Feel About Being in Large Groups

 

Crowded (KEVIN CASE)

 

The study found that people in general were less happy in areas of greater population density. The report’s authors see this as support for the savannah theory because we would naturally feel uneasy in larger groups if — as evidence they cite suggests — our brains evolved for functioning in groups of about 150 people:

  • Comparing the size of our neocortex to other primates and the sizes of the groups in which they dwell suggests the natural size of a human group is 150 people (Dunbar, 1992).
  • Computer simulations show that the evolution of risk aversion happens only in groups of about 150 people (Hintze, Olson, Adami, & Hertwig, 2013).
  • The average size of modern hunter-gatherer societies is 148.4 people (Dunbar, 1993).
  • Neolithic villages in Mesopotamia had from 150–200 people (Oates, 1977).
  • When a group of people exceeds 150–200 people, it will tend to break into two in order to facilitate greater cooperation and reciprocity among its members (Chagnon, 1979).
  • The average personal network, as suggested by the typical number of holiday cards sent per person per year, is 153.5 people (Hill & Dunbar, 2003).

The study discovered, though, that the negative effect of the presence of lots of people is more pronounced among people of average intelligence. They propose that our smartest ancestors were better able to adapt to larger groups on the savannah due to a greater strategic flexibility and innate ingenuity, and so their descendants feel less stressed by urban environments today.

 

You’ve Got to Have Friends. Or Not.

 

BFFs (SONNY ABESAMIS)

 

While it seems self-evident that good friendships increase life satisfaction in most people, Li and Kanazawa note, surprisingly, that they know of only a single study that looked at why this is true. That study concluded that friendships satisfy psychological needs such as relatedness, the need to be needed, and an outlet for sharing experiences. Still, why a person has those needs in the first place remains unexplained.

Li and Kanazawa feel that we need look no further than the savannah. They say that friendships/alliances were vital for survival, in that they facilitated group hunting and food sharing, reproduction, and even group child-rearing.

The data they analyzed supports the assumption that good friendships — and a few good ones are better than lots of weaker ones — do significantly increase life satisfaction for most people.

In highly intelligent people, though, the finding is reversed: Smart people feel happier alone than when others, even good friends, are around. A “healthy” social life actually leaves highly intelligent people with less life satisfaction. Is it because their desires are more aspirational and goal-oriented, and other people are annoyingly distracting?

However, just in case this makes too much sense, the study also found that spending more time socializing with friends is actually an indicator of higher intelligence! This contradiction is baffling, or at least counter-intuitive. Unless these smart people are not so much social as they are masochistic. [BigThink]

December 2, 2016
Scientists Accidentally Discover Efficient Process to Turn CO2 Into Ethanol

 

The process is cheap, efficient, and scalable, meaning it could soon be used to remove large amounts of CO2 from the atmosphere. 

 

Scientists at the Oak Ridge National Laboratory in Tennessee have discovered a chemical reaction to turn CO2 into ethanol, potentially creating a new technology to help avert climate change. Their findings were published in the journal ChemistrySelect. [Go here for a new in-depth interview about the findings with one of the lead researchers.]

The researchers were attempting to find a series of chemical reactions that could turn CO2 into a useful fuel, when they realized the first step in their process managed to do it all by itself. The reaction turns CO2 into ethanol, which could in turn be used to power generators and vehicles.

The tech involves a new combination of copper and carbon arranged into nanospikes on a silicon surface. The nanotechnology allows the reactions to be very precise, with very few contaminants.
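
For readers who want the chemistry made concrete: converting CO2 to ethanol is a 12-electron reduction. As a sketch only, this is the standard textbook overall half-reaction in alkaline solution, not necessarily the exact pathway reported in ChemistrySelect:

\[
2\,\mathrm{CO_2} + 9\,\mathrm{H_2O} + 12\,e^- \longrightarrow \mathrm{C_2H_5OH} + 12\,\mathrm{OH^-}
\]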

“By using common materials, but arranging them with nanotechnology, we figured out how to limit the side reactions and end up with the one thing that we want,” said Adam Rondinone.

This process has several advantages when compared to other methods of converting CO2 into fuel. The reaction uses common materials like copper and carbon, and it converts the CO2 into ethanol, which is already widely used as a fuel.

Perhaps most importantly, it works at room temperature, which means that it can be started and stopped easily and with little energy cost. This means that this conversion process could be used as temporary energy storage during a lull in renewable energy generation, smoothing out fluctuations in a renewable energy grid.

“A process like this would allow you to consume extra electricity when it’s available to make and store as ethanol,” said Rondinone. “This could help to balance a grid supplied by intermittent renewable sources.”
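
To make the grid-balancing idea concrete, here is a toy Python sketch of the dispatch logic Rondinone describes; every number in it (efficiencies, energy density) is a hypothetical placeholder rather than a figure from the study:

```python
# Toy model: divert surplus renewable electricity into CO2-to-ethanol
# conversion, then burn the stored ethanol during shortfalls.
# All parameters are hypothetical illustrations, not from the study.

ETHANOL_KWH_PER_L = 5.9   # approximate energy content of a litre of ethanol
CONVERT_EFF = 0.6         # assumed electricity-to-ethanol efficiency
GENERATE_EFF = 0.35       # assumed ethanol-to-electricity efficiency

def balance(net_supply_kwh):
    """net_supply_kwh: hourly surplus (+) or shortfall (-) of renewables."""
    stored_litres = 0.0
    delivered = []
    for net in net_supply_kwh:
        if net > 0:   # surplus hour: synthesize and store ethanol
            stored_litres += net * CONVERT_EFF / ETHANOL_KWH_PER_L
            delivered.append(0.0)
        else:         # shortfall hour: generate from stored ethanol
            deliverable = stored_litres * ETHANOL_KWH_PER_L * GENERATE_EFF
            used = min(-net, deliverable)
            stored_litres -= used / (ETHANOL_KWH_PER_L * GENERATE_EFF)
            delivered.append(used)
    return delivered, stored_litres

# Example: a sunny midday surplus followed by an evening shortfall (kWh).
print(balance([8.0, 5.0, -3.0, -4.0]))
```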

The researchers plan to study this process further and try to make it more efficient. If they’re successful, we just might see large-scale carbon capture using this technique in the near future.

Source: Oak Ridge National Laboratory via New Atlas

 

December 2, 2016
A New Implant is Being Developed for Enhancing Human Memory


Would that it were this easy.

In 1998, Andy Clark and David Chalmers proposed that a computer operates together with our brains as an “extended mind,” potentially offering additional processing capabilities as we work out problems, as well as an annex for our memories containing information, images, and so on. Now a professor of biomedical engineering at the University of Southern California, Theodore Berger, is working to bring human memory enhancement to market in the form of a prosthetic implanted in the brain. He’s already testing it in humans.

The prosthetic, which Berger has been working on for ten years, can function as an artificial hippocampus, the area in the brain associated with memory and spatial navigation.


Hippocampus (LIFE SCIENCE DATABASES)

The plan is for the device to convert short-term memory into long-term memory and potentially store it as the hippocampus does. His research has been encouraging so far.

Berger began by teaching a rabbit to associate an audio tone with a puff of air administered to the rabbit’s face, causing it to blink. Electrodes attached to the rabbit allowed Berger to observe patterns of activity firing off in the rabbit’s hippocampus. Berger refers to these patterns as a “space-time code” representing which neurons fire in the rabbit’s brain at a specific moment. Berger watched them evolve as the rabbit learned to associate the tone and puff of air. He told Wired, “As the space-time code propagates into the different layers of the hippocampus, it’s gradually changed into a different space-time code.” Eventually, the tone alone was enough for the hippocampus to produce a recallable space-time code, based on the latest incoming version, that made the rabbit blink.

The manner in which the hippocampus was processing the rabbit’s memory and producing a recallable space-time code became predictable enough to Berger that he was able to develop a mathematical model representing the process.

Berger then built an artificial rat hippocampus — his experimental prosthesis — to test his observations and model. By training rats to press a lever while electrodes monitored their hippocampuses, Berger was able to acquire the corresponding space-time codes. By running those codes through his mathematical model and sending the output back to the rats’ brains, he validated his system: the rats successfully pressed their levers. “They recall the correct code as if they’ve created it themselves. Now we’re putting the memory back into the brain,” Berger reports.
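
Berger’s published prosthesis is built on a nonlinear multi-input multi-output (MIMO) dynamical model, the details of which are beyond this article. As a loose illustration only, the Python sketch below treats a space-time code as a binary neurons-by-time-bins spike matrix and learns a simple linear map from one layer’s codes to another’s; every name and number in it is invented:

```python
import numpy as np

# Loose illustration, not Berger's actual model: a "space-time code" as a
# binary (n_neurons x n_bins) spike matrix, flattened into a vector, and a
# linear map trained to predict a downstream layer's code from an input code.

rng = np.random.default_rng(0)
n_in, n_out, n_bins, n_trials = 32, 16, 50, 200

# Simulated training data: random input codes plus a fixed ground-truth
# transformation that the fitted map should approximately recover.
X = rng.integers(0, 2, size=(n_trials, n_in * n_bins)).astype(float)
W_true = 0.1 * rng.normal(size=(n_in * n_bins, n_out * n_bins))
Y = (X @ W_true) > 0.5   # binary output codes

# Fit the map by least squares, then threshold predictions back to spikes.
W, *_ = np.linalg.lstsq(X, Y.astype(float), rcond=None)
predicted = (X @ W) > 0.5

print(f"recall accuracy on training codes: {(predicted == Y).mean():.2f}")
```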

It’s maybe this last statement that’s so intriguing. Does the brain have some kind of master memory index? Has it somehow integrated the artificial hippocampus’s memories into the rats’ directory? Will the same happen in humans?

Dustin Tyler, a professor of engineering at Case Western Reserve University, cautioned Wired, “All of these prosthetics interfacing with the brain have one fundamental challenge. There are billions of neurons in the brain and trillions of connections between them that make them all work together. Trying to find technology that will go into that mass of neurons and be able to connect with them on a reasonably high-resolution level is tricky.”

Still, Berger himself is optimistic, telling IEEE Spectrum, “We’re testing it in humans now, and getting good initial results. We’re going to go forward with the goal of commercializing this prosthesis.”

What he envisions bringing to market, based on his research, is a brain prosthetic for people with memory problems. The tiny device would be implanted in the patient’s own hippocampus, where it would stimulate the neurons responsible for turning short-term memories into long-term memories. He hopes it can help patients suffering from Alzheimer’s and other forms of dementia, stroke victims, and people whose brains have been injured.


(TED BERGER)

Berger’s business partner in this is tech entrepreneur Bryan Johnson. After selling his payment gateway Braintree to PayPal for $800 million, he started a venture capital fund, the OS Fund. Its website states its mission: “The OS Fund invests in entrepreneurs working towards quantum-leap discoveries that promise to rewrite the operating systems of life.” Johnson sees Berger’s work as one such discovery, and formed kernel to support it, running the company himself with Berger as the company’s Chief Science Officer.

 

(KERNEL)

Rats and monkeys — the prosthetic, attached to their prefrontal cortex, improved the memories of rhesus monkeys — are one thing. The far greater number of neurons in the human brain is a big issue that must be grappled with before Berger’s implant will work well for humans: it’s difficult to gain a comprehensive view of what’s going on in larger brains. (Rat brains have about 200 million neurons; human brains have 86 billion.) Berger warns, “Our information will be biased based on the neurons we’re able to record from,” and he looks forward to tools that can capture broader swaths of data. It’s anticipated that they’ll need to pack a greater number of electrodes into the prostheses.

Human trials so far have been with in-patient epileptics with electrodes already in place for their epilepsy treatments. Berger’s team has observed and recorded activity in the hippocampus during memory tests, and they’ve been encouragingly successful at enhancing patients’ memories by stimulating neurons there. kernel will be funding additional human trials. [via BigThink]

December 2, 2016
Five reasons why cutting NASA’s climate research would be a colossal mistake

Image: Shutterstock.

James Dyke, University of Southampton

Will President Trump really slash funding of NASA’s “politicised” climate change science?

It certainly has been politicised, but not by the scientists conducting it. Blame instead the fossil fuel industry-funded lobby groups and politicians that have for more than a generation tried using doubt, obfuscation or straightforward untruths to argue that humans are not in fact causing significant changes to the climate.

That is what must irk Trump’s team of sceptics. NASA organisations such as the Goddard Institute for Space Studies and the Jet Propulsion Laboratory have made seminal contributions to our understanding of how humans are changing the Earth’s climate. All funded by the US taxpayer.

De-funding NASA’s climate change science is effectively sticking your fingers in your ears and whistling Dixie. The Earth’s climate is indifferent to politics and will continue to respond to human emissions of greenhouse gases. All that would happen is US leadership in this area would end, with the risk that not just America but humanity would be the loser.

Specifically, here are five reasons why de-funding (aka wilfully destroying) NASA’s climate change research would be colossally stupid.

 

1. NASA’s satellites are our eyes on our world

NASA currently operates more than a dozen satellites that orbit the Earth and remotely sense ocean, land and atmospheric conditions. Its research encompasses solar activity, sea level rise, the temperature of the atmosphere and the oceans, the ozone layer, air pollution, and changes in sea and land ice.

All of this is directly relevant to climate change, but it also represents vital research on the different components of the Earth system itself. Billions of dollars have been sunk into these programs, which produce data used by an international community of scientists studying many different aspects of the Earth.

 

NASA Earth observation satellites.
NASA

2. Climate science is a key part of NASA’s mission

Okay, we can’t turn all these satellites off, but we could stop the Administration from using its data to progress climate change science. NASA was created by the National Aeronautics and Space Act of 1958 with a remit to develop technology for “space observations” but not Earth science. That was the job of other federal agencies.

But the model of cross-agency research failed during the 1970s due to a lack of funding. Budgets were cut and NASA ended up conducting some of the science that was made possible by the data it was collecting. Moreover, it was told to put more emphasis on research towards “national needs” such as energy efficiency, pollution, ozone depletion and, yes, climate change. As such, Earth and climate change science is one of the central remits of the agency, which has become a global leader in the field.

 

3. NASA attracts the best of the best

NASA is world famous, largely because of programs such as Apollo which put humans on the Moon. But its fame extends well beyond those interested in space flight. NASA attracts some of the world’s best and brightest Earth and climate change scientists because its operations offer unparalleled breadth and scale of research. And saying “I work for NASA” is still pretty cool.

De-funding climate change science would mean putting many scientists – some of whom are just starting their careers – out of work. Some would be happily gobbled up by agencies in other countries; in fact, I’m sure overtures to some staff are already in the post. This would be America’s loss.

 

4. NASA has transformed climate change communication

A visit to climate.nasa.gov will immediately show how effective NASA’s communication of Earth science has become. Climate science is complex. NASA, along with other US agencies such as the National Oceanic and Atmospheric Administration, produces unparalleled visualisations of climate change. These are used by agencies and communicators around the world and further increase the profile and reputation of NASA and the US as leaders in Earth science.

 

 

5. Climate science can be NASA’s next great legacy

It’s easy to get misty-eyed about some of NASA’s operations. Apollo was a staggering achievement. But while US astronauts visited the Moon “for all mankind” we should remember that the space race was driven by the cold war and rivalry with the USSR. The fact humans have never returned to the Moon should tell us that there isn’t much to be gained from such fleeting visits.

In terms of legacy, I think Eugene Cernan, the commander of Apollo 17 and so the last human to walk on the Moon, summed it up best: “We went to explore the Moon, and in fact discovered the Earth”. It was one of the crew of Apollo 17 that took photograph AS17-148-22727 as they left Earth orbit on their way to the Moon on December 7, 1972. This photograph is now known as the Blue Marble and has become one of the most reproduced images in all of human history.

 

The Blue Marble photograph.

There have been profound changes to the Earth since that photograph was taken. There are nearly twice as many humans living on it. The number of wild animals has halved. Concentrations of CO₂ in the atmosphere are higher than they have been for many thousands of years. And yes, the Earth’s surface and oceans are warmer, glaciers are melting and sea levels rising.

The Blue Marble, like all of NASA’s images, was released to the public domain. Free to be used by anyone. The science that NASA conducts on climate change is similarly shared across the world. Its Earth and climate science represents the best of not just the US, but humanity. We need it now, more than ever.

The Conversation

James Dyke, Lecturer in Sustainability Science, University of Southampton

This article was originally published on The Conversation. Read the original article.

December 1, 2016
What will the world actually look like at 1.5°C of warming?

Image: Shutterstock.

 

 

Richard Betts, University of Exeter

The high ambition of the Paris Agreement, to limit global warming to “well below 2°C”, was driven by concern over long-term sea level rise. A warmer climate inevitably means melting ice – you don’t need a computer model to predict this, it is simple common sense.

As temperatures rise, sooner or later much of the world’s glaciers will become water, which will end up in the ocean. With enough warming, ice sheets could also begin to melt irreversibly. Also, water expands as it warms. Although the full impact will take a long time – centuries or more – the implications of even 2°C of warming for low-lying coastal areas and island states are profound. This is why, in Paris, the world agreed to “pursue efforts” to go further, and limit warming to 1.5°C above pre-industrial levels.

“Pre-industrial” is not always well-defined, but is often taken as 1850-1900 since that is when accurate measurements became widespread enough to estimate global temperature change. By the 1980s, when scientists first warned about the risks of climate change, the world had already warmed by around 0.4°C. Things have accelerated since, and while year-to-year changes show downs as well as ups, the general ongoing trend is upwards. Latest data from the Met Office shows 2016 is expected to be 1.2°C above pre-industrial levels – the hottest year ever recorded.

So given this, what will a world above 1.5°C look like?

 

Not much different … at first

Depending on climate sensitivity and natural variability, we could conceivably see the first year above 1.5°C as early as the late 2020s – but it is more likely to be later. In any case, the first year more than 1.5°C above pre-industrial temperatures will not represent what a world that warm looks like in the longer term.

During that year we’d expect some extreme weather events somewhere in the world, as happens every year. Some of these heatwaves, heavy downpours or droughts may well have become more likely as part of the changing climate. Others, however, may not have changed in likelihood. Teasing out the signal of climate change from the noise of natural variability is hard work.

 


It’s hard to say how much climate change is responsible for any individual storm.
Zacarias Pereira da Mata / shutterstock 

But there will be some places which do not yet see major impacts in that first year, that nevertheless will have become more likely to be affected. The “loaded dice” analogy is rather clichéd, but nevertheless useful – even a pair of loaded dice will not roll a double six every time, just more often than normal dice. So while the chances of an extreme heatwave, for instance, may have increased by the time we exceed 1.5°C, it may not necessarily occur in that year.

Furthermore, some impacts such as sea level rise or species extinctions will lag behind the change in climate, simply because the processes involved can be slow. It takes decades or more to melt glaciers, so the input of extra water to the oceans will take time.

None of this should lull us into a false sense of security, however. While rising seas or biodiversity losses may not be obvious in the first year above 1.5°C, some of these changes will probably be already locked in and unavoidable.

 

Beyond global warming

The impacts of increased carbon dioxide do not just come from its effects as a greenhouse gas. It also affects plant growth directly by enhancing photosynthesis (“CO₂ fertilisation”), and makes the sea less alkaline and more acidic. “Ocean acidification” is unhealthy for organisms which build calcium carbonate shells and skeletons, like corals and some forms of plankton. All other things being equal, CO₂ fertilisation could be viewed to some extent as “good news”, as it could help improve crop yields. Even so, the implications for biodiversity may not all be positive: research has already shown that higher CO₂ benefits faster-growing species such as lianas, which compete with trees, so the makeup of ecosystems can change.

 


Increased carbon dioxide favours lianas (woody vines) more than trees.
Stephane Bidouze / shutterstock 

The extent to which a 1.5°C world will see these other impacts depends on the still-uncertain level of “climate sensitivity” – how much warming occurs for a given increase in carbon dioxide. Higher sensitivity would mean even a small rise in CO₂ would lead to 1.5°C, so fertilisation and acidification would be relatively less important, and vice versa.
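
As a rough worked example of that trade-off (a standard back-of-envelope relation, not a calculation from this article), equilibrium warming scales with the logarithm of the CO₂ concentration ratio:

\[
\Delta T \;\approx\; S \,\frac{\ln(C/C_0)}{\ln 2}
\]

where \(S\) is the climate sensitivity per doubling of CO₂. With \(S = 3\)°C and \(C/C_0 = 1.5\) (roughly 280 ppm rising to 420 ppm), \(\Delta T \approx 3 \times 0.58 \approx 1.8\)°C. A higher \(S\) would reach 1.5°C at a smaller rise in CO₂, leaving fertilisation and acidification relatively less important, and vice versa.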

 

Impacts of staying at 1.5°C

There is a huge debate about whether limiting warming to 1.5°C is even possible or not. But even if it is, limiting global warming will itself have consequences. I’m not talking here about potential economic impacts (whether positive or negative). I’m talking about impacts on the kind of thing we are trying to protect by minimising climate change itself, things like biodiversity and food production.

In scenarios that limit warming at 1.5°C, net CO₂ emissions would have to become negative well before the end of the century. This would mean not only stopping the emission of CO₂ into the atmosphere, but also taking huge quantities of it out. Large areas of new forest and/or large plantations of bioenergy crops would have to be grown, coupled with carbon capture and storage. This will require land. But we also need land for food, and also value biodiverse wilderness. There is only so much land to go round, so difficult choices may be ahead.

So while the Paris Agreement ramped up the ambition and committed the world to trying to limit warming to 1.5°C, we should remember that there is much more than a single number that is important here.

It would be naive to look at the climate in the first 1.5°C year and say “Okay, that’s not so bad, maybe we can relax and let the warming continue”. It’s vital to remember that at any given level of global warming, we have not yet seen the full impacts of it. But nor have we seen the impacts of holding back warming at low levels. One way or another, ultimately the world is going to be a very different place.

The Conversation

Richard Betts, Chair in Climate Impacts, University of Exeter

This article was originally published on The Conversation. Read the original article.

December 1, 2016
We can cut emissions in half by 2040 if we build smarter cities

A man works on a construction site of a residential building in Mumbai, India, October 31, 2016. REUTERS/Shailesh Andrade

 

Shobhakar Dhakal, Asian Institute of Technology

As a planet, we have some serious climate targets to meet in the coming years. The Paris Agreement, signed by 192 countries, set an aspirational goal of limiting global warming to 1.5°C. The United Nations Sustainable Development Goals, set to be achieved by 2030, commit the world to “take urgent action” on climate change.

All this will require ridding our economies of carbon. If we’re to do so, we need to completely rethink our cities.

The UN’s peak climate body showed in its most recent report that cities are crucial to preventing drastic climate change. Already, cities contribute 71% to 76% of energy-related carbon emissions.

In the Global South, energy consumption and emissions in urban areas tend to be way higher than those in rural areas. Future population growth is expected to take place almost entirely in cities and smaller urban settlements. Unfortunately, those smaller centres generally lack the capacity to properly address climate change.

China’s “New-type Urbanisation Policy” aims to raise the share of its population living in cities from 54.2% in 2012 to 60% in 2020. This will mean building large urban infrastructure projects and investing trillions of dollars in new developments. Meanwhile, the sheer volume of India’s urbanisation and infrastructure needs is phenomenal.

 

Infrastructure is booming in China.
Jason Lee/Reuters
 

The problem with infrastructure

Infrastructure contributes to greenhouse gas emissions in two ways: through construction (for example, the energy footprints of cement, steel and aluminium used in the building process) and through the things that go on to use that infrastructure (for example, cars or trains using new roads or tracks).

In a recent study, my colleagues and I have shown that the design of today’s transportation systems, buildings and other infrastructure will largely determine tomorrow’s CO2 emissions.

But by building climate-smart urban infrastructure and buildings, we could cut future emissions in half from 2040 onwards. We could reduce future emissions by ten gigatonnes per year: almost the same quantity currently being emitted by the United States, Europe and India put together (11 gigatonnes).

We assessed cities’ potential to reduce emissions on the basis of three criteria: the emissions savings following upgrades to existing infrastructure; emissions savings from using new, energy-efficient infrastructure; and the additional emissions generated by construction.

In established cities, we found that considerable progress can be made through refurbishment of existing infrastructure. But the highest potential is offered by construction of new, energy-efficient projects from the beginning.

The annual reductions that could be achieved by 2040 using new infrastructure are three to four times higher than those from upgrading existing roads or buildings.

With this in mind, governments worldwide must guide cities towards low-carbon infrastructure development and green investment.

 

Urbanisation is about more than megacities

Significant opportunities exist to promote high-density living, build urban set-ups that mix residential, work and leisure in single spaces, and create better connectivity within and between cities. The existing window of opportunity to act is narrowing over time, as the Global South develops rapidly. It should not be missed.

 

Zero-emissions transport will be essential to achieve our climate goals.
Edgard Garrido/Reuters

Besides global megacities such as Shanghai and Mumbai, smaller cities must also be a focus for lowering emissions. Studies have shown a paradox for these places: the capacity for governance and finance is lower in smaller cities, despite the fact that the majority of future urban population growth will happen there, and that they will expand more quickly than their larger cousins.

We must give up on our obsession with megacities. Without building proper capacity in mid- and small-sized cities to address climate solutions, we cannot meet our climate goals.

Perhaps most important is raising the level of ambition of existing climate policies in cities of all sizes, making them far-reaching, inclusive and robust. Despite the rhetoric, the scale of real change on the ground from existing city climate actions is unproven and unclear.

Existing city climate mitigation plans and policies, such as those in Tokyo, London and Bangkok, and the activities promoted by networks such as ICLEI, C40 and the Covenant of Mayors for Energy and Environment, are a good start; they must be appreciated but further strengthened.

But, to further support these good ideas, the world urgently needs support measures for urban mitigation from local to global levels, together with a tracking framework and an agreed set of indicators for measuring progress towards a low-carbon future.

Only if we start with cities, big and small, will we manage to limit warming to 1.5°C.

The Conversation

Shobhakar Dhakal, Associate Professor, Asian Institute of Technology

This article was originally published on The Conversation. Read the original article.

December 1, 2016
These Wearables Detect Health Issues Before They Happen

Technologies created by the federally funded MD2K project could lead to consumer devices that offer health guidance in real time.

 

Electrocardiogram data transmitted from MD2K’s AutoSense chest-band is displayed on a smartphone running the mCerebrum software platform. This researcher is also wearing a MotionSense wristband.

 

Future generations of Apple Watches, Fitbits, or Android Wear gadgets may be able to detect and mitigate health problems rather than simply relay health data, thanks to a federally funded project that is applying big-data tools to mobile sensors.

The project, called MD2K, won $10.8 million from the National Institutes of Health to develop hardware and software that compiles and analyzes health data generated by wearable sensors. MD2K’s ultimate goal is to use these sensors and data to anticipate and prevent “adverse health events,” such as addiction relapse. Though the project is aimed at researchers and clinicians, its tools are freely available, so these innovations could turn up in consumer wearables.

Commercial wearable devices aren’t suitable for research because they only gather a few types of health data about a user, such as number of steps taken and heart rate, and they typically display specific results rather than raw sensor data. In addition, their batteries can’t support a full day’s worth of high-frequency data collection and they don’t quantify the degree of uncertainty associated with their data.  

 


MD2K’s EasySense wearable is a cardiorespiratory monitor that can measure lung fluid level in congestive heart failure patients.
 


EasySense uses a circular antenna array to obtain stable measurements irrespective of orientation.
 

To address these shortcomings, the MD2K team, which spans 12 different universities, produced a set of gadgets capable of collecting a variety of raw, reliable sensor data for 24 hours per charge. MotionSense is a smart watch that deciphers users’ arm movements through sensors and can track heart rate variability. EasySense is a micro-radar sensor worn near the chest to measure heart activity and lung fluid volume. MD2K researchers are also using AutoSense—invented before MD2K was established—a chest-band that gleans electrocardiogram (ECG) and respiration data. All three devices stream data via Wi-Fi to Android phones where an MD2K-built software platform processes the information and translates it into digital biomarkers about the wearer’s health status and risk factors.

Since MD2K’s work is open-source, manufacturers such as Apple, Garmin, and Samsung could use the project’s designs to build similar sensors and apps for their own wearable devices. For example, MD2K’s MotionSense “HRV” wristband has three types of LED sensors (red, infra-red, and green) embedded in its underside, while most fitness trackers and commercial smart watches, such as the Apple Watch, have only green LEDs. Because the MD2K gadget can calculate differences in the ways a user’s blood absorbs its various sensor lights, it is able to compute heart rate variability, i.e., variations in the time interval between heartbeats, instead of just measuring a user’s heart rate in terms of beats per minute, as most of today’s wearables do.
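
As a rough illustration of that distinction (generic signal processing, not MD2K’s actual pipeline), here is how heart rate and two common HRV metrics can be computed from the same inter-beat intervals:

```python
import numpy as np

# Illustration only, not MD2K's pipeline: given inter-beat (R-R) intervals
# in milliseconds, heart rate keeps only the mean, while HRV metrics such
# as SDNN and RMSSD preserve the beat-to-beat variation.

rr_ms = np.array([812, 798, 840, 775, 805, 830, 790])  # hypothetical beats

heart_rate_bpm = 60_000 / rr_ms.mean()           # beats per minute
sdnn = rr_ms.std(ddof=1)                         # overall variability (ms)
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))    # beat-to-beat variability (ms)

print(f"HR {heart_rate_bpm:.0f} bpm, SDNN {sdnn:.0f} ms, RMSSD {rmssd:.0f} ms")
```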

 

MD2K researchers are using this AutoSense chest-band to monitor study participants’ heart activity and respiration.

This heart-rate variability data, along with respiratory signals, can help gauge a person’s stress levels. Emre Ertin, an Ohio State University professor who developed MD2K’s wearable gadgets, says manufacturers could easily implement this “stress biomarker” in their devices. Some commercial wearables, such as the Spire “mindfulness and activity tracker” and Fitbit’s more expensive models, claim to detect stress (through tense breathing), but other popular wearables, including the Apple Watch, Garmin’s “vivo” series, and Samsung’s GearFit2, do not.

Academics at Northwestern and Ohio State universities are already using the MD2K wearables to understand when and why abstinent smokers relapse and to assess congestion in congestive heart failure patients so they can avoid hospitalization. The smoking cessation study pulls information from multiple sources, including the MotionSense wristband’s accelerometer and gyrometer, which evaluate the wearer’s wrist position and movement to identify smoking gestures; the gadget’s heart-rate variability sensors, which assess stress; and the GPS in the user’s smartphone, which yields clues about location. MD2K researchers then examine the data to see which environments and behaviors trigger smoking lapses. Eventually, they will leverage that knowledge to launch “just-in-time” interventions in the form of pop-up messages or surveys on the participant’s smartphone.
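
A hypothetical sketch of what such “just-in-time” trigger logic might look like, with every feature name and threshold invented for illustration:

```python
# Hypothetical fusion of the three signal streams described above;
# thresholds and feature names are invented, not MD2K's actual logic.

def should_intervene(wrist_gesture_score: float,
                     stress_index: float,
                     near_trigger_location: bool) -> bool:
    likely_smoking_gesture = wrist_gesture_score > 0.8  # accelerometer/gyro
    high_stress = stress_index > 0.7                    # HRV-derived stress
    return (likely_smoking_gesture or high_stress) and near_trigger_location

# Example: stressed participant near a known smoking spot.
if should_intervene(wrist_gesture_score=0.4, stress_index=0.9,
                    near_trigger_location=True):
    print("push a support message or survey to the participant's phone")
```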

It seems inevitable that these advances will trickle down to consumer wearables, but some experts advise caution. “If you take one thousand people who are trying to quit smoking and add an intervention that is digital and mobile, you’ll get some uptake because these people were previously using nothing [to guard against relapse],” says Joseph Kvedar, who heads the Boston-based Partners HealthCare System’s Center for Connected Health and teaches at Harvard Medical School. “But I don’t think anyone really knows how effective any of these things are, long term.” [MIT Technology Review]

November 30, 2016
$89 Linux laptop? Check out the new Pinebook from Raspberry Pi rival Pine


The sub-$100 Pinebook runs on an ARM CPU and Linux. Image: Pine 

 

 

The Pinebook is a low-cost Linux laptop with an ARM CPU that undercuts the cheapest Chromebooks.

 

The makers of a popular Raspberry Pi challenger, the $20 Pine A64, have returned with two sub-$100 Linux laptops, called Pinebooks.

The Pine A64 stood out among developer boards because it was cheap and relatively powerful, helping its maker raise $1.7m on Kickstarter last year with just a $30,000 target.

With an Allwinner quad-core ARM Cortex A53 64-bit processor, the A64 board could run Ubuntu, Debian, or Android Lollipop 5.1. The same processor powers the 11-inch and 14-inch Pinebook notebooks, which, at $89 and $99 respectively, could become some of the cheapest laptops available.

A dozen faster, better, or cheaper alternatives to the Raspberry Pi

The Raspberry Pi might be the name that springs to mind when people think of single board computers for homebrew projects, but there are other boards out there worth considering. (Updated Nov 1, 2016). Read more >>

The displays on both models have a 1,280 x 720-pixel resolution, and besides the A64’s ARM processor, the Pinebooks include the basics needed for a functional laptop, including display, keyboard, touchpad, storage, memory, and ports.

Both models feature 2GB LPDDR3 RAM, 16GB eMMC 5.0 storage, two USB 2.0 ports, a microSD slot supporting up to 256GB additional storage, a mini HDMI port to connect an external display, a headphone jack, a built-in microphone, a 1.2-megapixel camera, and a 10,000mAh lithium polymer battery. They also support Wi-Fi and Bluetooth wireless connections.

CNX Software, which first reported the new laptops, notes that the Pinebook’s system on a chip (SoC) includes a Mali-400MP2 GPU. Also, while the machines will run all operating systems supported by the A64 boards, the firmware needs to be modified due to the LPDDR3 RAM. The devices should support the Remix OS Android fork.

While the new netbooks share a common system on a chip, CNX notes that the new laptops aren’t actually based on the A64 board itself, but rather on a custom board that’s designed to keep the laptops thin.

According to the Pinebook’s spec sheet, the notebook is 352mm wide, 233mm deep, and 18mm high, or 14in by 9in by 0.7in. It weighs 1.2kg, or 2.65lb.

The devices aren’t actually for sale yet, but would-be buyers can register to be told when sales commence. [ZDNet]

November 30, 2016