Big Data

Royal Mint and CME to launch digital gold on blockchain

The link-up will transform the way traders and investors trade, execute and settle gold. (image: Royal Mint)

 

The blockchain-based Royal Mint Gold (RMG) will be available in 2017.

 

The Royal Mint and CME Group, the derivatives marketplace operator, are collaborating on a “digitised gold offering” that will be traded on a blockchain and made available around mid-2017.

The blockchain-based gold product, called Royal Mint Gold (RMG), will transform the way traders and investors trade, execute and settle gold, according to a statement. One RMG will equal one gram of physical gold.

The 1,000-year-old Royal Mint, which is owned by HM Treasury, will issue RMG as a digital record of ownership for gold stored at its highly secure on-site bullion vault. CME Group will develop, implement and operate the product’s digital trading platform. Taken together, the new service will provide an easier, more cost-effective and cryptographically secure alternative to buying, holding and trading spot gold.

Vin Wijeratne, CFO of The Royal Mint, said in a conference call: “Gold still remains a relatively expensive commodity to invest in; there are costs associated with vaulting and management of the physical assets, and as a result it is often referred to as a negative-return investment.

“The Royal Mint will place large gold bars into its secure vaults. We will then create the equivalent amount of RMG digitally and sign ownership of these on the blockchain. Once this is done, holders of RMG will be able to trade them peer-to-peer using a new platform that has been created and will be run by CME Group.

“A good way to look at this project is that it is very similar to pay-as-you-go products in mobile telephony. In other words, unless you physically buy or sell the product itself, there is no charge, ongoing or otherwise, for holding it. If you look at the gold investment products that are out there, every single one of them charges some sort of annual management fee.

“By creating RMGs, anyone can benefit from economies of scale usually reserved for larger investors. RMGs will be available to buy and sell through multiple intermediaries and trusted parties.”

He said the addition of a blockchain-type system will mean ownership can be tracked in near real time, so some of the costly administration attached to this process can be dispensed with. In terms of technical detail about the blockchain design itself, not much information is being released at this time.

Sandra Ro, digitisation lead at CME Group, said: “This is going to be a permissioned network. We will have all known actors and there will be a mechanism by which validators will validate the transactions. We will go into further details about exactly how a lot of the process will work, and the finer details around the platform, at a later date.”

Ro reiterated that this is an investment product only, not intended for any sort of collateral management. “Another point we would like to make is that this is, on day one, a fully funded platform, so there is no leverage,” she said.
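Neither party has published technical details of the platform, so the following is only a toy sketch of the general concept described above (a fully funded, hash-chained ledger of gram-denominated balances maintained by a fixed set of known validators), not the RMG design itself. All names, signatures and checks are hypothetical.

```python
# Purely illustrative sketch of a permissioned asset ledger -- NOT the actual
# RMG design, which has not been published. Names and checks are hypothetical.
import hashlib

class PermissionedGoldLedger:
    def __init__(self, validators):
        self.validators = set(validators)   # known, approved validator IDs
        self.balances = {}                  # account -> RMG (1 RMG = 1 g of gold)
        self.transactions = []              # append-only, hash-chained log

    def issue(self, issuer, account, grams):
        """Issuer (e.g. the mint) creates RMG backed by vaulted gold."""
        assert issuer in self.validators, "only known actors may issue"
        self.balances[account] = self.balances.get(account, 0) + grams
        self._record(("issue", account, grams))

    def transfer(self, sender, receiver, grams, validator):
        """Peer-to-peer transfer, checked by an approved validator."""
        assert validator in self.validators, "unknown validator"
        assert self.balances.get(sender, 0) >= grams, "fully funded: no leverage"
        self.balances[sender] -= grams
        self.balances[receiver] = self.balances.get(receiver, 0) + grams
        self._record(("transfer", sender, receiver, grams))

    def _record(self, tx):
        prev = self.transactions[-1][0] if self.transactions else ""
        digest = hashlib.sha256((prev + repr(tx)).encode()).hexdigest()
        self.transactions.append((digest, tx))   # each record chains to the last

ledger = PermissionedGoldLedger(validators={"mint", "cme"})
ledger.issue("mint", "investor_a", grams=1000)
ledger.transfer("investor_a", "investor_b", grams=250, validator="cme")
print(ledger.balances)   # {'investor_a': 750, 'investor_b': 250}
```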

Regarding potential interoperability with other precious metal blockchains, such as Paxos’s BankChain, Ro said: “Paxos will have to speak for itself regarding its platform. This is very much a digital gold offering as an investment product. It happens to be using a blockchain ledger to record transactions. This is a trading platform.”

David Janczewski, director of new business at The Royal Mint, said in a statement: “Distributed ledger technology is a game changer, and supplying gold on a blockchain has been on our minds for some time, but only after partnering with CME Group did we feel we had the right fit and proposition.

“We’re now inviting the wider market to participate in this project alongside us and CME Group and we look forward to engaging with interested parties in the days ahead. Participation will enable us to develop the platforms to be able to connect to the CME network and trade gold.”

CME Group will launch a digital trading platform that will operate 24 hours a day, 365 days a year. Unlike the traditional physical spot model for investing in gold, with its management fees and ongoing storage charges, RMG will offer ownership of the underlying gold, with the option of conversion to physical gold by The Royal Mint at zero storage cost.

The initial amount of RMG at launch could be up to $1 billion worth of gold. It will be offered through investment providers. Further RMG will then be issued based on market demand.

Julie Winkler, senior managing director, research, product development and index services at CME Group said: “Developing a digital gold trading platform will help ensure that CME Group’s current product offerings meet the evolving needs of the global marketplace. As we continue to expand our global footprint and develop new products, this platform will help set standards for digital assets in financial markets.”

Sandra Ro added: “Innovation is at the heart of CME Group’s business, and the work we have done on RMG with The Royal Mint is testament to CME Group’s progress on the application of digital assets and distributed ledger technology to financial markets. By collaborating with The Royal Mint, we have set a new milestone in the digitisation of value.” [International Business Times]

November 30, 2016
New unique brain ‘fingerprint’ method can identify a person with nearly 100% accuracy

The method could provide biomarkers to help researchers determine how factors such as disease, the environment, and different experiences impact the brain and how it changes over time.

A research team led by Carnegie Mellon University used diffusion MRI to measure the local connectome of 699 brains from five data sets. The local connectome comprises the point-by-point connections along all of the white matter pathways in the brain, as opposed to the connections between brain regions. To create a fingerprint, they used diffusion MRI data to calculate the distribution of water diffusion along the cerebral white matter’s fibers. (credit: Carnegie Mellon University)

Researchers have “fingerprinted” the white matter of the human brain using a new diffusion MRI method, mapping the brain’s connections (the connectome) in more detail than ever before. They confirmed that structural connections in the brain are unique to each individual, and that these connections can identify a person with nearly 100% accuracy.

The new method could provide biomarkers to help researchers determine how factors such as disease, the environment, genetic and social factors, and different experiences impact the brain and change over time.

“This means that many of your life experiences are somehow reflected in the connectivity of your brain,” said Timothy Verstynen, an assistant professor of psychology at Carnegie Mellon University and senior author of the study, published in open-access PLOS Computational Biology.

 

The local connectome: a personal biomarker



 

 

Demonstrating the level of detail, one local connectome fingerprint is shown in different zoom-in resolutions. A local connectome fingerprint has a total of 513,316 entries of scalar values. (credit: Fang-Cheng Yeh et al./PLoS Comput Biol)

For the study, the researchers used diffusion MRI to measure the local connectome of 699 brains from five data sets. The local connectome is the point-by-point connections along all of the white matter pathways in the brain, as opposed to the connections between brain regions. To create a fingerprint for each person, they used the diffusion MRI data to calculate the distribution of water diffusion along the cerebral white matter’s fibers.*

The measurements revealed the local connectome is highly unique to an individual and can be used as a personal biomarker for human identity. To test the uniqueness, the team ran more than 17,000 identification tests. With nearly 100 percent accuracy, they were able to tell whether two local connectomes, or brain “fingerprints,” came from the same person or not.

Curiously, they discovered that identical twins share only about 12 percent of structural connectivity patterns, and that the brain’s unique local connectome is sculpted over time, changing at an average rate of 13 percent every 100 days.

 

Decoding unexplored connectome data

“The most exciting part is that we can apply this new method to existing data and reveal new information that is already sitting there unexplored. The higher specificity allows us to reliably study how genetic and environmental factors shape the human brain over time, thereby opening a gate to understand how the human brain functions or dysfunctions,” said Fang-Cheng (Frank) Yeh, the study’s first author and now an assistant professor of neurological surgery at the University of Pittsburgh.

This means researchers can start to look at how shared experiences, such as poverty or the same pathological disease, are reflected in brain connections, which could lead to new medical biomarkers for certain health concerns.

The team included researchers at the U.S. Army Research Laboratory, the University of Pittsburgh, National Taiwan University, and the University of California, Santa Barbara. The research was funded by the Army Research Laboratory, with additional support from NSF BIGDATA, the WU-Minn Consortium, the Ruentex Group, Taiwan’s Ministry of Economic Affairs, and the National Institutes of Health.

* The local connectome is defined as the degree of connectivity between adjacent voxels within a white matter fascicle measured by the density of the diffusing water. A collection of these density measurements provides a high-dimensional feature vector that can describe the unique configuration of the structural connectome within an individual, providing a novel approach for comparing differences and similarities between individuals as pairwise distances. To evaluate the performance of this approach, the researchers used four independently collected diffusion MRI datasets with repeat scans at different time intervals (ranging from the same day to a year) to examine whether local connectome fingerprints can reliably distinguish the difference between within-subject and between-subject scans.
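As a rough illustration of the comparison logic described in the footnote above (not the authors’ actual pipeline), the sketch below treats each fingerprint as a plain feature vector and classifies a pair of scans as same-subject or different-subject by thresholding their pairwise distance. The vector length, noise level, and threshold are hypothetical stand-ins.

```python
# Toy sketch of the fingerprint-comparison idea, assuming each local connectome
# fingerprint is already a fixed-length feature vector (the paper's vectors
# have 513,316 entries; the MRI processing that produces them is not shown).
import numpy as np

rng = np.random.default_rng(0)
n_features = 1000                        # stand-in for the 513,316-entry vector

def fingerprint(base, noise=0.05):
    """Simulate a repeat scan: the subject's pattern plus measurement noise."""
    return base + noise * rng.standard_normal(n_features)

subject_a = rng.random(n_features)
subject_b = rng.random(n_features)

scan_a1, scan_a2 = fingerprint(subject_a), fingerprint(subject_a)
scan_b1 = fingerprint(subject_b)

def distance(f1, f2):
    """Pairwise distance between two fingerprints (Euclidean here)."""
    return np.linalg.norm(f1 - f2)

threshold = 5.0                          # hypothetical decision boundary
print("same subject:", distance(scan_a1, scan_a2) < threshold)   # True
print("different   :", distance(scan_a1, scan_b1) < threshold)   # False
```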

 


Abstract of Quantifying Differences and Similarities in Whole-Brain White Matter Architecture Using Local Connectome Fingerprints

 

Quantifying differences or similarities in connectomes has been a challenge due to the immense complexity of global brain networks. Here we introduce a noninvasive method that uses diffusion MRI to characterize whole-brain white matter architecture as a single local connectome fingerprint that allows for a direct comparison between structural connectomes. In four independently acquired data sets with repeated scans (total N = 213), we show that the local connectome fingerprint is highly specific to an individual, allowing for an accurate self-versus-others classification that achieved 100% accuracy across 17,398 identification tests. The estimated classification error was approximately one thousand times smaller than fingerprints derived from diffusivity-based measures or region-to-region connectivity patterns for repeat scans acquired within 3 months. The local connectome fingerprint also revealed neuroplasticity within an individual reflected as a decreasing trend in self-similarity across time, whereas this change was not observed in the diffusivity measures. Moreover, the local connectome fingerprint can be used as a phenotypic marker, revealing 12.51% similarity between monozygotic twins, 5.14% between dizygotic twins, and 4.51% between non-twin siblings, relative to differences between unrelated subjects. This novel approach opens a new door for probing the influence of pathological, genetic, social, or environmental factors on the unique configuration of the human connectome.

 


November 29, 2016
What can you achieve with CRISPR therapy today?


CRISPR is the newest, most efficient and most accurate method for editing a cell’s genome. It opens up a myriad of wonderful opportunities as well as frightening ethical challenges in healthcare. We have to understand it and prepare for the medical revolution it brings upon us, so here I have summarised everything you need to know about this genome-editing method, from DNA scissors to currently unimaginable possibilities such as an army of gene-edited soldiers. Let me show you what you can achieve with CRISPR therapy today.

 

 


 

1) The Gene-Editing Tool

Scientists around the world are already using this technique in several of their projects. In addition, global research and development companies have started using CRISPR/Cas9 to develop drugs for a number of life-threatening medical conditions, including sickle-cell anaemia and cancer.

For example, Columbia University Medical Center (CUMC) and University of Iowa scientists have used CRISPR to repair a genetic mutation responsible for retinitis pigmentosa (RP), an inherited condition that causes the retina to degrade and leads to blindness in at least 1.5 million cases worldwide. The researchers published their study about it in Scientific Reports.

The authors of the study said: “We still have some way to go, but we believe that the first therapeutic use of CRISPR will be to treat an eye disease. Here we have demonstrated that the initial steps are feasible.”

 

 

2) The Tool Turning Genes On and Off

In 2013, Stanley Qi, a researcher now at Stanford University, found a way to “mess up” the DNA scissors, blunting them to create a “dead” version of Cas9 that can’t cut anything at all.

The team developed ways of using the blunted enzyme to switch genes off (CRISPRi, where the i stands for interference) or on (CRISPRa, where the a stands for activation), or to tune their activity over a 1,000-fold range. They used these techniques to quickly and thoroughly screen human cells for genes that they need to grow, or to deal with a bacterial toxin.

Now, instead of a precise and versatile set of scissors, which can cut any gene you want, you have a precise and versatile delivery system, which can control any gene you want. You don’t just have an editor, but you have a tiny entity controlled from outside. It is genius and scary at the same time.

 


 

3) The Tool Treating Huntington’s Disease

One recent breakthrough is a CRISPR system, formed from a mouth bacterium, that is capable of cutting RNA, the molecule that helps cells turn genes into usable proteins. The RNA version of CRISPR was developed by researchers at the Massachusetts Institute of Technology (MIT). It is based on an enzyme known as C2c2, which helps keep bacteria protected against other microbes such as viruses.

By manipulating RNA, researchers could influence gene activity as well as protein production in the body. This would effectively grant them the ability to turn the process up or down, or even switch it on or off to suit their purposes, without altering the genetic code stored in the DNA. This means it is becoming increasingly possible to develop better forms of treatment that target specific diseases, such as Huntington’s disease.

 

 

4) The Tool Against Malaria

The World Health Organisation (WHO) estimates that about 3.2 billion people – nearly half of the world’s population – are at risk of malaria. In 2015, there were roughly 214 million malaria cases and an estimated 438,000 malaria deaths, so fighting and preventing the disease is vitally important. One of the best methods is to somehow fight off its primary transmitter: infected mosquitoes.

Researchers have used gene editing to create mosquitoes that are almost entirely resistant to the parasite that causes malaria. They used CRISPR to remove a segment of mosquito DNA, and when the mosquitoes’ genetic system tried to repair the genome, it was tricked into replacing it with a DNA construct engineered by the scientists. They found that 99 per cent of the offspring of the genetically modified insects also carried the malaria-resistance genes, showing that the engineered trait is inherited.

 


 

5) The ultimate weapon against cancer

As a very simple explanation, cancer occurs when cells refuse to die and keep multiplying in various places in our bodies while hiding from our immune system. With CRISPR, we will have the chance to edit the cells of our immune system to improve their ability to recognise and kill cancer cells in time. In the future, getting rid of cancer could mean just an injection, much as we now vaccinate against mumps, which was a deadly childhood disease in the 1800s.

And lately, something miraculous happened. After trying traditional cancer treatments such as chemotherapy and bone-marrow transplants, doctors decided to use gene-editing technology in a last-ditch effort to save a girl suffering from lymphoblastic leukemia. The doctors altered donor immune cells, namely T-cells, to locate and kill leukemia cells more effectively – without attacking the patient’s own body. They did not actually use CRISPR but another method, TALEN; in any case, it turned out to be a huge success.

 

 

6) The Shield Against Duchenne Muscular Dystrophy

Patients with the devastating Duchenne muscular dystrophy lose the ability to walk by their teens and often die at a young age from one of a number of complications, such as respiratory or heart failure. The disease is caused by a mutation that prevents the body from producing dystrophin, a protein critical to the development of muscle tissue.

Since the syndrome can be traced to one specific gene mutation, researchers are experimenting with CRISPR as a route to treatment. This year, experiments showed that scientists were able to treat mice with Duchenne muscular dystrophy through gene editing, so the technology holds great promise for treating people suffering from this deadly illness in the near future.

 

 

7) With the Advancement of Research, We Could Have Clinical Trials in 2017 (!!)

The discovery of CRISPR is having as big an impact on science as the discovery of the structure of DNA and the Human Genome Project did. Research labs and companies are popping up like mushrooms, such as Intellia Therapeutics and CRISPR Therapeutics. Apparently, it is very sexy to work on CRISPR-related projects.

Editas Medicine is one of the leading genome-editing biotechnology companies dedicated to treating patients with genetically defined diseases by correcting their disease-causing genes. Its mission is to translate the promise of genome-editing science into a broad class of transformative genomic medicines that benefit the greatest number of patients. Its areas of research include eye diseases, blood, muscle and lung diseases, and cancer.

With the advancement of CRISPR research, it seems almost natural that the possibility of clinical trials has appeared. Editas Medicine says it hopes to start a clinical trial in 2017 to treat a rare form of blindness (Leber congenital amaurosis, which affects the light-receiving cells of the retina) using CRISPR. If Editas’s plans move forward, the study would likely be the first to use CRISPR to edit the DNA of a person.

 

 

These are only a few examples of the many out there, but I’m sure they mark the beginning of a new era in genome editing and healthcare in general. However, I also believe the long-term impacts on society and on research ethics will be of a much greater magnitude. CRISPR could bring tremendous change to our future, with designer babies and the eradication of diseases, or even of ageing. Precisely because of such impacts, it will also generate huge ethical dilemmas and ultimate questions about our way of life. [The Medical Futurist]

November 29, 2016
Japan is about to build the world’s fastest computer, ever


Image Source: NASA Goddard

 

When it comes to the most powerful computers in the world, the game is dominated by two players: the United States and China. In fact, of the top five most powerful supercomputing sites on the planet, China owns the top two spots, with the U.S. holding the third, fourth, and fifth places on the list. By 2018, that top spot will belong to Japan, should the country’s plan to build the world’s fastest supercomputer not hit any snags along the way.

Reuters reports that Japan has budgeted $173 million to hold the supercomputing crown, and plans to build a machine capable of 130 petaflops, which will top China’s Sunway TaihuLight — with a max of just over 93 petaflops but theoretical peak of around 125 — to claim the crown of “world’s fastest.”

Japan aims to use the project to revitalize its somewhat stagnant technology industry, which has stumbled in recent years while neighboring China has muscled into the scene.

The supercomputer project, which Japan has named AI Bridging Cloud Infrastructure, or ABCI for short, is currently seeking bids from manufacturers willing to take on the monumental task. ABCI is slated to be ready for action by 2018, when it will be used by Japan to boost its research into artificial intelligence platforms. The country also plans to provide the computer’s power to Japanese companies that currently rely on the likes of Google and Microsoft for their heavy data lifting, for a fee, of course.

This won’t be Japan’s first venture into the world of supercomputing, of course; the country holds the number six and seven spots on the chart, behind China and the United States. However, Japan’s fastest computer is rated at less than 14 petaflops, so jumping from that to 130 petaflops will definitely be an accomplishment. [BGR]

November 29, 2016
Morgan Brown – Lessons Learned from Building a Fast Growing Subscription Business

 

There is a lot of talk and excitement these days about ‘growth’, but how companies approach, achieve and sustain growth is still far from an exact science. Yet some companies achieve breakout success. How do they do it? In this talk, I’ll share our team’s lessons learned from growing a new subscription business from $0 to nearly $3 million in ARR in 14 months, and how we applied the processes of the world’s fastest growing companies to achieve it.

November 28, 2016
Why the US Is Losing Ground on the Next Generation of Powerful Supercomputers


 

“I feel the need — the need for speed.”

The tagline from the 1980s movie Top Gun could be seen as the mantra for the high-performance computing system world these days. The next milestone in the endless race to build faster and faster machines has become embodied in standing up the first exascale supercomputer.

Exascale might sound like an alternative universe in a science fiction movie, and judging by all the hype, one could be forgiven for thinking that an exascale supercomputer might be capable of opening up wormholes in the multiverse (if you subscribe to that particular cosmological theory). In reality, exascale computing is at once more prosaic — a really, really fast computer — and packs the potential to change how we simulate, model and predict life, the universe and pretty much everything.

First, the basics: exascale refers to high-performance computing systems capable of at least a billion billion calculations per second, about 50 times faster than the most powerful US supercomputers in use today. Systems capable of at least one exaFLOPS (a quintillion floating-point operations per second) carry additional significance: such an achievement is estimated to roughly match the processing power required to simulate the human brain.
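For concreteness, here is a quick back-of-envelope conversion. The 93-petaflop TaihuLight figure is quoted elsewhere in this article; the roughly 17.6-petaflop figure for Titan, the fastest US system on the current TOP500 list, is an approximation supplied here to show where a "50 times" comparison plausibly comes from.

```python
# Back-of-envelope FLOPS arithmetic. The 93-petaflop TaihuLight figure appears
# later in this article; the ~17.6-petaflop figure for Titan (the fastest US
# system at the time) is an approximation added here for comparison.
exa = 1e18     # 1 exaFLOPS = a billion billion (10^18) operations per second
peta = 1e15    # 1 petaFLOPS = 10^15 operations per second

print(exa / peta)                       # 1000.0 -> 1 exaflop = 1,000 petaflops

taihulight = 93 * peta                  # China's Sunway TaihuLight (Linpack)
titan = 17.6 * peta                     # fastest US machine (approximate)

print(round(exa / taihulight, 1))       # ~10.8x the fastest machine worldwide
print(round(exa / titan))               # ~57x the fastest US machine, in line
                                        # with the "about 50 times" figure
```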

Of course, as with any race, there is a healthy amount of competition, which Singularity Hub has covered over the last few years. The supercomputer version of NFL Power Rankings is the TOP500 List, a compilation of the most super of the supercomputers. The 48th edition of the list was released last week at the International Conference for High Performance Computing, Networking, Storage and Analysis, more succinctly known as SC16, in Salt Lake City.

In terms of pure computing power, China and the United States are pretty much neck and neck. Both nations now claim 171 HPC systems apiece in the latest rankings, accounting for two-thirds of the list, according to TOP500.org. However, China holds the top two spots with its Sunway TaihuLight, at 93 petaflops, and Tianhe-2, at 34 petaflops.

Michael Feldman, managing editor of TOP500, wrote earlier this year about what he characterized as a four-way race to exascale supremacy between the United States, China, Japan and France. The United States, he wagers, is bringing up the rear of the pack, as most of the other nations project to produce an exascale machine by about 2020. He concedes the race could be over today with enough money and power.

“But even with that, one would have to compromise quite a bit on computational efficiency, given the slowness of current interconnects relative to the large number of nodes that would be required for an exaflop of performance,” he writes. “Then there’s the inconvenient fact there are neither applications nor system software that are exascale-ready, relegating such a system to a gargantuan job-sharing cluster.”

Dimitri Kusnezov, chief scientist and senior advisor to the Secretary of the US Department of Energy, takes the long-term view when discussing exascale computing. What’s the use of all that speed, he argues, if you don’t know where you’re going?

“A factor of 10 or 100 in computing power does not give you a lot in terms of increasing the complexity of the problems you’re trying to solve,” he said during a phone interview with Singularity Hub.

“We’re entering a new world where the architecture, as we think of exascale, [is] not just faster and more of the same,” he explained. “We need things to not only do simulation, but we need [them] at the same time to reach deeply into the data and apply cognitive approaches — AI in some capacity — to distill from the data, together with analytical methods, what’s really in the data that can be integrated into the simulations to help with the class of problems we face.”

“There aren’t any architectures like that today, and there isn’t any functionality like that today,” he added.

In July 2015, the White House announced the National Strategic Computing Initiative, which established a coordinated federal effort in “high-performance computing research, development, and deployment.”

The DoE Office of Science and DoE National Nuclear Security Administration are in charge of one cornerstone of that plan – the Exascale Computing Project (ECP) — with involvement from Argonne, Lawrence Berkeley, Oak Ridge, Los Alamos, Lawrence Livermore, and Sandia national labs.

Since September of this year, DoE has handed out nearly $90 million in awards as part of ECP.

More than half of the money will go toward what DoE calls four co-design centers. Co-design, it says, “requires an interdisciplinary engineering approach in which the developers of the software ecosystem, the hardware technology, and a new generation of computational science applications are collaboratively involved in a participatory design process.”

Another round of funds will support 15 application development proposals for full funding and seven proposals for seed funding, representing teams from 45 research and academic organizations. The modeling and simulation applications that were funded include projects ranging from “deep-learning and simulation-enabled precision medicine for cancer” to “modeling of advanced particle accelerators.”

The timeline — Feldman offers 2023 for a US exascale system — is somewhat secondary to functionality from Kusnezov’s perspective.

“The timeline is defined by the class of problems that we’re trying to solve and the demands they will have on the architecture, and the recognition that those technologies don’t yet exist,” he explains. “The timeline is paced by the functionality we’d like to include and not by the traditional benchmarks like LINPACK, which are likely not the right measures of the kinds of things we’re going to be doing in the future.

“We are trying to merge high-end simulation with big data analytics in a way that is also cognitive, that you can learn while you simulate,” he adds. “We’re trying to change not just the architecture but the paradigm itself.”

Kusnezov says the US strategy is certainly only one of many possible paths toward an exascale machine.

“There isn’t a single kind of architecture that will solve everything we want, so there isn’t a single unique answer that we’re all pushing toward. Each of the countries is driven by its own demands in some ways,” he says.

To illustrate his point about a paradigm shift, Kusnezov talks at length about President Barack Obama’s announcement during his State of the Union address earlier this year that the nation would pursue a cancer moonshot program. Supercomputers will play a key role in the search for a cure, according to Kusnezov, and the work has already forced DoE to step back and reassess how it approaches rich, complex data sets and computer simulations, particularly as it applies to exascale computing.

“A lot of the problems are societal, and getting an answer to them is in everyone’s best interest,” he notes. “If we could buy all of this stuff off the shelf, we would do it, but we can’t. So we’re always looking for good ideas, we’re always looking for partners. We always welcome the competition in solving these things. It always gets people to innovate — and we like innovation.”

This post originally appeared on SingularityHub

November 28, 2016
Trump or NASA – who’s really politicising climate science?

NASA has a long history of conducting climate science. Here, a NASA camera captures a storm over South Australia. (credit: NASA)

 

John Cook, The University of Queensland

Climate research conducted at NASA has been “heavily politicised”, said Robert Walker, a senior adviser to US President-elect Donald Trump.

This has led him to recommend stripping funding for climate research at NASA.

Walker’s claim comes with a great deal of irony. Over the past few decades, climate science has indeed become heavily politicised. But it is ideological partisans cut from the same cloth as Walker who engineered such a polarised situation.

Believe it or not, climate change used to be a bipartisan issue. In 1988, Republican George H.W. Bush pledged to “fight the greenhouse effect with the White House effect”.

Since those idealistic days when conservatives and liberals marched hand-in-hand towards a safer climate future, the level of public discourse has deteriorated.

Surveys of the US public over the past few decades show Democrats and Republicans growing further apart in their attitudes and beliefs about climate change.

For example, when asked whether most scientists agree on global warming, perceived consensus among Democrats has steadily increased over the last two decades. In contrast, perceived consensus among Republicans has been in stasis at around 50%.

 

Polarisation of perceived consensus among Republicans and Democrats. (credit: Dunlap et al., 2016)

 

How is it that party affiliation has become such a strong driver of people’s views about scientific topics?

In the early 1990s, conservative think-tanks sprang to life on this issue. These are organisations promoting conservative ideals such as unregulated free markets and limited government.

Their goal was to delay government regulation of polluting industries such as fossil fuel companies. Their main tactic was to cast doubt on climate science.

Using a constant stream of books, newspaper editorials and media appearances, they generated a glut of misinformation about climate science and scientists.

The conservative think-tanks were assisted by corporate funding from the fossil fuel industry – a partnership that Naomi Oreskes poetically describes as an “unholy alliance”.

Over the past few decades, conservative organisations that receive corporate funding have been far more prolific in publishing polarising misinformation than groups that don’t receive corporate funding.

 

Politicising the scientific consensus

Robert Walker also brought up the topic of agreement among climatologists. The scientific consensus on human-caused global warming is a topic I’ve been rather heavily involved in over the past few years.

In 2013, I was part of a team that analysed 21 years worth of peer-reviewed climate papers. We found that among papers stating a position on human-caused global warming, 97% endorsed the consensus.

Our 97% consensus paper has been incessantly critiqued by Republican senators, right-wing think-tanks, Republican congressmen and contrarian blogs promoting a conservative agenda (eagle-eyed readers might detect a pattern here).

This led us to publish a follow-up paper summarising the many different studies into consensus. A number of surveys and analyses independently found around 90% to 100% scientific agreement on human-caused global warming, with multiple studies converging on 97% consensus.

 

Summary of consensus studies. (credit: Skeptical Science)

 

Raising doubt about the scientific consensus has been an integral part of the conservative strategy to polarise climate change. A clear articulation of this strategy came from an infamous memo drafted by Republican strategist Frank Luntz. He recommended that Republicans win the public debate about climate change by casting doubt on the scientific consensus:

Voters believe that there is no consensus about global warming in the scientific community. Should the public come to believe that the scientific issues are settled, their views about global warming will change accordingly. Therefore, you need to continue to make the lack of scientific certainty a primary issue in the debate.

Conservatives dutifully heeded the market-research-driven recommendations from Luntz. One of the most common arguments against climate change in conservative opinion pieces has been “there is no consensus”.

Their persistence has paid off. There continues to be a huge gap between public perception of consensus and the actual 97% consensus among climate scientists (although new data indicates the consensus gap is closing).

In the following graph, taken from my own research into public perceptions of consensus, the horizontal axis is a measure of political ideology, with liberals to the left and conservatives to the right.

The slope in the curve visualises the polarisation of climate perceptions. While perceived consensus is lower for more conservative groups, there is a significant gap between perceived consensus and the 97% reality even among liberals.

This “liberal consensus gap” has two contributing factors: a lack of awareness of the 97% consensus, and the impact of misinformation.

 

The consensus gap: the divide between public perception of consensus and the 97% consensus. (credit: Skeptical Science)

 

This data, consistent with Riley Dunlap’s polarisation data mentioned at the start of this article, indicates that many conservatives think the consensus is around 50%. This matches what Walker claimed to The Guardian:

Walker, however, claimed that doubt over the role of human activity in climate change “is a view shared by half the climatologists in the world”.

Given the multitude of studies finding consensus between 90% and 100%, where does this 50% figure come from? Further clues come from an interview on Canadian radio where Walker again claims that only half of climatologists agree that humans are causing global warming.

The source for Walker’s consensus figure seems to be the National Association of Scholars, a conservative group that lists “multiculturalism”, “diversity” and “sustainability” in academia as sources of concern. A press release on the group’s website includes the following excerpt:

 

S. Fred Singer said in an interview with the National Association of Scholars (NAS) that “the number of sceptical qualified scientists has been growing steadily; I would guess it is about 40% now”.

 

Multiple studies have measured the consensus among climatologists by diverse methods including examining their papers, looking at their public statements, and simply asking them.

But Walker doesn’t appear to be interested in evidence. Instead, he seems to be relying on an unsupported guess by retired physicist S. Fred Singer.

It’s telling that Walker cites conservative sources in his efforts to manufacture doubt about the scientific consensus. If there is any politicising of science going on, it appears to be by Walker, not by the scientists.

The Conversation

John Cook, Climate Communication Research Fellow, Global Change Institute, The University of Queensland. This article was originally published on The Conversation. Read the original article.

November 28, 2016
10 Steps To Launching An Influencer Marketing Campaign

How to launch your first influencer marketing campaign in 10 steps. (image source: Unsplash)

Social media is inextricably tied to almost every aspect of daily life. It has changed how we communicate, how we share information, and how we make decisions. Now, instead of turning to television for news and entertainment, audiences look to Facebook and Twitter to learn about the world around them; in lieu of reading magazine articles about where to shop, eat, and vacation, consumers now seek out recommendations from YouTubers, Instagrammers, and Snapchat stars. With less time being spent consuming traditional media (especially television), time spent on social media platforms has grown to 60-90 minutes per day among U.S. internet users. The rise of ad blocking, too, has become a major concern for marketers, as this technology prevents digital display ads (like banner ads) and pre-roll ads (which precede Facebook and YouTube videos) from reaching their intended audiences.

In response to these trends, influencer marketing has increasingly replaced traditional forms of advertising as one of the most effective ways to reach online audiences. Today, 84% of companies plan on implementing an influencer marketing strategy in the next 12 months, and one study found that 82% of consumers would take the advice of a social media influencer when deciding what product to buy.

Graphic sourced via Google Trends

Despite the growing popularity of influencer marketing, however, many marketers are still unsure how to leverage the influence of social media stars to develop, launch, and measure effective influencer marketing campaigns.

 

How to develop an influencer marketing campaign in 10 steps

Launching your first influencer marketing campaign can seem like an insurmountable task, especially for marketers who have never worked with social media stars before. Here, we break the process down into ten manageable steps to help companies navigate the complexities and avoid some of the common pitfalls associated with influencer marketing campaigns:

 

The official roadmap for influencer marketing by Mediakix

Infographic created by Mediakix

Step 1: Determine budget, audience, & goals

Like most advertising initiatives, the first step to creating an influencer marketing campaign is determining the budget and establishing the target audience. Setting campaign goals, or key performance indicators (KPIs), will also help inform what type of campaign you create and which influencer (or group of influencers) you will partner with.

Step 2: Choose the best platform

The platform you select for your influencer marketing campaign will likely be determined by your target audience. Do your consumers spend the most time on Facebook or Snapchat? Would your audience be most receptive to a YouTube video or a sponsored Instagram post? Answering these questions will help you choose which social platform will result in the most successful campaign.

Step 3: Set a publishing schedule

To ensure an influencer’s content is impactful, you should consider coordinating the campaign launch with other advertising initiatives and across multiple social media channels. Once you select an influencer, he or she will also know when their followers are the most engaged—take advantage of peak days and optimal publishing times for best results.

Step 4: Find the right social influencer

75% of marketers say identifying the right social media influencer to work with is the most challenging aspect of rolling out an influencer strategy. The process can be simplified by vetting each social media star before initial contact is made, utilizing influencer marketing tools or platforms, and/or partnering with established influencer marketing companies and agencies. Influencers should align with your company’s messaging and brand identity, have high levels of social engagement on their content, and correspond in a prompt and professional manner.

Step 5: Outline the influencer marketing campaign

Once an influencer has been chosen, it’s your responsibility to clearly communicate what you expect from the campaign. Create a campaign brief that includes copy points, creative guidance, and goals, but don’t try to exert too much control over an influencer’s content. He or she will likely know best what will resonate with their audience, and marketers are advised to work with each social media influencer when developing campaign content, not dictate to them.

Step 6: Negotiate rates & draw up contracts

Though many influencers will have a “rate card” with predetermined compensation figures, more complex campaigns may require negotiations. Once an agreement has been reached, all parties should sign a legal contract stipulating payment, deliverables, publishing schedule, and licensing rights.

Step 7: Review campaign content

Before launching the campaign, take the time to review all aspects of the influencer-related content to make sure it includes the necessary copy points, aligns with your brand’s messaging and, most importantly, adheres to Federal Trade Commission (FTC) guidelines regarding proper disclosure of sponsored content.

Step 8: Launch the influencer marketing campaign

Once all content has been reviewed and approved, give the go-ahead to push the campaign “live” as scheduled. You should closely monitor the campaign and document any notable engagement at this time, as sponsored content will likely generate the most social attention immediately after publishing.

Step 9: Amplify & Optimize

To boost the reach of the influencer marketing campaign, share sponsored content on your own social media accounts and ask the social influencer to do the same across all social platforms (if appropriate, or as predetermined in the campaign contract). You may also choose to publish additional content that directs audiences to the campaign’s photos, videos, Snapchat Story, or blog post, and optimize the campaign by changing the wording of the content’s Call To Action (CTA).

Step 10: Report & Analyze

Evaluate the success of the influencer marketing campaign by collecting as much data as possible, including reach, impressions, views, engagement, click-through, sales, and anything else that will help you determine campaign performance. For an objective assessment of how engaging the campaign was, compare the sponsored content’s performance with the influencer’s typical metrics for non-sponsored content.
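As a minimal sketch of the comparison this step describes, the snippet below computes an engagement rate for a sponsored post and its lift over the influencer’s typical non-sponsored baseline. All field names and numbers are invented for illustration.

```python
# Minimal sketch of the Step 10 comparison: sponsored-post engagement rate
# versus the influencer's typical (non-sponsored) baseline. All numbers and
# field names here are made up for illustration.
sponsored = {"impressions": 120_000, "likes": 5_400, "comments": 310}
baseline  = {"impressions": 100_000, "likes": 4_000, "comments": 220}

def engagement_rate(post):
    """Engagements (likes + comments) per impression."""
    return (post["likes"] + post["comments"]) / post["impressions"]

sponsored_rate = engagement_rate(sponsored)
baseline_rate = engagement_rate(baseline)
lift = (sponsored_rate - baseline_rate) / baseline_rate

print(f"sponsored engagement rate: {sponsored_rate:.2%}")   # 4.76%
print(f"baseline engagement rate:  {baseline_rate:.2%}")    # 4.22%
print(f"lift vs. typical content:  {lift:+.1%}")            # +12.8%
```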

Read more at Business2Community

November 24, 2016
