All posts by Naomi Smith

Naomi is a UK-based journalist, writer and online content creator with around six years’ experience. She has a master’s degree in investigative journalism and experience working as a beat reporter, primarily covering aviation law, regulation and politics. She has written for online publications on a variety of topics, including politics, gaming and film.

Joyent discontinues public cloud offering

Image by Nikin from Pixabay

California-based software and services firm Joyent said on 6 June that it will discontinue its public cloud offering – which competed with the likes of market leader Amazon Web Services, Google Cloud and Microsoft Azure – three years after it was acquired by South Korean technology conglomerate Samsung.

Like its better-known competitors, Joyent provided developers with the opportunity to rent computing capacity from its datacenters on a pay-as-you-go basis. It will leave the public cloud sphere in November and focus its resources elsewhere, instead of continuing to compete head-to-head with Amazon Web Services.

Joyent will specifically focus on so-called “single-tenant” cloud services, providing a dedicated chunk of computing infrastructure to a single customer, a service that it said Samsung currently uses. Joyent will also continue to provide cloud software for customers’ own data centers and servers.

Samsung acquired Joyent in June 2016 for US$125 million, taking one of very few independent competitors to Amazon Web Services off the market. According to Business Insider, Joyent was backed by Peter Thiel, Intel Capital, and others. The company was an early proponent of software container technology, which has since been popularized by US$1.3 billion startup Docker and an open source cloud project called Kubernetes.

“To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home,” Steve Tuck, president and chief operating officer of Joyent, said in a blog post published on the company’s website.

“Starting in November, we will be scaling back the availability of the Joyent Public Cloud to customers of our single-tenant cloud offering,” he said, adding that the company is “currently working on finding different homes” for its on-demand cloud customers.

“For some that will involve deploying the same open source software that powers the Joyent Public Cloud today in their own datacenter or on a BMaaS provider like SoftLayer with our help and ongoing support,” Tuck said.

“For those customers that don’t have the level of scale for their own datacenter or to run BMaaS, we have taken the time to evaluate different options to support this transition, and have been hard at work to make the experience as smooth as possible,” he added. “To that end, we are proud to say that we have many partners working with us to support the transition for customers who wish to move to alternative on-demand clouds.”

Verily partners with drug companies for clinical trials and outreach

Image by Darko Stojanovic from Pixabay

Verily, a health and life sciences company owned by Google’s parent company Alphabet, said on 21 May that it is moving into the clinical trials space through a partnership with pharmaceutical companies Novartis, Sanofi, Otsuka and Pfizer.

The goal of the partnership is to find new ways to reach patients, make it easier to enroll and participate in clinical trials, and to aggregate data from a range of sources, including electronic medical records and health-tracking wearable devices.

Using its existing Project Baseline platform, Verily said it hopes to “engage more patients and clinicians in research, increase the speed and ease of conducting studies and collect more comprehensive, higher quality data, including outside the four walls of a clinic”.

Clinical trials have traditionally been expensive processes that rely on outdated technologies, so many pharmaceutical companies are looking at the potential of leveraging the latest technology developed by companies like Google to refine and streamline the process.

According to Verily, the number of people participating in clinical research across the United States is less than 10 percent of the population, and challenges in research other than low numbers can include “data fragmentation, inefficient operations and limited value for patients”.

Using the Baseline platform, Verily, alongside its new industry partners – and with input from academic research institutions, patient-advocacy groups and health systems – said it hopes to “implement a more patient-centric, technology-enabled approach to research, and increase the number and diversity of clinical research participants”, and to develop “novel approaches to generating real-world evidence”.

Over the coming years, Novartis, Otsuka, Pfizer and Sanofi each plan to launch clinical studies leveraging the platform across diverse therapeutic areas, such as cardiovascular disease, oncology, mental health, dermatology and diabetes, Verily said.

Project Baseline launched in 2017 with the Project Baseline Health Study, aiming to “develop the technology and tools to help researchers create a more comprehensive, precise map of human health”.

This includes “devices, dashboards and analytical tools” to support both the patient experience and research; an “interoperable platform” to provide timely access to data to “streamline enrollment and management” of studies; and a “robust infrastructure” that “enables collection of dynamic data”.

Project Baseline has also built a “connected ecosystem with the aim of linking patients and advocacy groups with clinicians and health systems, integrating clinical research with clinical practice and making the process engaging”.

Verily anticipates that the new partnership will strengthen Project Baseline’s existing “ecosystem that will continue to expand and could help foster greater scientific discovery through the creation of next-generation research and development programs”.

“If we are truly to achieve the realization of patient-centered care, we must advance innovative research methodologies that focus on the patient and their needs, values and lifestyles,” Dr Reed Tuckson, chairman of the Project Baseline Advisory Board, said in a statement. “Project Baseline, in collaboration with these innovative companies, is well positioned to achieve this vision and have a transformative impact on research.”

“Evidence generation through research is the backbone of improving health outcomes,” Dr Jessica Mega, chief medical and scientific officer at Verily, added. “We need to be inclusive and encourage diversity in research to truly understand health and disease, and to provide meaningful insights about new medicines, medical devices and digital health solutions.”

“Novartis, Otsuka, Pfizer and Sanofi have been early adopters of advanced technology and digital tools to improve clinical research operations, and together we’re taking another step towards making research accessible and generating evidence to inform better treatments and care,” she said.

Novartis’ head of global development operations, Badhri Srinivasan, said the company was “advancing treatments that stand to change the course of disease, or even offer cures” but noted that its “ability to bring new medicines to patients quickly is often hampered by inefficient or limited participation in clinical trials”.

“By combining our complementary sets of expertise, we have the opportunity to develop a new trial recruitment model that gives patients and their physicians greater insight into the process of finding treatments for their disease, and how they can participate,” he concluded.

Describing the clinical research process as “antiquated in many ways”, Dr Debbie Profit, vice president of applied innovation and process improvement at Otsuka, said she hoped the company’s collaboration with Verily would help make clinical trials more accessible, precise and targeted to “obtain results and seek approvals sooner”.

“In clinical research, for several years now we have been pursuing game-changing possibilities to deploy digital technology and data science to re-engineer how we operate,” Rod MacKenzie, chief development officer and executive vice president of Pfizer, said. “The science behind our potential new medicines is cutting edge, yet many clinical trial processes have remained relatively unchanged over decades.”

“To bring scientific breakthroughs to patients more quickly and increase the diversity of the patient population in our clinical trials, Pfizer is committed to exploring new technologies and innovative ways to conduct clinical research, and we are proud to partner with Verily in that effort,” he added.

“Our scientific knowledge has exploded over the past generation, but efficiently bringing these new breakthroughs from lab bench to patient requires us to greatly improve the way we conduct these complex clinical trials,” Lionel Bascles, global head of clinical sciences and operations at Sanofi, said. “Project Baseline will allow us to better recruit appropriate patients and more efficiently integrate data for a greater understanding of diseases, reconnecting trials to our patients’ healthcare journeys.”

ESA rocket enters final stage ahead of 2020 launch

Image by WikiImages from Pixabay

The European Space Agency (ESA) said on 6 June that its Ariane 6 rocket has entered the final stages of its development ahead of its first commercial launch in 2020 and that the rocket’s launch zone at Europe’s Spaceport in French Guiana is near completion.

In an update published on its website, the ESA said hot firing tests of the Vinci engine that will power the rocket’s upper stage are now complete, and that firing tests of the Vulcain 2.1 engine that will power the core stage are close to completion at the DLR-Institute of Space Propulsion in Lampoldshausen, Germany. The P120C solid-fuel boosters that will be attached to the core booster will be tested in early 2020.

A new test facility, the P5.2, at the same DLR site was inaugurated in February and will enable testing of the complete Ariane 6 upper stage.

This upper stage will come from ArianeGroup in Bremen, Germany, which is currently focusing on engine integration, final operations and testing. MT Aerospace, also in Bremen, is supplying the fuel tanks.

An ArianeGroup facility in Les Mureaux, France, hosts the largest friction stir welding machines in Europe, used to produce the cryogenic tanks for Ariane 6’s core stage. The aft bay, which secures the Vulcain 2.1 engine to the core stage, is in production and being integrated at the same location.

The first qualification model of the P120C strap-on booster configured for Vega-C was static fired in January on the test bench at Europe’s Spaceport.

The second qualification model, configured for Ariane 6, will be tested at the beginning of next year. The insulated P120C motor case, 11.5 m long and 3.4 m in diameter, is made of carbon composite and is built in one piece by Avio in Colleferro, Italy.

At ArianeGroup in Issac and Le Haillan, France, new fully robotic production lines capable of increasing production by 30% will assemble the rear skirts and build the nozzles for the P120C strap-on solid rocket motors. MT Aerospace in Augsburg, Germany, is supplying the rear skirts.

RUAG Space in Switzerland has recently produced the first large half-shell of the fairing for Ariane 6. Built in one piece using carbon fibre, it was cured in an industrial oven instead of an autoclave – a process developed with the help of ESA.

ESA said that the P120C solid rocket motor configured for Ariane 6 will be test fired in Kourou early next year to qualify it for flight. Ariane 6’s upper stage will be test fired at the DLR-Institute of Space Propulsion in Lampoldshausen, Germany. A test model Ariane 6 will also start combined tests in Kourou, including a static fire of the core stage engine, the Vulcain 2.1.

Ariane 6 launch base near completion

According to ESA, the Ariane 6 launch base at Europe’s Spaceport is on track and near completion. The main structures include the launch vehicle assembly building, the mobile gantry and the launch pad.

The launch vehicle assembly building, used for horizontal integration and preparation of Ariane 6 stages before rollout to the launch pad, is complete and tools are now being installed.

The 90-metre tall metal frame of the mobile gantry is fully constructed and in February cladding started. The mobile gantry houses Ariane 6 until it is retracted before launch. The first rolling test of this 8200-tonne structure will be performed this summer.

The launch pad flame deflectors were installed at the end of April. They will funnel the fiery plumes of Ariane 6 at lift-off into the exhaust tunnels buried deep under the launch table. The nearby water tower has also been installed.

The first four levels of the mast have been mounted and welded and in February the integration started of the fluidic lines that will interface with the launch vehicle. The LH2 and the LOX plants that produce and store the liquid hydrogen and liquid oxygen needed to fuel the launcher’s engines are complete.

DHS says Chinese-made drones could pose data security risk

Image by Pexels from Pixabay

The US Department of Homeland Security (DHS) sent out an alert on 20 May warning that Chinese-made drones can relay sensitive flight data back to their manufacturers in China.

The alert, issued by DHS’s Cybersecurity and Infrastructure Security Agency and obtained by US cable news channel CNN, reportedly says that some drones may pose a risk to firms’ data privacy and information by sharing it on servers that could potentially be accessed by the Chinese government.

The products “contain components that can compromise your data and share your information on a server accessed beyond the company itself,” the alert said, warning pilots to take caution when buying Chinese drones, and to learn how to limit a drone’s access to networks and remove secure digital cards.

According to the alert, “the United States government has strong concerns about any technology product that takes American data into the territory of an authoritarian state that permits its intelligence services to have unfettered access to that data or otherwise abuses that access.”

“Those concerns apply with equal force to certain Chinese-made (unmanned aircraft systems) – connected devices capable of collecting and transferring potentially revealing data about their operations and the individuals and entities operating them, as China imposes unusually stringent obligations on its citizens to support national intelligence activities,” the alert added.

“Organizations that conduct operations impacting national security or the Nation’s critical functions must remain especially vigilant as they may be at greater risk of espionage and theft of proprietary information,” the alert concluded.

The agency did not name any specific drone manufacturers, but approximately 80 percent of all drones used in the US and Canada are produced by Shenzhen-based DJI, according to one industry analysis cited by CNN.

In a statement to CNN, DJI said that it gives customers “full and complete control over how their data is collected, stored, and transmitted,” adding that “customers can enable all the precautions DHS recommends.”

“At DJI, safety is at the core of everything we do, and the security of our technology has been independently verified by the US government and leading US businesses,” DJI added. “For government and critical infrastructure customers that require additional assurances, we provide drones that do not transfer data to DJI or via the internet, and our customers can enable all the precautions DHS recommends.

“Every day, American businesses, first responders, and US government agencies trust DJI drones to help save lives, promote worker safety, and support vital operations, and we take that responsibility very seriously,” DJI said.

The alert followed an executive order issued by the White House that effectively banned US firms from using telecommunications equipment produced by Chinese technology giant Huawei, which has recently drawn similar national security concerns of government spying.

Researchers develop AI tool to help detect brain aneurysms

Image by Raman Oza from Pixabay

Researchers at Stanford University in California have developed a new artificial intelligence tool that can identify areas of a brain scan that are likely to contain aneurysms.

In a paper published on 7 June in JAMA Network Open, researchers described how the tool, which was built using an algorithm called HeadXNet, boosted their ability to locate aneurysms – bulges in blood vessels in the brain that can leak or burst open, potentially leading to strokes, brain damage and death.

Researchers using the tool were able to find six more aneurysms per 100 scans containing aneurysms, and it “also improved consensus among the interpreting clinicians”.

While the success of HeadXNet in these experiments is promising, the team of researchers cautioned that “further investigation is needed to evaluate generalizability of the AI tool prior to real-time clinical deployment given differences in scanner hardware and imaging protocols across different hospital centers”. They plan to address such problems through “multi-center collaboration”.

Combing brain scans for signs of an aneurysm can mean scrolling through hundreds of images. Aneurysms come in many sizes and shapes and balloon out at tricky angles – some register as no more than a blip within the movie-like succession of images.

“There’s been a lot of concern about how machine learning will actually work within the medical field,” Allison Park, a Stanford graduate student in statistics and co-lead author of the paper, said. “This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool.”

“Search for an aneurysm is one of the most labor-intensive and critical tasks radiologists undertake,” Kristen Yeom, associate professor of radiology and co-senior author of the paper, added. “Given inherent challenges of complex neurovascular anatomy and potential fatal outcome of a missed aneurysm, it prompted me to apply advances in computer science and vision to neuroimaging.”

Yeom brought the idea to the AI for Healthcare Bootcamp run by Stanford’s Machine Learning Group, which is led by Andrew Ng, adjunct professor of computer science and co-senior author of the paper. The central challenge was to create an AI tool that could accurately process large stacks of three-dimensional images and “complement diagnostic practice”.

To train their algorithm, Yeom worked with Park and Christopher Chute, a graduate student in computer science, and outlined clinically significant aneurysms detectable on 611 computerized tomography (CT) angiogram head scans.

“We labelled, by hand, every voxel – the 3D equivalent to a pixel – with whether or not it was part of an aneurysm,” Chute, who is also co-lead author of the paper, said. “Building the training data was a pretty gruelling task and there were a lot of data.”

After training, the algorithm decides for each voxel of a scan whether an aneurysm is present, with the end result overlaid as a semi-transparent highlight on top of the scan, making it easy for clinicians to still see what the scan looks like without HeadXNet’s input.
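Stanford has not released HeadXNet’s code alongside the materials quoted here, but a minimal sketch of the overlay idea described above – a per-voxel probability map thresholded into a binary mask and drawn semi-transparently over a CT slice – might look like the following; the array names, sizes and 0.5 threshold are illustrative assumptions, not details from the paper.

```python
# Minimal sketch of a voxel-wise overlay, assuming a segmentation model has
# already produced per-voxel probabilities. All names and sizes are illustrative.
import numpy as np
import matplotlib.pyplot as plt

ct_volume = np.random.rand(256, 256, 64)    # placeholder CT angiogram volume (H x W x slices)
prob_volume = np.random.rand(256, 256, 64)  # placeholder per-voxel aneurysm probabilities

mask = prob_volume > 0.5                    # binary decision for each voxel

slice_idx = 32
plt.imshow(ct_volume[:, :, slice_idx], cmap="gray")              # the scan itself stays visible
overlay = np.ma.masked_where(~mask[:, :, slice_idx], mask[:, :, slice_idx])
plt.imshow(overlay, cmap="autumn", alpha=0.4)                    # semi-transparent highlight
plt.axis("off")
plt.show()
```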

“We were interested how these scans with AI-added overlays would improve the performance of clinicians,” Pranav Rajpurkar, a graduate student in computer science and co-lead author of the paper, said. “Rather than just having the algorithm say that a scan contained an aneurysm, we were able to bring the exact locations of the aneurysms to the clinician’s attention.”

Eight clinicians tested HeadXNet by evaluating a set of 115 brain scans for aneurysms, once with the help of HeadXNet and once without. With the tool, the clinicians correctly identified more aneurysms – and therefore reduced the “miss” rate – and they were more likely to agree with one another.

The researchers believe that the tool did not influence how long it took the clinicians to decide on a diagnosis or their ability to correctly identify scans without aneurysms – a guard against telling someone they have an aneurysm when they don’t.

The machine learning methods that form the core of HeadXNet could likely be trained to identify other diseases both inside and outside the brain, the researchers believe, but there is a “considerable hurdle” in integrating AI medical tools with daily clinical workflow in radiology across hospitals.

Current scan viewers aren’t designed to work with deep learning assistance, so the researchers had to custom-build tools to integrate HeadXNet within scan viewers. Furthermore, variations in real-world data – as opposed to the data on which the algorithm is tested and trained – could reduce model performance.

If the algorithm processes data from different kinds of scanners or imaging protocols, or a patient population that wasn’t part of its original training, it might not work as expected.

“Because of these issues, I think deployment will come faster not with pure AI automation, but instead with AI and radiologists collaborating,” Ng said. “We still have technical and non-technical work to do, but we as a community will get there and AI-radiologist collaboration is the most promising path.”

SEC Examiners Warn About Cloud Storage Risks for Broker-Dealers, Investment Advisers

Image by Alessandro D’Andrea from Pixabay

The US Securities and Exchange Commission (SEC) has issued a risk alert warning broker-dealers and investment advisers of the potential security risks of storing customer information on cloud-based platforms, where available security features such as encryption and password protection are not always used.

The risk alert – which was issued on 23 May – said that the US Office of Compliance Inspections and Examinations (OCIE) had identified security risks “associated with the storage of electronic customer records and information by broker-dealers and investment advisers in various network storage solutions, including those leveraging cloud-based storage”.

While “the majority of these network storage solutions offered encryption, password protection, and other security features designed to prevent unauthorized access, examiners observed that firms did not always use the available security features”, the alert continued, noting that “weak or misconfigured security settings on a network storage device could result in unauthorized access to information”.

In a summary, the OCIE said its staff had identified a number of specific concerns that could raise compliance issues under regulations governing information security and identity theft. The Safeguards Rule of Regulation S-P “requires every broker-dealer and investment adviser registered with the SEC to adopt written policies and procedures that address administrative, technical, and physical safeguards for the protection of customer records and information”.

Similarly, the Identity Theft Red Flags Rule of Regulation S-ID requires broker-dealers and investment advisers registered or required to be registered with the SEC to develop and implement a written identity theft prevention program designed to “detect, prevent, and mitigate identity theft in connection with the opening of a covered account or any existing covered account”.

The concerns identified in the alert include: misconfigured network storage solutions, inadequate oversight of vendor-provided network storage solutions, and insufficient data classification policies and procedures.

In some cases, the alert said, firms had not “adequately” configured the settings on their network storage solution of choice to “protect against unauthorized access” and some firms did not have “policies and procedures addressing the security configuration” of that “solution”.

Furthermore, some firms failed to ensure that security settings on “vendor-provided network storage solutions were configured in accordance with the firm’s standards”, and in some cases firms’ “policies and procedures did not identify the different types of data stored electronically by the firm and the appropriate controls for each type of data”, the alert said.
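As a concrete illustration of the kind of misconfiguration the alert describes, a compliance team might periodically audit whether its cloud storage buckets block public access. The sketch below uses AWS S3 and boto3 purely as an example – the OCIE alert does not name any vendor – and the bucket handling is deliberately minimal.

```python
# Illustrative audit of cloud storage settings; AWS S3 is only an example vendor.
import boto3
from botocore.exceptions import ClientError

s3 = boto3.client("s3")

for bucket in s3.list_buckets()["Buckets"]:
    name = bucket["Name"]
    try:
        config = s3.get_public_access_block(Bucket=name)["PublicAccessBlockConfiguration"]
        print(f"{name}: all public access blocked = {all(config.values())}")
    except ClientError:
        # A bucket with no public-access-block configuration at all deserves review.
        print(f"{name}: no public access block configured")
```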

According to the OCIE, implementation of a “configuration management program that includes policies and procedures governing data classification, vendor oversight, and security features will help to mitigate the risks incurred when implementing on-premise or cloud-based network storage solutions”.

OCIE staff observed “several features of effective configuration management programs, data classification procedures, and vendor management programs”, the alert said.

These included: policies and procedures to support installation, maintenance and review of the network storage solution; guidelines for security controls and “baseline security configuration standards”; and vendor management policies and procedures, including regular implementation of software patches and hardware updates.

The OCIE called for registered broker-dealers and investment advisers to “review their practices, policies, and procedures with respect to the storage of electronic customer information and to consider whether any improvements are necessary”. It also encouraged firms to “actively oversee any vendors they may be using for network storage to determine whether the service provided by the vendor is sufficient to enable the firm to meet its regulatory responsibilities”.

Study: Usefulness of AR in precision tasks in doubt

Image by StockSnap from Pixabay

A new study conducted by researchers at the University of Pisa in Italy has cast doubt on the effectiveness of mixed – or augmented – reality (AR) headsets for performing high-precision tasks.

The study, published on 6 May in IEEE Transactions on Biomedical Engineering, suggests that accomplishing an AR-assisted high-precision task that’s close at hand (i.e. within two meters) may not be feasible with existing technology.

Researchers conducted a small-scale experiment in which 20 Microsoft HoloLens users took a “connect the dots” test four times – with and without the AR headset, and with one or both eyes open – and performed better when using the naked eye.

With this type of AR, computer-generated content is projected onto a semi-transparent display in front of the user’s eyes, while they are still able to see real-world objects beyond the screen. A sequence of numbered dots was projected onto the HoloLens screen, and participants then had to draw the connecting lines using a ruler on real paper in front of them.

Study coordinator Dr Marina Carbone believes the difference in performance may be due to the way that the human eye focuses, and pointed to the fact that users were unaware of the difference in performance during follow-up interviews. They also said the headset made them feel more tired.

Essentially, the study found that human eyes aren’t quite up to the task of focusing on two separate objects – one real and one not – simultaneously when they are in close proximity to one another.

This discovery is likely to limit the usefulness of AR, which has been carving out a role in high-precision fields such as medicine and engineering, helping to guide skilled workers who maintain or use complicated machines and other equipment, such as jet engines, by giving visual cues as they work.

“Although there is increasing interest in using commercial optical see-through head-mounted displays [for] manual tasks that require accurate alignment of VR data to the actual target – such as surgical tasks – attention must be paid to the current limitations of available technology,” the study found.

While the study concluded that the HoloLens and other AR devices should not be used for high-precision manual tasks, the Pisa team is planning more research to deepen its understanding of when –  and how – AR in its current state might become useful.

Survey: Cost biggest hurdle for smart homes

Source: www.quotecatalog.com via Flickr

Cost is the primary concern for consumers considering turning their house into a smart home, according to a recent survey of 581 US-based adults who are familiar with Internet of Things (IoT) technologies, conducted by Washington DC-based B2B ratings and reviews firm Clutch on 30 May.

Smart home devices – home appliances and devices equipped with IoT technology, such as a smart thermostat or smart security system – allow people to control and monitor their homes remotely.

According to the survey, 53% of people currently own a smart home device and one-third (33%) plan to invest in one within the next three years. Smart home devices are the IoT technology people are most familiar with, ahead of wearable devices (75%) and digital assistants (76%).

According to Clutch, one reason people are familiar with smart home devices is “forced adoption,” which occurs because many home devices and appliances are now built with connected capabilities.

“I’m not sure there’s much of a choice anymore,” said Bob Klein, president of Digital Scientists, describing the prevalence of smart home devices on the market relative to “legacy” home appliances and devices.

Smart home devices allow people intimate access to information about their home from remote locations, which provides peace of mind and a sense of security. Remote control and monitoring (37%) is the most-cited benefit of owning a smart home device.

The benefits people experience from smart home devices may influence which devices they purchase. For example, smart security system users benefit most from remote monitoring. Smart security systems (50%) are the most commonly owned smart home device, ahead of smart thermostats (48%) and smart lights (46%).

Clutch also found that people are concerned about the cost of smart home devices, though most recognize that some devices can reduce utility costs in their home. Cost (26%) is the primary concern people have with smart home devices, ahead of security vulnerability (21%), according to the survey.

However, people also believe that smart home devices have cost benefits. Over half of those surveyed (53%) claimed that smart thermostats decrease utility costs – more than double the number who thought a smart thermostat increases utility costs (24%).

Google inks deal to acquire data analytics firm Looker

Image by Photo Mix from Pixabay

Google said on 6 June that it had signed a deal to acquire data analytics firm Looker for US$2.6 billion in an all-cash transaction. Upon the close of the acquisition, Looker will join Google Cloud.

In a somewhat lengthy blog post, Google Cloud CEO Thomas Kurian said that a “fundamental requirement for organizations wanting to transform themselves digitally is the need to store, manage, and analyze large quantities of data from a variety of sources”.

Many customers, he said, use Google Cloud for business analytics because it “offers . . . a broad and integrated suite of cloud services to ingest data in real time, cleanse it, process it, aggregate it in a highly scalable data warehouse and analyze it”.

“A rapidly growing list of customers are also migrating their existing enterprise data warehouses from legacy technology stacks to our business analytics offering,” he added. “These customers are choosing to do so because our offering is comprehensive, easy to use, cost effective and scales from a few gigabytes to multiple petabytes with excellent performance.”

Kurian expects the addition of Looker to extend Google Cloud’s business analytics offering by providing customers with the ability to “define business metrics once in a consistent way across data sources”, and by giving users access to an analytics platform for “business intelligence and use-case specific solutions” as well as a “flexible, embedded analytics product to collaborate on business decisions”.

The acquisition will build upon an existing partnership between the two companies, Kurian said, in which they share over 350 joint customers, including Buzzfeed, Hearst, King, Sunrun, WPP Essence and Yahoo! It is Kurian’s first major move since joining the company in November after leaving Oracle, and follows a June outage that affected multiple services across Google Cloud, G Suite and YouTube.

“One of the most important ways we advance Google’s mission is by helping other businesses realize theirs,” Sundar Pichai, Google’s CEO, said in a statement. “We are excited to welcome Looker to Google Cloud and look forward to working together to help our customers solve some of their biggest challenges.”

“Google Cloud is being used by many of the leading organizations in the world for analytics and decision-making. The combination of Google Cloud and Looker will enable customers to harness data in new ways to drive their digital transformation,” Kurian added. “We remain committed to our multi-cloud strategy and will retain and expand Looker’s capabilities to analyze data across Clouds.”

Looker CEO Frank Bien said that the combination of the two companies would advance Looker’s initial mission “to empower humans through the smarter use of data”.

“Now, we’ll have greater reach, more resources, and the brightest minds in both Analytics and Cloud Infrastructure working together to build an exciting path forward for our customers and partners,” he concluded. “Together, we are reinventing what it means to solve business problems with data at an entirely different scale and value point.”

“The combination of Google Cloud and Looker will enable us to further accelerate our leadership as a WordPress digital experience platform,” Heather Brunner, Chairwoman and CEO of WP Engine, said. “By combining our BigQuery data warehouse with extended [business intelligence] and visualization tools from Looker, we’ll be empowered with faster, more actionable data insights that will help drive our business forward and better serve our customers.”

Google said it expects the acquisition – which is subject to customary closing conditions, including the receipt of regulatory approvals – to be complete later this year.

NASCAR to use Amazon Web Services to archive historic library

Image by skeeze from Pixabay

American stock car racing series NASCAR said on 4 June that it had chosen to migrate over seventy years of historical footage to Amazon’s cloud service, which will also support a new video series called “This Moment in NASCAR History”.

Amazon Web Services (AWS) will help deliver the new show, which will premiere with the Monster Energy NASCAR Cup Series race at the Michigan International Speedway, with a new historical moment released each week for fans to watch on NASCAR.com.

The racing series also plans to leverage Amazon Rekognition, an AWS service that automatically adds metadata – such as car type, lap times, drivers and sponsors – to videos, theoretically making the search for specific footage much easier.

Using intelligent image and video analysis, Amazon Rekognition can automatically tag specific video frames – with information such as the lap, driver and car – “so the industry can easily search those tags to surface the most iconic moments from past races”.
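NASCAR has not published the details of its pipeline, but a rough sketch of what asynchronous video label detection looks like with Rekognition’s standard boto3 API is shown below; the bucket and file names are placeholders, and a production system would typically wait on an SNS notification rather than polling.

```python
# Rough sketch of Amazon Rekognition video label detection via boto3.
# Bucket/object names are placeholders; NASCAR's actual pipeline is not public.
import time
import boto3

rekognition = boto3.client("rekognition")

job = rekognition.start_label_detection(
    Video={"S3Object": {"Bucket": "example-race-archive", "Name": "1987-race-footage.mp4"}}
)

# Poll the asynchronous job until it finishes (SNS notifications are the usual approach).
while True:
    result = rekognition.get_label_detection(JobId=job["JobId"], SortBy="TIMESTAMP")
    if result["JobStatus"] in ("SUCCEEDED", "FAILED"):
        break
    time.sleep(10)

# Each detected label carries a millisecond timestamp, so footage can be searched by tag.
for item in result.get("Labels", []):
    print(item["Timestamp"], item["Label"]["Name"], round(item["Label"]["Confidence"], 1))
```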

By using AWS’s services, NASCAR expects to save thousands of hours of manual search time each year, and will be able to easily surface flashbacks like Dale Earnhardt Sr.’s 1987 “Pass in the Grass” or Denny Hamlin’s 2016 Daytona 500 photo finish, and quickly deliver these to fans via video clips on its website and social media channels.

“NASCAR is utilizing the breadth and depth of our cloud services to enhance the way people experience the sport and deliver even more impactful content to fans,” Mike Clayville, Vice President, Worldwide Commercial Sales at AWS, said in a statement published on the NASCAR website.

“We are pleased to welcome AWS to the NASCAR family,” Jon Tuck, NASCAR Chief Revenue Officer, added. “This relationship underscores our commitment to accelerate innovation and the adoption of cutting-edge technology across our sport.”

“NASCAR continues to be a powerful marketing vehicle and will position AWS’s cutting-edge cloud technology in front of industry stakeholders, corporate sponsors, broadcast partners and ultimately our fans,” he said.

“AWS’s . . . cloud technology will archive all of the defining moments in our sport’s deep-rooted history and will provide fans access to those unforgettable memories throughout the year,” Craig Neeb, Executive Vice President of Innovation and Development at NASCAR, concluded. “Speed and efficiency are key in racing and business which is why we chose AWS . . . to accelerate our migration to the cloud.”

Northrop Grumman performs static fire test on OmegA rocket

Image by WikiImages from Pixabay

Virginia-based global aerospace and defense technology company Northrop Grumman said on 30 May it had “successfully” completed a full-scale static fire test of the OmegA rocket – which it is developing for national security missions – in Promontory, Utah.

During the test, the rocket’s first stage motor fired for approximately 122 seconds, producing more than two million pounds of maximum thrust – roughly equivalent to that of eight-and-a-half jumbo jets – according to Northrop Grumman.

The company said that the test verified the performance of the motor’s ballistics, insulation and joints as well as control of the nozzle position. A full-scale static fire test of OmegA’s second stage is planned for this autumn, the company said.

The OmegA rocket’s design “leverages flight proven technologies from Northrop Grumman’s Pegasus, Minotaur and Antares rockets as well as the company’s interceptors, targets and strategic rockets”.

Northrop Grumman’s vehicle development team is working on the program in Arizona, Utah, Mississippi and Louisiana, with launch integration and operations planned at Kennedy Space Center in Florida, and Vandenberg Air Force Base in California. The program will also support thousands of jobs across the country in its supply chain.

In 2018, the US Air Force awarded Northrop Grumman a US$792 million Launch Service Agreement contract to complete development of the OmegA rocket and the required launch sites, with a projected launch date sometime in 2021.

The 2015 National Defense Authorization Act specified that a domestic next-generation rocket propulsion system “shall be developed by not later than 2019”, a deadline that Northrop Grumman said it believes it will meet based on the reported success of the 30 May test.

“The OmegA rocket is a top priority and our team is committed to provide the US Air Force with assured access to space for our nation’s most critical payloads,” Scott Lehr, vice president and general manager of flight systems for Northrop Grumman, said in a statement. “We committed to test the first stage of OmegA in spring 2019, and that’s exactly what we’ve done.”

“Congratulations to the entire team on today’s successful test,” Kent Rominger, OmegA vice president at Northrop Grumman, added. “OmegA’s design using flight-proven hardware enables our team to meet our milestones and provide an affordable launch system that meets our customer’s requirements and timeline.”

However, at a news conference following the test, Rominger reportedly told journalists that an anomaly was seen near the end of the test, as sparks and burning debris came out of the rocket’s nozzle. Noting that rocket engines are tested at both high and low temperatures, he said this test was at a high temperature of 90 degrees, “so you get a little bit higher thrust”.

“It appears that everything worked very well. At the very end when the engine was tailing off, we observed the aft exit cone, maybe a portion of it, doing something a little strange that we need to go further look into,” he added.

A large plume of black smoke seen during the test was normal, explained Rominger, who reportedly would not confirm whether a piece or pieces of the aft exit cone came apart during the test. He reiterated that the company would have to “dig into all that data [and] analyze it to see what happened” before coming to any definitive conclusions.

Michael Sanjume, chief of the Launch Enterprise Acquisition Division at the Air Force Space and Missile Systems Center, said that the Air Force would work with Northrop Grumman to analyze the data, a process that Rominger said would not affect the planned schedule for a full-scale static fire test of OmegA’s second stage later in the year.

Live footage of the test can be found here.

Apple 2019 design award winners announced at #WWDC

Image by Niek Verlaan from Pixabay

The winners of Apple’s annual design awards were announced on 3 June at the company’s annual Worldwide Developers Conference, which ran from 3 to 7 June in San Jose, California.

Apple said that the nine winning iOS developers – who hail from companies both large and small, all over the globe – were recognised “for outstanding artistry, technical achievement, user interface and application design”. Past winners include iTranslate Converse, Procreate, Complete Anatomy, Florence, and Alto’s Odyssey.

The winning apps represent a wide range of categories spanning photo editing, drawing, medical imaging, sports and games. According to Apple, they all offer a “unique approach to user interface design, sound design, graphics, controls or gameplay and take advantage of breakthrough Apple technologies such as haptics, Metal or Core ML”.

“iOS developers keep raising the bar. This year, we are especially proud to see so many apps and games putting health, fitness, creativity and exciting gameplay at the centre of their app experience,” Ron Okamoto, Apple’s vice president of Worldwide Developer Relations, said in a statement. “We congratulate all the Apple Design Award winners on their incredible creativity and ingenuity.”

These are the nine apps that won awards this year (with their descriptions from the Apple website):

Ordia – Loju LTD (England)

“Ordia is a one-finger action platformer that blends simple gameplay and rich visuals with a clever concept. As a new life-form exploring its primordial world, you’ll slingshot yourself through a burbling alien landscape. Playing couldn’t be simpler: Drag to aim, leap from dot to dot, avoid hairy-looking obstacles, and try to keep up as the game gets trickier over its dozens of levels.”

Flow by Moleskine – Moleskine Srl (Italy)

“Flow is a practical and artful note-taking app worthy of the Moleskine name, coupling powerful functionality and elegant design. It’s packed with helpful touches: a hidable interface to help you stay focused on the task at hand, colors for every last pen (everything from Corellian Gray to Electric Pink), and more paper options than a big-city print shop. If you’re serious about your scribbles, Flow is a notable choice.”

The Gardens Between – The Voxel Agents (Australia)

“The Gardens Between is a stirring example of how games can be powered by heart. Yes, it’s a surreal puzzler in which you control the passage of time instead of characters. But it’s also the story of two best friends and how their relationship is changed over the years. The beautifully crafted graphics alone make the game worth playing, but it’s the sweet narrative that truly hits home.”

Asphalt 9: Legends – Gameloft (France)

“Asphalt 9: Legends is no stranger to acclaim. For more than a decade, the Asphalt series has offered console-grade arcade racing with all the trimmings: incredible graphics, blazing speed, exceptional production value, and gameplay that pushes the boundaries of hardware performance. Like previous editions, Asphalt 9 is deep enough for advanced players but easy enough that anyone can get behind the wheel. It once again proves an unyielding truth: Racing games are awesome.”

Pixelmator Photo – Pixelmator Team (Lithuania)

“Pixelmator Photo manages to deliver impressive editing power in a beautiful, uncluttered interface. For beginners, Pixelmator is surprisingly approachable (your edits are conveniently nondestructive). For experts who wish to maximize every last pixel of their iPad screen, it offers a robust toolset and support for RAW images. Most helpful of all, it offers machine-learning-powered editing tools that have been trained using more than 20 million photos.”

ELOH – Broken Rules (Austria)

“ELOH is the rare puzzle game that keeps you pleasingly perplexed while also totally chilling you out. The goal is to shift blocks to help bouncing balls get from point A to point B — but with the aid of rhythm and percussion. Rearranging blocks builds a soothing beat that adds a whole new dimension. ELOH’s hand-painted visuals and charming animations belie the game’s trickiness, which sneakily compounds over its many levels. But the organic vibe and earthy soundtrack transform the game into your own moment of Zen.”

Butterfly iQ — Ultrasound – Butterfly Network (USA)

“Butterfly iQ is an innovative whole-body ultrasound app that’s CE-approved, FDA-cleared, and a total game changer. When coupled with a supported device, it enables mobile ultrasounds anywhere. Simple enough to be operated by laypeople but advanced enough to use AR and machine learning to guide users along the way, Butterfly iQ offers an uncluttered UI that can be operated with one hand. Its images can be uploaded to a secure cloud for remote review by a medical professional — or elated family members.”

Thumper: Pocket Edition – Drool LLC (USA)

“Thumper: Pocket Edition, a heavy-metal rhythm game, is all about blistering speed, glowing electric visuals, and adrenaline. The idea is simple enough — tap the screen to keep your metallic beetle on a sleek chrome track. But the masterful combination of ’80s neon, thumping electronica, and smooth 60-fps gameplay is like nothing else you’ve tapped.”

HomeCourt – The Basketball App – NEX Team Inc. (USA)

“HomeCourt has revolutionized basketball practice more than anything since the advent of the orange cone. Thanks to real-time A.I.-powered shot tracking, advice from real coaches, and clean design, HomeCourt has established itself as the go-to for players of all skill levels who want to grow their game. And its excellent social features let players interact with coaches thousands of miles away or in a gym down the street.”

You can watch the award ceremony and download the apps here.

Report: Bitcoin usage “miniscule” compared to traditional non-cash payment systems

Image by MichaelWuensch from Pixabay

In a report published on 10 May, the US Congressional Research Service (CRS) – the public research arm or think tank of the United States Congress – found that bitcoin usage is miniscule compared to traditional non-cash payment systems such as credit, debit, Venmo, Apple Pay and check payments.

The CRS discovered that while demand for cash in the US is steadily growing, its usage for payments is declining – but despite this trend, people have yet to turn towards bitcoin as an alternative like industry insiders may have hoped.

“To date, the migration away from cash has largely been in favour of traditional non-cash payment systems; however, some observers predict new alternative systems will play a larger role in the future,” the report said. “Such alternative systems aim to address some of the inefficiencies and risks of traditional non-cash systems, but face obstacles to achieving that aim and involve costs of their own.”

“Private systems using distributed ledger technology, such as cryptocurrencies, may not serve the main functions of money well and face challenges to widespread acceptance and technological scalability,” the report found.

The report also said that the price of bitcoin does not accurately reflect its overall demand. The CRS looked at how many times bitcoin is transferred per day and found that the number of transactions were “miniscule” compared to other, more traditional systems.

For example, in 2019 through 12 March “the bitcoin system averaged about 310,000 transactions per day globally, a pace that would result in about 113 million transactions per year”, while over 144 billion traditional non-cash payments were made in 2015 – almost 1,275 times the average number of yearly bitcoin transactions.
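For readers who want to sanity-check that comparison, the arithmetic is straightforward; in the quick sketch below only the 310,000-per-day and 144 billion figures come from the report, and the small rounding difference explains the “almost 1,275 times” phrasing.

```python
# Back-of-the-envelope check of the CRS comparison (input figures from the report).
btc_per_day = 310_000                  # average daily bitcoin transactions, early 2019
btc_per_year = btc_per_day * 365       # ~113 million transactions per year
non_cash_2015 = 144_000_000_000        # traditional non-cash payments made in 2015

print(f"{btc_per_year:,}")                       # 113,150,000
print(f"{non_cash_2015 / btc_per_year:,.0f}x")   # ~1,273x, i.e. roughly the 1,275x cited
```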

Researchers described this as a measure of the number of times that two parties have exchanged bitcoin; this kind of data does not tell us how many times bitcoin has been used to buy something.

“Some portion of those exchanges, possibly a significantly large portion, is driven by investors giving fiat currency to an exchange to buy and hold the Bitcoin as an investment. In those transfers, Bitcoin is not acting as money (i.e., not being exchanged for a good or service),” the report said.

The CRS said that it found it difficult to envision an economy where cash had been replaced, at least in the near future, but conceded that cash’s “hegemony as a payment system appears to have come to an end,” and that the ubiquity of its acceptance in the real-world seems somewhat precarious.

“If non-cash payment systems significantly displace cash and cash usage, and acceptance significantly declines, there would be a number of effects (both positive and negative) on the economy and society,” the CRS warned.

“Now or in the near future, policymakers may face decisions about whether to impede or hasten the decline of cash and consider the implications of doing so,” it added.

Twitter acquires deep-learning start-up Fabula AI

Image by William Iven from Pixabay

Social media giant Twitter announced on 3 June that it had acquired London-based deep learning start-up Fabula AI in an attempt to boost its machine learning expertise, feeding into an internal research group led by the company’s senior director of engineering Sandeep Pandey.

The research group’s stated aim is to “continually advance the state of machine learning, inside and outside Twitter”, focusing on “a few key strategic areas such as natural language processing, reinforcement learning, [machine learning] ethics, recommendation systems, and graph deep learning”.

Fabula AI’s researchers specialise in employing graph deep learning to detect network manipulation, applying machine learning techniques to network-structured data in order to analyse very large and complex datasets describing relations and interactions, and to extract signals in ways that traditional machine learning techniques cannot.
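Fabula’s actual models are proprietary and were not disclosed in the announcement, but a minimal sketch of the core graph-convolution idea behind graph deep learning – propagating each node’s features over a normalised adjacency matrix – is shown below; all names, sizes and the random data are illustrative assumptions.

```python
# Minimal sketch of one graph-convolution layer (Kipf & Welling style).
# Sizes, names and data are illustrative; Fabula's models are not public.
import numpy as np

num_nodes, in_dim, out_dim = 5, 8, 4
A = (np.random.rand(num_nodes, num_nodes) > 0.6).astype(float)   # adjacency, e.g. a retweet graph
X = np.random.randn(num_nodes, in_dim)                            # node features, e.g. account stats
W = np.random.randn(in_dim, out_dim)                              # learned weight matrix

A_hat = A + np.eye(num_nodes)                  # add self-loops
D_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt       # symmetric normalisation

H = np.maximum(A_norm @ X @ W, 0)              # ReLU(A_norm X W): one message-passing step
print(H.shape)                                 # (5, 4): a new embedding per node
```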

Twitter described the acquisition as a “strategic investment” and a “key driver” as the company works to “help people feel safe on Twitter and help them see relevant information”. Financial terms of the deal were not disclosed.

“Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience,” the social network said.

Fabula was founded by Michael Bronstein, Damon Mannion, Federico Monti and Ernesto Schmitt. It is led today by Bronstein, who currently serves as chief scientist, and Monti, now the company’s chief technologist; the pair began their collaboration while at the University of Lugano, Switzerland.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities,” Bronstein said in a post on Twitter’s blog. “Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service.”

Bronstein is currently the Chair in Machine Learning & Pattern Recognition at Imperial College, and will remain in that position while leading graph deep learning research at Twitter. He will be joined by long-time collaborators from academia (including current or former students) who research advances in geometric deep learning.

Twitter – along with other social media platforms and internet search engines – has recently come under fire from the media, academics and politicians for its perceived failure to properly deal with abuse and hate on its platform. It has previously been criticized for failing to take action against accounts that spread hate speech and still does not have a clear policy in place for dealing with white supremacist accounts.

Sony signs licensing agreement for haptic technology

Image by StockSnap from Pixabay

Japanese multinational technology corporation Sony Interactive Entertainment (SIE) has signed an agreement with haptic feedback technology company Immersion Corp. to license its “advanced haptics patent portfolio”, the California-based developer said on 13 May.

Under the agreement, SIE can also leverage Immersion’s haptics technology for gaming controllers and VR controllers. Immersion Corp stated that such technology could be used to simulate “sensations of pushing, pulling, grasping, and pulsing”, and claimed that “adding the sense of touch to games heightens the experience and keeps players engaged”.

Simply put, haptic technology refers to any device or hardware that simulates or creates the experience of touch by applying forces, vibrations or motions to the user. So when you die in an explosion during a video game and your controller vibrates, that’s haptics. Immersion describes it as “touch feedback technology”.

Immersion Corp doesn’t actually manufacture the hardware for haptic feedback; instead, it certifies suitable hardware and licenses its software, as well as over 3,500 issued or pending patents, to companies that want to add haptics to their products.

“Research shows that haptics makes games come to life, increasing players’ satisfaction and enjoyment through peripherals and games enhanced with the power of touch,” Ramzi Haidamus, Immersion’s CEO, said in a press release. “We are thrilled to work with SIE, a true pioneer in gaming, to provide incredible experiences to their customers.”

“We are pleased to reach agreement with Immersion,” Riley Russell, Chief Legal Officer for Sony Interactive Entertainment, added. “High quality haptics technology enhances the sense of presence and immersion for gamers, and this agreement is consistent with [our] desire to provide the best gaming experiences to gamers around the world.”

Immersion also said recently that it had signed a license agreement with Panasonic Avionics – a subsidiary of Japanese multinational electronics corporation Panasonic that produces in-flight entertainment and communications – to provide the company with “access to Immersion’s patented haptic technology for use in in-flight entertainment”.

“By incorporating haptics into in-flight entertainment systems, Panasonic Avionics is able to modernize the experience and make access to the system more intuitive and engaging. As capacitive touch buttons provide feedback, the person will know if the buttons have been activated,” Haidamus said in a statement. “We are pleased to work with Panasonic Avionics and look forward to seeing how the company continues to enhance its in-flight systems with touch technology.”

New VR game created by MIT looks to challenge how we think about race

Image by StockSnap from Pixabay

Researchers at the Massachusetts Institute of Technology (MIT) said on 16 May they had created a “new computational model that captures how individuals might have been taught to think about race in their upbringing”.

According to MIT, this new model of “racial and ethnic socialization” – which was presented at the AAAI 2019 Spring Symposium – could have the potential to enhance video game simulations while simultaneously “facilitating training for teachers and students who might encounter racial issues in the classroom”.

The researchers embedded the model in a virtual reality software prototype called “Passage Home VR” which “serves up an immersive story, grounded in social science work conducted in the physical world on how parents socialize their children to think about race and ethnicity, both verbally and nonverbally, and the impact on how individuals perceive and cope with racial stressors”.

In the game, the user assumes the virtual identity of an African American girl whose high school teacher has accused her of plagiarizing an essay when, in fact, the character is a passionate, high-achieving English student who took the assignment very seriously and wrote the essay herself.

As users navigate the discriminatory encounter with the teacher, the ways in which they respond to the teacher’s actions — with different body language, verbal responses and more — influence the outcome and feedback presented at the end of the game.

The researchers said they found that “the experiences people have [had] in their lives with how they have been socialized to think about the role of race and ethnicity in society — their racial and ethnic socialization — influence their behavior in the game”.

The majority of the 17 participants in the study who tested the game were identified by the game as “colorblind”, which researchers later “confirmed” through semi-structured verbal interviews. These users were “less likely to explicitly mention race” in “thematic analyses of the story of the game”.

A smaller number of users “displayed in-game behavior” that identified them as “having other socialization strategies” – such as “alertness to discrimination” or “preparation for bias” – the researchers added.

“People are socialized to think about race in a variety of ways — some parents teach their children to ignore race entirely, while others promote an alertness to racial discrimination or cultural pride,” D. Fox Harrell, professor of digital media and of artificial intelligence at MIT, said in a statement.

“The system we’ve developed captures this socialization, and we hope that it may become an effective tool for training people to be thinking more about racial issues, perhaps for teachers and students to minimize discrimination in the classroom,” he said, noting that users’ choices in the game “were aligned with their real-world socialization of these issues”.

Harrell, who is also director of the MIT Center for Advanced Virtuality, where he designs virtual technologies to stimulate social change, added that his lab is “preparing to deploy and study the efficacy of “Passage Home VR” as a professional development tool for teachers”.

“Learning with virtual reality can only be effective if we present robust simulations that capture experiences as close to the real-world as possible,” he said. “Our hope is that this work can help developers to make their simulations much richer, unlocking the power to address social issues.”

“As video game developers, we have the ability within virtual worlds to challenge the biased ideologies that exist in the physical world, rather than continue replicating them,” said Danielle Olson, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), whose dissertation project includes the work reported at the symposium.

“My hope is that this work can be a catalyst for dialogue and reflection by teachers, parents, and students in better understanding the devastating social-emotional, academic, and health impacts of racialized encounters and race-based traumatic stress,” she added.