Category Archives: Tech

European Commission says Google not doing enough to fight disinformation

Image by Photo Mix from Pixabay

In a report published on 17 May, the European Commission said that while Google, Facebook and Twitter had all improved their attempts to fight disinformation online, the ubiquitous search engine was still lacking in transparency regarding its political advertising.

The three online platforms are signatories to the Code of Practice against disinformation and have committed to report monthly on measures taken ahead of the European Parliament elections in May 2019.

This was the fourth of these reports, the last of which will be published at the end of June when the European elections are over, at which point the Commission will carry out a “comprehensive assessment” of the effectiveness of the Code. Should the results prove unsatisfactory, the EC may “propose further measures, including of a regulatory nature”.

According to the EC, Google reported that it had taken “additional measures” to improve scrutiny of ad placements in the EU, and noted that it had created a publicly accessible political ad library and enabled searches through its API.

The search engine also detailed its ongoing efforts to “provide transparency around issue-based advertising” but said that a solution would not be in place before the European elections. The EU noted that Google “again” provided data on “the removal of a significant number of YouTube channels for violation of its policies on spam, deceptive practices and scams, and impersonation”.

For its part, Facebook reported on measures it had taken in the EU against ads that violated its policies for containing “low quality, disruptive, misleading or false content or trying to circumvent its systems”, and the opening of its new elections operation center in Dublin, Ireland.

The social media giant said it had taken down a “coordinated inauthentic behavior network originating from Russia and focusing on Ukraine” but did not mention whether this network had affected users in the EU.

Twitter reported on ads that had been “rejected for not complying with its policies on unacceptable business practices and quality ads” and “provided information on ads not served because of uncompleted certification process that is obligatory for political campaign advertisers”.

It also detailed a new “election integrity policy” and provided figures on measures against “spam and fake accounts” but did not provide any further insight on these measures, such as how they might relate specifically to activity in the EU.

In a joint statement, the EU’s Vice President for the Digital Single Market Andrus Ansip and three EU Commissioners (Věra Jourová, Julian King and Mariya Gabriel) said they recognized the companies’ continued progress on “their commitments to increase transparency and protect the integrity of the upcoming elections”.

They welcomed the “robust measures that all three platforms have taken against manipulative behavior on their services, including coordinated disinformation operations”, such as the Russian government’s alleged attempts to influence elections in the US and the United Kingdom. They categorized the companies’ efforts as a “clear improvement”.

However, they found that the companies needed to do more to “strengthen the integrity” of their services and suggested that the data provided lacked enough detail for “an independent and accurate assessment” of how their policies had actually contributed to reducing the spread of disinformation in the EU.

“We regret . . . that Google and Twitter were not able to develop and implement policies for the identification and public disclosure of issue-based ads, which can be sources of divisive public debate during elections, hence prone to disinformation,” they added.

They called for the companies to “step up” efforts to broaden cooperation with fact checkers in the EU’s member states and to “empower users and the research community” in the wake of the recent European elections.

The companies need to engage with “traditional media” to develop “transparency and trustworthiness indicators” for information sources so that users are offered “a fair choice of relevant, verified information”, they added.

Finally, they suggested that the companies would also benefit from closer cooperation with the research community to identify and access relevant datasets to enable “better detection and analysis” of disinformation, “better monitoring” of the implementation and impact of the Code, and independent oversight of algorithms.

Maxar Technologies wins US$375 million Gateway lunar station contract

Image by Ponciano from Pixabay

The US National Aeronautics and Space Administration (NASA) said on 23 May it had selected Colorado-based space technology company Maxar Technologies to design, build and launch the core element of its Gateway lunar station: its power, propulsion and communications systems.

This power and propulsion element is a high-power, 50-kilowatt solar electric propulsion spacecraft – three times more powerful than current capabilities – that will serve as a reusable command and service module in lunar orbit.

It will be fitted with large, roll-out solar panels and high-power xenon thrusters that will allow it to change orbits to launch missions to different parts of the Moon, and – as a mobile command and service module – it will provide a communications relay for human and robotic expeditions to the lunar surface, starting at the Moon’s South Pole.

During lunar expeditions, a team of crew members will remain aboard the Gateway to carry out scientific investigations while a separate team explores the surface. All crew members will ultimately board the Orion spacecraft for the return to Earth.

The US$375 million contract awarded to Maxar includes an indefinite-delivery/indefinite-quantity portion and begins with a 12-month base period of performance, followed by a 26-month option, a 14-month option and two 12-month options.

According to NASA, the spacecraft design will be completed during the base period, after which “the exercise of options will provide for the development, launch, and in-space flight demonstration”. The flight demonstration is expected to last as long as one year, during which the spacecraft will be fully owned and operated by Maxar.

Following a successful demonstration, NASA will have the option to acquire the spacecraft for use as the first element of the Gateway. The agency is targeting launch of the power and propulsion element on a commercial rocket in late 2022. 

The Gateway is a central piece in NASA’s plan to send astronauts to the Moon by 2024, which is based on a two-phase approach: the first is focused on speed – landing on the Moon in the next five years – while the second plans to establish a sustained human presence on and around the Moon by 2028. NASA then plans to use what it learns on the Moon to prepare to send astronauts to Mars.

“The power and propulsion element is the foundation of Gateway, and a fine example of how partnerships with US companies can help expedite [our] return to the Moon with the first woman and next man by 2024,” NASA Administrator Jim Bridenstine said in a statement. “It will be the key component upon which we will build our lunar Gateway outpost, the cornerstone of NASA’s sustainable and reusable Artemis exploration architecture on and around the Moon.”

“We’re excited to demonstrate our newest technology on the power and propulsion element. Solar electric propulsion is extremely efficient, making it perfect for the Gateway,” added Mike Barrett, power and propulsion element project manager at NASA’s Glenn Research Center in Cleveland.

“This system requires much less propellant than traditional chemical systems, which will allow the Gateway to move more mass around the Moon, like a human landing system and large modules for living and working in orbit,” he said.

“The Gateway can be positioned in a variety of orbits around the Moon, allows for access to the entire lunar surface, and supports development of a reusable human lander system,” William Gerstenmaier, associate administrator for the Human Exploration and Operations Mission Directorate at NASA Headquarters, said when NASA first accepted the challenge to return to the Moon by 2024.

“Resiliency and reusability are key for sustainable human lunar exploration, and that’s what the Gateway gives us,” he added. “Furthermore, there’s broad interest from the international community in supporting it as well.”

Sony and Microsoft to “explore strategic partnership”

Image by Efes Kitap from Pixabay

Japanese multinational technology corporation Sony Corporation and American multinational technology company Microsoft said on 16 May that they had signed a memorandum of understanding (MoU) to “partner on new innovations to enhance customer experiences in their direct-to-consumer entertainment platforms and AI solutions”.

The two companies said they would explore joint development of “future cloud solutions” in Microsoft Azure, the company’s cloud computing service for building, testing, deploying, and managing applications and services through Microsoft-managed data centers. These solutions would “support their respective game and content-streaming services”.

In addition, Sony and Microsoft said they planned to “explore the use of current Microsoft Azure datacenter-based solutions for Sony’s game and content streaming services” and to collaborate “in the areas of semiconductors and AI [artificial intelligence]”.

For semiconductors, this includes potential joint development of new intelligent image sensor solutions, the companies said, integrating Sony’s image sensors with Microsoft’s AI technology in a “hybrid manner across cloud and edge”. By leveraging this combined technology, they hope to “provide enhanced capabilities for enterprise customers”.

In terms of AI, they said they would “explore incorporation of Microsoft’s . . . AI platform and tools in Sony consumer products” to provide what they described as “highly intuitive and user-friendly AI experiences”. They did not offer any details of what these products and experiences might practically look like or how they might function for users.

By working together, the two companies aim “to deliver more enhanced entertainment experiences for their worldwide customers”, which they said would “include building better development platforms for the content creator community”.

“Sony is a creative entertainment company with a solid foundation of technology,” Kenichiro Yoshida, president and CEO of Sony, said in a statement. “We collaborate closely with a multitude of content creators that capture the imagination of people around the world, and through our cutting-edge technology, provide the tools to bring their dreams and vision to reality.”

“PlayStation® itself came about through the integration of creativity and technology,” he added. “Our mission is to seamlessly evolve this platform as one that continues to deliver the best and most immersive entertainment experiences, together with a cloud environment that ensures the best possible experience, anytime, anywhere.”

He noted that – although they have competed in some areas – the two companies have been business partners for “many years” and said he believed their “joint development of future cloud solutions will contribute greatly to the advancement of interactive content”.

“Additionally, I hope that in the areas of semiconductors and AI, leveraging each company’s cutting-edge technology in a mutually complementary way will lead to the creation of new value for society,” he concluded.

“Sony has always been a leader in both entertainment and technology, and the collaboration we announced today builds on this history of innovation,” Satya Nadella, CEO of Microsoft, added. “Our partnership brings the power of Azure and Azure AI to Sony to deliver new gaming and entertainment experiences for customers.”

The two companies said that they would share additional information about the partnership when it becomes available.

Microsoft plans new “sustainable” data centers in Sweden

Image by Efes Kitap from Pixabay

American multinational technology company Microsoft has announced that it is planning to build two new data centers in the Swedish cities of Gävle and Sandviken, just north of Stockholm, that will be powered by 100 percent renewable energy sources. It is also aiming for the two centers to achieve zero-waste operations.

Microsoft President Brad Smith said that, by the end of this year, the company intends to be powering its datacenters with 60 percent renewable energy, and will aim to reach 70 percent renewable energy by 2023, on the path to 100 percent. The company has operated as carbon neutral since 2012 and “is continuously increasing the amount of energy the company uses from renewable sources – wind, solar, and hydropower”.

It plans to collaborate with Vattenfall –  a state-owned Swedish power company that also generates power in Denmark, Finland, Germany, the Netherlands and the United Kingdom – on the sourcing and supply of renewable energy for the two planned datacenters.

The two companies said they planned to collaborate to develop solutions to reduce the carbon footprint of the datacenters and to construct new power infrastructure to provide stable power for the facilities and the surrounding areas in Sweden in the coming years. They anticipate that, over time, the new power infrastructure will help further reduce the carbon footprint of the datacenters.

Microsoft and Vattenfall previously announced “the largest wind energy deal in the Netherlands in 2017”, in which Microsoft purchased 100 percent of the wind energy generated from a 180-megawatt wind farm adjacent to its local datacenter operations in the Netherlands. The wind farm is being constructed and operated by Vattenfall in the Wieringermeer Polder, north of Amsterdam.

“We intend for our datacenters in Sweden to be among the most sustainably designed and operated in the world with the ultimate ambition of achieving zero-carbon operations,” Noelle Walsh, CVP of Cloud Operations & Innovation at Microsoft Corp, said in a statement. “The datacenter design we’re developing will further Microsoft’s ongoing commitment to transition to a sustainable, low-carbon future.”

Vattenfall’s Senior Vice President of Strategic Development, Andreas Regnell, said the company was “fully committed” to helping its customers live “fossil free” within one generation and that the partnership with Microsoft fit “very well” with Vattenfall’s overall strategy.

The new datacenters in Sweden are “in anticipation of future needs for cloud and internet services as demand in Europe continues to grow”, Microsoft said. In its recent Q3 2019 earnings report, Microsoft told investors that demand for its cloud offerings drove commercial cloud revenue to $9.6 billion in its most recent quarter, up 41 percent year-over-year.

The datacentres in Sweden will add to the company’s existing European datacentre footprint, joining the ranks of its other planned datacentres in Norway and Switzerland, and already available datacentres in Austria, France, Finland, Germany, Ireland, the Netherlands and the United Kingdom.

As part of a drive to focus on research and development for greater efficiency and increased renewable energy across its global infrastructure, Microsoft said it plans to launch a new data-driven circular cloud initiative using the Internet of Things (IoT), blockchain and artificial intelligence (AI) to monitor performance and “streamline the reuse, resale and recycling of datacentre assets, including servers”.

IAU calls for stricter regulation of satellite constellations

Image by PIRO4D from Pixabay

The International Astronomical Union (IAU), an umbrella organization representing over 13,500 astronomy professionals, called on 3 June for stricter regulations controlling the growth of so-called satellite constellations.

A satellite constellation is the name given to a network of artificial satellites operating together under shared control, with orbits coordinated so that their ground coverage overlaps and, at any given time, at least one satellite is visible above the local horizon.

The deployment of these groups of satellites – usually in low-Earth orbits – has begun to increase rapidly over the last few years with plans to deploy potentially tens of thousands of them, at which point satellite constellations will come to outnumber all previously launched satellites.

The IAU has scientific concerns about these satellites as their surfaces are made of highly reflective metals – this is why they often appear to the naked eye on Earth as slow moving dots. These reflections are not always visible to the naked eye but “can be detrimental to the sensitive capabilities of large ground-based astronomical telescopes, including the extreme wide-angle survey telescopes currently under construction”.

Secondly, the IAU is concerned that “despite . . . efforts to avoid interfering with radio astronomy frequencies, aggregate radio signals emitted from the satellite constellations can still threaten astronomical observations at radio wavelengths”.

“Recent advances in radio astronomy, such as producing the first image of a black hole or understanding more about the formation of planetary systems, were only possible through concerted efforts in safeguarding the radio sky from interference,” the union said.

The IAU recommended that “all stakeholders in this new and largely unregulated frontier of space utilization work collaboratively to their mutual advantage”, while acknowledging that “significant effort has been put into mitigating the problems with the different satellite constellations”.

Satellite constellations “can pose a significant or debilitating threat to important existing and future astronomical infrastructures”, the IAU said.

It urged designers, developers and policy makers to work with the astronomical community to “analyze and understand” the impacts of satellite constellations, and to devise a regulatory framework “to mitigate or eliminate the [potential] detrimental impacts on scientific exploration as soon as practical”.

“The IAU’s Commission B7 Protection of Existing and Potential Observatory Sites welcomes the opportunity to work proactively with everyone involved in these efforts,” the union concluded.

The union’s statement comes as some astronomers have expressed concerns that plans by Elon Musk’s California-based aerospace manufacturer SpaceX to launch over 12,000 Starlink satellites will undermine their ability to study the universe with the use of ground-based telescopes as the satellites may reflect sunlight towards Earth. Without naming Starlink, the IAU expressed similar concerns in its statement.

The Royal Astronomical Society (RAS), the Association of Universities for Research in Astronomy (AURA) and the European Southern Observatory (ESO) also released statements on the Starlink launch and increasing number of satellite constellations soon to orbit the Earth.

The RAS is concerned that “increasing the number of satellites so significantly presents a challenge to ground-based astronomy” as the new “networks could make it much harder to obtain images of the sky without the streaks associated with satellites, and thus compromise astronomical research”.

It noted that the scale of the planned projects means that “there is also the prospect of a significant and lasting change to the views of the night sky until now enjoyed throughout human history and pre-history”. The night sky is part of humanity’s cultural heritage, the society said, and thus “deserves protection”.

While there has been no apparent consultation between SpaceX and the scientific community in advance of the Starlink launch, the society noted that after initial press reports Musk expressed a desire to minimize the impact on astronomy, an offer which the RAS welcomes.

It urged SpaceX – and other satellite providers such as OneWeb, Amazon and Telesat – to work with “scientists, engineers and others to mitigate the effects of the new constellations”, and to consider the potential impact on “human heritage”.

Likewise, AURA – the managing organization for many ground-based telescopes for the National Science Foundation (NSF), both extant and under construction – expressed concern that the Starlink constellation could have a significant (and negative) impact on research using such telescopes.

This would include, for example, the Large Synoptic Survey Telescope (LSST), under construction by NSF in Chile and slated to begin wide-field imaging of the sky in 2021. LSST will create an astronomical survey that depends on dark skies for its core science.

AURA noted that LSST’s frequent imaging of the same region of sky “will be a mitigating factor for Starlink interference, providing enough uncontaminated images to reject the images that contain satellite trails or other anomalies”.

In the case of the full constellation of Starlink satellites, AURA said that initial calculations indicate that LSST images would – on average – contain about one satellite trail per visit for an hour or two after sunset and before sunrise, so the number of pixels affected would be around 0.01 percent or smaller.

“Therefore, for LSST, even a constellation of about 10,000 Starlink satellites would be a nuisance rather than a real problem,” the statement said.

However, AURA emphasized that “the impact of satellite constellations on other AURA telescopes that have wider fields, longer exposures, and/or less sophisticated data processing pipelines may be much more significant”, and that Starlink may be only the first of many technologies that could affect these kinds of studies.

Joyent discontinues public cloud offering

Image by Nikin from Pixabay

California-based software and services firm Joyent said on 6 June that it will discontinue its public cloud offering – which competed with the likes of market leader Amazon Web Services, Google Cloud and Microsoft – three years after it was acquired by South Korean technology conglomerate Samsung.

Like its better-known competitors, Joyent provided developers with the opportunity to rent computing capacity from its datacenters on a pay-as-you-go basis, but it will leave the public cloud sphere in November and focus its resources elsewhere instead of continuing to compete head-to-head with Amazon Web Services.

Joyent will specifically focus on so-called “single-tenant” cloud services, providing a dedicated chunk of computing infrastructure to a single customer, a service that it said Samsung currently uses. Joyent will also continue to provide cloud software for customers’ own data centers and servers.

Samsung acquired Joyent in June 2016 for US$125 million, taking one of very few independent competitors to Amazon Web Services off the market. According to Business Insider, Joyent was backed by Peter Thiel, Intel Capital, and others. The company was an early proponent of software container technology, which has since been popularized by US$1.3 billion startup Docker and an open source cloud project called Kubernetes.

“To all of our public cloud customers, we will work closely with you over the coming five months to help you transition your applications and infrastructure as seamlessly as possible to their new home,” Steve Tuck, president and chief operating officer of Joyent, said in a blog post published on the company’s website.

“Starting in November, we will be scaling back the availability of the Joyent Public Cloud to customers of our single-tenant cloud offering,” he said, adding that the company is “currently working on finding different homes” for its on-demand cloud customers.

“For some that will involve deploying the same open source software that powers the Joyent Public Cloud today in their own datacenter or on a BMaaS provider like SoftLayer with our help and ongoing support,” Tuck said.

“For those customers that don’t have the level of scale for their own datacenter or to run BMaaS, we have taken the time to evaluate different options to support this transition, and have been hard at work to make the experience as smooth as possible,” he added. “To that end, we are proud to say that we have many partners working with us to support the transition for customers who wish to move to alternative on-demand clouds.”

Verily partners with drug companies for clinical trials and outreach

Image by Darko Stojanovic from Pixabay

Verily, a health and life sciences company owned by Google’s parent company Alphabet, said on 21 May it is moving into the clinical trials space through a partnership with pharmaceuticals companies Novartis, Sanofi, Otsuka and Pfizer.

The goal of the partnership is to find new ways to reach patients, make it easier to enroll and participate in clinical trials, and to aggregate data from a range of sources, including electronic medical records and health-tracking wearable devices.

Using its existing Project Baseline platform, Verily said it hopes to “engage more patients and clinicians in research, increase the speed and ease of conducting studies and collect more comprehensive, higher quality data, including outside the four walls of a clinic”.

Clinical trials have traditionally been expensive processes that rely on outdated technologies, so many pharmaceutical companies are looking at the potential of leveraging the latest technology developed by companies like Google to refine and streamline the process.

According to Verily, the number of people participating in clinical research across the United States is less than 10 percent of the population, and challenges in research other than low numbers can include “data fragmentation, inefficient operations and limited value for patients”.

Using the Baseline platform, Verily, alongside its new industry partners – and with input from academic research institutions, patient-advocacy groups and health systems – said it hopes to “implement a more patient-centric, technology-enabled approach to research, and increase the number and diversity of clinical research participants”, and to develop “novel approaches to generating real-world evidence”.

Over the coming years, Novartis, Otsuka, Pfizer and Sanofi each plan to launch clinical studies leveraging the platform across diverse therapeutic areas, such as cardiovascular disease, oncology, mental health, dermatology and diabetes, Verily said.

Project Baseline launched in 2017 with the Project Baseline Health Study, aiming to “develop the technology and tools to help researchers create a more comprehensive, precise map of human health”.

This includes “devices, dashboards and analytical tools” to support both the patient experience and research; an “interoperable platform” to provide timely access to data to “streamline enrollment and management” of studies; and a “robust infrastructure” that “enables collection of dynamic data”.

Project Baseline has also built a “connected ecosystem with the aim of linking patients and advocacy groups with clinicians and health systems, integrating clinical research with clinical practice and making the process engaging”.

Verily anticipates that the new partnership will strengthen Project Baseline’s existing “ecosystem that will continue to expand and could help foster greater scientific discovery through the creation of next-generation research and development programs”.

“If we are truly to achieve the realization of patient-centered care, we must advance innovative research methodologies that focus on the patient and their needs, values and lifestyles,” Dr Reed Tuckson, chairman of the Project Baseline Advisory Board, said in a statement. “Project Baseline, in collaboration with these innovative companies, is well positioned to achieve this vision and have a transformative impact on research.”

“Evidence generation through research is the backbone of improving health outcomes,” Dr Jessica Mega, chief medical and scientific officer at Verily, added. “We need to be inclusive and encourage diversity in research to truly understand health and disease, and to provide meaningful insights about new medicines, medical devices and digital health solutions.”

“Novartis, Otsuka, Pfizer and Sanofi have been early adopters of advanced technology and digital tools to improve clinical research operations, and together we’re taking another step towards making research accessible and generating evidence to inform better treatments and care,” she said.

Novartis’ head of global development operations, Badhri Srinivasan, said the company was “advancing treatments that stand to change the course of disease, or even offer cures” but noted that its “ability to bring new medicines to patients quickly is often hampered by inefficient or limited participation in clinical trials”.

“By combining our complementary sets of expertise, we have the opportunity to develop a new trial recruitment model that gives patients and their physicians greater insight into the process of finding treatments for their disease, and how they can participate,” he concluded.

Describing the clinical research process as “antiquated in many ways”, Dr Debbie Profit, vice president of applied innovation and process improvement at Otsuka, said she hoped the company’s collaboration with Verily would help make clinical trials more accessible, precise and targeted to “obtain results and seek approvals sooner”.

“In clinical research, for several years now we have been pursuing game-changing possibilities to deploy digital technology and data science to re-engineer how we operate,” Rod MacKenzie, chief development officer and executive vice president of Pfizer, said. “The science behind our potential new medicines is cutting edge, yet many clinical trial processes have remained relatively unchanged over decades.”

“To bring scientific breakthroughs to patients more quickly and increase the diversity of the patient population in our clinical trials, Pfizer is committed to exploring new technologies and innovative ways to conduct clinical research, and we are proud to partner with Verily in that effort,” he added.

“Our scientific knowledge has exploded over the past generation, but efficiently bringing these new breakthroughs from lab bench to patient requires us to greatly improve the way we conduct these complex clinical trials,” Lionel Bascles, global head of clinical sciences and operations at Sanofi, said. “Project Baseline will allow us to better recruit appropriate patients and more efficiently integrate data for a greater understanding of diseases, reconnecting trials to our patients’ healthcare journeys.”

ESA rocket enters final stage ahead of 2020 launch

Image by WikiImages from Pixabay

The European Space Agency (ESA) said on 6 June that its Ariane 6 rocket has entered the final stages of its development ahead of its first commercial launch in 2020 and that the rocket’s launch zone at Europe’s Spaceport in French Guiana is near completion.

In an update published on its website, ESA said hot firing tests of the Vinci engine that will power the rocket’s upper stage are now complete, and that firing tests of the Vulcain 2.1 engine that will power the core stage are close to completion at the DLR-Institute of Space Propulsion in Lampoldshausen, Germany. The P120C solid-fuel boosters that will be attached to the core booster will be tested in early 2020.

A new test facility, the P5.2, was inaugurated at the same DLR site in February and will enable testing of the complete Ariane 6 upper stage.

This upper stage will come from ArianeGroup in Bremen, Germany, which is currently focusing on engine integration, final operations and testing. MT Aerospace, also in Bremen, is supplying the fuel tanks.

An ArianeGroup facility in Les Mureaux, France, hosts the largest friction stir welding machines in Europe, used to produce the cryogenic tanks for Ariane 6’s core stage. The aft bay, which secures the Vulcain 2.1 engine to the core stage, is in production and being integrated at the same location.

The first qualification model of the P120C strap-on booster configured for Vega-C was static fired in January on the test bench at Europe’s Spaceport.

The second qualification model, configured for Ariane 6, will be tested at the beginning of next year. The insulated P120C motor case – 11.5 m long and 3.4 m in diameter – is made of carbon composite and built in one piece by Avio in Colleferro, Italy.

At ArianeGroup sites in Issac and Le Haillan, France, new fully robotic production lines – capable of increasing production by 30% – assemble the rear skirts and build nozzles for the P120C strap-on solid rocket motors. MT Aerospace in Augsburg, Germany, is supplying the rear skirts.

RUAG Space in Switzerland has recently produced the first large half-shell of the fairing for Ariane 6. Built in one piece using carbon fibre, it was cured in an industrial oven instead of an autoclave – a process developed with the help of ESA.

ESA said that the P120C solid rocket motor configured for Ariane 6 will be test fired in Kourou early next year to qualify it for flight. Ariane 6’s upper stage will be test fired at the DLR-Institute of Space Propulsion in Lampoldshausen, Germany. A test model Ariane 6 will also start combined tests in Kourou, including a static fire of the core stage engine, the Vulcain 2.1.

Ariane 6 launch base near completion

According to ESA, the Ariane 6 launch base at Europe’s Spaceport is on track and near completion. The main structures include the Launch Vehicle Assembly Building, the mobile gantry, and launch pad.

The Launch Vehicle Assembly Building, used for horizontal integration and preparation of Ariane 6 stages before rollout to the launch pad, is complete, and tools are now being installed.

The 90-metre-tall metal frame of the mobile gantry is fully constructed, and cladding work started in February. The mobile gantry houses Ariane 6 until it is retracted before launch. The first rolling test of this 8,200-tonne structure will be performed this summer.

The launch pad flame deflectors were installed at the end of April. They will funnel the fiery plumes of Ariane 6 at lift-off into the exhaust tunnels buried deep under the launch table. The nearby water tower has also been installed.

The first four levels of the mast have been mounted and welded, and integration of the fluidic lines that will interface with the launch vehicle started in February. The LH2 and LOX plants that produce and store the liquid hydrogen and liquid oxygen needed to fuel the launcher’s engines are complete.

DHS says Chinese-made drones could pose data security risk

Image by Pexels from Pixabay

The US Department of Homeland Security (DHS) sent out an alert on 20 May warning that Chinese-made drones can relay sensitive flight data back to their manufacturers in China.

The alert, issued by DHS’s Cybersecurity and Infrastructure Security Agency and obtained by US cable news channel CNN, reportedly says that some drones may pose a risk to firms’ data privacy and information by sharing it on servers that could potentially be accessed by the Chinese government.

The products “contain components that can compromise your data and share your information on a server accessed beyond the company itself,” the alert said, warning pilots to take caution when buying Chinese drones, and to learn how to limit a drone’s access to networks and remove secure digital cards.

According to the alert, “the United States government has strong concerns about any technology product that takes American data into the territory of an authoritarian state that permits its intelligence services to have unfettered access to that data or otherwise abuses that access.”

“Those concerns apply with equal force to certain Chinese-made (unmanned aircraft systems) – connected devices capable of collecting and transferring potentially revealing data about their operations and the individuals and entities operating them, as China imposes unusually stringent obligations on its citizens to support national intelligence activities,” the alert added.

“Organizations that conduct operations impacting national security or the Nation’s critical functions must remain especially vigilant as they may be at greater risk of espionage and theft of proprietary information,” the alert concluded.

The agency did not name any specific drone manufacturers, but approximately 80 percent of all drones used in the US and Canada are produced by Shenzhen-based DJI, according to one industry analysis cited by CNN.

In a statement to CNN, DJI said that it gives customers “full and complete control over how their data is collected, stored, and transmitted,” adding that “customers can enable all the precautions DHS recommends.”

“At DJI, safety is at the core of everything we do, and the security of our technology has been independently verified by the US government and leading US businesses,” DJI added. “For government and critical infrastructure customers that require additional assurances, we provide drones that do not transfer data to DJI or via the internet, and our customers can enable all the precautions DHS recommends.

“Every day, American businesses, first responders, and US government agencies trust DJI drones to help save lives, promote worker safety, and support vital operations, and we take that responsibility very seriously,” DJI said.

The alert followed an executive order issued by the White House that effectively banned US firms from using telecommunications equipment produced by Chinese technology giant Huawei, which has recently drawn similar national security concerns of government spying.

Researchers develop AI tool to help detect brain aneurysms

Image by Raman Oza from Pixabay

Researchers at Stanford University in California have developed a new artificial intelligence tool that can identify areas of a brain scan that are likely to contain aneurysms.

In a paper published on 7 June in JAMA Network Open, researchers described how the tool, which was built using an algorithm called HeadXNet, boosted their ability to locate aneurysms – bulges in blood vessels in the brain that can leak or burst open, potentially leading to strokes, brain damage and death.

Using the tool, researchers were able to find six more aneurysms in 100 scans that contained aneurysms, and it “also improved consensus among the interpreting clinicians”.

While the success of HeadXNet in these experiments is promising, the team of researchers cautioned that “further investigation is needed to evaluate generalizability of the AI tool prior to real-time clinical deployment given differences in scanner hardware and imaging protocols across different hospital centers”. They plan to address such problems through “multi-center collaboration”.

Combing brain scans for signs of an aneurysm can mean scrolling through hundreds of images. Aneurysms come in many sizes and shapes and balloon out at tricky angles – some register as no more than a blip within the movie-like succession of images.

“There’s been a lot of concern about how machine learning will actually work within the medical field,” Allison Park, a Stanford graduate student in statistics and co-lead author of the paper, said. “This research is an example of how humans stay involved in the diagnostic process, aided by an artificial intelligence tool.”

“Search for an aneurysm is one of the most labor-intensive and critical tasks radiologists undertake,” Kristen Yeom, associate professor of radiology and co-senior author of the paper, added. “Given inherent challenges of complex neurovascular anatomy and potential fatal outcome of a missed aneurysm, it prompted me to apply advances in computer science and vision to neuroimaging.”

Yeom brought the idea to the AI for Healthcare Bootcamp run by Stanford’s Machine Learning Group, which is led by Andrew Ng, adjunct professor of computer science and co-senior author of the paper. The central challenge was to create an AI tool that could accurately process large stacks of three-dimensional images and “complement diagnostic practice”.

To train their algorithm, Yeom worked with Park and Christopher Chute, a graduate student in computer science, and outlined clinically significant aneurysms detectable on 611 computerized tomography (CT) angiogram head scans.

“We labelled, by hand, every voxel – the 3D equivalent to a pixel – with whether or not it was part of an aneurysm,” Chute, who is also co-lead author of the paper, said. “Building the training data was a pretty gruelling task and there were a lot of data.”

After the training, the algorithm decides for each voxel of a scan whether there is an aneurysm present, with the end result overlaid as a semi-transparent highlight on top of the scan, making it easy for clinicians to see what the scans look like without HeadXNet’s input.

“We were interested how these scans with AI-added overlays would improve the performance of clinicians,” Pranav Rajpurkar, a graduate student in computer science and co-lead author of the paper, said. “Rather than just having the algorithm say that a scan contained an aneurysm, we were able to bring the exact locations of the aneurysms to the clinician’s attention.”
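As a rough illustration of how such an overlay can work – a minimal sketch, not the researchers’ actual code – the snippet below assumes a 3D scan volume and a per-voxel probability volume produced by a segmentation model, thresholds the probabilities into a binary mask, and draws the mask as a semi-transparent highlight over one slice so the underlying scan remains visible. The array shapes, the 0.5 threshold and the colour map are illustrative assumptions.

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical inputs: a CT angiogram volume and a per-voxel probability
# volume from a segmentation model, both of shape (depth, height, width).
# Random data stands in for real scans here.
scan = np.random.rand(64, 256, 256)
probabilities = np.random.rand(64, 256, 256)

# Each voxel is classified as aneurysm / not-aneurysm by thresholding.
mask = probabilities > 0.5

# Render one slice with the predicted region as a semi-transparent overlay,
# so the underlying scan stays visible to the clinician.
slice_idx = 32
plt.imshow(scan[slice_idx], cmap="gray")
plt.imshow(np.ma.masked_where(~mask[slice_idx], mask[slice_idx].astype(float)),
           cmap="autumn", alpha=0.4, vmin=0, vmax=1)
plt.axis("off")
plt.show()
```

The low alpha value is what keeps the highlight semi-transparent, so clinicians can still read the original scan underneath the algorithm’s suggestion.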

Eight clinicians tested HeadXNet by evaluating a set of 115 brain scans for aneurysms, once with the help of HeadXNet and once without. With the tool, the clinicians correctly identified more aneurysms – reducing the “miss” rate – and were more likely to agree with one another.

The researchers believe that the tool did not influence how long it took the clinicians to decide on a diagnosis or their ability to correctly identify scans without aneurysms – a guard against telling someone they have an aneurysm when they don’t.

The machine learning methods that form the core of HeadXNet could likely be trained to identify other diseases both inside and outside the brain, the researchers believe, but there is a “considerable hurdle” in integrating AI medical tools with daily clinical workflow in radiology across hospitals.

Current scan viewers aren’t designed to work with deep learning assistance, so the researchers had to custom-build tools to integrate HeadXNet within scan viewers. Furthermore, variations in real-world data – as opposed to the data on which the algorithm is tested and trained – could reduce model performance.

If the algorithm processes data from different kinds of scanners or imaging protocols, or a patient population that wasn’t part of its original training, it might not work as expected.

“Because of these issues, I think deployment will come faster not with pure AI automation, but instead with AI and radiologists collaborating,” Ng said. “We still have technical and non-technical work to do, but we as a community will get there and AI-radiologist collaboration is the most promising path.”

Northrop Grumman performs static fire test on OmegA rocket

Image by WikiImages from Pixabay

Virginia-based global aerospace and defense technology company Northrop Grumman said on 30 May it had “successfully” completed a full-scale static fire test of the OmegA rocket – which it is developing for national security missions – in Promontory, Utah.

During the test, the craft’s first stage motor fired for approximately 122 seconds, producing more than two million pounds of maximum thrust – roughly equivalent to that of eight-and-a-half jumbo jets – according to Northrop Grumman.

The company said that the test verified the performance of the motor’s ballistics, insulation and joints as well as control of the nozzle position. A full-scale static fire test of OmegA’s second stage is planned for this autumn, the company said.

The OmegA rocket’s design “leverages flight proven technologies from Northrop Grumman’s Pegasus, Minotaur and Antares rockets as well as the company’s interceptors, targets and strategic rockets”.

Northrop Grumman’s vehicle development team is working on the program in Arizona, Utah, Mississippi and Louisiana, with launch integration and operations planned at Kennedy Space Center in Florida, and Vandenberg Air Force Base in California. The program will also support thousands of jobs across the country in its supply chain.

In 2018, the US Air Force awarded Northrop Grumman a US$792 million Launch Service Agreement contract to complete the development of the OmegA rocket and the required launch sites, with a projected launch date sometime in 2021.

The 2015 National Defense Authorization Act specified that a domestic next-generation rocket propulsion system “shall be developed by not later than 2019”, a deadline that Northrop Grumman said it believes it will meet based on the reported success of the 30 May test.

“The OmegA rocket is a top priority and our team is committed to provide the US Air Force with assured access to space for our nation’s most critical payloads,” Scott Lehr, vice president and general manager of flight systems for Northrop Grumman, said in a statement. “We committed to test the first stage of OmegA in spring 2019, and that’s exactly what we’ve done.”

“Congratulations to the entire team on today’s successful test,” Kent Rominger, OmegA vice president at Northrop Grumman, added. “OmegA’s design using flight-proven hardware enables our team to meet our milestones and provide an affordable launch system that meets our customer’s requirements and timeline.”

However, at a news conference following the test, Rominger reportedly told journalists that an anomaly was seen near the end of the test, as sparks and burning debris came out of the rocket’s nozzle. Noting that rocket engines are tested at both high and low temperatures, he said that this test was at a high temperature of 90 degrees, “so you get a little bit higher thrust”.

“It appears that everything worked very well. At the very end when the engine was tailing off, we observed the aft exit cone, maybe a portion of it, doing something a little strange that we need to go further look into,” he added.

A large plume of black smoke seen during the test was normal, explained Rominger, who would allegedly not confirm whether a piece or pieces of the aft exit cone came apart during the test. He reiterated that the company would have to “dig into all that data [and] analyze it to see what happened” before coming to any definitive conclusions.

Michael Sanjume, chief of the Launch Enterprise Acquisition Division at the Air Force Space and Missile Systems Center, said that the Air Force would work with Northrop Grumman to analyze the data, a process that Rominger said would not affect the planned schedule for a full-scale static fire test of OmegA’s second stage later in the year.


Apple 2019 design award winners announced at #WWDC

Image by Niek Verlaan from Pixabay

The winners of Apple’s annual design awards – nine iOS developers – were announced on 3 June at the company’s Worldwide Developers Conference, which ran from 3 to 7 June in San Jose, California.

Apple said that the developers – who hail from companies both large and small, all over the globe – were recognised “for outstanding artistry, technical achievement, user interface and application design”. Past winners include iTranslate Converse, Procreate, Complete Anatomy, Florence, and Alto’s Odyssey.

The winning apps represent a wide range of categories spanning photo editing, drawing, medical imaging, sports and games. According to Apple, they all offer a “unique approach to user interface design, sound design, graphics, controls or gameplay and take advantage of breakthrough Apple technologies such as haptics, Metal or Core ML”.

“iOS developers keep raising the bar. This year, we are especially proud to see so many apps and games putting health, fitness, creativity and exciting gameplay at the centre of their app experience,” Ron Okamoto, Apple’s vice president of Worldwide Developer Relations, said in a statement. “We congratulate all the Apple Design Award winners on their incredible creativity and ingenuity.”

These are the nine winning apps, with their descriptions from the Apple website:

Ordia – Loju LTD (England)

“Ordia is a one-finger action platformer that blends simple gameplay and rich visuals with a clever concept. As a new life-form exploring its primordial world, you’ll slingshot yourself through a burbling alien landscape. Playing couldn’t be simpler: Drag to aim, leap from dot to dot, avoid hairy-looking obstacles, and try to keep up as the game gets trickier over its dozens of levels.”

Flow by Moleskine – Moleskine Srl (Italy)

“Flow is a practical and artful note-taking app worthy of the Moleskine name, coupling powerful functionality and elegant design. It’s packed with helpful touches: a hidable interface to help you stay focused on the task at hand, colors for every last pen (everything from Corellian Gray to Electric Pink), and more paper options than a big-city print shop. If you’re serious about your scribbles, Flow is a notable choice.”

The Gardens Between – The Voxel Agents (Australia)

“The Gardens Between is a stirring example of how games can be powered by heart. Yes, it’s a surreal puzzler in which you control the passage of time instead of characters. But it’s also the story of two best friends and how their relationship is changed over the years. The beautifully crafted graphics alone make the game worth playing, but it’s the sweet narrative that truly hits home.”

Asphalt 9: Legends – Gameloft (France)

“Asphalt 9: Legends is no stranger to acclaim. For more than a decade, the Asphalt series has offered console-grade arcade racing with all the trimmings: incredible graphics, blazing speed, exceptional production value, and gameplay that pushes the boundaries of hardware performance. Like previous editions, Asphalt 9 is deep enough for advanced players but easy enough that anyone can get behind the wheel. It once again proves an unyielding truth: Racing games are awesome.”

Pixelmator Photo – Pixelmator Team (Lithuania)

“Pixelmator Photo manages to deliver impressive editing power in a beautiful, uncluttered interface. For beginners, Pixelmator is surprisingly approachable (your edits are conveniently nondestructive). For experts who wish to maximize every last pixel of their iPad screen, it offers a robust toolset and support for RAW images. Most helpful of all, it offers machine-learning-powered editing tools that have been trained using more than 20 million photos.”

ELOH – Broken Rules (Austria)

“ELOH is the rare puzzle game that keeps you pleasingly perplexed while also totally chilling you out. The goal is to shift blocks to help bouncing balls get from point A to point B — but with the aid of rhythm and percussion. Rearranging blocks builds a soothing beat that adds a whole new dimension. ELOH’s hand-painted visuals and charming animations belie the game’s trickiness, which sneakily compounds over its many levels. But the organic vibe and earthy soundtrack transform the game into your own moment of Zen.”

Butterfly iQ — Ultrasound – Butterfly Network (USA)

“Butterfly iQ is an innovative whole-body ultrasound app that’s CE-approved, FDA-cleared, and a total game changer. When coupled with a supported device, it enables mobile ultrasounds anywhere. Simple enough to be operated by laypeople but advanced enough to use AR and machine learning to guide users along the way, Butterfly iQ offers an uncluttered UI that can be operated with one hand. Its images can be uploaded to a secure cloud for remote review by a medical professional — or elated family members.”

Thumper: Pocket Edition – Drool LLC (USA)

“Thumper: Pocket Edition, a heavy-metal rhythm game, is all about blistering speed, glowing electric visuals, and adrenaline. The idea is simple enough — tap the screen to keep your metallic beetle on a sleek chrome track. But the masterful combination of ’80s neon, thumping electronica, and smooth 60-fps gameplay is like nothing else you’ve tapped.”

HomeCourt – The Basketball App – NEX Team Inc. (USA)

“HomeCourt has revolutionized basketball practice more than anything since the advent of the orange cone. Thanks to real-time A.I.-powered shot tracking, advice from real coaches, and clean design, HomeCourt has established itself as the go-to for players of all skill levels who want to grow their game. And its excellent social features let players interact with coaches thousands of miles away or in a gym down the street.”

Twitter acquires deep-learning start-up Fabula AI

Image by William Iven from Pixabay

Social media giant Twitter announced on 3 June that it had acquired London-based deep learning start-up Fabula AI in an attempt to boost its machine learning expertise, feeding into an internal research group led by the company’s senior director of engineering Sandeep Pandey.

The research group’s stated aim is to “continually advance the state of machine learning, inside and outside Twitter”, focusing on “a few key strategic areas such as natural language processing, reinforcement learning, [machine learning] ethics, recommendation systems, and graph deep learning”.

Fabula AI’s researchers specialise in using graph deep learning to detect network manipulation, applying machine learning techniques to network-structured data in order to analyse very large and complex datasets describing relations and interactions, and to extract signals in ways that traditional machine learning techniques cannot.
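To illustrate the general idea behind graph deep learning – this is a generic sketch, not Fabula AI’s method – the snippet below performs a single graph-convolution-style update on a toy four-node network: each node’s feature vector is replaced by a transformed average of its own and its neighbours’ features, which is what lets such models exploit the structure of relations and interactions rather than treating each item in isolation. The adjacency matrix, feature sizes and weights are all illustrative assumptions.

```python
import numpy as np

# Toy graph of four accounts; A[i, j] = 1 if account i interacts with account j.
A = np.array([[0, 1, 1, 0],
              [1, 0, 0, 1],
              [1, 0, 0, 0],
              [0, 1, 0, 0]], dtype=float)

X = np.random.rand(4, 8)   # an 8-dimensional feature vector per node
W = np.random.rand(8, 8)   # weight matrix (learned in a real model, random here)

# Add self-loops and normalise so each node averages over itself and its neighbours.
A_hat = A + np.eye(4)
D_inv = np.diag(1.0 / A_hat.sum(axis=1))

# One message-passing / graph-convolution step with a ReLU nonlinearity.
H = np.maximum(D_inv @ A_hat @ X @ W, 0.0)

print(H.shape)  # (4, 8): updated per-node representations
```

Stacking several such updates is what gives the “deep” part; in a real system the weights would be trained on labelled examples of, say, manipulated versus organic sharing patterns.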

Twitter described the acquisition as a “strategic investment” and a “key driver” as the company works to “help people feel safe on Twitter and help them see relevant information”. Financial terms of the deal were not disclosed.

“Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience,” the social network said.

Fabula was founded by Michael Bronstein, Damon Mannion, Federico Monti and Ernesto Schmitt. It is led today by Bronstein – who currently serves as chief scientist – and Monti, now the company’s chief technologist; the two began their collaboration while at the University of Lugano, Switzerland.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities,” Bronstein said in a post on Twitter’s blog. “Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service.”

Bronstein is currently the Chair in Machine Learning & Pattern Recognition at Imperial College, and will remain in that position while leading graph deep learning research at Twitter. He will be joined by long-time collaborators from academia (including current or former students) who research advances in geometric deep learning.

Twitter – along with other social media platforms and internet search engines – has recently come under fire from the media, academics and politicians for its perceived failure to properly deal with abuse and hate on its platform. It has previously been criticized for failing to take action against accounts that spread hate speech and still does not have a clear policy in place for dealing with white supremacist accounts.

Sony signs licensing agreement for haptic technology

Image by StockSnap from Pixabay

Japanese multinational technology corporation Sony Interactive Entertainment (SIE) has signed an agreement with haptic feedback technology company Immersion Corp. to license its “advanced haptics patent portfolio”, the California-based developer said on 13 May.

Under the agreement, SIE can also leverage Immersion’s haptics technology for gaming controllers and VR controllers. Immersion Corp stated that such technology could be used to simulate “sensations of pushing, pulling, grasping, and pulsing”, and claimed that “adding the sense of touch to games heightens the experience and keeps players engaged”.

Simply put, haptic technology refers to any device or hardware that is able to simulate or create the experience of touch by applying forces, vibrations or motions to the user.  So when you die in an explosion during a video game and your controller vibrates, that’s haptics. Immersion describe it as “touch feedback technology”.

Immersion Corp doesn’t actually manufacture the hardware for haptic feedback, instead certifying suitable hardware and licensing its software as well as over 3500 issued or pending patents to companies that want to add haptics to their products.

“Research shows that haptics makes games come to life, increasing players’ satisfaction and enjoyment through peripherals and games enhanced with the power of touch,” Ramzi Haidamus, Immersion’s CEO, said in a press release. “We are thrilled to work with SIE, a true pioneer in gaming, to provide incredible experiences to their customers.”

“We are pleased to reach agreement with Immersion,” Riley Russell, Chief Legal Officer for Sony Interactive Entertainment, added. “High quality haptics technology enhances the sense of presence and immersion for gamers, and this agreement is consistent with [our] desire to provide the best gaming experiences to gamers around the world.”

Immersion also said recently that it had signed a license agreement with Panasonic Avionics – a subsidiary of Japanese multinational electronics corporation Panasonic that produces in-flight entertainment and communications – to provide the company with “access to Immersion’s patented haptic technology for use in in-flight entertainment”.

“By incorporating haptics into in-flight entertainment systems, Panasonic Avionics is able to modernize the experience and make access to the system more intuitive and engaging. As capacitive touch buttons provide feedback, the person will know if the buttons have been activated,” Haidamus said in a statement. “We are pleased to work with Panasonic Avionics and look forward to seeing how the company continues to enhance its in-flight systems with touch technology.”

Ex-Google CEO to create new home for computer science at Princeton University

Source: JD Lasica/Socialmedia.biz via Flickr

Ex-Google CEO Eric Schmidt and his wife, Wendy Schmidt, have given Princeton a gift large enough for the university to rebuild and expand Guyot Hall into a new home for its Department of Computer Science, the New Jersey-based Ivy League university said on 29 May.

When completed in 2026, the renovated building will be called the Eric and Wendy Schmidt Hall, and will consolidate the computer science department — which is currently spread out over nine different buildings — into one purpose-built space.

Earlier this year, the University announced the gift establishing the Schmidt DataX Fund, which will advance the breadth and depth of data science impact on campus, accelerate discovery in three large, interdisciplinary research efforts, and create opportunities to educate, train, convene and support a broad data science community at the University.

Guyot Hall was built in 1909, and was named for Princeton’s first professor of geology and geography, Arnold Guyot, a member of the faculty from 1854 to 1884. The building’s construction was supported with proceeds from gifts made to Princeton by Cleveland H. Dodge, Class of 1879, and his mother to benefit the University’s programs in geology and biology.

According to the university, the renovations will preserve the original collegiate Gothic architectural details of the building’s exterior, and the Guyot name will be recognized in a newly built space elsewhere on campus associated with Princeton’s environmental science programs.

The university expects the renovation of Guyot Hall to increase the square footage assigned to the computer science department and to build in capacity for future growth of the department’s faculty and student body.

During renovation of Guyot, the University said it will provide additional interim space in the Friend Center for the department. Construction is planned to begin in early 2024, with the computer science department projected to move into the renovated building in mid-2026.

The Schmidts also established the Eric and Wendy Schmidt Transformative Technology Fund in 2009, an endowment which supports “the invention, development and utilization of cutting-edge technology that has the capacity to transform research in the natural sciences and engineering at Princeton”.

Eric Schmidt was formerly chief executive officer of Google from 2001 to 2011 and then served as executive chairman of Alphabet Inc, Google’s parent company. He is a member of Alphabet’s board of directors through June and has also previously served as a Trustee of Princeton.

Wendy Schmidt is a businesswoman and philanthropist, and the president of The Schmidt Family Foundation and co-founder of Schmidt Ocean Institute.

Schmidt’s “career as a computer scientist makes the . . . name especially fitting for the new home of Princeton’s world-class Department of Computer Science,” President Christopher L. Eisgruber said in a statement. “We are deeply grateful to Eric Schmidt, his wife, Wendy Schmidt, and Schmidt Futures for their spectacular vision and generosity.

“Their extraordinary commitments to this new facility, to the Schmidt DataX Fund, and to the Schmidt Transformative Technology Fund have powerfully enhanced Princeton’s capacity for teaching, innovation and collaboration that open new frontiers of learning and improve the world,” he added.

“Princeton recognizes that computational thinking as a mode of scholarship, inquiry and critical thinking is essential across campus,” Jennifer Rexford, Professor of Engineering and chair of Princeton’s computer science department, said.

“We are deeply grateful for [the] gift, which makes it possible to have a central location for computer science in which we can create intellectual collisions and serendipitous encounters between faculty and among students, creating human connections that spark new ideas across campus and beyond,” she concluded.

Schmidt noted that when he earned his undergraduate degree from Princeton in 1976 he “majored in electrical engineering, because computer science was barely an option. Now it’s the largest department at Princeton and data science has the potential to transform every discipline, and find solutions to profound societal problems”.

“Wendy and I are excited to think about what will be possible when Princeton is able to gather students and faculty in one place, right at the centre of campus, to discover now-unimaginable solutions for the future century,” he added.

Study: Smart speakers are increasingly common in American households

Source: www.quotecatalog.com via Flickr

An annual study conducted by the Consumer Technology Association (CTA) recently found that smart speaker ownership in the US rose by almost 100% for a second consecutive year, the Virginia-based consumer technology trade association said on 9 May.

The CTA’s 21st Annual Consumer Technology Ownership and Market Potential Study – which examines household ownership and the intent to purchase across almost 60 consumer tech products – showed that 31 percent of US homes now own a smart speaker, despite security concerns surrounding these products.

It also found that smart appliances are now owned by 17% of households, with smart light bulbs, thermostats, home security cameras and robotic vacuums rounding out the most-owned smart home devices in 2019.


The CTA predicted that smart home devices will see the biggest gains in household adoption in the next year, as first-time purchasers make up the largest proportion of prospective buyers, led by households planning to buy smart door locks, smart doorbells and smart home hubs for the first time.

Among wearables, smartwatch adoption grew by five percentage points to reach 23 percent household ownership in 2019, narrowing the gap with fitness trackers, which grew four percentage points and are now owned by 29 percent of US households.

Televisions, smartphones and laptops remain the most commonly owned tech devices in American homes. Televisions were the most-owned device, at 95 percent household ownership, while smartphones were found in 91 percent of American homes and, for a third consecutive year, had the highest combined intent to purchase among repeat and first-time purchasers for the coming year.

American households adopted wireless earbuds and headphones in greater numbers over the last year, the survey found, with their growth outpacing that of their wired counterparts; the devices can now be found in almost half of all US homes and are expected to surpass wired products “very soon”.

“Americans are embracing AI tech in the home at unprecedented levels,” Steve Koenig, vice president of research at CTA, said in a statement. “The dramatic rise in household ownership of intelligent devices like smart speakers shows American consumers endorse the benefits and convenience of artificial intelligence and voice recognition to help them with everyday tasks.”

“Innovation is spurring demand for emerging technologies and driving consumers to upgrade existing devices,” he added. “The paradigm in consumer technology is rapidly evolving to a new IoT – The Intelligence of Things. AI, voice recognition, sensors, wireless connectivity and more are bringing greater capabilities and convenience to consumers.”

CTA’s 21st Annual Consumer Technology Ownership and Market Potential Study was administered as an online survey of 2,608 American adults (18+) between 7 and 14 March 2019. It was designed and formulated by CTA Market Research. The complete study is available free to CTA member companies and for purchase by non-members.