All posts by Naomi Smith

Naomi is a UK-based journalist, writer and online content creator with around six years' experience. She has a master's degree in investigative journalism and experience working as a beat reporter, primarily covering aviation law, regulation and politics. She has written for online publications on a variety of topics, including politics, gaming and film.

British Airways to trial VR entertainment first-class perk

Image by pkozmin from Pixabay

UK flag carrier British Airways announced on 14 August plans to offer virtual reality entertainment as a first-class perk on some of its flights in a first for Britain.

From now until the end of 2019, British Airways passengers travelling in the first-class cabin on “selected BA117 flights” from London’s Heathrow Airport to JFK in New York will be provided with AlloSky headsets made by San Francisco-based immersive inflight entertainment company Skylights, an alumnus of the airline’s parent company IAG’s Hangar 51 start-up accelerator programme.

This will allow customers to “enjoy a selection of award-winning films, documentaries and travel programmes in 2D, 3D or 360° formats,” British Airways said in a statement.

Earlier this year, British Airways trialed the technology at Heathrow Terminal 5, giving customers a glimpse of the Club World cabin through virtual reality.

British Airways is the first UK airline to trial the technology and said it had “worked with experts” to select a range of therapeutic programmes, including guided meditation and sound therapy, specifically designed for customers who have a fear of flying.

“We are always looking at the latest technology to enhance our customers’ experience on the ground and in the air,” Sajida Ismail, Head of Inflight Product at British Airways, said.  “Virtual reality has the power to revolutionise in-flight entertainment and we’re really excited to trial these new glasses as they should create a unique and memorable journey for our [first-class] customers”.

This is the airline’s centenary year. During August, British Airways’ birthday month, a celebratory exhibition – BA 2119: Flight of the Future – will run at the Saatchi Gallery in London. The exhibition was created in collaboration with the Royal College of Art and is based on global research commissioned by the airline to identify what aviation could look like in the future.

It will also showcase a virtual reality experience called “Fly” that traces humankind’s relationship with flying, from the earliest imaginings of Leonardo da Vinci and his “ornithopter” to the Wright brothers’ success at Kitty Hawk and the first passenger flight to Paris, among other things.

The airline plans to invest £6.5 billion in its customers over five years, including better quality Wi-Fi and power in every seat, new interiors for 128 of its long-haul aircraft, and taking delivery of 72 new aircraft.

It has already introduced a new business class seat with direct aisle access – the Club Suite – and will host a range of activities and events through the rest of the year with the aim of exploring “the future of sustainable aviation fuels and the aviation careers of the future”.

NASA engineers successfully test deployment of second James Webb Space Telescope mirror

Image courtesy of NASA.

Engineers at the US National Aeronautics and Space Administration (NASA) have successfully tested the system that will deploy the secondary mirror of the James Webb Space Telescope (JWST), the agency said on 6 August.

The James Webb Space Telescope is a large, space-based observatory, optimized for infrared wavelengths, which will complement and extend the discoveries of the Hubble Space Telescope. It is expected to launch in 2021.

NASA intends for the telescope to cover longer wavelengths of light than Hubble and to have greatly improved sensitivity. The longer wavelengths should let it “look further back in time to see the first galaxies that formed in the early universe, and to peer inside dust clouds where stars and planetary systems are forming today”.

Before it can do any of these things, the telescope must “perform an extremely choreographed series of deployments, extensions and movements” to “bring the observatory to life” shortly after launch. In its fully deployed form, the telescope is too big to fit in any rocket available so it has been “engineered to intricately fold in on itself to achieve a much smaller size during transport”.

Technicians and engineers recently tested commanding the JWST to deploy the support structure that holds its secondary mirror in place. NASA sees this as a critical milestone in preparing the observatory for its journey to orbit.

“The proper deployment and positioning of [the JWST’s] secondary mirror is what makes this a telescope – without it, Webb would not be able to perform the revolutionary science we expect it to achieve,” Lee Feinberg, optical telescope element manager for the JWST at NASA’s Goddard Space Flight Center in Maryland, said in a statement that included a time-lapse video of the test.

“This successful deployment test is another significant step towards completing the final observatory,” he added.

It also demonstrated that the electronic connection between the spacecraft and the telescope is working properly, and is capable of delivering commands throughout the observatory as designed.

The secondary mirror is one of the most important pieces of equipment on the telescope, and is essential to the success of the mission. When deployed, this mirror will sit out in front of the JWST’s hexagonal primary mirror segments, which form an iconic honeycomb-like shape.

This smaller circular mirror plays an important role, collecting light from the telescope’s 18 primary mirror segments into a focused beam. That beam is then sent down into the tertiary and fine steering mirrors, and finally to the telescope’s four scientific instruments.

The project’s next significant milestone will be the mating of the two halves of the telescope, which are being built separately. The construction of the telescope has been marred by delays and cost overruns that have pushed the launch date from 2018 to 2021.

The JWST is named after James E. Webb, NASA’s second administrator. Webb is best known for leading Apollo, a series of lunar exploration programs that landed the first humans on the Moon. He also initiated a vigorous space science program that was responsible for over 75 launches during his tenure, including America’s first interplanetary explorers.

NASA ‘optometrists’ verify Mars 2020 Rover’s 20/20 vision

Image courtesy of Billy Brown on Flickr, under a Creative Commons 2.0 license

The US National Aeronautics and Space Administration (NASA) said on 5 August that its Mars 2020 rover had undergone an “eye exam” after several new cameras were installed on it.

The rover contains a veritable armada of imaging capabilities, from wide-angle landscape cameras to narrow-angle high-resolution zoom lens cameras. The cameras tested included two Navcams, four Hazcams, a SuperCam and two Mastcam-Z cameras.

Mounted on the rover’s remote sensing mast, the Navcams (navigation cameras) will acquire panoramic 3D image data that will support route planning, robotic-arm operations, drilling and sample acquisition.

The Navcams can work in tandem with the Hazcams (hazard-avoidance cameras) mounted on the lower portion of the rover chassis to provide complementary views of the terrain to safeguard the rover against getting lost or crashing into unexpected obstacles. They’ll be used by software enabling the Mars 2020 rover to perform self-driving over the Martian terrain.

Along with its laser and spectrometers, SuperCam’s imager will examine Martian rocks and soil, seeking organic compounds that could be related to past life on Mars. The rover’s two Mastcam-Z high-resolution cameras will work together as a multispectral, stereoscopic imaging instrument to enhance the Mars 2020 rover’s driving and core-sampling capabilities.

The Mastcam-Z cameras will also enable science team members to observe details in rocks and sediment at any location within the rover’s field of view, helping them piece together the planet’s geologic history.

“We completed the machine-vision calibration of the forward-facing cameras on the rover,” Justin Maki, chief engineer for imaging and the imaging scientist for Mars 2020 at the agency’s Jet Propulsion Laboratory, said in a statement. “This measurement is critical for accurate stereo vision, which is an important capability of the vehicle.”

To perform the calibration, the 2020 team imaged target boards that feature grids of dots, placed at distances ranging from one to 44 yards (one to 40 meters) away. The target boards were used to confirm that the cameras meet the project’s requirements for resolution and geometric accuracy.
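The article doesn't spell out why geometric accuracy matters so much, but the core relation behind stereo vision is simple triangulation: a point's depth is the focal length times the camera baseline divided by the pixel disparity between the two matched views. The sketch below illustrates this with hypothetical numbers; the focal length and baseline are illustrative values, not the rover's actual optics.

```python
# Minimal sketch of why calibrated geometry matters for stereo vision:
# depth is recovered as focal_length * baseline / disparity, so small
# errors in the measured camera geometry become errors in depth.
# The focal length and baseline here are hypothetical, not the rover's.

def stereo_depth(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth in metres of a point seen with the given pixel disparity."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px

# With a 1,000-pixel focal length and a 0.4 m baseline, a 10-pixel
# disparity corresponds to a point 40 m away, the far end of the
# calibration range described above.
depth_m = stereo_depth(focal_px=1000.0, baseline_m=0.4, disparity_px=10.0)
```

Because disparity shrinks as distance grows, calibrating against targets out to the full 40-metre range checks exactly the regime where small geometric errors hurt most.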

“We tested every camera on the front of the rover chassis and also those mounted on the mast,” Maki added. “Characterizing the geometric alignment of all these imagers is important for driving the vehicle on Mars, operating the robotic arm and accurately targeting the rover’s laser.”

NASA expects the imagers on the back of the rover body and on the turret at the end of the rover’s arm to undergo similar calibration sometime in the next few weeks.

The Jet Propulsion Laboratory is building and will manage operations of the Mars 2020 rover for the NASA Science Mission Directorate at the agency’s headquarters in Washington. NASA plans to use Mars 2020 and other missions, including to the Moon, to prepare for human exploration of the so-called Red Planet.

The agency intends to establish a sustained human presence on and around the Moon by 2028 through NASA’s Artemis lunar exploration plans.

Groupon acquires Presence AI

Image courtesy of GrouponRUS via Wikimedia Commons under a Creative Commons Attribution-Share Alike 4.0 International license.

Chicago-based worldwide e-commerce marketplace Groupon announced on 8 August that it had acquired Presence AI, an AI-powered text and voice communications tool that is working on a communications platform to automate business-to-customer (B2C) calls and messaging. Terms of the transaction were not disclosed.

According to a survey conducted by Bizrate Insights, consumers – especially millennials – vastly prefer messaging and chat-based communications over phone calls. Presence AI says it aims to enable merchants to respond to this trend by “offering a 24/7 business assistant that integrates with a merchant’s existing scheduling software”.

It will “accept and manage bookings, provide instant answers to customer questions, remind people when it’s time to re-book and much more”. The company already has some integrations with “popular booking software providers”.

Amazon’s Alexa Fund is one of the investors in Presence AI, which has so far raised US$20,000 in seed funding. The company was founded in 2015 in San Francisco and operates in the health, beauty and wellness space, which is one of Groupon’s largest categories. In 2018, it participated in the Alexa Accelerator, which “supports early-stage start-ups using voice to deliver transformative customer and business experiences”.

As Groupon starts trying to move towards “universal bookability” for certain services, it hopes Presence AI’s technology will provide merchants with the capabilities to support this “booking vision”.

“We’re pleased to welcome the Presence AI team and their booking technology to Groupon,” Groupon Chief Product Officer Sarah Butterfass said in a statement. “Booking is a key part of our voucher-less initiative aimed at improving the redemption experience, providing always-on availability . . . opening up our marketplace to a broader range of merchants.”

“Presence AI’s technology is very complementary to what we’ve been building into our existing booking experience and will accelerate our roadmap with its text- and chat-based interface,” she added.

“We’re very excited to join Groupon and continue transforming client conversations through the use of artificial intelligence,” Presence AI co-founder and CEO Michel Meyer said. “With more than 3 million text messages generated last year, Presence AI is saving merchants time and generating additional revenues. We can’t wait to bring our technology to more businesses.”

Knight Foundation pledges US$750,000 to immersive projects

Image by StockSnap from Pixabay

The Knight Foundation said on 27 July that it would award US$750,000 in funding for “ideas exploring how arts institutions can present immersive experiences to engage audiences”. Recipients will be announced in late fall 2019.

The application window is already open, and US-based cultural organizations, technologists, and others who are working to use immersive technology in the arts are welcome to apply.

Grant recipients will be awarded a share of the funding pool, and receive mixed-reality mentorship and technology support from Microsoft, as well as the opportunity to be featured across the company’s marketing channels.

The John S. and James L. Knight Foundation is a national foundation that invests in journalism, the arts and in the “success of cities where brothers John S. and James L. Knight once published newspapers”. The foundation’s stated goal is to “foster informed and engaged communities, which [it believes] are essential for a healthy democracy”.

The award calls for ideas that demonstrate innovative approaches to this question: In what new ways might arts institutions engage audiences through immersive experiences? The foundation is seeking ideas from arts institutions — as well as technologists, companies and artists partnering with arts institutions — that demonstrate the ability of immersive technologies to strengthen audience engagement.

Successful projects will address one of the following areas or related concepts: 

  • Engaging new audiences: How might arts institutions use immersive experiences to better welcome and engage new and diverse audiences?
  • Building new service models: How can institutions design pleasant and efficient audience experiences that avoid clunky interactions with technology?
  • Expanding beyond walls: In what new ways can arts institutions use immersive technology to reach people beyond their physical space?
  • Distribution to multiple institutions: How can immersive experiences become more portable and be presented easily at multiple institutions?

This is part of Knight Foundation’s arts and technology focus, which aims to “help arts institutions better meet changing audience expectations and use digital tools to help people better experience and delight in the arts”.

Last year, Knight made a US$600,000 investment in twelve projects designed to harness the power of technology to engage people with the arts. Most recently, Knight launched the “On View” podcast, which examines how museums and cultural institutions are evolving to keep pace with a changing world.

“We’ve seen how immersive technologies can reach new audiences and engage existing audiences in new ways,” Chris Barr, director for arts and technology innovation at Knight Foundation, said in a statement. “But arts institutions need more knowledge to move beyond just experimenting with these technologies to becoming proficient in leveraging their full potential.”

“When done right, life-changing experiences can happen at the intersection of arts and technology,” Victoria Rogers, Knight Foundation vice president for arts, added. “Our goal through this call is to help cultural institutions develop informed and refined practices for using new technologies, equipping them to better navigate and thrive in the digital age.”

“We’re incredibly excited to support this open call for ways in which technology can help art institutions engage new audiences,” Mira Lane, Partner Director Ethics & Society at Microsoft, said.  “We strongly believe that immersive technology can enhance the ability for richer experiences, deeper storytelling, and broader engagement.”

The opening of the call coincided with the Gray Area Festival in San Francisco, where representatives from Knight and Microsoft shared details with an audience of international thought leaders in the arts and the technology industry. 

NASA’s CubeSat launch initiative opens call for payloads on Artemis 2 mission

Image courtesy of Diophantus654 via Wikipedia under a Creative Commons Attribution-ShareAlike 3.0 license.

The US National Aeronautics and Space Administration (NASA) is seeking proposals from US-based small satellite developers to fly CubeSat missions as secondary payloads aboard the agency’s Space Launch System (SLS) rocket on the 2023 Artemis 2 mission under its CubeSat Launch Initiative (CSLI).

CubeSats are a class of research spacecraft called nanosatellites and are, unsurprisingly, cube-shaped. They are sized in standardised units, or “U”s, typically up to 12U (a unit is defined as a volume of about 10 cm x 10 cm x 10 cm and usually weighs under 1.33 kg).
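As a quick illustration of the unit arithmetic above, the short Python sketch below (my own helper, not NASA code) works out the volume and typical maximum mass of an n-unit CubeSat:

```python
# Illustrative sketch of the CubeSat unit arithmetic described above:
# one unit (1U) is roughly a 10 cm x 10 cm x 10 cm volume and usually
# weighs under 1.33 kg. Helper name and limits are my own.

UNIT_VOLUME_CM3 = 10 * 10 * 10  # 1U is about 1,000 cubic centimetres
UNIT_MAX_MASS_KG = 1.33         # typical upper mass limit per unit

def cubesat_budget(units: int) -> tuple[float, float]:
    """Return (volume in cm^3, typical maximum mass in kg) for an n-unit CubeSat."""
    if not 1 <= units <= 12:
        raise ValueError("CubeSats are typically sized between 1U and 12U")
    return units * UNIT_VOLUME_CM3, units * UNIT_MAX_MASS_KG

# A 6U CubeSat works out to about 6,000 cm^3 and just under 8 kg.
volume_cm3, max_mass_kg = cubesat_budget(6)
```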

The CSLI aims to give CubeSat developers a low-cost pathway to conduct research in space that advances NASA’s strategic goals in the areas of science, exploration, technology development, education and operations. It also allows students, teachers and faculty to gain hands-on experience designing, building, and operating these small research satellites.

Proposals must include elements designed to extend human presence beyond low-Earth orbit and reduce risk for future deep space human exploration missions. The proposed missions should address at least one aspect of NASA’s goals outlined in NASA’s 2018 Strategic Plan and address identified strategic knowledge gaps related to the Moon or Mars.

This opportunity will be open to US participants only, including large and small businesses and other federal agencies, as well as NASA centres, and non-profit or accredited education organizations.

The agency is also seeking proposals from CubeSat developers for ride-share launch opportunities on missions other than Artemis 2. These opportunities are open to NASA centres and non-profit or accredited education organizations, and will be for flight as secondary payloads on launches other than the SLS, as well as deployments from the International Space Station.

Mission proposals for all opportunities must be submitted by 4:30 p.m. EST, Nov. 4, 2019. Selections will be made by mid-February 2020; however, selection does not guarantee a launch opportunity.

To date, the CubeSat Launch Initiative has selected 175 CubeSat missions from 39 states and 97 unique organizations across the country, has launched 88 missions into space, and has 37 scheduled missions to launch within the next 12 months.

“CubeSats continue to play an increasingly larger role in NASA’s exploration plans,” John Guidi, deputy director for the Advanced Exploration Systems division, said in a statement.

“[They] provide a low-cost platform for a variety of technology demonstrations that may offer solutions for some of the challenges facing long-term human exploration of the Moon and Mars, such as . . . laser communications, energy storage, in-space propulsion, and autonomous movement,” he added.

CubeSats dance: one water-powered NASA spacecraft commands another in orbit

Image courtesy of NASA.

Two of the US National Aeronautics and Space Administration’s (NASA) CubeSats have executed a coordinated manoeuvre in space for the first time, demonstrating technology that could one day allow swarms of small satellites to carry out coordinated missions, the agency said on 3 August.

The water-powered spacecraft, which are about the size of a standard tissue box, were approximately 5.5 miles (about 9 km) apart when one instructed the other, via radio-frequency communications, to activate its thruster and move closer. The fuel tanks on both spacecraft are filled with water, which the thrusters convert to steam to propel the spacecraft.
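The article gives no performance figures for the steam thrusters, but the scale of such a manoeuvre can be sketched with the ideal (Tsiolkovsky) rocket equation; every number below is a hypothetical illustration, not an OCSD figure.

```python
import math

# Ideal (Tsiolkovsky) rocket equation: delta_v = Isp * g0 * ln(m0 / m1).
# The specific impulse and masses below are hypothetical illustration
# values, not figures from the OCSD spacecraft.

G0 = 9.80665  # standard gravity, m/s^2

def delta_v(isp_s: float, wet_mass_kg: float, dry_mass_kg: float) -> float:
    """Ideal change in velocity (m/s) for a burn from wet mass down to dry mass."""
    return isp_s * G0 * math.log(wet_mass_kg / dry_mass_kg)

# A hypothetical 2.5 kg CubeSat expending 10 g of water at Isp = 70 s
# gains a little under 3 m/s, ample for a small proximity manoeuvre.
dv = delta_v(isp_s=70.0, wet_mass_kg=2.5, dry_mass_kg=2.49)
```

The point of the sketch is that even a few grams of propellant buy metres per second of velocity change, which is plenty for the kind of close-in adjustment described above.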

CubeSats are a class of research spacecraft called nanosatellites and are, unsurprisingly, cube-shaped. They are sized in standardised units, or “U”s, typically up to 12U (a unit is defined as a volume of about 10 cm x 10 cm x 10 cm and usually weighs under 1.33 kg).

Conducted on 21 June, the demonstration took place in low-Earth orbit as part of NASA’s Optical Communications and Sensor Demonstration (OCSD) mission. It was designed with a series of safeguards to ensure that only a pre-planned and authorized manoeuvre could take place.

While it was choreographed by human operators on the ground, the demonstration shows it is possible for a series of manoeuvres to be planned using onboard processing and executed cooperatively by a group of small spacecraft, NASA said.

Three OCSD spacecraft were developed and are operated for NASA by The Aerospace Corporation. The first OCSD was a risk-reduction mission that launched in 2015 to calibrate and refine tools to support this current flight of the OCSD-B and OCSD-C spacecraft.

OCSD is funded by NASA’s Small Spacecraft Technology program within the agency’s Space Technology Mission Directorate. NASA’s Small Spacecraft Technology program is managed by NASA’s Ames Research Center in California’s Silicon Valley.

“Demonstrations such as this will help advance technologies that will allow for greater and more extended use of small spacecraft in and beyond Earth-orbit,” Roger Hunter, program manager of the Small Spacecraft Technology program, said in a statement.

“The OCSD team is very pleased to continue demonstrating new technical capabilities as part of this extended mission, over 1.5 years after deployment,” Darren Rowen, director of the Small Satellite Department at The Aerospace Corporation, added.

“It is exciting to think about the possibilities enabled with respect to deep space, autonomously organizing swarms of small spacecraft,” he said.

NASA is developing mirrors that could double the sensitivity of X-ray telescopes

Hubble telescope. Image by Ondřej Šponiar from Pixabay

The US National Aeronautics and Space Administration (NASA) is developing mirrors that could double the sensitivity of X-ray telescopes, the agency said on 29 July.

X-ray imaging systems use mirrors to reflect X-rays at shallow, grazing-incidence angles, in much the same way that more traditional optics reflect light off objects so they can be viewed with the naked eye or photographed. The mirrors are typically made of glass, ceramic or metal foil, coated with a reflective layer – the most commonly used materials are gold and iridium.

Recent testing has shown that super-thin, lightweight X-ray mirrors made of a material commonly used to make computer chips can meet the stringent imaging requirements of next-generation X-ray observatories.

They are fifty times lighter – roughly a two-orders-of-magnitude reduction in weight – than the mirrors currently flying on NASA’s flagship Chandra X-ray Observatory and those planned for the European Space Agency’s Advanced Telescope for High-Energy Astrophysics, or Athena.

The mirrors could be fitted into the conceptual Lynx X-ray Observatory, which is expected to launch at some point in the 2030s – one of four potential missions that scientists vetted as worthy pursuits under the 2020 Decadal Survey for Astrophysics.

If selected and ultimately launched in the 2030s, Lynx could potentially carry tens of thousands of the mirror segments. Chandra itself offered a significant leap in capability when it launched in 1999. It can observe X-ray sources — exploded stars, clusters of galaxies, and matter around black holes —100 times fainter than those observed by previous X-ray telescopes.

The mirrors in question are being developed by Will Zhang and his team at NASA’s Goddard Space Flight Center in Greenbelt, Maryland. Zhang and his team have secured a nearer-term flight opportunity than Lynx, aboard a sounding rocket mission scheduled for 2021, which would represent the new technology’s first demonstration in space.

Seven years in the making

Efforts to develop the new mirrors began seven years ago, when Zhang started to experiment with mono-crystalline silicon, a single-crystal form of the material that had not previously been used to create X-ray mirrors.

His goal — given the cost of building space observatories, which only increase in price as they get larger and heavier — was to develop easily reproducible, lightweight, super-thin mirrors, without sacrificing quality.

“What we’ve done is shown from a scientific perspective and empirically that these optics can be built using an inexpensive, abundantly available material that is immune from the internal stresses that can change the shape of X-ray mirrors made of glass, the more traditional mirror-making material”, Zhang said in a statement.

According to a NASA-commissioned panel of 40 experts, Zhang’s mirrors made from the brittle, highly stable silicon are capable of producing the same image quality as the four larger – and heavier – pairs currently flying on Chandra. The panel also deemed two other technologies – full-shell mirrors and adjustable optics – as able to fulfil the requirements of the conceptual Lynx Observatory.

Not only could Zhang’s mirrors provide an image resolution comparable to the quality of an ultra-high-definition television screen, they also met his low-mass requirements. But, Zhang said, he and his team are still “far, far away from flying our optics”.

Next steps

Zhang and his team now have to figure out how to bond these fragile mirror segments inside the canister that protects the entire mirror assembly during a rocket launch and maintains their “nested alignment”.

“We have a lot to do, and not a lot of time to do it,” Zhang said. “This is now an engineering challenge.”

He added that “time is of the essence” because in two years, he and his team are expected to deliver a 288-segment mirror assembly to Randall McEntaffer, a professor at Pennsylvania State University in State College who is developing a sounding rocket mission called the Off-plane Grating Rocket Experiment (OGRE), expected to launch from the Wallops Flight Facility in 2021.

In addition to the mirrors, OGRE will carry a “university-developed spectrograph equipped with next-generation X-ray diffraction gratings used to split X-ray light into its component colours or wavelengths to reveal an object’s temperature, chemical makeup, and other physical properties”.

Zhang expects that OGRE will “do much to advance the mirror assembly” and that the mission will help to determine whether its design can protect the delicate mirrors from the extreme launch forces during lift-off and ascent through the Earth’s atmosphere.

Even if Lynx isn’t chosen for development by the 2020 Decadal Survey, Zhang envisions a bright future for the team’s optics. Other proposed missions could benefit, he said, including a couple of X-ray observatories now being investigated as potential astrophysics Probe-class missions and another being considered by Japan.

“Five years ago, people said it couldn’t be done, but we proved our ideas,” Zhang said. “My team is grateful to Goddard’s Internal Research and Development program for giving us the seed money. We couldn’t have achieved this without it.”

Google’s Code Next students merge computer science and activism

Image by Photo Mix from Pixabay

At this year’s Google Code Next Hackathon, students used computer science to build applications that they hope will make a difference in the world, including a website to fight the housing crisis in San Francisco’s Bay Area, and projects to inform citizens of their rights when stopped by law enforcement and to highlight the gender pay gap.

Code Next (a Code With Google program) is a free computer science education program for Black and Latinx high school students. The program works in communities to inspire students, and to equip them with the skills and education necessary for careers in computer science.

At the two-day Hackathon, which took place this year in both Oakland and New York City in June, students used the knowledge learned in the classroom to come up with ideas, develop them and pitch prototypes.

This year, students were challenged to develop a mobile or web application that addressed social justice, inequality or the environment. Day one of the Hackathon centred on ideas, while day two focused on coding and preparation for the pitch, which occurred at the end of the day.

Both of this year’s winners addressed the environment. In Oakland, Code Next students Adesina Taylor, Luis Sanchez, Jacob Sonhthila, Xzavier Ceja and David Ung took home the first place prize. The team, who called their project “STEN,” created a web application that allows users to buy and distribute stone paper, an alternative to paper made from wood, as a means to fight deforestation.

In New York, students Mohammad Hasan, Mohammed Ibrahim, Andy Asante, Alexander Leonardi and Rafid Almustaqim won first prize with a mobile application, “NextGen Carbon,” that tracks pollution levels. The app places users in competition with one another by tracking their day-to-day carbon emissions, encouraging them to reduce their numbers.

“We want to emphasize that there are people that know what global warming is,” Asante said in a blog post. “They just don’t know what causes it. Our app informs them.”

“We do discuss what we want to do for the world and how to save it, but we don’t usually pitch like this,” Merelis Peralta, a Code Next student, whose “Police Brutality” app won third place in New York City, said. “Having to pitch about how we want to help our community and make them safer opens our voice.”

“After trying Code Next, I found out that although [computer science] might be hard, it’s fun at the same time,” student Ayan Cooper said. “I want people to see that it’s meaningful.”

At the conclusion of the two days, the students celebrated their achievements, their hard work and the challenges they overcame as a team in front of their Code Next mentors, coaches, family and friends.

ESA: Earth’s close call with asteroid demonstrates need for more eyes in the sky

Image by Alexander Antropov from Pixabay

The European Space Agency (ESA) said on 2 August that the fly-by of a 100-metre-wide asteroid last month illustrates the need to increase Earth’s asteroid detection capabilities.

Dubbed “2019 OK”, the football field-sized asteroid came within 65 000 km of the Earth’s surface during its closest approach – about one fifth of the distance to the Moon. It was detected just days before it passed Earth, although archival records from sky surveys show it had previously been observed but wasn’t recognised as a near-Earth asteroid.
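The “one fifth of the distance to the Moon” figure is easy to sanity-check against the mean Earth–Moon distance of roughly 384,400 km:

```python
# Quick arithmetic check of the close-approach figure: 65,000 km as a
# fraction of the mean Earth-Moon distance of roughly 384,400 km.

CLOSE_APPROACH_KM = 65_000
EARTH_MOON_KM = 384_400

fraction = CLOSE_APPROACH_KM / EARTH_MOON_KM  # roughly 0.17, about one fifth
```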

Asteroids the size of “2019 OK” are relatively common, but on average hit Earth only once every 100,000 years. ESA said that its planned network of Flyeye telescopes will allow astronomers to detect risky space rocks early enough to provide warnings.

The ESA observed the asteroid just before its flyby, by requesting two separate telescopes in the International Scientific Optical Network (ISON) take images of the space rock. With these observations, asteroid experts at the ESA were able to extract precise measurements of the position and movement of the rocky body.

“With the ISON observations we were able to determine the distance of the close approach incredibly accurately,” explained Marco Micheli from the ESA’s Near-Earth Object Coordination Centre. “In fact, with a combination of observations from across the globe, the distance is now known to better than one kilometre!”

The asteroid was first discovered by the Southern Observatory for Near-Earth Asteroids Research (SONEAR) just a day before its close approach. Observations of “2019 OK” were independently confirmed by other observatories, including the Arecibo radar in Puerto Rico and a third telescope in the ISON network.

Since the discovery, researchers – knowing where the asteroid would have been and searching for it by eye – have found existing images in the Pan-STARRS and ATLAS sky survey archives. Both surveys had in fact captured the asteroid in the weeks before the flyby, but the slow-moving space rock appeared to shift only a tiny amount between images and was therefore not recognised.

“This ‘un-recognition’ of an asteroid, despite it being photographed, will be used to test the software going into ESA’s upcoming asteroid-hunting telescope, the Flyeye,” Rüdiger Jehn, ESA’s Head of Planetary Defence, said.

Eyes on the sky

Scientists know of – and are tracking – thousands of asteroids in the Solar System, so why was this one discovered so late? Unfortunately, there is currently no single obvious reason, apart from its slow motion in the sky before close approach.

“2019 OK” travels in a highly elliptical orbit, taking it from within the orbit of Venus to well beyond that of Mars. This means that the time it spends near Earth – where it is detectable with current telescope capabilities – is relatively short.

The ESA, the US National Aeronautics and Space Administration (NASA), and other agencies and organisations around the globe – both professional and amateur – discover new asteroids every day, which constantly increases scientists’ understanding of the number, distribution and movement of orbiting “rocky bodies”.

Asteroids the size of “2019 OK” are relatively common in our Solar System but hit Earth on average only every 100,000 years. Travelling in a highly elliptical orbit that takes it within the orbit of Venus, this asteroid won’t come close to Earth again for at least another 200 years.

Planetary Defence at the ESA

According to the ESA, its planned developments should mean that by 2030, Europe will be able to:

  • provide early warning for dangerous asteroids larger than 40 m in size, about three weeks in advance;
  • deflect asteroids smaller than 1 km if known more than two years in advance.

The ESA’s planned network of Flyeye telescopes is expected to significantly help in the global search for risky space rocks, which is necessary to provide early warnings. The agency’s Hera mission – currently being designed to test asteroid deflection for the first time – will look to develop the ESA’s capacity to knock asteroids off a dangerous path.

Apple to acquire majority of Intel’s smartphone business

Apple announced on 25 July that it would acquire the majority of Intel’s smartphone business in a US$1 billion deal that will see Apple net 2,200 employees as well as intellectual property, equipment and leases for producing 5G modems.

The transaction is expected to close in the fourth quarter of 2019, subject to regulatory approvals and other customary conditions, including works council and other relevant consultations in certain jurisdictions.

Combining the acquired patents for current and future wireless technology with Apple’s existing portfolio, the company said it will hold over 17,000 wireless technology patents, ranging from protocols for cellular standards to modem architecture and modem operation.

Intel will retain the ability to develop modems for non-smartphone applications, such as PCs, internet-of-things devices and autonomous vehicles, Apple said in a press release.

Bob Swan, Intel’s CEO, said the company “really only had one customer” – i.e. Apple – for modems anyway and that it’s happy to let said customer buy the business. This deal seems unlikely to bear fruit overnight; a plan to get an Apple-built modem into one product by 2021 is considered aggressive. Looking ahead, analyst Ben Bajarin says to watch for Apple to buy a baseband radio company next.

“This agreement enables us to focus on developing technology for the 5G network while retaining critical intellectual property and modem technology that our team has created,” Swan said in a statement.

“We have long respected Apple and we’re confident they provide the right environment for this talented team and these important assets moving forward,” he added. “We’re looking forward to putting our full effort into 5G where it most closely aligns with the needs of our global customer base, including network operators, telecommunications equipment manufacturers and cloud service providers.”

“We’ve worked with Intel for many years and know this team shares Apple’s passion for designing technologies that deliver the world’s best experiences for our users,” Johny Srouji, Apple’s senior vice president of Hardware Technologies, said.

“Apple is excited to have so many excellent engineers join our growing cellular technologies group, and know they’ll thrive in Apple’s creative and dynamic environment,” he concluded. “They, together with our significant acquisition of innovative IP, will help expedite our development on future products and allow Apple to further differentiate moving forward.”

Microsoft partners with OpenAI to develop artificial general intelligence

Image by Efes Kitap from Pixabay

Tech giant Microsoft recently committed a US$1 billion investment into OpenAI, a San Francisco-based research lab founded by Elon Musk and Sam Altman, becoming the company’s exclusive cloud provider as they work to build new Azure artificial intelligence (AI) supercomputing technology.

Under the terms of the deal, which was announced on 22 July, Microsoft will also serve as OpenAI’s preferred partner to commercialise its inventions.

Through the partnership, the two companies hope to further extend Azure’s capabilities in large-scale AI systems, accelerate breakthroughs in AI and power OpenAI’s efforts to develop artificial general intelligence (AGI).

AGI is typically understood to mean the intelligence of a machine that has the capacity to understand or learn any intellectual task that a human being can. In short, it’s the type of AI that we’re used to seeing in science-fiction movies: a computer with a consciousness that can think and feel in the same way as a flesh-and-blood human.

Microsoft Azure is a cloud computing service intended for building, testing, deploying, and managing applications and services through Microsoft-managed data centres.

The agreement will see Microsoft and OpenAI focus on building a new computational platform within Azure that will “train and run increasingly advanced AI models”, including hardware that builds on Microsoft’s supercomputing technology.

The companies hope that the results will “create the foundation for advancements in AI to be implemented in a safe, secure and trustworthy way and is a critical reason the companies chose to partner together”.

Advancements in the application of deep neural networks coupled with increasing computational power have led to AI-focused breakthroughs in vision, speech, language processing, translation, robotic control and gaming.

These systems work well for the specific problem they’ve been trained to solve but getting AI to address more complex problems that the world faces today – such as climate change – will require “generalization and deep mastery of multiple AI technologies”, Microsoft said.

OpenAI and Microsoft’s vision is for AGI to work with people to help solve currently intractable multidisciplinary problems.

“The creation of AGI will be the most important technological development in human history, with the potential to shape the trajectory of humanity,” Sam Altman, CEO of OpenAI, said in a statement. “Our mission is to ensure that AGI technology benefits all of humanity, and we’re working with Microsoft to build the supercomputing foundation on which we’ll build AGI.”

OpenAI believes that it is “crucial” that any such AI should be used “safely and securely”, and that the economic benefits should be “widely distributed”, Altman added.

Microsoft’s CEO, Satya Nadella, described AI as “one of the most transformative technologies of our time”, which he believes has “the potential to help solve many of our world’s most pressing challenges”.

“By bringing together OpenAI’s breakthrough technology with new Azure AI supercomputing technologies, our ambition is to democratize AI — while always keeping AI safety front and center — so everyone can benefit,” he said.

Altman started OpenAI in 2015 with Elon Musk, although the latter is no longer involved in the business. It currently operates as a capped-profit entity, from which investors can only expect up to 100x in returns, but it is unclear what the terms of the Microsoft investment will be, other than making Microsoft the exclusive provider of cloud services to OpenAI and having the two companies work together on new technologies.

HoloLens inventor and Microsoft Technical Fellow Alex Kipman tweeted that he was very excited about the new partnership, which suggests that we might see some mixed reality AI crossovers further down the line.

NASA selects 12 new lunar science, technology investigations

Image by Ponciano from Pixabay

The US National Aeronautics and Space Administration (NASA) said on 1 July that it has selected 12 new science and technology “payloads” that it expects to help it study the Moon and explore more of its surface as part of the agency’s Artemis lunar program.

The selected investigations and demonstrations are expected to help the agency send astronauts to the Moon by 2024 in preparation for sending humans to Mars for the first time.

They will go to the Moon on future flights through NASA’s Commercial Lunar Payload Services (CLPS) project which allows “rapid acquisition of lunar delivery services” for things that “advance capabilities for science, exploration, or commercial development of the Moon”.

According to NASA, many of the new selections incorporate existing hardware designed for missions that have already flown. Seven of the investigations are focused on answering questions in planetary science or “heliophysics” – the study of the effects of the Sun on our solar system – and five will “demonstrate new technologies”.

NASA’s plans for lunar exploration are based on a two-phase approach, focusing first on speed – landing astronauts on the Moon by 2024 – and then on establishing a “sustained human presence” on the Moon by 2028.

“The selected lunar payloads represent cutting-edge innovations, and will take advantage of early flights through our commercial services project,” Thomas Zurbuchen, associate administrator of NASA’s Science Mission Directorate in Washington, said in a statement.

“Each demonstrates either a new science instrument or a technological innovation that supports scientific and human exploration objectives, and many have broader applications for Mars and beyond,” he added.

Here are just a few of the selected investigations:

MoonRanger

This small, fast-moving rover can drive beyond communications range with a lander and then return to it, allowing investigations that stretch within one kilometre of a lander. MoonRanger will aim to continually map the terrain it traverses and transmit data for future improvements to its systems.

The principal investigator on this project is: Andrew Horchler of Astrobotic Technology, Inc., Pittsburgh.

Heimdall

Sharing a name with the Norse god and guardian of Asgard, Heimdall is a flexible camera system built for conducting lunar science using commercial vehicles. It will use a single digital video recorder and four cameras to model the properties of the Moon’s “regolith” – the soil and other material that makes up the top layer of the lunar surface – and “characterize and map geologic features” as well as potential landing or “trafficability” hazards, among other goals.

The principal investigator on this project is: R. Aileen Yingst of the Planetary Science Institute, Tucson, Arizona.

The Lunar Magnetotelluric Sounder

Using a flight-spare magnetometer – a device that measures magnetic fields – the Lunar Magnetotelluric Sounder is designed to characterize the structure and composition of the Moon’s mantle by studying electric and magnetic fields. The magnetometer in question was originally made for the MAVEN spacecraft, which is currently orbiting Mars.

The principal investigator on this project is: Robert Grimm of the Southwest Research Institute, San Antonio.

PlanetVac

PlanetVac is a technology for acquiring and transferring lunar regolith from the surface to other instruments that would analyse the material or put it in a container that another spacecraft could return to Earth.

The principal investigator on this project is: Kris Zacny of Honeybee Robotics, Ltd., Pasadena, California.

SAMPLR: Sample Acquisition, Morphology Filtering, and Probing of Lunar Regolith

SAMPLR is another sample acquisition technology that will make use of a robotic arm that is a flight spare from the Mars Exploration Rover mission, which included the long-lived rovers Spirit and Opportunity.

The principal investigator on this project is: Sean Dougherty of Maxar Technologies, Westminster, Colorado.

Peraton to acquire Solers Inc

Image by Arek Socha from Pixabay

Peraton, a US defence and intelligence provider, said on 17 June it had entered into a “definitive agreement” to acquire Solers, a satellite ground systems and cloud-based services company, as part of an attempt to boost the company’s national security initiatives.

The company said that the acquisition would “accelerate both near- and long-range growth opportunities and enhance Peraton’s ability to deliver highly differentiated space protection and resiliency solutions that directly support mission objectives and critical national security initiatives” but did not specify what this would mean in practice.

The combined capabilities would enable Peraton to “expand its offerings of innovative and agile end-to-end solutions that address the growing complexity of customer mission needs across both national security and civilian agency space & ground programs”, Peraton said.

In a statement, Peraton chairman, president and CEO Stu Shea said that the acquisition represented “an important step” for the company and that it would “significantly enhance” its ability to serve customers on “critical missions” by bringing together “some of the most proven and innovative space protection and ground operations technologies in the industry”.

“I’m excited to welcome the talented Solers team to Peraton, strengthening our already robust space portfolio, technical excellence and rapid innovation capabilities,” he added.

David Kellogg, president and CEO of Solers, described the partnership as “truly a strategic fit” and expressed his “full confidence” that the companies would continue to offer “high quality support” to their government clients.

“Through our combination with Peraton – a company with whom we have many shared values – our customers will have access to some of the best people and technologies available to address their critical missions and our employees will benefit from greatly expanded growth opportunities as part of this new company,” he concluded.

“Peraton’s transformational acquisition of Solers will accelerate the company’s presence in the high-priority, emerging space and communications markets,” Ramzi Musallam, CEO and Managing Partner of Veritas Capital, which owns Peraton, said. “This combination will create a differentiated platform, strengthening Peraton’s ability to provide mission-critical services and solutions to its dynamic customer base.”

Investment bank KippsDeSanto acted as the financial advisor to Solers and Macquarie Capital acted as financial advisor to Peraton for the deal.

NASA learns to search for life underground using “cave rover”

Image courtesy of Billy Brown on Flickr, under a Creative Commons 2.0 license

Engineers from the US National Aeronautics and Space Administration (NASA) recently visited lava tubes in northeastern California to test a rover that could be used to search for underground life on other planets, the agency said on 26 June.

Scientists predict that there are caves – known as lava tubes – beneath the surfaces of the Moon, Mars and Venus that are formed by flowing magma and covered in tiny crystals, and which could potentially host living organisms.

These lava tubes can stretch for miles. On other planets with weaker gravity, some caves could even be large enough to hold small cities. On worlds like Mars – too dry for life on the surface and with an atmosphere too thin to block dangerous space radiation – lava tubes could safely harbor potential life.

Beyond helping us pinpoint the best spots to search for life, these caves could bring us one step closer to a permanent presence on the Moon and safe exploration at Mars – the ultimate goal of NASA’s Artemis program.

On Earth, similar caves are home to complex ecosystems, all supported by microbes that “eat” rocks, converting them into energy for life. The scientists of the BRAILLE project believe such life could exist – or have once existed – in the caves of Mars as well.

Operated out of NASA’s Ames Research Center in Silicon Valley, the Biologic and Resource Analog Investigations in Low Light Environments (BRAILLE) team is developing the capability to detect life on the walls of volcanic caves from afar by venturing into North America’s largest network of lava tubes, with the goal of advancing efforts to search for life elsewhere in the universe.

Already, data from the team’s first field deployment is helping scientists understand the interactions between biology and geology in these volcanic caves. New science from the project will be presented this week at the Astrobiology Science Conference in Seattle.

“We don’t think there’s life to find on the Moon now, but some day the life on the Moon might be us,” Jennifer Blank, the principal investigator for the BRAILLE project, said in a statement. “And if I were going to the Moon, I’d want to go to a lava tube.”

“Orbital satellite data suggests that there are a lot of these lava tubes on Mars,” she added. “If there is life there, those tubes are a good place to look. And if there was life in Mars’ ancient past, that’s where it’s most likely to be preserved.”

The BRAILLE team made its first descent at Valentine Cave, one of over 750 at the Lava Beds National Monument in California, close to the state’s northern border. According to NASA, smooth walls around 15 feet high and walkways up to 70 feet wide make it a practical place to drive a rover, and its well preserved lava flow features are similar to what the agency expects to find inside Martian lava caves.

With the right lighting, the cave’s layers of microbial material and mineral deposits create a dazzling array of colors, but NASA’s cave rover – CaveR – can do even more with its scientific cameras and imaging tools.

These instruments take in small amounts of light that reflect off the cave wall’s surface, allowing scientists to identify chemical components that reveal signs of life. The rover also uses a laser scanner to map the subterranean caves.

BRAILLE’s three-week deployment involved sample collection from nine different caves, tackling scientific questions ranging from geochemistry to DNA sequencing. One result of this study is a working theory that the team calls the “Micro-Mineral Continuum,” describing how past and present microbial life appears in the caves.

Between two endpoints on this spectrum – from the walls being visibly bare rock to coated with colored films of microscopic life – are a range of different features, textures and secondary minerals created by the interactions of those microbes with the basaltic rock and water that drips down into the caves.

By studying the continuum further on future returns to Lava Beds and understanding the interplay between geology and biology in these caves, scientists will know what they’re looking at when rovers are one day sent to Martian caves.

Google to acquire Elastifile

Image by 377053 from Pixabay

Google said on 9 July that it had entered into a definitive agreement to purchase Elastifile – a provider of scalable, enterprise file storage for the cloud.

The acquisition is expected to close later this year and is subject to customary closing conditions, including the receipt of regulatory approvals. Upon close, Elastifile will join Google Cloud.

Elastifile uses a unique software-defined approach to managed Network Attached Storage (NAS) in order to tackle the challenges of file storage for enterprise-grade applications running at scale in the cloud.

This theoretically enables “organizations to scale performance or capacity without cumbersome overhead”. Google said it is “excited” to build on this technology and integrate Elastifile with Google Cloud Filestore.

It expects that the “combination of Elastifile and Google Cloud will support bringing traditional workloads into [Google Cloud Platform] faster and simplify the management and scaling of data and compute intensive workloads”.

The company said it believes the combination will “empower businesses to build industry-specific, high performance applications that need petabyte-scale file storage more quickly and easily” which it claims is “critical” for the media, life sciences and manufacturing industries.

Earlier this year, Google launched Elastifile File Service on Google Cloud Platform, a fully-managed version of Elastifile integrated with Google Cloud, with customers including Appsbroker, eSilicon and Forbes.

“The integrated circuit (IC) design process can produce a wide spectrum of compute and storage requirements,” Naidu Annamaneni, CIO and VP of Global IT at eSilicon, said in a statement. “This can translate into thousands of cores and petabytes of storage for some portions of the IC design. The combination of Elastifile and Google Cloud provides the scale and performance that we need to successfully deliver these ICs on time and on budget.”

“Helping our customers solve difficult storage challenges for their most critical workloads has enabled these enterprises to unleash the full benefits of the cloud,” Erwan Menard, CEO at Elastifile, added. “We’re excited to join Google for the next part of our journey, building on the success we’ve had together over the past two and a half years. File storage is essential to enterprise cloud adoption and, together with Google, we are well-positioned to serve those needs.”

“In recent years, we’ve seen enterprises increasingly deploy traditional applications as well as new performance sensitive applications to the cloud,” Deepak Mohan, Research Director at IDC, said. “These applications require on-premises level of performance for latency and consistency alongside of the scalability benefits of the cloud.”

“The acquisition of Elastifile will better enable Google Cloud customers to meet this mix of needs, as they deploy such workloads to the Google Cloud Platform,” he concluded.