Category Archives: Tech

Amazon to enter Hong Kong cloud market amid competition with Chinese rivals

Amazon Web Services (AWS), the online retail giant’s cloud services arm, announced on 24 April the opening of its new Asia Pacific region in Hong Kong.

Users will be able to “leverage” the new region to “run their applications locally, serve end-users across Hong Kong with lower latency, and leverage advanced technologies from the world’s leading cloud with the broadest and deepest suite of cloud services to drive innovation”.

Previously, local AWS customers in Hong Kong were forced to store their cloud data in other locations in the Asia Pacific area, such as Singapore or Tokyo.

AWS regions are comprised of “availability zones” that the company defines as “technology infrastructure in separate and distinct geographic locations with enough distance to significantly reduce the risk of a single event impacting business continuity, yet near enough to provide low latency for high availability applications”.

Each zone has “independent power, cooling, and physical security” and is connected using “redundant, ultra-low-latency networks”, the company said.

The new Asia Pacific region will have three availability zones, allegedly allowing customers to “achieve even greater fault-tolerance” and enabling organisations to “provide lower latency to end users” in Hong Kong and across the Asia Pacific area.
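
For developers, the new region is just another value that can be passed to AWS’s SDKs once it has been enabled for their account (newer regions such as this one are opt-in). Below is a minimal sketch using the boto3 Python SDK – and assuming the Hong Kong region code is ap-east-1 – to list its availability zones:

```python
# Illustrative sketch using the boto3 SDK; assumes credentials are configured,
# the region has been enabled for the account, and its code is ap-east-1.
import boto3

ec2 = boto3.client("ec2", region_name="ap-east-1")
response = ec2.describe_availability_zones(
    Filters=[{"Name": "state", "Values": ["available"]}]
)
for zone in response["AvailabilityZones"]:
    print(zone["ZoneName"], zone["State"])
```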

AWS notes that local customers in Hong Kong will be able to store content in the region “with the assurance that [it] will not move without [their] consent”.

Companies use cloud computing to buy, sell, lease, and distribute software and other digital resources on-demand over the internet, while journalists use the cloud to securely store information. These resources are managed inside data centres.

“Hong Kong is globally recognized as a leading financial tech hub and one of the top places where startups build their businesses, so we’ve had many customers asking us for an AWS Region in Hong Kong,” Peter DeSantis, Amazon Web Services’ Vice President of global infrastructure and customer support, said in a statement. “The dynamic business environment that exists in Hong Kong – among start-ups, enterprises, and government organizations – is pushing them to be one of the foremost digital areas in Asia.”

“By providing an AWS Region in Hong Kong Special Administrative Region, we hope this enables more customers to be more agile, innovate, and transform their end-users’ experience for decades to come,” he added.

Nicholas W. Yang, Secretary for Innovation and Technology for the Hong Kong Special Administrative Region Government, said: “We are delighted to see the official launch of the AWS Asia Pacific (Hong Kong) Region, and it comes at a time when we are embracing digital transformation and developing into an international innovation and technology hub.”

Yang described data as the “new currency of the digital economy and a new fuel for innovation”, adding that AWS’ new region will be “an integral component to foster technology advancements, allowing for greater innovation and further facilitating [Hong Kong’s] digital transformation”.

“The opening of the AWS Asia Pacific (Hong Kong) Region also enhances our . . . position as a data hub and puts Hong Kong in a strong position to lead the next wave of innovation in data-related technologies,” he concluded.

The launch comes as competition in Hong Kong heats up with the company’s Chinese rivals, Alibaba Cloud and Tencent Cloud, which opened their own operations in the city in 2014 and 2017, respectively.

A recently released report found that Alibaba Cloud was the market leader for infrastructure services in the Asia Pacific area in 2018, commanding nearly 20 percent of the market, compared to 11 percent for AWS and Microsoft’s 8 percent.

The company, a subsidiary of e-commerce giant Alibaba Group Holding, is currently the world’s third-largest cloud services provider and reportedly holds the biggest share in China’s cloud market.

Last month, The Wall Street Journal reported that China was considering a “liberalization pilot” in one of its free trade zones that would allow foreign cloud computing providers to operate without the local partner they are currently required to use in order to offer cloud services that comply with Chinese law.

Facebook in trouble over insecure Instagram passwords

Facebook’s mastery of the “news dump” is only getting more and more impressive as the social network attempts to weather a series of scandals that would most likely have brought other companies to their knees some time ago.

On the Thursday before Easter – aka a major holiday weekend in the USA – and a mere hour before the hotly anticipated Mueller report was released to the public, Facebook updated a months-old blog post entitled “Keeping Passwords Secure” with a couple of lines of italicized text.

The update read: “Since this post was published, we discovered additional logs of Instagram passwords being stored in a readable format. We now estimate that this issue impacted millions of Instagram users”.

You may remember that the original post revealed to the public in March that Facebook had stored passwords for hundreds of millions of its users and “tens of thousands” of Instagram users (Instagram is owned by Facebook) as plain text in a database that could be accessed by over 20,000 of the company’s staff.

Most passwords are hashed – irreversibly scrambled – before storage so that no one can read them, even if the file they are kept in is compromised (e.g. by hackers). Storing them in a plain text format, however, means that any employee with access to that database could, hypothetically, have read users’ passwords and accessed the affected accounts.
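
To illustrate the difference, here is a minimal sketch – using Python’s standard library, not Facebook’s actual system – of how passwords are normally stored as salted hashes rather than readable text:

```python
# Minimal illustration of salted password hashing with the standard library;
# production systems typically use dedicated schemes such as bcrypt or scrypt.
# This is not Facebook's implementation.
import hashlib
import hmac
import os

def hash_password(password: str):
    salt = os.urandom(16)  # unique random salt per user
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return salt, digest    # only the salt and hash are stored, never the password

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 200_000)
    return hmac.compare_digest(candidate, digest)  # constant-time comparison

salt, digest = hash_password("correct horse battery staple")
print(verify_password("correct horse battery staple", salt, digest))  # True
```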

Back in March, Facebook claimed it was a system issue that had subsequently been fixed. However, the updated blogpost informed readers of a significant increase in affected users, although a company investigation reportedly found no evidence that the information had been abused.

Facebook said that it would notify all users who had been affected by the lapse in security, and experts have recommended that all users change their passwords and set up two-factor authentication for their accounts.

A year of scandal

The company is already under investigation by US government agencies – such as the Department of Justice and the Federal Trade Commission – for its data-collection and privacy practices in the wake of the Cambridge Analytica scandal.

Last March, journalists revealed that Facebook had shared users’ data with an outside app developer without their permission back in 2014. The developer then sold the information to Cambridge Analytica, a data analytics firm that would go on to work with Donald Trump’s 2016 US presidential campaign.

At the time, it was not against Facebook’s rules for the app developer to collect the information but they were not allowed to sell the data. The incident raised serious data privacy concerns and left Facebook trying to explain (and, in some cases, justify) its data collection practices – which have since been changed.

Facebook and the art of the “news dump”

The social media network didn’t just wait for one of the biggest events in recent political history – and one that was certain to divert journalists’ attention in a big way – to alert users to the widened scope of the security breach; it also chose to bury the news in an old press release.

This kind of timing trick is known as a “news dump”, a tactic typically employed by communications departments in both private sector firms and governments, to bury negative news about topics such as hacks, mishandling of customer data, and bad behavior of executives, politicians, or other high profile individuals.

There are a few versions of this tactic, the most common of which is releasing news on a Friday afternoon after the markets close, so that investors have time to digest the news without the stock taking a hit and journalists are already heading out for the day.

Sometimes, companies will also hold on to bad news, only sharing it when an unrelated, massive story breaks and has the public’s – and journalists’ – attention. Facebook’s favored method is releasing bad news just before a holiday – as they did in this case.

For example, the company released its tool allowing users to see if they have been exposed to Russian propaganda on the Friday before Christmas 2017, and on the night before the US midterm elections last October, it put out a report saying it had failed to do enough to prevent its use to fuel bloodshed and political division in Myanmar.

For an organization fighting claims of impropriety and careless handling of information, the use of news-burying techniques seems risky at best and threatens to further damage the company, especially if the tactic stops working. Instead of protecting Facebook’s reputation, it risks making the company look more suspicious.

Business Insider calls for lawmakers to regulate facial recognition technology

Facial recognition technology has already been implemented in US airports in an effort to improve security and efficiency but there are concerns that the American government will use it to create a “digital ID library of millions of Americans without consent”, according to news website Business Insider.

The New York-based website argued that the accuracy rate of facial recognition could be “almost perfect” if it is fed good enough data but that without clear protections it could theoretically be used as a tool to violate human rights.

While facial recognition can be used to automate some tasks, such as checking-in at the airport or finding missing persons, making them more convenient and efficient, there needs to be regulation and specific protections in place to ensure that it is used ethically.

The US Customs and Border Protection (CBP) agency, which provides security at airports, claims that it deletes photographs taken for facial recognition purposes after 12 hours, and that US citizens can opt out of the technology by going through the check-in procedure manually in the old-fashioned way. However, Business Insider notes that many passengers may not know that the opt-out exists.

Airports are not the only institutions implementing this technology. The US Immigration and Customs Enforcement (ICE) agency is using it for security reasons, and China is using it on a larger scale for mass surveillance and to give citizens “social scores” that supposedly improve society.

A citizen’s score is based on their economic and social reputation, and can be affected by a number of factors, such as jaywalking, bad driving, or posting fake news online as well as a whole host of more innocuous actions.

Facial recognition can be used to track a targeted individual’s behavior as they go about their daily business, especially in cities with a large number of CCTV cameras, where their face can be linked to government records and social networks.

Business Insider reported that citizens have “reported concerns about their privacy and the lack of checks and balances on this system”, which can result in rewards for behavior that is deemed to be good or punishments for behavior deemed to be bad.

The publication called for “more terms and services and guidelines and regulations that will help protect the rights, security, safety, and privacy of the people who are affected by the technology” before its use becomes more widespread.

How does facial recognition work and how good is it really?

Facial recognition exited the realm of science-fiction and entered our daily lives some time ago. It may sound like something out of Star Trek but it’s not a new thing and the technology behind it is actually pretty simple.

It’s the same technology that Facebook uses to identify people in photographs and that allows you to unlock your screen by merely showing it your face. It measures and records your facial features using a deep learning-based method, and is allegedly more likely to recognize you than another real-life human being would be.
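
As a rough sketch of the underlying idea – not any particular vendor’s pipeline – a deep model reduces each face to a numeric “embedding” vector, and two photos are judged to show the same person when their vectors are close enough:

```python
# Toy illustration of embedding-based face matching; a real system would get
# these vectors from a trained neural network, not random numbers.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def same_person(embedding_a, embedding_b, threshold=0.8) -> bool:
    # Faces whose embeddings point in nearly the same direction count as a
    # match; the threshold tunes the trade-off between false matches and misses.
    return cosine_similarity(embedding_a, embedding_b) >= threshold

enrolled = np.random.rand(128)                             # stored face embedding (stand-in)
probe = enrolled + np.random.normal(scale=0.05, size=128)  # a new photo of the same face
print(same_person(enrolled, probe))
```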

However, the technology is only as good as the databases it can access, which means that it is more likely to recognize individuals belonging to groups that are regularly featured in those databases. Since the US typically does not have databases for African people, Business Insider noted, facial recognition often appears racist and is frequently accused of containing pre-programmed biases.

The good news is that the technology isn’t subject to the same recognition biases as our own brains; humans are more likely to accurately identify people of their own race, a problem which computers – which obviously do not have a race – do not share.

Apple expands global recycling programs with iPhone disassembling robot

 

Californian tech giant Apple announced a “major expansion” of its recycling programs on 18 April, with plans to quadruple the number of locations where some customers can send their old iPhones to be disassembled by its recycling robot, Daisy.

In a press release, the company said that Daisy would disassemble and recycle select used iPhones returned to Best Buy stores in the US and KPN retailers in the Netherlands. Eligible devices can also be turned in at any Apple Store or via apple.com for recycling as part of the Apple Trade In program.

The company said it had received almost 1 million devices to be recycled through Apple programs and claimed that each Daisy robot can disassemble 1.2 million devices per year. Last year, Apple allegedly refurbished over 7.8 million devices and “helped divert more than 48,000 metric tons of electronic waste from landfills”.

Daisy is now capable of disassembling fifteen different iPhone models at a rate of two hundred per hour, Apple said, allowing the company to recover “even more important materials for re-use”, which are recycled back into the production process. For example, cobalt is a key battery material that is “for the first time” being recycled to make brand-new Apple batteries.

Apple also uses one hundred percent recycled tin in a key component of the main logic boards of eleven different products, the company claimed, and an alloy made from one hundred percent recycled aluminum “allows the new MacBook Air and Mac mini to have nearly half the carbon footprint of earlier models”.

Apple’s network of stores and Authorized Service Providers has grown to over 5,000 locations worldwide. Last fall, Apple rolled out a new method for optimising iPhone screen repairs that allows thousands more independent shops to offer the service. Apple also launched a battery replacement and recycling programme for all of its products.

“Advanced recycling must become an important part of the electronics supply chain, and Apple is pioneering a new path to help push our industry forward,” Lisa Jackson, Apple’s vice president of Environment, Policy, and Social Initiatives, said in a statement. “We work hard to design products that our customers can rely on for a long time. When it comes time to recycle them, we hope that the convenience and benefit of our programmes will encourage everyone to bring in their old devices.”

Future recycling processes

Apple also announced the opening of its new Material Recovery Lab, which is “dedicated to discovering future recycling processes” and will “look for innovative solutions involving robotics and machine learning to improve on traditional methods” such as “targeted disassembly, sorting and shredding”.

The new 9,000-square-foot facility will be located in Austin, Texas, and will work with Apple engineering teams as well as academics to “address and propose solutions to today’s industry recycling challenges”, including a continued effort to “ensure devices are used for as long as possible”.

The announcement coincides with the release of Apple’s 2019 Environment report, which contains additional information on the company’s climate change solutions, including a recent announcement that 44 of its suppliers have committed to 100 percent renewable energy for their Apple production.

Study: Most Americans are more concerned with data privacy than healthcare

A study conducted by market research firm The Harris Poll found that the most pressing issue on the minds of Americans in 2018 was data privacy, followed closely by healthcare, veterans support, and education.

Alongside Finn Partners, a marketing communications company, The Harris Poll surveyed over 2,000 adults (people over 18 years of age) across the country and identified social issues that they wanted private sector companies to address.

This data was then used to inform the development of The Harris Poll’s “Societal ROI Index”, which seeks to “better understand the social role and image of highly visible companies”. The firm describes the index as a “new metric and diagnostic tool to understand a company’s reputation relative to social good”.

Respondents were asked about issues they believed companies should address and issues they believed that companies were actually “making an impact on”. The data was then used to evaluate the gap between “perceived impact and importance” of each issue, essentially testing how well major companies are actually meeting these social needs.

This is known as the Societal Return on Investment (SROI) Index, which the firm uses to score and rank the most visible companies according to the public’s perception of their work to drive positive change, quantifying the relationship between a corporation’s perceived social contribution and its bottom line.

According to 24 percent of respondents to the survey, the area that companies have the most positive impact on is job creation, whereas 65 percent of respondents said that the area they most believed companies should be trying to address is data privacy.

A further 61 percent of respondents said that the second most pressing issue for companies to consider was healthcare, followed by supporting military veterans (59 percent of respondents) and education (56 percent of respondents).

Conversely, just 22 percent of respondents said that companies were making a positive impact on veterans support and just 18 percent said they thought companies had a positive impact on education. Less than 18 percent believed that companies were adequately making a difference in areas such as hunger, sexual harassment, LGBTQ+ rights, or immigration.
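
Using the figures quoted above, the “gap” the index describes is simply the difference between how important respondents say an issue is and how much impact they think companies are having on it. A rough back-of-the-envelope version – not The Harris Poll’s actual weighting or methodology – might look like this:

```python
# Back-of-the-envelope illustration of the importance-vs-impact gap using the
# survey percentages quoted in this article; not the actual SROI methodology.
survey = {
    # issue: (% saying companies should address it,
    #         % saying companies are making a positive impact)
    "veterans support": (59, 22),
    "education": (56, 18),
}

for issue, (importance, impact) in survey.items():
    gap = importance - impact
    print(f"{issue}: importance {importance}%, perceived impact {impact}%, gap {gap} points")
```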

Clearly, there is still a significant gap between the areas that people believe need improvement and those in which they say companies are making a significant impact. However, it is important to remember that this survey reflects respondents’ opinions as opposed to providing objective metrics for companies’ performance in these areas.

Companies in the grocery and technology industries tended to score better, including Wegmans, a privately owned American supermarket chain; outdoor clothing brand Patagonia; and UPS, a well-known multinational package delivery and shipping company.

Other companies featuring in the top ten included Aldi, Microsoft, Lowe’s, Tesla Motors, and Kellogg Company, with Amazon.com, Whole Foods Market, IBM, and Berkshire Hathaway breaking into the top twenty.

In an interview with USA Today, Wendy Salomon, Managing Director of Corporate Reputation at The Harris Poll, suggested that these companies performed well because they “evoke comfort” and become “part of your daily regime”.

In order to improve its social score a company must enact systemic change, she added, by “using employees to bring about positive change for decades” rather than just “galvanizing to take a stance in the moment”.

“We know that companies are making big financial investments to help public perception, but perhaps that message isn’t getting out,” Salomon opined. “If you’re a company that’s operating on a national scale—if your values are not coming across to the public then this is something you should think about.”

Voice Search

With ComScore projecting that “50% of all searches will be voice searches by 2020”, it is safe to say that voice is the future. Speech recognition technology has only recently entered public consciousness, driven by the amusement of having a machine that can understand us.

Although there was a point when people typed keywords into search engines themselves, more recent developments in technology have led to a change in trends. This shift to voice search is being driven largely by smart speakers powered by assistants such as Alexa or Siri. While research suggests that more than 20% of the U.S. population already has access to these smart speakers, others are increasingly tempted to buy one as tech giants continually bring forth more appealing features.

Digital assistants like Siri and Cortana initially existed only on smartphones. However, this concept has been decentralized over the past few years as we experience an age of ‘hyper-adoption’, which suggests that consumers are now willing to adopt new technologies much faster than before. With every passing day, the shift to voice search will only accelerate.

While voice search has been around for many years as a smartphone feature, the primary focus today is on voice-activated home speakers. Residing in a corner of a consumer’s living room, these speakers act as a gateway to the proliferation of smart devices. Home speakers can not only serve their primary purpose but also be used to control other devices such as lights, switches, TVs or thermostats. In addition, users will soon be able to control smart fridges, mirrors, and smoke alarms through this one central device.

In today’s modern world, technology has become genuinely useful in the accomplishment of daily tasks. Alongside assisting you with reminders, these voice-controlled devices can be asked simple questions – such as what the weather will be like tomorrow – and will respond with a short, spoken summary. Even though voice search technology is not perfect yet, speech recognition has reached an acceptable level of accuracy for most people.

It is safe to say that we are still at some distance from maximizing the use of speech recognition technology. Alongside further sophistication in the technology itself, it will also take some time for humans to learn to integrate this new development in their lives effectively. Although speech recognition remains confined to only a few selected devices, the rate of progress in this field is quite commendable.

At this point in the revolution, business owners must be quick in adopting new technological advancements. Since technology brings so much change to people’s lives, it is highly important for brands and organizations to ensure that their websites are optimized for voice search. Search engine optimization (SEO) methods will feel the impact of voice search technology as people speaking a query tend to use natural language rather than the short, specific keywords they would type.

While it is safe to say that voice technology hasn’t yet taken over the way we think, research suggests that it soon will. Technological developments have become much bigger and quicker than in the past, and the best way to survive in this advanced world is to integrate these technologies into our lives as quickly as possible.

The Best New Technologies of 2019

It’s not at all unreasonable to say that 2018 was a big year for innovation in technology on a global scale, a trend that is expected to continue well into 2019 and beyond.

At the start of the year, members of the Forbes Technology Council shared their predictions for upcoming trends, including increased automation and connected devices, a comeback for blockchain, better human/AI collaboration, upgrades to cybersecurity, more technological convergence and a solution to the recent backlash against tech caused by negative headlines about its effect on democracy and interpersonal relationships.

Here are some of the new advancements in tech that we have already seen this year and can hopefully look forward to in the coming months. Judge for yourself whether they meet the optimistic expectations of the experts at Forbes…

LG Signature OLED TV R9

This roll-up television takes compact to a whole new level with a 65-inch screen that retracts into a small box. Neither the price (which is sure to be high) nor the full specs have been released yet but what we do know looks promising. You’ll get a 4K HDR Smart TV-watching experience like no other with both Google Assistant and Alexa, and it takes up next to no room in your living space.

The Impossible Burger 2.0

While this isn’t strictly a tech product, Impossible Foods launched the next evolution of its soy-based meat replacement at CES 2019, the Consumer Technology Association’s world-renowned annual tech trade show. Winner of CES’s “best of the best award”, the plant-based Impossible Burger claims to perfectly simulate meat-based burgers, making it a must for vegetarians and vegans who still enjoy the taste of meat. According to Impossible Foods CEO Pat Brown, the original recipe “was great” but the new one “will blow people’s minds”. This isn’t Quorn, people.

Mophie Juice Pack Access

At US$119.95, this portable charging case for your Apple iPhone isn’t exactly cheap but it’s worth every cent for one simple reason: it doesn’t use or cover up your Lightning port. That means you can finally charge your phone on the go while listening to headphones. No more interrupting your podcast or audiobook when you start to run out of juice. One of the most practical pieces of tech announced at CES, Mophie’s Juice Pack Access will give you up to 31 hours of battery life – and it’s strong enough to protect your phone.

Sony WH-1000XM3 headphones

Consistently ranked as the best noise-cancelling and wireless headphones on the market right now, Sony’s WH-1000XM3 headphones represent some of the renowned audio company’s best work yet. As always, they win on sound quality and design, exemplified by easy-to-use touch controls on the right ear and a battery life of up to 30 hours. And they come with USB-C rapid charging instead of the now old-fashioned (and relatively slow) microUSB connection that the previous model used. They’re not cheap but they are worth it…

Sony Glass Sound Speaker

In another win for Sony, this wireless speaker straddles the line between mainstream and quirky with a unique design and an attractively delicate tone. Basically, it’s made out of glass set into a base and looks like some kind of tall, skinny lamp. The glass vibrates and its resonance creates its signature sonorous sound, while an internal light can be set to flicker like a candle or to serve old-school oil lamp vibes. Effortlessly Pinterest-worthy and perfect for a romantic dinner.

Technics SL-1210 turntable

This updated version of Technics’ classic turntable for DJs harks back to the excellent build-quality and reliability of its much-loved seventies ancestor. The new model features a cool, matte-black finish alongside neat features such as the ability to spin the disk in the opposite direction and a long-life white LED positioned to make it easier to see the tip of the stylus in dim or no light. It also looks super cool with a retro design that reflects the styling of the original, described by some as one of the most renowned products in the history of music.

Volta Mookie

Add this to the list of things you didn’t know you needed but definitely do. The Volta Mookie is a pet feeder with a twist: it has AI-based facial recognition so your cats can’t steal each other’s dinner anymore. It’s also useful if you have two pets of the same species with specific – but different – dietary needs and prevents strays (or wild animals) from eating your pet’s food if you leave it outside. It has two separate feeding bowls and a front-facing camera, and should go on sale this fall.

A software bug caused Boeing’s new plane to crash twice

US-based airplane manufacturer Boeing officially took responsibility on 4 April this year for the two crashes of their new 737 Max jets, in an attempt to get the planes approved to fly again after they were grounded by officials in multiple countries around the world.

The company admitted that it had found two different flaws in the plane’s software – the second of which was reportedly unrelated to the crashes – that it needs to fix, which will delay the process of getting the planes back into the air.

Boeing said that it has a plan in place to replace the faulty software and eliminate the problem but regulators – such as the US’s Federal Aviation Administration (FAA) – will still need to clear the plane to fly (which raises the question of why the software flaw slipped past regulators in the first place).

The scandal was thrust into the public eye on 10 March when Ethiopian Airlines flight 302 from Addis Ababa (Ethiopia) to Nairobi (Kenya) crashed soon after take-off, killing all 157 people on board just months after a Lion Air flight of the same model crashed after taking off from Jakarta (Indonesia), killing all 189 passengers.

US President Donald Trump suggested in a tweet posted on 15 April that the company should “rebrand” the plane by changing its name after fixing the flawed software, and adding some “great additional features”. Trump has taken a keen interest in the saga, lobbying for the planes to remain in the air, and the US was one of the last countries to ground the 737 Max despite the obvious safety concerns involved.

His advice seems unlikely to be well-received as branding really isn’t Boeing’s problem here: it’s the automated software system that is believed to have been at issue in both crashes, specifically the plane’s Manoeuvring Characteristics Augmentation System (MCAS), an anti-stall system that, unless overridden, can allegedly make it difficult for pilots to control the 737 Max.

While a preliminary report on the Ethiopian Airlines crash did not assign blame – and it is not yet definitively known whether the MCAS or pilot error was at fault – investigators have said that the pilots were correctly following Boeing’s procedures.

“The full details of what happened in the two accidents will be issued by the government authorities in the final reports, but, with the release of the preliminary report of the Ethiopian Airlines. . . accident investigation, it’s apparent that in both flights the [MCAS] activated in response to erroneous angle of attack information,” Boeing CEO Dennis Muilenburg said in a statement.

“The history of our industry shows most accidents are caused by a chain of events. This again is the case here, and we know we can break one of those chain links in these two accidents,” he added. “As pilots have told us, erroneous activation of the MCAS function can add to what is already a high workload environment.”

It was Boeing’s “responsibility to eliminate this risk,” he said, adding: “we own it and we know how to do it.”

But the real scandal here may not be the software bug at all but the rivalry that allegedly spurred Boeing to cut corners when developing the 737 Max. With a whopping 38% of market share (in 2016), Boeing is one of the top aircraft manufacturers in the world. Its main competitor, European manufacturer Airbus, comes in a close second with a 28% share of the market, and the two companies share a fierce rivalry.

In 2010, Airbus announced an update to the A320, their most popular single-aisle aircraft which services many domestic flights in the US. The new version, dubbed the A320neo, would have a new, larger engine that was 15% more fuel efficient and the aircraft’s operation would not change enough to require pilots to undergo much retraining, saving airlines a bucketload of money.

This posed a problem for Boeing which moved to upgrade the engine on their own single aisle plane – the 737-800 – in order to compete with Airbus. However, the 737-800 didn’t have enough room for a new, larger engine as it sat too close to the ground.

The company attempted to fix this issue by moving the engine higher on the new model, which they named the 737 Max. Like Airbus, Boeing claimed that pilots would need only minimal retraining as it was allegedly almost indistinguishable from the 737-800.

The plane sold incredibly well and helped the company to compete with Airbus but the new engine placement had a side effect: the nose of the plane tended to point too far upward during take-off, which could lead to a stall.

Boeing chose not to reengineer the plane, instead installing software that would push the nose downward if it was flying at a higher angle in order to force it to behave like the original model. This was the MCAS.
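
As a deliberately simplified illustration of the general idea – and emphatically not Boeing’s actual flight-control code – the logic amounts to commanding nose-down trim whenever the measured angle of attack exceeds some threshold, which is how a single faulty sensor reading can repeatedly push the nose down:

```python
# Deliberately simplified, hypothetical sketch of an angle-of-attack-triggered
# nose-down command. This is NOT Boeing's MCAS implementation; real
# flight-control software is vastly more complex and safety-critical.
AOA_THRESHOLD_DEG = 15.0   # hypothetical angle-of-attack limit
TRIM_STEP_DEG = 0.5        # hypothetical nose-down trim increment

def stabiliser_command(measured_aoa_deg: float, pilot_override: bool) -> float:
    """Return a trim adjustment in degrees (negative = nose down)."""
    if pilot_override or measured_aoa_deg <= AOA_THRESHOLD_DEG:
        return 0.0
    return -TRIM_STEP_DEG

# A sensor stuck at an erroneously high reading keeps re-triggering the
# correction, illustrating how bad data alone can drive the nose downward.
faulty_readings = [22.0, 22.0, 22.0, 22.0]
total_trim = sum(stabiliser_command(aoa, pilot_override=False) for aoa in faulty_readings)
print(f"Cumulative nose-down trim commanded: {total_trim} degrees")
```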

As Boeing was selling the planes as virtually the same as the old model, they didn’t highlight the new system, and regulators cleared the plane to fly without requiring pilots to undergo more than minimal retraining – training that did not mention the MCAS. The first sign of trouble was reports from pilots that the planes were suddenly nosing down without any warning and then, on 29 October 2018, the first crash occurred.

Artificial intelligence technologies for smart healthcare

Would you trust a computer to correctly diagnose a health problem? Most of us would probably prefer to leave it in the hands of our highly trained general practitioner, emergency room doctor or surgeon. The narrative concerning the intersection between artificial intelligence (AI) and healthcare is often grossly distorted towards one extreme or another: either the robots are coming to kill us and steal our jobs or they herald some new utopian era and represent the only possible source of future prosperity for the human race. Reality – as in most instances – is far more nuanced and probably lies somewhere in between these two extremes.

We’re a long way from developing Star Trek-esque androids that can perfectly simulate human behaviour and supplant your current, fully human doctor. However, there are a few ways in which AI has already begun to supplement your friendly neighbourhood doctor’s practice and a few more in the pipeline…

Wearables

Consider the humble Fitbit. We’re not entirely sure that fitness trackers count our steps correctly all of the time or get our heart rate right, but they’re increasingly popular and there is evidence that they do work. They monitor our fitness levels, warn us when we need to get more exercise and can also record abnormalities such as heart palpitations, potentially saving lives.

The information they record can be shared with healthcare professionals and AI systems to be analysed, giving doctors a more accurate picture of the habits and needs of their patient, especially when supplemented with medical histories and other useful patient information. This allows doctors to more carefully and accurately tailor treatments, rendering them increasingly more effective.

However, critics are concerned that this information could also be used by companies to discriminate against their employees should the data be used unethically. Experts have also voiced concerns about invasion of privacy if the data collected and stored by manufacturers of fitness trackers is either hacked or sold.

Machine learning

Healthcare professionals have already begun to use machine learning-based applications, support vector machines and optical character recognition programs such as MATLAB’s handwriting recognition technology and Google’s Cloud Vision API to assist in the process of digitising healthcare information. This helps to speed up diagnosis and treatment times as healthcare professionals are able to more quickly access complete sets of records on their patients.
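
As a rough sketch of how such a digitisation step might look – assuming Google’s Cloud Vision Python client library and a scanned document image; this is not any hospital’s actual pipeline:

```python
# Hedged sketch: extracting text from a scanned record with the
# google-cloud-vision client library. Assumes credentials are configured;
# not a production healthcare workflow.
from google.cloud import vision

def extract_text(image_path: str) -> str:
    client = vision.ImageAnnotatorClient()
    with open(image_path, "rb") as f:
        image = vision.Image(content=f.read())
    # document_text_detection is intended for dense text such as scanned pages
    response = client.document_text_detection(image=image)
    if response.error.message:
        raise RuntimeError(response.error.message)
    return response.full_text_annotation.text

print(extract_text("scanned_record.png"))  # hypothetical file name
```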

The Massachusetts Institute of Technology (MIT) Clinical Machine Learning Group is leading the pack in developing the next generation of intelligent electronic healthcare records by developing applications with built-in AI – specifically machine learning capabilities – that can help with the diagnostic process. In theory, this will allow healthcare professionals to quickly make clinical decisions and create individual treatment plans tailored to their patients.

According to MIT, there is an ever growing need for “robust machine learning [that is] safe, interpretable, can learn from little labelled training data, understand natural language, and generalize well across medical settings and institutions”.

Smart algorithms

The term “AI” is somewhat misleading as it implies something more than the technology that we currently use it to describe. We don’t literally mean artificial intelligence – no true AI has been invented yet – but advanced algorithms that run on ever more powerful computers and can recognise patterns, pick information out of complex texts or even derive the meaning of an entire document from just a few sentences. This is known as artificial narrow intelligence (ANI) and comes nowhere close to artificial general intelligence (AGI) – aka the next step in developing a fully conscious AI or “superintelligence” – that can abstract concepts from limited experience and transfer knowledge from one place to another.

However, natural language processing and computer vision – the two main applications of ANI – are developing phenomenally quickly. The latter is based on pattern recognition and is crucial for diagnostics in healthcare: algorithms are trained to recognise various patterns seen in medical images and are used to help doctors diagnose specific conditions in their patients, such as DNA mutations in tumours, heart disease, and skin cancer.

This methodology does have limitations, however. The medical evidence that the algorithms are programmed to recognise tends to originate in highly developed regions and to reflect the subjective assumptions (or biases) of the team that built them. Furthermore, the forecasting and predictive elements of these algorithms are anchored in previous cases, and may therefore be useless when faced with new cases of treatment resistance or drug side effects. Finally, the majority of AI research conducted so far has used training data sets collected from medical facilities, with doctors given the same data set after the algorithm analyses the images, usually without any attempt to reproduce real clinical conditions.
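
To make the pattern-recognition idea concrete, here is a minimal, generic sketch of the kind of image classifier involved – a toy Keras architecture with an illustrative label, not any published or validated diagnostic model:

```python
# Minimal, generic sketch of a binary image classifier of the kind used for
# pattern recognition in medical imaging; toy architecture, not a validated
# diagnostic model, and the class label is illustrative only.
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(128, 128, 3)),     # e.g. a resized skin-lesion photo
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),   # probability of the "malignant" class
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
# model.fit(train_images, train_labels, validation_data=(val_images, val_labels))
```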

European Commission announces pilot program for AI ethics guidelines

The European Commission (EC) announced on 8 April that it would launch a pilot program to ensure that ethical guidelines for the development and use of artificial intelligence (AI) can be implemented in practice.

This is the second step in the Commission’s three-part approach to the question of ethical AI, following the development of seven key requirements or guidelines for creating “trustworthy” AI developed by the High-Level Expert Group.

These include: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The Commission added that any AI that can be considered trustworthy should also respect “all applicable law and regulations”.

Industry, research institutes and public authorities have been invited to test an assessment list drafted by the group to complement the guidelines. The 52-strong panel of independent experts was appointed by the Commission in June 2018, and is comprised of representatives from industry, academia and civil society.

According to the Commission, the third and final step in its plan will be to work on building an “international consensus” on human-centric AI as “technologies, data and algorithms know no borders”.

These plans are a component of the Commission’s overarching “AI strategy”, which aims to increase public and private investments to at least €20 billion annually over the next decade in order to make more data available, foster talent and “ensure trust”.

Members of the group will present their work in detail at the third “Digital Day” in Brussels on 9 April. Following the conclusion of the pilot phase in early 2020, they will review the assessment lists for the key requirements, building on the feedback they receive, after which the Commission plans to evaluate the outcome of the project so far and propose next steps.

Before autumn 2019, the Commission has also pledged to launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces.

“I welcome the work undertaken by our independent experts,” Vice-President for the Digital Single Market Andrus Ansip said in a statement. “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.”

For Ansip, ethical AI is a “win-win proposition” that could create a “competitive advantage for Europe” should it become “a leader of human-centric AI that people can trust”.

“Today, we are taking an important step towards ethical and secure AI in the EU,” Commissioner for Digital Economy and Society Mariya Gabriel added. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society.”

The Commission is looking to put these requirements into practice while simultaneously fostering “an international discussion on human-centric AI,” she said.

AI refers to digital systems that show intelligent, human-like behaviour. By analysing their environment they can perform various tasks with some degree of autonomy to achieve specific goals, learning from data to make predictions and deliver useful insights.

The Commission estimates that the economic impact of the automation of knowledge work, robots and autonomous vehicles on the EU will reach between €6.5 and €12 trillion annually by 2025. The body has already invested what it describes as “significant amounts” in the development of AI, cognitive systems, robotics, big data, and future and emerging technologies in a bid to make Europe more competitive in this area.

This includes around €2.6 billion on AI-related areas and €700 million on research programs studying smart robots. The Commission intends to invest further in research and innovation up to and after 2020, including €20 billion per year in combined public and private investment.

However, Europe is currently behind in private investments in AI having spent €2.4 to €3.2 billion on development in 2016, compared with the €6.5 to €9.7 billion spent in Asia and €12.1 to €18.6 billion in North America.

In a press release, the Commission acknowledged that while AI has the potential to benefit a wide range of sectors – such as healthcare, climate change, law enforcement and security, and financial risk management, among others – it brings new challenges for the future of work, and raises significant legal and ethical questions.

The Best Smartphones of 2019

If there’s one thing that we’re never short of these days, it’s new smartphones. We’re a few months into 2019 now and there have already been quite a few new product launches, necessitating an update on all those “best smartphone” lists out there. When choosing a new phone, there are a few things to look out for: cost, build quality and design, ease of use, features, performance, and value for money.

A flagship phone usually costs anywhere between £600 and £800 but can run to over £1000 in a few cases. On a contract, you’re probably looking at something between £30 and £50 per month but you can spend much more if you’re after an expensive phone and a whole load of mobile internet data.

Buying a phone outright may be the best value for money but not everyone has the liquid capital to do that.

You’ll also need to think about operating systems (the eternal Android versus iPhone debate), whether or not to buy an unlocked phone, and to ensure that you buy the right SIM card (for example, if you want 4G, you’ll need a 4G-enabled phone and SIM).

Google Pixel 3/Google Pixel 3 XL

Easily one of the highest quality phones on the market right now with the best camera ever on a smartphone. Both the regular size and the XL sport Google’s most advanced hardware to date, good battery life, wireless charging, and are waterproof.

It’s not the most exciting upgrade ever – and the previous model will satisfy most at a lower price – but it’s a great device made better through a small number of improvements at a reasonably low cost. For example, a new glass rear cover with a soft matt finish enables wireless charging and is bang on trend.

There’s still no headphone jack, unfortunately, but this seems like a small price to pay overall. And the now iconic design comes in a small range of colours, including Clearly White, Just Black and the new Not Pink (which is closer to peach in real life).

Samsung Galaxy S10 Plus

The S10 Plus’ Super AMOLED 6.4-inch display has been measured as one of the best available with great colours, dynamic range, and the best viewing experience you can ask for on a smartphone. Plus, there’s a fingerprint scanner actually embedded into the display which theoretically makes it easier to unlock your phone when it’s resting flat than when using a back-mounted fingerprint button or facial recognition.

The battery life is an improvement over the previous model thanks to a larger battery size and should easily last you through the day. It also has the option for Samsung’s new Wireless PowerShare, allowing you to wirelessly charge other devices on the rear of the handset.

With no less than three cameras, the S10 Plus offers a wide range of photographic features, shooting modes, and better overall clarity than on the previous model.

Huawei Mate 20 Pro

This is quite possibly the Chinese manufacturer’s best smartphone to date with an in-screen fingerprint scanner, three great cameras (including one wide angle and one telephoto lens), a huge 6.39-inch high-resolution OLED display, and fantastic battery life.

AI features also improve the camera experience over the previous model, and add 3D face unlock and reverse wireless charging so it can charge other phones like the Samsung Galaxy S10 Plus. Stereo speakers, waterproofing and 128GB storage seal the deal, ensuring that this is a phone that should last for at least two or three years of use.

With higher specs than most of its competitors and a few features that you won’t be able to find elsewhere, the Huawei Mate 20 Pro is a high quality phone at a high price.

The future of augmented and virtual reality in entertainment

It may not take as long as you might think for augmented reality (AR) – and virtual reality (VR) – to become an integral, everyday part of our consumption of entertainment-focused media.

The value of virtual reality has long been recognised and depicted on the small screen in the form of Star Trek’s holodeck, a fully interactive virtual environment with which you can physically interact, and Ready Player One’s OASIS, an expansive virtual reality universe that humans basically live their everyday lives inside as the planet sits on the brink of chaos and collapse.

The storytelling potential granted to both filmmakers and game designers by both AR and VR is immense, allowing them the ability to create or recreate specific environments or even whole worlds in which the audience or players can become entirely immersed.

We’ve already seen some attempts to integrate this kind of technology into gaming through Pokémon Go!’s use of augmented reality (which sometimes works and sometimes doesn’t, mostly depending on your choice of smartphone) and virtual reality headsets such as the Oculus Rift or HTC Vive, which work brilliantly for some and make others feel seasick.

The technology definitely isn’t perfect yet and there are many refinements that could – and will – be made in order to make it truly commercially viable, user friendly and good value for money. But in the meantime, here are just a few ways in which this futuristic tech could be used to level up your post-work entertainment…

AR post-Pokémon Go!

It wasn’t so long ago that no one had even really heard about augmented reality. And then Pokémon Go! was released and everything changed. The technology didn’t even really need to work right for people to start getting excited – even the mere promise of something close to virtual reality was enough to get nerdy hearts all over the world singing with excitement. What gamer wouldn’t want the chance to actually live a game in the real world, at least in some way?

Perhaps more importantly for game developers, AR works best on smartphones and/or tablets and most people in developed (and undeveloped) countries already have one or both. Traditional gaming platforms such as the Xbox or Playstation require specialised equipment and confine users to a single location, whereas AR offers freedom and mobility turning the entire world into a gaming environment. Snapchat filters use the same technology; toy giants such as Hasbro and Lego are hoping it will breathe new life into old toys; and Apple, Google, and Snapchat have all released AR platforms in recent years.

There are two possible futures for AR in gaming: developers can take a tethered approach, where users pair their smartphone with some kind of headset, or a standalone option that will be more expensive to create but far more convenient for users. Much-needed improvements to the current technology include a better field of view, increased brightness and battery life, and 3D sensing capabilities. Investment in AR is pretty steep right now and most companies are waiting for the necessary components to become more readily available – but consumer demand definitely supports jumping headfirst into development to make this technology a reality.

VR headsets

It’s looking increasingly likely that gaming will be the industry that delivers workable, consumer friendly VR technology that may become mainstream in the consumer sector even before it reaches the business world (imagine it though: virtual offices really could mean the end of the daily commute). Unlike office workers, gamers aren’t pressed for time and are willing to spend time working out how virtual environments function, particularly if the game offers full immersion in the experience.

Over the years, new innovations in gaming technology have added 360-degree views of more realistic environments and haptic feedback through controls (see the Nintendo Switch, among many others), which VR takes a step further, giving users the desirable illusion that they are actually part of the game itself.

Of course, the technology is very much still in development and there are obvious limitations to the systems currently on offer. For example, game designers are still working on creating flawless virtual worlds that properly orientate direction, adjust to gamers’ movements in real-time, and accurately understand which part of the virtual world the player is interacting with at any given moment. There’s also the aforementioned seasickness, which is caused by discrepancies between the virtual world experienced by the mind and the real world experienced by the body.

However, experts predict that despite these challenges, the technology will go mainstream within the next five years or so, and anticipate an eventual world in which players can manipulate a game on a screen with the wave of a glove equipped with motion sensors. It’s even possible that we might see VR that can be manipulated by a player who can move through the artificial world while remaining completely sedentary in the real world. Whether this is a good idea is up for debate, of course, but it’s not going to stop the industry reaching for this science-fiction level of technology.

VR and film

Some filmmakers have already begun making films specifically for virtual reality but it’s unlikely that the technology is the future of the format. Five years ago, the world-renowned Sundance Film Festival’s New Frontier program proved to be a launch pad for the VR filmmaking boom but in 2019, creators had already started branching out and incorporating a slew of other technological advancements into their films, including augmented reality, artificial intelligence, and connected devices to create more dynamic ways of storytelling.

Many of those creators are independent studios, with the larger, more mainstream and traditional studios – including Disney – just starting to dip their toes into virtual reality-based content. Disney brought its first-ever VR animated short to the festival this year, while 21st Century Fox brought a VR experience based on hit Matt Damon vehicle The Martian just three years previously. VR films have, however, started attracting the seven-figure acquisition deals from studios and cable outfits that are normally reserved for standard 2D films.

Filmmakers are just starting to look beyond the formats provided by headset manufacturers like Oculus, Samsung and HTC as they are not necessarily cost effective, comfortable, or user friendly like the traditional cinema or home DVD experience that moviegoers are used to. Innovators are now looking for new platforms or backing away from the technology completely. For filmmakers, the future of VR lies in innovation and development to find ways to adapt technology that works well in the gaming world to the film world.

Some ideas for how journalists might use 5G networks’ faster internet

Faster internet means faster information exchange, which is undeniably of value to journalists and other media producers. And from the perspective of journalists, better networks tend to produce – or distribute – different types of content.

For example, the first iPhone allowed only 2G data, which was roughly the equivalent of passing along a single manila folder with a couple of sheets of A4 inside, so publishers stuck to mostly basic webpages, especially as they were still largely serving dial-up customers on home computers.

Then 3G came along, enabling the boom in podcasts, followed by 4G, which finally made video streaming (and downloading) via mobile networks pretty much tolerable, as well as the first glimpses of pretty ropey but decent augmented reality (AR) and virtual reality (VR) apps.

You’ve probably heard that 5G is looming on the horizon (it’s about a year away from widespread mainstream use) and is expected to be around twenty times the speed of 4G, so it makes sense that news media publishers are already looking ahead and planning for what it might bring.
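
To put that “twenty times” figure in perspective, a quick back-of-the-envelope calculation – using assumed, illustrative connection speeds rather than measured ones – shows how long a large video file takes to move over each generation:

```python
# Back-of-the-envelope comparison using assumed, illustrative throughput
# figures; real-world speeds vary widely by network, device and location.
FILE_SIZE_GB = 2  # e.g. a chunk of raw field footage
speeds_mbps = {"4G (assumed)": 30, "5G (assumed, ~20x)": 600}

file_size_megabits = FILE_SIZE_GB * 8 * 1000
for network, mbps in speeds_mbps.items():
    minutes = file_size_megabits / mbps / 60
    print(f"{network}: {minutes:.1f} minutes to transfer {FILE_SIZE_GB} GB")
```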

For example, the New York Times recently said in a Medium post that it was launching a “5G Journalism Lab” to explore the kind of storytelling opportunities that the faster network speeds might enable, and has partnered with network provider Verizon for early access to the network and equipment with which to experiment.

“We believe 5G’s speed and lack of latency could spark a revolution in digital journalism in two key areas: how we gather the news and how we deliver it,” the newspaper said. “In the short term, having access to 5G will help The Times enhance our ability to capture and produce rich media in breaking news situations.

“Over time, as our readers start to use 5G devices, we will be able to further optimize the way our journalism is delivered and experienced,” it added.

Their stated plans already include better and more reliable data connections for journalists in the field streaming footage back to the newsroom in real time, and more and better immersive AR and VR experiences embedded within stories to allow readers to explore “new environments captured in 3D”. These are by no means the only opportunities that faster network speeds might provide for enterprising journalists, many of which publishers are probably unable to anticipate – see below for a few ideas for the directions this could go…

Livestreams

Reporters will be able to livestream high quality content via a newspaper’s website or on social media, maybe even becoming “livestreamers” almost by default as this is exactly the kind of content that 5G will incentivise. After all, we’re basically talking about digital-age live television news sent straight to your phone or computer. This could be a premium subscription service or even just a part of everyday news gathering and dissemination.

There are obvious flaws in this concept, notably the inherent lack of editorial decision-making, that would need to be worked out but there’s definitely an opportunity here for increased transparency in newsmaking as audiences would be able to follow along as a journalist investigates and reports an entire story from start to finish.

Re-writing the re-write

It’s unlikely that traditional media outlets will be fans of livestreaming straight to their audiences, preferring instead to serve more highly curated, edited content – but what about livestreaming straight to the newsroom?

In the past, re-writes involved multiple reporters working on a story sending feeds back to a single individual back at the office who was charged with assembling those raw components into a coherent, publishable story.

Except these feeds aren’t really raw material: the reporters out in the field have already cherry-picked which facts and stories to send back to the newsroom and may very well have left out some key piece of information that needs to be included, or just outright missed something important.

But what if reporters could just livestream the whole of their day back to the re-write reporter in real time? This would represent a truly raw feed that could be searched using computer-vision or video search technology to directly pull specific quotes or interactions word-for-word.
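
A crude sketch of what “pulling specific quotes” from such a feed might involve – assuming the video has already been run through a speech-to-text service that produces timestamped segments (the transcript format here is hypothetical, not any particular vendor’s output):

```python
# Hypothetical sketch: searching a timestamped speech-to-text transcript of a
# livestream for a phrase, so an editor can jump straight to the quote.
transcript = [
    {"start": "00:14:02", "text": "The council voted to approve the budget."},
    {"start": "01:03:45", "text": "We will not comment on the investigation."},
]

def find_quotes(segments, phrase):
    phrase = phrase.lower()
    return [(s["start"], s["text"]) for s in segments if phrase in s["text"].lower()]

for timestamp, quote in find_quotes(transcript, "investigation"):
    print(f"{timestamp}  {quote}")
```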

There would need to be some kind of oversight of the process, certainly. It all sounds a bit authoritarian and reporters simply aren’t used to constant surveillance by their employer – unlike some delivery drivers – but advances in AI and machine learning technology will make mining this kind of content for data incredibly easy over time, creating a rich archive of material.

Detaching news from reporters

If livestreams become the currency of newsmaking, does every single one of them need to be attached to a reporter? While journalists have experimented with so-called sensor journalism in the past and found that connecting a ton of small devices to a single network is just unmanageable, this shouldn’t be an issue with 5G, at least in the long term.

Instead of actively reporting on every aspect of a story, journalists would be able to set up sensors to monitor certain activity – such as traffic flow on certain sections of road, or gameplay for gaming stories – and livestream that data to their readers. Journalists could even set up livestreams in important meetings and other spaces in which important decisions are made, to capture those rare newsworthy moments as they happen – C-SPAN for the internet age, basically.
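
A minimal sketch of how one such sensor feed might push its readings to a newsroom – the ingest URL, field names and traffic counter below are all hypothetical, and a real deployment would need authentication and error handling:

```python
# Hypothetical sketch of a roadside traffic sensor posting readings to a
# newsroom ingest endpoint; the URL, fields and counter are made up.
import time

import requests

INGEST_URL = "https://example-newsroom.org/api/sensors/junction-12"  # hypothetical

def read_vehicle_count() -> int:
    return 42  # placeholder for a real sensor reading

while True:
    reading = {
        "sensor": "junction-12",
        "vehicles_per_min": read_vehicle_count(),
        "timestamp": time.time(),
    }
    requests.post(INGEST_URL, json=reading, timeout=10)
    time.sleep(60)  # one reading per minute is plenty for a traffic story
```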