The latest Tech Trends
The US Federal Communications Commission (FCC) under the Trump administration announced on 12 April plans for a spectrum auction and measures to bolster broadband in rural areas across the country as part of its 5G Fast Plan, which looks to speed up the rollout of 5G technology across the US.
The auction is scheduled to begin on 10 December and – with 3,400 megahertz in three different spectrum bands up for grabs – will be the largest slice of the airwaves that the FCC has ever auctioned for commercial use at one time. Spectrum is the airwaves that the networks use to provide internet to devices and this space is regulated by the FCC.
With tens of millions of Americans still without broadband, providers have been urging the regulator to open up mid-band airwaves that can project signals over longer distances, improving connectivity in rural areas.
The FCC also announced a series of measures to help connect these areas to faster internet, including repurposing funds from other programs to create the Rural Digital Opportunity Fund and new rules that would allow “Fixed Satellite Service operators to provide faster, more advanced services to their customers” using 50 GHz spectrum.
This program seeks to connect up to four million households and businesses to high-speed internet networks via US$20.4 billion in subsidies over 10 years, awarded to companies through an auction process, to build out broadband infrastructure in rural areas.
The 5G Fast Plan also covers initiatives such as restrictions on how much cities can charge for deployment of 5G infrastructure and a 90-day limit on the processing of applications for such development.
Meanwhile, some mobile phone carriers – including Verizon and AT&T – have begun launching their 5G networks across the US. Sprint and T-Mobile are both expected to start activating theirs over the next few months. Currently, there is one 5G-enabled phone available in the US, offered by Motorola and Verizon.
5G is the fifth generation of mobile internet and promises fast data download and upload speeds, alongside wider coverage and more stable connections. In short, it aims to make better use of the radio spectrum (hence the auction) and to enable more devices to access mobile internet simultaneously.
Users will require new, 5G-enabled phones to access the network, which experts believe may lead to the development of higher quality video, better virtual and augmented reality on your phone, advances in smart technology and the internet of things, clearer and less jerky video calls, less delay for gamers, and better coordination between drones carrying out tasks such as search and rescue missions.
The race between countries to launch the network has become highly competitive, with US President Donald Trump recently describing it as a “private sector driven and private sector led” competition that “America must win” at a launch event with the FCC’s chairman Ajit Pai for the regulator’s new initiatives.
If there’s one thing that we’re never short of these days, it’s new smartphones. We’re a few months into 2019 now, so there have been quite a few new product launches so far, necessitating an update on all those “best smartphone” lists out there. When choosing a new phone, there are a few things to look out for: cost, build quality and design, ease of use, features, performance, and value for money.
A flagship phone usually costs anywhere between £600 and £800 but can run to over £1000 in a few cases. On a contract, you’re probably looking at something between £30 and £50 per month but you can spend much more if you’re after an expensive phone and a whole load of mobile internet data. Buying a phone outright may be the best value for money but not everyone has the liquid capital to do that.
You’ll also need to think about operating systems (the eternal android versus iPhone debate), whether or not to buy an unlocked phone, and to ensure that you buy the right SIM card (for example, if you want 4G, you’ll need a 4G-enabled phone and SIM).
Google Pixel 3/Google Pixel 3 XL
Easily one of the highest quality phones on the market right now with the best camera ever on a smartphone. Both the regular size and the XL sport Google’s most advanced hardware to date, good battery life, wireless charging, and are waterproof.
It’s not the most exciting upgrade ever – and the previous model will satisfy most at a lower price – but it’s a great device made better through a small number of improvements at a reasonably low cost. For example, a new glass rear cover with a soft matt finish enables wireless charging and is bang on trend.
There’s still no headphone jack, unfortunately, but this seems like a small price to pay overall. And the now iconic design comes in a small range of colours, including Clearly White, Just Black and the new Not Pink (which is closer to peach in real life).
Samsung Galaxy S10 Plus
The S10 Plus’ Super AMOLED 6.4-inch display has been measured as one of the best available with great colours, dynamic range, and the best viewing experience you can ask for on a smartphone. Plus, there’s a fingerprint scanner actually embedded into the display which theoretically makes it easier to unlock your phone when it’s resting flat than when using a back-mounted fingerprint button or facial recognition.
The battery life is an improvement over the previous model thanks to a larger battery size and should easily last you through the day. It also has the option for Samsung’s new Wireless PowerShare, allowing you to wirelessly charge other devices on the rear of the handset.
With no fewer than three cameras, the S10 Plus offers a wide range of photographic features, shooting modes, and better overall clarity than the previous model.
Huawei Mate 20 Pro
This is quite possibly the Chinese manufacturer’s best smartphone to date with an in-screen fingerprint scanner, three great cameras (including one wide angle and one telephoto lens), a huge 6.39-inch high-resolution OLED display, and fantastic battery life.
AI features also improve the camera experience over the previous model, and add 3D face unlock and reverse wireless charging so it can charge other phones like the Samsung Galaxy S10 Plus. Stereo speakers, waterproofing and 128GB storage seal the deal, ensuring that this is a phone that should last for at least two or three years of use.
With higher specs than most of its competitors and a few features that you won’t be able to find elsewhere, the Huawei Mate 20 Pro is a high quality phone at a high price.
It may not take as long as you might think for augmented reality (AR) – and virtual reality (VR) – to become an integral, everyday part of our consumption of entertainment-focused media.
The value of virtual reality has long been recognised and depicted on the small screen in the form of Star Trek’s holodeck, a fully interactive virtual environment with which you can physically interact, and Ready Player One’s OASIS, an expansive virtual reality universe that humans basically live their everyday lives inside as the planet sits on the brink of chaos and collapse.
The storytelling potential granted to both filmmakers and game designers by both AR and VR is immense, allowing them the ability to create or recreate specific environments or even whole worlds in which the audience or players can become entirely immersed.
We’ve already seen some attempts to integrate this kind of technology into gaming through Pokémon Go!’s use of augmented reality (which sometimes works and sometimes doesn’t, mostly depending on your choice of smartphone) and virtual reality headsets such as the Oculus or HTC Vive, which work brilliantly for some and make others feel seasick.
The technology definitely isn’t perfect yet and there are many refinements that could – and will – be made in order to make it truly commercially viable, user friendly and good value for money. But in the meantime, here are just a few ways in which this futuristic tech could be used to level up your post-work entertainment…
AR post-Pokémon Go!
It wasn’t so long ago that no one had even really heard about augmented reality. And then Pokémon Go! was released and everything changed. The technology didn’t even really need to work right for people to start getting excited – even the mere promise of something close to virtual reality was enough to get nerdy hearts all over the world singing with excitement. What gamer wouldn’t want the chance to actually live a game in the real world, at least in some way?
Perhaps more importantly for game developers, AR works best on smartphones and/or tablets, and most people in developed (and developing) countries already have one or both. Traditional gaming platforms such as the Xbox or Playstation require specialised equipment and confine users to a single location, whereas AR offers freedom and mobility, turning the entire world into a gaming environment. Snapchat filters use the same technology; toy giants such as Hasbro and Lego are hoping it will breathe new life into old toys; and Apple, Google, and Snapchat have all released AR platforms in recent years.
There are two possible futures for AR in gaming: developers can take a tethered approach where users will need to pair their smartphone with some kind of headset or a standalone option that will be more expensive to create but far more convenient for users. Much needed improvements to the current technology include a better field of view, increased brightness and battery life, and 3D sensing capabilities. Investment in AR is pretty steep right now and most companies are waiting for the necessary components to become more readily available – but consumer demand definitely supports jumping headfirst into development to make this technology a reality.
It’s looking increasingly likely that gaming will be the industry that delivers workable, consumer friendly VR technology that may become mainstream in the consumer sector even before it reaches the business world (imagine it though: virtual offices really could mean the end of the daily commute). Unlike office workers, gamers aren’t pressed for time and are willing to work out how virtual environments function, particularly if the game offers full immersion in the experience.
Over the years, new innovations in gaming technology have added 360-degree views of more realistic environments and haptic feedback through controls (see the Nintendo Switch, among many others), which VR takes a step further, giving users the desirable illusion that they are actually part of the game itself.
Of course, the technology is very much still in development and there are obvious limitations to the systems currently on offer. For example, game designers are still working on creating flawless virtual worlds that properly orientate direction, adjust to gamers’ movements in real-time, and accurately understand which part of the virtual world the player is interacting with at any given moment. There’s also the aforementioned seasickness, which is caused by discrepancies between the virtual world experienced by the mind and the real world experienced by the body.
However, experts predict that despite these challenges, the technology will go mainstream within the next five years or so, and anticipate an eventual world in which players can manipulate a game on a screen with the wave of a glove equipped with motion sensors. It’s even possible that we might see VR that can be manipulated by a player who can move through the artificial world while remaining completely sedentary in the real world. Whether this is a good idea is up for debate of course but it’s not going to stop the industry reaching for this science-fiction level technology.
VR and film
Some filmmakers have already begun making films specifically for virtual reality but it’s unlikely that the technology is the future of the format. Five years ago, the world-renowned Sundance Film Festival’s New Frontier program proved to be a launch pad for the VR filmmaking boom but in 2019, creators had already started branching out and incorporating a slew of other technological advancements into their films, including augmented reality, artificial intelligence, and connected devices to create more dynamic ways of storytelling.
Many of those creators are independent studios, with the larger, more mainstream and traditional studios – including Disney – just starting to dip their toes into virtual reality-based content. Disney brought their first-ever VR animated short to the festival this year, while 21st Century Fox brought a VR experience based on hit Matt Damon vehicle The Martian three years earlier. VR films have, however, started attracting the seven-figure acquisition deals that are normally reserved for standard 2D films whose makers sell their movies to studios or cable outfits.
Filmmakers are just starting to look beyond the formats provided by headset manufacturers like Oculus, Samsung and HTC as they are not necessarily cost effective, comfortable, or user friendly like the traditional cinema or home DVD experience that moviegoers are used to. Innovators are now looking for new platforms or backing away from the technology completely. For filmmakers, the future of VR lies in innovation and development to find ways to adapt technology that works well in the gaming world to the film world.
US-based airplane manufacturer Boeing officially took responsibility for the two crashes of its new 737 Max jets on 4 April this year in an attempt to get the planes approved to fly again after they were grounded by officials in multiple countries around the world.
The company admitted that it had found two different flaws in the plane’s software – the second of which was reportedly unrelated to the crashes – that it needs to fix, which will delay the process of getting the planes back into the air.
Boeing said that it has a plan in place to replace the faulty software and eliminate the problem but regulators – such as the US’s Federal Aviation Administration (FAA) – will still need to clear the plane to fly (which raises the question of why the software flaw slipped past regulators in the first place).
The scandal was thrust into the public eye on 10 March when Ethiopian Airlines flight 302 from Addis Ababa (Ethiopia) to Nairobi (Kenya) crashed soon after take-off, killing all 157 people on board just months after a Lion Air flight of the same model crashed after taking off from Jakarta (Indonesia), killing all 189 passengers.
US President Donald Trump suggested in a tweet posted on 15 April that the company should “rebrand” the plane by changing its name after fixing the flawed software, and adding some “great additional features”. Trump has taken a keen interest in the saga, lobbying for the planes to remain in the air, and the US was one of the last countries to ground the 737 Max despite the obvious safety concerns involved.
His advice seems unlikely to be well-received as branding really isn’t Boeing’s problem here: it’s the automated software system that is believed to have been at issue in both crashes, specifically the plane’s Manoeuvring Characteristics Augmentation System (MCAS), an anti-stall system that can allegedly make it difficult for pilots to control the 737 Max without being overridden.
While a preliminary report on the Ethiopian Airlines crash did not assign blame – and it is not yet definitively known whether the MCAS or pilot error was at fault – investigators have said that the pilots were correctly following Boeing’s procedures.
“The full details of what happened in the two accidents will be issued by the government authorities in the final reports, but, with the release of the preliminary report of the Ethiopian Airlines. . . accident investigation, it’s apparent that in both flights the [MCAS] activated in response to erroneous angle of attack information,” Boeing CEO Dennis Muilenburg said in a statement.
“The history of our industry shows most accidents are caused by a chain of events. This again is the case here, and we know we can break one of those chain links in these two accidents,” he added. “As pilots have told us, erroneous activation of the MCAS function can add to what is already a high workload environment.”
It was Boeing’s “responsibility to eliminate this risk,” he said, adding: “we own it and we know how to do it.”
But the real scandal here may not be the software bug at all but the rivalry that allegedly spurred Boeing to cut corners when developing the 737 Max. With a whopping 38% of market share (in 2016), Boeing is one of the top aircraft manufacturers in the world. Its main competitor, European manufacturer Airbus, comes in a close second with a 28% share of the market, and the two companies share a fierce rivalry.
In 2010, Airbus announced an update to the A320, their most popular single-aisle aircraft which services many domestic flights in the US. The new version, dubbed the A320neo, would have a new, larger engine that was 15% more fuel efficient and the aircraft’s operation would not change enough to require pilots to undergo much retraining, saving airlines a bucketload of money.
This posed a problem for Boeing which moved to upgrade the engine on their own single aisle plane – the 737-800 – in order to compete with Airbus. However, the 737-800 didn’t have enough room for a new, larger engine as it sat too close to the ground.
The company attempted to fix this issue by moving the engine higher on the new model, which they named the 737 Max. Like Airbus, Boeing claimed that pilots would need only minimal retraining as it was allegedly almost indistinguishable from the 737-800.
The plane sold incredibly well and helped the company to compete with Airbus but the new engine placement had a side effect: the nose of the plane tended to point too far upward during take-off, which could lead to a stall.
Boeing chose not to reengineer the plane, instead installing software that would push the nose downward if it was flying at a higher angle in order to force it to behave like the original model. This was the MCAS.
As Boeing was selling the planes as virtually the same as the old model, they didn’t highlight the new system and regulators cleared the plane to fly without pilots receiving more than minimal retraining that didn’t mention the MCAS. The first sign of trouble was reports from pilots that the planes were suddenly nosing down without any warning and then, on 29 October 2018, the first crash occurred.
A recent report by non-partisan, US-based think tank the Pew Research Center has found that the vast majority of adults in emerging and developing countries own – or have access to – a mobile phone, and widely use both social media and messaging apps.
The report looked at mobile phone use by adults over 18 years of age across 11 countries, including Mexico, Venezuela and Colombia; South Africa and Kenya; India, Vietnam and the Philippines; and Tunisia, Jordan and Lebanon.
These countries were selected on the basis that they are all middle-income countries as defined by the World Bank, contain a mixture of people using different kinds of device, offer country-level diversity and variety, vary in market conditions, and in many cases have high levels of internal or external migration.
Researchers found that an average of 53% of people across the nations surveyed had access to smartphones with the capability to access the internet and run apps, including WhatsApp and Facebook, both of which notably enjoyed wide use in these countries. According to the study, an average of 64% of people across the surveyed countries used at least one of seven different social media sites or messaging apps.
Smartphone and social media use were so closely intertwined, in fact, that an average of 91% of smartphone users in these countries said they also used social media, while an average of 81% of social media users said they owned or shared a smartphone.
The report found that people in these nations believed they had been personally helped by mobile phones in many ways, such as helping them to stay in touch with distant relatives and friends, and to obtain news and information about important issues. Furthermore, a majority of adults in all 11 countries surveyed said that the internet had a good impact on education – and a majority said the same about mobile phones specifically.
A smaller percentage of adults in the surveyed countries said mobile phones and social media had been good for society, and the report found that challenges posed by digital life for children were a “notable source of concern”. It was common for parents to say that they attempted to “curtail and surveil their child’s screen time”.
Around 79% of adults in these countries said they believed that “people should be very concerned about children being exposed to harmful or immoral content when using mobile phones”, while an average of 63% of surveyed adults said mobile phones had a “bad influence on children in their country”. They also expressed mixed opinions about the impact of increased connectivity on physical health and morality.
These concerns mirror those expressed by journalists and politicians in the developed world concerning the impacts of social media on elections (e.g. alleged Russian bots on Twitter during the 2016 US Presidential election), the behaviour of children and young adults, and the spread of far-right conspiracy theories on social media, among other worries.
Some of the issues listed in the survey spanned all the countries included in the survey, although some issues were “nation-specific”, such as addiction to mobile phones. Over half of mobile phone users in five of the countries described their phone as “something they couldn’t live without”, whereas users in the other six countries were more likely to describe it as “something they don’t always need”.
Would you trust a computer to correctly diagnose a health problem? Most of us would probably prefer to leave it in the hands of our highly trained general practitioner, emergency room doctor or surgeon. The narrative concerning the intersection between artificial intelligence (AI) and medicine is often grossly distorted towards one extreme or another: either the robots are coming to kill us and steal our jobs or they herald some new utopian era and represent the only possible source of future prosperity for the human race. Reality – as in most instances – is far more nuanced and probably lies somewhere in between these two extremes.
We’re a long way from developing Star Trek-esque androids that can perfectly simulate human behaviour and supplant your current, fully human doctor. However, there are a few ways in which AI has already begun to supplement your friendly neighbourhood doctor’s practice and a few more in the pipeline…
Consider the humble Fitbit. We’re not entirely sure that they track our steps or measure our heartbeat correctly all of the time, but they’re increasingly popular and there is evidence that they do work. They monitor our fitness levels, warn us when we need to get more exercise and can also record abnormalities such as heart palpitations, potentially saving lives.
The information they record can be shared with healthcare professionals and AI systems to be analysed, giving doctors a more accurate picture of the habits and needs of their patient, especially when supplemented with medical histories and other useful patient information. This allows doctors to more carefully and accurately tailor treatments, rendering them increasingly more effective.
However, critics are concerned that this information could also be used by companies to discriminate against their employees should the data be used unethically. Experts have also voiced concerns about invasion of privacy if the data collected and stored by manufacturers of fitness trackers is either hacked or sold.
Healthcare professionals have already begun to use machine learning-based applications, support vector machines and optical character recognition programs such as MATLAB’s handwriting recognition technology and Google’s Cloud Vision API to assist in the process of digitising healthcare information. This helps to speed up diagnosis and treatment times as healthcare professionals are able to more quickly access complete sets of records on their patients.
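As a rough illustration of how machine learning-based handwriting recognition works in principle – this is a generic sketch, not the MATLAB or Google Cloud Vision pipelines mentioned above – a support vector machine can be trained to map small images of handwritten characters to their digital labels. Here we use scikit-learn’s bundled digits dataset as a stand-in for scanned records:

```python
# Minimal sketch of ML-based character recognition: a support vector
# machine learns to classify 8x8 grayscale images of handwritten digits.
# Illustrative only -- not a real healthcare records digitisation system.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

digits = load_digits()  # 1,797 labelled images of handwritten digits 0-9
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0
)

# Fit a support vector machine, one of the classifier families named above
clf = SVC(gamma=0.001)
clf.fit(X_train, y_train)

accuracy = clf.score(X_test, y_test)  # fraction correct on unseen images
print(f"Held-out accuracy: {accuracy:.2f}")
```

Real OCR systems add image preprocessing, segmentation of full pages into characters, and language models on top, but the core pattern-recognition step looks much like this.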
The Massachusetts Institute of Technology (MIT) Clinical Machine Learning Group is leading the pack in developing the next generation of intelligent electronic healthcare records by developing applications with built-in AI – specifically machine learning capabilities – that can help with the diagnostic process. In theory, this will allow healthcare professionals to quickly make clinical decisions and create individual treatment plans tailored to their patients.
According to MIT, there is an ever growing need for “robust machine learning [that is] safe, interpretable, can learn from little labelled training data, understand natural language, and generalize well across medical settings and institutions”.
The term “AI” is somewhat misleading as it implies something more than the technology that we currently use it to describe. We don’t literally mean artificial intelligence – no true AI has been invented yet – but advanced algorithms that run on ever more powerful computers and can recognise patterns, pick information out of complex texts or even derive the meaning of an entire document from just a few sentences. This is known as artificial narrow intelligence (ANI) and comes nowhere close to artificial general intelligence (AGI) – aka the next step in developing a fully conscious AI or “superintelligence” – that can abstract concepts from limited experience and transfer knowledge from one place to another.
However, natural language processing and computer vision – the two main applications for ANI – are developing phenomenally quickly, the latter of which is based on pattern recognition and crucial for diagnostics in healthcare. Algorithms are trained to recognise various patterns seen in medical images and used to help doctors diagnose specific conditions in their patients, such as DNA mutations in tumours, heart disease, and skin cancer. This methodology does have limitations, however, as the medical evidence that the algorithms are programmed to recognise tends to originate in highly developed regions and reflects the subjective assumptions (or biases) of the working team. Furthermore, the forecasting and predictive elements of these algorithms are anchored in previous cases, and may therefore be useless in new cases of treatment resistance or drug side effects. Finally, the majority of AI research conducted so far has used training data sets collected from medical facilities, with doctors given the same data after the algorithm analyses the images, usually without any attempt to reproduce real clinical conditions.
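To make the pattern-recognition idea concrete, here is a toy diagnostic classifier trained on scikit-learn’s bundled breast-cancer dataset. The dataset and model choice are ours, purely for illustration – nothing like a clinically validated system – but it shows the basic recipe: learn patterns from labelled cases, then predict labels for unseen ones:

```python
# Toy diagnostic pattern recognition: a logistic regression model learns
# from labelled tumour measurements and predicts benign vs malignant on
# unseen cases. Illustrative only -- real clinical models require far
# more rigorous validation than a single held-out test split.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

data = load_breast_cancer()  # 569 cases, 30 numeric features each
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, test_size=0.3, random_state=42
)

# Standardise features, then fit a simple linear classifier
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

accuracy = model.score(X_test, y_test)
print(f"Held-out accuracy: {accuracy:.2f}")
```

Note that the limitations described above apply directly here: the model can only reproduce patterns present in its training data, so cases unlike those it was trained on will be classified unreliably.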
The European Commission (EC) announced on 8 April that it would launch a pilot program to ensure that ethical guidelines for the development and use of artificial intelligence (AI) can be implemented in practice.
This is the second step in the Commission’s three-part approach to the question of ethical AI, following the development of seven key requirements or guidelines for creating “trustworthy” AI developed by the High-Level Expert Group.
These include: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The Commission added that any AI that can be considered trustworthy should also respect “all applicable law and regulations”.
Industry, research institutes and public authorities have been invited to test an assessment list drafted by the group to complement the guidelines. The 52-strong panel of independent experts was appointed by the Commission in June 2018, and is comprised of representatives from industry, academia and civil society.
According to the Commission, the third and final step in its plan will be to work on building an “international consensus” on human-centric AI as “technologies, data and algorithms know no borders”.
These plans are a component of the Commission’s overarching “AI strategy”, which aims to increase public and private investments to at least €20 billion annually over the next decade in order to make more data available, foster talent and “ensure trust”.
Members of the group will present their work in detail at the third “Digital Day” in Brussels on 9 April. Following the conclusion of the pilot phase in early 2020, they will review the assessment lists for the key requirements, building on the feedback they receive, after which the Commission plans to evaluate the outcome of the project so far and propose next steps.
The Commission has also pledged, before autumn 2019, to launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making best use of common data spaces.
“I welcome the work undertaken by our independent experts,” Vice-President for the Digital Single Market Andrus Ansip said in a statement. “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.”
For Ansip, ethical AI is a “win-win proposition” that could create a “competitive advantage for Europe” should it become “a leader of human-centric AI that people can trust”.
“Today, we are taking an important step towards ethical and secure AI in the EU,” Commissioner for Digital Economy and Society Mariya Gabriel added. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society.”
The Commission is looking to put these requirements into practice while simultaneously fostering “an international discussion on human-centric AI,” she said.
AI refers to digital systems that show intelligent, human-like behaviour. By analysing their environment they can perform various tasks with some degree of autonomy to achieve specific goals, learning from data to make predictions and deliver useful insights.
The Commission estimates that the economic impact of the automation of knowledge work, robots and autonomous vehicles on the EU will reach between €6.5 and €12 trillion annually by 2025. The body has already invested what it describes as “significant amounts” in the development of AI, cognitive systems, robotics, big data, and future and emerging technologies in a bid to make Europe more competitive in this area.
This includes around €2.6 billion on AI-related areas and €700 million on research programs studying smart robots. The Commission intends to invest further in research and innovation up to and after 2020, including €20 billion per year in combined public and private investment.
However, Europe is currently behind in private investments in AI having spent €2.4 to €3.2 billion on development in 2016, compared with the €6.5 to €9.7 billion spent in Asia and €12.1 to €18.6 billion in North America.
In a press release, the Commission acknowledged that while AI has the potential to benefit a wide range of sectors – such as healthcare, climate change, law enforcement and security, and financial risk management, among others – it brings new challenges for the future of work, and raises significant legal and ethical questions.