
Groupon acquires Presence AI


Chicago-based global e-commerce marketplace Groupon announced on 8 August that it had acquired Presence AI, a start-up building an AI-powered text and voice communications platform that automates business-to-customer (B2C) calls and messaging. Terms of the transaction were not disclosed.

According to a survey conducted by Bizrate Insights, consumers – especially millennials – vastly prefer messaging and chat-based communications over phone calls. Presence AI says it aims to enable merchants to respond to this trend by “offering a 24/7 business assistant that integrates with a merchant’s existing scheduling software”.

It will “accept and manage bookings, provide instant answers to customer questions, remind people when it’s time to re-book and much more”. The company already has some integrations with “popular booking software providers”.

Amazon’s Alexa Fund is one of the investors in Presence AI, which has so far raised US$20,000 in seed funding. The company was founded in 2015 in San Francisco and operates in the health, beauty and wellness space, one of Groupon’s largest categories. In 2018, it participated in the Alexa Accelerator, which “supports early-stage start-ups using voice to deliver transformative customer and business experiences”.

As Groupon starts trying to move towards “universal bookability” for certain services, it hopes Presence AI’s technology will provide merchants with the capabilities to support this “booking vision”.

“We’re pleased to welcome the Presence AI team and their booking technology to Groupon,” Groupon Chief Product Officer Sarah Butterfass said in a statement. “Booking is a key part of our voucher-less initiative aimed at improving the redemption experience, providing always-on availability . . . opening up our marketplace to a broader range of merchants.”

“Presence AI’s technology is very complementary to what we’ve been building into our existing booking experience and will accelerate our roadmap with its text- and chat-based interface,” she added.

“We’re very excited to join Groupon and continue transforming client conversations through the use of artificial intelligence,” Presence AI co-founder and CEO Michel Meyer said. “With more than 3 million text messages generated last year, Presence AI is saving merchants time and generating additional revenues. We can’t wait to bring our technology to more businesses.”

Twitter acquires deep-learning start-up Fabula AI


Social media giant Twitter announced on 3 June that it had acquired London-based deep learning start-up Fabula AI in an attempt to boost its machine learning expertise. The team will feed into an internal research group led by Sandeep Pandey, the company’s senior director of engineering.

The research group’s stated aim is to “continually advance the state of machine learning, inside and outside Twitter”, focusing on “a few key strategic areas such as natural language processing, reinforcement learning, [machine learning] ethics, recommendation systems, and graph deep learning”.

Fabula AI’s researchers specialise in using graph deep learning to detect network manipulation. By applying machine learning techniques to network-structured data, they can analyse very large and complex datasets describing relations and interactions, and extract signals in ways that traditional machine learning techniques cannot.
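Fabula has not published its production models, but the core idea can be illustrated with a single graph-convolution step, in which each account’s features are mixed with those of its neighbours before a learned transform is applied. The sketch below is a minimal NumPy illustration under assumed toy data (four accounts, random features and weights); it is not Fabula AI’s or Twitter’s actual code.

```python
import numpy as np

# Minimal sketch of one graph-convolution step: node features are averaged
# over graph neighbourhoods via a normalised adjacency matrix, then passed
# through a learned linear transform and a non-linearity.

def graph_conv(features, adjacency, weights):
    """One GCN-style layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W)."""
    a_hat = adjacency + np.eye(adjacency.shape[0])        # add self-loops
    d_inv_sqrt = np.diag(1.0 / np.sqrt(a_hat.sum(axis=1)))
    norm_adj = d_inv_sqrt @ a_hat @ d_inv_sqrt            # symmetric normalisation
    return np.maximum(norm_adj @ features @ weights, 0)   # propagate + ReLU

# Toy graph: 4 accounts with 3 features each (e.g. posting statistics),
# connected by hypothetical retweet/follow edges.
rng = np.random.default_rng(0)
features = rng.normal(size=(4, 3))
adjacency = np.array([[0, 1, 0, 0],
                      [1, 0, 1, 1],
                      [0, 1, 0, 1],
                      [0, 1, 1, 0]], dtype=float)
weights = rng.normal(size=(3, 2))   # learned by backpropagation in practice
print(graph_conv(features, adjacency, weights))
```

Stacking several such layers lets information propagate across multi-hop neighbourhoods, which is what makes this family of models suited to relational data such as retweet and follow graphs.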

Twitter described the acquisition as a “strategic investment” and a “key driver” as the company works to “help people feel safe on Twitter and help them see relevant information”. Financial terms of the deal were not disclosed.

“Specifically, by studying and understanding the Twitter graph, comprised of the millions of Tweets, Retweets and Likes shared on Twitter every day, we will be able to improve the health of the conversation, as well as products including the timeline, recommendations, the explore tab and the onboarding experience,” the social network said.

Fabula was founded by Michael Bronstein, Damon Mannion, Federico Monti and Ernesto Schmitt. It is led today by Bronstein – who serves as chief scientist – and Monti, now the company’s chief technologist; the two began collaborating while at the University of Lugano, Switzerland.

“We are really excited to join the ML research team at Twitter, and work together to grow their team and capabilities,” Bronstein said in a post on Twitter’s blog. “Specifically, we are looking forward to applying our graph deep learning techniques to improving the health of the conversation across the service.”

Bronstein is currently the Chair in Machine Learning & Pattern Recognition at Imperial College, and will remain in that position while leading graph deep learning research at Twitter. He will be joined by long-time collaborators from academia (including current or former students) who research advances in geometric deep learning.

Twitter – along with other social media platforms and internet search engines – has recently come under fire from the media, academics and politicians for its perceived failure to properly deal with abuse and hate on its platform. It has previously been criticised for failing to take action against accounts that spread hate speech and still lacks a clear policy for dealing with white supremacist accounts.

Samsung Electronics expands AI lab in Canada

Samsung Electronics announced on 2 May the expansion of its Samsung Advanced Institute of Technology (SAIT) AI Lab Montreal in Canada. The lab will help the company to “strengthen its fundamentals in AI research and drive competitiveness in system semiconductors”.

The AI Lab is located at Mila – the Montreal Institute for Learning Algorithms – a deep learning research centre founded by Professor Yoshua Bengio at the University of Montreal. SAIT AI Lab Montreal has an open workspace designed to allow it to work closely with the AI research community at Mila.

The lab will focus on unsupervised learning and Generative Adversarial Networks (GANs) research to develop “disruptive innovation and breakthrough technologies, including new deep learning algorithms and next generation of on-device AI”.
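For readers unfamiliar with GANs, the following minimal PyTorch sketch shows the adversarial training idea: a generator maps noise to samples while a discriminator learns to tell them apart from real data, and each is optimised against the other. The networks, toy data and hyperparameters are assumptions for illustration only, not SAIT’s research code.

```python
import torch
import torch.nn as nn

# Toy GAN: a generator maps 16-dimensional noise to 2-D samples, and a
# discriminator outputs a real/fake logit. Both are trained adversarially.
G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

real_data = torch.randn(256, 2) * 0.5 + 2.0   # assumed "real" distribution

for step in range(500):
    # Discriminator step: real samples labelled 1, generated samples 0.
    fake = G(torch.randn(64, 16)).detach()
    real = real_data[torch.randint(0, 256, (64,))]
    loss_d = bce(D(real), torch.ones(64, 1)) + bce(D(fake), torch.zeros(64, 1))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()

    # Generator step: try to make the discriminator label fakes as real.
    loss_g = bce(D(G(torch.randn(64, 16))), torch.ones(64, 1))
    opt_g.zero_grad()
    loss_g.backward()
    opt_g.step()
```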

To drive the effort, the AI Lab has “actively recruited leaders in deep learning research”, including Simon Lacoste-Julien, Professor at the University of Montreal, who recently joined as the leader of the lab.


In addition, Samsung is planning to dispatch R&D personnel from its Device Solutions Business to Montreal over time, and to use the AI Lab as a base for training AI researchers and for collaboration with other advanced AI research institutes.

Bengio is one of the world’s leading experts in deep learning, machine learning and AI. SAIT has collaborated with him on deep learning algorithm research since 2014, publishing three papers in academic journals.

SAIT has also actively pursued research collaborations with other top authorities in the field, including Yann LeCun, Professor at New York University, and Richard Zemel, Professor at the University of Toronto. Bengio and LeCun, along with computer scientist Geoffrey Everest Hinton, won the 2018 Turing Award, which is known as the “Nobel Prize in computer science”.

“Samsung’s collaboration with Mila is well established already and has been productive and built strong trust on both sides,” Professor Bengio said in a statement. “With a new SAIT lab in the midst of the recently inaugurated Mila building and many exciting research challenges ahead of us in AI, I expect even more mutually positive outcomes in the future.”

“SAIT focuses on research and development – not only in next generation semiconductor but also innovative AI as a seed technology in system semiconductors,” added Sungwoo Hwang, Executive Vice President and Deputy Head of SAIT. “SAIT AI Lab Montreal will play a key role within Samsung to redefine AI theory and deep learning algorithm for the next 10 years.”

Artificial intelligence technologies for smart healthcare

Would you trust a computer to correctly diagnose a health problem? Most of us would probably prefer to leave it in the hands of our highly trained general practitioner, emergency room doctor or surgeon. The narrative around the intersection of artificial intelligence (AI) and healthcare is often grossly distorted towards one extreme or the other: either the robots are coming to kill us and steal our jobs, or they herald some new utopian era and represent the only possible source of future prosperity for the human race. Reality – as in most instances – is far more nuanced and probably lies somewhere between these two extremes.

We’re a long way from developing Star Trek-esque androids that can perfectly simulate human behaviour and supplant your current, fully human doctor. However, there are a few ways in which AI has already begun to supplement your friendly neighbourhood doctor’s practice and a few more in the pipeline…

Wearables

Consider the humble Fitbit. We’re not entirely sure that it tracks our steps correctly all of the time or gets our heart rate exactly right, but fitness trackers are increasingly popular and there is evidence that they do work. They monitor our fitness levels, warn us when we need to get more exercise and can also record abnormalities such as heart palpitations, potentially saving lives.

The information they record can be shared with healthcare professionals and AI systems for analysis, giving doctors a more accurate picture of their patients’ habits and needs, especially when supplemented with medical histories and other useful patient information. This allows doctors to tailor treatments more carefully and accurately, making them more effective.
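As a rough, hypothetical illustration of the kind of analysis described above, the sketch below flags heart-rate readings that deviate sharply from the wearer’s recent baseline using a rolling mean and standard deviation. The window and threshold values are arbitrary assumptions, and this is not any manufacturer’s actual algorithm.

```python
import numpy as np

# Flag heart-rate samples that deviate sharply from the wearer's recent
# baseline. Window and threshold are arbitrary, illustrative choices.

def flag_anomalies(heart_rate_bpm, window=30, threshold=3.0):
    """Return indices whose reading lies more than `threshold` standard
    deviations from the rolling mean of the previous `window` samples."""
    flagged = []
    for i in range(window, len(heart_rate_bpm)):
        baseline = heart_rate_bpm[i - window:i]
        mean, std = np.mean(baseline), np.std(baseline) + 1e-6
        if abs(heart_rate_bpm[i] - mean) / std > threshold:
            flagged.append(i)
    return flagged

# Hypothetical trace: a resting heart rate with one sudden spike.
trace = np.concatenate([np.random.normal(65, 2, 120), [120.0],
                        np.random.normal(65, 2, 30)])
print(flag_anomalies(trace))   # expected to flag the spike at index 120
```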

However, critics are concerned that this information could also be used by companies to discriminate against their employees should the data be used unethically. Experts have also voiced concerns about invasion of privacy if the data collected and stored by manufacturers of fitness trackers is either hacked or sold.

Machine learning

Healthcare professionals have already begun to use machine learning-based applications, support vector machines and optical character recognition programs such as MATLAB’s handwriting recognition technology and Google’s Cloud Vision API to assist in the process of digitising healthcare information. This helps to speed up diagnosis and treatment times as healthcare professionals are able to more quickly access complete sets of records on their patients.
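As an illustration of the OCR step, a scanned record can be sent to Google’s Cloud Vision API for document text detection via the official Python client, roughly as below. The file name is hypothetical and the snippet assumes Google Cloud credentials are already configured in the environment.

```python
from google.cloud import vision

# Assumes GOOGLE_APPLICATION_CREDENTIALS is set and the google-cloud-vision
# package is installed; "scanned_note.png" is a hypothetical file name.
client = vision.ImageAnnotatorClient()

with open("scanned_note.png", "rb") as f:
    content = f.read()

response = client.document_text_detection(image=vision.Image(content=content))
print(response.full_text_annotation.text)   # the recognised text of the document
```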

The Massachusetts Institute of Technology (MIT) Clinical Machine Learning Group is leading the pack in developing the next generation of intelligent electronic healthcare records by developing applications with built-in AI – specifically machine learning capabilities – that can help with the diagnostic process. In theory, this will allow healthcare professionals to quickly make clinical decisions and create individual treatment plans tailored to their patients.

According to MIT, there is an ever growing need for “robust machine learning [that is] safe, interpretable, can learn from little labelled training data, understand natural language, and generalize well across medical settings and institutions”.
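To make the idea concrete, the sketch below trains a simple logistic-regression classifier on structured record features with scikit-learn. The features, data and outcome are entirely synthetic assumptions used for illustration; it is not MIT’s system, and a real clinical model would need far more careful validation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Entirely synthetic "record" features (age, systolic blood pressure, HbA1c)
# and a synthetic binary outcome, used only to illustrate the workflow.
rng = np.random.default_rng(42)
n = 500
X = np.column_stack([
    rng.normal(60, 15, n),    # age (years)
    rng.normal(120, 20, n),   # systolic blood pressure (mmHg)
    rng.normal(5.5, 1.0, n),  # HbA1c (%)
])
y = (0.03 * X[:, 0] + 0.02 * X[:, 1] + 0.8 * X[:, 2]
     + rng.normal(0, 2, n) > 10).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```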

Smart algorithms

The term “AI” is somewhat misleading as it implies something more than the technology that we currently use it to describe. We don’t literally mean artificial intelligence – no true AI has been invented yet – but advanced algorithms that run on ever more powerful computers and can recognise patterns, pick information out of complex texts or even derive the meaning of an entire document from just a few sentences. This is known as artificial narrow intelligence (ANI) and comes nowhere close to artificial general intelligence (AGI) – aka the next step in developing a fully conscious AI or “superintelligence” – that can abstract concepts from limited experience and transfer knowledge from one place to another.
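A toy example of this kind of narrow pattern recognition over text: score the sentences of a document by their TF-IDF weight and keep the highest-scoring ones as a crude extractive summary. The sentences below are invented for illustration, and real systems use far more sophisticated language models.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer

# Invented sentences standing in for a longer clinical note.
document = [
    "The patient reported chest pain and shortness of breath.",
    "An ECG was performed on arrival.",
    "Troponin levels were elevated, consistent with myocardial injury.",
    "The weather that day was unremarkable.",
]

vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(document)
scores = np.asarray(tfidf.sum(axis=1)).ravel()   # one score per sentence
top = np.argsort(scores)[::-1][:2]               # two highest-scoring sentences
print([document[i] for i in sorted(top)])        # crude extractive "summary"
```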

However, natural language processing and computer vision – the two main applications of ANI – are developing phenomenally quickly. The latter is based on pattern recognition and is crucial for diagnostics in healthcare. Algorithms are trained to recognise various patterns in medical images and are used to help doctors diagnose specific conditions in their patients, such as DNA mutations in tumours, heart disease and skin cancer. This methodology does have limitations, however: the medical evidence that the algorithms are programmed to recognise tends to originate in highly developed regions and reflects the subjective assumptions (or biases) of the team behind them. Furthermore, the forecasting and predictive elements of these algorithms are anchored in previous cases, and may therefore be of little use in new cases involving treatment resistance or drug side effects. Finally, the majority of AI research conducted so far has relied on training datasets collected from medical facilities, with doctors given the same dataset after the algorithm has analysed the images, usually without any attempt to reproduce real clinical conditions.
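As a sketch of how such image-based diagnostic models are commonly built, the example below fine-tunes an ImageNet-pretrained convolutional network to separate two assumed classes of lesion images. The dataset path and folder layout are hypothetical, and nothing here is a clinically validated system.

```python
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# "lesions/train" is a hypothetical folder with one sub-directory per class
# (e.g. benign/ and malignant/); the pretrained weights are downloaded from
# torchvision on first use.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])
dataset = datasets.ImageFolder("lesions/train", transform=preprocess)
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, 2)   # new head: benign vs malignant
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:                   # one fine-tuning pass
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```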

European Commission announces pilot program for AI ethics guidelines

The European Commission (EC) announced on 8 April that it would launch a pilot program to ensure that ethical guidelines for the development and use of artificial intelligence (AI) can be implemented in practice.

This is the second step in the Commission’s three-part approach to the question of ethical AI, following the development by the High-Level Expert Group of seven key requirements for creating “trustworthy” AI.

These include: human agency and oversight; robustness and safety; privacy and data governance; transparency; diversity, non-discrimination and fairness; societal and environmental well-being; and accountability. The Commission added that any AI that can be considered trustworthy should also respect “all applicable law and regulations”.

Industry, research institutes and public authorities have been invited to test an assessment list drafted by the group to complement the guidelines. The 52-strong panel of independent experts was appointed by the Commission in June 2018 and comprises representatives from industry, academia and civil society.

According to the Commission, the third and final step in its plan will be to work on building an “international consensus” on human-centric AI as “technologies, data and algorithms know no borders”.

These plans are a component of the Commission’s overarching “AI strategy”, which aims to increase public and private investments to at least €20 billion annually over the next decade in order to make more data available, foster talent and “ensure trust”.

Members of the group will present their work in detail at the third “Digital Day” in Brussels on 9 April. Following the conclusion of the pilot phase in early 2020, they will review the assessment lists for the key requirements, building on the feedback they receive, after which the Commission plans to evaluate the outcome of the project so far and propose next steps.

Before autumn 2019, the Commission has also pledged to launch a set of networks of AI research excellence centres; begin setting up networks of digital innovation hubs; and, together with Member States and stakeholders, start discussions to develop and implement a model for data sharing and making the best use of common data spaces.

“I welcome the work undertaken by our independent experts,” Vice-President for the Digital Single Market Andrus Ansip said in a statement. “The ethical dimension of AI is not a luxury feature or an add-on. It is only with trust that our society can fully benefit from technologies.”

For Ansip, ethical AI is a “win-win proposition” that could create a “competitive advantage for Europe” should it become “a leader of human-centric AI that people can trust”.

“Today, we are taking an important step towards ethical and secure AI in the EU,” Commissioner for Digital Economy and Society Mariya Gabriel added. “We now have a solid foundation based on EU values and following an extensive and constructive engagement from many stakeholders including businesses, academia and civil society.”

The Commission is looking to put these requirements into practice while simultaneously fostering “an international discussion on human-centric AI,” she said.

AI refers to digital systems that show intelligent, human-like behaviour. By analysing their environment they can perform various tasks with some degree of autonomy to achieve specific goals, learning from data to make predictions and deliver useful insights.

The Commission estimates that the economic impact of the automation of knowledge work, robots and autonomous vehicles on the EU will reach between €6.5 and €12 trillion annually by 2025. The body has already invested what it describes as “significant amounts” in the development of AI, cognitive systems, robotics, big data, and future and emerging technologies in a bid to make Europe more competitive in this area.

This includes around €2.6 billion on AI-related areas and €700 million on research programs studying smart robots. The Commission intends to invest further in research and innovation up to and after 2020, including €20 billion per year in combined public and private investment.

However, Europe is currently behind in private investment in AI, having spent €2.4 to €3.2 billion on development in 2016, compared with the €6.5 to €9.7 billion spent in Asia and €12.1 to €18.6 billion in North America.

In a press release, the Commission acknowledged that while AI has the potential to benefit a wide range of sectors – such as healthcare, climate change, law enforcement and security, and financial risk management, among others – it brings new challenges for the future of work, and raises significant legal and ethical questions.