Computational Linguist Leen Sevens Talks To Durtti About Her Groundbreaking Pictograph Translation Technology

Leen Sevens is continually pushing boundaries in her research at the Centre for Computational Linguistics (KU Leuven). Her pioneering work stands to transform the lives of people with cognitive learning difficulties, as well as those of the rapidly growing number of migrants who must quickly learn a new language. Durtti wants to understand some of the development challenges Leen has faced to date and, more importantly, how she has overcome them.

As part of your PhD, you are creating a user interface that facilitates the construction of text for those with intellectual disabilities. What have you found to be the biggest obstacle in designing an effective interface, Leen?

The user interface allows people with Intellectual Disabilities (ID) to construct short messages using a combination of written text and Sclera or Beta pictographs, which can be sent as natural language text to family and friends on social media websites.

With several thousand Sclera and Beta pictographs available, it was a challenging task to create an interface that contains enough pictographs to cover different areas of life without overwhelming our end users.

For that reason, we used a very large corpus of text written by people with intellectual disabilities to extract popular topics and information on word frequency.

On top of that, we developed a pictograph prediction tool, which uses linguistic information to suggest a number of contextually and semantically relevant pictographs to the user.
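To make the idea concrete, here is a minimal Python sketch, with a hypothetical corpus and invented names rather than the project's actual code: word frequencies decide which pictographs enter the default set, and simple bigram counts suggest contextually likely next pictographs.

```python
from collections import Counter

# Hypothetical corpus of (tokenised) messages written by end users.
corpus = [
    ["i", "drink", "coffee"],
    ["i", "eat", "cake"],
    ["i", "drink", "tea"],
]

# 1. Word frequencies decide which pictographs enter the default set.
freq = Counter(word for msg in corpus for word in msg)
default_vocab = {word for word, _ in freq.most_common(500)}

# 2. Bigram counts give a crude, context-sensitive prediction:
#    given the last selected pictograph, rank likely successors.
bigrams = Counter(
    (msg[i], msg[i + 1]) for msg in corpus for i in range(len(msg) - 1)
)

def suggest(previous, k=3):
    """Suggest the k pictographs most often seen after `previous`."""
    followers = Counter(
        {nxt: n for (prev, nxt), n in bigrams.items() if prev == previous}
    )
    return [word for word, _ in followers.most_common(k)]

print(suggest("drink"))  # -> ['coffee', 'tea']
```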

Furthermore, we learned that “one size fits all” does not work when it comes to different types of disabilities, with limitations ranging from low literacy skills to motor disabilities. Some people need a smaller set of pictographs to begin with, so it was an obvious decision to make the interface fully customisable in order for it to become truly useful.

Caregivers or parents can disable or enable pictograph categories in the interface, allowing the users to gradually familiarise themselves with the pictographs.

Hyperpersonalisation has proven to be essential in this project.

‘Pictograph to text’ or ‘text to pictograph’. Which of the two have you discovered is harder to ‘teach’ a computer to do and why?

Pictograph to text is definitely the more challenging one. The main reason is that textual messages contain a number of specificities that are not expressed in pictographic messages. For instance, the Sclera and Beta pictographs have a number of underspecified linguistic features: no distinction is made between singular and plural, verbs are not inflected, articles are missing, and so on.

In other words, pictograph to text translation requires the system to determine these hidden features to convey an appropriate or, at least, the most probable meaning of the pictograph sequence and generate a syntactically correct message.

For now, we are using language models that are trained on very large corpora of natural language text.
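As a rough illustration of how such scoring can work (invented probabilities, not the project's actual pipeline), the sketch below enumerates candidate realisations of the pictograph sequence I + EAT + APPLE and lets a toy bigram language model pick the most probable one.

```python
# Invented bigram log-probabilities standing in for a language model
# trained on a very large corpus of natural language text.
LOGPROB = {
    ("i", "eat"): -0.5, ("i", "eats"): -4.0,
    ("eat", "an"): -0.7, ("eats", "an"): -0.9,
    ("an", "apple"): -0.3, ("eat", "apple"): -3.5,
}

def score(sentence):
    """Sum of bigram log-probabilities; unseen bigrams get a penalty."""
    return sum(
        LOGPROB.get((a, b), -10.0) for a, b in zip(sentence, sentence[1:])
    )

# The pictograph sequence I + EAT + APPLE leaves inflection and
# articles unspecified, so we enumerate candidate realisations.
candidates = [
    ["i", "eat", "an", "apple"],
    ["i", "eats", "an", "apple"],
    ["i", "eat", "apple"],
]

best = max(candidates, key=score)
print(" ".join(best))  # -> "i eat an apple"
```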

We are also looking into the added value of Recurrent Neural Networks. This is ongoing research, but the first results look promising.

You are also an accomplished children’s illustrator. If you were asked to teach art to a class of children with intellectual disabilities for one week, what sort of things would you ask them to draw or paint and why?

I think it would be great to make a children’s book together, using all sorts of different materials and techniques, including paper crafts and scrapbooking.

In the light of the “Toy Like Me” campaign, I would like to write a story about children with disabilities, sending out a powerful message that everyone should be included and celebrated.

Disability should not be left out of the toy box, because positive representation matters – including in children’s books!

The children in my art class would work together to make this empowering story come to life through drawings, paper dolls, and pictures.

We would sell the book afterwards and raise money for their schools or disability organisations. (Actually, this sounds like a wonderful idea. Thanks for making me think about this!)

Tell us about the ‘Able to Include’ project that you are also involved in developing.

The Able to Include project is a European Commission-funded project that started in 2014 and ended in March 2017. It created a technical solution that allows people with intellectual disabilities to use applications that may contribute to living a more fulfilling, independent life.

Three types of pilot studies were carried out during the project: leisure within the information society (use of social media websites), mobility (independent travelling), and labour integration (use of email).

Through the interaction with real users, these pilots allowed for continuous adjustment of all the tools and resources involved.

Able to Include presents three Natural Language Processing tools that can help people with intellectual disabilities interact with the Information Society: a text and content simplifier, text-to-speech technologies, and pictograph translation technologies for Dutch, English, and Spanish.

The three technologies are clustered in the Accessibility Layer, an open-source service that can be activated in combination with existing social media services, such as Facebook, or for which brand-new apps or web services can be created.
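As a rough sketch of what such a layer looks like from a developer's point of view (the function names and interface below are hypothetical guesses, not the project's actual API; see the GitHub repository for that), three placeholder services sit behind a single entry point:

```python
from dataclasses import dataclass

# Hypothetical placeholders for the three services; the real
# implementations live behind the Able to Include Accessibility Layer.
def simplify(text, lang):
    return text            # a real simplifier would rewrite the text

def text_to_speech(text, lang):
    return b""             # a real engine would return audio data

def text_to_pictographs(text, lang):
    return text.split()    # real output would be pictograph IDs

@dataclass
class AccessibilityLayer:
    """A single entry point that an app or social media plug-in calls."""
    lang: str = "nl"       # Dutch, English, and Spanish are supported

    def process(self, text):
        simple = simplify(text, self.lang)
        return {
            "simplified": simple,
            "audio": text_to_speech(simple, self.lang),
            "pictographs": text_to_pictographs(simple, self.lang),
        }

layer = AccessibilityLayer(lang="en")
print(layer.process("The committee has approved your travel request."))
```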

That being said, make sure to check out our GitHub and start developing!

The Artificial Intelligence Group

How do you think AI will realistically impact all of our lives in the next 3-5 years?

AI is already part of our daily lives. Think of Facebook’s face recognition technology, YouTube’s automated subtitling, Netflix’s and Amazon’s recommendations, and Google Search.

I am convinced that AI will continue to make our lives pleasantly comfortable in many possible ways.

I am particularly interested in how AI will contribute to enhanced independent living, not only for people with disabilities, but also for the elderly.

The industry is currently booming before our eyes. I have seen and tested prototypes of clever devices that physically support people with limited motor skills by registering their intentions, detect whether someone has fallen down the stairs, or measure whether someone has consumed enough proteins, carbohydrates, and other food energy sources.

Emotion or mood detection devices, virtual reality tools that provide help in interacting with the information society, technologies that automatically simplify letters or newspaper text, and intelligent dialogue systems that allow anyone to interact with the Internet of Things, such as home electronics, are just a few examples of technologies that will realistically impact the lives of those who need them in the not-so-distant future.

It should be noted, however, that privacy issues and loneliness resulting from the replacement of human care and company with technology may also arise much sooner than we think. Caution must therefore be exercised in order to avoid social isolation and loss of privacy for those who interact with AI.

As human beings we all have emotions. Those emotions can lead to good days and bad days! How do you try to turn a bad day into a good one?

Even though I always get complimented on my enthusiasm, I am too often overwhelmed by feelings of worry, as I tend to overthink things and usually prepare more than necessary.

I am a perfectionist, which can sometimes be very tiring.

I am still learning to take things easier, but I am certainly getting better at it!

For me, happiness lies in the small things – in doing the things I love, without any form of hesitation: buying ice cream after work (I’m definitely going to do that today!), making doodles for kids or friends, impulsively buying concert or event tickets, practicing a new language, playing with my cat, having a good conversation, and playing video games with captivating storylines, among many other things.

Speaking as a cat lover and a linguistics software expert yourself, do you believe that we will ever be able to create linguistics software that enables us to translate what animals are saying?

I certainly hope so! There have already been successful attempts at decoding animal language.

The main challenge lies in learning to interpret the meaning of animals’ calls and gestures within their specific context. Constantine Slobodchikoff, for instance, found that prairie dogs have distinct alarm calls indicating different potential predator species. These calls can be surprisingly detailed: the rodents can discriminate between different shapes and sizes (tall, short, or overweight) and colours (blue or green). The alarm calls can be recorded, and once enough data is available, new calls can be translated into their respective meanings through machine learning.
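As a toy illustration of that last step, the sketch below trains an off-the-shelf classifier on invented acoustic feature vectors with known meanings and uses it to label a new call; real work would need far richer features and much more data.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Invented acoustic feature vectors (say, pitch, duration, energy)
# for recorded alarm calls whose meanings are already known.
X_train = np.array([
    [8.1, 0.30, 0.9],   # "hawk"
    [8.3, 0.28, 0.8],   # "hawk"
    [4.2, 0.90, 0.5],   # "human"
    [4.0, 0.95, 0.6],   # "human"
    [6.1, 0.60, 0.7],   # "coyote"
    [6.3, 0.55, 0.7],   # "coyote"
])
y_train = ["hawk", "hawk", "human", "human", "coyote", "coyote"]

# With enough labelled recordings, a classifier learns to map a
# new, unseen call to its most probable meaning.
model = KNeighborsClassifier(n_neighbors=3).fit(X_train, y_train)

new_call = np.array([[8.0, 0.31, 0.85]])
print(model.predict(new_call))  # -> ['hawk']
```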

Of course, we need to step away from the idea that animal language will look anything like human natural languages.

A lot of information is communicated through their body language, as well.

For instance, when cats slowly blink at you, they consider you their buddy!

There are many factors that have to be taken into account, but who knows? Maybe animal language translation will become a reality one day!

Finally, Leen, what one piece of life advice would you pass on to the children in question 3 at the end of your week of teaching?

Quoting Teri Garr: “The real disability is people who can’t find joy in life.”

Live your life to the fullest, and don’t forget to spread happiness and joy everywhere you go!

More at www.ccl.kuleuven.be

You can follow Leen on Durtti and LinkedIn.

Leen is a member of The Artificial Intelligence Group on LinkedIn.