LINGUALENS
Transformed language learning into an interactive, visualized, and personalized experience.
Edtech
Personal Project
Year
2023
Duration
3 weeks
“For this mid-term, you need to submit your design concept and show what values your design brings to the community,” said professor T. in the Prototyping and Interaction Design course.
And that was how this project was born.
Ok. Let’s jam!
LinguaLens is a design aiming to go beyond mere translation and create an experience that incorporates visualized, interactive, and personalized learning elements.
Users can enhance their knowledge and develop positive attitudes throughout their journey of interacting with a new language.
I moved to Taipei at the end of August 2022 and my life completely changed. A whole new place to live, new people to meet, a new culture to experience, a new language to learn. Mobile phones and translation applications became essential tools for me to understand what I see and what I hear.
Typing a word and checking the translated text works, but then, how can I learn more actively and keep those words in my memory?
💡 I believe learning a language should be fun and interactive. When what we learn is tied to experiences and connected together, we can learn with a more positive attitude, which means better learning progress and outcomes.
I looked into the ways that I and the people around me learn a new language, especially self-learning methods for expanding vocabulary. There are tons of ways to go about it: books, online courses and instructions, language learning apps/websites, watching movies and TV shows, and even translation apps.
I looked specifically into the translation field. There are various applications for translating text, but only a minority experiment with using objects as input.
#1 app - iTranslate
The good
Information presented clearly in cards.
Suggested association words for better description.
Support playing sounds and copying.
“Plus” for more information about an object.
The gaps
Focuses on object recognition only.
Detects a single object only.
Redundant information (the word “computer”).
Does not support word meanings.
#2 app - Qtranslator
The good
Detects multiple objects at once.
The gaps
Readability issues due to overlapping information.
Focuses on object recognition only.
Does not provide relevant content.
Does not support further actions.
Does not support word meanings.
#3 app - Translation app
The good
Information presented in cards.
Language name is clearly indicated.
Provides sound support.
The gaps
Wrong translations due to technical issues.
Focuses on object recognition only.
Does not provide relevant content.
Does not support word meanings.
#4 app - Camtranslator
The good
Identifies objects effectively and suggests descriptive words.
Supports actions like copying, sharing, adding sound, and cropping objects for clarity.
The gaps
Detects a single object only.
Cluttered UI with poor hierarchy (buttons and content sized equally).
Same color for interactive elements and content causes confusion.
Inconsistent spacing; minimal bottom spacing leads to misclicks on ads.
Small, low-contrast photo-library and flash buttons; poor placement hurts accessibility.
Based on the above information, combined with conversations and personal experience, four main problems were identified:
Translating applications help identify translated words in specific situations, but they are not optimized for learning and lack a systematic way to manage knowledge.
The content, learning methods, or both are not interesting and relevant, which might cause learners to lack motivation and engagement during the learning process.
Prepared materials such as books, applications, and websites, as well as instruction in courses, are popular, but the content is updated slowly and lacks personalization.
There is a lack of opportunities to learn a language through a more active, hands-on experience.
This keeps me thinking about…
AI-driven Image Tagging and Description
Under the scope of artificial intelligence, image tagging technology utilizes computer vision and natural language processing to analyze the pixels of an image and generate descriptive words, tags, or a description of its content. This process transforms visual information into a textual representation, enhancing image understanding.
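To make the flow concrete, here is a minimal sketch of how an image-tagging pipeline could work. This is not the app's actual implementation: `classify_objects` is a hypothetical stub standing in for a real computer-vision model.

```python
# Minimal sketch of an image-tagging flow. `classify_objects` is a
# hypothetical stub standing in for a real vision model, which would
# map image pixels to (label, confidence) detections.

def classify_objects(image_pixels):
    # A real model would run inference here; this stub returns
    # fixed example detections for illustration.
    return [("cat", 0.94), ("pizza", 0.88), ("motorbike", 0.41)]

def tag_image(image_pixels, min_confidence=0.5):
    """Keep only confident detections and turn them into tags."""
    detections = classify_objects(image_pixels)
    return [label for label, score in detections if score >= min_confidence]

tags = tag_image(image_pixels=None)
print(tags)  # the low-confidence "motorbike" detection is filtered out
```

Filtering by a confidence threshold is one common way to reduce the wrong-translation and clutter problems observed in the competitor apps above.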
Augmented Reality
Augmented reality (AR) is a technology that overlays computer-generated digital content, such as images, videos, or 3D models, onto the real-world environment in real-time. It combines elements from the physical world with virtual objects, enhancing the user's perception and interaction with their surroundings.
A typical user's path
The primary goal is to help users learn. To achieve this, the design needs to present information clearly and make it easy for users to take actions that support their learning.
However, since the world is a learning canvas and users interact with it through their own perspective, the screen can sometimes become crowded with content. This leads to several key challenges:
How can we ensure the information is clear and accessible?
How can we avoid making the screen feel overwhelming?
How can we prevent misinterpretation of words or content?
Initial sketches (1)
To address these challenges, I explored a few approaches:
Using progressive disclosure to reveal small pieces of content directly on the image, expanding gradually as users interact.
Focusing on one object at a time, with related content displayed as a list below.
Separating objects and learning content from the main image for better organization.
After experimenting with these ideas and thinking about how they would impact the learning experience, I mixed and matched elements from each approach. In the end, I came up with a final version that felt just right.
LinguaLens
The name combines the Latin word for language (lingua) with the idea of using a lens/camera to capture and scan objects to learn new vocabulary.
Learning as Exploration
Traditional learning methods focus on translation, object recognition, or rigid courses.
LinguaLens takes a different approach, combining experience, visuals, and interactive learning to boost motivation, engagement, and accessibility.
Learning becomes a part of daily life.
Everyday Moments. Everyday Learning.
Personalized and dynamic.
No more slow updates.
By actively learning through their surroundings, users can turn real-world interactions into learning opportunities, just by using their camera.
Spot a cat on a pizza next to a motorbike? Capture it, and boom! Instant learning.
Enhanced Object Recognition
LinguaLens supports multiple object translations at once. Instead of overlapping text, translations appear on separate cards to maintain readability.
These cards with meanings help users avoid confusion and align words with the correct objects, preventing misinterpretation when the screen gets busy.
Strengthening Vocabulary and Comprehension
The app leverages associative priming, activating related words in a user’s semantic memory when they encounter new vocabulary.
This technique builds stronger word connections, improving recall and understanding by presenting words in meaningful contexts.
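Associative priming can be sketched as a simple lookup in a semantic network: encountering a new word surfaces related words alongside it. The tiny network below is illustrative only, not real app data.

```python
# Toy sketch of associative priming: when a new word is encountered,
# surface related words from a small hand-made semantic network.
# This network is an illustrative assumption, not real app content.

SEMANTIC_NETWORK = {
    "cat": ["kitten", "fur", "meow", "pet"],
    "pizza": ["slice", "cheese", "oven", "dough"],
}

def prime_associations(word, limit=3):
    """Return up to `limit` related words for a new vocabulary item."""
    return SEMANTIC_NETWORK.get(word, [])[:limit]

print(prime_associations("cat"))  # a few related words to show alongside
```

In practice the related words would come from a language model or a curated thesaurus, but the presentation idea is the same: new vocabulary always appears with meaningful neighbors.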
Structured Learning for Better Retention
Beyond translation, effective learning requires organization and reflection. LinguaLens automatically categorizes saved content into three key groups:
Images – Capture full memories of an experience.
Objects – Quickly retrieve specific translations.
Collections – Organize related experiences into thematic groups.
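The three-group organization above can be sketched as a simple grouping step over saved items. The data shapes here are assumptions for illustration, not the app's real schema.

```python
# Sketch of auto-categorizing saved content into the three groups
# described above: images, objects, and collections.
from collections import defaultdict

def organize(saved_items):
    """Group saved items by their kind for quick retrieval."""
    library = defaultdict(list)
    for item in saved_items:
        library[item["kind"]].append(item["name"])
    return dict(library)

saved = [
    {"kind": "image", "name": "street-market.jpg"},
    {"kind": "object", "name": "cat"},
    {"kind": "object", "name": "pizza"},
    {"kind": "collection", "name": "Taipei trip"},
]
print(organize(saved))
```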
Sharing Is Learning
Words may be the same, but experiences shape how we learn.
LinguaLens fosters a collaborative learning space where users share discoveries and learn from each other. Hashtags are auto-generated based on objects in an image, helping users explore content they love.
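Auto-generated hashtags can be sketched as a small transformation over the detected object labels. The labels here are made-up examples.

```python
# Sketch of auto-generating hashtags from the objects detected in an
# image. The label list is illustrative only.

def make_hashtags(object_labels):
    """Turn detected object labels into lowercase, space-free hashtags."""
    return ["#" + label.lower().replace(" ", "") for label in object_labels]

print(make_hashtags(["Cat", "Pizza", "Night Market"]))
# → ['#cat', '#pizza', '#nightmarket']
```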
Distractions and social pressure? Not here. LinguaLens includes anonymous mode, allowing users to engage at their own pace.
No pressure, no distractions, just purely learning and sharing between each other!
Create designs for learning that connect with people's experiences and motivate them to learn more.
Consider readability and information density when designing with real environments/objects.
HEY, YOU MADE IT ALL THE WAY HERE!