Problems and Pain Points

Multilingual speakers have difficulty expressing feelings and sentiments accurately across different languages. Why? Because today's translation tools don't consider the cultural and contextual nuances that are essential for accurate translations. In our research of other translation apps and tools via competitor analysis and user interviews, we found the following pain points:

  • Pain Point 01: Apps don't accommodate language change based on region
  • Pain Point 02: Apps only provide one result, and sometimes it isn't right
  • Pain Point 03: The same words are forgotten and always need to be re-researched

We decided to address these pain points with WordFor, a webapp to help bilingual and multilingual speakers find the words for what they want to say.

Our Solution

WordFor alleviates our users' pain points by:

  • Adding a region field to translation inputs for the highest level of accuracy
  • Incorporating dynamic regional gradients to highlight the region's importance
  • Including multiple search results to ensure every situation has a translation
  • Providing filtering capabilities so users can easily find the word they are looking for
  • Offering bookmarks so users can return to their saved translations

When testing our solution against Google Translate, we found that 100% of our users rated WordFor higher than Google Translate for both accuracy and usability.

We also found that 90% of our users rated WordFor above an 8/10 for both accuracy and usability.

Software Architecture

Before starting our development flow, we had to determine the software and tools we would use to achieve a fully functioning prototype of WordFor and to create our software architecture.

Although our initial technology research pointed us toward a ReactJS frontend with a Flask (Python) backend, we decided in the Winter term to use a SvelteJS frontend that also served as our backend. This was extremely useful because our development phase was essentially 10 weeks, and Svelte gave us the tools to execute our proof of concept most efficiently.
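As a rough illustration of this single-framework setup, assuming a SvelteKit project (where server routes live alongside the UI), a minimal translate endpoint could look like the sketch below; the file path and field names are illustrative rather than our exact code.

```ts
// src/routes/api/translate/+server.ts -- hypothetical path; illustrates the fullstack arrangement only
import { json } from '@sveltejs/kit';
import type { RequestHandler } from './$types';

// The Svelte frontend posts the translation form to this route, so the same
// SvelteKit app serves both the UI and the "backend" API.
export const POST: RequestHandler = async ({ request }) => {
  const { phrase, sourceLanguage, targetLanguage, region, context } = await request.json();

  // In the real app this is where the OpenAI translation call happens
  // (see the sketch in the next section); here we simply echo the request back.
  return json({ phrase, sourceLanguage, targetLanguage, region, context });
};
```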

Along with using Svelte, we chose OpenAI's APIs for all of our AI functionality. The OpenAI models we used are as follows, with a rough sketch of the corresponding API calls after the list:

  • GPT-3.5 Turbo: for our translation functionality
  • TTS: for our text-to-speech functionality
  • Whisper: for our speech-to-text functionality
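A minimal sketch of those calls using OpenAI's Node SDK is below; the specific model and voice identifiers ("gpt-3.5-turbo", "tts-1", "alloy", "whisper-1") are assumed defaults rather than documented choices from our build.

```ts
import fs from 'node:fs';
import OpenAI from 'openai';

const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// GPT-3.5 Turbo: produce the translation(s) for a prompt string.
export async function translate(prompt: string): Promise<string> {
  const completion = await openai.chat.completions.create({
    model: 'gpt-3.5-turbo',
    messages: [{ role: 'user', content: prompt }],
  });
  return completion.choices[0].message.content ?? '';
}

// TTS: read a translation aloud and save the audio to disk.
export async function speak(text: string, outPath: string): Promise<void> {
  const speech = await openai.audio.speech.create({ model: 'tts-1', voice: 'alloy', input: text });
  fs.writeFileSync(outPath, Buffer.from(await speech.arrayBuffer()));
}

// Whisper: transcribe a recorded phrase so it can be translated.
export async function transcribe(audioPath: string): Promise<string> {
  const transcription = await openai.audio.transcriptions.create({
    model: 'whisper-1',
    file: fs.createReadStream(audioPath),
  });
  return transcription.text;
}
```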

Finally, we also decided to use Google Firebase to store the feedback users submit when they interact with the feedback modal. This allows us to very simply look at the data from our users to determine whether the translations being produced are accurate. With this data, we hope to eventually train our own language model to provide more accurate translations.
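As a hedged sketch of this piece, feedback could be written to a Firestore collection with Firebase's modular web SDK; the collection name and field names below are assumptions, not our exact schema.

```ts
import { initializeApp } from 'firebase/app';
import { addDoc, collection, getFirestore, serverTimestamp } from 'firebase/firestore';

// Project credentials come from the Firebase console; placeholders here.
const firebaseConfig = { apiKey: '...', projectId: '...', appId: '...' };

const db = getFirestore(initializeApp(firebaseConfig));

// Called when the user submits the feedback modal. The "feedback"
// collection name and these field names are illustrative assumptions.
export async function submitFeedback(entry: {
  phrase: string;
  translation: string;
  wasAccurate: boolean;
  comments?: string;
}): Promise<void> {
  await addDoc(collection(db, 'feedback'), {
    ...entry,
    createdAt: serverTimestamp(),
  });
}
```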

Defining Our Development

Our development was initially divided into two phases: alpha and beta. The alpha phase was defined as delivering full functionality of our main task flow. This includes:

  • Translating words/phrases between English and Spanish using OpenAI, including producing multiple translations
  • Text-to-speech functionality where applicable, as seen in our Hi-Fi prototype design
  • Speech-to-text functionality where applicable, as seen in our Hi-Fi prototype design
  • Saving/bookmarking specific translations, including removing bookmarks
  • Recent Searches functionality: storing the user's recent translation searches, including deleting the entire search history or a single search item

Items that were then added during the beta phase were:

  • 1:1 accuracy with the designs (during alpha we focused on core functionality rather than styling)
  • Responsiveness
  • Feedback modal functionality
  • Filtering for the bookmark page

Breaking up our development this way ensured that our developers remained focused on functionality before styling the components. To style a component, we first had to create its functional version, which made the hand-off between the lead developer and the developer styling the components smoother.

Alpha

One of the biggest leaps during our Alpha Phase was connecting all of the AI models to our app. This was a hurdle, especially for the translation flow, because the response time of our API calls was extremely long. Cutting it down from about 20 seconds to less than 5 seconds required many revisions to make our API calls as efficient as possible. In addition, we added a loading screen to our Beta Phase list so the wait would feel shorter to the user.
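The loading screen itself could be driven by a simple Svelte store around the translation request; the module path, store name, and endpoint below are assumptions used only to sketch the pattern.

```ts
// src/lib/stores/translation.ts -- hypothetical module; one way to drive a loading screen
import { writable } from 'svelte/store';

// The loading screen component subscribes to this store.
export const isTranslating = writable(false);

export async function requestTranslation(payload: Record<string, string>) {
  isTranslating.set(true); // show the loading screen while we wait on the API
  try {
    const res = await fetch('/api/translate', {
      method: 'POST',
      headers: { 'Content-Type': 'application/json' },
      body: JSON.stringify(payload),
    });
    return await res.json();
  } finally {
    isTranslating.set(false); // hide it even if the request fails
  }
}
```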

A key part of our development during this phase was determining what to deliver as the prompt to our GPT-3.5 Turbo model and how to add the user's input into that prompt. The solution was to create a prompt that let us patch in the user's input in a "fill in the blank" style, passing their languages, region, context, and phrase into their respective "blanks". There were also cases where the user might not include a region or context; for that use case, we made sure to have a default "fill in" for the region and context "blanks" of our prompt. A rough sketch of this template is shown below.
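This is a minimal sketch of that fill-in-the-blank approach; the function name, wording, and default fill-ins are illustrative, not our production prompt.

```ts
// Hypothetical prompt builder for the GPT-3.5 Turbo call.
interface TranslationRequest {
  phrase: string;
  sourceLanguage: string;
  targetLanguage: string;
  region?: string;
  context?: string;
}

export function buildPrompt({ phrase, sourceLanguage, targetLanguage, region, context }: TranslationRequest): string {
  // Default fill-ins for the optional blanks, as described above.
  const regionBlank = region ?? 'any region';
  const contextBlank = context ?? 'a general, everyday context';

  return (
    `Translate the phrase "${phrase}" from ${sourceLanguage} to ${targetLanguage} ` +
    `as it would be said in ${regionBlank}, in ${contextBlank}. ` +
    `Return several possible translations, each with part of speech, phonetic spelling, ` +
    `definition, and example sentences in both languages.`
  );
}
```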

With our AI models connected to our webapp, much of the main task flow was functional, because the majority of the main task flow is passing along data from the translation response. For example, if the user translated "hello" from English to Spanish, one result would be "hola" along with its part of speech, phonetic spelling, definition, an example sentence using "hello" in English, and an example sentence using "hola" in Spanish. This data then gets passed into different areas of our state management depending on whether the user clicked into the result, bookmarked it, or viewed it in their recent searches. The shape of a single result is sketched below.
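Assuming field names of our own choosing (the write-up only describes what each result contains), one result could be modeled like this:

```ts
// Hypothetical shape of one translation result, based on the fields described above.
export interface TranslationResult {
  translation: string;      // e.g. "hola"
  partOfSpeech: string;     // e.g. "interjection"
  phoneticSpelling: string;
  definition: string;
  sourceExample: string;    // example sentence using the input phrase in the source language
  targetExample: string;    // example sentence using the translation in the target language
}

// Illustrative values only, following the "hello" -> "hola" example above.
export const example: TranslationResult = {
  translation: 'hola',
  partOfSpeech: 'interjection',
  phoneticSpelling: 'OH-lah',
  definition: 'A greeting used when meeting someone.',
  sourceExample: 'Hello, how are you?',
  targetExample: 'Hola, ¿cómo estás?',
};

// The same object flows into different areas of state management: the detail
// view when clicked, the bookmarks store when saved, and the recent-searches store.
```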

Beta

Once the main task flow functionality was complete, we were ready to transition into the Beta Phase. In this phase, Dane joined in to develop the styles of our components. While Dane styled the components built in the Alpha Phase, such as our form and translation results, the lead developer continued developing areas such as bookmarks and recent searches. The bookmarks page required a filtering system to filter bookmarked translations by language, region, and context (a sketch of this filter follows this paragraph). The recently searched page required the ability for the user to edit the page and remove all or individual recently searched items. As the lead developer finished functionality and components, they handed them off to Dane to style. Additionally, our UI Design Lead and Content Management Lead QA-tested any areas of the webapp that were marked as complete by our developers. If there was a bug, they logged it in our GitHub repo's bug tracker, and our developers determined whether it was a functionality bug or a styling bug and triaged it accordingly.
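A minimal sketch of that bookmark filtering, with field and filter names we've assumed for illustration:

```ts
// Hypothetical bookmark filtering for the bookmarks page.
export interface Bookmark {
  translation: string;
  language: string;
  region: string;
  context: string;
}

export interface BookmarkFilters {
  language?: string;
  region?: string;
  context?: string;
}

// Returns only the bookmarks matching every filter the user has set;
// unset filters are ignored.
export function filterBookmarks(bookmarks: Bookmark[], filters: BookmarkFilters): Bookmark[] {
  return bookmarks.filter(
    (b) =>
      (!filters.language || b.language === filters.language) &&
      (!filters.region || b.region === filters.region) &&
      (!filters.context || b.context === filters.context)
  );
}
```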

Learnings

Through this project we learned that sometimes, industry standards need to be broken in order to innovate. Our main competitors are leaders in language: Google Translate and Papago. They set the standard for contextual and one-size-fits-all translations but left multilingual speakers unable to communicate. We redefined the standards by providing multiple regional-based translations, achieving increased user satisfaction and higher usability and accuracy ratings than Google Translate.