HearAI

Model Sign Language with Machine Learning
Deaf people are affected by many forms of exclusion, especially now in the pandemic world. HearAI aims to build a deep learning solution to make the world more accessible for the Deaf community and increase the existing knowledge base in using AI for Polish Sign Language.

Who we are

Hear.ai is a non-profit, educational project. We are a group of ML enthusiasts, working under mentors' supervision, who want to build AI solutions for the general good while networking and gaining experience.

Our motivation

The Deaf are among the most excluded social groups. Communication outside their own community can be a serious problem for them: they often cannot rely on an interpreter when dealing with doctors (including in hospital), social workers, banks, etc. Sign language is also very different from Polish, which Deaf people learn as a foreign language. As a result, they have limited access to information and entertainment in their native language.

Our goals

  • Working on segmentation and detection of signs.
  • Increasing existing knowledge base.
  • Increasing public awareness.
  • Gaining experience.
  • Networking.
  • Experimenting with technologies.
  • Sharing knowledge.

Learn more about us here.


---
title: HearAI - Sign Language Recognition
summary: Automatic translation of sign language from video to text using the HamNoSys notation.
tags:
  - ai-for-good
date: "2021-05-01T00:00:00Z"

# Optional external URL for project (replaces project detail page).
external_link: "https://www.hearai.pl/"

image:
  caption: ""
  focal_point: Smart

links:
  - icon: link
    icon_pack: fas
    name: Project Website
    url: https://www.hearai.pl/
url_code: ""
url_pdf: ""
url_slides: ""
url_video: ""

# Slides (optional).
# Associate this project with Markdown slides.
# Simply enter your slide deck's filename without extension.
# E.g. slides = "example-slides" references content/slides/example-slides.md.
# Otherwise, set slides = "".
slides: ""
---

The project involved automatic translation of sign language from video to text using the HamNoSys notation. I co-organized HearAI, proposed the research plan, and led the team working on Polish Sign Language recognition. The research team included 12 people, seven of whom were recruited from 75 applications. We worked directly on video with multimodal custom models combining Vision Transformers, CNNs, and pose estimation (MediaPipe/OpenPose).
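As a rough illustration of the pose-estimation branch, a common preprocessing step in pose-based sign recognition is to normalize each frame's keypoints so the features become invariant to where the signer stands and how large they appear in the frame. A minimal sketch, assuming 2D keypoints have already been extracted per frame (e.g. by MediaPipe); the function name and reference-joint choice are illustrative and not part of the actual HearAI codebase:

```python
def normalize_keypoints(frames, ref_idx=0):
    """Normalize per-frame 2D keypoints for pose-based sign recognition.

    frames: list of frames, each a list of (x, y) joint coordinates.
    ref_idx: index of the reference joint (e.g. the wrist).

    Each frame is translated so the reference joint sits at the origin,
    then scaled by the frame's largest joint distance, making the
    features invariant to the signer's position and size.
    """
    normalized = []
    for frame in frames:
        rx, ry = frame[ref_idx]
        # Translate: reference joint becomes the origin.
        shifted = [(x - rx, y - ry) for x, y in frame]
        # Scale: divide by the farthest joint's distance (guard against 0).
        scale = max((x * x + y * y) ** 0.5 for x, y in shifted) or 1.0
        normalized.append([(x / scale, y / scale) for x, y in shifted])
    return normalized
```

Features like these can then be fed, frame by frame, into a sequence model (e.g. a Transformer encoder) alongside the visual streams.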
