2020 — 2021
HearAI — sign language with machine learning
Open-source, educational project · Polish Sign Language (PJM) · AI for accessibility
What HearAI is
HearAI is a non-profit, volunteer-driven learning lab: ML enthusiasts working with mentors to ship research-minded prototypes for social good, grow a community, and publish what we learn. The public site and codebase live at hearai.pl.
Why it matters
Sign languages are full natural languages—not signed Polish. Many deaf people experience Polish as a second language, while TV, education, and digital products rarely offer PJM-first experiences. Better tooling for sign detection, segmentation, and understanding is one small lever alongside human interpreters and policy.
Goals
- Segmentation and detection of signs in video
- Growing the shared knowledge base for Polish Sign Language + ML
- Public awareness and inclusive design
- Hands-on experience, mentoring, and open collaboration
- Experimenting with modern CV architectures and sharing lessons learned
My role
I co-organized HearAI, shaped the research direction, and led the core team. We recruited from a large applicant pool and ran a hands-on lab, working with multimodal models on video, Vision Transformers, CNNs, and pose estimation (e.g. MediaPipe, OpenPose) to capture sign articulation.
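A typical first step in a pose-based sign pipeline like the one described above is turning per-frame keypoints (as tools like MediaPipe or OpenPose produce) into translation- and scale-invariant features before feeding a sequence model. A minimal sketch with toy data, not HearAI's actual code; the wrist-centering choice and array layout are illustrative assumptions:

```python
import numpy as np

def normalize_keypoints(frames: np.ndarray) -> np.ndarray:
    """Center each frame's keypoints on a reference point (index 0,
    e.g. the wrist) and scale by the frame's largest coordinate, so the
    same sign looks alike regardless of where/how big it appears.
    frames: (T, K, 2) array of T frames with K (x, y) keypoints."""
    centered = frames - frames[:, :1, :]           # subtract reference point per frame
    scale = np.abs(centered).max(axis=(1, 2), keepdims=True)
    scale = np.where(scale == 0, 1.0, scale)       # avoid divide-by-zero on empty frames
    return centered / scale

# Toy example: 2 frames, 3 keypoints each; frame 2 is frame 1 shifted by (1, 1)
frames = np.array([[[0.0, 0.0], [1.0, 2.0], [2.0, 0.0]],
                   [[1.0, 1.0], [2.0, 3.0], [3.0, 1.0]]])
norm = normalize_keypoints(frames)
```

After normalization the two shifted frames produce identical features, which is exactly the invariance a downstream classifier (CNN, Transformer) benefits from.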
Related research — sign labeling
Supervised sign recognition depends on consistent, trustworthy labels. In collaborative work we analyzed HamNoSys annotations across open corpora in five sign languages, asking how maintainers label video and whether HamNoSys strings are objective enough for training modern models. The analysis surfaced friction between language-agnostic transcription and messy real-world annotation practice.
On the Importance of Sign Labeling: The Hamburg Sign Language Notation System Case Study (Ferlin et al., arXiv:2302.10768). Abstract & PDF on arXiv →