According to the US Census, about 68 million people in the United States speak a language other than English, a figure that held roughly steady from 2021 through October 2023. Language is a vital part of how we connect with one another. Enrollment of English learners (ELs) in public schools was higher in 2020 than in 2010, and the National Center for Education Statistics reports that ELs make up roughly 10% of public school students, approximately 5 million students in a country of 339 million people. There is every reason to expect that growth to continue through 2030.
A study by Digital Promise states that approximately 350 languages are spoken in the US today. Of these, Spanish, Chinese, Tagalog, Vietnamese, and Arabic are the most widely spoken after English. This brings us back to the need for transcriptions not only for those with disabilities, but also for those who speak another language. What if we allowed people to learn in the language in which they would best succeed? Would content then be more effective? The ability to provide multiple language options for students is even more pressing in states with a dominant presence of ESL (English as a second language) students and parents. Think of border states like California, Texas, Florida, and New Mexico, along with states like Colorado and Nevada, which rank among the top states for single-family households where English is a second language.
The mission of Songbird-ai is to let organizations easily upload audio files and receive transcriptions with multi-language options. We believe that accommodating all languages, and the people who speak them, will enhance the way we learn. If we can get organizations to treat these features as a standard, then inclusion becomes inevitable. The first model in our workflow is OpenAI's Whisper, an ASR (automatic speech recognition) model that supports up to 99 languages. That is obviously far from the 350 languages mentioned earlier, but it is a start that gets us further than any other automated transcription company today. We realize that Whisper is not the only ASR available, but in our testing it proved the most accurate. The ability to interchange our workflows and use best-in-breed models lets us provide transcriptions for multilingual use, covering both same-language transcription and translation from one language to another, so organizations can be inclusive without having to juggle more than one platform. The end result is helping cultivate a culture of equal accessibility.
Sources: