Photo credit: tootography
This content was provided by Kyiana Williams, CEO and founder of e.e.r.s., a civic engagement software company.
Most people meet artificial intelligence through Hollywood.
Through the futuristic fantasy of Meet the Robinsons.
Or through the apocalyptic “inevitability” in The Terminator.
When that’s your reference point, of course AI feels wildly unpredictable and terrifying.
It feels like it’s already too late.
Like someone, somewhere, has already built “the thing.”
Like we’re just waiting to find out which sci-fi plot we’re trapped inside.
But I don’t see it that way.
As an artist, I’ve always understood the power of imagination. When we see something on screen, we believe it. We internalize it. We prepare for it. And sometimes we mistake imagination for inevitability.
The truth is: we are at the very beginning of AI’s era. Not the end. Not the climax. The beginning.
So when I realized that, I didn’t feel fear. I felt responsibility.
I saw an opportunity.
Before e.e.r.s. was a civic tool, it was an arts tool. I built it inside my first company, entertwine, because I was frustrated by how opaque the entertainment industry was—especially for underserved storytellers. My team and I developed an algorithm that recommended grants, fellowships, and jobs to aspiring filmmakers who didn’t have insider access.
With e.e.r.s., we matched 600 women of color who were new to the industry with 100 seasoned mentors, using 25 data points. The AI delivered 80% accuracy in seconds. A process that used to take six to eight weeks of manual review suddenly became immediate.
At the time, I thought I was solving an access problem in entertainment.
Then, a business professor asked me a question that disrupted everything: Have you thought about changing the world?
What if this technology could be used for cancer research? AIDS research? Suicide prevention? What if social workers answering crisis calls didn’t have to fumble through databases while someone on the other end of the line was in despair?
That question led me to study the 988 Suicide & Crisis Lifeline.
What I learned was sobering. The agents—mostly women—were managing enormous emotional labor. They were holding space for people in crisis while simultaneously trying to locate local abuse shelters, rehab centers, counseling services, and emergency resources. The cognitive load was immense.
I approached them about using e.e.r.s. as a crisis-support tool—a system that could surface precise civic resources in real time while they focused on listening.
They didn’t hesitate. They loved it.
That was the moment I realized this wasn’t just an artist’s problem. It wasn’t even just a social worker’s problem. It was a systems problem. Access to help shouldn’t depend on how well someone can navigate fragmented information while under stress.
Cities began adopting the technology.
Then the Los Angeles fires happened.
My grandparents’ home burned down.
Suddenly, this wasn’t abstract civic infrastructure. It was personal. I watched how overwhelming disaster recovery was—how many people didn’t know where to turn in the most intense moment of need. Until then, we had been helping communities steadily, responsibly—but without urgency.
The fires changed that. The fires made me realize that e.e.r.s. had to be built not just for everyday gaps in access, but for crisis-level intensity.
Throughout all of this, I’ve never believed I was shifting from a people-driven company to a tech-driven one. Both entertwine and e.e.r.s. are human-first. Technology is simply the instrument.
I learned early on that strong products start with people. I spent years sitting down with the communities we serve and asking, How does this feel? What’s working? What’s not? The data across sectors—arts, education, civic services—is more similar than most assume. Once I understood that, broadening into social services wasn’t a leap. It was a natural extension.
What concerns me more than AI itself is the feeling people have that it’s happening to them and around them, without their input. That’s what’s terrifying—powerlessness.
But we are not powerless.
We are conscious participants in the making of history. We are early enough to shape this era. Through policy. Through purchasing power. Through boycotts. Through public demand.
We can insist that technology serve humanity.
Some argue that this human-first approach slows innovation. I ask: why is that a bad thing? Why do we need to move so fast and break things—especially when some things can’t be unbroken?
My co-founder, James, often says that people are chasing the next best thing without reflecting on what they’ve already built—or whether it actually makes life good.
I don’t want to chase the next sci-fi fantasy.
I build tools that help a social worker breathe while taking a suicide call.
I build systems that help a family find shelter after a fire.
I use my imagination not to predict apocalypse but to design infrastructure for care.
We are not in the final act of a dystopian film.
We’re in the early scenes. And we still get to decide how the story unfolds.