Mozilla.ai: not another chatbot

In this photo illustration, the Mozilla Firefox logo is seen displayed on an Android mobile phone.
(Image credit: Photo Illustration by Omar Marques/SOPA Images/LightRocket via Getty Images)

As ChatGPT and the like keep rapidly changing the tech world as we know it, concerns are growing that the risks might be outweighing the benefits.

We've previously written about how AI-powered chatbots have quickly become a privacy nightmare for users. Italy even temporarily banned OpenAI's tool on these grounds, prompting a spike in downloads of VPN services across the country.

Mozilla, the non-profit behind one of the best secure browsers, with 25 years of experience in making the internet a better place, believes a trustworthy and independent open-source AI ecosystem is what's needed to safely get the most out of the new AI wave.

Hence, the company invested $30 million to launch Mozilla.ai: a startup and, most importantly, a community built on this vision to develop safer AI products. We talked to Liv Erickson, Innovation Ecosystem Development Lead at Mozilla, to understand how this might work.

Liv Erickson

With over 10 years of experience in the emerging technology space, Liv joined Mozilla in 2019, first as Product Manager and then as Director of the Mozilla Hubs team. She has just taken on a new role as Innovation Ecosystem Development Lead to help shape the way Mozilla supports organizations and individuals in building new technologies with their communities of users at the core. It goes without saying that AI is now one of her main priorities.

As browsers and search engines race to develop their own AI chatbots, what risks and opportunities does this bring?

There's a lot of opportunity for AI to really help people collaborate and communicate more effectively and efficiently. And there are also a lot of risks from how many of these AI technologies have been developed so far.

There's been a big focus on very large language models that are trained on publicly available web information. These can really help, especially in terms of search and finding information, but there are certainly risks around how user interests are represented in the development of these systems.

A lot of these large language models are built primarily upon English-language websites. So, there's a lot of risk around bias in the training data, as these systems are generating text built off a dominant world perspective.

There are a lot of concerns around the centralization of how these systems are built and who they're available to, as well as a lack of transparency and documentation. As they're being trained on so much information, it's really hard to understand where that data is coming from. So, there's a risk that centralized large AI models include biases that we don't understand.

Where does Mozilla.ai stand in this scenario?

We see Mozilla's role as helping shape an ecosystem where people build these technologies in a different way: a place where we're sharing information and making sure this technology is accessible even if you don't have access to a huge amount of cloud computing resources.

By working with developers, founders, businesses, organizations, and scientists, we want to create this kind of third space where there isn't any one centralized interest shaping the technology.

We're really looking at it from a variety of angles. Mozilla.ai is one component, focused on the products and services that Mozilla can build to help people navigate this.

There's also the initiative we're running within the Mozilla Foundation, the Responsible AI Challenge, where we're taking a variety of approaches: finding people who are building the core enabling technologies, those building products and services on top of them, and those doing research, in a multidisciplinary way.

Do you think it's really possible to fix the privacy issues around AI tools' training data?

It's a great question, and it's a really key component of where Mozilla sees our particular role in this space, as people are finding ways to create new datasets that are smaller, more tailored, customized, and documented.

There are some really interesting community-driven projects focused on how trust and relationships between different individuals can actually build systems comparable to these large language models (LLMs). And, to go back to privacy and trust, these types of models allow you to be specific about how information has been sourced.

Two phones next to each other, one with ChatGPT open and one with Google

(Image credit: Shutterstock / Tada Images)

In your opinion, could AI regulation help Mozilla to promote safer AI?

I absolutely think that there's a role for regulators here in helping create frameworks for companies building this type of software to communicate how they've trained their models and where that information has come from.

I think a lot can be done, too, about expanding the definition of what private and personal information even means. There's an opportunity for regulators to look at how the definition of privacy is going to evolve as these systems become more capable of uniquely identifying people.

There's also a role for regulators to play in making transparent what these systems are actually capable of, in a way the average consumer understands, and in protecting users from misleading claims about what these systems can do.

As Mozilla celebrates 25 years of operations, what are, in your opinion, its biggest achievements in promoting a more private and free internet?

When we look at our role in this ecosystem, I think we've really been able to turn the tide in how people think about building these technologies.  

With Firefox, Mozilla fought for the web to be open and accessible to developers, helping anybody who wants to create a presence on the internet have the resources and capabilities to do that. Really, it's just helped shape a more collective way of building technologies together.

Today, this also acts as a model as we look at the next 25 years and what they mean, not just for AI, a big topic right now, but generally for how people engage with technology, the internet, and each other.

In your view, what are today's biggest threats to the internet as we know it?

I think centralizing so much of what we see in the technology space around large cloud providers is a risk, given how quickly they're able to build and deliver things to users.

Going back to our role at Mozilla as a place for building a community where we can call out problems and rally people to help solve them, I see that as a kind of antidote.

A woman sits in front of her laptop, listening to music through headphones

(Image credit: Stocksy)

The central idea of Mozilla's Manifesto is promoting a healthy internet. So, how do you plan to do that?

I think if you ask different folks at Mozilla you'll get different perspectives, but for me it comes back to rethinking the way we define innovation. 

The internet touches so many people's lives, and yet disproportionately few people actually have a say in how it's being built or how their data is being used. When I think about what innovation can look like in the future of the internet, it's finding ways to innovate with those communities: casting out a really wide net, being willing to be transparent about what we're trying and learning, and investing in projects even if they're not within Mozilla.

With how fast everything's moving, we have a relatively short window to make sure we're not repeating the mistakes we've made in the past with centralized systems. It's important to highlight the need for more community solutions in this space and to really challenge our default way of thinking about how technology is built, which is both exciting and challenging.

Chiara Castro
Senior Staff Writer

Chiara is a multimedia journalist committed to covering stories to help promote the rights and denounce the abuses of the digital side of life—wherever cybersecurity, markets and politics tangle up. She mainly writes news, interviews and analysis on data privacy, online censorship, digital rights, cybercrime, and security software, with a special focus on VPNs, for TechRadar Pro, TechRadar and Tom’s Guide. Got a story, tip-off or something tech-interesting to say? Reach out to chiara.castro@futurenet.com