MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Wikipedia in the news - rip and read.
User avatar
Hemiauchenia
Habitué
Posts: 1049
kołdry
Joined: Sun Mar 21, 2021 2:00 am
Wikipedia User: Hemiauchenia

MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Hemiauchenia » Wed Feb 28, 2024 3:39 am

https://www.technologyreview.com/2024/0 ... ributions/

By Rebecca Ackermann
Selena Deckelmann argues that in this era of machine-generated content, Wikipedia becomes even more valuable.

Selena Deckelmann has never been afraid of people on the internet. With a TV repairman and CB radio enthusiast for a grandfather and a pipe fitter for a stepdad, Deckelmann grew up solving problems by talking and tinkering. So when she found her way to Linux, one of the earliest open-source operating systems, as a college student in the 1990s, the online community felt comfortingly familiar. And the thrilling new technology inspired Deckelmann to change her major from chemistry to computer science.

Now almost three decades into a career in open-source technology, Deckelmann is the chief product and technology officer (CPTO) at the Wikimedia Foundation, the nonprofit that hosts and manages Wikipedia. There she not only guides one of the most turned-to sources of information in the world but serves a vast community of “Wikipedians,” the hundreds of thousands of real-life individuals who spend their free time writing, editing, and discussing entries—in more than 300 languages—to make Wikipedia what it is today.

It is undeniable that technological advances and cultural shifts have transformed our online universe over the years—especially with the recent surge in AI-generated content—but Deckelmann still isn’t afraid of people on the internet. She believes they are its future.

In the summer of 2022, when she stepped into the newly created role of CPTO, Deckelmann didn’t know that a few months later, the race to build generative AI would accelerate to a breakneck pace. With the release of OpenAI’s ChatGPT and other large language models, and the multibillion-dollar funding cycle that followed, 2023 became the year of the chatbot. And because these models require heaps of cheap (or, preferably, even free) content to function, Wikipedia’s tens of millions of articles have become a rich source of fuel.

To anyone who’s spent time on the internet, it makes sense that bots and bot builders would look to Wikipedia to strengthen their own knowledge collections. Over its 23 years, Wikipedia has become one of the most trusted sources for information—and a totally free one, thanks to the site’s open-source mission and foundation support. But with the proliferation of AI-generated text and images contributing to a growing misinformation and disinformation problem, Deckelmann must tackle an existential question for Wikipedia’s product and community: How can the site’s open-source ethos survive the coming content flood?

Deckelmann argues that Wikipedia will become an even more valuable resource as nuanced, human perspectives become harder to find online. But fulfilling that promise requires continued focus on preserving and protecting Wikipedia’s beating heart: the Wikipedians who volunteer their time and care to keep the information up to date through old-fashioned talking and tinkering. Deckelmann and her team are dedicated to an AI strategy that prioritizes building tools for contributors, editors, and moderators to make their work faster and easier, while running off-platform AI experiments with ongoing feedback from the community. “My role is to focus attention on sustainability and people,” says Deckelmann. “How are we really making life better for them as we’re playing around with some cool technology?”

What Deckelmann means by “sustainability” is a pressing concern in the open-source space more broadly. When complex services or entire platforms like Wikipedia depend on the time and labor of volunteers, contributors may not get the support they need to keep going—and keep those projects afloat. Building sustainable pathways for the people who make the internet has been Deckelmann’s personal passion for years. In addition to working as an engineering and product leader at places like Intel and Mozilla and contributing to open-source projects herself, she has founded, run, and advised multiple organizations and conferences that support open-source communities and open doors for contributors from underrepresented groups. “She has always put the community first, even when the community is full of jerks making life unnecessarily hard,” says Valerie Aurora, who cofounded the Ada Initiative—a former nonprofit supporting women in open-source technology that had brought Deckelmann onto its board of directors and advisory board.

Addressing both a community’s needs and an organization’s priorities can be a challenging balancing act—one that is at the core of open-source philosophy. At the Wikimedia Foundation, everything from the product’s long-term direction to details on its very first redesign in decades is open for public feedback from Wikipedia’s enormous and vocal community.

Today Deckelmann sees a newer sustainability problem in AI development: the predominant method for training models is to pull content from sites like Wikipedia, often generated by open-source creators without compensation or even, sometimes, awareness of how their work will be used. “If people stop being motivated to [contribute content online],” she warns, “either because they think that these models are not giving anything back or because they’re creating a lot of value for a very small number of people—then that’s not sustainable.” At Wikipedia, Deckelmann’s internal AI strategy revolves around supporting contributors with the technology rather than short-circuiting them. The machine-learning and product teams are working on launching new features that, for example, automate summaries of verbose debates on a wiki’s “Talk” pages (where back-and-forth discussions can go back as far as 20 years) or suggest related links when editors are updating pages. “We’re looking at new ways that we can save volunteers lots of time by summarizing text, detecting vandalism, or responding to different kinds of threats,” she says.

But the product and engineering teams are also preparing for a potential future where Wikipedia may need to meet its readers elsewhere online, given current trends. While Wikipedia’s traffic didn’t shift significantly during ChatGPT’s meteoric rise, the site has seen a general decline in visitors over the last decade as a result of Google’s ongoing search updates and generational changes in online behavior. In July 2023, as part of a project to explore how the Wikimedia Foundation could offer its knowledge base as a service to other platforms, Deckelmann’s team launched an AI experiment: a plug-in for ChatGPT’s platform that allows the chatbot to use and summarize Wikipedia’s most up-to-date information to answer a user’s query. The results of that experiment are still being analyzed, but Deckelmann says it’s far from clear how and even if users may want to interact with Wikipedia off the platform. Meanwhile, in February she convened leaders from open-source technology, research, academia, and industry to discuss ways to collaborate and coordinate on addressing the big, thorny questions raised by AI. It’s the first of multiple meetings that Deckelmann hopes will push forward the conversation around sustainability.

Deckelmann’s product approach is careful and considered—and that’s by design. In contrast to so much of the tech industry’s mad dash to capitalize on the AI hype, her goal is to bring Wikipedia forward to meet the moment, while supporting the complex human ecosystem that makes it special. It’s a particularly humble mission, but one that follows from her career-long dedication to supporting healthy and sustainable communities online. “Wikipedia is an incredible thing, and you might look at it and think, ‘Oh, man, I want to leave my mark on it.’ But I don’t,” she says. “I want to help [Wikipedia] out just enough that it’s able to keep going for a really long time.” She has faith that the people of the internet can take it from there.
One part of this that jumps out to me is this quote:
“If people stop being motivated to [contribute content online],” she warns, “either because they think that these models are not giving anything back or because they’re creating a lot of value for a very small number of people—then that’s not sustainable.”
Surely Deckelmann knows that this isn't true, as Wikipedia editors have always been "creating a lot of value for a very small number of people", that "very small number of people" of course being WMF executives.

As a broader comment on the subject of the article, the purpose (in part) of chatbots like ChatGPT is to summarise pre-existing sources, which is exactly what a Wikipedia article is supposed to do. I have no idea if Wikipedia has a long-term future against the chatbots, though it might hinge on the results of upcoming lawsuits like NYT v OpenAI.

User avatar
Kraken
Banned
Posts: 542
Joined: Tue Feb 06, 2024 2:44 pm

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Kraken » Wed Feb 28, 2024 9:44 am

She doesn't sound like she knows much about Wikipedia at all. I have no clue what a nuanced human perspective is supposed to mean, but that's not what Wikipedia content is, that's for sure.

Readers of Wikipedia aren't supposed to detect a human touch in the product at all. It's supposed to look and sound like a robot created the content. The fact technology can't yet do that is the only reason Wikipedians haven't embraced AI with a greater enthusiasm than Mark Zuckerberg embracing a VR sex bot.

She vastly overstates the number of Wikipedians, through either cluelessness or the usual PR boosterism. She vastly overestimates how much talking they do, or what they are doing most of their talking for. It's not responding to edit requests to update articles, that's for sure. That's the very first thing they would hand over to AI if they could. You can already write a bot that says fuck off you clueless dipshit, right?

----

(I was going to insert a bit here about how robotic and non-nuanced Wikipedia's human responses to less than perfect edit requests have already become with absolutely no help from AI, but when I fired up Israel-Hamas War (T-H-L) as the most likely place to find a hundred such messages, I was amused to be reminded that newcomers are no longer even allowed to post to such talk pages until they have done what is essentially a complete induction into Wikipedia editing)

Wikipedia's beating heart is already quite coldly inhuman and insanely unsustainable, and this is how the Wikipedians want it. They did that. To the point one suspects the only thing they would want an AI for is to fire a cruise missile right into the home of anyone who even has a thought about starting their Wikipedia experience on an article that would be quite important to a future where readers interact with the content not directly but via "a plug-in for ChatGPT’s platform that allows the chatbot to use and summarize Wikipedia’s most up-to-date information to answer a user’s query".

Hey Chatbot! How many innocent Palestinians did the Israeli state murder today?

Hello Dave. Today the Israeli state murdered 0 innocent Palestinians and 1,000 heavily armed and ruthless baby killing terrorists.

Is there anything else I can help you with today Dave?

That doesn't sound right Siri. Let me go check Wikipedia and help make the world a better place.

KABOOM.

WikiAI: Threat Eliminated.

Wikipedians: ALL RIGHT! NOW THAT'S WHAT AM TALKING ABOOT

(In the future, all Wikipedians are Canadian, cos Trump).

----

She didn't know about the forthcoming AI revolution. She doesn't know what motivates Wikipedians to keep going. She doesn't know Wikipedians don't see themselves as jerks, and it mightily pisses them off to be told they are jerks by the people they're volunteering for. She doesn't know Wikipedians don't want to be told what the direction of Wikipedia is, and part of their jerkiness is not just making her life hard, it's acting like she doesn't exist at all. Seeing her as an obstacle, not a partner, is usually where their heads are at.

Overall, she seems to have vastly overstated how important technology, much less an open source ethos, is to the Wikipedians. Quite a lot of them are utterly clueless about software. They are stereotypical users, not coders. Which is no bad thing, given you don't need a single bit of coding knowledge to write a good Wikipedia article, much less keep it updated.

It is the small but annoying band of open source coders who find Wikipedia to be a stimulating hobby who have seemingly made it their mission to piss off the people who are interested in Wikipedia for the whole keeping information accurate and up to date part. The serving humanity part.

They're the ones who seem at times really quite blind to the importance of a volunteer needing to know quickly where a particular piece of vandalism originated from. Or rather, they have created a dependency on the old school super non-AI coder to meet the moment (Wikipedia's dearth of human resources). If something newer and shinier and with even less bothersome human interaction comes along and they leave, Wikipedia is truly fucked.

What she should be developing are AI tools that can help Wikipedia with the only problem that is unique to them. Something that is of no concern to Google or Facebook or Amazon or InstaTikTwat, not just because they pay their employees, but because they have a solid business model.

Wikipedia's business model is and always will be to get humans interested in editing to the point they do understand its quirks and foibles and are willing to put up with the many many downsides, for the few upsides. This is the thing that Google fucked with by taking readers away from the platform.

The problem is known. The solutions thus far, even the AI ones, have missed the mark. Making it easier to detect vandals doesn't translate at all into making it easier to onboard newcomers. Because, for example, the assumption that vandal fighters are just overworked welcome hosts was always pure nonsense. Other than the fact a vandal fighter's solution to that problem is of course a bot that slaps a newcomer with a welcome template. Job done. Why isn't it working? Who cares. Just keep firing, dammit!

Wikipedia needs a Microsoft paperclip. A piece of technology that was laughed at in its day, but which in hindsight was clearly far ahead of its time. A piece of AI software that detects what stupid n00b editors are trying to do in real time and gives them fast and effective guidance.

And unlike the paperclip, WikiClip can now even give them direct assistance with editing, and also direct them to genuinely useful human help. If it wants to help of course. But if it does, AI will have at least matched two humans with compatible interests.

It will have its downsides of course. Wikipedia might fracture into different communities with different interests working toward different goals, and when viewed from on high the whole thing might look quite absurd. But one could argue that has already happened, and it was the Foundation that brought us this nightmare by hiring people who really don't understand Wikipedia or Wikipedians at all.
No thank you Turkish, I'm sweet enough.

User avatar
ltbdl
Critic
Posts: 171
Joined: Mon Sep 11, 2023 4:38 am
Wikipedia User: ltbdl
Location: Cape Denison

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by ltbdl » Wed Feb 28, 2024 1:29 pm

Kraken wrote:
Wed Feb 28, 2024 9:44 am
[...]
zzz... huh? wha? you're still talking?
if you are reading this then you maybe are suffering maybe paranoia perhaps (or not)...

User avatar
Ron Lybonly
Regular
Posts: 425
Joined: Thu Jun 08, 2023 12:29 am

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Ron Lybonly » Wed Feb 28, 2024 1:52 pm

Kraken wrote:
Wed Feb 28, 2024 9:44 am


Wikipedia needs a Microsoft paperclip. A piece of technology that was laughed at in its day, but which in hindsight was clearly far ahead of its time. A piece of AI software that detects what stupid n00b editors are trying to do in real time and gives them fast and effective guidance.

And unlike the paperclip, WikiClip can now even give them direct assistance with editing, and also direct them to genuinely useful human help. If it wants to help of course. But if it does, AI will have at least matched two humans with compatible interests.

This is a great idea.

User avatar
eppur si muove
Habitué
Posts: 1997
Joined: Mon Mar 19, 2012 1:28 pm

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by eppur si muove » Wed Feb 28, 2024 2:21 pm

The sentence which most caught my eye was:
Over its 23 years, Wikipedia has become one of the most trusted sources for information—and a totally free one,
Is it really that trusted? If so why do I hear it joked about so often on comedy programmes?

Wikipedia is used so much because it is free and can be looked at anywhere with access to the internet, not because people believe it is accurate.

User avatar
Ismail
Contributor
Posts: 74
Joined: Tue Jun 05, 2012 2:25 pm
Wikipedia User: Ismail

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Ismail » Wed Feb 28, 2024 2:29 pm

eppur si muove wrote:
Wed Feb 28, 2024 2:21 pm
Wikipedia is used so much because it is free and can be looked at anywhere with access to the internet, not because people believe it is accurate.
I'm reminded of a remark by a critic back in 2006, "Just as McDonald's is where you go when you're hungry but don't really care about the quality of your food, Wikipedia is where you go when you're curious but don't really care about the quality of your knowledge. . . . Wikipedia is becoming the butt of jokes all across the mainstream media. Complaining about Wikipedia will soon become as common as complaining about AOL."

I do think criticism of Wikipedia on political grounds has greatly increased in significance over the past decade though.

User avatar
Randy from Boise
Been Around Forever
Posts: 12281
Joined: Sun Mar 18, 2012 2:32 am
Wikipedia User: Carrite
Wikipedia Review Member: Timbo
Actual Name: Tim Davenport
Nom de plume: T. Chandler
Location: Boise, Idaho

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Randy from Boise » Wed Feb 28, 2024 3:47 pm

Kraken wrote:
Wed Feb 28, 2024 9:44 am
She vastly overstates the number of Wikipedians, through either cluelessness or the usual PR boosterism.
This is the thing that jumped out at me. We used to have really good numbers every month for very active Wikipedians. I think that was determined by 100 edits in a month. It was a fabulous series over time, listed by project. My recollection is that the number is right around 10,000 people per month across all projects — not "hundreds of thousands" for sure, although if you look at WP's entire 20+ year history that number would doubtlessly run that high.

The person who used to tabulate and maintain the series ended up leaving WMF, replaced by no one, and the data isn't generated any longer, so far as I am aware.

I think the systemic overcounting by WMF and its allies is intentional and BAD — it devalues the actual people in the trenches. If there are actually, let's say, 8,000 core Wikipedians and 250 administrators on English WP, that is a very different situation than if there were 179,000 and 500 administrators. In one case the individuals are precious, in the other disposable.

t

User avatar
Giraffe Stapler
Habitué
Posts: 3180
Joined: Thu May 02, 2019 5:13 pm

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Giraffe Stapler » Wed Feb 28, 2024 4:40 pm

This is off-topic, but I looked up Valerie Aurora (T-H-L) just to confirm that that wasn't her birth name. Wikipedia tells me her mother is Carolyn Meinel (T-H-L). The name vaguely rings a bell and I click to her entry.
Carolyn P. Meinel (born 1946) is notable for being one of the targets in the hacking scene during the 1990s.
That sounds interesting, so I keep reading to find out more about that.
In 1996, Meinel was among the targets of a high-profile email bomber known as "Unamailer" or "johnny xchaotic".
That's it. Good work Wikipedia.

User avatar
Randy from Boise
Been Around Forever
Posts: 12281
Joined: Sun Mar 18, 2012 2:32 am
Wikipedia User: Carrite
Wikipedia Review Member: Timbo
Actual Name: Tim Davenport
Nom de plume: T. Chandler
Location: Boise, Idaho

Re: MIT Technology Review: Wikimedia’s CTO: In the age of AI, human contributors still matter

Unread post by Randy from Boise » Wed Feb 28, 2024 5:27 pm

eppur si muove wrote:
Wed Feb 28, 2024 2:21 pm
The sentence which most caught my eye was:
Over its 23 years, Wikipedia has become one of the most trusted sources for information—and a totally free one,
Is it really that trusted? If so why do I hear it joked about so often on comedy programmes?

Wikipedia is used so much because it is free and can be looked at anywhere with access to the internet, not because people believe it is accurate.
That's an interesting question. If Wikipedia wanted to do something useful instead of providing scholarships for cultists to conventions at exotic locales, it might engage in some polling to track the ebbs and flows of WP's reputation, much like trust in politicians or the media or the judiciary or the military is tracked over time.

I think WP would score high marks for accuracy, although I don't think very many people understand how the bumblebee actually flies.

It would absolutely score high marks for convenience and cost.

Even people who don't think they trust or would ever use Wikipedia actually do when they use any other on-line information aggregator or AI device — all of which are fed Wikipedia content for breakfast.

t