What’s going on with the LinkedIn algorithm?


One day in November, a product strategist we’ll call Michelle (not her real name) logged into her LinkedIn account and switched her gender to male. She also changed her name to Michael, she told TechCrunch.

She was participating in an experiment called #WearthePants, in which women tested the hypothesis that LinkedIn’s new algorithm is biased against women.

For months, some heavy LinkedIn users have complained of a decline in engagement and impressions on the career-oriented social network. The complaints came after the company’s vice president of engineering, Tim Jurka, said in August that the platform had “recently” implemented LLM classifiers to help surface useful content to users.

Michelle (whose identity is known to TechCrunch) was skeptical about the changes because she has more than 10,000 followers and ghostwrites posts for her husband, who only has about 2,000. However, she said she and her husband tend to get the same number of post impressions, despite her larger following.

“The only significant variable was gender,” she said.

Founder Marilyn Joyner also changed the gender on her profile. She has been posting on LinkedIn consistently for two years and in the past few months noticed a decrease in the visibility of her posts. “I changed my profile gender from female to male, and my impressions jumped 238% within one day,” she told TechCrunch.

Megan Cornish reported similar results, as did Rosie Taylor, Jessica Doyle-Meeks, Abbie Needham, Felicity Menzies, and Lucy Ferguson, among others.


LinkedIn said that its “algorithms and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the feed,” and that “a side-by-side screenshot of your feed updates that are not perfectly representative, or equal in reach, does not automatically imply unfair treatment or bias” within the feed.

Social algorithm experts agree that explicit sexism may not have been the cause, although implicit bias may be.

Platforms are “a complex symphony of algorithms that pull specific mathematical and social levers, simultaneously and continuously,” Brandeis Marshall, a data ethics consultant, told TechCrunch.

“Changing the profile picture and name is just one of those factors,” she said, adding that the algorithm is also affected, for example, by how the user currently engages and interacts with other content.

“What we don’t know are all the other levers that cause this algorithm to prioritize one person’s content over another. This is a more complex problem than people assume,” Marshall said.

Bro-coded

The #WearthePants experiment started with two entrepreneurs: Cindy Gallop and Jane Evans.

They asked two men to create and post the same content, curious to see whether gender was the reason so many women were seeing less engagement. Both Gallop and Evans had large followings, more than 150,000 combined, compared with the two men, who had about 9,400 at the time.

Gallop reported that her post reached only 801 people, while the man who posted the exact same content reached 10,408 people, more than 100% of his followers. Other women then participated. Some, like Joyner, who uses LinkedIn to market her business, became concerned.

“I would really like to see LinkedIn take responsibility for any bias that may exist within its algorithm,” Joyner said.

But LinkedIn, like other LLM-based search and social media platforms, provides few details on how it trains its content-selection models.

Marshall said most of these platforms “innately incorporated a white, male, Western-centric point of view” because of who trained the models. Researchers have found evidence of human biases such as sexism and racism in popular LLMs because the models are trained on human-generated content, and humans are often directly involved in post-training or reinforcement learning.

However, how any individual company implements its AI systems remains shrouded in algorithmic black box secrecy.

LinkedIn says the #WearthePants experiment could not have proven sexism against women. Jurka’s August statement said, and Sakshi Jain, LinkedIn’s head of responsible AI and governance, confirmed in another post in November, that its systems do not use demographic information as a visibility signal.

Instead, LinkedIn told TechCrunch that it tests millions of posts to connect users to opportunities. The company said demographic data is used only for such testing, for example to see whether posts “from different creators compete on an equal footing and that the scrolling experience, what you see in the feed, is consistent across audiences.”

LinkedIn has previously been noted for researching and adjusting its algorithm to try to provide a less biased experience for users.

Marshall said it’s the unknown variables that likely explain why some women saw increased impressions after changing the gender on their profile to male. Participating in a viral trend, for example, can increase engagement; some accounts were posting for the first time in a long while, and the algorithm could reward them for that.

Tone and writing style may also play a role. Michelle, for example, said that the week she posted as “Michael,” she adjusted her tone slightly, writing in a simpler, more direct style, as she does when ghostwriting for her husband. That week, she said, impressions jumped 200% and engagements rose 27%.

She concluded that the system was not “overtly sexist,” but seemed to treat communication styles typically associated with women as “low-value.”

Stereotypically male writing is thought to be more concise, while stereotypically female writing is thought to be softer and more emotional. If LLMs are trained to promote writing that conforms to masculine stereotypes, that is an implicit, subtle bias, and as mentioned earlier, researchers have found that most LLMs are full of such biases.

Platforms like LinkedIn often use entire profiles, as well as user behavior, when deciding what content to boost, said Sarah Dean, an assistant professor of computer science at Cornell University. This includes the information in a user’s profile and the type of content they typically interact with.

“Someone’s demographics can influence ‘both sides’ of the algorithm: what they see and who sees what they post,” Dean said.

LinkedIn told TechCrunch that its AI systems look at hundreds of signals to determine what is being pushed to a user, including insights from a person’s profile, network, and activity.
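A ranker of the kind LinkedIn describes can be sketched, purely illustratively, as a weighted sum over behavioral signals. Every signal name, weight, and number below is invented for this sketch; the real system is far more complex and not public. The point of the toy is that even with no demographic input, signals such as topic or style can still correlate with demographics.

```python
# Toy feed ranker: score posts by weighted behavioral signals only.
# All signal names and weights are hypothetical, not LinkedIn's.

def score_post(signals: dict[str, float], weights: dict[str, float]) -> float:
    """Weighted sum over whatever behavioral signals a post has."""
    return sum(weights.get(name, 0.0) * value for name, value in signals.items())

# Hypothetical weights: note there is no demographic feature at all.
weights = {"clicks": 0.5, "saves": 1.0, "dwell_seconds": 0.02, "topic_match": 2.0}

posts = {
    "career_lessons": {"clicks": 120, "saves": 30, "dwell_seconds": 900, "topic_match": 1.0},
    "personal_update": {"clicks": 40, "saves": 5, "dwell_seconds": 300, "topic_match": 0.2},
}

# Rank post IDs by score, highest first.
ranked = sorted(posts, key=lambda p: score_post(posts[p], weights), reverse=True)
print(ranked)  # the career-focused post outranks the personal one under these weights
```

If a signal like "topic_match" or writing style is distributed unevenly across groups of users, a demographically blind scorer can still produce demographically skewed outcomes, which is the kind of implicit bias the experts quoted here describe.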

“We are constantly testing to understand what helps people find the most relevant and timely content for their careers,” the spokesperson said. “Member behavior also shapes the feed, what people click, save, interact with and changes daily, and the formats they like or don’t like. This behavior also naturally shapes what appears in feeds along with any updates from us.”

Chad Johnson, a sales expert active on LinkedIn, described the changes as deprioritizing likes, comments, and reposts. The LLM system “no longer cares how many times you post or what time of day,” Johnson wrote in a post. “It matters whether your writing shows understanding, clarity, and value.”

All of this makes it difficult to pinpoint the real cause of any #WearthePants results.

People just don’t like the algorithm

However, it seems that many people, of all genders, either don’t like or don’t understand LinkedIn’s new algorithm, whatever it is.

Data scientist Shelvie Wakolo told TechCrunch that she averaged at least one post a day for five years and was receiving thousands of impressions. Now she and her husband are lucky to see a few hundred. “It’s frustrating for creators who have a large, loyal following,” she said.

One man told TechCrunch that he’s seen a 50% drop in engagement over the past few months. However, another man said he saw post impressions and reach increase by more than 100% in a similar time period. “This is largely because I write about specific topics for specific audiences, which the new algorithm rewards,” he told TechCrunch, adding that his clients are seeing a similar increase.

But in Marshall’s experience, she, who is Black, believes her posts about her own experiences perform worse than her posts about race. “If Black women only get interactions when they talk about Black women, but not when they talk about their own experiences, that’s bias,” she said.

Dean believes the algorithm may simply amplify “any signals that are already there.” That can boost certain posts not because of the writer’s demographics, but because the platform has a longer history of responses to them. While Marshall may have stumbled upon another area of implicit bias, the anecdotal evidence she presents is not enough to determine it with certainty.
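Dean’s amplification point can be sketched as a toy feedback loop; every number and the boost rule here are invented for illustration and are not LinkedIn’s. Two near-identical posts compete for a fixed pool of impressions each round, and exposure is allocated in proportion to prior engagement raised to a power; when that power is above 1, a small early lead compounds.

```python
# Toy "rich get richer" feed simulation. All values are hypothetical.

def run_feed(rounds: int, boost_power: float) -> dict[str, float]:
    """Two posts split 100 impressions per round in proportion to
    (prior engagement) ** boost_power; engagement grows with exposure."""
    engagement = {"post_a": 10.0, "post_b": 12.0}  # post_b starts slightly ahead
    for _ in range(rounds):
        weights = {post: score ** boost_power for post, score in engagement.items()}
        total = sum(weights.values())
        for post in engagement:
            # New engagement is proportional to this post's share of exposure.
            engagement[post] += 100 * weights[post] / total
    return engagement

final = run_feed(rounds=30, boost_power=1.5)
share_b = final["post_b"] / sum(final.values())
print(round(share_b, 3))  # post_b's share grows well past its initial 12/22 ≈ 0.545
```

With boost_power of exactly 1 the posts’ relative shares stay fixed; anything superlinear turns a tiny initial difference, whatever its cause, into a widening gap, which is why an observed gap alone cannot identify its cause.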

LinkedIn provided some insight into what’s working well now. The company said its user base has grown and, as a result, the posting rate is up 15% year over year while comments are up 24%, which means more competition for attention in the feed. Posts about career insights and lessons, industry news and analysis, and educational or informational content about work, business, and the economy all perform well, the company said.

If anything, people are just confused. “I want transparency,” Michelle said.

However, since companies have always closely guarded their content-selection algorithms as trade secrets, and transparency can make those algorithms easier to manipulate, that is a big ask, and one that is unlikely ever to be fully satisfied.


Posted December 12, 2025
