Meta is rolling out new AI content enforcement systems while reducing reliance on third-party vendors

📖 **Source**: TechCrunch

📂 **Category**: AI, Apps, Social, Facebook, Instagram, Meta

📌 **What You’ll Learn**:

Meta announced Thursday that it has begun rolling out more advanced artificial intelligence systems to handle content enforcement as it plans to cut back on third-party vendors. Content enforcement tasks include detecting and removing content related to terrorism, child exploitation, drugs, fraud and scams.

The company says it will deploy more advanced AI systems across its apps once they consistently outperform current content enforcement methods. At the same time, it will reduce its reliance on third-party vendors for content enforcement.

“While we will still have people reviewing content, these systems will be able to do work that is better suited to the technology, such as frequent reviews of graphic content or areas where adversarial actors are constantly changing their tactics, such as illicit drug sales or fraud,” Meta explained in a blog post.

Meta believes these AI systems can detect more violations with greater accuracy, better prevent fraud, respond more quickly to real-world events, and reduce over-enforcement.

The company says early tests of the AI systems have been promising: they can detect twice as much violating adult sexual content as human review teams, while also reducing the error rate by more than 60%. It also says the systems can identify and block more impersonation accounts targeting celebrities and other high-profile individuals, as well as help stop account takeovers by detecting signals such as logins from new locations, password changes, or modifications to the profile.

Additionally, Meta says the systems can identify and mitigate around 5,000 fraud attempts per day, where scammers try to trick people into giving up their login details.

“Experts will design, train, supervise and evaluate our AI systems, measure performance and make the most complex, high-impact decisions,” Meta wrote in the blog post. “For example, people will continue to play a key role in how the riskiest and most important decisions are made, such as appeals of account deactivations or reports to law enforcement.”

The move comes as Meta has loosened its content moderation rules over the past year or so, since President Donald Trump took office for a second term. Last year, the company ended its third-party fact-checking program in favor of an X-style community notes model. It also lifted restrictions on “topics that are part of mainstream discourse” and said users would get a more “personalized” approach to political content.

It also comes as Meta and other major tech companies face several lawsuits seeking to hold social media giants liable for harms to children and young users.

Meta also announced Thursday that it is launching a Meta AI support assistant that will give users 24/7 support. The assistant is rolling out globally in the Facebook and Instagram apps for iOS and Android, and within the Facebook and Instagram Help Centers on desktop.


🕒 **Posted on**: March 19, 2026
