🔥 Discover this must-read post from TechCrunch 📖
📂 **Category**: AI, Security, chatbot, Copilot, cybersecurity, data protection, Microsoft
✅ **What You’ll Learn**:
Microsoft has confirmed that a flaw allowed its Copilot AI software to summarize confidential customer emails for weeks without permission.
The bug, first reported by Bleeping Computer, had allowed Copilot Chat to read and outline the contents of emails since January, even for customers who had data loss prevention (DLP) policies in place to stop their sensitive information from being absorbed into Microsoft’s large language model.
Copilot Chat allows paying Microsoft 365 customers to use the AI-powered chat feature in Office products, including Word, Excel, and PowerPoint.
Microsoft said the bug, which administrators can track as CW1226324, means that draft emails sent “with a confidential label applied are incorrectly processed by Microsoft 365 Copilot chat.”
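Administrators who want to check whether the CW1226324 advisory applies to their tenant can look it up in the Microsoft 365 message center programmatically. A minimal sketch using the Microsoft Graph service-announcement endpoint; the access-token handling is a placeholder, and a real call assumes an app registration with the `ServiceMessage.Read.All` permission:

```python
# Sketch: building the Microsoft Graph request for a specific
# message-center post (here, the CW1226324 Copilot advisory).
# The token acquisition is omitted -- it is an assumption that the
# caller already has a valid bearer token for their tenant.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def message_center_url(message_id: str) -> str:
    """Return the Graph URL for a single message-center post."""
    return f"{GRAPH_BASE}/admin/serviceAnnouncement/messages/{message_id}"

url = message_center_url("CW1226324")
print(url)

# With a valid token, the actual lookup would look like:
#   import requests
#   resp = requests.get(url, headers={"Authorization": f"Bearer {token}"})
#   print(resp.json().get("title"))
```

This only constructs and prints the request URL; the commented-out `requests.get` call shows the shape of the authenticated lookup without executing it.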
The tech giant said it began rolling out a fix for this flaw earlier in February. A Microsoft spokesperson did not respond to a request for comment, including a question about how many customers were affected by the flaw.
Earlier this week, the European Parliament’s IT department told lawmakers it had blocked AI features built into their work-issued devices, citing concerns that AI tools could upload potentially confidential correspondence to the cloud.
⚡ **What’s your take?**
Share your thoughts in the comments below!
#️⃣ **#Microsoft #Office #bug #exposed #confidential #customer #emails #Copilot**
🕒 **Posted on**: February 18, 2026 (Unix timestamp 1771426211)
