Makes sense. It's legalese for "we're not responsible for inaccurate, dangerous, or negative output from the AI service." Of course they're working to build a more accurate and "aware" AI model, but they're not going to take the blame for users who blindly act on system responses or advice that's wrong or harmful.
Microsoft is establishing Copilot as its primary AI brand across Windows, Microsoft 365, and GitHub. However, the terms of use, warnings, and disclaimers differ for each Copilot. In doing so, the Redmond-based tech giant is undermining trust in its own AI tool.