The May 20 meeting of the Project Cortex Office Hours series focused on content understanding and Managed Metadata Services (MMS). We looked at how Project Cortex allows you to automatically classify content, streamline content processes, protect and manage content, and leverage managed metadata.
View the recording, browse the deck, or learn about the upcoming taxonomy Graph APIs to see how Project Cortex works across Microsoft 365 to intelligently categorize content and automate processes.

Want to get the latest Project Cortex news and information? Join our mailing list.

Recording - May 20: Content understanding and MMS

View the recording of the May 20 Project Cortex Office Hours meeting

View the Teams live event recording

Presentation - May 20: Content understanding and MMS

See what we presented in the May 20 Project Cortex Office Hours meeting

View the presentation

Taxonomy Graph APIs - coming soon

Get the latest preview documentation, including sample code, operations, enumerators, and more

Learn more
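
To give a feel for what the taxonomy Graph APIs may expose, here is a minimal sketch that builds Microsoft Graph URLs for reading term store data. The `beta` version segment, the exact resource paths, and the IDs shown are assumptions based on the preview; consult the published documentation for the final shapes, and note that real calls also require an authenticated request with appropriate Graph permissions.

```python
from typing import Optional

def term_store_url(site_id: str, set_id: Optional[str] = None) -> str:
    """Build a Microsoft Graph URL for reading taxonomy (term store) data.

    Paths are assumptions from the preview: without a set_id, list the
    site's term groups; with one, list the terms in that term set.
    """
    base = f"https://graph.microsoft.com/beta/sites/{site_id}/termStore"
    if set_id is None:
        return f"{base}/groups"           # list term groups
    return f"{base}/sets/{set_id}/terms"  # list terms in a term set

# Example with hypothetical IDs (a real site ID comes from Graph itself):
url = term_store_url("contoso.sharepoint.com,abc123", "def456")
```

You would pass the resulting URL to an authenticated GET request (for example via the Microsoft Graph SDKs) rather than calling it anonymously.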

Q & A

Thank you for all your questions during the Microsoft Teams live event. Even if we don’t answer them on the air, we use your questions to inform our Project Cortex FAQ and technical documentation.

A selection of questions and responses from the Q&A portion of the meeting follows.

Q: How do I get access to the modern Managed Metadata Services (MMS)?

A: MMS is part of SharePoint and has been upgraded on both the back end and in the modern UX. It is now a suite-wide service, available across Microsoft 365. Some additional new features will be part of Project Cortex.

Q: Will these new experiences be accessible to non-SharePoint admins? 

A: No, the new experiences won’t be accessible to non-SharePoint admins, because the experiences are housed within the admin center. You’ll need to be a SharePoint admin to see the global term store and the content type gallery. We are actively working on modernizing the site collection and site admin experiences. As they are modernized, you’ll see one modern experience throughout all your admin interactions.

Q: Will the content type gallery replace the content type hub for content type publishing?

A: No, the content type gallery won’t directly replace the content type hub; there will still be a content type hub. The content type gallery is intended to be a more flexible and more centralized way than the content type hub for you to create that master pool of content types that can be used throughout your organization.

Q: With what formats does machine training work?

A: You can apply machine training to Office documents, PDFs, and images. Our documentation will be clear about the supported file types. While certain Office file types are supported, it might be more difficult to build models from file types with a lot of layout and formatting nuance – but it is possible. The goal is to support all Office file types, PDFs, and images.

Q: Do you need to create a library for each model?

A: There are two different types of models – one for structured content and one for unstructured content. You apply form processing models for structured content to a library, because you start from a library to go into AI Builder to build your model for that library. You can apply the models for unstructured content, which you build in the Content Center, to multiple libraries.