FAQs
- Do you have a question?
- Introduction
- How is the Shape Repository connected to the AIM@SHAPE-VISIONAIR projects?
- What is the purpose of the Shape Repository?
- What is the vision for this repository?
- Rights and responsibilities
- Are these models free for everyone?
- How should the models be acknowledged?
- Anything else about usage?
- Who can add models?
- DSW5 Registration
- File formats
- Group models
- Browsing and searching the repositories
- How do I browse for models?
- How do I browse a group of models?
- How do I see more details about a model?
- How do I download a model?
- How do I preview a 3D model before I download it?
- How do I browse for tools?
- How do I see more details about a tool?
- How do I browse the DSW5 ontologies?
- How do I learn more about the DSW5 ontologies?
- How do I search for models?
- How do I use the keywords search for models?
- Why use the semantic search? What’s different from the simple keyword search?
- How do I use the semantic search for models?
- How do I use the SPARQL language to search for models?
- How do I use the geometric search for models?
- How do I search for tools?
- How do I use the keywords search for tools?
- How do I use the semantic search for tools?
- How do I use the SPARQL language to search for tools?
- How do I browse the Glossary terms?
- Multi-resolution download
- What are multi-resolution downloads?
- That's pretty cool! What shape categories and formats are supported?
- Where can I learn more about multi-resolution representations of meshes?
- Inserting Tools, Models, groups of models and user groups (only for registered users)
- How do I insert a new model?
- How do I insert multiple models (batch upload) with a zip file?
- How do I temporarily save the model/tool metadata without inserting the new model/tool?
- Where can I find my temporarily saved model/tool metadata?
- How do I edit/delete a model that I have previously uploaded?
- How do I create a new (empty) group of models?
- How do I create a new group of models and upload a representative model?
- How do I insert a model to an existing group of models?
- How do I insert a new tool?
- What is the Functionality of a tool?
- How do I edit/delete a tool that I have previously uploaded?
- Something went wrong while I was adding a model, group of models or a tool. What should I do?
- How do I create a new LDAP user group of DSW5 registered users?
- How do I edit/delete an LDAP user group of DSW5 registered users?
- How do you extract metadata from shape models?
- Inserting and browsing medical data (only for registered users)
- How do I insert a new medical model?
- How do I browse medical data?
- How can I upload part-based annotations?
- How do I browse part-based annotations?
- Inserting (only for registered users) & Browsing Workflows
- Which kinds of workflows are considered?
- How do I browse for workflows?
- How do I insert a new Workflow?
- How do I create a new Executable Workflow?
- How do I create a new Static Workflow?
- What is the difference between simple-activities and macro-activities?
- How do I insert an activity?
- Where can I find previously uploaded workflows?
- How can I execute a previously uploaded Executable Workflow?
- How can I edit/remove a workflow that I have previously uploaded?
- What are Executable Web Services?
- Inserting (only for registered users) & Browsing Shape models and ontology for the manufacturing domain
- Which kinds of shapes are considered?
- Which standards are considered for information organisation?
- Why can't I see the complete Manufacturing Shape Ontology?
- How can I insert a manufacturing element and its shape representations?
- Are there any dependencies in the object creation?
- Can I upload several representations for the same manufacturing element?
- How can I insert an avatar?
- How can I insert a product with its related working steps?
How can I contact the DSW5 administrators? |
If you have a question, a problem or an error page to report, want to report a bug, or need to reach us for any other reason, you can contact the VVS team through the “Contact us” form located at the bottom of the left menu. When you submit the form, an email is automatically sent to the VVS administrator.
How is this Shape Repository connected to the AIM@SHAPE-VISIONAIR projects? |
The Shape Repository v5 is a key component of the e-Science framework of tools and services for modeling, processing, and interpreting digital shapes, developed within the AIM@SHAPE and VISIONAIR projects.
What is the purpose of the Shape Repository? |
The repository is primarily meant to facilitate the research process.
For developing new algorithms, it is important to have a number of small
and easily manageable shapes that cover all necessary test scenarios.
This enables efficient prototyping as well as a first proof of
concept. On the other hand, for practical evaluation, real-world or
large-scale benchmarks have to be considered.
The Shape Repository provides digital shapes of varying complexity for any of these purposes.
What is the vision for this repository? |
Once a critical mass of models is reached, the Shape Repository will become the European reference database of digital shapes, similar to other repositories available on the web, e.g. the Stanford 3D Scanning Repository, the NIST repository. Consequently, its content will be presented and referenced in numerous publications, serving as an important means of public relations for the entire AIM@SHAPE-VISIONAIR network.
Are these models free for everyone? |
Most of them are. At the time of upload, model owners are asked to specify whether they would like to open the models to the public or keep them restricted to AIM@SHAPE-VISIONAIR partners. A large portion of the models in the repository has been made open to the public. However, each model is governed by terms of use, which are prescribed in the accompanying licence(s). Please go through them when downloading a model from the repository.
How should the models be acknowledged? |
All such legal details are clarified in the licence(s) accompanying each model. We request that you take the time to go through these short licence(s) when downloading a model.
Anything else about usage? |
Yes, we request you to be mindful of people's feelings. Some models represent artifacts of religious, cultural and/or historical significance, e.g. the Max Planck model. Please handle these models with the care and respect you would otherwise observe for the original artifact. Please refrain from conducting "amusing" experiments on them, e.g. morphing, animation, Boolean operations etc. For these purposes, feel free to use any of the other models in the repository!
How do I register to DSW5? |
Every visitor can request to register in DSW5. To register, click on the “Register” link located at the upper right corner of every DSW5 page. The user registration procedure requires the user to fill in the form shown in Figure 1. The required information for every new user is: first name, last name, the organization they are affiliated with and a valid email address. Upon submitting the registration form, an email is automatically sent to the VVS administrator, who will accept (or reject) your registration request. You will be notified by email whether the registration is accepted or rejected. If accepted, the email sent to you will also contain the automatically generated login information and password for your account.
Figure 1: The user registration form.
How do I login to DSW5? |
To login click on the “Login” link located at the upper right corner of every DSW5 page. The user login page is shown in Figure 2 below.
Figure 2: The DSW5 user login page.
I forgot my password. |
There is a user account recovery mechanism at the bottom of the “Login” page. You can request a password recovery by providing the email address that was used during your registration. An email with all the account information will automatically be sent back to you.
What is the Profile page of a registered user? |
All registered users are able to upload resources (e.g. models or tools) and to delete their own uploaded resources or edit their metadata. This can easily be done from the registered user’s profile page, which is accessible from the upper right corner of every DSW5 page. As shown in Figure 3 below, the profile page contains a summary of the resources that the user has uploaded/created, i.e. models, tools, groups of models, groups of users and temporarily saved (unfinished) metadata for models or tools. The profile page is divided into 5 sections, each one containing detailed information regarding the above resources and the possible management actions that the user can apply to them (e.g. edit or delete).
Figure 3: Example profile page of a registered user.
Which file formats are supported? |
The repository accepts files in any format.
The only restriction applies to 3D thumbnails, which must be in .off format.
For 3D mesh models in OFF, VRML, PLY (binary and ASCII) and OBJ formats,
some metadata and the 3D thumbnail are automatically computed.
Geometric search is applicable only to shapes in the above formats or
provided with the corresponding 3D thumbnail.
Some third party resources for these file formats are listed below.
- OFF: documentation, Unix/Linux viewer, Windows viewer
- VRML: Unix/Linux viewer, Windows viewer
- PLY: documentation, tools, Unix/Linux viewer, Windows viewer
- OBJ: documentation, Windows viewer
For more information on formats and free viewers and converters, see here.
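Since 3D thumbnails must be supplied in .off format, the following minimal sketch (given purely as an illustration) shows what an ASCII OFF file for a single triangle looks like: the header keyword, a line with the vertex, face and edge counts, the vertex coordinates, and then each face listed as a vertex count followed by vertex indices.

    OFF
    3 1 3
    0.0 0.0 0.0        # vertex 0
    1.0 0.0 0.0        # vertex 1
    0.0 1.0 0.0        # vertex 2
    3 0 1 2            # one triangular face using vertices 0, 1, 2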
What are group models? |
Group models are envisaged to act as containers for
logically related models. Some examples of models logically related to
each other are:
remeshings of the same model with modified parameter values, one shape
in several file formats, multiple range images of a shape possibly
accompanied by one or more aligned and registered models. Instead of
adding these models to the repository as several individual models,
they can be added as a group of models.
How do I add a group model? |
Please note that only registered users can add models to the
repository.
For more information about adding a group of models, see section 9 in this FAQ.
How do I browse for models? |
To browse for models, click on the “Shapes” tab (which takes you to the Shape Repository home page) and click on the “Browse models” link located in the left menu. The “Browse models” page is shown in Figure 4. There are different ways of displaying and filtering the models stored in the Shape Repository. The models can be browsed by category (e.g. Point set, Manifold surface mesh etc.) and can be sorted by quality (default option), upload time, name, uploader and number of downloads (i.e. most popular models). Another display option is related to the ‘group of models’ concept: the default is to show only single models (models that do not belong to a group) and the group representative models (one ‘representative’ model per group of models). If you switch the display option between ‘Single models and group representatives’ and ‘All models’, you will notice the different number of results displayed (the first option limits the displayed models significantly). The default option for browsing models is 12 per page, but there is also the option of displaying 24, 36 or 48 models per page.
For each model, a brief overview of the most important information is shown in the browse page. This information includes: the name of the model, the category, the format, the size in KB, the uploader (i.e. the registered user who uploaded it), the creator, the upload date, the number of downloads (for the specific model), the number of group downloads in the case of a group model (i.e. the sum of the downloads of all the models belonging to this group), the thumbnail image and the description of the model. For single models, there are two buttons at the end: ‘view model’ and ‘download’. For a group model, there is only a ‘view group’ button.
Figure 4: The Shape Repository browse models page
How do I browse a group of models? |
When browsing for models (see 2.1) the models that belong to a group of models have a ‘view group’ button at the end. The ‘view group’ page (shown in Figure 5) displays the group id, the group name, the group description, a button for showing/hiding additional group metadata information, the group representative model and all the other models that belong to the group.
Figure 5: The Shape Repository view group of models page
How do I see more details about a model? |
When you click on a specific model, either from the ‘Browse Models’ page or the ‘view group’ page, a new page with more detailed information about the model is displayed (see Figure 6). The ‘view model’ page contains the model name, the model id in the Repository, information about its group (if it belongs to a group) and the model metadata. The model metadata are divided into three categories: the basic model info (i.e. overview information that is also displayed in the ‘Browse Models’ page), the common metadata (i.e. metadata properties that are common to all the models in the Repository – the properties of the class ShapeRepresentationAndDescription) and the class-specific metadata (i.e. the metadata properties that belong to the specific class/category of the model). The default behavior of the ‘view model’ page is to display the values of the non-empty properties. However, you can choose to show all properties (even the empty ones) by clicking on the ‘display all metadata fields’ radio button.
The ‘view model’ page also shows the thumbnail image of the model. In addition, you can see all the model’s gallery and thumbnail images by clicking on the provided link (see Figure 7). Moreover, there is a button called ‘Find models with similar shape’ that uses the Geometric Search Engine to find similar models, using this model as the basis for comparison.
Note: Please keep in mind that whether or not you have the privileges to see this ‘view model’ page depends on the value of the metadata property hasVisibilityLevel.
Figure 6: The Shape Repository view model page
Figure 7: The SR view gallery and thumbnail images page
How do I download a model? |
A model can be downloaded either from the ‘view model’ page or from the ‘Browse Models’ page. After you agree to the applicable licenses (in most cases the AIM@SHAPE-VISIONAIR Shape Repository General License), the system automatically bundles the original model, the thumbnails and the metadata (in HTML format) into a compressed RAR package.
For some models (mainly meshes) it is possible to choose the quality (resolution) of the downloaded model as well as the desired download file format.
How do I preview a 3D model before I download it? |
For some models it is possible to view a simplified version of the 3D model online with an OFF mesh viewer applet (see Figure 8). This is determined automatically by the system, and the ‘view 3D model’ option only appears if it is available for the specific model.
Figure 8: View a simplified version of the 3D model online
How do I browse for tools? |
To browse for tools, click on the “Tools” tab (which takes you to the Tool Repository home page) and click on the “Browse tools” link located in the left menu. The “Browse tools” page is shown in Figure 9. There are different ways of displaying and filtering the tools stored in the Tool Repository. The tools can be browsed by category (e.g. Independent application, Library etc.) and can be sorted by name (default option), upload time, owner and availability (i.e. the infrastructure from which the tool is available). Other filtering options include: the tool functionality, the availability of the tool in a specific infrastructure and the execution platform of the tool. The default option for browsing tools is 6 per page, but there is also the option of displaying 4, 12 or 24 tools per page.
For each tool, a brief overview of the most important information is shown in the browse page. This information includes: the name of the tool, the tool type (category), the execution platform, the availability (i.e. the name(s) of the infrastructure(s) from which the tool is available), the license, the owner of the tool, the thumbnail image and the description of the tool. There are two buttons at the end of each tool: ‘view tool’ and ‘edit tool’.
Figure 9: The Tool Repository browse tools page
How do I see more details about a tool? |
When you click on a specific tool from the ‘Browse Tools’ page, a new page with more detailed information about the tool is displayed (see Figure 10). The ‘view tool’ page is divided into three sections: the ‘Overall description’ (i.e. overview information that is also displayed in the ‘Browse Tools’ page), the ‘Tool links and information’ (which mostly contains links to the actual tool web site and references) and ‘Other metadata’ (i.e. other metadata properties of the Tool Common Ontology).
How do I browse the DSW5 ontologies? |
To browse the DSW5 ontologies, click on the “Ontologies” tab and then click on the “Browse ontologies” link located in the left menu. This page opens an applet which displays (in different tabs) the class hierarchy of the 5 ontologies developed during AIM@SHAPE (3 domain ontologies and 2 common ontologies) by using a hyperbolic tree representation (see Figure 8). For more information about the ontologies, click on the “Ontologies in the project” link located in the left menu (see 2.7).
A short description of every ontology is provided in the top right corner of the applet. Some classes also have descriptions or other information, which is displayed in the bottom right corner of the applet when you left-click on a class.
When you right-click on a class, a list of options is displayed: “Show individuals”, “Create individual”, “Edit individual” and “Delete individual”. When you click on “Show individuals”, a new page is opened and you can browse the individuals (instances) of the selected class.
Note: The options “Create individual”, “Edit individual” and “Delete individual” are only available for registered users. Also, it is highly recommended that you use the model upload and tool upload pages to insert models and tools respectively.
How do I learn more about the DSW5 ontologies? |
To learn more about the DSW5 ontologies, click on the “Ontologies” tab and then click on the “Ontologies in the project” link located in the left menu. Several short tutorials are provided: an introduction to the concept of ontologies, an overview of the Common Shape ontology (SCO) and the Common Tool ontology (TCO), and a description of the 3 domain ontologies (i.e. the Virtual Humans ontology, the Product Design ontology and the Shape Acquisition and Processing ontology).
The Common Shape ontology (SCO) is also available from any Shape Repository page (click on the “Shape Ontology Tutorial” link located in the left menu). The Common Tool ontology (TCO) is also available from any Tool Repository page (click on the “Tool Ontology Tutorial” link located in the left menu).
How do I search for models? |
There are three (3) ways to search for models: the simple keyword search, the advanced search (i.e. the semantic search and the SPARQL endpoint) and the geometric search.
How do I use the keywords search for models? |
To use the simple search interface, click on the “Shapes” tab and then click on the “Keyword search” link located in the left menu. The query text can contain one or more words, and the asterisk (“*”) can also be used as a multiple-character wildcard (for example, a query such as arm* matches model names beginning with “arm”).
The keyword search has different filtering options based on different search requirements. The Shape Repository keyword search only provides filtering by model category (i.e. instances/individuals of specific classes in the SCO ontology). The user can select one or more categories (see Figure 12) to limit the search results. The default action is to search in every category and show all individual model results. It is also possible to search for ‘single models and group representative models’. This option essentially excludes from the search all model group members and only searches the representative model of the group and, of course, the models that do not belong to a model group.
Other display options include the “Sort by” button, which can display results sorted by name, quality, upload time, uploader and number of downloads. You can also select the number of models per result page.
Figure 12: The model keyword search web page
Why use the semantic search? What’s different from the simple keyword search? |
The goal of the semantic-based search mechanism is not simply to search for and retrieve resources, but also to search every aspect of knowledge captured in the representation of the resources in the ontologies. We differentiate between browsing and discovering DSW5 resources (simple/keyword search) and using semantic criteria based on the ontology schemas and their instances/individuals. This information may be described either explicitly or implicitly. Explicit information (metadata) includes datatype property values (i.e. specific alphanumeric values of properties), object property values (i.e. relations between classes or instances) or logical expressions/combinations of both. Implicit information can be inferred from explicit information about a resource, and this requires the integration of an inference engine (Pellet in our case).
How do I use the semantic search for models? |
To use the advanced (semantic) search interface, click on the “Semantic search” link located in the left menu of every DSW5 web page. Then select the “Shape Common Ontology (SCO)” option and click on the “Semantic search” button to search for models. Please note that loading the ontology into the Pellet reasoner may take a few seconds, so please be patient.
The guided semantic search interface (see Figure 13) helps you to make efficient and appropriate queries to the Inference Engine and abstracts the Search API and the underlying reasoner logic. In this interface you are guided in formulating semantic criteria by visualizing all possible options and query refinements, depending on the current context. For example, the ‘CATEGORY’ box dynamically displays all the classes of the selected ontology, the ‘PROPERTIES’ box dynamically displays all the properties of the selected category (and changes every time the user makes another selection) and the ‘SEARCH OPTIONS’ box dynamically changes according to the type of property (or properties) selected. The AND/OR logical operators can be applied between the properties, and appropriate comparison operators can be applied to each property (e.g. equal/not equal, ‘=’, ‘<’, ‘>’ etc.).
Figure 13: The guided semantic search interface
How do I use the SPARQL language to search for models? |
To use the SPARQL endpoint, click on the “Semantic search” link located in the left menu of every DSW5 web page and then click on the “SPARQL search” button. To search for models, select the “Shape Common Ontology (SCO)” option (see Figure 14).
The SPARQL endpoint provides an alternative way of formulating queries using a text-based SPARQL search interface. You can formally query the underlying knowledge base using the SPARQL language. Please note that using this search interface depends on your comprehension of the domain and the way the domain knowledge has been structured. Therefore, in order to use the SPARQL search engine, you have to be familiar with the structure of the ontologies and also have some experience in forming SPARQL queries. To get you started, a set of simple example SPARQL queries to create, refine and submit is also provided.
The results of SPARQL queries are returned in HTML format. However, since the search results may not always be displayed clearly in HTML, we also provide the option of displaying (and downloading) the query results in XML or formatted text.
Figure 14: The SPARQL endpoint
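As a purely illustrative sketch (the namespace IRI and the property name hasName are assumptions; use the actual prefixes and property names shown in the endpoint and in the Shape Ontology Tutorial), a query of the following form would list models of the ManifoldSurfaceMesh category together with a name property:

    PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
    # The SCO namespace below is a placeholder; replace it with the one used by the endpoint.
    PREFIX sco: <http://example.org/SCO#>

    # List manifold surface meshes and their (assumed) name property
    SELECT ?model ?name
    WHERE {
      ?model rdf:type sco:ManifoldSurfaceMesh .
      ?model sco:hasName ?name .
    }
    LIMIT 20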
How do I use the geometric search for models? |
To use the geometric search, click on the “Shapes” tab and then click on the “Geometric search” link located in the left menu.
In addition, when you view more details about a model, there is a button called ‘Find models with similar shape’ that utilizes the Geometric Search Engine to find similar models using this model as a basis for comparison.
How do I search for tools? |
There are two (2) ways to search for tools: the keyword search and the semantic search. You can also use the SPARQL endpoint.
How do I use the keywords search for tools? |
To use the simple search interface, click on the “Tools” tab and then click on the “Keyword search” link located in the left menu. The query text can contain one or more words, and the asterisk (“*”) can also be used as a multiple-character wildcard. The tool keyword search also allows an empty query text (meaning: show all the tools stored in the repository).
The tool keyword search provides several filtering options (see Figure 15): filtering by tool category (i.e. instances/individuals of specific classes in the TCO ontology), by tool functionality (based on the values of the property hasFunctionality in the TCO), by tool availability at a specific infrastructure (based on the values of the property isAvailableAtInfrastructure in the TCO), by tool execution platform (based on the values of the property hasExecutionPlatform in the TCO) and by tool input and output format (based on the values of the properties hasInput and hasOutput in the TCO).
Other display options include the “Sort by” button, which can display results sorted by name, upload time, owner and uploader. You can also select the number of tools per result page.
Figure 15: The tool keyword search web page
How do I use the semantic search for tools? |
To use the advanced (semantic) search interface, click on the “Semantic search” link located in the left menu of every DSW5 web page. Then select the “Tool Common Ontology (TCO)” option and click on the “Semantic search” button to search for tools. For more information about the semantic search, see 2.10. Please note that loading the ontology into the Pellet reasoner may take a few seconds, so please be patient.
The guided semantic search interface works exactly as described for models: you are guided in formulating semantic criteria by selecting a category, its properties and the corresponding search options, and you can combine criteria with the AND/OR logical operators and apply the appropriate comparison operators to each property (e.g. equal/not equal, ‘=’, ‘<’, ‘>’ etc.).
How do I use the SPARQL language to search for tools? |
To use the SPARQL endpoint, click on the “Semantic search” link located in the left menu of every DSW5 web page and then click on the “SPARQL search” button. To search for tools, select the “Tool Common Ontology (TCO)” option. For more information on using the SPARQL endpoint see the previous questions in this FAQ.
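As with the model example above, a minimal illustrative query against the TCO (the namespace IRI is a placeholder; hasExecutionPlatform is a TCO property mentioned earlier in this FAQ) might look like this:

    # The TCO namespace below is a placeholder; replace it with the one used by the endpoint.
    PREFIX tco: <http://example.org/TCO#>

    # List tools together with their execution platform
    SELECT ?tool ?platform
    WHERE {
      ?tool tco:hasExecutionPlatform ?platform .
    }
    ORDER BY ?tool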
How do I browse the Glossary terms? |
To browse the Glossary, click on the “Glossary” tab. The AIM@SHAPE Glossary is a controlled vocabulary that contains a selected set of three hundred seventy three (373) terms and their definitions which are distinctive in the domain of shape modeling.
There are two ways to browse the Glossary (see Figure 16): alphabetically (by selecting a letter of the alphabet) or by displaying all the terms. A keyword glossary search is also provided. In addition, there is the option of downloading the whole glossary as a PDF document.
Figure 16: The Glossary home page.
What are multi-resolution downloads? |
Using the MT library, we can internally convert some
shape models to MT's native mtf format. An mtf file can then be
queried with a quality value to extract a mesh at the desired quality. Higher
qualities are closer to the original mesh, while lower qualities are
drastically simplified. The output mesh can be returned in several file formats.
Supported models in the repository are automatically processed and
made available for extraction in the desired quality and format.
That's pretty cool! What shape categories and formats are supported? |
Conversion to mtf format is done by the TriMesh2MT tool. According to its README file dated November 2005, TriMesh2MT accepts triangular surface meshes in OFF, IV and VRML (1.0 and 2.0) formats.
Meshes can be currently extracted from mtf files in OFF, PLY and VRML formats.
Where can I learn more about multi-resolution representations of meshes? |
Try looking at the homepage of the MT library.
How do I insert a new model? |
To insert a new model, click on the “Shapes” tab and then click on the “Upload Models” link located in the left menu. Next select the “upload single model” option and click the “Continue” button. After that, you must select the model category from the Common Shape Ontology hierarchy. The available categories (leaf classes) are highlighted in red.
The model upload procedure has two steps. The first step includes filling the model metadata values (i.e. the properties as they are defined in the Common Shape Ontology) for the specific category you have selected. An example is shown in Figure 17.
…
Figure 17: The model upload page - filling the model metadata.
Some metadata attributes are required (e.g. the model name) but most of them are optional. However, it is recommended that you fill in as many metadata values as possible in order to enable users to search for models more efficiently.
Context sensitive help is also provided since the meaning of some properties may not be clear to every user. When the mouse passes over the property name (or the small question mark next to the property name), a description/explanation of the property and sometimes the property allowed values, is displayed. This help is dynamically generated from the RDF comments that were defined in the Common Shape Ontology.
You can also temporarily save the metadata values you have inserted until this moment, if you click on the “Temporary save metadata” button. The saved metadata can be used at a later time e.g. to continue the model upload procedure.
After you finish with the metadata values you want to insert, click on the “Insert metadata” button to continue with the second step of the procedure. This second step involves uploading the actual file of the model and, optionally: a thumbnail image, a 3D thumbnail, other gallery images and related resources. “Related resources” can be any kind of file that can be used for the documentation of the uploaded model (e.g. a scientific paper).
Note: It is recommended that you upload a thumbnail image. For some models (e.g. meshes) a thumbnail image can be automatically generated. If you don’t upload a thumbnail image, the automatically generated image will be used.
How do I insert multiple models (batch upload) with a zip file? |
It is possible to insert multiple models (batch upload) which may be organized in folders inside a zip file. To do that, click on "Shapes" tab and then click on the "Upload Models" link located in the left menu. Next select the "Upload multiple models (batch upload)" option and click the "Continue" button.
For each folder a new shape group will be created. Potential relationships between sub-folders (e.g. subgroups) have to be defined manually by editing the group metadata at a later time from the user's profile page. The group representative model is automatically set to the first model (file) in the folder. All models inside the zip file should belong to the same Common Shape Ontology category. The group/shape metadata filled in by the user will be used as a template for all the groups/shapes created.
How do I temporarily save the model/tool metadata without inserting the new model/tool? |
Before continuing to the second step of the model/tool upload procedure, you can temporarily save the metadata values you have inserted until this moment by clicking on the “Temporary save metadata” button. The saved metadata can be used at a later time e.g. to continue the model/tool upload procedure or as a template for multiple model/tool uploads (e.g. if you want to upload many models/tools with similar metadata).
Where can I find my temporarily saved model/tool metadata? |
Any temporarily saved model/tool metadata are accessible through your profile page. All the temporarily saved metadata are chronologically ordered.
How do I edit/delete a model that I have previously uploaded? |
To edit or delete a model you have uploaded to the Shape Repository, go to your profile page. All the models you have uploaded are displayed in chronological order. Then select the desired management action.
How do I create a new (empty) group of models? |
To create a new (empty) group of models, click on the “Shapes” tab and then click on the “Upload Models” link located in the left menu. Next select the “upload model to an existing group of models” option and click the “Continue” button. After that, you need to fill in all the group metadata information. When you finish with the metadata, click on the “Insert shape group metadata” button.
How do I create a new group of models and upload a representative model? |
To create a new group of models and upload a representative model, click on the “Shapes” tab and then click on the “Upload Models” link located in the left menu. Next select the “upload model to an existing group of models” option and click the “Continue” button. After that, instead of selecting an existing shape group, click on the “create a new group of models and upload a representative model” button. The remaining procedure is the same as for a new model upload.
How do I insert a model to an existing group of models? |
To insert a new model into an existing group of models, click on the “Shapes” tab and then click on the “Upload Models” link located in the left menu. Next select the “upload model to an existing group of models” option and click the “Continue” button. After that, you must select the shape group (group of models) that the model will belong to and click on the “Select a shape group” button. The remaining procedure is the same as for a new model upload.
How do I insert a new tool? |
To insert a new tool, click on the “Tools” tab and then click on the “Upload Tool” link located in the left menu. Next you must select the tool type from the Common Tool Ontology hierarchy.
The tool upload procedure only has one step: filling the tool metadata values (i.e. the properties as they are defined in the Common Tool Ontology) for the specific tool type you have selected. An example is shown in Figure 19.
Some metadata attributes are required (e.g. the tool name) but most of them are optional. However, it is recommended that you fill in as many metadata values as possible in order to enable users to search for tools more efficiently.
Context sensitive help is also provided since the meaning of some properties may not be clear to every user. When the mouse passes over the property name (or the small question mark next to the property name), a description/explanation of the property and sometimes the property allowed values, is displayed. This help is dynamically generated from the RDF comments that were defined in the Common Tool Ontology.
You can also temporarily save the metadata values you have inserted until this moment, if you click on the “Temporary save metadata” button. The saved metadata can be used at a later time e.g. to continue the tool upload procedure.
After you finish with the metadata values you want to insert, click on the “Insert metadata” button and the new tool will be uploaded.
…
What is the Functionality of a tool? |
The functionality of a tool has to do with its intended purpose, usage or function. There are several existing functionality "groupings" inside the Common Tool ontology (TCO). If the functionality you want is not listed below, you can ask for another functionality to be added while uploading a new tool.
1. Acquisition: transpose real-world geometry into digital form
1.1. Measurement
1.2. Probing
1.3. Image acquisition
2. Annotation: Augment shape models with additional information
2.1. Manual: annotations defined by manual tagging or coding
2.2. Automatic: annotations are defined through automatic processes
2.3. Supervised: users are supported in the task of annotating a shape
2.4. Parameter extraction: used for calculating/measuring parameter values of a considered feature (e.g. erosion volume, erosion scoring etc.)
3. Coding: Create codes within or out of a shape model
3.1. Compression: Create a compressed code representing the shape
3.1.1. For storage: Create a compressed code to save storage requirements
3.1.2. For transmission: Create a compressed code to optimize transmission
3.2. Authentication: Create a code to authenticate the shape and/or its author
3.2.1. Watermarking: Create a code within the shape for validation
4. Convexity: Calculation of convex sets
4.1. Convexity test: tests an object for convexity
4.2. Convex hull: computes the convex hull of the input (collection or single object)
4.3. Extremal points: computes the extremal points of the input (collection or single object)
5. Distance: compute or compare distances
5.1. Euclidean: computes or compares Euclidean distances
5.1.1. Compute: computes the Euclidean distance between two objects.
5.1.2. Compare: takes three objects A, B and C and compares the distance between A and B with the distance between A and C.
5.2. Hausdorff: computes or compares Hausdorff distances
5.2.1. Compute: computes the Hausdorff distance between two objects.
5.2.2. Compare: takes three objects A, B and C and compares the Hausdorff distance between A and B with the Hausdorff distance between A and C.
5.3. Frechet:
5.3.1. Compute: computes the Frechet distance between two objects.
5.3.2. Compare: takes three objects A, B and C and compares the Frechet distance between A and B with the Frechet distance between A and C.
6. EngineeringNumericalSimulation: Perform simulations based on shape information (e.g. FEA/FEM)
7. Geometry Improvement: Process shapes to improve their geometric quality
7.1. Smoothing: A process that removes high frequencies from the signal.
7.2. Fairing: Generic process to "beautify" the shape
7.3. Denoising: A process that attempts to remove whatever noise is present in the signal (regardless of the signal's frequency content) while preserving the signal itself.
7.4. Completion: Fill large portions of missing geometry inspired by other parts of the same object or by other similar objects
7.5. Feature recovery: Reconstruct corrupted features (e.g. aliased sharp edges and corners)
8. Intersection: detects or computes the intersection of two (or more) objects
8.1. Detect: tests whether the input objects have a non empty common intersection or an empty one.
8.2. Compute: computes the common intersection between the input objects
9. Levels of Details: Create/manage shapes at various resolutions
9.1. Simplification: Reduce the number of primitives (e.g. triangles of a mesh)
9.2. Refinement: Increase the number of primitives (e.g. by resampling)
9.3. Approximation: Create an approximation of the shape, with either less elements or different sampling patterns
9.4. Progressive representation: Create a representation that can be used to generate different resolutions of the model
9.5. Multiresolution shape analysis: An analysis of the model that exploits various resolutions to discover different features
10. Meshing: tiles the input domain with elements that meet at edges.
10.1. Surface mesh: output tiling is a surface mesh (triangle, quad or polygonal)
10.2. Volume mesh: output tiling is a volume mesh (tetrahedral, hexahedral or cellular complex)
10.3. Surface extraction from volumetric data: output tiling is a surface mesh (triangle, quad or polygonal) and the input is volumetric data coming e.g. from medical images.
11. Modeling: the process of creating a new model, either from scratch or starting from existing information
11.1. Boolean operations: E.g. Constructive Solid Geometry
11.2. Styling:
11.3. Synthesis: Models are synthesized from other information
11.4. Offsetting: Models are generated by offsetting other models/surfaces
11.5. Minkowski sum: Original shapes are created through Minkowski sums of other (primitive) shapes
11.6. Blending: The process of properly merging different shapes into a single object
11.7. Trimming: The process of cutting surfaces using curves
11.8. Free-form deformation: Shapes are created by deforming other shapes
12. Optimization: Shape analysis based on optimization of specific functionals
12.1. Smallest enclosing circle, sphere, and annulus: computes the smallest circle, sphere or annulus enclosing input objects.
12.2. Smallest enclosing ellipse: computes the smallest ellipse enclosing input objects.
12.3. Rectangular p center
13. Parameterization: Compute a mapping between a surface and another domain
13.1. Planar: computes a bijective correspondence of the input onto the plane (typ. surface mesh)
13.2. Atlas generation: performs a decomposition of the input shape into planar disc-like patches.
13.3. Spherical: compute a bijective correspondence of the input onto a spherical domain.
14. Query: Computing geometrical/topological information based on a shape model
14.1. Range Search:
14.2. Nearest neighbor: Find points which are closest to a given query point
14.2.1. Single nearest neighbor: Find just the closest point
14.2.2. K-nearest neighbors: Find the k closest points
14.3. Shape matching: Analyze the similarity between different shapes
14.3.1. Matching based on shape descriptors: Similarity is computed as a distance between shape descriptors
14.3.2. Matching based on directed attribute graph: Matching is based on graph matching techniques
14.3.3. Inexact attributed graph matching: Matching is based on inexact matching techniques for attributed graphs
14.4. Shape interrogation: Calculate geometric properties
14.4.1. Area: The total area of a surface
14.4.2. Curvature: Surface curvature at a point (e.g. Gaussian, mean, etc)
14.4.3. Geodesic distance: Distance computed on the surface
14.4.4. Ridges: Analyze the shape based on the detection of high mean-curvature regions (i.e. ridges)
14.4.5. Singularities: Calculate/detect singular points/lines on a surface description
14.4.6. Self-Intersection: Calculate/detect self-intersection in the geometric realization of an object
15. Remeshing: re-tiles the input mesh to improve sampling and/or shape of elements.
15.1. Quality of elements: emphasis on the shape of elements
15.2. Size of the mesh: optimizes the trade-off between approximation quality and number of elements.
15.3. Compatible: generates a series of meshes which establish correspondences among multiple shapes. The meshes produced must share the same connectivity but not the same geometry.
15.4. Structured or Regular: amounts to replacing an unstructured input mesh by a structured one (also called regular mesh). In a structured mesh all internal vertices are surrounded by a constant number of elements. In an unstructured mesh all internal vertices are not necessarily surrounded by a constant number of elements. The structured remeshing techniques are sorted by the type and regularity of the generated mesh:
15.4.1. Semi-regular: generates a piecewise-regular mesh with subdivision connectivity.
15.4.2. Highly-regular: generates a piecewise-regular mesh.
15.4.3. Perfectly-regular: generates a perfectly regular mesh.
15.5. Uniform:
15.6. Mesh repairing:
15.6.1. Manifold: converts a polygon soup into a 2-manifold mesh. A 2-manifold mesh is a mesh where each edge shares at least one and at most two faces, and the neighborhood of a vertex is a single edge-connected component.
15.6.2. Watertight: converts a mesh with boundary into a closed model.
16. Reconstruction: Creation of a shape model starting from measured data
16.1. Registration of measurements: The process of aligning different views of an object into a single reference system
16.2. Fusion of measurement sets: The process of creating a single model by merging different (previously aligned) datasets
16.3. Reconstruction from measurement sets: The process of creating a shape model (e.g. a manifold) starting from measured data (e.g. a cloud of points)
17. Sampling: The process of creating a set of points belonging to a shape
17.1. Parametric surfaces: Create a set of points belonging to a parametric surface
17.2. Implicit surfaces: Create a set of points belonging to a surface defined implicitly
18. Statistical Analysis: Use statistics to determine interesting features of shapes
18.1. Principal component analysis: Classical PCA based on geometric entities (e.g. point location)
19. Structuring: Create useful structures based on an analysis of shapes
19.1. Subdivision: The process of creating a set of cells covering a geometric domain
19.1.1. Arrangement: The process of arranging tiles to cover a domain
19.1.2. Triangulation: The process of subdividing a planar domain into triangles
19.1.3. Delaunay: The process of subdividing a planar domain into triangles satisfying the Delaunay criterion
19.1.3.1. Regular: The domain is sampled so that the triangulation is more regular
19.1.3.2. Constrained: The Delaunay criterion is allowed to fail in specific regions in order to satisfy other constraints
19.1.4. Voronoi diagram: The process of subdividing the space into Voronoi cells
19.1.5. Power diagram: The process of partitioning the Euclidean plane into polygonal cells defined from a set of circles
19.1.6. Polygon decomposition: The process of subdividing a manifold into polygons
19.1.7. Convex decomposition: The process of subdividing a domain into convex cells
19.2. Medial axis: Computing the set of all points having more than one closest point on the object's boundary
19.2.1. Exact computation: Processes that compute the exact medial axes (i.e. might be complex with numerous "spurious" arcs)
19.2.2. Approximation: Processes that compute an approximation of the medial axis (e.g. by "pruning" an exact medial axis, or through other techniques)
19.3. Clustering: The process of dividing a dataset into mutually exclusive groups such that the members of each group are as "close" as possible to one another, and different groups are as "far" as possible from one another, where distance is measured with respect to all available variables.
19.3.1. Segmentation: The process of subdividing a geometric object into disjoint connected subsets that cover the entire object
19.3.2. Multiscale labeling: The process of assigning "labels" to regions of the shape at various scales
19.3.3. Image segmentation: The process of subdividing an image into disjoint connected subsets that cover the entire image
19.3.4. 3D model segmentation: The process of subdividing a 3D object into disjoint connected subsets that cover the entire 3D object
19.4. Topology decomposition: The process of decomposing a shape into topologically simpler regions
19.5. Reeb graph: The process of calculating the connectivity of the level sets of a shape
19.6. Graph partitioning: The process of subdividing a graph into subgraphs
20. Transforms: computes the transform of the input
20.1. Rigid motion: under translation, rotation or a combination of the two
20.2. Affine transforms: under general affine transform
20.3. Projective transforms: under projective transforms
20.4. Moebius: under Moebius transforms
20.5. Inversions: under inversions
20.6. Combine transforms: under combination of transformations
21. Visibility: Computing the visibility of (parts of) objects
21.1. Visibility complex: computes the visibility complex of the input
21.2. Shortest path: computes the shortest Euclidean path between two given points in a given obstacle environment
21.3. Ray shooting: answers ray shooting query in the input environment
22. Visualization: Creating pictures of geometric objects to be displayed onscreen
22.1. Realistic rendering: The objective of the visualization is to be as realistic as possible
22.1.1. Shading: Material properties are used to correctly display surfaces
22.1.2. Ray-tracing: Light sources generate the picture
22.1.3. Soft-shadowing: Depending on the position of the lights, shadows are computed
22.1.4. Radiosity: a rendering algorithm which gives a realistic rendering of shadows and diffuse light
22.2. NPR (Non Photo-Realistic) rendering: The objective of the visualization is to convey information about the data
22.2.1. Hatching: The object is drawn through "strokes" to highlight particular characteristics of the shape
22.2.2. Silhouette: Just extreme curves are drawn to depict the overall appearance of an object
23. Projection: computes the projection of the input objects
24. Conversion between file formats: conversion without loss or addition of information.
25. Conversion between shape representations: conversion between two representations of the same shape. The transformation has to be canonical without loss or addition of information.
26. Animation: The process of generating animated objects
27. Morphing: The process of computing "inbetween" shapes representing "averages" of other shapes
How do I edit/delete a tool that I have previously uploaded? |
To edit or delete a tool you have uploaded to the Tool Repository, go to your profile page. All the tools you have uploaded are displayed in chronological order. Then select the desired management action.
Something went wrong while I was adding a model, group of models or a tool. What should I do? |
If you encounter problems while adding a model, tool or group of models, there is a lot you can do to remedy the situation, e.g. edit the uploaded metadata, delete the model/tool, or retry uploading the model/tool. Most of these actions can be found in your profile page.
If nothing of the above solves your problem, you can contact us (please include a brief description of the problem).
How do I create a new LDAP user group of DSW5 registered users? |
The creation of a new LDAP user group is done from the ‘Create new user group’ link located in the left menu at every DSW5 page. Every registered user has the right to create LDAP groups. The creation of a new LDAP user group only requires a user group name and, optionally, a group description.
How do I edit/delete an LDAP user group of DSW5 registered users? |
To edit/delete user groups you must click on the "Edit/Delete user groups" link located in the left menu at every DSW5 page. Every registered user has the right to manage (edit/delete) the LDAP groups he/she has created.
How do you extract metadata from shape models? |
We use the TriMeshInfo tool for metadata extraction.
As TriMeshInfo accepts a restricted set of file formats, we first run a
converter script for some additional formats.
When a supported model is added to the repository, metadata is
automatically extracted from it and displayed along with the user's
input metadata. The user can then choose to commit either the extracted
metadata or the input metadata to the repository.
How do I insert a new medical model? |
A subsection called "Medical Shapes" is located in the left-side menu of the Shape Repository. The medical data upload procedure according to the Medical Ontology is much more complex than the normal shape upload in the VVS, so the user is guided through the steps. An overview of each upload step is given below:
- Select district (by clicking on a human body region)
- Insert patient info: Generate an instance of the Medical Ontology Patient class
- Insert acquisition info and data: Generate instances of the MO AcquisitionSession class and the MO AcquisitionProtocol class. Optionally, the user can also upload an MRI (or DynamicMRI, MoCap etc.), i.e. create an instance of the corresponding CSO class and upload the file, thumbnail etc. (as it’s done in the DSW).
- Insert segmentation info and data: Generate an instance of the MO SegmentationSession class and allow the upload of multiple instances of the MO SegmentationElement class. For each segmented bone/element an instance of the CSO ManifoldSurfaceMesh class is created and the actual file, thumbnail, 3D thumbnail, other resource files etc are uploaded (as it’s done in DSW). Also, during this step, it’s possible to upload part-based annotation files for each element. In addition, before finalizing the upload procedure, an instance of the CSO ShapeGroup class is created as a container for all the segmented elements.
The upload procedure forms are customized and only the most important fields are shown. The rest of the optional metadata fields are hidden by default but can be viewed by clicking on a collapsible area - open/close button.
Starting the upload procedure and selecting the anatomical district.
Inserting patient information
Inserting acquisition session and acquisition protocol information
Optionally the user can upload MRI, Dynamic MRI, MoCap etc. files
Inserting segmentation session info and segmented elements.
Uploading segmented elements info and optionally part-based annotations
Uploading a 3D model, thumbnail etc. for each segmented element
All the inserted segmentation elements are inserted into a "shape group"
Finalizing the medical data upload procedure.
How do I browse medical data? |
The medical data browsing is patient-driven and/or district-driven, not shape/model-oriented as in the VVS. This means that the user can select a patient from a table/list and then browse and view any acquisition & segmentation info, thumbnails and 3D models.
There are several options for filtering and displaying the available medical data list:
a. by district
b. by type of acquisition protocol
c. by the experience of the person who performed the segmentation
d. by keyword search.
The first three (a, b, c) can also be combined to further refine the filtering (the AND operator is used). The keyword search is applied to all patient data.
Different options for filtering and displaying medical data.
In addition, all table/list headers (i.e. District, Patient name, Acquisition date, Segmentation Session description, Uploader and Upload date) can be sorted in ascending or descending order as shown in the following Figure.
All table headers can be sorted in ascending or descending order.
When a user selects a patient (e.g. anonymous 1), the following page is displayed containing all the available information: patient details, acquisition session and acquisition protocol information, segmentation information and a list with all the segmented elements uploaded.
Overview of the medical data page for a selected patient.
The user can click on the "Show MRI data" button and scroll through the DICOM volume image (WebGL renderer window). Three different views are supported: Axial, Sagittal and Coronal. The view can be altered by clicking the corresponding buttons.
The "Show MRI info" button displays the MRI metadata information.
If there are many segmentation elements (e.g. fifteen elements in the wrist district), the user can hide the upper part of the page (i.e. the patient and acquisition information); in addition, a paging system is implemented (e.g. 10 elements per page). The resulting layout of the page is shown in the figure below.
Segmentation elements paging and buttons for resizing the displayed area
For each segmentation element the following information is displayed:
- Anatomical entity name (e.g. femur)
- Name of the element (given by the user)
- Description of the element (given by the user)
- Small thumbnail image (if it was uploaded)
- Button for displaying the 3D model of the element (if it was uploaded)
- Button for displaying the 3D model metadata information
- Button for displaying the part-based annotation (if it was uploaded)
Hiding the upper part of the medical data view page (wrist district).
When the user clicks on a thumbnail image, a new window is opened with an enlarged thumbnail image of the element.
Multiple thumbnail image windows can be displayed at the same time.
By clicking on the "Show 3D model" button, a WebGL renderer window is opened displaying the 3D model of the element if the uploaded model is in a .vtk or .obj format.
The user can use the mouse to move/rotate the 3D model and zoom-in/out.
Clicking on the "Show model info" button displays the metadata info.
In addition, there is a "Show all 3D models" button just above the list of the segmented elements. If the user clicks on this button, all the available 3D models are shown in the same renderer window (e.g. see the whole wrist model on the left).
Please note that multiple windows can be displayed at the same time and moved around, just as in a typical desktop application.
Full wrist model and multiple other windows can be shown simultaneously.
How can I upload part-based annotations? |
The medical data upload procedure supports the upload of part-based annotation files as exported from the SemAnatomy3D software (in .txt format). More specifically, during the fourth step of the guided upload procedure, the user is required to upload one or more SegmentationElement instances. For each segmented bone/element it’s also possible to upload a part-based annotation file.
Uploading segmented elements info and optionally part-based annotations
How do I browse part-based annotations? |
When a user selects a patient (e.g. anonymous 3), the following page is displayed containing all the available information: patient details, acquisition session and acquisition protocol information, segmentation information and a list with all the segmented elements uploaded. Note that if there are available part-based annotation files for a segmented element, then the “Show Part-based Annotation” button is active. Otherwise it is disabled.
Overview of the medical data page for a selected patient.
By clicking on the “Show Part-based Annotation” button, a WebGL renderer window is opened displaying the 3D model of the element (if the uploaded model is in a .vtk or .obj format) and a list of available annotations as defined in the uploaded annotation file. When a specific selection is made, the surface patch segment is displayed (overlapped) in the render window with a different color. See Figures 6 and 7 for an example of different annotations for the hamate wrist bone.
The Hook of Hamate is visible as a surface patch in red. The user can use the mouse to move/rotate the 3D model and zoom-in/out.
The user can select a different annotation to visualize the corresponding surface patch (in red) inside the WebGL renderer window.
Note that multiple windows can be displayed at the same time and moved around, just as in a typical desktop application.
Multiple WebGL windows can be displayed simultaneously.
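The exact layout of the SemAnatomy3D annotation file is not documented in this FAQ. Purely as an illustration, the sketch below assumes a simple text format with one annotation per line ("label: index index ..."); highlighting a patch then amounts to recolouring the listed vertices or faces in the renderer window.

    # Hypothetical reader for a part-based annotation file. The real
    # SemAnatomy3D .txt layout is not specified here; this sketch assumes
    # one annotation per line in the form "label: i1 i2 i3 ...", where the
    # indices identify the vertices (or faces) of the surface patch.
    def read_annotations(path):
        annotations = {}
        with open(path) as f:
            for line in f:
                if ":" not in line:
                    continue
                label, indices = line.split(":", 1)
                annotations[label.strip()] = [int(i) for i in indices.split()]
        return annotations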
Which kinds of workflows are considered? |
Two types of workflows are considered within the workflow repository:
static workflows and executable workflows.
Static workflows are tutorial-oriented workflows, currently devoted to Virtual
Reality applications. In this part of the Workflow Repository, users of the VVS can find
support for understanding the meaningful steps in preparing a CAD model for
use in VR environments.
Executable Workflows are geometry processing pipelines that can be remotely executed
by taking advantage of specific Web Services provided by the VVS system. There are two
types of executable workflows in the VVS:
- The first kind of executable workflow allows the user to define a workflow by combining geometry processing algorithms, choosing among the available ones. A workflow can consist of a sequence of Atomic Tasks (e.g. Add Noise, Hole Filling, Laplacian Smoothing etc.) or a combination of nested atomic tasks using conditional tasks (e.g. if and while loops). Once a workflow is generated and uploaded to the Workflow Repository, its execution is managed by the repository itself. Users can run a workflow by uploading a triangle mesh as input (or by using an existing mesh from the Shape Repository). The execution is done asynchronously and the user receives an email notification when the generated output mesh is available for download.
- The second kind of executable workflow allows the user to execute single-step Web Services (e.g. for mesh cleaning, smoothing, reconstruction, simplification, meshing etc.) or Web Service Workflows, which are a combination/orchestration of single-step Web Services.
How do I browse for workflows? |
The "Browse Workflows" button on the left menu provides different way to browse and filter/search for available workflows. The two main ways of filtering workflows is by type of workflow e.g. static, executable etc. and by purpose/domain e.g. ‘from CAD to VR’, ‘Medical’ etc. It is also possible to filter workflows by input/output type or input/output file format.
How do I insert a new Workflow? |
To insert a new workflow you must click on the "Workflow" tab and
then on the "Upload Workflow" link on the left menu. Here, you can choose to
upload either an executable or a static workflow and, after clicking on "Next", you will
be redirected to the corresponding upload framework.
The workflow upload is allowed only for registered users.
Selecting the type of workflow to upload.
How do I create a new Executable Workflow? |
If an executable workflow is selected to be uploaded, the user is redirected to the executable workflow creation page. This page is divided into two sections:
- the Generic Information subsection, where the required information is inserted by the user: name and description (the creator and creation date are filled in automatically).
- the Workflow Definition subsection, where the pipeline of geometry processing algorithms is defined by the user.
An executable workflow can consist of a sequence of Atomic Tasks (e.g. Add Noise, Hole Filling, Laplacian Smoothing etc.) or a combination of nested atomic tasks using conditional tasks (e.g. if and while loops, etc). Please note that an executable workflow must have at least one atomic task.
Executable workflow creation page.
Adding conditional tasks i.e. “if” and “while” loops.
After the executable workflow definition process is finished, the user clicks on the “Save Workflow” button and is redirected to the final page. From there, the user can examine the generated XML file of the workflow definition by clicking on the “See XML file” link, or run the workflow.
Successful insertion/upload of an executable workflow.
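The XML schema used for workflow definitions is not shown in this FAQ. As a hedged illustration only, the sketch below builds a small definition with one atomic task and a while loop; the element and attribute names are assumptions, not the repository's actual schema.

    # Illustrative only: the element and attribute names below are
    # assumptions, not the repository's actual workflow XML schema.
    import xml.etree.ElementTree as ET

    workflow = ET.Element("workflow", name="Denoise and fill")
    ET.SubElement(workflow, "atomicTask", name="Laplacian Smoothing")
    loop = ET.SubElement(workflow, "whileTask", condition="holes_remaining")
    ET.SubElement(loop, "atomicTask", name="Hole Filling")

    print(ET.tostring(workflow, encoding="unicode"))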
How do I create a new Static Workflow? |
If a static workflow is selected to be uploaded, a detailed guidance page with all the necessary information, instructions and clarifications is displayed before starting the workflow upload procedure. This page contains an overview of the concepts of Workflows, Activities/Sub-Activities, Functionalities, Tools etc. After clicking the “Next” button, the user is redirected to the static workflow creation page. This page is divided into two sections:
- the Generic Information subsection, where the required information is inserted by the user: name, description and purpose/domain (the creator and creation date are filled in automatically).
- the Workflow Activities Information subsection, where the ordered sequence of activities is defined by the user.
Please note that a static workflow must have at least two activities and all the activities/sub-activities must be provided at this stage i.e. the sequence can NOT be modified at a later time.
Instructions page before starting the static workflow upload procedure.
Static workflow creation page.
There are two ways of adding activities to the workflow: by creating a new activity from scratch i.e. by simply adding its name, or by selecting (reusing) an already existing one from the dropdown list. The “View” button next to a chosen existing activity/sub-activity allows the user to inspect the metadata of the selected element (activity).
The user is also able to add sub-activities to the created activity by either creating a new sub-activity or reusing an existing one.
Selecting an existing activity.
Inserting sub-activities to an activity.
When the definition of the activities is completed, the user clicks on the “Next” button and the metadata are processed and stored in the Workflow Ontology. An intermediate page is displayed from which the user can further edit each activity/sub-activity.
Intermediate page for further editing activities/sub-activities.
By clicking on the “Edit activity” or “Edit sub-activity” buttons, the user can either modify the activity metadata or insert new information. For example, the user can modify the description or the functionality of the activity, or insert new metadata such as Additional Input, Restrictions, Tips or Documentation (by uploading a file).
Editing activities/sub-activities.
Which is the difference between simple-activities and macro-activities? |
Macro-activities are the main steps of the workflow; they may collect several
sub-activities, called simple-activities, which are one-shot steps corresponding
to one functionality of a tool.
A simple-activity cannot be further decomposed into sub-activities and has to
correspond to a functionality of a tool. In general this is not true for macro-activities,
except when a macro-activity has no sub-activities (in which case it is, in a
sense, a sort of "simple-activity").
How do I insert an activity? |
An activity can only be created while creating a new workflow in which the activity is performed, simply by inserting its name. The metadata of the activity must be provided once the workflow has been submitted.
Where can I find previously uploaded workflows? |
You can browse all the workflows on the repository by using the
"Browse Workflows" link on the left menu. You can choose to browse either static
or executable workflows.
For static workflows, you can view a process diagram representation of the workflows.
For each workflow, the related metadata and activities can be viewed by clicking on the box that represents it.
For executable workflows a new page is visualised providing the main information about
the workflow, the link to its XML file and a link to the page for its execution.
How can I execute a previously uploaded Executable Workflow? |
You can execute a previously uploaded workflow by selecting it in the browsing page and then clicking on the provided "Run Workflow" button. You will be redirected to a page where the input mesh has to be uploaded (currently available formats are: .off, .ply, .stl and .obj). By clicking on "Upload Mesh" you will be asked for an email address to which the output of the processing will be sent.
How can I edit/remove a workflow that I have previously uploaded? |
Currently, the only way of editing a workflow is editing its metadata by using your profile page, available after login and selectable on the menu near the login. A user-friendly editing framework is work in progress. Workflows can be removed by using the "Remove Workflow" link on the left menu, where a dropdown visualizing all the workflows you have created allows you to choose the one you want to delete. For static workflows, you can choose either to delete only the workflow instance or also all the other elements connected to it (such as all the activities, tips, restrictions, etc.).
What are Executable Web Services? |
The Web Services UI provides a way to dynamically execute the available Web Services and Web Service workflows. In the subsection called “Executable Web Services” at the left side menu of the Workflows Repository, there are two main options: a) to execute a single-step web service and b) to execute one of the pre-defined dynamic web service workflows.
Single-Step Web Services: The list of currently available Geometry Processing Web Services is shown in the figure below. The user interface dynamically generates this list from the instances of the TCO class WebService. Additional information about each service is provided when the mouse pointer hovers over the service name (tooltips). This information is also dynamically generated from the values of the datatype property called hasDescription. Additional information is also available by clicking on the “see more details” link, which redirects the user to the web service description in the Tool Repository. By selecting a single-step service and pressing the execution button, the specific Web Service is invoked.
The user interface of a single-step Web Service selection.
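Conceptually, generating this list amounts to querying the ontology for all instances of the WebService class and reading their hasDescription values. The sketch below uses rdflib for illustration; the namespace URI and file name are assumptions, while the class and property names come from the description above.

    # Sketch with rdflib; the namespace URI and file name are hypothetical,
    # while WebService and hasDescription come from the TCO as described.
    from rdflib import Graph, Namespace, RDF

    TCO = Namespace("http://example.org/tco#")            # hypothetical namespace
    g = Graph()
    g.parse("tool_ontology.owl", format="xml")            # hypothetical file name

    services = []
    for ws in g.subjects(RDF.type, TCO.WebService):
        description = g.value(ws, TCO.hasDescription)
        services.append((str(ws), str(description)))
    # 'services' would feed the dynamically generated list and its tooltips.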
After the Web Service selection, the user uploads the input model, or selects an existing model from the Shape Repository (SR), and initiates the workflow execution. More details about acceptable input formats are given on the web service page. The list of existing models from the Shape Repository is dynamically generated according to the allowed input formats of the selected web service.
It is also possible to save the output model to the Shape Repository. If the user selects this option, all the necessary metadata are automatically generated.
Upload an input model or select an existing model from the SR.
Ready to execute the selected web service
An example web service execution result page is shown in the figure below. A summary of the output log is displayed (as generated by MeshLab) and the resulting output file is provided as a link and can be downloaded by the user.
Single-step web service execution result page (without saving the resulting model to the SR).
Web Services Workflows: The Web Services workflows page has currently two pre-defined dynamic workflows, as described in the previous section.
The web interface of the first workflow scenario shows the abstract workflow definition (functionality descriptions) in a diagram and the user is prompted to assign concrete service instances to each task/activity of the abstract process. The selection of the specific Web Service instances is done from drop-down menus that are dynamically generated using the Functionality property of the Software Tool class, i.e. the user selects specific web services that can be used to perform an abstract task.
The web services workflows initial page.
Selecting concrete web services for the execution of the first workflow.
Selecting concrete web services for the execution of the second workflow.
After the Web Services selection for each workflow abstract task, the user uploads the input model, or selects an existing model from the Shape Repository, and initiates the workflow execution (similarly to the single-step web service).
Finally, a short summary of the execution log is displayed and the resulting output model is provided for download. In addition, as with the single-step web service, the workflow execution can automatically produce the appropriate metadata for the resulting (output) model i.e. processing history, documentation and other details (e.g. number of vertices, number of faces, model origin, file size and file format, location and URL etc.) and the resulting model and its metadata can automatically be stored back to the knowledge base if the user selects the corresponding option.
Note that saving the generated model to the Shape Repository (SR) involves a number of steps: a) creating a new ontology instance in the appropriate class/category of the Common Shape Ontology (e.g. ManifoldSurfaceMesh) and a new FileInfo instance, b) automatically calculating and filling in some of the metadata (e.g. number of vertices, number of faces etc.), c) updating the SR cache table, d) automatically generating a new thumbnail (applicable only to mesh models), and e) computing the MT and signature of the model and adding them to the Geometric Search Engine (GSE) database.
Result page of the second executable web services workflow and saving the resulting model to the SR.
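To make step (b) more concrete, the sketch below shows how basic metadata (vertex and face counts, file size, format) could be derived automatically, assuming an .obj input for simplicity; the property names are illustrative and the remaining steps are only outlined as comments.

    import os

    # Sketch of the automatic metadata generation step; property names are
    # illustrative, and the ontology/thumbnail/GSE steps are only outlined.
    def build_output_metadata(mesh_path):
        vertices = faces = 0
        with open(mesh_path) as f:
            for line in f:
                if line.startswith("v "):
                    vertices += 1
                elif line.startswith("f "):
                    faces += 1
        return {
            "numberOfVertices": vertices,
            "numberOfFaces": faces,
            "fileSize": os.path.getsize(mesh_path),
            "fileFormat": os.path.splitext(mesh_path)[1].lstrip("."),
        }
    # The repository would then: create the Common Shape Ontology instance
    # (e.g. ManifoldSurfaceMesh) and a FileInfo instance, update the SR
    # cache table, generate a thumbnail, and register the model's MT and
    # signature with the Geometric Search Engine.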
Which kinds of shapes are considered? |
It is possible to upload all shapes that are related to the representation and simulation of factories in the discrete manufacturing domain; in particular, shapes corresponding to: the factory itself, the factory site and other external spaces, factory buildings, building internal spaces, generic building/factory components (such as roofs, doors, furniture, human actors, etc.), production machinery, transport elements and products (with the possibility of also adding shapes for the corresponding work-pieces of their machining working steps). The types of shapes that can be uploaded are the same as those allowed by the Shape Repository (e.g. CAD models, meshes), and so is the upload procedure (i.e. the one for uploading a general shape to the repository).
Which standards are considered for information organisation? |
The Manufacturing Shape Ontology exploits two standards for organizing the objects in the manufacturing domain: the IFC - Industry Foundation Classes and the DIN8580 (Deutsches Institut für Normung) Standard.
Why I cannot see the complete Manufacturing Shape Ontology? |
The Manufacturing Shape Ontology exploits the Pro2Evo and VFF ontologies, which are rather complex, in particular in the relations defined for linking the elements to each other (from the definition of the name of an object to the decomposition relations). That is why we chose to visualise only a part of the ontology and to set up a user-friendly framework for both defining and removing the elements a user wishes to upload to the repository, leaving the automatic creation/removal of all the required relations and auxiliary instances to the system.
How can I insert a manufacturing element and its shape representations? |
To upload a manufacturing element to the repository click on the "Shapes" tab and then select the "Upload Virtual Manufacturing Models" link on the left menu. Here, you will be first asked to choose the type of element you wish to upload, then to insert some information about it and finally to upload its shape models. The shape model uploading procedure is the same of the one for uploading general shapes to the Shape Repository.
Is there any dependency in the object creation? |
For all categories except "Product" and "Production machinery and transportation element", relations to their composing or contained elements may optionally be added. The composing and/or contained elements must already exist: a complex object should be created from its parts up to its whole. For example, an internal spatial element such as a room can be made up of a floor, a roof and some doors, which have to be created before the room is instantiated.
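The rule is essentially "parts before whole". The toy sketch below enforces it by refusing to create a composite whose components have not been created yet; the registry and element names are hypothetical.

    # Tiny sketch of the "parts before whole" rule; the registry and the
    # element names are hypothetical.
    existing_elements = {"floor_01", "roof_01", "door_01"}

    def create_composite(name, components):
        missing = [c for c in components if c not in existing_elements]
        if missing:
            raise ValueError(f"Create these components first: {missing}")
        existing_elements.add(name)

    create_composite("room_A", ["floor_01", "roof_01", "door_01"])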
Can I upload several representations for the same manufacturing element? |
Yes, in some cases more representations of the same element may be required, e.g. for factory detail design and simulation. You can upload several representations of an element either by clicking the upload button several times, or by clicking on the link provided at the end of a shape's upload procedure.
How can I insert an avatar? |
You can insert an avatar by choosing "Generic building/factory component" as the element you wish to upload, and then selecting "Actor" in the dropdown list for choosing the generic type of the element.
How can I insert a product with its related working steps? |
You can insert a product by choosing "Product" as the element you wish to upload. Working steps are automatically created whenever you provide the processes that are used to produce the product. When you arrive at the shape upload page, you can upload the shapes corresponding to the product obtained at each specific working step. Only the final shape is mandatory and corresponds to the product itself.