Critically compare a range of Database Modelling languages:
User-friendliness: Given the main motivations behind Umple, as stated above, user-friendliness must be a central concern during the development of Umple or any other modeling tool. We have focused on the following aspects of usability, which we consider essential and recommend as secondary criteria for comparing modeling approaches.

It should be easy to set up: One problem with many tools is that they are not "instant". This not only hinders classroom use, but also impedes spontaneous discussions between developers (such as "whiteboard"-style sessions) and quick resumption of work after an interruption. With Umple, users can work quickly on the web using UmpleOnline, and also have access to a command-line tool that requires almost no installation. We recommend rating this on the following scale: i. Instant-on (typically web-based); ii. Download and run; iii. Download and run, with other components that must be pre-installed (such as an IDE), each of which must itself be at this level or better of installability; iv. Complex installation (e.g., requiring compilation, configuration, etc.).

It should be easy to learn: The tool should have extremely simple modes of use so that beginners can immediately guess what needs to be done. In Umple, we focus on simplicity of the language: class declarations look like declarations in other languages; association declarations look like UML; state machines are just named, nested C-style blocks. Likewise, on startup, UmpleOnline displays a central menu highlighting the main operations: Load/Draw/Create. There is no complicated "project" configuration of the kind that is the norm in other modeling tools and IDEs. In Umple, we chose to let models be written with a minimum of special symbols, brackets, and other syntactic complexity, and we ensure that the syntax matches that of other C-family languages for familiarity and ease of integration. For graphical languages, Moody provides a comprehensive set of principles [15].
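To make these claims concrete, the following small Umple fragment is an illustrative sketch (class, attribute, and event names are invented for this example): a class declaration that reads like a programming-language declaration, an association written much as in UML, and a state machine expressed as named, nested C-style blocks.

```umple
// Attributes look like declarations in other C-family languages.
class Program {
  name;
}

// The association line reads like UML multiplicity notation.
class Student {
  Integer id;
  * -- 1 Program;   // many students belong to one program
}

// A state machine is just a named, nested, C-style block structure.
class GarageDoor {
  status {
    Open    { pressButton -> Closing; }
    Closing { reachBottom -> Closed;  }
    Closed  { pressButton -> Opening; }
    Opening { reachTop    -> Open;    }
  }
}
```

A model like this can be pasted directly into UmpleOnline to see the corresponding diagram and generated code, which illustrates the "instant-on" point above.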
Learnability can only be adequately assessed with experimental studies, but to achieve optimal learnability a tool needs: a) examples that can be tried out without effort; b) adequate feedback to help users when they make mistakes, or to suggest alternatives; c) the ability to do basic modeling by testing, simulating, or modifying examples; d) a complete user manual; e) configurable defaults, so users can start a basic project right away. Without doubt, this list of criteria will have to be expanded.

It should be efficient and fast to use: Users need to be able to edit the model as quickly as possible, with the fewest mouse movements or keystrokes. We have found that for some tasks (such as adding a diagram element), a two-click interaction, clicking an icon and then clicking the location in the diagram where the element is to be added, works best. Renaming by clicking on the name and editing it can also work well in diagrams. However, for anything more complex, a well-designed textual notation is superior. In Umple, we chose to implement tooling that allows textual versions of UML models, following C-family syntax, to be edited side by side with the diagram version. This gives users the best of both worlds, and our surveys show that users are satisfied [10]. To evaluate this criterion, more empirical studies are needed; measurement might involve timing users as they create reference models, make specific changes to those models, and answer specific questions about them.

It should keep users away from errors: We have observed that many modelers in industry produce inaccurate or incomplete models [11]. Full code generation is one solution to this problem, because without generated code whose results can be checked, it is difficult to get feedback on a model. Umple works like any other compiler, providing a variety of warnings and errors that guide the user toward a correct model.
We have created a manual page for each error that explains its cause and gives an example of how to fix it. All errors, even those in embedded base-language code, point back to their origin in the Umple source, eliminating error-prone and complex round-trip engineering. To compare tools on this criterion, one would need empirical evaluation, measuring the frequency of errors and the speed of error correction when working on standard problems.
Scalability: Many modeling tools have scalability problems. We suggest that scalability should mean the following for a modeling language or tool.

It should handle models of arbitrary size and process them without slowing down: In Umple, we have compiled systems of many thousands of lines, including Umple itself and JHotDraw [12]. It should maintain acceptable response times as the system grows: since Umple is text-based, editing large models is not a problem; text editors and integrated development environments have handled large multi-file systems for decades. In addition, the Umple compiler can compile a very large system in only slightly more time than an equivalent Java system takes to compile. To evaluate this, one should measure the speed of editing and, where applicable, the speed of analysis and of code generation. Any slowdown when editing large models would indicate a lack of scalability, as would a non-linear slowdown in analysis or code generation.

It should have separation-of-concerns mechanisms: these allow the user to work with parts of the system without being confused by too much complexity. An Umple system can be divided into files in several ways, by class or by feature, for example, and parts of a class can be defined in different files. Having such mechanisms does not by itself guarantee scalability, since modelers must still organize their models well; however, the absence of such mechanisms does limit scalability. To evaluate this criterion, modeling approaches should be rated according to whether submodels can be edited and analyzed independently of the larger model, and whether techniques such as the following are available: mixins, aspect orientation, composition/traits, variants, and tools to show relationships between model elements (e.g., generated UML package diagrams) as well as to edit those relationships.

It should have search facilities to find elements in a complex model.
Since Umple is a textual modeling tool, users can apply the full power of existing text-search tools. Graphical modeling approaches should be evaluated on the ease of finding elements or structures.
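As a sketch of the separation-of-concerns point above, parts of a class living in different files, the following illustrates Umple's mixin mechanism (file and attribute names are invented for this example): re-declaring a class adds to it rather than conflicting, and the compiler merges the pieces.

```umple
// File student_core.ump: the core attributes of the class.
class Student {
  name;
}

// File student_id.ump: a separate concern, kept in its own file.
// Re-opening the class is a mixin; the compiler merges both parts
// into a single Student class containing both attributes.
class Student {
  Integer id;
}
```

Because each file is an ordinary text file, a submodel can be edited on its own and searched with standard text tools, which is precisely the scalability property argued for above.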
Completeness: Completeness of a modeling tool can be assessed along several dimensions:
• Completeness relative to a standard language (e.g., UML).
• Completeness relative to user needs, as assessed by empirical studies.
• Completeness of analysis capabilities, i.e., the extent to which semantic analysis of the model is performed to find errors or inconsistencies.
• Completeness of the generation process, i.e., the extent to which all relevant semantics represented in the model are reflected in the generated code. This applies only to languages used to generate code, not to languages for concerns such as requirements.
In Umple we deliberately avoided implementing all of UML, because a major criticism of UML is that it is too complicated. Instead, we focused on the second criterion above: we asked users to develop systems, and when they found significant gaps in Umple, we improved Umple by adding the necessary capabilities. We also continue to address the third criterion by adding further layers of analysis capability. Umple already has a full set of warnings and error messages pointing out inconsistencies. We also have tools for generating metrics, and we are working on interfacing Umple with full-fledged tools for model checking and theorem proving. Our earlier research revealed serious shortcomings in the code generated by almost every code generator we could access. For example, most code generators for UML associations, including ArgoUML's, create only a few instance variables in each class; the user must then write code to manipulate these variables. Umple generates code that correctly handles these constraints, as well as referential integrity.
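To illustrate what "referential integrity" means for a generated association, here is a hand-written Java sketch of the kind of logic a generator can emit for a one-to-many association. This is not Umple's actual generated code, and the class names (`Program`, `Student`) are invented for this example; the point is only that setting one end of the association automatically updates the other end, so the two ends can never disagree.

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch only: the "many" end of a 1 -- * association.
class Program {
  private final List<Student> students = new ArrayList<>();

  public List<Student> getStudents() {
    return new ArrayList<>(students); // defensive copy
  }

  // Package-private helpers; only Student calls these, keeping both
  // ends of the association consistent.
  void addStudentInternal(Student s) { students.add(s); }
  void removeStudentInternal(Student s) { students.remove(s); }
}

// The "one" end: its setter maintains referential integrity.
class Student {
  private Program program;

  public Program getProgram() { return program; }

  public void setProgram(Program newProgram) {
    if (program != null) {
      program.removeStudentInternal(this); // detach from the old end
    }
    program = newProgram;
    if (newProgram != null) {
      newProgram.addStudentInternal(this); // attach to the new end
    }
  }
}

public class AssociationDemo {
  public static void main(String[] args) {
    Program p1 = new Program();
    Program p2 = new Program();
    Student s = new Student();
    s.setProgram(p1);
    System.out.println(p1.getStudents().size()); // prints 1
    s.setProgram(p2); // moving the student cleans up the old end
    System.out.println(p1.getStudents().size()); // prints 0
    System.out.println(p2.getStudents().size()); // prints 1
  }
}
```

A generator that emits only the instance variables, as criticized above, would leave all of this bookkeeping to the user, which is exactly where inconsistent models and code tend to arise.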