Ma. Anna Barbara R. Villanueva
October 6, 2013
Advanced Organization of Database
Project 1 - Documentation




Types of integrity constraints


Data integrity is normally enforced in a database system by a series of integrity constraints or rules. Three types of integrity constraints are an inherent part of the relational data model: entity integrity, referential integrity, and domain integrity (a short sketch illustrating all three follows this list):

  • Entity integrity concerns the concept of a primary key. The entity integrity rule states that every table must have a primary key and that the column or columns chosen as the primary key must be unique and not null.

  • Referential integrity concerns the concept of a foreign key. The referential integrity rule states that any foreign-key value can be in only one of two states. The usual state of affairs is that the foreign-key value refers to a primary-key value of some table in the database. Occasionally, depending on the rules of the data owner, a foreign-key value can be null; in this case we are explicitly saying either that there is no relationship between the objects represented in the database or that this relationship is unknown.

  • Domain integrity specifies that all columns in a relational database must be declared upon a defined domain. The primary unit of data in the relational data model is the data item; such data items are said to be non-decomposable, or atomic. A domain is a set of values of the same type. Domains are therefore pools of values from which the actual values appearing in the columns of a table are drawn.
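
A minimal sketch of all three constraint types, using Python's built-in sqlite3 module; the department/employee schema and its column names are purely illustrative:

    import sqlite3

    # In-memory database for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

    # Entity integrity: every row is identified by a unique, non-null primary key.
    conn.execute("""
        CREATE TABLE department (
            dept_id INTEGER PRIMARY KEY,   -- unique and implicitly NOT NULL
            name    TEXT NOT NULL
        )""")

    # Referential integrity: employee.dept_id must match a department row, or be NULL.
    # Domain integrity: salary is drawn from a restricted domain (non-negative REAL).
    conn.execute("""
        CREATE TABLE employee (
            emp_id  INTEGER PRIMARY KEY,
            dept_id INTEGER REFERENCES department(dept_id),
            salary  REAL NOT NULL CHECK (salary >= 0)
        )""")

    conn.execute("INSERT INTO department VALUES (1, 'Research')")
    conn.execute("INSERT INTO employee VALUES (10, 1, 50000.0)")       # accepted

    try:
        conn.execute("INSERT INTO employee VALUES (11, 99, 40000.0)")  # no department 99
    except sqlite3.IntegrityError as exc:
        print("Referential integrity violation:", exc)

    try:
        conn.execute("INSERT INTO employee VALUES (12, 1, -5.0)")      # violates CHECK
    except sqlite3.IntegrityError as exc:
        print("Domain integrity violation:", exc)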

If a database supports these features, it is the responsibility of the database to ensure data integrity as well as the consistency model for data storage and retrieval. If a database does not support these features, it is the responsibility of the applications to ensure data integrity while the database supports the consistency model for data storage and retrieval.

Having a single, well-controlled, and well-defined data-integrity system increases



  • stability (one centralized system performs all data integrity operations)

  • performance (all data integrity operations are performed in the same tier as the consistency model)

  • re-usability (all applications benefit from a single centralized data integrity system)

  • maintainability (one centralized system for all data integrity administration).

As of 2012, since all modern databases support these features (see Comparison of relational database management systems), it has become the de facto responsibility of the database to ensure data integrity. Outdated and legacy systems that use file systems (text files, spreadsheets, ISAM, flat files, etc.) for their consistency model lack any kind of data-integrity model. This forces organizations to invest large amounts of time, money, and personnel in building data-integrity systems on a per-application basis that effectively just duplicate the data-integrity systems found in modern databases. Many companies, and indeed many database vendors themselves, offer products and services to migrate outdated and legacy systems to modern databases in order to provide these data-integrity features. This offers organizations substantial savings in time, money, and resources because they do not have to develop per-application data-integrity systems that must be refactored each time business requirements change.

In information technology, a backup, or the process of backing up, refers to the copying and archiving of computer data so it may be used to restore the original after a data loss event. The verb form is to back up in two words, whereas the noun is backup.

Backups have two distinct purposes. The primary purpose is to recover data after its loss, whether by deletion or corruption; data loss is a common experience of computer users, and a 2008 survey found that 66% of respondents had lost files on their home PC. The secondary purpose of backups is to recover data from an earlier time, according to a user-defined data-retention policy, typically configured within a backup application to specify how long copies of data are required. Though backups popularly represent a simple form of disaster recovery and should be part of a disaster recovery plan, backups by themselves should not be considered disaster recovery. One reason is that not all backup systems or backup applications are able to reconstitute a computer system or other complex configuration, such as a computer cluster, Active Directory servers, or a database server, by restoring only data from a backup.

Since a backup system contains at least one copy of all data worth saving, the data storage requirements can be significant. Organizing this storage space and managing the backup process can be a complicated undertaking. A data repository model can be used to provide structure to the storage. Nowadays, there are many different types of data storage devices that are useful for making backups. There are also many different ways in which these devices can be arranged to provide geographic redundancy, data security, and portability.

Before data is sent to its storage location, it is selected, extracted, and manipulated. Many different techniques have been developed to optimize the backup procedure. These include optimizations for dealing with open files and live data sources as well as compression, encryption, and de-duplication, among others. Every backup scheme should include dry runs that validate the reliability of the data being backed up. It is important to recognize the limitations and human factors involved in any backup scheme.
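
As a rough illustration of the archiving and validation steps described above, the following Python sketch compresses a directory into a tar archive, records a SHA-256 checksum, and re-verifies the archive as a simple dry run. The paths in the usage comment are placeholders, and a real backup scheme would also handle encryption, de-duplication, and retention:

    import hashlib
    import tarfile
    from pathlib import Path

    def sha256_of(path: str) -> str:
        """Hash a file in 1 MiB chunks so large archives do not exhaust memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as fh:
            for chunk in iter(lambda: fh.read(1 << 20), b""):
                digest.update(chunk)
        return digest.hexdigest()

    def back_up(source_dir: str, archive_path: str) -> str:
        """Write a gzip-compressed tar archive of source_dir and return its checksum."""
        with tarfile.open(archive_path, "w:gz") as tar:
            tar.add(source_dir, arcname=Path(source_dir).name)
        return sha256_of(archive_path)

    def verify(archive_path: str, expected_digest: str) -> bool:
        """Dry run: confirm the archive is unchanged and still readable."""
        if sha256_of(archive_path) != expected_digest:
            return False
        with tarfile.open(archive_path, "r:gz") as tar:
            return len(tar.getmembers()) > 0

    # Hypothetical usage:
    # digest = back_up("/home/user/documents", "/mnt/backup/documents.tar.gz")
    # assert verify("/mnt/backup/documents.tar.gz", digest)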

According to the patterns & practices Improving Web Application Security book, a principle-based approach for application security includes:


  • Knowing your threats.

  • Securing the network, host, and application.

  • Incorporating security into your software development process.

Note that this approach is technology- and platform-independent; it focuses on principles, patterns, and practices.

Threats, Attacks, Vulnerabilities, and Countermeasures

According to the patterns & practices Improving Web Application Security book, the following terms are relevant to application security:



  • Asset. A resource of value such as the data in a database or on the file system, or a system resource.

  • Threat. A potential occurrence, malicious or otherwise, that could harm an asset.

  • Vulnerability. A weakness that makes a threat possible.

  • Attack (or exploit). An action taken to harm an asset.

  • Countermeasure. A safeguard that addresses a threat and mitigates risk.

Mobile application security

The proportion of mobile devices providing open platform functionality is expected to continue to increase. The openness of these platforms offers significant opportunities to all parts of the mobile ecosystem by enabling flexible program and service delivery options that may be installed, removed, or refreshed multiple times in line with the user's needs and requirements. However, with openness comes responsibility: unrestricted access to mobile resources and APIs by applications of unknown or untrusted origin could result in damage to the user, the device, the network, or all of these if not managed by suitable security architectures and network precautions. Application security is provided in some form on most open-OS mobile devices (Symbian OS, Microsoft, BREW, etc.). Industry groups, including the GSM Association and the Open Mobile Terminal Platform (OMTP), have also published recommendations.

There are several strategies to enhance mobile application security, including:


  • Application whitelisting

  • Ensuring transport layer security

  • Strong authentication and authorization

  • Encryption of data when written to memory (see the sketch after this list)

  • Sandboxing of applications

  • Granting application access on a per-API level

  • Processes tied to a user ID

  • Predefined interactions between the mobile application and the OS

  • Requiring user input for privileged/elevated access

  • Proper session handling
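
As an example of one of these strategies, encrypting data before it is written to storage, the following Python sketch uses the third-party cryptography package's Fernet recipe. The file name and key handling are purely illustrative; a production mobile app would obtain the key from the platform keystore (for example, the Android Keystore or the iOS Keychain) rather than generating it in code:

    # Requires the third-party `cryptography` package (pip install cryptography).
    from cryptography.fernet import Fernet, InvalidToken

    # Illustration only: in practice the key comes from the platform keystore.
    key = Fernet.generate_key()
    cipher = Fernet(key)

    def save_secret(path: str, plaintext: bytes) -> None:
        """Encrypt application data before it ever touches storage."""
        with open(path, "wb") as fh:
            fh.write(cipher.encrypt(plaintext))

    def load_secret(path: str) -> bytes:
        """Decrypt on read; raises InvalidToken if the file was tampered with."""
        with open(path, "rb") as fh:
            return cipher.decrypt(fh.read())

    save_secret("session.bin", b"auth-token-12345")   # hypothetical session data
    print(load_secret("session.bin"))                 # b'auth-token-12345'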

Security testing for applications

Security testing techniques scour for vulnerabilities or security holes in applications. These vulnerabilities leave applications open to exploitation. Ideally, security testing is implemented throughout the entire software development life cycle (SDLC) so that vulnerabilities may be addressed in a timely and thorough manner. Unfortunately, testing is often conducted as an afterthought at the end of the development cycle.

Vulnerability scanners, and more specifically web application scanners, otherwise known as penetration-testing tools (i.e., ethical hacking tools), have historically been used by security organizations within corporations and by security consultants to automate the security testing of HTTP requests and responses; however, this is not a substitute for actual source code review. Physical code reviews of an application's source code can be accomplished manually or in an automated fashion. Given the common size of individual programs (often 500,000 lines of code or more), the human brain cannot perform the comprehensive data-flow analysis needed to completely check all circuitous paths of an application program for vulnerability points. The human brain is better suited to filtering, interrupting, and reporting on the output of commercially available automated source code analysis tools than to tracing every possible path through a compiled code base to find root-cause vulnerabilities.
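
As a toy illustration of the kind of automated source code analysis described above (vastly simpler than the commercial tools named below), the following Python sketch walks a source file's abstract syntax tree and flags calls that often warrant manual review; the list of risky calls is an arbitrary assumption:

    import ast
    import sys

    # Calls that commonly indicate injection or code-execution risk; illustrative only.
    RISKY_CALLS = {"eval", "exec", "os.system", "pickle.loads"}

    def full_name(node: ast.AST) -> str:
        """Rebuild dotted names such as `os.system` from the AST."""
        if isinstance(node, ast.Name):
            return node.id
        if isinstance(node, ast.Attribute):
            return f"{full_name(node.value)}.{node.attr}"
        return ""

    def scan(path: str) -> None:
        """Report every call to a name in RISKY_CALLS, with its line number."""
        with open(path, encoding="utf-8") as fh:
            tree = ast.parse(fh.read(), filename=path)
        for node in ast.walk(tree):
            if isinstance(node, ast.Call) and full_name(node.func) in RISKY_CALLS:
                print(f"{path}:{node.lineno}: risky call to {full_name(node.func)}")

    if __name__ == "__main__":
        for source_file in sys.argv[1:]:
            scan(source_file)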

The two types of automated tools associated with application vulnerability detection (application vulnerability scanners) are Penetration Testing Tools (often categorized as Black Box Testing Tools) and static code analysis tools (often categorized as White Box Testing Tools). Tools for Black Box Testing include IBM Rational AppScan, the HP Application Security Center suite (through the acquisition of SPI Dynamics), N-Stalker Web Application Security Scanner (from the original developers of N-Stealth back in 2000), Nikto (open source), and NTObjectives. Static code analysis tools include Coverity, ECLAIR, GrammaTech, Klocwork,[11] Parasoft, and Veracode.

Banking and large e-commerce corporations have been the early-adopter customer profile for these types of tools. It is commonly held within these firms that both Black Box and White Box testing tools are needed in the pursuit of application security. Black Box testing tools (meaning Penetration Testing tools) are ethical hacking tools used to attack the application surface to expose vulnerabilities lurking within the source code hierarchy; they are executed against the already deployed application. White Box testing tools (meaning Source Code Analysis tools) are used by either application security groups or application development groups. Typically introduced into a company through the application security organization, the White Box tools complement the Black Box testing tools by giving specific visibility into the root vulnerabilities within the source code before it is deployed. Vulnerabilities identified with White Box and Black Box testing are typically classified in accordance with the OWASP taxonomy for software coding errors. White Box testing vendors have recently introduced dynamic versions of their source code analysis methods, which operate on deployed applications. Because the White Box testing tools now have dynamic versions similar to the Black Box testing tools, the two can be correlated in the same software error detection paradigm, giving fuller application protection to the client company.
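
A minimal sketch of the Black Box approach: probe an HTTP parameter with malformed input and watch the response for database error text. It uses the third-party requests package; the target URL, parameter name, and error signatures are placeholders, and such probes should only be run against systems you are authorized to test:

    # Requires the third-party `requests` package (pip install requests).
    import requests

    ERROR_SIGNATURES = ("SQL syntax", "ODBC", "ORA-", "SQLITE_ERROR")  # crude heuristics
    PAYLOADS = ("'", "' OR '1'='1", '" OR "1"="1')

    def probe(url: str, param: str) -> None:
        """Send malformed input and look for database error text in the response."""
        for payload in PAYLOADS:
            response = requests.get(url, params={param: payload}, timeout=10)
            if any(sig in response.text for sig in ERROR_SIGNATURES):
                print(f"Possible SQL injection with payload {payload!r}")

    probe("https://staging.example.com/search", "q")  # hypothetical endpoint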

Advances in professional malware targeting the Internet customers of online organizations have driven a change in Web application design requirements since 2007. It is generally assumed that a sizable percentage of Internet users will be compromised through malware and that any data coming from their infected hosts may be tainted. Therefore, application security has begun to manifest more advanced anti-fraud and heuristic detection systems in the back office, rather than within the client-side or Web server code.



Database Security Products:

Database Activity Monitoring

Delivers automated scalable monitoring, auditing, and reporting across heterogeneous database platforms. It enables organizations to achieve quicker and easier incident response and forensic investigation.



Database Firewall

Protects databases against attack, data loss and theft with real-time alerting, blocking, and pre-built security policies. Reduces exposure with virtual patching. Includes all capabilities offered by Database Activity Monitoring.



User Rights Management for Databases

Automatically aggregates and displays user access rights for review. Analyzes rights to sensitive data, and identifies excessive rights and dormant users, based on actual usage.



Discovery and Assessment Server

Enables IT to prioritize and manage risk mitigation efforts. Helps scope compliance projects by discovering databases, sensitive data, vulnerabilities and misconfigurations.



SecureSphere Database Agents

Monitor and audit database activity and eliminate monitoring blind spots. These agents optimize deployment with minimal overhead.



Host-Based Application Firewalls

A host-based application firewall can monitor any application input, output, and/or system service calls made from, to, or by an application. This is done by examining information passed through system calls instead of or in addition to a network stack. A host-based application firewall can only provide protection to the applications running on the same host.

Application firewalls function by determining whether a process should accept any given connection. They accomplish this by hooking into socket calls to filter the connections between the application layer and the lower layers of the OSI model; application firewalls that hook into socket calls are also referred to as socket filters. Application firewalls work much like a packet filter, but they apply filtering rules (allow/block) on a per-process basis instead of filtering connections on a per-port basis. Generally, prompts are used to define rules for processes that have not yet received a connection. It is rare to find application firewalls that are not combined or used in conjunction with a packet filter.

Application firewalls further filter connections by examining the process ID of data packets against a rule set for the local process involved in the data transmission. The extent of the filtering is defined by the provided rule set. Given the variety of software that exists, application firewalls have more complex rule sets only for standard services, such as sharing services. These per-process rule sets have limited efficacy in filtering every possible association that may occur with other processes. They also cannot defend against modification of the process via exploitation, such as memory corruption exploits.[2] Because of these limitations, application firewalls are beginning to be supplanted by a new generation of application firewalls that rely on mandatory access control (MAC), also referred to as sandboxing, to protect vulnerable services. Examples of next-generation host-based application firewalls that control system service calls by an application are AppArmor and the TrustedBSD MAC framework (sandboxing) in Mac OS X.
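
A highly simplified Python sketch of the per-process allow/block decision described above; the rule set and the default-deny fallback are illustrative assumptions, and a real host-based firewall hooks socket calls in the kernel or via OS APIs rather than in application code:

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Rule:
        process: str   # executable name the rule applies to
        port: int      # remote port being contacted
        allow: bool

    # Illustrative rule set: allow the browser out on 443 and the backup agent on 22.
    RULES = [
        Rule("firefox", 443, True),
        Rule("backup-agent", 22, True),
    ]

    def decide(process: str, port: int) -> bool:
        """Return True to allow the connection, False to block it.
        Unknown (process, port) pairs fall through to default deny; a real
        firewall would typically prompt the user instead."""
        for rule in RULES:
            if rule.process == process and rule.port == port:
                return rule.allow
        return False

    print(decide("firefox", 443))       # True  - matches an allow rule
    print(decide("unknown.exe", 4444))  # False - default deny / prompt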

Host-based application firewalls may also provide network-based application firewalling.



