
Saturday, 6 February 2016

Transaction Control Language (TCL)

Transaction Control Language (TCL): Transaction: A transaction is a set of SQL statements which Oracle treats as a single unit. TCL commands come into the picture in the following scenarios: if any part of the transaction fails, then all the remaining statements are rolled back; all the statements execute successfully; none of...

Wednesday, 27 January 2016

Oracle Commands

SQL supports the following categories of commands:
Data Definition Language (DDL)
Data Manipulation Language (DML)
Data Control Language (DCL)
Transaction Control Language (TCL)

Data Definition Language (DDL) Statements:
Data definition statements define and maintain schema objects, and drop them when they are no longer needed. DDL statements also include statements that permit a user to grant other users the privileges, or rights, to access the database and specific objects within the database.
CREATE - to create objects in the database
ALTER - alters the structure of the database
DROP - delete objects from the database
TRUNCATE - remove all records from a table, including all space allocated for the records
COMMENT - add comments to the data dictionary
RENAME - rename an object
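For example, a typical DDL sequence might look like the sketch below (the dept table and its columns are made up purely for illustration):

create table dept (deptno number, dname varchar2(30));
alter table dept add (loc varchar2(20));
comment on table dept is 'Department master data';
rename dept to department;
truncate table department;
drop table department;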
Data Manipulation Language (DML) Statements:
The data manipulation language statements manipulate the database's data. For example, querying, inserting, updating, and deleting rows of a table are all DML operations. Locking a table or view and examining the execution plan of an SQL statement are also DML operations.

SELECT : retrieve data from a database
INSERT : insert data into a table
UPDATE : update existing data within a table
DELETE : delete records from a table
MERGE : UPSERT operation (insert or update)
CALL : call a PL/SQL or Java subprogram
EXPLAIN PLAN : explain the access path to data
LOCK TABLE : control concurrency
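As an illustration, the statements below exercise these commands against a hypothetical emp table (empno, ename, sal):

insert into emp (empno, ename, sal) values (100, 'SMITH', 3000);
update emp set sal = sal * 1.1 where empno = 100;
select empno, ename, sal from emp where sal > 2500;
delete from emp where empno = 100;
lock table emp in exclusive mode;
explain plan for select * from emp where empno = 100;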
Data Control Language (DCL) Statements:
Data control language provides users with privilege commands. The owner of a database object, say a table, has sole authority over it. The owner can allow other database users to access the object as required by granting privileges (INSERT, SELECT, and so on) to them.
GRANT : give users access privileges to the database
REVOKE : withdraw access privileges given with the GRANT command
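For example, the owner of an emp table could share it with another user and later take a privilege back (scott is just an example user):

grant select, insert on emp to scott;
revoke insert on emp from scott;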
Transaction Control Language (TCL) Statements:
Transaction control statements manage the changes made by DML statements. They enable the user or application developer to group changes into logical transactions. Examples include COMMIT, ROLLBACK, and SAVEPOINT.
COMMIT : save the work done
SAVEPOINT : identify a point in a transaction to which you can later roll back
ROLLBACK : restore the database to its state as of the last COMMIT
SET TRANSACTION : change transaction options such as the isolation level and which rollback segment to use
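A small sketch of these commands working together (emp is a hypothetical table):

insert into emp (empno, ename) values (101, 'JONES');
savepoint after_insert;
update emp set ename = 'JAMES' where empno = 101;
rollback to savepoint after_insert; -- undoes only the update
commit;                             -- makes the insert permanent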

Codd's 12 Rules

Codd's 12 rules are a set of thirteen rules (numbered zero to twelve) proposed by Edgar F. "Ted" Codd, a pioneer of the relational model for databases, designed to define what is required from a database management system in order for it to be considered relational, i.e. an RDBMS.
Rule 0: The system must qualify as a relational database, and as a management system.
   For a system to qualify as a relational database management system (RDBMS), that system must use its relational facilities to manage the database.
Rule 1: The Information Rule:
All information in the database is to be represented in one and only one way, namely by values in column positions within rows of tables.
Rule 2: The guaranteed access rule:
All data must be accessible with no ambiguity. This rule is essentially a restatement of the fundamental requirement for primary keys. It says that every individual scalar value in the database must be logically addressable by specifying the name of the containing table, the name of the containing column, and the primary key value of the containing row.
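In SQL terms, the rule describes this kind of addressing, where the table name, column name, and primary key value together pin down exactly one scalar value (emp and empno are illustrative names):

select ename from emp where empno = 100;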
Rule 3: Systematic treatment of null values:
The DBMS must allow each field to remain null (or empty). Specifically, it must support a representation of "missing information" that is systematic, and distinct from zero or any other number.
Rule 4: Active online catalog based on relational model:
The system must support an online, inline, relational catalog that is accessible to authorized users by means of their regular query language. That is, users must be able to access the database's structure (catalog) using the same query language that they use to access the database's data.
Rule 5: The comprehensive data sub-language rule:
The system must support at least one relational language that:
1. Has a linear syntax.
2. Can be used both interactively and within application programs.
3. Supports data definition operations (including view definitions), data manipulation operations (update as well as retrieval), security and integrity constraints, and transaction management operations (begin, commit, and rollback).
Rule 6: View updating rule:
All views that are theoretically updatable must be updatable by the system.
Rule 7: High Level insert,update, and delete:
The system must support set-at-a-time INSERT, UPDATE, and DELETE operations.
Rule 8: Physical data Independence:
Changes to the physical level (how the data is stored) must not require a change to an application based on the structure.
Rule 9: Logical data Independence:
Changes to the logical level (tables, columns, and rows) must not require a change to an application based on the structure.
Rule 10: Integrity Independence:
Integrity constraints must be specified separately from application programs and stored in the catalog. It must be possible to change such constraints as and when appropriate without unnecessarily affecting existing applications.
Rule 11: Distribution Independence:
The distribution of portions of the database to various locations should be invisible to users of the database. Existing applications should continue to operate successfully.
Rule 12: The Non-Subversion Rule:
If the system provides a low-level (record-at-a-time) interface, then that interface cannot be used to subvert the system, for example, bypassing a relational security or integrity constraint.







Sunday, 24 January 2016

Tablespaces in Oracle

A tablespace is a container which holds segments. Each and every segment belongs to exactly one tablespace, and a tablespace may have many segments within it. All extents of a given segment will be found in the tablespace associated with that segment.
Segments never cross tablespace boundaries. A tablespace itself has one or more data files associated with it. An extent for any given segment in a tablespace will be contained entirely within one data file. However, a segment may have extents from many different data files.
As developers, we create segments in a tablespace; the tablespace itself is created by the DBA.

Syntax:
create tablespace alts datafile '/disk1/oradata/chica/alts1.dbf' size 10m;
Using the above statement we can create a tablespace.
Here alts is the tablespace name.
Every tablespace must have at least one data file. Here alts1.dbf is the data file, which is created at the path given above.
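If the tablespace later needs more room, further data files can be attached to it with ALTER TABLESPACE; a sketch continuing the example above:

alter tablespace alts add datafile '/disk1/oradata/chica/alts2.dbf' size 10m;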

SQL Parsing in Oracle

Oracle keeps SQL statements, packages, information on objects, and many other things in a memory area named the shared pool. The purpose of the shared pool is to maximize the sharing and reuse of this information across sessions. When space is needed, Oracle removes the older entries to make room for new ones.

 Whenever a SQL statement is executed, Oracle follows a process to evaluate the statement in terms of syntax, validity of the objects being referred to and, of course, the privileges of the user. Apart from this, Oracle also checks for identical statements that may have been fired before, with the intention of reducing processing overheads. All of this takes place within a second, or even less, without the user knowing what is happening to the statement that was fired. This process is known as parsing.
Parsing Process:
Syntactical check: verifies the statement's syntax.
Semantic check: checks the validity of the referenced objects and the user's privileges. This is a data dictionary check.
Types of Parsing:
1) Hard parsing: follows the full parsing process: the syntactical check, the semantic check, and gathering statistics to build the best execution plan.
We can say that the first execution of a query follows the full Oracle evaluation process; this is called hard parsing.
2) Soft parsing: when the user executes the same statement a second time, Oracle does not repeat the full evaluation process. It finds the already parsed statement in the shared pool and reuses it, skipping the expensive optimization and plan-generation work.
Soft parsing considerably improves system performance, whereas frequent hard parsing hurts it. Reducing hard parsing improves resource utilization and optimizes the SQL code.
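One common way to encourage soft parsing is to use bind variables, so that repeated executions share a single parsed statement instead of hard parsing a new literal each time. A small SQL*Plus sketch against a hypothetical emp table:

-- each distinct literal is a new statement and typically a hard parse:
select ename from emp where empno = 100;
select ename from emp where empno = 101;

-- with a bind variable, both executions reuse one shared cursor:
variable v_empno number
exec :v_empno := 100
select ename from emp where empno = :v_empno;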

Thursday, 21 January 2016

Oracle Tables (Segments)

A database consists of a logical structure, a physical structure, and an instance. The logical structure contains the database objects. We can access the database objects only when the database is up and running.
Database objects are:
Tables
Indexes
Synonyms
Sequences
Packages
Triggers
Functions
Clusters
Object Types

These database objects hold the transactional and metadata information.
Basically there are two categories of tables/views:
1. Base Tables
2. Transaction Tables

Base tables contain the metadata information. These tables hold information about the transactional tables.
 They are created while installing the Oracle database, and the owner of the base tables is SYS (the DBA). We can find the base tables in dba_tables (or) all_tables.
Base tables are stored in the SYSTEM tablespace.

select * from all_tables where owner='SYS';
Creating any transactional table is what we call a DDL operation.
example:
drop table test;
create table test (testid number,testname varchar2(20));
desc test;

Here test is a table (a table segment / database object). From the user/developer perspective, a CREATE statement is a DDL statement.
But from the DBA/SYS perspective it is both DDL and DML.
We know how it is DDL, but why is it also DML here?
When we issue the CREATE statement, Oracle inserts a record into the base tables, and every insert can be called a DML statement.

select * from user_tables where lower(table_name)='test';
Every CREATE TABLE statement results in a row being inserted into user_tables.

Transaction Tables / User-defined Tables:
Transaction tables, also known as user-defined tables or business tables, are created by users based on business requirements.
User tables are stored in the users tablespace.








Wednesday, 20 January 2016

NULL values in Oracle

One of the wonderful values in the Oracle database is the NULL value. NULL is nothing. NULL is not even equal to NULL. NULL is undefined. But we need to work with NULL values, which are not actual values. Oracle provides us with a couple of functions to do the heavy lifting when it comes to checking for NULLs.
IS NULL and IS NOT NULL:
SYNTAX:
expr1 IS NOT NULL
expr2 IS NULL

example:
create table empex (empno number,ename varchar2(10));
insert into empex values(1,'abc');
insert into empex values(2,'xyz');
insert into empex values('','nnn');
insert into empex values(4,'');
insert into empex values('','');
insert into empex values('','');
commit;
select * from empex where empno is not null;
select * from empex where empno is null;
NULL handling:
Oracle provides many predefined functions to handle NULL values.
NVL function:
select nvl(empno,0),nvl(ename,'III') from empex ;
Here NVL is a predefined Oracle function: it returns the second argument whenever the first argument is NULL.
decode function:
select decode(empno,'',0,empno),decode(ename,'','III',ename) from empex ;
NULL handling using case condition:
select case
when empno is null
then 0
else empno
end empno from empex;

NULL behavior:
Using the queries below we can observe how NULL behaves.
Scenario 1:
select case when null=null then 1 else 2 end abc from dual;


Here the two NULL values are not equal: the comparison null = null does not evaluate to true, so the CASE falls through to the ELSE branch and the output is 2.
NULL does not behave the way an ordinary value would.

Scenario 2:
select * from empex where empno is null;
Here we can see the NULL values using the IS NULL condition.

Scenario 3:
select * from empex where empno=null;


With the same intent written as empno = null, no rows are displayed, because a comparison with NULL never evaluates to true.

So, as the above scenarios show, we cannot rely on comparison operators with NULL; we must use IS NULL and IS NOT NULL instead.




Tuesday, 19 January 2016

Oracle Objects

Oracle objects:
A database is a collection of the below components:
1. Logical design
2. Physical design
3. Instance
We can access or modify the logical objects only when the database is in up-and-running mode.
The logical design is used to store the data in the database.
The logical design contains the Oracle objects; these objects are used to hold the data.
These objects are available in the database.
Examples of Oracle objects:

  • TABLES
  • SEQUENCES
  • SYNONYMS
  • VIEWS
  • PROCEDURES
  • PACKAGES
  • MATERIALIZED VIEW
  • CLUSTER
  • INDEX
  • TRIGGERS
  • TYPE
Use the query below to find the object types present in the database.

select distinct object_type from all_objects;




Monday, 18 January 2016

Database Design from Logical design to Physical Design

A perfect database design cannot be thrown together quickly by novices. What is required is a practiced and formal approach to gathering data requirements and modeling the data. This modeling effort requires a formal approach to the discovery and identification of entities and elements.
Data normalization is a big part of data modeling and database design. A normalized data model reduces data redundancy and inconsistency by ensuring that the data elements are designed appropriately.
Database design: the logical data model:
 Database design is the process of transforming a logical data model into an actual physical design.
A logical data model is required before you can even begin to design a physical database. The logical data model grows out of a conceptual data model, and any type of data model begins with the discipline of data modeling.

The first objective of conceptual data modeling is to understand the requirements. A data model, in and of itself, is of limited value. Of course, a data model delivers value by enhancing communication and understanding, and it can be argued that these are quite valuable. But the primary value of a data model is its ability to be used as a blueprint to build a physical database.

When databases are built from a well-designed data model the resulting structures provide increased value to the organization. The value derived from the data model exhibits itself in the form of minimized redundancy, maximized data integrity, increased stability, better data sharing, increased consistency, more timely access to data, and better usability. These qualities are achieved because the data model clearly outlines the data resource requirements and relationships in a clear, concise manner. Building databases from a data model will result in a better database implementation because you will have a better understanding of the data to be stored in your databases.

Another benefit of data modeling is the ability to discover new uses for data. A data model can clarify data patterns and potential uses for data that would remain hidden without the data blueprint provided by the data model. Discovery of such patterns can change the way your business operates and can potentially lead to a competitive advantage and increased revenue for your organization.

Data modeling requires a different mindset than requirements gathering for application development and process-oriented tasks. It is important to think “what” is of interest instead of “how” tasks are accomplished. To transition to this alternate way of thinking, follow these three “rules”:

    Don’t think physical; think conceptual – do not concern yourself with physical storage issues and the constraints of any DBMS you may know. Instead, concern yourself with business issues and terms.
    Don’t think process; think structure – how something is done, although important for application development, is not important for data modeling. The things that processes are being done to are what is important to data modeling.
    Don’t think navigation; think relationship – the way that things are related to one another is important because relationships map the data model blueprint. The way in which relationships are traversed is unimportant to conceptual and logical data modeling.

Data models are typically rendered in a graphical format using an entity-relationship diagram, or E/R diagram for short. An E/R diagram graphically depicts the entities and relationships of a data model. There are many popular data modeling tools on the market from a variety of vendors. But do not confuse the tool as being more important than the process. Of what use is a good tool if you do not know how to deploy it?

A data model is built using many different components acting as abstractions of real world things. The simplest data model will consist of entities and relationships. As work on the data model progresses, additional detail and complexity is added. Let’s examine the many different components of a data model and the terminology used for data modeling.

The first building block of the data model is the entity. An entity, at a very basic level, is something that exists and is capable of being described. It is a person, place, thing, concept, or event about which your organization maintains facts. For example: “STUDENT,” “INSTRUCTOR,” and “COURSE” are specific entities about which a college or university must be knowledgeable to perform its business.

Entities are comprised of attributes. An attribute is a characteristic of an entity. Every attribute does one of three things:

    Describe – An attribute is descriptive if it does not identify or relate, but is used to depict or express a characteristic of an entity occurrence.
    Identify – An attribute that identifies is a candidate key. If the value of an identifying attribute changes, it should identify a different entity occurrence. An attribute that identifies should be unchangeable and immutable.
    Relate – An attribute that relates entities is a foreign key; the attribute refers to the primary key attribute of an occurrence of another (or the same) entity.

Each attribute is assigned a domain that defines the type of data, its size, and the valid values that can be assigned to the attribute. As a general rule of thumb, nouns tend to be entities and adjectives tend to be attributes. But, of course, this is not a hard and fast rule: be sure to apply knowledge of the business to determine which nouns are entities and which are attributes. Every attribute must either identify the entity occurrence, describe the entity occurrence, or relate the entity occurrence to another entity occurrence (in the same or another entity).

Relationships define how the different entities are associated with each other. Each relationship is named such that it describes the role played by an entity in its association with another (or perhaps the same) entity. A relationship is defined by the keys of the participating entities: the primary key in the parent entity and the foreign key in the dependent entity. Relationships are not just the “lines” that connect entities, but provide meaning to the data model and must be assigned useful names.

Keep in mind that as you create your data models, you are developing the lexicon of your organization’s business. Much like a dictionary functions as the lexicon of words for a given language, the data model functions as the lexicon of business terms and their usage. Of course, this short introduction just scrapes the tip of the data modeling iceberg.

Database design: the physical database:

Assuming that the logical data model is complete, though, what must be done to implement a physical database?

The first step is to create an initial physical data model by transforming the logical data model into a physical implementation based on an understanding of the DBMS to be used for deployment. To successfully create a physical database design you will need to have a good working knowledge of the features of the DBMS including:

    In-depth knowledge of the database objects supported by the DBMS and the physical structures and files required to support those objects.
    Details regarding the manner in which the DBMS supports indexing, referential integrity, constraints, data types, and other features that augment the functionality of database objects.
    Detailed knowledge of new and obsolete features for particular versions or releases of the DBMS to be used.
    Knowledge of the DBMS configuration parameters that are in place.
    Data definition language (DDL) skills to translate the physical design into actual database objects.

Armed with the correct information, you can create an effective and efficient database from a logical data model. The first step in transforming a logical data model into a physical model is to perform a simple translation from logical terms to physical objects. Of course, this simple transformation will not result in a complete and correct physical database design – it is simply the first step. The transformation consists of the following:

    Transforming entities into tables
    Transforming attributes into columns
    Transforming domains into data types and constraints
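As a sketch, for a hypothetical STUDENT entity this first translation step might produce (names and types are illustrative):

create table student
( student_id  number(10)    not null,
  last_name   varchar2(40)  not null,
  enroll_date date,
  constraint student_pk primary key (student_id)
);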

To support the mapping of attributes to table columns you will need to map each logical domain of the attribute to a physical data type and perhaps additional constraints. In a physical database, each column must be assigned a data type. Certain data types require a maximum length to be specified. For example a character data type could be specified as CHAR(25), indicating that up to 25 characters can be stored for the column. You may need to apply a length to other data types as well, such as graphic, floating point, and decimal (which require a length and scale) types.

But no commercial DBMS product fully supports relational domains. Therefore the domain assigned in the logical data model must be mapped to a data type supported by the DBMS. You may need to adjust the data type based on the DBMS you use. For example, what data type and length will be used for monetary values if no built-in currency data type exists? Many of the major DBMS products support user-defined data types, so you might want to consider creating a data type to support the logical domain, if no built-in data type is acceptable.

In addition to a data type and length, you also may need to apply a constraint to the column. Consider a domain of integers between 1 and 10 inclusive. Simply assigning the physical column to an integer data type is insufficient to match the domain. A constraint must be added to restrict the values that can be stored for the column to the specified range, 1 through 10. Without a constraint, negative numbers, zero, and values greater than ten could be stored. Using check constraints you can place limits on the data values that can be stored in a column or set of columns.
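For the domain just described, a check constraint might look like this (the column name is illustrative, continuing the student sketch above):

alter table student add (rating number(2)
  constraint rating_range check (rating between 1 and 10));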

Specification of a primary key is an integral part of the physical design of entities and attributes. A primary key should be assigned for every entity in the logical data model. As a first course of action you should try to use the primary key as selected in the logical data model. However, multiple candidate keys often are uncovered during the data modeling process. You may decide to choose a primary key other than the one selected during logical design – either one of the candidate keys or another surrogate key for physical implementation. But even if the DBMS does not mandate a primary key for each table it is a good practice to identify a primary key for each physical table you create. Failure to do so will make processing the data in that table more difficult.

Of course, there are many other decisions that must be made during the transition from logical to physical. For example, each of the following must be addressed:

    The nullability of each column in each table
    For character columns, should fixed length or variable length be used?
    Should the DBMS be used to assign values to sequences or identity columns?
    Implementing logical relationships by assigning referential constraints
    Building indexes on columns to improve query performance
    Choosing the type of index to create: b-tree, bit map, reverse key, hash, partitioning, etc.
    Deciding on the clustering sequence for the data
    Other physical aspects such as column ordering, buffer pool specification, data files, denormalization, and so on.
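For instance, the index decisions above might look like this in Oracle syntax (table and column names are illustrative; bitmap indexes generally suit low-cardinality columns in read-mostly workloads):

create index student_name_ix on student (last_name);       -- default b-tree index
create bitmap index student_status_ix on student (status); -- bitmap index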

Summary

A logical data model should be used as the blueprint for designing and creating a physical database. But the physical database cannot be created properly with a simple logical-to-physical mapping. Many physical design decisions need to be made by the DBA before implementing physical database structures. This may necessitate deviating from the logical data model. But such deviation should occur only based on in-depth knowledge of the DBMS and the physical environment in which the database will exist.