LevSelector.com
On this page: Introduction
Sybase SQL Server 11
- Has a single-process, multi-threaded database engine
- Includes a query optimizer
Standard SQL:
* DDL (Data Definition Language),
* DML (Data Manipulation Language),
* DCL (Data Control Language)
Transact-SQL = SQL + flow control (if and while), local variables, etc.
4 main verbs: select, insert, update, delete.
But even before these verbs - here are some frequently used commands for working with Sybase:
FUC - Frequently Used Commands
isql -Sserver -Uuser -Ppassword
1> use some_database
1> select db_name()
1> select @@servername
1> select user_name()
1> sp_helpdb
1> sp_helpdb dbname
1> sp_help
1> sp_help tablename
1> set rowcount 2
-- to limit number of rows
1> set rowcount 0
-- remove rowcount limitation
1> select * from sometable where 1=2
-- to see column names
1> vi
-- to edit command
1> select name from sysobjects where type = "V"
1> quit
Here are useful help commands:
select name from sysobjects
where name like "sp_help%"
name
1 sp_help
2 sp_help_rep_agent
3 sp_help_resource_limit
4 sp_helpartition
5 sp_helpcache
6 sp_helpconfig
7 sp_helpconstraint
8 sp_helpdb
9 sp_helpdevice
10 sp_helpextendedproc
11 sp_helpexternlogin
12 sp_helpgroup
13 sp_helpindex
14 sp_helpjoins
15 sp_helpkey
16 sp_helplanguage
17 sp_helplog
18 sp_helpobjectdef
19 sp_helpremotelogin
20 sp_helprotect
21 sp_helpsegment
22 sp_helpserver
23 sp_helpsort
24 sp_helptext
25 sp_helpthreshold
26 sp_helpuser
SELECT
select <column-list> from <table-name>
select au_lname, au_fname from authors
select title_id, type, price, price * .1 from
titles
select * from publishers
(not recommended)
string concatenation:
select au_lname + ", " + au_fname from authors
name output columns yourself - column alias:
select title_id, type, price "original price",
price * .1 discount from titles
select "Full Author Name" = au_lname + ", " + au_fname from authors
remove duplicates with distinct:
select distinct type from titles
select distinct city, state from authors
(here distinct refers to a combination of city and state,
so that each column by itself may have duplicate entries)
filtering rows with where
select <column-list> from <table-name> where <condition>
select au_lname, au_fname from authors where
state="CA"
equality and inequality operators: = , <> or != , > ,
>= , < , <= , !< , !>
select type, title_id, price from titles where
price * total_sales < advance
(can be applied to string comparison - default sorting order
is ASCII)
logical OR and AND
select au_id, city, state
from authors
where state="CA" or city="Salt Lake City"
between - ranges of data:
<expression> between <expression> and <expression>
select title_id, price from titles where price
between $5 and $10
equivalent to
select title_id, price from titles where price
>= $5 and price <= $10
not between:
select title_id, price from titles where price
not between $5 and $10
equivalent to
select title_id, price from titles where price
< $5 or price > $10
in (...) :
select title_id, price from titles where type
in ("mod_cook", "trad_cook", "business")
equivalent to
select title_id, price from titles where type
= "mod_cook"
or type = "trad_cook" or type = "business"
not in (...)
select title_id, price from titles where type
not
in ("mod_cook", "trad_cook", "business")
wildcards with like:
% - any number (0 to many) of any characters
_ (underscore) - any single character
[ ] any single character from those listed in the brackets
(this is only for Sybase)
[%] - to actually match the % character
[^A-C] - matches any character except A,B,C
select au_lname, au_fname, city, state from authors
where city like "Spring%"
select type, title_id, price from titles where
title_id like "B_1342"
select type, title_id, price from titles where
title_id like "B[A-Za-z0]1342"
Note - if you need to include the '_' character
in your pattern - use the escape clause, for example:
select * from titles where title_id like 'ABC\_%' escape '\'
ordering result sets with order by:
select au_lname, au_fname from authors order
by au_lname
select au_lname, au_fname from authors order
by au_lname, au_fname
order by position in the select list:
select title_id, price, total_sales, price*total_sales
"total dollar sales"
from titles
order by 4
Ascending and Descending Ordering
select title_id, price, total_sales, price*total_sales
"total dollar sales"
from titles
order by price*total_sales
desc
default sort order is ascending. Example: sort by type (ascending)
and then by total_sales (desc):
select title_id, price, total_sales
from titles
order by type, total_sales desc
order by columns not appearing in the select list:
select au_lname, au_fname from authors order
by city, state
aggregate functions:
sum( ) - total numeric,
avg( ) - average numeric,
min( ) - lowest numeric or sorting string or earliest date,
max( ) - highest numeric or sorting string or latest date,
count( ) - returns the number of non-null expressions,
count(*) - returns number of rows found
select avg(price) from titles
select avg(price) "avg" from titles where type
= "business"
select avg(price) "avg", sum(price) "sum" from
titles
where type in ("business","mod_cook")
counting rows with count(*):
select count(*) from authors where state="CA"
aggregate functions discard null values
sub-aggregates with group by:
select type, avg(price) "avg", sum(price) "sum"
from titles
where type in ("business", "mod_cook")
group by type
When two or more columns are included in group by statement,
aggregates are based on unique combinations of these columns:
select type, pub_id, avg(price) "avg",
sum(price) "sum" from titles
where type in ("business", "mod_cook")
group by type, pub_id
In order for aggregates to properly subtotal (or subaverage or subcount) by non-aggregate values, all non-aggregate columns in the select list should be repeated in the group by clause.
Filtering results with having:
where - selects rows before grouping and averaging:
select type, avg(price) from titles where price
> $10 group by type
having - selects rows from the result set after grouping:
select type, avg(price) from titles where price
> $10 group by type having avg(price) > $20
Example: find duplicates of au_id:
select au_id, count(*) from authors
group by au_id
having count(*) > 1
Worktable - a temporary table which is created before distinct, having or order by are applied.
select type, avg(price) from titles where pub_id="1289"
group by type
having avg(price) > $15
order by avg(price) desc
JOIN
JOIN Operations:
join using common key (or join key):
select title, pub_name
from titles, publishers
where titles.pub_id = publishers.pub_id
select title, pub_name
from titles t
inner join publishers p on t.pub_id = p.pub_id
select au_lname, au_fname, title
from authors a, titles t, titleauthor ta
where ta.title_id = t.title_id
and a.au_id = ta.au_id
and type="psychology"
select t.*, pub_name
from titles t, publishers p
where t.pub_id = p.pub_id
Outer join: left (*=) or right (=*):
select pub_name, title from publishers p, titles
t
where p.pub_id *= t.pub_id
and ...
select pub_name, title from publishers p
left outer join titles
t on p.pub_id = t.pub_id
where ...
This query retrieves all pub_names from publishers. For any row that doesn't join successfully with titles, the title column will contain a null for that row in the output.
Subqueries - to be used instead of a constant:
select title from titles
where pub_id =
( select pub_id from publishers where
pub_name = "Algodata Infosystems")
Subqueries with IN (or
NOT IN) can return multiple
rows:
select pub_name from publishers
where pub_id in
(select pub_id from titles where type
= "business")
subquery can be rewritten as a JOIN:
select pub_name from publishers p, titles t
where p.pub_id = t.pub_id and type = "business"
Subqueries with exists - the best way to eliminate duplicates:
select pub_name from publishers p
where exists
(select * from titles t where p.pub_id
= t.pub_id and type = "business")
not exists and not in :
select pub_name from publishers p
where not exists
(select * from titles t where p.pub_id
= t.pub_id and type = "business")
Unfortunately the "not exists" construct runs very slowly.
A much faster way (in 12.5) is to use the new syntax:
select pub_name from publishers p
left outer join titles t on p.pub_id = t.pub_id and t.type = "business"
where t.pub_id is null
I don't like the following syntax - it creates problems (char/varchar mismatches, nulls):
select pub_name from publishers
where pub_id
not in
(select pub_id from titles where type
= "business")
Subqueries with aggregates in the where clause:
select type, price from titles
where price < (select avg(price) from titles)
union operations:
select city, state from authors
union
select city, state from publishers
(duplicates will be automatically removed from the result unless you use union all )
Here is one more example:
select 95 "year", month, sum(dollar_sales)
from salescurrent
group by month
union
select 94 "year", month, sum(dollar_sales)
from salescurrent
group by month
order by year, month
select into, insert, update, delete
Example: create an empty copy of an existing table:
select * into new_salesdetail
from salesdetail
where 1 = 2
Adding rows with insert:
insert authors (au_id, au_lname, au_fname, phone)
values ("123-45-6789","Jones","Mary","212-555-1212")
Columns that are not specified will be set to their default values.
If you don't specify column names at all (which is very bad practice
and dangerous) - then you have to provide values for ALL columns and in
the right order:
insert publishers
values ("1235", "New World Printing", "Atlanta", "GA")
inserting several rows with select:
insert authors_archive (au_id, au_lname, au_fname,
phone, city, state, zip)
select au_id, au_lname, au_fname, phone, city,
state, zip from authors
where state="CA"
modifying rows with update:
update publishers
set pub_name = "Joe's Press"
where pub_id = "1234"
Example: update based on a join:
update db1..titles
set full_name = isnull(u.full_name, '')
from db1..titles tit, db2..users u
where tit.kid *= u.kid
Example: double all prices in the table:
update titles
set price=price*2
update several columns simultaneously:
update contacts
set address1 = "5 W. Main St.",
address2 = "Apt.3D",
state="CT",
zip="03838"
where contract_id = 17938
remove rows with delete:
delete titles
where type = "business"
Clearing the table:
delete titles
or
truncate table titles
truncate is faster, but it doesn't log individual deletions - thus the
ability to recover database is compromised.
Datatypes
Datatypes:
Once you create a column of a certain datatype - you can not change
it without dropping and re-creating the table.
create table my_table (
id int not null,
value float not null,
description varchar(30) null
)
Storing strings:
char - fixed length strings (max length - 255)
varchar - variable-length strings (max length - 255)
nchar - multi-byte character strings
nvarchar - variable-length multi-byte char strings
text - for very large strings (up to 2 GB of text per row)
Storing binary data:
binary - for fixed-length binary strings
varbinary - for variable-length binary strings
image - for storing large binary strings (up to 2 GB of image data
per row)
insert b_table (id, bin_col) values (19, 0xa134c2ff)
using timestamp:
create table timestamp_example
( id int not null,
code char(3) not null,
ts timestamp not null )
insert timestamp_example (id, code) values (17, "AAA")
The server uses the unique transaction-log-row id as a timestamp. You can not specify this value.
Optimistic locking:
using tsequal() function to check the locking
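A minimal sketch of optimistic locking with tsequal(), using the timestamp_example table above (the id value is illustrative): save the row's timestamp when you read it; at update time tsequal() makes the statement fail if another user has changed the row in between:
declare @ts timestamp
select @ts = ts from timestamp_example where id = 17
-- ... user edits the row ...
update timestamp_example
set code = "BBB"
where id = 17
and tsequal(ts, @ts) -- errors out if the row's timestamp changed since the read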
Storing BLOBs (binary large objects) using text
and image datatypes:
create table texts (
id numeric(5,0) identity,
item int not null,
textstring text null
)
using insert:
insert texts (item, textstring) values (1, null)
insert texts (item) values (2)
insert texts (item, textstring) values (3, "some
text")
insert texts (item, textstring) values (4, "some
very long text")
insert statement will not insert more than 1,200
bytes.
string concatenation on strings longer than 255
chars is illegal
using writetext and readtext:
The idea is to get the pointer to the text chain
and to write directly to this chain without modifying the underlying row.
To do this you should start with a non-null pointer
stored in the row itself.
Here is how to retrieve the pointer for a row:
select id, textptr(textstring) textptr,
datalength(textstring) datalength
from texts
to write a new textstring value to the 3rd
row:
declare @pageptr varbinary(16)
select @pageptr = textptr(textstring)
from texts
where id = 3
writetext texts.textstring @pageptr "some very long string"
to read 50 bytes starting from position 4000:
declare @pageptr varbinary(16)
select @pageptr = textptr(textstring)
from texts
where id = 3
readtext texts.textstring @pageptr 4000 50
Date/Time datatypes:
datetime (precision: 3 milliseconds, 8 bytes)
smalldatetime (1 min, 4 bytes)
create table date_example (
id int not null,
dateval datetime not null)
insert date_example (id, dateval) values (19, "September 25, 1996 3:15PM")
you can use other forms of format, for example: 4/15/99
select * from date_table where date = "9/3/95"
select * from date_table where date >= "9/3/95"
and date < "9/4/95"
timestamp - a binary value (an internal counter of sorts); it is not a datetime
drop table #test
create table #test (
id numeric(18,0) identity,
ts timestamp,
dt datetime,
sm smalldatetime
)
insert #test(dt,sm) values (getdate(), getdate())
select * from #test
datediff(dd,"15-Apr-2000","15-Apr-2001") - returns '365'
where LastActivityDate > dateadd(yy,-2,getdate())
where LastActivityDate > dateadd(dd, -90, getdate())
Logical datatype: bit - can not be indexed and can not be null:
create table bit_sample (id int not null, description
varchar(30) null, active bit not null)
Numeric datatypes:
int - 4 bytes signed
smallint - 2 bytes signed
tinyint - unsigned byte (0..255)
float - 8 bytes
real - 4 bytes
numeric(7,2) - you can indicate total
number of digits and number of digits after decimal point. Can be used
with identity (automatic counter)
decimal - similar to numeric, but can't
be used with identity
money - 8 bytes
smallmoney - 4 bytes (+/-214,748.3647)
create table numeric_example (
id numeric(8,0) identity,
num_col numeric(7,2))
insert dollar_table (id, dollars) values (95, $12345.93)
=================================================================================
create table table_name ( ...)
drop table table_name
sp_help
sp_help table_name
object's full name:
database_name.owner_name.object_name
pubs.dbo.authors
Columns can have the following properties: null, not null, identity
Views and Temporary Tables
create view author_name as
select au_lname, au_fname from authors
select * from author_name
drop view author_name
* note: dropping a view doesn't change data
* views - help security by limiting access to tables
* views - help make queries simpler
* views can contain aggregate functions and grouping, joins,
other views and a distinct clause.
* views CAN NOT include "select into", a compute clause, a
union clause, or an "order by" clause.
* You can insert into, update and delete from views (restriction: allowed
to affect only one base table).
* If view includes columns from several tables, you can't delete
rows from view or update columns from more than 1 table in a single update
statement
* You can not update, delete or insert into a view containing the distinct
clause.
sp_help - lists all objects (including
tables and views)
to get a list of just the views:
select name from sysobjects where type = "V"
to see a list of columns in a view:
sp_help view_name
Renaming objects (tables, views, columns) using sp_rename:
sp_rename old_name new_name
sp_rename 'table_name.old_col_name' , 'new_col_name'
Adding columns to a table using alter:
alter table tt add
middle_name varchar(20) null,
fax varchar(15) null
NOTE: You can only add columns with null values.
NOTE: You can't remove columns using alter. You have to recreate the
table and indexes and insert data again, for example:
create table lev1 (c1 int not null, c2 int null)
CREATE UNIQUE CLUSTERED INDEX lev1_idx ON lev1(c1)
CREATE NONCLUSTERED INDEX lev1_qib_idx ON lev1(c1,c2)
insert lev1 values (1,2)
------- add columns -----------------
select * into lev1_tmp from lev1
drop table lev1
create table lev1 (c1 int null, c2 int null, c3 int null)
insert into lev1 (c1,c2) select c1, c2 from lev1_tmp
drop table lev1_tmp
Temporary Tables - are real tables created in the tempdb database.
They have '#' before table name. They exist only for the duration
of a user session (or stored procedure) and only accessible to this session.
These tables are not recoverable (in case of a crash).
select au_lname, au_fname, title, pub_id
into #titles_and_authors
from authors a, titleauthor ta, titles
t
where a.au_id = ta.au_id and t.title_id
= ta.title_id
Creating permanent Temporary table in tempdb:
select * into tempdb..titles from
pubs2..titles
Rules & Defaults
create rule color_rule as
@color in ('black', 'brown', 'red')
create rule pub_id_rule as
@pubid like ('99[0-9][0-9]')
or @pubid in ('0736', '0877', '1389')
create rule date_rule as
@date >= getdate()
drop rule key_rule
Binding rule to a column:
sp_bindrule rule_name, 'table.column_name'
sp_unbindrule 'table.column_name'
Defaults:
create default country_default as 'USA'
create default age_default as 16
create default time_default as getdate( )
binding default to columns in a table:
sp_bindefault default_name, 'table.column_name'
sp_bindefault country_default, 'demographics.country'
sp_unbindefault 'demographics.country'
you can define a default for a column when you create a table.
later you can change it using alter table:
alter table items
replace price default null
alter table items
replace item_code default "N/A"
To list all rules and defaults in a database:
select name from sysobjects where type in ("R",
"D")
To examine the rules and defaults bound to the column in a table:
sp_help table_name
sp_helptext object_name
Creating User-defined Datatypes:
sp_addtype ssn_type, 'char(9)', "not null"
Indexes and Constraints
Creating indexes:
create unique clustered index name_index
on authors (au_lname, au_fname)
create index fname_index on authors (au_fname,
au_lname)
***********************-
Constraints:
Primary Key and Unique Constraints
Check Constraints
Referential-integrity Constraints
Primary-Key Constraints
Foreign-Key Constraints
Modifying Constraints
Adding Constraints
Removing Constraints
Information on Constraints:
sp_helpconstraint
***********************-
Comparing Data-integrity Methods:
Rules, Defaults, Indexes, Constraints, Keys (Primary Key, Foreign Key
(primary key from another table), Common key)
T-SQL programming constructs
note: to include a quote character in a string, double it (or just use it as-is when the surrounding quote is of the other type):
'I don''t know'
"I don't know"
"Title ""The Title"" was on sale"
'Title "The Title" was on sale'
math functions: abs(), ceiling(), exp(), floor(), pi(),
power(), rand(), round(), sign(), sqrt()
date functions: getdate(), datename(), datepart(), datediff(), dateadd()
convert - to convert data types:
select "Advance = " + convert (char(12), advance) from titles
(can also convert dates)
system functions: host_id(), host_name(), suser_id(), suser_name(), user_id(), user_name(), user, show_role(), db_id(), db_name(), object_id(), object_name(), col_name(), col_length(), index_col(), valid_name(), datalength(), tsequal()
compute, compute by:
allow you to include both detail and summary info in a single result set
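For example, a minimal sketch against the pubs titles table - detail rows plus a per-type average in one result set (compute by requires a matching order by):
select type, title_id, price from titles
order by type
compute avg(price) by type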
isnull - function that tells the server to treat null values as zero (or some other substitute value).
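For example (a sketch, assuming you want missing prices treated as zero):
select title_id, isnull(price, $0) from titles
select avg(isnull(price, $0)) from titles -- nulls counted as $0 instead of being discarded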
Batches:
several commands followed by 'go' command
Comments: /* */ or --
Local variables:
declare @variable_name datatype
declare @myname varchar(30), @type
int
select @type = 12
go
note: variable name length is limited to 30 chars (including @)
Global variables - can not be defined by users.
(examples: @@rowcount, @@error, @@trancount,
@@transtate, ...)
Message handling:
print "this is a message"
declare @msg varchar(30)
select @msg = "Hello " + user_name()
print @msg
Error handling:
raiserror 52345 'Row not found'
if .. else:
if (select ...)
print "sometext"
else
print "some_other_text"
while:
while(select ...)
begin
update ...
continue
break
end
goto:
metka: select ...
...
goto metka
waitfor:
waitfor time "22:00:00"
while 1<2
begin
waitfor delay "00:00:30"
exec sp_who
end
return:
to exit the batch
set:
set rowcount 100
set statistics time on
set nocount on
...
Cursors - allow custom processing for each row:
declare leads_curs cursor for
select cust_id, est_sale from leads
for read only
go
open leads_curs
fetch leads_curs
while (@@sqlstatus = 0)
begin
-- process the current row here
fetch leads_curs
end
close leads_curs
deallocate cursor leads_curs
go
You can also do custom updating with cursors.
In general you should avoid cursors - try to substitute them with complex
select statements (they will usually run x10 faster)
Triggers
***************-
create trigger [owner.]trigger_name
on [owner.]table_name
for {insert, update, delete}
as SQL_statements
Or, using the if update clause:
create trigger [owner.]trigger_name
on [owner.]table_name
for {insert, update}
as
[if update (column_name )
[{and | or} update (column_name)]...]
SQL_statements
[if update (column_name)
[{and | or} update (column_name )]...
SQL_statement ]
...
Example:
create trigger titles_trigger
on titles
for insert
as
print "title inserted"
return
***************-
drop trigger trigger_name
***************-
create trigger cascade_del_trigger
on publishers
for delete
as
if @@rowcount = 0 -- no rows deleted, exit trigger
return
delete titles from titles t, deleted d where
t.pub_id = d.pub_id
if @@error != 0
begin
print "Error occurred while deleting related titles"
end
return
***************-
create trigger tr1 on ... for insert, update
as
declare @rows int
select @rows = @@rowcount
...
if (.. )
...
***************-
In triggers you CAN NOT do these:
create, drop, alter table/database, grant, revoke, select into,
truncate table, update statistics, reconfigure, load database/transaction,
disk init/mirror/reinit/refit/remirror/unmirror
***************-
Triggers During transactions:
...
begin tran add_titles
insert ...
insert ...
commit tran
***************-
rollback trigger - command to rollback the trigger
Nested Triggers - 16 levels of nesting max
Stored procedures
To execute a stored procedure:
call it by name if it is the first command in the batch,
or use the "exec" (or "execute") command:
exec proc_name
To delete the procedure:
drop proc proc_name
To change the procedure:
if exists (select * from sysobjects where name
= "titles_for_a_pub" and type = "P" and uid=user_id())
drop proc titles_for_a_pub
go
create proc titles_for_a_pub as ...
To display the text of the procedure:
sp_helptext proc_name
passing parameters:
create proc tt (@pub_name varchar(40)) as
select ...where ...
return
declare @my_pname varchar(40)
select @my_pname ="Some text"
exec tt @my_pname
Default Parameter Values:
create proc proc_name
@param_name datatype = default_value
, ...
as
SQL Statements
return ...
----- Output parameters:
create proc tt (
@par1 varchar(80) = null,
@par2 int output)
as
-- some T-SQL statements go here
return
declare @myoutput int
exec tt
@par1 = 'sometext',
@par2 = @myoutput output
go
----- Returning procedure status:
return some_int_value
0 - no errors
-1 .. -99 - detected errors (page 194 - list of error status codes)
other values - whatever you like
declare @status_var int
exec @status_var = proc_name
Note: A stored procedure may NOT: create views, defaults,
rules, triggers or procedures, or issue the use statement
Note: If a stored procedure references a table, and the
table was changed - you may have to drop the procedure, change it and re-create
it.
When created, a stored procedure is transformed into a cached sequence
of SQL statements. If a procedure is created "with recompile" - it forces the optimizer
to generate a new query plan for each execution.
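A sketch of the syntax (the procedure name and body are illustrative):
create proc titles_by_type (@type varchar(30))
with recompile
as
select title_id, price from titles where type = @type
return
You can also force a one-time recompilation at call time:
exec titles_by_type "business" with recompile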
----- Returning data from one SP into a caller-SP:
-----------------------------------
----- transparent passing the result value
-----------------------------------
create proc test1 as select count(1) N from OE
create proc test2 as exec test1
exec test2
-----------------------------------
----- catch the output value in the caller SP:
-----------------------------------
create proc test3(@a int, @b int output) as select
@b = @a*@a
create proc test4 as
begin
declare @par_in int
declare @par_out int
declare @ret_status int
select @par_in = 3
exec @ret_status = test3 @par_in, @par_out output
if not @ret_status = 0 begin select @par_out = 0 end
select @par_out
select @par_out+1
end
exec test4
-----------------------------------
----- transparent passing the result set
-----------------------------------
create proc test5 as select * from mytab
create proc test6 as exec test5
exec test6
-----------------------------------
----- handling errors, transactions
-----------------------------------
create proc mysp
as
begin
declare @tran_cnt int
declare @errs int
declare @loc int
declare @idx int
declare @my_id int
select @tran_cnt = @@trancount
if @tran_cnt = 0
begin transaction mytran
else
save transaction mytran
-- do something useful
-- then check for errors
select @errs = @@error
if (@errs != 0)
begin
rollback transaction mytran
return @errs
end
select @loc = loc, @idx = idx from mytab
if not @@rowcount = 1
begin
rollback transaction mytran
raiserror 30200
return 30200
end
if @tran_cnt = 0
commit transaction mytran
select @my_id = (@loc * 10000000) + @idx
return 0
end
Objects referenced in a procedure:
exec sp_depends proc_name
exec sp_depends table_name
Transactions
begin transaction
update something
insert something
if ...
begin ... commit transaction
end
else
begin ... rollback transaction
end
Note: transactions put locks on pages (2K blocks of data)
savepoints inside transaction:
save tran item999
...
rollback tran item999
Nested Transactions:
@@trancount
stored procedure with a transaction inside:
create proc p1
as
declare @trncnt int
select @trncnt = @@trancount -- save value
if @trncnt = 0
begin tran p1
else
save tran p1
/* do some work here */
if (@@transtate = 2) -- or other error condition
begin
rollback tran p1
return 25 -- error code indicating rollback
end
/* some more processing if required */
if @trncnt = 0
commit tran p1
return 0 -- success

nesting transactions - batch with transaction calls a stored procedure with a transaction inside:
declare @status_val int, @trncnt int
select @trncnt = @@trancount -- save value
if @trncnt = 0
begin tran t1
else
save tran t1
/* do some work here */
if (@@transtate = 2) -- or other error condition
begin
rollback tran t1
return
end
execute @status_val = p1 -- call procedure with transaction from inside transaction - thus nesting transactions
if @status_val = 25 -- if proc p1 performed rollback
begin
rollback tran t1
return
end
/* some more processing if required */
if @trncnt = 0
commit tran t1
return
triggers may be part of a transaction:
begin tran
update titles
set price=$99
where title_id = "BU1234"
commit tran
to roll back inside a trigger use:
rollback trigger
Transaction modes:
- chained mode - This is the ANSI standard. You are always in a transaction. You never have to issue BEGIN TRAN because it is issued implicitly as soon as you connect, and then again as soon as you finish the previous transaction. So as soon as you issue an explicit COMMIT TRAN or ROLLBACK TRAN - you get into a new transaction. It looks strange - you only close transactions with commit or rollback - and never have to open one, because it is always already opened for you.
- unchained mode (default for Sybase) - Each individual statement is atomic. If you want to put several statements in one transaction - you have to explicitly issue BEGIN TRAN and at the end do COMMIT or ROLLBACK.
Stored procedures remember in which mode they were compiled - and will not run in a different mode - unless you define them with "anymode".
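For example (a sketch; titles_copy is an assumed table):
set chained on -- switch the session to chained (ANSI) mode
update titles_copy set price = price * 2 -- implicitly begins a transaction
commit tran -- closes it; the next statement implicitly opens a new one
set chained off -- back to the default unchained mode
sp_procxmode titles_by_type, "anymode" -- allow a procedure to run in either mode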
Getting info from system tables
some examples using sys tables
--------------------------------------------------------
select name from sysobjects
where sysstat & 7 = 3
order by name
--------------------------------------------------------
select
rowcnt(t2.doampg),
(reserved_pgs(t2.id, t2.doampg) + reserved_pgs(t2.id,t2.ioampg)),
(t3.low), data_pgs(t2.id, t2.doampg), data_pgs(t2.id,t2.ioampg),
(
(reserved_pgs(t2.id, t2.doampg) + reserved_pgs(t2.id,t2.ioampg))
-
(data_pgs(t2.id, t2.doampg) + data_pgs(t2.id,t2.ioampg))
)
from sysobjects t1, sysindexes t2, master.dbo.spt_values t3
where t2.id = object_id('myTableName')
and t1.id = object_id('myTableName')
and t3.number = 1 and t3.type = 'E'
-------------------------------------------------------
select t1.name, t2.name, t1.length, t1.status
from syscolumns t1, systypes t2
where t1.id = object_id('my_name')
and t1.usertype *= t2.usertype
--------------------------------------------------------
select t1.indid, t1.name
from sysindexes t1
where t1.id = object_id('my_name')
and t1.indid > 0 and t1.indid < 255
--------------------------------------------------------
select index_col('my_name', indid_buf, keyid_buf, ...)
--------------------------------------------------------
select t1.name
from master.dbo.spt_values t1, sysindexes t2
where t2.status & t1.number = t1.number
and t2.id = object_id('my_name')
and t2.indid = my_some_indid_buf
and t1.type = 'I'
and t1.number = 2
--------------------------------------------------------
select t1.name
from syssegments t1, sysindexes t2
where t1.segment = t2.segment
and t2.id = object_id('my_name')
and t2.indid = my_some_indid_buf
--------------------------------------------------------
select replstat = convert(bit, (t1.sysstat & -32768))
from sysobjects t1
where t1.id = object_id('my_name')
--------------------------------------------------------
Stored procedure with a cursor
if exists (select 1 from sysobjects
where name = "myname")
drop proc myname
go
create proc myname
as
begin
declare @my_id int
declare @root_my_id int
declare mycursor cursor for select my_id from MT
open mycursor
fetch mycursor into @my_id
while ((@@sqlstatus = 0))
begin
exec @root_my_id = MT_get_root @my_id
insert into MT_root_map_tmp (my_id, root_my_id)
values (@my_id, @root_my_id)
fetch mycursor into @my_id
end
close mycursor
deallocate cursor mycursor
end
go
Script to copy data between databases
You can copy data between databases using migrate utility in DBArtisan.
Or you can use a script like the following (it accepts the table name as a parameter; the DEV connection string and the truncate step are assumed - adjust to your servers):
#!/bin/sh -v
if [ $# != 1 ]; then
    echo "usage: $0 table_name"; exit 1
fi
table=$1
PROD="-S SERV1 -U user1 -P pass1"
DEV="-S SERV2 -U user2 -P pass2"
# copy the table out of prod
bcp mydb..$table out $table.dat $PROD -n > /dev/null
# empty the target table in dev
isql $DEV -w250 <<EOF
truncate table mydb..$table
go
EOF
# copy the data into dev
bcp mydb..$table in $table.dat $DEV -n > /dev/null
misc
count distinct types:
select dept, count(*) from mytable where dept is null group by dept
union
select distinct dept, count(*) from mytable where dept is not null group by dept
or:
select distinct dept, count(isnull(dept,'0'))
from mytable group by isnull(dept,'0')
when you join tables - they join only on not-null values.
A common situation is joining 2 tables through a middle table which
maps keys between the 2 side tables. This works well for normal inner joins.
But what if you want all values from A - and just fill the other
columns with nulls where there is no match? A simple left join doesn't work
through the mapping table.
You can either do it in 2 steps (make a temporary table) - or use a union.
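Here is a sketch of the union approach, with hypothetical tables A and B joined through a mapping table M (all names are illustrative):
select A.key1, B.val
from A, M, B
where A.key1 = M.key1 and M.key2 = B.key2
union
select A.key1, null -- rows of A with no match - B's column filled with null
from A
where not exists (select 1 from M where M.key1 = A.key1)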
Getting date difference as part of the year in Sybase
-- select convert(numeric(6,2), datediff(mm, '4/1/02', '1/1/03'))/12.00
-- select convert(numeric(6,2), datediff( dy, '4/1/02', '1/1/03' ))/365.00
-- select convert(numeric(6,2), datediff( dy, '4/1/02', '7/1/02' ))/365.00
Catching special characters:
select count(*) from mytab where charindex(char(10), myname) != 0
A simple way to find all occurrences of special characters is to use
a short perl script.
Below is the code (do_query is a self-made subroutine returning the
result set as an array of hashes):
my ($sql,$result,$row,$oe_id,$name,@ar,%hh);
$sql = qq{ select id, name from mytab where name like '%[^0-9A-Za-z -_.,''""%]%' }; $result = &do_query($dbh, $sql); for $row (@$result) { $id = $row->{id}; $name = $row->{name}; @ar = unpack('C*', $name); %hh = (); for(@ar) {if($_<32 or $_>127){$hh{$_}++}} for(sort keys %hh){print"id <$id>: code $_ - $hh{$_} times\n"} } |
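The same scan transcribed to Python, in case perl is not handy (special_chars is a made-up helper name):

```python
# count character codes outside printable ASCII (32..127), per value
def special_chars(name):
    counts = {}
    for code in (ord(c) for c in name):
        if code < 32 or code > 127:
            counts[code] = counts.get(code, 0) + 1
    return counts

print(special_chars("ab\ncd\n\x01"))   # {10: 2, 1: 1}
```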
declare @a varchar(30)
select @a = "a b"
select @a = str_replace(@a, char(32), rtrim(''))
select @a
-- the above returns 'ab' in Sybase 12.5.1 and later, but returns NULL in 12.5.0, which is a bug
-- The alternative is to use a while loop:
declare @ss varchar(10)
select @ss = "a b"
while (charindex(" ", @ss) > 0)
    select @ss = substring(@ss, 1, charindex(" ", @ss) - 1)
               + substring(@ss, charindex(" ", @ss) + 1, 255)
select @ss
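The while loop above, transcribed to Python to show the chop-and-glue logic (drop_spaces is a made-up name):

```python
# remove the first space repeatedly until none are left -
# the same logic as the charindex/substring loop
def drop_spaces(s):
    while " " in s:
        i = s.index(" ")
        s = s[:i] + s[i + 1:]
    return s

print(drop_spaces("a b"))   # 'ab'
```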
===========================
Using 'union' to do logical operations in single statement without creating temporary tables:
Let's say we have a table with 2 columns (a,b):
create table #t (a int null, b int null)
insert #t (a,b) values (-1,6)
insert #t (a,b) values (0,5)
insert #t (a,b) values (1,4)
insert #t (a,b) values (2,3)
insert #t (a,b) values (3,2)
and we want to return max(a) or max(b), whichever is larger.
Here is how this could be done:
select max(b) mm from #t
having max(b) > (select max(a) from #t)
union
select max(a) mm from #t
having max(a) >= (select max(b) from #t)
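The union trick can be checked in SQLite; since some SQLite versions reject HAVING without GROUP BY, each HAVING is rewritten below as a WHERE over a one-row derived table, which has the same effect:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table t (a int, b int)")
con.executemany("insert into t values (?,?)",
                [(-1, 6), (0, 5), (1, 4), (2, 3), (3, 2)])

# exactly one branch of the union survives its filter
rows = con.execute("""
    select * from (select max(b) mm from t) where mm >  (select max(a) from t)
    union
    select * from (select max(a) mm from t) where mm >= (select max(b) from t)
""").fetchall()
print(rows)   # [(6,)]
```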
tune your SQL
Query Tuning Options
- set forceplan on - forces SQL Server to join the tables in a query in the order in which they are listed in the from clause.
- set table count - allows you to specify the number of tables that are considered at once as joins are optimized.
- sp_cachestrategy - disables and re-enables cache strategies and large I/O for individual objects.
- you can specify the index, cache strategy, or I/O size for a query (for example: select * from mytab (index mytab_idx) where ...)
- set prefetch - allows you to enable or disable large I/O for a session.
Benchmarking your query by hand:
set statistics time on
set statistics IO on
Help your optimizer: make sure it has correct information about your
table and uses it for queries and stored procedures. This info may be incorrect,
for example, after bulk loads:
update statistics tabname
sp_recompile tabname
Examine the showplan of your query:
set showplan on
go
set noexec on
go
or for a stored procedure
set showplan on
go
set fmtonly on
go
Note: do not mix set noexec & set fmtonly
Run your code - examine the output - look where table scans are used
instead of indexes. Sometimes an index is not used because:
- it doesn't exist on the column used in the where clause or in a join
- there is a datatype mismatch in the join (like an integer joined to a numeric)
- the query is not restrictive enough and returns a significant portion
of the table; the optimizer then chooses a table scan
- the indexes are very complex, so the optimizer decides
that it is less costly to just read the table itself
- some clauses may cause problems: order by, group by, having,
between, like, calculations in the where clause
- sometimes a table scan really is more effective and you
don't need to eliminate it
Note that the database caches data, so repeating the query may return
results faster. A simple solution - vary the query.
dbcc traceon(302, 310, 3604)
sp_sysmon
etc.
Dynamic SQL
-- "dynamic SQL" in Sybase is called "execute-immediate" (EI)
exec("select getdate()")
declare @c varchar(25)
select @c = "select getdate()"
exec (@c)
select @c = 'mycolumn'
execute( 'select ' + @c + ' from mytable where 1=2' )
-- You can do line continuation like this:
exec (' \
create procedure myproc \
@part1 int \
as \
select KeyCol, Attrib \
from MyTable \
where KeyCol = @part1 \
')
-- Note:
-- temp tables (#mytemp) created inside are dropped on exit
-- local variables created outside cannot be used inside - and vice versa
-- some commands are not allowed inside:
--   use, set, dbcc, connect, begin transaction, commit, rollback, dump transaction
-- you can't call EI from inside another EI, or inside a trigger
-- statements executed in EI can't access the "inserted" and "deleted" tables
-- and EI doesn't return a result status
-- cursor usage is limited (and mostly pointless)
-- Passing info back from EI - via table
-- EI sets @@rowcount and @@error as any other SQL
Real-life example (extracted from the middle of a long stored proc):
declare @mysql varchar(1024)
set rowcount @max_mid_rows
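The execute-immediate pattern in miniature - build the statement as a string, then run it (sqlite3 stands in for Sybase; in real code, validate any identifier you splice in):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table mytable (mycolumn int)")

# assemble the SQL text at runtime, exactly like exec('select ' + @c + ...)
col = "mycolumn"
sql = "select " + col + " from mytable where 1=2"
rows = con.execute(sql).fetchall()
print(rows)   # [] - 1=2 matches nothing, but the column list is checked
```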
Getting Info About Rows and Sizes of Tables
select Table_Name=object_name(id),
"Rows"=convert(int,sum(rowcnt(doampg))),
"ReservedKB"=sum(reserved_pgs(id,doampg))*2+ sum(reserved_pgs(id,ioampg))*2,
"Data"=sum(data_pgs(id,doampg))*2 ,
"Index"=sum(data_pgs(id,ioampg))*2
from sysindexes
where id in (select id from sysobjects where type='U')
group by object_name(id)
Testing Outer Joins
When chaining joins (below, > and < mark the direction of the outer join and = marks an inner join):
chains like >>, <>, and <= are OK,
but
>= is not allowed (even on a different key), and >< is meaningless.
-- testing outer joins
create table #t1 (c1 varchar(10) null, c2 varchar(10) null)
create table #t2 (c1 varchar(10) null, c2 varchar(10) null)
create table #t3 (c1 varchar(10) null, c2 varchar(10) null)
go
-- row.col
insert #t1 values('1.1','1.2')
insert #t1 values('2.1','2.2')
insert #t1 values('3.1','3.2')
insert #t2 values('1.1','1.2')
insert #t2 values('2.1','2.2')
insert #t2 values('4.1','4.2')
insert #t3 values('1.1','1.2')
insert #t3 values('2.1','2.2')
insert #t3 values('5.1','5.2')
select t1.*, t2.* from #t1 t1, #t2 t2 where t1.c1 = t2.c1
select t1.*, t2.* from #t1 t1, #t2 t2 where t1.c1 *= t2.c1
-- t1 < t2 > t3
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 =* t2.c1 and t2.c1 *= t3.c1
-- t1 > t2 > t3
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c1 *= t3.c1
-- t1 < t2 = t3
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 =* t2.c1 and t2.c1 = t3.c1
-- t1 > t2 < t3 - meaningless
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c1 =* t3.c1
-- t1 > t2 = t3 - not allowed
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c1 = t3.c1
-- t1 > t2 = t3 - not allowed even on a different key
select t1.*, t2.*, t3.* from #t1 t1, #t2 t2, #t3 t3 where t1.c1 *= t2.c1 and t2.c2 = t3.c2
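The "t1 > t2 > t3" case above, replayed with ANSI left joins in SQLite (same rows as the #t1/#t2/#t3 setup; SQLite has no *= syntax):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
create table t1 (c1 text, c2 text);
create table t2 (c1 text, c2 text);
create table t3 (c1 text, c2 text);
insert into t1 values ('1.1','1.2'),('2.1','2.2'),('3.1','3.2');
insert into t2 values ('1.1','1.2'),('2.1','2.2'),('4.1','4.2');
insert into t3 values ('1.1','1.2'),('2.1','2.2'),('5.1','5.2');
""")
# t1 > t2 > t3: every t1 row survives both outer joins
rows = con.execute("""
    select t1.c1, t2.c1, t3.c1
    from t1 left join t2 on t1.c1 = t2.c1
            left join t3 on t2.c1 = t3.c1
    order by t1.c1
""").fetchall()
print(rows)
# [('1.1', '1.1', '1.1'), ('2.1', '2.1', '2.1'), ('3.1', None, None)]
```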
print, raiserror
===============================================
declare @a varchar(20)
declare @b varchar(20)
print "MYMY %1! and %2! MYMY", @a, @b
===============================================
select count(1) from master..sysmessages
You can create your own custom errors - either in sysusermessages - or on the fly:
if @@error != 0
    raiserror 300002, "MYMY %s MYMY ", @myvar
-- error 300006 doesn't exist in the sysusermessages table:
raiserror 300006 "MYMY %1! MYMY", @myvar
Empty strings:
select @country = isnull(rtrim(ltrim(@country)), "")
Note that empty strings are expanded (to a single space) inside the char_length() function.
A workaround is to rtrim everything inside char_length - and use isnull outside char_length.
Note here that rtrim("") returns null, so char_length(rtrim("")) is also null, not zero.
Example of Printing:
exec myprint @mydebug, 63, "starting"
create procedure myprint (
Example of reporting the duration of some process:
declare @mysec datetime
Example of populating a "suffix" table (chopping words from the beginning until nothing is left). This table is used as an index table to speed up searching for words/phrases inside a string. It allows "starts with" logic ( ... where ss like 'xxx%' ), which can use an index - thus it is fast.
CREATE PROCEDURE dbo.ins_words (
-- clean the @words
-- we insert the whole string (even if it is empty - we still want to enter the id to make future updates easier)
declare @pos int
END
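The word-chopping loop at the heart of ins_words, sketched in Python (word_suffixes is a made-up name):

```python
# chop leading words off the string until nothing is left;
# each intermediate string becomes one row of the suffix table
def word_suffixes(s):
    out = []
    s = s.strip()
    while s:
        out.append(s)
        parts = s.split(None, 1)   # split off the first word
        s = parts[1] if len(parts) > 1 else ""
    return out

print(word_suffixes("quick brown fox"))
# ['quick brown fox', 'brown fox', 'fox']
```

A search for a word anywhere in the phrase then becomes `where ss like 'fox%'` against the suffix rows, which an index on ss can serve.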
Update with variables technique:
-- Example 1
We can do more interesting things - for example, concatenate values for the same id.
The work table should have only fixed-width fields (no varchar). This is necessary
so that updated rows are written to the same place - thus preserving the order of rows.
select @prev = 0
Next we do the concatenation:
select @ss = ""
Finally, select the result:
select id, max(names) from #work
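For comparison, the per-id concatenation that the update-with-variables trick produces can be had directly with group_concat in SQLite (table and column names are hypothetical, and the concatenation order is not guaranteed):

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("create table work (id int, name text)")
con.executemany("insert into work values (?,?)",
                [(1, "ann"), (1, "bob"), (2, "cat")])

# one concatenated string per id
rows = con.execute(
    "select id, group_concat(name, ',') from work group by id order by id"
).fetchall()
print(rows)
```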
Example of batch inserting using id:
declare @myrowcount int
select @myrowcount = 1
select @min_id = min(id), @max_id = max(id) from source
set rowcount 5000
while (@myrowcount > 0)
begin
    insert #batch
    select @myrowcount = @@rowcount
    select @base_id = max(id) from #batch
    if (@myrowcount > 0)
end
set rowcount 0
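The batching idea - read up to N rows past a moving high-water id until nothing comes back - sketched in Python (copy_in_batches and the (id, value) rows are made up):

```python
# copy rows in fixed-size batches keyed on an increasing id -
# the same idea as "set rowcount 5000" plus a moving @base_id
def copy_in_batches(source_rows, batch_size=5000):
    dest, base_id = [], 0
    while True:
        batch = [r for r in source_rows if r[0] > base_id][:batch_size]
        if not batch:                        # @myrowcount dropped to 0
            break
        dest.extend(batch)
        base_id = max(r[0] for r in batch)   # next batch starts after this id
    return dest

rows = [(i, "row%d" % i) for i in range(1, 12)]
print(len(copy_in_batches(rows, batch_size=5)))   # 11
```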
Reducing the locking problem.
A read puts a shared lock; an update or insert puts an exclusive lock. Table locks may be a result of lock promotion, but more often they happen because of your SQL / indexes. Normally a table read puts a shared lock. But if you have an UPDATE or a DELETE with a WHERE clause which causes a table scan - you will get an exclusive table lock without any lock promotion, regardless of locking scheme.
Switching to row-level locking (the DATAROWS locking scheme) may not solve your problem and is very expensive: your data takes up more space, UPDATEs start causing a lot more fragmentation, and you need A LOT more locks. Lock promotion works by the number of locks used, and since DATAROWS uses considerably more locks, you'll hit the promotion threshold much sooner. If you run out of locks - you will also get a promotion to a table lock. And if you are committing more rows at a time than the HWM (high lock promotion threshold) - then you will still get lock promotion even with the datarows locking scheme.
There are 3 locking schemes: ALLPAGES, DATAPAGES, and DATAROWS.
Use sp_lock and sp_objectstats to see if multiple connections/users attempt locks on rows on the same page. Use sp_sysmon to detect lock promotion.
Note: switching to DATAROWS might help your blocking, but maybe the problem isn't that people are trying to read the same row, or even different rows on the same page. It might be that they're blocking on data vs index access. DATAPAGES is a good first step to try, since it eliminates index locking and usually this is all you need. It has most of the benefits of DATAROWS without the extreme overhead. Also, the solution to avoid table locks may be simply to increase the lock promotion thresholds.
Note: changing from ALLPAGES to DATAPAGES or DATAROWS is an expensive operation; switching between DATAPAGES and DATAROWS is easy and fast.
Note: if a WHERE clause uses local variables - the optimizer will not use an index, thus causing a table scan.
Example - modifying the locking scheme:
alter table authors lock datarows
Isolation Level
You can change the isolation level using a command like this:
set transaction isolation level 0
The current value is stored in the variable @@isolation.
Possible values are 0 (read uncommitted), 1 (read committed - the default), 2 (repeatable read), and 3 (serializable).
You can override the isolation level by adding a hint to the end of a SQL statement:
with holdlock - forces level 3
AutoCommit
set autocommit on | off | 0 | 1 - the client-side command to turn autocommit on or off.
Duplicates
ignore_dup_key - hint to skip duplicate keys while doing an insert or update.
Note - you cannot create a unique index on a column that includes duplicate values or more than one null value, whether or not ignore_dup_key is set. If you attempt to do so, Adaptive Server prints an error message that displays the first of the duplicate values. You must eliminate duplicates before Adaptive Server can create a unique index on the column.
ignore_dup_row - allows you to create a new, nonunique clustered index on a table that includes duplicate rows. ignore_dup_row deletes the duplicate rows from the table, and cancels any insert or update that would create a duplicate row, but does not roll back the entire transaction.
allow_dup_row - allows you to create a nonunique clustered index on a table that includes duplicate rows, and allows you to duplicate rows with update and insert statements.
tempdb, transaction log
You may have many databases on the same server, but until version 12.* there was only one tempdb database. This created problems, as many sessions were competing for it. Starting with version 12 you may have several tempdb databases - thus distributing the load. Usually the size of tempdb is ~2 GBytes, however I've seen it as big as 12.5 GBytes - and completely buffered (mapped to memory).
When you have large transactions - you may want to increase the sizes of both tempdb and the transaction log. But if you have replication - you want to limit the size of transactions to ~3,000 rows.
Checkpoint event
Approximately once a minute, the checkpoint task checks each database on the server to see how many records have been added to the transaction log since the last checkpoint. If the server estimates that the time required to recover these transactions is greater than the database's recovery interval, Adaptive Server issues a checkpoint. The modified pages are written from cache onto the database devices, and the checkpoint event is recorded in the transaction log. Then the checkpoint task "sleeps" for another minute.
To see the checkpoint task, execute sp_who. The checkpoint task is usually displayed as "CHECKPOINT SLEEP" in the "cmd" column.
Working with big tables
Don't create a clustered index, because when you need to drop and re-create it, ASE will lock the whole table.