Saturday, 7 March 2015

Find the query run by each user in Teradata - username to userid mapping in dbqlogtbl

First step is to enable query logging on Teradata for the User.

We need to enable query logging from DBC user. If you try to enable query logging using any other user, you will get an error that the user does not have 'Statement' permission.

Error code:

Failure 3523 (username does not have statement permission)

Example script:

---- enable query logging so that all the queries run by a particular user are logged
begin query logging with sql on training_1 ;

The query below will now return all the queries run by the user 'training_1':


select * from dbc.users;
select * from dbc.dbase;
select db1.databasename, dblog1.*
from dbc.dbqlogtbl as dblog1
inner join dbc.dbase as db1
on dblog1.userid = db1.databaseid
where db1.databasename = 'training_1';

We can end query logging using:

End query logging on user = 'training_1';

Please give us your valuable feedback and like us on Google Plus or LinkedIn.


Wednesday, 4 March 2015

All about SET and MULTISET tables - Default Table type in Teradata - An example using Teradata Studio Express to explain all about SET and MULTISET tables

We generally say that tables are created as SET tables if the type is not mentioned in the CREATE TABLE statement.

The fact is, the default table type depends entirely on the session's Transaction Mode:
in ANSI mode, tables are created as MULTISET; in BTET (Teradata) mode, they are created as SET.
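To test this yourself, the transaction mode can be switched in BTEQ; note that the command must be issued before LOGON (the tdpid and credentials below are placeholders):

.SET SESSION TRANSACTION ANSI;   /* or BTET for Teradata mode */
.LOGON tdpid/training_1,password;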

Example:

First select the transaction mode for the current session.

SELECT sessionno, transaction_mode FROM dbc.SessionInfoV
WHERE SessionNo in (select session);

Finding the Transaction mode for current Teradata session
create table financial.userset2
 (
 userid integer
 ) primary index(userid);

The above query gives a SET table in Teradata mode.

Note: If we don’t specify a primary index, the table is created as MULTISET even in BTET mode.
If we try to explicitly create a SET table as NoPI (no primary index), Teradata raises the error "Cannot create a NoPI table as a SET table".
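To reproduce the error, a quick sketch (the table name userset4 is made up for illustration):

---- explicitly asking for SET with no primary index
create set table financial.userset4
 (
 userid integer
 ) no primary index;

---- fails with: "Cannot create a NoPI table as a SET table"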

SET table with No Primary Index Error in Teradata
create table financial.userset3
 (
 userid integer
 ) no primary index;

 --- results in a multiset table if the mode is BTET as the PI is not defined

 show table financial.userset2;

 ---Insert row into set table
 insert into financial.userset2 values(1);

 ---- insert duplicate rows in multiset table
 insert into financial.userset3 values(2);
 insert into financial.userset3 values(2);
 insert into financial.userset3 values(2);



 ---- insert from multiset table to set table
 ---- error raised in ANSI mode but suppressed in Teradata mode

insert into financial.userset2 select * from financial.userset3;

---- only 2 rows, userid 1 and 2 should be present.
---- Note that no errors or warnings are raised

select * from financial.userset2;


Rows inserted into SET table without warnings in BTET mode
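For contrast, in ANSI mode the same duplicate inserts into the SET table are rejected instead of being silently discarded; you should see a duplicate-row failure along the lines of the sketch below (the exact error text may vary by release):

---- ANSI mode: inserting a duplicate into a SET table raises an error
insert into financial.userset2 values(1);
insert into financial.userset2 values(1);  ---- Failure: Duplicate row error in financial.userset2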


If we set the session to ANSI mode and then run the below query:

create table financial.users(
uid integer
,uname varchar(20)

)no primary index;

The table is created as a MULTISET table.

show table financial.users;

---By default tables get created as multiset tables in ANSI mode
/*
CREATE MULTISET TABLE financial.users ,NO FALLBACK ,
     NO BEFORE JOURNAL,
     NO AFTER JOURNAL,
     CHECKSUM = DEFAULT,
     DEFAULT MERGEBLOCKRATIO
     (
      uid INTEGER,
      uname VARCHAR(20) CHARACTER SET LATIN CASESPECIFIC)
NO PRIMARY INDEX ;
*/

Tuesday, 10 February 2015

Cursor code example in MySQL to update from multiple rows in source

Example script to update multiple rows from source, where each row has the changed primary key.

What do the keys look like? The highlighted columns form a composite primary key.
For
Update_Code = 1, New PK value updates Record_Key
Update_Code = 2, New PK value updates Date of Birth
Update_Code = 3, New PK value updates First Name

Cursor Update in MySQL dataset
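For reference, a sketch of the work table the cursor reads; the column list is inferred from the columns the procedure below references, so treat the exact names and types as assumptions:

---- hypothetical shape of the work table driving the cursor
create table temp_multiple_mc (
reference_id bigint,
member_id varchar(50),
member_record_number varchar(50),
is_invalid_row tinyint default 0,
is_duplicate_row tinyint default 0
);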


Example script to solve using cursor:

drop procedure if exists segment_update_mbr;
DELIMITER $$
CREATE PROCEDURE segment_update_mbr(v_file_id bigint)
proc_label: begin

#-----------------------------------------------------------#
# Usage: Update segment_data using each cursor row          #
#-----------------------------------------------------------#

declare done int default 0;
declare a, b int default 0;
declare v_mbr_id, v_mbr_rec_no, v_mbr_id_old varchar(500);
declare v_error, v_file_type varchar(500);
declare v_error_count,v_threshold , v_threshold_breach, v_staging_count, v_ref_id, v_update_count bigint;
DECLARE cur1 CURSOR FOR SELECT reference_id, member_id, member_record_number FROM temp_multiple_mc order by cast(member_record_number as unsigned integer);

# Cast the ORDER BY column to integer; otherwise varchar ordering returns the rows in the wrong order

declare continue handler for not found set done=1;

set v_error = 'start';
set v_threshold = 0;
SET v_threshold_breach = 0;
SET v_staging_count=0;
SET v_error_count=0;
set v_ref_id = 0;
set v_update_count= 0;

SELECT segment_file_type_description
INTO v_file_type
FROM segment_file_type
WHERE segment_file_type_id = (SELECT segment_file_type_id FROM segment_file WHERE file_id = v_file_id);

    select account_id into @v_account
    from segment_file
    where file_id=v_file_id;


if lower(v_file_type) = 'complete' then

set sql_safe_updates = 0;

open cur1;
strt1 : loop
fetch cur1 into v_ref_id, v_mbr_id, v_mbr_rec_no;
#set v_error = 'line15';
if done = 1 then leave strt1;
end if;
select count(1) into a from segment_data where member_id = v_mbr_id;
select fn_check_key_present(v_file_id, v_ref_id) into @exist_chk;
select fn_check_duplicate_primary_mbr_present(v_file_id, v_ref_id) into @primary_chk;
select v_mbr_id, v_mbr_rec_no,v_ref_id, @exist_chk, @primary_chk, a;
 
if a > 0  and @exist_chk = 0  and @primary_chk = 0 then
set sql_safe_updates = 0;
set foreign_key_checks = 0;

# Update the segment_data table using the 1 row present in the cursor

update segment_data ed

inner join
segment_file_staging_complete as efsc
on ed.member_id = efsc.member_id
and ed.member_first_name = efsc.member_first_name
and cast(date_format(ed.date_of_birth, "%m%d%Y") as char(10))= cast(efsc.date_of_birth as char(10))
and ed.group_id = efsc.group_id
and ed.client_id = efsc.client_id
and ed.account_id = @v_account
inner join
temp_multiple_mc as tmm
on cast(tmm.reference_id as unsigned integer) = cast(efsc.segment_file_staging_complete_id as unsigned integer)
set
ed.member_id = case when efsc.maintenance_code = '01' and efsc.transition_id is not null and efsc.transition_id <> '' then efsc.transition_id else efsc.member_id end,
ed.date_of_birth = case when efsc.maintenance_code = '02' and efsc.transition_id is not null and efsc.transition_id <> '' then str_to_date(efsc.transition_id, "%m%d%Y") else str_to_date(efsc.date_of_birth, "%m%d%Y") end,
ed.member_first_name = case when efsc.maintenance_code = '03' and efsc.transition_id is not null and efsc.transition_id <> '' then efsc.transition_id else efsc.member_first_name end,
ed.modified_by = efsc.modified_by,
ed.modified_date = efsc.modified_date,
ed.file_id = efsc.file_id,
#ed.segment_file_type_id = efsc.segment_file_type_id,
ed.client_id = efsc.client_id,
#ed.account_id = efsc.account_id,
ed.initial_member_id = efsc.initial_member_id,
ed.member_middle_initial = efsc.member_middle_initial,
ed.member_last_name = efsc.member_last_name,
ed.group_id = efsc.group_id,
ed.segment_effective_date = str_to_date(efsc.segment_effective_date, "%m%d%Y"),
ed.segment_end_date = CASE WHEN efsc.segment_end_date = '' THEN '2099-01-01'  WHEN efsc.segment_end_date IS NULL THEN '2099-01-01'  ELSE str_to_date(efsc.segment_end_date, "%m%d%Y") END,
ed.gender_desc = (select lookup_value from lookup where lookup_group='GENDER' and lookup_key=efsc.gender)
where tmm.reference_id = v_ref_id
and efsc.is_invalid_row = 0
and efsc.is_duplicate_row = 0
;
set v_update_count = v_update_count + row_count();

set sql_safe_updates = 1;


# Mark records that on update will create duplicate primary key
elseif @exist_chk = 129 then
set sql_safe_updates = 0;
update temp_multiple_mc
set is_invalid_row = 1
where reference_id = v_ref_id;

set sql_safe_updates = 1;

elseif @primary_chk = 165 then
set sql_safe_updates = 0;
update temp_multiple_mc
set is_duplicate_row = 1
where reference_id = v_ref_id;

set sql_safe_updates = 1;
end if;
end loop strt1;
close cur1;

end if;

# Updated rows count is inserted into segment_file_status table
set sql_safe_updates = 0;

select max(segment_file_status_id) into @status_id from segment_file_status where file_id = v_file_id;

update segment_file_status
set
records_updated = v_update_count
,modified_by = current_user()
where file_id = v_file_id
and segment_file_status_id>0
and segment_file_status_id = @status_id
;
SET SQL_SAFE_UPDATES=1;
end$$
DELIMITER ;
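A hypothetical invocation (the file_id value 1001 is made up); the updated-row count can then be read back from segment_file_status:

call segment_update_mbr(1001);

select records_updated
from segment_file_status
where file_id = 1001;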

Like us on Google+ or Facebook and share your valuable feedback.

Thursday, 22 January 2015

Find_in_set feature of MySQL to find elements in an array or list of Values - Example and Diagrams

In an earlier post, we saw how the group_concat() function can group all the values of a column into a comma-separated array/list of values.

Please see the picture below:

Array elements distributed over 5 different rows that need to be merged

On doing a group_concat(eligibility_file_staging_complete_id), we get the values from all the rows into a single row.

Example query:

select group_concat(eligibility_file_staging_complete_id) into @v_staging_id_list from temp_multiple_mc_complete;

select @v_staging_id_list;
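One caveat worth checking: group_concat() output is truncated at the session variable group_concat_max_len (1024 bytes by default in MySQL), so for long ID lists raise it before building the array:

show variables like 'group_concat_max_len';
set session group_concat_max_len = 100000;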

Merge all columns into a single array using group_concat() function


Now, our requirement is to find each ID from another table and do some operation if the match is found.
We can use the FIND_IN_SET() function for this.
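FIND_IN_SET(needle, list) returns the 1-based position of needle in a comma-separated list, or 0 when it is absent, which is why it can be used directly as a condition:

select find_in_set('b', 'a,b,c');   ---- returns 2
select find_in_set('z', 'a,b,c');   ---- returns 0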

Example query:

select efsc.eligibility_file_staging_complete_id as reference_id, efsc.*
from eligibility_file_staging_complete as efsc
where find_in_set(efsc.eligibility_file_staging_complete_id, @v_staging_id_list)
and file_id = v_file_id;   ---- v_file_id is a procedure variable, as in the cursor example above

The above query will pick only those rows that are in the array @v_staging_id_list.

Please like us on Facebook or LinkedIn if you find our blog helpful.


Viewpoint in Teradata - Sample screen to explain the new changes in Teradata monitor

The Teradata performance monitor is now Viewpoint. Instead of a desktop application, it is a web portal that can be hosted and shared with developers.

Welcome to the new era of performance tuning, where we see live how the queries are performing in Teradata.
As we see in the diagram below, the skewness, CPU consumption, duration of the query, the PE status and all the other vital statistics are represented in one screen.

Thanks to #Teradata for coming up with this wonderful idea.

Teradata Viewpoint example. Vital stats about the query in one screen


Please let us know how we can help you in improving your Teradata journey. Like us on Google+ or Facebook by using the buttons below.